My HackForGood 2015 experience

by Guido García on 16/06/2015

HackForGood is a hackathon focused on social innovation, where new ideas that help solve social problems are developed collaboratively. On April 16-18 we sacrificed a day of vacation to attend the third edition of the HackForGood initiative at the Universidad de Valladolid.

Our team, “Senior OS”, went in with the idea of building a mobile phone for elderly people, with no configuration whatsoever, that could be managed entirely remotely. We had a great time, and I want to share what struck me most.

Collaborative anarchy

The projects I liked best mixed software and hardware to solve social problems: a book reader for people with visual impairments, based on low-cost cameras and open-source text recognition software; an Arduino-based home automation system for people with reduced mobility; a system to automate the treatment of chemotherapy patients (@quimiopro), also using Arduino; and our phone based on Firefox OS.

They may not have been perfect, but they show that in barely 48 hours you can build prototypes or kick off projects that, in a company, would take a month or more of bureaucracy, meetings and procedures to get off the ground. Today, with a ridiculous budget and a very small group of people, you can test a technological idea. I think that, as access to technology keeps being democratized, technology companies will keep losing their edge over groups of people organized around a project. Mainly because their motivation goes beyond money.

Generation found

It seems well established that diversity enriches ideas and projects. It is often equated with gender diversity and, while it is true that the percentage of women at the event was very low (around 20%), what struck me most was the absence of people over 35. Not a single one. The average age was in the low twenties, and I was probably the oldest person there.

This keeps happening at this kind of event. There seems to be an assumption that to participate you have to be a young student. Is it a communication problem, or is there an age beyond which we assume these initiatives are not for us?

The truth is that I was pleasantly surprised by the way young people organize themselves in that sort of collaborative anarchy. A generation that has been given all kinds of labels, that some call a lost generation, but that only needs to raise its voice. These are the events to go to if you want to find talent and people eager to build things and innovate.

See you next year. No excuses.

 

Apache Mesos and the future of microservice architectures

by Guido García on 24/03/2015

Here is a presentation about Apache Mesos, Marathon and Flock that I gave for the cloud community at Telefónica.

Flock is an unstable proof of concept I’ve been working on lately. It is a next-generation service platform that uses Marathon to allocate services across a cluster of machines.

I think the time has come to care only about services, not machines. Compute resources should be treated as a commodity. Like tap water.

 

Simple moving average with bacon.js

by Guido García on 15/01/2015

Bacon is a small functional reactive programming library for JavaScript. Sometimes it is easier to handle data as a stream and react to changes in the stream than to process individual events. The example below (Node.js) computes a simple moving average.

var Bacon = require('baconjs');

function avg(array) {
  var sum = array.reduce(function(a, b) { return a + b; }, 0);
  return sum / array.length;
}

var bus = new Bacon.Bus();
bus.slidingWindow(3).map(avg).onValue(console.log);
bus.push(1); // output = 1
bus.push(2); // output = 1.5
bus.push(3); // output = 2

You can see another example (github) where an event stream is created and populated with your mouse positions. The “y” position is plotted along with its moving average.
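For readers who want to see what `slidingWindow` is doing under the hood, the same moving average can be sketched without Bacon; the `SlidingAverage` helper below is illustrative, not part of any library:

```javascript
// Dependency-free sketch of a sliding-window moving average.
function avg(array) {
  var sum = array.reduce(function (a, b) { return a + b; }, 0);
  return sum / array.length;
}

function SlidingAverage(size, onValue) {
  var window = [];
  return {
    push: function (value) {
      window.push(value);
      if (window.length > size) window.shift(); // keep only the last `size` values
      onValue(avg(window));
    }
  };
}

var results = [];
var stream = SlidingAverage(3, function (x) { results.push(x); });
stream.push(1); // 1
stream.push(2); // 1.5
stream.push(3); // 2
stream.push(4); // (2 + 3 + 4) / 3 = 3
```

Like Bacon's `slidingWindow`, it emits partial windows until the window fills up, so the first three outputs match the Bacon example above.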

 

Evolutionary computation · TEFcon 2014

by Guido García on 5/11/2014

This is my presentation (in Spanish) about evolutionary computation at TEFcon 2014. It was a talk about how we write code, and how genetic algorithms and genetic programming might help us. Because “programming should be more about the what and less about the how”.

Edit: I’ve pushed the Java code for the queens problem using genetic algorithms to github.

 

DTrace para flipar pepinillos

by Guido García on 25/08/2014

This is my presentation (in Spanish) about DTrace, a really cool tracing framework created by Sun. It is available on Solaris systems, but also on OS X, BSD, and some Linux ports.

It is a really powerful tool once you get used to it.

 

How to catch an Internet troll

by Guido García on 25/04/2014

Some weeks ago I carried out a social experiment (dameunverso.com, in Spanish) that consisted of writing a poem in a collaborative and anonymous way: anyone could add a new verse to the poem without identifying themselves or leaving any metadata (no cookies, no IP address tracking, etc.).

Our first trolls didn’t take long to appear, mostly in the form of copyrighted material, spam and offensive content. Is it possible to automatically classify an anonymous verse as spam?


Text classification

LingPipe is a powerful Java toolkit for processing text, free for research use under some conditions. I followed its Text Classification Tutorial to classify verses into one of two categories: “spam” or “love”.

The classifier I built uses 80% of the poem (classified by hand into the “spam” and “love” categories) as a training set to learn and build a language model. Then it uses the remaining 20% of the poem (48 verses) for cross-validation of the model.

You can find the code in Annex I; it is less than 50 lines of code.

Classification results

The classification accuracy is 75% ± 12.25%, so we can say that our model performs better than a monkey at a significance level of 0.05.

Categories=[spam, love]
Total Count=48
Total Correct=36
Total Accuracy=0.75
95% Confidence Interval=0.75 +/- 0.1225

Confusion Matrix
Macro-averaged Precision=0.7555
Macro-averaged Recall=0.7412
Macro-averaged F=0.7428
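The ±12.25% figure is consistent with the usual normal-approximation 95% confidence interval for a binomial proportion, p ± 1.96·√(p(1−p)/n); a quick sanity check (plain JavaScript, just for convenience):

```javascript
// 95% confidence interval for the reported accuracy:
// p +/- z * sqrt(p * (1 - p) / n), with z = 1.96 for a 0.05 significance level.
var correct = 36;
var total = 48;
var p = correct / total;                       // 0.75
var z = 1.96;
var halfWidth = z * Math.sqrt(p * (1 - p) / total);
console.log(p, halfWidth.toFixed(4));          // 0.75 0.1225
```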

It seems pretty promising and can serve as inspiration but, to be honest, I don’t think it is that good a model. With so few contributions to the poem it is prone to overfitting, so it is probably just learning to classify our usual trolls, who are not original whatsoever.

Moreover, we are not taking into account other factors that would greatly improve the results, such as the structure of the verse (length, number of words, etc.), the relation between verses (rhyme), or the presence of nonexistent words and typos. If you want to investigate further, I suggest taking a look at logistic regression to build better models that also include these kinds of factors.

On a practical note, if you ever plan to carry out a similar experiment, remember two rules. First, make it easier for you to revert vandalism than for the troll to vandalize your site. Second, don’t feed the troll; they will eventually get tired.

Annex I. Java Code

String[] CATEGORIES = { "spam", "love" };
int NGRAM_SIZE = 6;

String textSpamTraining =
        "Ola ke Ase\n" +
        "censurator\n" +
        "...";

String textLoveTraining =
        "Me ahogo en un suspiro,\n" +
        "miro tus ojos de cristal\n" +
        "...";

String[] textSpamCrossValidation = {
        "os va a censurar",
        "esto es una mierda",
        "..."
};

String[] textLoveCrossValidation = {
        "el experimento ha revelado",
        "que el gran poeta no era orador",
        "y al no resultar como esperado",
        "se ha tornado en vil censor",
        "..."
};

// FIRST STEP - learn
DynamicLMClassifier<NGramProcessLM> classifier =
        DynamicLMClassifier.createNGramProcess(CATEGORIES, NGRAM_SIZE);

{
    Classification classification = new Classification("spam");
    Classified<CharSequence> classified =
        new Classified<CharSequence>(textSpamTraining, classification);
    classifier.handle(classified);
}

{
    Classification classification = new Classification("love");
    Classified<CharSequence> classified =
        new Classified<CharSequence>(textLoveTraining, classification);
    classifier.handle(classified);
}

// SECOND STEP - compile
JointClassifier<CharSequence> compiledClassifier =
                (JointClassifier<CharSequence>) 
                        AbstractExternalizable.compile(classifier);

JointClassifierEvaluator<CharSequence> evaluator =
        new JointClassifierEvaluator<CharSequence>(
                compiledClassifier, CATEGORIES, true);

// THIRD STEP - cross-validate
for (String textSpamItem: textSpamCrossValidation) {
    Classification classification = new Classification("spam");
    Classified<CharSequence> classified =
        new Classified<CharSequence>(textSpamItem, classification);
    evaluator.handle(classified);
}

for (String textLoveItem: textLoveCrossValidation) {
    Classification classification = new Classification("love");
    Classified<CharSequence> classified =
        new Classified<CharSequence>(textLoveItem, classification);
    evaluator.handle(classified);
}

ConfusionMatrix matrix = evaluator.confusionMatrix();
System.out.println("Total Accuracy: " + matrix.totalAccuracy());

System.out.println(evaluator);
 

Lazy loading of modules in nodejs

by Guido García on 27/03/2014

This is a pattern I found in pkgcloud to lazy-load Node.js modules, that is, to defer loading them until they are actually needed.

var providers = [ 'amazon', 'azure', ..., 'joyent' ];
...

//
// Setup all providers as lazy-loaded getters
//
providers.forEach(function (provider) {
  pkgcloud.providers.__defineGetter__(provider, function () {
    return require('./pkgcloud/' + provider);
  });
});

It basically defines a getter, so a module won’t be loaded until you do:

var provider = pkgcloud.providers.amazon;

It might be useful in applications where you have different adapters (“providers” in the example above) offering different implementations of the same API, and you want to let the user choose which one to use at runtime. This is a common requirement in cloud environments but it could be applicable to other scenarios as well (e.g. choose a payment gateway).
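As a side note, `__defineGetter__` is deprecated; the same pattern can be sketched with `Object.defineProperty`. The snippet below simulates the module loading (the `lazyRequire` helper and the provider names are illustrative, not pkgcloud's actual code) and also caches the module after the first access:

```javascript
// Lazy-loaded getters with Object.defineProperty instead of __defineGetter__.
var loaded = [];                    // track loads just to illustrate laziness

function lazyRequire(id) {
  loaded.push(id);                  // stand-in for require('./pkgcloud/' + id)
  return { name: id };
}

var providers = {};
['amazon', 'azure', 'joyent'].forEach(function (provider) {
  Object.defineProperty(providers, provider, {
    configurable: true,
    get: function () {
      var mod = lazyRequire(provider);
      // Replace the getter with the loaded module, so it only loads once.
      Object.defineProperty(providers, provider, { value: mod });
      return mod;
    }
  });
});

console.log(loaded.length);        // 0 - nothing is loaded yet
var amazon = providers.amazon;     // first access triggers the load
console.log(loaded);               // ['amazon']
```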

This is the first time I have seen it, so please share your thoughts and any alternative approaches.

 

Cloud is not cheap

by Guido García on 18/03/2014

There is a myth about cloud computing. Many people think they will save money moving their services to the cloud, but the reality is that the cloud is not cheap.

Virtualization, one of the core parts of cloud computing, tries to deliver on the promise of elastic capacity and pay-as-you-go pricing. Despite this promise, the truth is that today we are running virtual machines that don’t do much because, most of the time, our applications are not doing anything. Their processors are underutilized. While this is an opportunity for cloud providers to oversubscribe their data centers, it also means we are overpaying. There is still much untapped potential for applications running in the cloud.

Services in the 21st century

In the last few years we have seen many improvements in the way applications are packaged and deployed to the cloud and in how these processes are automated, and we have learnt that we have to build applications for failure (see “There will be no reliable cloud”).

But what I have not seen yet is services communicating with each other to share their health status. I think services in the cloud should be able to expose their status in real time. That way they could talk to others and say: “hey, I’m struggling to handle this load, who can help me out with 2 extra GB of RAM for less than 10 cents/hour?”.

How do you think cloud will change apps in the next 5-10 years?



Ryan Dahl – How do you see the future of PaaS (see 4:38)

 

The long tail in this blog

by Guido García on 23/01/2014

This blog is two years old, and I’d like to share how its >50K visits are distributed.


One single post drives 40% of the traffic to the blog. At the bottom, 70% of its posts represent 4% of the traffic.

In my opinion, the most popular ones are not the best ones. They are about very specific technical subjects, containing keywords in the title and in the URL slug. Google does the rest.

 

Performance is premature optimization

by Guido García on 18/01/2014

I will burn in hell, but performance is premature optimization nowadays. Although it is very interesting from an engineering perspective, from the practical point of view of someone who wants to follow the make-shit-happen startup mantra, my advice is not to worry much about it when choosing a programming language.

There are things that matter more than the technology stack you choose. In this post I will try to explain why; then you can vent your rage in the comments section.


Your project is not Twitter

It is not Facebook either, and it probably never will be. I am sorry.

The chances of your next project becoming popular are slim. Even if you are that lucky, your app will not be popular from day one. And even if it becomes popular enough, hardware is so cheap at that point that it could be considered free for all practical purposes (around one dollar per day for a 1 CPU/1 GB machine; compare that with our wages).

Your project will fail

Face it: you are not alone, most projects fail, and there is nothing wrong with that. They fail before performance becomes an issue. I do not know of a single project that has failed solely due to a bad choice of programming language.

So I think that, as a rule of thumb, it is a good idea to choose the technology that allows you to try ideas and develop small components faster (nodejs, is that you?). You will have time to throw some of those components away and rebuild ultra-efficient alternatives from scratch in the unlikely case that you need them.

Conclusion

You are not going to need performance; stop worrying and get shit done instead. I always keep a Moët et Chandon Dom Pérignon 1955 in the fridge to celebrate the day I face performance issues caused by choosing X over Y.

 