Cloud is not cheap

by Guido García on 18/03/2014

There is a myth about cloud computing. Many people think they will save money by moving their services to the cloud, but the reality is that the cloud is not cheap.

Virtualization, one of the core parts of cloud computing, tries to deliver on the promise of elastic capacity and pay-as-you-go pricing. Despite this promise, the true story is that today we are running virtual machines that don’t do much because, most of the time, our applications are not doing anything. Their processors are underutilized. While this is an opportunity for cloud providers to oversubscribe their data centers, it also means we are overpaying. There is still much untapped potential for applications running on the cloud.

Services in the 21st century

In the last few years we have seen many improvements in the way applications are packaged and deployed to the cloud and in how to automate those processes, and we have learnt that we have to build applications for failure (see “There will be no reliable cloud”).

But what I have not seen yet is services communicating with each other to share their health status. I think services in the cloud should be able to expose their status in real time. This way they could talk to each other and say “hey, I’m struggling to handle this load; who can help me out with 2 extra GB of RAM for less than 10 cents/hour?”.

How do you think cloud will change apps in the next 5-10 years?

Ryan Dahl – How do you see the future of PaaS (see 4:38)


The long tail in this blog

by Guido García on 23/01/2014

This blog is two years old, and I’d like to share how its >50K visits are distributed.

Long Tail

One single post drives 40% of the traffic to the blog. At the bottom, 70% of its posts represent 4% of the traffic.

In my opinion, the most popular ones are not the best ones. They are about very specific technical subjects, containing keywords in the title and in the URL slug. Google does the rest.


Performance is premature optimization

by Guido García on 18/01/2014

I will burn in hell, but worrying about performance is premature optimization nowadays. Although it is very interesting from an engineering perspective, from the practical point of view of someone who wants to follow the make-shit-happen startup mantra, my advice is not to worry much about it when it comes to choosing a programming language.

There are things that matter more than the technology stack you choose. In this post I will try to explain why; then you can vent your rage in the comments section.

Get Shit Done

Your project is not Twitter

It is not Facebook either, and it probably never will be. I am sorry.

The chances of your next project becoming popular are slim. Even if you are that lucky, your app will not be popular from day one. And even if it is, hardware is so cheap at that point that it can be considered free for all practical purposes (around one dollar per day for a 1 CPU/1 GB machine; compare that with our wages).

Your project will fail

Face it: you are not alone. Most projects fail, and there is nothing wrong with that. They fail before performance becomes an issue. I do not know of a single project that has failed solely because of a bad choice of programming language.

So I think that, as a rule of thumb, it is a good idea to choose the technology that allows you to try things and develop small components faster (nodejs, is that you?). You will have time to throw some of those components away and rebuild ultra-efficient alternatives from scratch in the unlikely case that you need them.


You are not going to need performance; stop worrying and get shit done instead. I keep a Moët et Chandon Dom Pérignon 1955 in the fridge to celebrate the day I face performance issues caused by choosing X over Y.


Function parameters in Python, Java and Javascript

by Guido García on 18/01/2014

This is a short post about how these programming languages compare with each other when it comes to declaring functions with optional parameters and default values. Feel free to leave alternatives in other languages in the comments.

Python. The good.

Python is my favorite: you can pass parameters in any order and define their default values as part of the function signature itself.

def foo(arg1, arg2="default"):
    print "arg1:", arg1, "arg2:", arg2

The price you pay is that you cannot define two methods with the same name in the same class; the second definition simply replaces the first.

def sum(a, b):
    return a + b

def sum(a, b, c):  # silently replaces the previous definition
    return a + b + c

I am not a Python expert, but it does not seem such a big deal.

Java. The ugly.

Java is more verbose, but in exchange you get strong typing and easy refactoring.

public void foo(String arg1) {
    foo(arg1, "default");
}

public void foo(String arg1, String arg2) {
    System.out.printf("arg1: %s arg2: %s", arg1, arg2);
}
Javascript. The bad.

Javascript is a little uglier.

function foo(arg1, arg2) {
    arg2 = arg2 || 'default';
    console.log('arg1 %s arg2 %s', arg1, arg2);
}
This is real code we use in Instant Servers to support an optional first parameter:

CloudAPI.prototype.getAccount = function (account, callback, noCache) {
    if (typeof (account) === 'function') {
        callback = account;
        account = this.account;
    }
    if (!callback || typeof (callback) !== 'function') {
        throw new TypeError('callback (function) required');
    }
    // ...
};

It is pure crap.


Give your configuration some REST

by Guido García on 2/01/2014

I have built a simple configuration server to expose your app’s configuration as a REST service. Its name is rest-confidence (github). In this post I will try to explain its basics and three use cases where it could be useful:

  1. To configure distributed services.
  2. As a foundation for A/B testing.
  3. As a simple service directory.

Install and run a basic rest-confidence configuration server

The first step is installing the configuration server:

git clone
cd rest-confidence
npm install

After that, you are ready to edit your config.json configuration file. For example:

  "mongodb": {
    "host": "localhost",
    "user": "root"
  "redis": {
    "host": "redis-server",
    "port": 6379
  "logging": {
    "appender": {
      "type": "file",
      "filename": "log_file.log",
      "maxSize": 10240

Launch the configuration server (npm start) and you are done. You are now ready to start retrieving the values associated with any key, in a hierarchical way:

# curl http://localhost:8000/logging/appender
{"type":"file","filename":"log_file.log","maxSize":10240}

# curl http://localhost:8000/logging/appender/maxSize
10240

Use case #1: Configure distributed services

In my last post I wrote about why I like nodejs, a great platform for building micro-service-based architectures. However, this kind of architecture also comes with its own drawbacks. One of them is that it is harder to deploy and configure.

Micro Service Architecture

Micro Service Architecture. Image courtesy of James Hughes

With a centralized configuration server such as rest-confidence everything becomes easier. Instead of configuring hundreds of settings on each component, you only need to configure the URL of your configuration server. Your service will go there to look up any configuration property it needs.

Use case #2: A/B testing

A/B testing is a simple way to test different changes to your application and determine which ones produce positive results.

As a simplistic example, imagine you want to test an alternative color for your blue sign-up button, and check how it affects the conversion rate. You can define a $filter with a $range limit in your configuration:

  "color": {
    "$filter": "random",
    "$range": [
      { "limit": 10, "value": "red" }
    "$default": "blue"

So when you retrieve the “color” property using the random filtering criterion, you’ll get different colors depending on the ranges.

# curl http://localhost:8000/?random=5

And with a filtering value outside the range you will get the default value.

# curl http://localhost:8000/?random=15
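For the split to be useful, the same user should always land on the same side of it. One way to get that (an assumption on my side, not something rest-confidence imposes) is to hash the user id into the 0-99 range and send that as the random value:

```javascript
// Map a user id to a stable number in [0, 100): the same id always
// lands in the same bucket, so a user always sees the same color.
function bucket(userId) {
    var hash = 0;
    for (var i = 0; i < userId.length; i++) {
        hash = (hash * 31 + userId.charCodeAt(i)) % 1000000007;
    }
    return hash % 100;
}

// The client would then call http://localhost:8000/?random=<bucket>
```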

Use case #3: Simple service directory

You can use rest-confidence as a simple service directory, that is, a centralized server that facilitates dynamic location of other services’ endpoints, based on different criteria.

  "myservice": {
    "$filter": "env",
    "production": {
      "url": {
        "$filter": "country",
        "ES": "",
        "UK": "",
        "$default": ""
    "development": {
      "url": "" 

With some criteria applied (for example, env=production and country=ES) you will get the proper service endpoint, or any other information you need:

# curl "http://localhost:8000/myservice?country=ES&env=production"

I hope you find it useful. There is also a nodejs client. Contributions are welcome.


Why is node.js so cool? (from a Java guy)

by Guido García on 9/12/2013

I confess: I am a Java guy

At least I used to be, until I met node.js. I still think the JVM is one of the greatest pieces of technology ever created by man, and I love the Spring Framework, the hundreds of Apache Java libraries and the over-six-hundred-page books about JEE patterns. It is great for big applications built by many developers, or applications that are made to last.


But many applications today are not made to last. Sometimes you just want to test something fast. Fail fast, fail cheap, keep it simple… the “be lean” mantra, you know.

Moreover, open source has completely changed the way we build applications, moving from developing tons of code in monolithic applications to assembling small programs that use third-party components as middlewares (nosql databases, queues, caches).

Second confession: I hate(d) Javascript

Yes, Internet Explorer 4 made me hate Javascript, so the first time I heard about node.js and server-side Javascript I felt a shiver down my spine. It got worse when I started to play with the unfamiliar continuation-passing style; the asynchronous callback hell did not take long to appear.

Node is Asynchronous

A simple pattern: function(err, result) {}

But the absence of rules does not necessarily have to mean chaos. In fact, there is one pattern in node.js: your callbacks take two arguments; the first is an error object, the second is the result. This is your contract with the platform and, more importantly, with the community. Stick to it and you will be fine.
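A minimal example of a function that honours that contract:

```javascript
// Error-first callback convention: callback(err, result).
// On failure the error comes first; on success it is null.
function parseConfig(json, callback) {
    var config;
    try {
        config = JSON.parse(json);
    } catch (err) {
        return callback(err);   // something went wrong: error first
    }
    callback(null, config);     // success: null error, then the result
}

parseConfig('{"port": 8000}', function (err, config) {
    if (err) throw err;
    console.log(config.port);   // prints 8000
});
```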

Using such a popular programming language plus this simple convention is what makes it so easy to start working with node.js. It makes building small modules that work together with other developers’ modules surprisingly easy. That is why there are more than 50K modules in the npm registry. Most of them are probably worthless, but natural selection applies here too, and this evolutionary process is much faster than the Java Community Process (JCP).

With node.js I feel like a productive anarchist. I get shit done.

You should also read “Broken Promises” and “Why is node.js becoming so popular” (quora), and watch Mikeal Rogers’ talk on why node is so successful (24 min).


Big teams are not agile in the digital world

by Guido García on 12/08/2013

The post today is not so technical. I have been thinking about why many big corporations, with almost unlimited resources, are not able to deliver top-quality products and services, and why companies with a small fraction of those resources create new products faster.

I have found several sociopsychological causes, most of them related to one aspect of human activity: working in a team.

Diffusion of responsibility

Diffusion of responsibility is a sociopsychological phenomenon whereby a person is less likely to take responsibility for action or inaction when others are present. Considered a form of attribution, the individual assumes that others either are responsible for taking action or have already done so. The phenomenon tends to occur in groups of people above a certain critical size and when responsibility is not explicitly assigned. (wikipedia)

This is a harmful situation, where everybody’s responsibility becomes nobody’s responsibility and tasks are just words instead of real actions.

Analysis paralysis

Analysis paralysis is the state of over-analyzing (or over-thinking) a situation so that a decision or action is never taken […] rather than try something and change if a major problem arises. (wikipedia)

Perfect is the enemy of good in most cases, and the opportunity cost of decision analysis tends to be higher than taking some risks and launching a sub-optimal product. LinkedIn founder Reid Hoffman said: “if you are not embarrassed by the first version of your product, you’ve launched too late”.

See also “Performance is premature optimization”.

Inertia and Groupthink

Inertia is the resistance of any physical object to any change in its motion (including a change in direction). In other words, it is the tendency of objects to keep moving in a straight line at constant linear velocity, or to keep still

Groupthink is a psychological phenomenon that occurs within a group of people, in which the desire for harmony or conformity in the group results in an incorrect or deviant decision-making outcome. Group members try to minimize conflict and reach a consensus decision without critical evaluation of alternative ideas or viewpoints, and by isolating themselves from outside influences. (wikipedia)

Do you remember the monkey, banana and water spray experiment? It is hard to change the culture of a big corporation. It is not easy to innovate and disrupt when the main reason to keep doing something is that “we have always done it that way”, or coercion.

The Milgram experiment on obedience to authority figures was a series of social psychology experiments, which measured the willingness of study participants to obey an authority figure who instructed them to perform acts that conflicted with their personal conscience. (wikipedia)

Group intercommunication

The number of communication paths in a team of N people is N x (N – 1)/2. This means that the time spent communicating (this includes meetings) grows quadratically with team size, while total productivity only grows linearly.
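A quick calculation shows how fast those paths pile up:

```javascript
// Pairwise communication paths in a team of n people: n * (n - 1) / 2
function paths(n) {
    return n * (n - 1) / 2;
}

paths(5);  // → 10
paths(10); // → 45 (doubling the team more than quadruples the paths)
paths(50); // → 1225
```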

I like the idea of “two pizza teams” coined by Jeff Bezos: if you can’t feed a team with two pizzas, it’s too large.

When you’ve got a small group, you don’t need to constantly formalize things. You communicate and you know what’s going on. If you have a question about something, you ask someone. Formalized rules, deadlines, and documents start to seem silly. Everyone’s already on the same page anyway (37signals)

Fear of failure

Atychiphobia is the abnormal, unwarranted, and persistent fear of failure. As with many phobias, atychiphobia often leads to a constricted lifestyle, and is particularly devastating for its effects on a person’s willingness to attempt certain activities. (wikipedia)

I can think of at least four consequences of this fear of failure:

  • Overengineering: instead of keeping a solution simple, engineers tend to overcomplicate it with unneeded features, taking precautions to ensure they are not blamed if something goes wrong (see also “scale later”).
  • Deliberate bad choices: “no one gets fired for buying IBM”. This applies to technological choices, the selection of partners, and support contracts that are slow, expensive and of questionable usefulness.
  • Pessimistic attitude as a defense mechanism: if you put yourself in the worst scenario, from that point on everything looks better.
  • Fear of saying no to authority figures.

Emotional contagion

Emotional contagion is a process in which a person or group influences the emotions or behavior of another person or group through the conscious or unconscious induction of emotion states and behavioral attitudes. (wikipedia)

A whiner is somebody who complains a lot. This attitude is really infectious; it spreads a negative karma that is almost impossible to eradicate, and it diminishes passion and the chances of success: “whether you think that you can, or that you can’t, you are usually right”.

“Little Eichmanns” is a phrase used to describe persons who participate in society in a way that, while on an individual scale may seem relatively innocuous even to themselves, taken collectively create destructive and immoral systems in which they are actually complicit. (wikipedia)


Excessive hierarchy is also dangerous. Too many hierarchical levels can stop or slow down decisions: even operational decisions that should take hours end up taking weeks.

Add more layers and employees will also stop identifying with the company. This is a kind of emotional detachment: workers do not think they can make significant contributions to the company, collective responsibility is lost, and the company’s problems become someone else’s problems.

Somebody Else’s Problem is a psychological effect where individuals/populations of individuals choose to dissociate themselves from an issue that may be in critical need of recognition. Such issues may be of large concern to the population as a whole but can easily be a choice of ignorance by an individual. (wikipedia)

When roles are too compartmentalized, some people stop being able to wear many hats. I think this is because they start feeling that doing certain tasks, or getting their hands dirty, would mean a step back in their professional careers, or simply discredit them. This is completely different in a small company, and it clearly makes a difference in terms of speed.

I like passionate, small, flat, focused teams that really embrace agility and self-organization. Bureaucracy can kill agility, and big groups of people can be destructive for innovation and adaptation if not properly managed. The problem is even worse when objectives are not aligned across the company, but I will write about that in another post.


Playing around with Meteor

by Guido García on 1/03/2013

I have been playing around with meteor, an open-source platform for building web apps. The result is a 200 LOC game ladder with a live demo.

The platform is built on top of nodejs, which is great. In my opinion it is not yet ready for production environments, but I am really impressed by how fast you can create simple web applications with live page updates, automatic data synchronization and many other niceties I have never seen in any other web framework.

ELO algorithm

There is an open issue with the ranking algorithm. I am looking for a javascript implementation of the ELO algorithm. I am waiting for your pull requests!


Deploy virtual machines on Instant Servers cloud with Java

by Guido García on 17/02/2013

Instant Servers is the infrastructure-as-a-service (IaaS) system I have been working on over the last few months at Telefónica Digital.

The service offers a public REST API (Cloud API) that is super simple to use. However, in this post I will show you how to manage your infrastructure using a Java client, without dealing with HTTP requests.

Build the Cloud API client

Man does not live by nodejs alone. There is an instantservers project on github you can easily clone and compile (pull requests are welcome here too). In the future it will be published as a proper maven artifact, so you will be able to skip this step.

git clone
cd ./instantservers/instantservers-api-client
mvn install

That will generate an instantservers-api-client-1.0.0.M1.jar library you can use in your own applications.

Deploy your first virtual machine

To deploy a virtual machine on the Instant Servers cloud you only need to choose a name for the machine, a package that corresponds to the hardware configuration (cpu, mem, disk) you need, and a dataset that represents the image or template you want to use (e.g. ubuntu 12.04, mongodb, smartos, etc.).

Let the code speak.

package net.guidogarcia;

public class InstantServersExample {
    // there are several datacenters; I use Madrid ("eu-mad") in this example
    private static final String CLOUDAPI_URL = "..."; // the eu-mad Cloud API endpoint

    public static void main(String[] args) throws Exception {
        CloudAPIClient client =
                new CloudAPIClient("username", "password", CLOUDAPI_URL);

        Machine machine = new Machine();
        // set the machine name, package and dataset here

        Machine deployed = client.createMachine(machine);
        System.out.printf("Machine id is %s", deployed.getId());
    }
}

You will notice that virtual machines are up and running in a matter of seconds, because the virtualization is based on rock-solid Solaris zones.

You will need a username and a password to authenticate the API calls, but you can sign up for Instant Servers for free (machines are not free, but you can try them for something like 6 cents per hour).

If anyone is interested in other API operations or about cloud computing in general, leave a comment and I will be happy to write more posts about it.


Node.js running on my Raspberry Pi. A benchmark.

by Guido García on 13/09/2012

A few weeks ago I could not resist the temptation to buy a Raspberry Pi, the super-cheap $35 computer that comes with 256MB of RAM and an ARM CPU running at 700MHz, and fits in your pocket (more information in wikipedia).

Raspberry Pi (wikipedia)

See how nice it looks. I am more of a software guy, so the first thing I did was install node.js (v0.6.19), develop the simplest web server you can create in node (5 lines; it simply returns a 200 HTTP response code without any content) and put the beast to work.

var http = require('http');
http.createServer(function (req, res) {
    res.writeHead(200);
    res.end();
}).listen(8000);

The benchmark

I was interested in testing the number of requests per second the application was able to handle running on the Raspberry in the most optimistic scenario. After having some problems running httperf and autobench on Mac OS, I finally went with apachebench (ab), which can be used to run simple load tests.

These are the results of sending 5120 requests to the node web server, at different concurrency levels, using the following command:

ab -n 5120 -c <concurrency> http://<raspberry-ip>:8000/

raspberry benchmark results

Additional information: each concurrency level was executed three times from my laptop over a wifi connection; the graph shows the average value. The Raspberry Pi was running the Raspbian “wheezy” image (downloads).

Open points

Almost 200 requests per second in this non-real-world application that does nothing. Not bad; enough to develop and try the ideas I have in mind. To be honest, I still do not know why the performance drops so much at a concurrency of 512, or which side (my laptop or the raspberry) is the bottleneck, and why. Any ideas?

I still have to measure other aspects like CPU and memory usage. At a quick glance, it seems the CPU quickly goes over 90% usage even at small concurrency levels. I still appreciate this piece of hardware, but in the future I will try to overclock the processor. Memory usage stayed under 10%, which is not strange in such a simple application.

I am also waiting for the Java Virtual Machine, which is supposed to be included in the default file system in future releases, to repeat the benchmarks (and probably watch it eat the memory).

It seems interesting, from a research point of view, to build a cluster and see how it scales. Donations for this purpose are highly appreciated :)
