Master on Libre Software Planet

February 11, 2018

My parents-in-law got me an evergreen bonsai for Christmas, a Ficus retusa.

It wasn't until recently that I got interested in the idea of growing a plant. I find it fascinating that you can mold a living being to your liking, within certain constraints. Every branch contains a possibility, and you've got to decide which ones to develop. The fact that it's a slow process that takes years to fully show results speaks of the patience and constant care you need to put into it. I don't want to think too much about that, because the thought of committing myself to something for so long is scary! At the same time, I've found a sense of calm and bonding in things like cleaning every individual leaf of the tree once a month – I can relate much better now to Paul Richardson, the exo-botanist of Mars.

As this is the first plant I've owned, I'm still learning a lot about everything: its watering needs, what a good pruning balance is, how to identify and treat pests and diseases, etc. So far, it's been enjoyable.

January 27, 2018

Sés is a Galician singer-songwriter. Her stage presence and lyrics remind me of what it means to grow up in small villages on the periphery. I feel connected to that dignity and those survival skills, because they embody the attitude of the women I grew up around. The Galician matriarchy.

January 24, 2018

I’ve just learned that Ursula K. Le Guin is no longer with us. She left multiple worlds for us to play with and learn from. Two of them – The Left Hand of Darkness and The Dispossessed – are my go-to guides when it comes to imagining societies that take into account the role of self-management, gender, language, and free commerce. We cannot bring her back, but we still have her words to read, all that she wanted to tell us.

December 02, 2017

December 01, 2017

The soundtrack of Mars (the National Geographic TV show) is fantastic. Nick Cave is just the perfect voice to convey that feeling of exploration and fear. Moon, Interstellar, The Martian, etc.; it seems sci-fi movies have developed an appreciation for soundtracks that play a major role in the film – and I enjoy that.

As much as I like Cave’s main theme for Mars, after a few episodes I was in need of something like Dylan’s Shelter from the Storm. Exploration needs joy and celebration.

November 25, 2017

Running in Circles is Basecamp’s view of agile product management. They acknowledge the value of working in cycles, but add three pieces: having the time to focus, being able to modify the original plan, and tackling the core unknowns of the feature first.

The first two are enablers that management provides to the makers. The last one is how the makers make the most of those powers. Together, they form a process that is nicely captured by the uphill / downhill metaphor. Uphill you are discovering the unknowns and making decisions about what goes in; downhill everything is clear and you are implementing it at warp factor 10:

November 22, 2017

I came across Module Counts, which tracks the number of published modules for the major language package managers. At this point, npm has 600k packages published, which is 3 to 4 times what any other package manager has. I’m not aware of download statistics across different package managers, but npm has surpassed the mark of 2 billion downloads a week.

November 12, 2017

Software architecture is failing: tech writing is biased towards what the big players do, which usually doesn’t fit most other contexts – but nobody ever got fired for choosing IBM, right? Although I connect with this rant at an emotional level, I do think it’s necessary to elaborate more and make a positive contribution: help to create and spread that alternate history of software development. How do you do it? Hat tip: Fran.

November 01, 2017

In November 2016 I had a free month between jobs. Apart from some resting, reading, and general preparation for my new adventure, I still had quite a bit of free time to do new things or build good habits. It was while cleaning my office that I found a keyboard I had bought a couple of years back:

Its layout was a beautiful matrix – which is good for your fingers – and it came with Dvorak by default. So it struck me: how about improving my typing during the coming weeks?

As a programmer, typing is an essential skill for me. I had been doing it for more than 15 years in a learn-by-doing way, and I plan to keep typing for years to come. I thought it would be fun to spend a couple of hours a day training in touch-typing and give Dvorak a second try. And so I did.

The experience

Before I switched, I recorded about 15 typing sessions at TypeRacer using the QWERTY layout; the site logs typing speed (words per minute) and accuracy (% of characters right over the total). I was at 67 wpm and about 95% accuracy at the time.

Progress was very humbling at the beginning; it felt like learning to walk again, and I swear that, sometimes, I could even hear my brain circuits being reconfigured! After a few weeks, though, I was under 40 wpm and, by the end of the month, I was under 50 wpm. I stopped quantifying myself by then: as I started working, I had a lot of typing to do anyway.

During the first months, the only moments I struggled and felt like perhaps the switch wasn’t a good idea after all were during real-time communication: chats, Slack, etc. I don’t know what people thought of me, but my velocity at the time was typing-bound – I was certainly a very slow touch-typist by my own standards.

But time passed and I improved.

Spáñish Dvorak and symbols

Throughout the process I changed my setup quite a bit: I started my journey using the Programmer Dvorak layout with a TypeMatrix keyboard. After a few months, I switched back to my good old ThinkPad keyboard, because having to use a mouse again after years of not using one was a pain. A few months later, I switched to international Dvorak, because the Programmer Dvorak layout didn’t quite suit me. Then, I tweaked the common symbols I use for programming so they were better positioned. Besides, although the bulk of my typing is in English, I still need to write decent Spáñish, which basically means using tildes on vowels and ñ. TL;DR: the Spanish Dvorak version made things more difficult, so I’ve just tweaked international Dvorak to accommodate tildes and ñ as I see fit.

At this point, I believe I can patent my own layout:

All the changes I made to the symbol positions have affected my ability to build muscle memory for them – sometimes I still need to look at the keyboard for some specific symbol. However, the current version has been unchanged for months, so I only need a bit more time for them to stick.

The numbers

Given that I was a QWERTY user for 15 years, I thought I would give the new layout a year before comparing any statistics. The fair thing to do would be to compare after 15 years, but I’m too impatient for that. I went to TypeRacer again and noted down the results of about 20 races. These are the numbers of this totally unscientific experiment:

A few remarks:

In terms of speed, it seems that I’m mostly there. My median speed is now 65 wpm, 2 words per minute less than before. I had a higher peak (83 vs 79) in one of the current typing sessions, but I was under 60 wpm in more sessions this time.

In terms of accuracy, I’ve improved a bit. My median accuracy has increased by 1.5 points, and I had only 2 sessions below 95% accuracy this time.

Coda

Overall, I’m very happy with the switch to Dvorak. My accuracy has improved, meaning that I can maintain a longer typing rhythm. Not having to correct mistakes makes me a faster typist as well, and by learning to touch-type I have also built more endurance.

This experiment was very humbling but fun. I believe it increased my brain plasticity by an order of magnitude, and I’m hoping to improve my numbers as the years pass as well. However that turns out, though, I think of this as a gift to my elder self, a way to prevent typing pain in the future and promote a healthy use of the tools I heavily depend upon.

October 25, 2017

I spent the weekend reorganizing things, including my blog. I’ve got a new WordPress theme (Independent Publisher) which looks a lot more lightweight. I’ve consolidated the essays section with stuff that grew out of individual posts (I keep thinking that someday I’ll have the time to publish them as independent e-books), polished the about page, fixed some links in the glossary, and started to reorganize the archives.

I’m also going to try a different approach in the following months: instead of having separate blogs for music, lifestream, thoughts, etc I’m going to publish everything here – I do not publish that much anyway, and I like the idea of this having a more personal touch.

October 22, 2017

When we want to acquire a new skill, we are faced with two choices: trial and error, or instruction. One is experience-driven or practice-based; the other is concept-driven or theory-based.

The trade-offs

Trial and error is the built-in mechanism humans come with to acquire knowledge and skills – our thinking processes are optimized for it. However, it may be expensive and impractical in some situations. For instance, learning to pilot an aircraft by trial and error is risky, should you want to keep your chances of learning anything else in the future. We have developed systems that lower the cost of trial and error, though, such as flight simulators. It can also be time-consuming: we just don’t have the time to trial-and-error every piece of knowledge our society is based upon!

Learning by instruction appears to be more efficient: we are presented with models and recipes that work, saving us a lot of time that we can use to advance our knowledge further. Nevertheless, instruction is not always possible; sometimes the map of knowledge of a certain domain isn’t built yet, so we need to rely on trial and error. Even more important is the fact that internalizing abstract knowledge not based on direct experience seems to be more difficult for humans.

What they said is that 1) we should recognize the role of first-hand experience in acquiring knowledge, and 2) to become an expert it is necessary to learn the rules, guidelines, and maxims of the particular skill we are interested in.

Rules are the principles that always apply; they don’t depend on anything, so they are context-free or non-situational. Examples of rules are the valid movements of a piece in the game of go, the set of instructions in programming, or the techniques in the martial art of Aikido.

Guidelines are the principles that only apply in specific contexts, so they are context-bound or situational. Things like josekis in go (sequences of moves in a specific part of the board), design patterns in programming, or katas in Aikido.

Maxims are principles that guide us towards achieving our long-term goal; they help us by assigning a value to guidelines: is this joseki worth it if I’m playing for territory in go? Is the ability to grow new features necessary for this specific part of the application? Which specific throw should I use if I want to face the next adversary?

For one to become an expert, rules, guidelines, and maxims should be second nature.

Dreyfus defines a 5-stage process someone goes through to gain knowledge: novice, competence, proficiency, expertise, mastery. Others outline different systems that include three stages. What’s important is to realize that the learning process is at its best when we take a practical approach and theory is presented to the learner as they are prepared to assimilate the next artifact – rules, guidelines, maxims.

Coda

Learning to learn is probably one of the most important skills when we no longer know what’s coming next. The real world™ tends to be more chaotic and intertwined than the sequential process outlined by Dreyfus. Realizing where you are at a particular skill will help you make decisions about what to focus on. For instance, am I a novice at skill X? Well, at this point, I’m better off focusing on learning the rules and imitating what others have done. And so on.

Learning also takes a lot of time – someone has even published a number: about 10,000 hours to become an expert at anything. It’s a lot! It may be discouraging. Luckily, a practice-based approach makes things more rewarding, and time flies when we enjoy the process.

October 16, 2017

HACF (Halt and Catch Fire) resonated with me because it was about the pleasure of making things work and the cost of pursuing your dreams. We need a whole lot more stories about the woes and joys of creation to learn how to navigate that world and to inspire us. We need more builders and dreamers capable of not burning themselves out.

Bonus points for using the evolution of computers as the MacGuffin. But, as much as I liked the history of computers being the central plot of a well-done period drama, HACF wasn’t about computers. The computers aren’t the thing. They’re the thing that gets us to the thing.

October 06, 2017

This was the first audiobook I’ve actually finished. Sean Runnette‘s voice was adequate for setting the tone and rhythm – actually, sometimes I felt I was listening to Feynman himself!

Having read Surely You’re Joking, Mr. Feynman!, What Do You Care What Other People Think?, and some other papers/videos, I already knew most of the stories in the book, but it had some new material that made it interesting nonetheless. It is more mathematically and physically intense than the others, probably because it’s mostly focused on the scientific Feynman and less on the human one – but also because many chapters are directly transcribed from conferences he gave. It’s also worth noting that, unlike the other two, this book was published without Feynman’s intervention: it came out 10 years after his death.

If I had to choose only one Feynman book, I’d choose Surely You’re Joking, Mr. Feynman! It’s better edited and has more variety. Then, if you are hungry for more, What Do You Care What Other People Think? contains new stories. I liked this one, but I doubt it’s a good introduction to Feynman’s lifestyle, work, values, and character.

September 28, 2017

I’ve just finished the book Code Simplicity. It presents a framework for thinking about software development in the form of laws and rules. It’s short but comprehensive. In my experience, the laws and rules hold true. I think the book has value as an overall perspective on what’s important in software development, and some chapters are really spot-on: for example, the equation of software design – something that I’ve already included in my glossary and plan to expand.

Code Simplicity doesn’t intend to ground the laws and rules in something actionable, though. I’m at a point in my career where I’m focused on consolidating and reflecting upon how to achieve simplicity in software design – which means that I crave specifics, so I can compare them with mine.

As a cross-recommendation, if you are interested in learning about the laws of software development in an actionable manner, I’d suggest reading Beck’s trilogy: Extreme Programming Explained: Embrace Change, Test-Driven Development: By Example, and Implementation Patterns. Those three books make a great combination of macro-forces (at the project level) and micro-forces (at the coding level) in software design. They were fundamental in consolidating my experiences as a programmer, so I’m highly biased towards them.

September 01, 2017

One of the things I was very into a decade ago was studying the interplay between technology, culture, and society. From those years, I developed a sensitivity about my role as an engineer, or as an enabler of possible worlds.

This is one of the things I wanted to avoid:

If you have ever had a problem grasping the importance of diversity in tech and its impact on society, watch this video pic.twitter.com/ZJ1Je1C4NW

A person isn’t able to wash their hands because the machine’s sensors are only prepared to detect white hands! That’s a horror story that could make a Black Mirror episode.

This made me think about the mainstream perception of Machine Learning and Artificial Intelligence. Lately, some friends of mine have been sharing with me clickbait news like Facebook shuts down robots after they invent their own language. They ask me if robots could take over soon. Well, I can tell you something: at this stage of technology, I am not worried about robots taking over. What I do worry about is how our inability to understand technology creates racist algorithms that reinforce our biases.

August 28, 2017

August 27, 2017

(…) the number-one indicator of a successful team wasn’t tenure, seniority or salary levels, but psychological safety. Think of a team you work with closely. How strongly do you agree with these five statements?

If I take a chance, and screw up, it will be held against me.

Our team has a strong sense of culture that can be hard for new people to join.

My team is slow to offer help to people who are struggling.

Using my unique skills and talents comes second to the objectives of the team.

It’s uncomfortable to have open honest conversations about our team’s sensitive issues.

Teams that score high on questions like these can be deemed “unsafe”. Unsafe to innovate, unsafe to resolve conflict, unsafe to admit they need help.

What’s the purpose of this code? It takes some input data structures and outputs markup: either proper HTML, or a code to be processed by later stages of the pipeline. At its core, what we are doing is making a decision based on the input’s state, so it can be modeled as a decision tree:

By restating the problem in a simpler language, the structure is made more evident. We are free of the biases that code, as a language for thinking, introduces (code size, good-looking indentation, a certain preference for switch or if statements, etc.). In this case, conflating the two checks into one reduces the tree depth and the number of leaves:
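The original code isn’t reproduced here, so this is a hypothetical sketch of the idea (the markup and field names are made up for illustration): two nested checks produce four leaves, two of which turn out to be identical, and conflating the checks flattens the tree.

```javascript
// Before: two levels of checks; the two non-raw leaves are the same.
function markupBefore( block ) {
  if ( block.isRawHTML ) {
    if ( block.hasChildren ) {
      return '<div>' + block.children + '</div>';
    }
    return block.html;
  }
  if ( block.hasChildren ) {
    return 'process-later';
  }
  return 'process-later';
}

// After: conflating the two checks into a single condition reduces
// the depth of the tree and removes the duplicated leaf.
function markupAfter( block ) {
  if ( block.isRawHTML && block.hasChildren ) {
    return '<div>' + block.children + '</div>';
  }
  return block.isRawHTML ? block.html : 'process-later';
}
```

Both versions make the same decisions; the second one just states them with less structure to read.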

June 26, 2017

In this paper, we make the case that the high-productivity digital firms are starting to generate a new middle class. It’s a virtuous circle. Consumers flock to those firms because they offer lower prices and better service. Workers migrate there from low-productivity firms because the high-productivity firms offer better wages for the same occupations—and, often, steadier hours and better benefits.

The repository is 86 TB of data, 1 billion files, and 35 million commits. To manage this complexity, they needed to build their own tools: a home-grown version control system that can work effectively with a repository at this scale, editor integration, build and automated testing tools, etc.

They develop all the code against trunk/master, meaning that if you update a library, you’ll also need to fix all the applications that depend on it. Every project will be up to date, even abandoned ones.

The advantages

The main reasons they claim this approach works for them are that it makes it easier to reuse blocks of knowledge company-wide and reduces the friction to contribute between projects/teams. UI primitives, build tools, etc., are all shared by any project that wants them; it’s just a matter of depending on the master version. It minimizes the costs of versioning/integration and the curse of being left behind when something is updated and you cannot keep up with the changes (the experts will do it for you!).

As a side effect, when working on libraries/frameworks it’s easier to understand the performance/impact/etc. of a specific change (you can run tests on real projects) and to put together a task force to fix issues affecting several applications.

The disadvantages

This approach comes with downsides as well: they mention the amount of maintenance this setup requires, even with all the tooling they have already built. With a monolithic repo, it’s easy to run into unnecessary dependencies that bloat the binary size of a project (and they do), plus the costs inherent to updating basic blocks used throughout the whole company, etc.

Another point is that it makes it difficult to have external contributors. Although they have a space in the repository for public/open-sourced projects, the article is unclear on how they manage 3rd-party contributions there – external programmers don’t have access to the internal build tools that Google programmers have. High-profile products like Android or Chrome – where outside contributors are expected and encouraged – have walked away from this approach.

Coda

I highly recommend reading the paper: it’s a pretty unique approach, and the article does a good job of presenting a balanced perspective.

May 01, 2017

In the previous post of the series, I wrote about the nature of value and reference data types, and the differences between shallow and deep operations. In particular, the fact that we need to rely on deep operations to compare things is a major source of complexity in our codebases. But we can do better.

Comparing mutable structures

When working with mutable data structures, things like determining whether an object has changed are not so simple:

var film = {
    'title': 'Pirates of the Caribbean',
    'released': 2003
};
// At some point, we receive an object and one of its properties
// might have changed. But how do we know?
var newFilm = doSomething( film );
film === newFilm; // What does a shallow equality yield?

If we are allowed to mutate objects, although the film and newFilm identifiers are equal, the payload might have been updated: a shallow equality check won’t suffice; we’ll need to perform a deep equality check against the original object to know.

Comparing immutable structures

In JavaScript, primitives (numbers, strings, …) are immutable, and reference data types (objects, arrays, …) are not. But if mutable structures are the reason why comparing things is difficult, what would happen if we worked with reference data types as if they were immutable?

Let’s see how this would work:

If something changes, instead of mutating the original object, we’ll create a new one with the adequate properties. As the new and the old objects will have different identifiers, a shallow equality check will set them apart.
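A minimal sketch of that idea, with the update made explicit through Object.assign instead of a hypothetical doSomething:

```javascript
var film = {
    'title': 'Pirates of the Caribbean',
    'released': 2003
};

// Instead of mutating film, we create a new object that copies the
// old properties and overrides the one that changed.
var newFilm = Object.assign( {}, film, { 'released': 2017 } );

film === newFilm; // false: different identifiers, so something changed
film.released;    // 2003: the original object is untouched

// If nothing changed, we return the very same object:
var sameFilm = film;
film === sameFilm; // true: same identifier, nothing to update
```

With this convention, a cheap shallow equality check is all we need to detect changes.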

Coda

One of the reasons I started this series of posts was to explain how using immutable reference data types is one of the tricks at the core of Redux and React. Their success is teaching us a valuable lesson: immutability and pure functions are the core ideas of the current cycle of building applications – with the separation between API and interface being the dominant idea of the previous cycle.

I had already mentioned this some time ago but, at the time, I wasn’t fully aware of how quickly these ideas would spread to other areas of the industry, or how that would force us to gain a deeper understanding of language fundamentals.

I’m glad they did because I believe that investing in core concepts is what really matters to stay relevant and make smart decisions in the long term.

When we create new reference data type variables, they are going to have a brand new identifier, no matter whether the payload is actually the same as that of another existing variable. Because the language interpreter is comparing identifiers, and they are different, the equality check yields false.

The reason is that x = {'42': 'the meaning of life'} assigns a new identifier to x, that references a different payload – so we’ll be back to the first scenario shown in this block.
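A sketch of both scenarios, following the variable names in the paragraphs above:

```javascript
var x = { '42': 'the meaning of life' };
var y = x; // y copies x's identifier: both reference the same payload
x === y;   // true: same identifier

x = { '42': 'the meaning of life' }; // x gets a brand new identifier
x === y;   // false: equal payloads, but different identifiers
```

The interpreter only ever compares the identifiers; at no point does it look at the payloads.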

(A short aside: in the introduction, I mentioned that references and pointers were different. The above case is a good example of how: if y were a pointer, it would index the contents of x, so both variables would remain equal after x’s contents change.)

In computer science, the operations that work with the contents of the variable (be it values or reference identifiers) are called shallow operations, meaning that they don’t go the extra step to find and work with the actual payload. On the other hand, deep operations do the extra lookup and work with the actual payload. Languages usually have shallow/deep equality checks and shallow/deep copy operations.

JavaScript, in particular, doesn’t provide built-in mechanisms for deep equality checks or deep copy operations; these are things that we either build ourselves or use an external library for.
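As an illustration, a minimal deep equality check we could build ourselves might look like this (it’s a sketch, not production-ready: it ignores cycles, prototypes, dates, and other special cases):

```javascript
function deepEqual( a, b ) {
    if ( a === b ) {
        return true; // same identifier, or same primitive value
    }
    if ( typeof a !== 'object' || typeof b !== 'object' || a === null || b === null ) {
        return false;
    }
    var keysA = Object.keys( a );
    var keysB = Object.keys( b );
    if ( keysA.length !== keysB.length ) {
        return false;
    }
    // Recursively compare the actual payloads, not the identifiers.
    return keysA.every( function( key ) {
        return deepEqual( a[ key ], b[ key ] );
    } );
}

deepEqual( { 'a': { 'b': 1 } }, { 'a': { 'b': 1 } } ); // true
```

Note how the deep operation has to walk the whole structure – that extra work is precisely what shallow operations avoid.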

Object.assign creates a shallow copy of every own property in the source objects into the target object. If the target has the same prop, it’ll be overwritten. In the example above, we’re assigning a new identifier to the variable y, whose own properties will be the ones present in the object x.

This works as expected for objects whose own properties are value data structures, such as string or number. If any property is a reference data structure, we need to remember that we’ll be working with the identifiers.
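A sketch of that caveat: after a shallow copy, nested reference properties are shared between the copy and the original.

```javascript
var x = {
    'tag': 'film',                // a value property
    'meta': { 'released': 2003 }  // a reference property
};
var y = Object.assign( {}, x );

y === x;           // false: y got a brand new identifier
y.tag === x.tag;   // true: the value payload was copied
y.meta === x.meta; // true: only the identifier was copied, not the payload

// So mutating the nested object through the copy...
y.meta.released = 2017;
x.meta.released;   // ...is visible through the original as well: 2017
```

This is why a shallow copy is not enough when we want two fully independent structures.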

Coda

Humans have superpowers when it comes to pattern matching, so we are biased towards using that superpower whenever we can. That may be the reason why the reference abstraction is sometimes confusing, and why the behavior of shallow operations might seem inconvenient. In the end, we just want to manipulate some payload; why would we be interested in working with identifiers?

The thing to remember is that programming is a space- and time-bound activity: we want to work with potentially big data structures in a quick way, and without running out of memory. Achieving that goal requires trade-offs, and one that most languages make is having fixed memory structures (for the value data types and reference identifiers) and dynamic memory structures (for the reference payloads). This is an oversimplification, but I believe it helps us understand the role of these abstractions. Having fast equality checks is a side effect of comparing fixed memory structures, and we can write more memory-efficient programs because the copy operation works with identifiers instead of the actual payload.

Working with abstractions is both a burden and a blessing, and we need to understand them and learn how to use them to write code that is simple. In the next post, we shall talk about one of the tricks we have: immutable data structures.

There are a number of ways to classify data types in computer science. Of all of them, I find that the difference between value data types and reference data types is a useful classification for the daily life of application programmers – knowing the differences results in fewer bugs, less time to understand code, and more confidence to sleep well at night.

One way to think about them is by considering what the content of the variable is for each data type:

Value data types store their payload as the contents of the variable.

Reference data types store an identifier as the contents of the variable, and that identifier is a reference to the actual payload in an external structure.

Let’s say the FOO variable is a value data type and its payload is 42, while the BAR variable is a reference data type and has 42 as payload. A visual representation of this might look like:

We are usually interested in the payload of the variable (in green), not in its metadata (in red), yet fundamental operations of the languages we use every day behave differently depending on whether the variable’s content is a value or a reference.
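For example, the copy operation behaves differently in each case. A sketch, using FOO-like and BAR-like variables:

```javascript
// Copying a value data type duplicates the payload:
var foo = 42;
var otherFoo = foo;
otherFoo = 43;
foo; // still 42: each variable holds its own payload

// Copying a reference data type duplicates the identifier,
// so both variables keep referencing the same payload:
var bar = { 'answer': 42 };
var otherBar = bar;
otherBar.answer = 43;
bar.answer; // 43: the shared payload was changed
```

The same asymmetry shows up in equality checks and in passing arguments to functions.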

In terms of memory management, it is common for value data types and reference identifiers to be assigned a fixed amount of memory, and to live in a part of memory called the stack. On the other hand, the reference payload usually doesn’t have a fixed amount of memory assigned, so it can grow to any length, and it tends to be stored in a different part of memory sometimes called the heap. This is a generalization, and an area that depends heavily on the language and its interpreters, but the reason this distinction exists in some form is that we want fast and easy operations over an unlimited amount of data: operating with fixed-memory variables is easier and faster, but dynamic memory allocation makes better use of the limited space in memory – it’s a space/time tradeoff.

Boxing and unboxing

Languages with both value and reference data types tend to provide ways to convert values into references, and vice versa. This is called boxing and unboxing.

It is common that each value has a reference counterpart. For example, in JavaScript, there is the string primitive and the String object, the number primitive and the Number object, the boolean primitive and the Boolean object.

Also, languages tend to provide automatic boxing and unboxing in some situations. For example, JavaScript primitives don’t have the methods or extra properties that the reference objects have; yet, they’ll be automatically boxed into the equivalent reference object when you try to use one of its methods or properties.

var foo = 'meaning of life';
// Defines foo as a primitive string.
// To define it as the reference object String we'd do:
// var foo = new String('meaning of life');
foo.toUpperCase();
// This yields 'MEANING OF LIFE'.
// Although foo is a primitive, we can use the object's methods
// thanks to autoboxing.
// We could think of it as a type conversion in other languages:
// ((String) foo).toUpperCase();
foo.constructor === String;
// This yields true.
// When we access a property or method belonging to the object String,
// foo will be automatically boxed, so it behaves like the object.
foo instanceof String;
// This yields false.
// In this case foo is in its natural state (unboxed),
// so we are comparing the primitive to the reference.
typeof foo;
// This yields 'string'.
// In this case, foo is in its natural state (unboxed),
// so we are asking the system what kind of variable it is.

A note about references vs pointers

Some may argue that reference is just how object-oriented languages coined the old pointer data type. They are different things, though. The way I set them apart is by picturing what the contents of the variables are: references contain an identifier of the payload in an external structure; pointers index the contents of another variable.

If, for example, a language allowed us to define a variable called Z as a pointer to X, visually it might look like this:

Although the difference between pointers and references might be subtle, it has deep implications for how operations work with them.

Coda

We, application programmers, are mostly interested in the payload of the variables, but our programs consist of wrangling variables around with operations such as equality checks, copying, and passing arguments to other functions. These operations depend on the nature of the data they work with, so we are bound to deeply understand their inner workings. That will be the topic of the next post in the series.

April 04, 2017

One of the biggest milestones in Q1 2017 was the landing of the new CSS Grid standard in all major browsers.

Personally, the cool thing about this is that support for WebKit and Blink (namely, the Safari and Chrome browsers) was led and developed by Igalia with a team of people (Manuel, Javier, and Sergio) from Galicia. I love seeing how Baiona or A Coruña can be attractive places for high-tech talent. We are Galifornia!

February 28, 2017

After the last blog categories reorganization, I realized that I talk less about what I do and more about what others do. That makes sense, as this blog is part of my learning process and I’m always looking around for ways to improve myself. Yet, I’d like to start writing more about the little things I do. Writing helps me reflect upon the how, so eventually I’ll learn more about my thought processes. These are likely to be very small things.

sum-csv

sum-csv is a small utility I built to help me crunch some statistics I was working with. I had a complete dataset in a CSV file, but what I wanted was an ordered list of the number of times something happened.

Original CSV:

What I wanted:

A data transformation

This is a small task – my old self whispered. Yet, instead of opening the editor and starting to code right away, the first thing I did was draw things. I am a visual person, and drawing helps me gain understanding. The algorithm I came up with was a succession of mathematical transformations, which is to say:

transpose the original matrix

eliminate the rows I was not interested in

for each row, add up all the numerical values (from column 1 onwards) to calculate the total

sort the rows by the total

Now, I was prepared to write some code. Amusingly, the gist of it is almost pure English:

Reflection

Creating production-ready code took me four times the effort of devising the initial solution: finding good, well-tested libraries for the operations not built into the language (such as reading a CSV into a matrix, or transposing the matrix itself); writing the tests that let me sleep well at night; distributing the code in a way that is findable (GitHub/npm) and usable by others and my future self; and, finally, actually writing the code.

I am not always able to write code as a series of mathematical transformations, but I find pleasure when I do: it is much easier to convince yourself that the code is conceptually correct. I also like how the code embodies some of the ideas I'm most interested in lately, such as how a better vocabulary helps you make things simpler.

February 19, 2017

map and friends are more precise, sophisticated ways to talk about consistent patterns in data manipulation. Using them over for is analogous to using the word “cake” instead of “the kind of food that you make by whipping egg whites and maybe adding sugar”.

Interestingly, you can eventually add new layers of categorization on top of established ones: just as butter cakes constitute a specific family of cakes, one could say that pluck is a specialization of map.
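As a sketch of that layering (the record shape and cake data are invented for illustration), a generic Map with a Pluck built on top of it might look like this in Go:

```go
package main

import "fmt"

// Map applies f to every element of xs — the general pattern.
func Map[T, U any](xs []T, f func(T) U) []U {
	out := make([]U, len(xs))
	for i, x := range xs {
		out[i] = f(x)
	}
	return out
}

// Pluck extracts one key from each record: a specialization of Map,
// the way "butter cake" is a specialization of "cake".
func Pluck[V any](records []map[string]V, key string) []V {
	return Map(records, func(r map[string]V) V { return r[key] })
}

func main() {
	cakes := []map[string]string{
		{"name": "butter", "base": "butter"},
		{"name": "sponge", "base": "egg whites"},
	}
	fmt.Println(Pluck(cakes, "name")) // prints [butter sponge]
}
```

The point is the vocabulary: once Map exists, Pluck is one line that says exactly what it does.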

At its core, I'd say this is a wonderful love story with a positive and naïve message – just what we need right now. That alone would be enough to recommend it. At the same time, it is not what you would expect from a Hollywood film: it is sad in many fundamental ways, which makes it a modern story about love, life, and personal growth. And it has an epic soundtrack.

January 26, 2017

The Hacker Ethic and the Spirit of the Information Age, by Pekka Himanen, was one of those essays that had a big influence on my younger self. It resonated with how I felt about a lot of things: discovery and learning as a fundamental part of who I am, and work attached to meaning, are two that I remember now. It was the first time I thought about how we divide life into chunks of time, more or less isolated.

One thing I didn't realize at the time was who this essay was written for: a generation of individuals, with friends and family, but who see themselves as the unit of life on Earth. Given that the book defined the spirit that has driven one of the major social and cultural changes of the human race, this is no minor issue. Like Jack Sparrow in Pirates of the Caribbean, this generation had a mindset of fighting for their liberty against companies and governments. As individuals.

That attitude towards life, I believe, is one of the reasons why the products we create and consume are tailored for the individual and not for communities: we have a personal music account, a personal pictures account, a personal mail account, etc. Solutions that take the group into account are rare and mostly exist for production-driven organizations (companies or volunteer-based bodies), but less so for leisure or consumption communities.

As we grew older and our communities moved online, they asked us for help. So we created the "pay the service for another user" feature as a way to cover our relatives' accounts, the family plans in our music and video stores, or the network setup for our publishing platforms. These changes reflect a fundamental and more humane way to interact with others through technology. We are still in the early stages, but I expect this pattern to become stronger, because one of its drivers is that we have grown older the same way any other generation did in the past: realizing that we are weak as individuals, and that our natural state is to band together in groups. Jack Sparrow struggled with this as well; he found out that the Black Pearl can easily be lost if you are on your own, and that you need a crew if you want to navigate the ocean in freedom.

I, for one, look forward to a more community-oriented future. What specific shapes it will take is something that only we can invent, so let's do it.

January 20, 2017

I’ve been working from home for more than 3 years now, and my setup has gone through several iterations – the current one is i4.

After joining Automattic, I was encouraged to think about my office setup. The company sponsors the kind of high-quality office perks you would expect from a company at this level, and I took that opportunity to upgrade mine in ways I had already been thinking about. The fact that you work not in their offices but in your home adds a different feeling to it. Although I appreciate the company's efforts and perks, I'd like to stay frugal within comfortable limits, so I didn't get anything I wouldn't buy with my own money. I think of my office setup as a gift to my older self – I hope he'll be proud of what his younger self is doing for him.

For the past two months, I've been experimenting with that idea to learn what works best for me. I've used three main positions – traditional seating, saddle seating, and standing – and a lot of other crazy ones. What I've found is that I change positions through the day as my body asks for it, but I mainly use the saddle position (most of the time, and especially when I need to write) and standing (for consuming information). Traditional seating feels a bit unnatural to me now, although that may be a side effect of using the Capisco, which is tailored for other postures. I also have a more traditional chair at home, but I rarely use it.

This is i4. This setup fits me so well that I cannot imagine what i5 will look like yet.

November 28, 2016

This is part of the invitation I was sent to join Automattic. I accepted. Today marks my first day as an automattician, and I am excited to become part of this family. My day-to-day will be filled with the joys and woes of programming but under Automattic’s creed, I feel safe, motivated and happy to do my best. Fun times ahead!

October 14, 2016

Most programmers who have only casually used PHP know two things about it: that it is a bad language, which they would never use if given the choice; and that some of the most extraordinarily successful projects in history use it. This is not quite a contradiction, but it should make us curious. Did Facebook, Wikipedia, WordPress, Etsy, Baidu, Box, and more recently Slack all succeed in spite of using PHP? Would they all have been better off expressing their application in Ruby? Erlang? Haskell?

September 29, 2016

While it might look like an overnight success in hindsight, the story of React is actually a great example of how new ideas often need to go through several rounds of refinement, iteration, and course correction over a long period of time before reaching their full potential.