
I finished my first patch with Rust! \o/ I don’t feel especially proud of it, but I’m happy because it is my first contribution to a real project. It has a lot of flaws and technical debt, but this is only a prototype and a sandbox for Rust exploration.

In short, the experience is excitement followed by a deep sense of uncertainty, then frustration with a small pinch of enlightenment, followed at the end by a rewarding sensation.

You can see 4 cycles here, each with stages of excitement, uncertainty, frustration and enlightenment; finally, the step upwards depicts the reward.

It is exciting because it introduces a lot of new syntax (for me, a new, cool syntax, different from JavaScript or Python) with lots of annotations and symbols everywhere. I feel like I’m engineering something precious and delicate, but at the same time I feel confident in its robustness because I’m adding tons of metadata for the compiler. Furthermore, I’ve followed some tutorials and the compiler errors seem very clear; they are even helping and advising me. I’ve read about borrowing, lifetimes and mutability, but in the tutorials those never seemed to be a problem.

Then comes uncertainty. I know what to do, but I don’t know how to write it. When learning a new language I lack two things: a deep knowledge of the API, and idioms. And idioms are the most important feature of a language, because in the long term you prefer to read idioms. These are patterns you immediately associate with well-known, repetitive behaviours. For instance, these three snippets do the same thing:
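The original snippets did not survive the page, so here is a sketch of three such variants, assuming the example was about falling back to a default when an `Option` is `None` (the function names are mine, for illustration):

```rust
// Three equivalent ways to default an Option. Option 2 is the
// C/JavaScript-flavoured one; option 3 leans on the Option API.

fn greet_match(name: Option<&str>) -> String {
    // Option 1: exhaustive pattern matching
    match name {
        Some(n) => n.to_string(),
        None => "anonymous".to_string(),
    }
}

fn greet_c_style(name: Option<&str>) -> String {
    // Option 2: explicit check, as you would write it in C or JavaScript
    if name.is_some() {
        name.unwrap().to_string()
    } else {
        "anonymous".to_string()
    }
}

fn greet_api(name: Option<&str>) -> String {
    // Option 3: the Option API does the check for you
    name.unwrap_or("anonymous").to_string()
}

fn main() {
    assert_eq!(greet_match(None), "anonymous");
    assert_eq!(greet_c_style(Some("Ada")), "Ada");
    assert_eq!(greet_api(None), "anonymous");
}
```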

My first approach was option 2. It is very similar to C or JavaScript. But then I remembered the people who write Python as if it were C. I hate them. Probably I hate you. Naaaah… just joking. But as a Python and JavaScript teacher I try to explain why idioms matter. So I turned to option 1 and, only for this post, after investigating the Option API, I discovered option 3. Currently I don’t know which is the most idiomatic, but I would bet on the last option.

Once you are happy with your third refactor, you finally run the tests and they fail… No, wait! That happens, but much later. First come the compiler, the type checker and the borrow checker. Let me illustrate the problem with an example. Suppose you want to test the former option 2 in play Rust… (no, not the f*ck*ng game — I have not played it yet but I already hate it), the Rust(-lang) playground. You want to enclose it in a function, then call it. Don’t run it, just see if it compiles.

Read the errors. Try again… Seriously, read the errors. You’ll discover that they make a lot of sense: Option must be an Option of some type, and so must Result (but Result needs two types, one for valid values and another for errors).

Well, now it turns out string literals in Rust are not String but &'static str. What does that mean? It means reading the docs again. Fortunately, the compiler gives you enough information to solve the errors. Uncertainty again: I could convert this &str into a real String (the “expected” type in the errors), but I’m happy with a Result of type &str (the “found” type in the errors). So I modify the signature of the function and…

What the hell!? Ok, now I need a lifetime specifier. Lifetime specifier…, lifetime specifier… where are those bl**d* specifiers!? Oh, here! Well, you get the idea. This time the problem was that I was missing a lifetime specifier, as Rust needs to know how long this reference is going to live to check that the returned value lives long enough not to leave you with dangling pointers. Do you understand? Well, probably not, but I do (or at least, I did), which leads me to the next point: enlightenment!

Trying to figure out which kind of lifetime I wanted to assign to the returned structure, I thought: I want these errors to be immortal strings (which translates into 'static lifetimes) in order to reuse them instead of producing new strings. Suddenly, for a fraction of time, I understood everything: lifetimes, specifiers, elision and immortality (well… staticness), and then… it was gone. This is what I call enlightenment: for no more than a fraction of a second, you understand what you are doing in this new context which is Rust.
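A minimal sketch of where that journey ends (my own reconstruction, not the post’s original code): a function whose error branch returns a string literal, so the Result can name the 'static lifetime explicitly.

```rust
// Reconstruction of the end state described above: string literals
// live for the whole program, so their type is &'static str and the
// same error strings are reused instead of allocated each time.

fn check(flag: bool) -> Result<i32, &'static str> {
    if flag {
        Ok(42)
    } else {
        // No new String is produced here; the literal is immortal.
        Err("something went wrong")
    }
}

fn main() {
    assert_eq!(check(true), Ok(42));
    assert_eq!(check(false), Err("something went wrong"));
}
```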

Similar to what astronauts call the overview effect, this is what you experience with Rust from time to time. When you are not fearing the huge fall below you, that is.

Paradoxically, if I had read the errors from the second try more carefully, I would have seen that the expected type was &'static str, which would have saved me the third try — but it would also have prevented this moment of truth from happening.

The remaining sensation is rewarding. Bit by bit, I’ll adapt my mind to these new memory and execution models. Reading articles, answers and books, I will learn more about the internals of Rust’s data and execution models while I borrow some idioms along the way.

Of course, once the code compiles, the tests fail and the cycle repeats. The hardest parts are frustration and uncertainty (what else?), but that is completely normal. Inside your comfort zone you can think of yourself as some kind of ninja or chosen one. Modesty aside, I think this of myself when working with JavaScript or Python, but out in the wild, failure is normal. During these days I even realised that my way of debugging and developing with dynamic languages is not suitable for this strongly typed language! But if it were easy, where would the fun be?

Look at those bullet holes in Neo’s shirt: failure is natural before enlightenment. Mr. Anderson learned it the hard way.


I wrote this quick remap.js plug-in for require.js. Personally, I bundle my JS libraries with UMD, but I love require.js for unit testing, and with remap.js you can easily rewire your dependencies, providing true isolation.

This is an early release and it may be buggy, but you’ll get the idea. Enjoy!
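remap.js itself is not shown here, but for context, this is a sketch of the stock RequireJS `map` configuration that this kind of dependency rewiring builds on (the module names `store`, `mocks/store` and `component` are hypothetical):

```javascript
// Built-in RequireJS `map` config: for every requesting module ('*'),
// whenever 'store' is asked for, hand back 'mocks/store' instead.
// This is the standard mechanism for swapping dependencies in tests.
require.config({
  map: {
    '*': {
      'store': 'mocks/store'
    }
  }
});

require(['component'], function (component) {
  // 'component' now receives the mocked 'store' as its dependency,
  // so it runs in true isolation from the real implementation.
});
```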


This is the second post in a series of articles about how the several technologies that make up the New Gaia Architecture (NGA) fit together to speed up the Web.

In the first chapter we focused on performance and resource efficiency, and we realised there is a potential conflict between the multi-page approach to web applications, where each view is isolated in its own (non-iframe) document, and the need to keep all the essential logic & data in memory to avoid unnecessary delays and meet the time constraints of an optimal user experience.

In this chapter I will explore web workers in their several flavours — dedicated workers, shared workers and service workers — and how they can be combined to beat some of these delays. As a reminder, here is the breakdown from the previous episode:


I have spent most of this year working on Service Workers for the New Gaia Architecture (NGA) that Mozilla is preparing to release with Firefox OS 2.5 & 3.0. This is the first in a series of articles about the effort we are putting into evolving Gaia while contributing to best programming practices for the modern mobile Web.

More than an architecture, NGA is a set of recommendations to reach specific goals, good not only for Firefox OS applications but for any modern web application: offline availability, resource efficiency, performance and continuity. Different technologies exist (and others are coming) to help reach each target.

However, there is some confusion about how all these technologies fit together. Even inside the Firefox OS team, some see pinning apps (a mechanism to keep an entire site cached forever, not to be confused with the concept of pinning the web) as overlapping with Offline Cache. Other people see the render store as unnecessary and overlapping with the prerender technology. If this is your case, or you simply did not know about these concepts, continue reading — you deserve an explanation first.

As a meta-programming lover, I’m eager to use proxies for some fancy tricks. The other day I had the chance to use proxies and property getters while implementing an EventEmitter trait. I want to share the implementation to illustrate the differences between intercepting `get` semantics and using a `getter`.

Yep, the getter is gratuitous; it could simply be a computed property handled by the proxy, but let’s do it this way for learning purposes.

The function `newEmitter()` creates an object ready to be mixed into an object prototype to use it as an event emitter. The methods `on()`, `off()` and `emit()` are trivially simple thanks to a couple of assumptions:

This allows us to avoid checks on these properties and keeps the code simple.

Notice that both mechanisms, the getter and the get trap in the proxy, solve the same problem: providing a default value for some properties. The difference is the nature of the knowledge we have about those properties. In the case of the getter, we want `__handlers` to default to a handler set, so the getter overrides itself with an empty handler set. We know specifically which property we want to access in a special way, and we implement the behaviour locally.

In the case of the get trap, we are altering not a specific property of an object but all its present and future properties. Actually, we are defining a new kind of object where all properties must be sets, so we are changing the whole semantics of the object: as soon as we access a property we obtain a set, and if the property does not exist yet, we obtain an empty set. We implement this behaviour globally, for all present and future properties.
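The original listing did not survive the page, so here is a hedged reconstruction of the two techniques just described — the name `newEmitter` follows the post, the rest is my sketch:

```javascript
// Reconstruction: an emitter whose `__handlers` getter replaces itself
// on first access (local knowledge), backed by a handler map whose
// proxy get trap defaults *every* property to a Set (global semantics).

function newEmitter() {
  return {
    on(type, handler) {
      this.__handlers[type].add(handler);
    },
    off(type, handler) {
      this.__handlers[type].delete(handler);
    },
    emit(type, ...args) {
      for (const handler of this.__handlers[type]) {
        handler(...args);
      }
    },
    // Getter: we know the one property we want to default. On first
    // access it overrides itself with a real handler map.
    get __handlers() {
      const handlers = newHandlerMap();
      Object.defineProperty(this, '__handlers', { value: handlers });
      return handlers;
    }
  };
}

// Get trap: any property accessed on this object, present or future,
// yields a Set, created empty on demand.
function newHandlerMap() {
  return new Proxy({}, {
    get(target, property) {
      if (!(property in target)) {
        target[property] = new Set();
      }
      return target[property];
    }
  });
}

const emitter = newEmitter();
let received = null;
emitter.on('ping', value => { received = value; });
emitter.emit('ping', 42);
console.log(received); // 42
```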

You use getters to compute properties, while you use get traps to alter semantics. Do you know how to calculate a specific attribute? Use a getter for that attribute. Do you want to change what accessing a property means, in general? Use the get trap.

Oh! And please, do not use proxies for controlling access or for extending or restricting APIs — you already have inheritance and composition for that!

Proxies are really powerful. I hope to show you more uses soon.


The other day I was invited to participate in the Nación Lumpen podcast on the flame-prone topic of Dynamic Typing vs. Static Typing. Luis Osa and I were there to argue in favour of dynamic typing, from the Python and JavaScript perspectives respectively.

Lots of things were said during the conversation and, in my opinion, lots of reasons arose that make statically typed languages shine over dynamic ones. After 700 km of highway, I’ve managed to sort out my ideas and conclusions about the podcast. This is a long post, so be prepared!

Types are meaning

From the bare-metal perspective, types are nothing. The pure hardware executing the programs in our devices understands only memory addresses and data sizes. It does not perform any type checking before running the assembly code; once the code is loaded, it is on its own.

You start introducing types to mean something. Consider simple types in C: they are all about data sizes (int, word, byte), formats (float, double, pointer) and access (const), but with them you are helping the compiler to create better target code: memory-efficient, faster and safer. In addition, C gives us ways to combine simpler types into complex ones by using #define macros and structured types. C can calculate the size, format and access mode of each new abstraction, preventing us from accessing invalid fields of a record or using values in incorrect places. Moreover, C charges new names with unique meaning (i.e. two structures differing only in their names are actually different types), so we can exploit this feature to create abstract relationships.

This is what I think of when talking about types. Types are (or at least add) meaning. You use types to classify data, to label the properties that some set of values should have, and to establish relationships with other types.