plosquare.com blog

When you run a SELECT statement through the MySQL client library (or a driver implemented on top of it, such as Perl's DBD::mysql or PHP's mysql_query), by default the client library buffers the complete query result and hands rows to your program only after the entire query has finished. This is undesirable if the result set is potentially huge and you want to obtain rows as soon as possible. Fortunately, there is a documented solution to this problem - you can configure your client library to use the API function mysql_use_result rather than mysql_store_result (which is equivalent to the --quick option in the mysql command-line client).

With that option in place, let's say you wish to implement a sort of client-side LIMIT, based on some required client-side filtering of the sorted result set, and discard the rest of the rows as soon as you have found your matches. Unfortunately, your luck ends here, because mysql_use_result will not easily satisfy this scenario. When you attempt to finish/release the statement handle after reading a couple of rows (as you are required to do, e.g. by calling $sth->finish if you use Perl's DBI), under the hood the function mysql_free_result is called. The documentation makes it appear harmless, saying that it "frees the memory". In reality this function (or more precisely, cli_flush_use_result called from within it) does a lot more: it reads and discards all the remaining rows of the result set, and that takes ages given a big one. Your client may remain blocked for a minute or longer before it is given an opportunity to execute the next statement, all while causing high CPU usage on the server. You really wanted to simply abort the query, but with MySQL you can't.
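For illustration, here is a minimal Python sketch of the early-exit pattern described above. It assumes the MySQLdb driver, whose MySQLdb.cursors.SSCursor is the unbuffered cursor corresponding to mysql_use_result; take_matching and all the table/column names in the comments are hypothetical:

```python
def take_matching(rows, predicate, limit):
    """Client-side LIMIT: collect up to `limit` rows satisfying `predicate`,
    then stop reading. With a real unbuffered cursor, abandoning the cursor
    at this point is exactly what triggers the expensive drain in
    mysql_free_result described above."""
    matches = []
    for row in rows:
        if predicate(row):
            matches.append(row)
            if len(matches) == limit:
                break
    return matches

# With MySQLdb this would be driven by an unbuffered (streaming) cursor:
#   import MySQLdb, MySQLdb.cursors
#   conn = MySQLdb.connect(host="...", db="...",
#                          cursorclass=MySQLdb.cursors.SSCursor)
#   cur = conn.cursor()
#   cur.execute("SELECT id, payload FROM big_table ORDER BY id")
#   hits = take_matching(cur, lambda r: r[1].startswith("x"), 10)
#   cur.close()   # <- this is where the long cli_flush_use_result stall hits

print(take_matching(range(100), lambda r: r % 7 == 0, 3))  # [0, 7, 14]
```

The helper itself is plain iteration and works on any iterable; the MySQL-specific pain is that closing the cursor after the break is not cheap.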

"Design" is a word which seems to be used differently by distinct groups of people. In software engineering, the word has an almost neutral meaning - something along the lines of "the way the bits and pieces are put together" - recognizing that there are many such ways and that they may be told apart based on some definable characteristics. Of course, there is also talk of "good" and "bad" design, implying that in particular contexts certain arrangements may let you reap benefits while others tend to result in drawbacks.

That stronger flavor of "design" is also found in many other disciplines such as law, manufacturing of tools and appliances, usability engineering, game design, and marketing. Here the emphasis is put on intentionally influencing the behavior of users or participants in some favorable way. (The expected favorable effects may occur either for the subjects themselves or for someone else, e.g. the party who hired the designer; if a zero-sum game detrimental to the subjects of design is involved, one should more honestly speak of "manipulation" instead.) One may also define "design" in a negative way and point out that it comprises all efforts to prevent, well in advance, undesired behaviors or events from occurring during operation of some system. Either way, the first step of design would be to explicitly identify the actors, the utility function that we want to optimize, and the purported causal relationships that connect designed elements to the actors' behavior and their behavior to the optimized function. These causal claims had better be supported by empirical data! Thus, design involves meticulous planning and hypothesizing based on experience and facts. It also involves testing to verify the hypotheses. In short, it benefits from a scientific approach.

However, note that there's also a peculiar understanding of "design" proliferated by practitioners of the "visual arts" - as found in the phrase "graphic design" and also, worryingly often, in "web design". This kind of design is indeed closer to "art", "aesthetics" and the satisfaction of creative whims than to purposeful control toward achieving well-defined goals. Many a "web designer" is someone interested in (and hopefully proficient in) creating visually pleasing layouts of graphics and type. I do not wish to criticize the merits of such work in general. Printing and typography are both well-established and non-trivial crafts. I want to contrast them with the action-oriented forms of design mentioned earlier. There is not much action involved in looking at a finely printed book page.

The foremost goal of an artist is not to optimize the performance of a particular task based on hard empirical data, or to ensure that someone's sales targets are met. Rather, it is to evoke certain feelings, thoughts and emotions, to impress the viewer in a particular way, to visually communicate their own mood or state of mind, while perhaps following some specific self-consistent style. Even though emotions doubtlessly drive human behavior (possibly much more so than reason, on average), and even though you don't want your audience to experience negative feelings or be annoyed by the botched aesthetics of your product, the causal connection of art-induced moods to identifiable sequences of actions and quantitatively measurable goals is notoriously hard to establish. Usually it is not the "overall impression" but specific actions induced in specific contexts that matter, the overall impression and "success" emerging as a sum of hundreds of subtle details done right. Not so for many artists who "succeed" in their own eyes but still fail miserably on any other objective account, including on account of "design" in the stricter senses of the word elucidated above. If artists called themselves doctors, most of us would prefer to call them quacks.

In conclusion, when you decide to obtain advice from, or the services of, a "designer", do pay attention to what they perceive as their job definition, especially to the extent to which they recognize the mundane, utilitarian aspects. Failing that, you might find yourself in the place of an art patron rather than a recipient of effective business services. Your return on investment may then amount to subjective satisfaction predicated on your own suspension of disbelief and submission to the artist's superior sense of authority. Don't let them fool you.

Today Google demonstrates to us how not to update critical platform software: the latest stable version 35.0.1916.114 contains "a number of fixes and improvements", one of them being dropped support for Oracle's Java plug-in on Linux, as a consequence of discontinuing support for the prerequisite NPAPI (one wonders whether it has anything to do with the ongoing Google vs. Oracle case).

After installing the new version there is absolutely no notification to the user that this functionality, which is essential for some applications (e.g. online banking and some enterprise applications), is no longer available. Although the plug-in symlink is present in the correct location and it wouldn't be difficult to alert about the incompatibility, the plug-in simply doesn't appear in the about:plugins list. Hundreds of hours will surely be wasted, in sum, by users trying to troubleshoot this issue on their own and following outdated instructions. As of 2014-05-21, there is no replacement Java plug-in for Chrome 35 on Linux available.

If you find yourself stuck in an initramfs Busybox shell, trying to manually mount the root filesystem (using a command such as mount /dev/mapper/yourrootfs /root) and getting a confusing "No such file or directory" message in response, then check whether the appropriate kernel module for your filesystem type is loaded (i.e. run "lsmod"). Something as simple as "modprobe ext2" before the mount command might be the entire solution you need (it was for me today, in the context of an interrupted Debian dist-upgrade from squeeze to wheezy)!

Software engineering, just like any kind of engineering, is a discipline which involves many informed, planned compromises—a surprising amount of "economizing".

The task of optimizing software—that is, minimizing its resource requirements within an effort budget allocated toward this purpose—is a classical example in which the economic realities of software development were recognized early by the pioneers of our discipline. Thus we have Knuth's adage that "premature optimization is the root of all evil". Reasonably, as performance engineers we should focus our efforts on those areas that (after measurement) consume the most resources, rather than waste time following personal hunches about imagined "best practices". One might describe this process as "top-down", or strategic, optimization. First use measurement tools to identify the critical areas and the big decisions that affect performance (such as algorithmic complexity), then fix the identified problems.
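As a toy illustration of measuring before fixing, here is a sketch using Python's standard cProfile; slow_concat is a made-up hot spot, not anything from a real codebase:

```python
import cProfile
import io
import pstats

def slow_concat(n):
    """A deliberately wasteful function: repeated string concatenation."""
    s = ""
    for i in range(n):
        s += str(i)
    return s

# Measure first: profile the program instead of guessing where time goes.
pr = cProfile.Profile()
pr.enable()
slow_concat(10000)
pr.disable()

out = io.StringIO()
pstats.Stats(pr, stream=out).sort_stats("cumulative").print_stats(5)
print(out.getvalue())  # the top entries are the candidates worth fixing
```

Only after the profile names the culprit does it make economic sense to spend effort rewriting it.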

The top-down optimization approach works very well in most cases, but sometimes it fails. That is because some performance problems are incremental and combinatorial in nature. Rather than being located in only a few critical areas of code - low-hanging fruit for a trained performance engineer - they turn out to be pervasive issues, where performance suffers "death by a thousand cuts": not due to some grand omissions, but due to universal, cumulative neglect.

Recognizing that such systemic problems do exist, and that they are a lot more difficult to deal with than local bottlenecks, should make us think again about the dangers of the popular, easy-going "make it work, make it right, make it fast" approach to software engineering (or should we say, software prototyping?).

In other industries, serious opposing viewpoints already exist that treat quality as an emergent property of products. Historically, mastery in any craft has always involved paying relentless attention to every detail throughout the process of crafting and construction, as opposed to post hoc economizing and extinguishing only the most critical problems. In that respect, recall-and-repair, another form of "late improvement", is a practice more typical of modern mass production. Not by coincidence, mass production works best where discerning patrons or purchasers are also replaced by relatively hapless consumers onto whom manufacturers may shift some of their production risks in exchange for a lower price (or higher profit). High-end, highly customized products seldom benefit from such conditions.

Incidentally, paying attention to detail is also something that is psychologically natural for individuals who understand and love developing software, and a source of personal pride and motivation (sometimes it also degenerates into hair-splitting; as always, you can have too much of a good thing). At the heart of what I call bottom-up optimization approaches is a conviction that a flawless final product can only result from combining flawless components within a flawless process - a core belief that superior quality pays off in myriad ways, not excluding the immaterial rewards of a feeling of accomplishment. It is a conviction which most software engineers accept quite readily when it comes to the functional correctness of their products. Nobody would use a buggy standard library or consider fixing its bugs either "premature" or "evil". However, possibly due to the overemphasis and misinterpretation of Knuth's wise statement by some less capable minds, the same level of attention to detail is rarely spent on designing software performance (except in systems in which hardware costs cannot be shifted onto users or poor performance translates into incorrect execution; or in systems facing such strong competition that inadequate performance translates into lost sales or operational losses on hardware).

So what should we make of those divergent views on early and late, bottom-up and top-down optimization? I think the key to striking the correct balance ultimately lies in understanding how value is created and destroyed within the software development process. It is a strategic mistake to focus your optimization efforts on those areas which are easy to measure (e.g. late profiling) while neglecting the difficult ones. On the other hand, it is a tactical mistake to waste effort on improvements of provably little consequence, especially while disregarding clear opportunity costs. But, as in any discipline, it takes some years of practice to reliably distinguish which of those mistakes you are making (a point on which even experts may disagree). Always optimize your bottom line—but understand first how many numbers stand above it and how they are connected.

I took some quick notes comparing procedural and functional programming approaches after listening to Brian Lonsdorf's talk about Underscore.js. Currently, I'm not very enthusiastic about pure functional programming. I'm leaning towards the mainstream opinion which banishes it into the role of syntactic sugar in largely procedural and object-oriented libraries, given the huge amount of existing non-functional code out there.

In procedural programming you invoke functions with arguments. In functional programming you apply functions to arguments (or more generally, to some data structures).

In procedural programming there is a strict time and space separation between defining functions and using (invoking) them. In functional programming it's very common to not just pass around functions as arguments or return values, but also to define new functions by invoking other functions. Established patterns for function generation exist.

Perhaps the most strikingly confusing assumption, when looking at FP code through the eyes of a procedural programmer, is that an expression f(something) represents a runtime invocation of f which returns some data structure. This assumption, universally true in the procedural world, does not hold in functional programming, where such expressions commonly represent functions (and are possibly even subject to static optimizations by compilers). Unfortunately, no naming conventions seem to exist that would signify whether a function returns another function or a data structure (which seems to me like a rather crucial distinction for aiding code comprehension). Tip: if you see the expression f(x) in functional programming code, try comprehending the whole (and not just f) as a function name.
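A tiny Python sketch of exactly this situation - an expression f(something) that denotes a function rather than data (multiplier and scale are made-up names):

```python
def multiplier(n):
    """Calling multiplier(3) does not compute anything with 3 yet;
    it returns a new function that remembers n."""
    def scale(x):
        return n * x
    return scale

triple = multiplier(3)   # multiplier(3) denotes a function, not a number
print(triple(7))         # 21
```

Reading "multiplier(3)" as a single function name ("triple") is the comprehension trick suggested above.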

In procedural programming it's usual that functions are uniquely named and static (they aren't created or destroyed at runtime). In functional programming anonymous temporary functions are common.

One common pattern found in functional programming is creating a new function based on another by fixing the first few arguments to constant values ("partial function application"). Argument lists are intentionally designed to allow this. The rule of thumb is to place the more variable arguments later in an argument list. More specifically, arguments that represent data to be processed should appear after those that represent functions to be applied to the data. This lets you specialize more general functions by replacing their function arguments with more concrete functions of your own.
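A sketch of this convention in Python, where functools.partial does the argument fixing (transform and doubled_all are hypothetical names):

```python
from functools import partial

def transform(fn, items):
    """General function: the function argument comes first, the data last."""
    return [fn(x) for x in items]

# Because the data argument is last, we can specialize by fixing `fn`:
doubled_all = partial(transform, lambda x: 2 * x)

print(doubled_all([1, 2, 3]))   # [2, 4, 6]
```

Had the argument order been (items, fn), the same one-liner specialization would not have been possible.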

A programming technique that supports the argument-fixing trick is known as "currying". Curry is an operation that can be (mechanically) applied to any function with multiple arguments. It returns a semantically equivalent function of the first argument, which in turn returns a function of the second argument, and so on. For example, currying a function of three arguments f(x,y,z) yields f', which you can then apply as follows: f'(x)(y)(z). You can now use this f' to easily create a function of z alone, with X and Y fixed as constants - simply by invoking f'(X)(Y).
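A minimal Python sketch of such a mechanical curry for three-argument functions (curry3 and volume are made-up names):

```python
def curry3(f):
    """Mechanically curry a three-argument function:
    curry3(f)(x)(y)(z) == f(x, y, z)."""
    return lambda x: lambda y: lambda z: f(x, y, z)

def volume(x, y, z):
    return x * y * z

v = curry3(volume)
print(v(2)(3)(4))        # 24, same as volume(2, 3, 4)

base_area = v(2)(3)      # fixes X=2 and Y=3; a function of z alone
print(base_area(5))      # 30
```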

Another common pattern for creating new functions from others is composition. You use composition if you want a function which applies some function to the result of another (a nested function invocation, in procedural terms). More generally, you can compose n functions (i.e. multiple levels of nested invocation). compose(f,g,h) will give you a new function which will produce the same result as f(g(h(...))) when applied.
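A possible Python sketch of such a compose, built as a reduction over two-way composition (the helper is hypothetical, not a standard library function):

```python
from functools import reduce

def compose(*fns):
    """compose(f, g, h)(x) == f(g(h(x))): rightmost function applied first."""
    return reduce(lambda f, g: lambda x: f(g(x)), fns)

inc = lambda x: x + 1
dbl = lambda x: 2 * x

print(compose(inc, dbl)(10))        # inc(dbl(10)) == 21
print(compose(inc, dbl, inc)(10))   # inc(dbl(inc(10))) == 23
```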

The 'map' function, commonly found in functional programming languages as an idiom for processing a list of elements by applying some argument function, can be usefully generalized by allowing it to process not just lists but also other objects known as "functors". To be supported as an argument for map, such an object must provide its own implementation of 'map', which is then used in place of the default one. Useful idioms that can be implemented this way include Maybe(x) and Either(x,y) from Haskell. The Maybe object applies the mapped function to its wrapped value only if that value is not null. The Either object applies the mapped function to whichever of its two values is present.
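A rough Python approximation of the Maybe idiom and a generalized map (all names here are made up, and None stands in for Haskell's Nothing):

```python
class Maybe:
    """Minimal Maybe-style functor: `map` applies the function only if
    the wrapped value is not None, otherwise propagates the None."""
    def __init__(self, value):
        self.value = value

    def map(self, fn):
        if self.value is None:
            return Maybe(None)
        return Maybe(fn(self.value))

def fmap(fn, functor):
    """Generalized map: defer to the object's own `map` implementation,
    falling back to the default list behaviour otherwise."""
    if hasattr(functor, "map"):
        return functor.map(fn)
    return [fn(x) for x in functor]

print(fmap(str.upper, Maybe("hi")).value)    # HI
print(fmap(str.upper, Maybe(None)).value)    # None
print(fmap(lambda x: x + 1, [1, 2, 3]))      # [2, 3, 4]
```

The point is that the caller of fmap never checks for null; the Maybe functor's own map carries that logic.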

MySQL 5.6 introduces a new feature - you can talk to the InnoDB engine using the memcached protocol, which is supposed to be faster for simple key-based get/set operations. For an introduction, refer to this MySQL via Memcache tutorial and the MySQL manual. (The basic idea is that you create a special database in which you describe the tables that should be "exported" for access through the memcached plugin, and then enable the plugin itself.)

I didn't find any ballpark figures about how much improvement the new memcached backend might actually offer to applications, so I ran a few benchmarks myself. I compared it to a straightforward way of using an InnoDB table as a key-value store via SQL (either with DELETE/INSERT or REPLACE) and also (just for fun) against a real memcached server (which, of course, unlike MySQL offers no persistence). The specific versions I used were InnoDB 5.6.14 and memcached 1.4.5. The benchmark client wrote about 270,000 key-value pairs in random order to the test.demo_test table shipped with the plugin as an example. Each key was 10 bytes long and each value 176 bytes. I also ran another test with 4 such clients running in parallel (on a quad-core AMD Phenom 9650 machine).

I also tested two ways of setting the keys - either with one operation per key or with the batch operation set_multi (this is something offered by the Cache::Memcached::Fast client, not part of the memcached protocol, as far as I'm aware).

Note that the DELETE/INSERT approach is not recommended as it is not only slow, but also causes errors and/or deadlocked transactions on collisions (I didn't bother implementing transaction retry).

                           1 client                4 clients
WRITE                      ms/op     ops/client/s  ms/op     ops/client/s  total ops/s
DELETE+INSERT (not safe)   0.47875   2089          0.78324   1277          5107
REPLACE                    0.28412   3520          0.55597   1799          7195
InnoDB set                 0.26271   3806          0.51304   1949          7797
InnoDB set_multi           0.12020   8319          0.47455   2107          8429
memcached set              0.12068   8286          0.11235   8900          35602
memcached set_multi        0.04038   24766         0.14384   6952          27809

READ
SELECT                     0.16261   6150          0.13342   7495          29981
InnoDB get                 0.09920   10080         0.07527   13286         53142
memcached get              0.06140   16288         0.03447   29010         116040

My conclusion is that the new memcached backend can improve MySQL read performance quite a lot (77% more reads per second) and write performance a little (8%) compared to the pre-5.6 approach using SELECT/REPLACE. The percentage speedup for reads may seem big, but it is negligible in absolute terms (retrieving single rows through a key lookup was already very fast). So switching to the new backend only makes sense for applications that experience really heavy read loads; it won't improve the response time of an average application which is mostly sitting around idle. And of course, if you need neither persistence nor SQL access to your cache tables (which is nice for performing bulk updates/deletes as the need arises), then good old memcached is a clear winner.