metablog, by Marcel Weiher

<h2>The Science Behind the "Google Manifesto"</h2>
<p><em>August 7, 2017</em></p>
The "<a href="http://gizmodo.com/exclusive-heres-the-full-10-page-anti-diversity-screed-1797564320">Google Diversity Manifesto</a>" has created
quite a bit of controversy. One thing that hasn't helped is that, at least from what I read, Gizmodo stripped the links to the
scientific evidence supporting the basic premises. For me, being at least roughly aware of that research, the whole thing seemed
patently unremarkable; to others, apparently not so much:<p>
<blockquote class="twitter-tweet" data-lang="en"><p lang="en" dir="ltr">Write a doc about how inferior women are, then try to be a hero by offering help to save the *vulnerable* 🤢🤢🤢 Still shaking in anger.</p>&mdash; Jaana B. Dogan 👀 (@rakyll) <a href="https://twitter.com/rakyll/status/893286714961141760">August 4, 2017</a></blockquote> <script async src="//platform.twitter.com/widgets.js" charset="utf-8"></script>
Now I don't think everyone has to agree with what was written, but it would help to at least get to a common understanding. I didn't find
anything in the text that said or even hinted that women were "inferior", but apart from the chance that I just missed it, it also seems
that some of the ideas and concepts presented might at least "feel" that way when stripped of their context.<p>
Ideally, we would get the original with citations and figures, but as a less-than-ideal stopgap, here are some references to the science
that I found.<p>
UPDATE: The original <a href="https://www.documentcloud.org/documents/3914586-Googles-Ideological-Echo-Chamber.html">document</a> has been published.
<h3>Biases</h3>
The text starts with a list of biases that the author says are prevalent in the political left and the political right.
This seems to be taken directly from Jonathan Haidt.
<iframe width="640" height="360" src="https://www.youtube.com/embed/Gatn5ameRr8" frameborder="0" allowfullscreen></iframe>
<br>
<a href="https://www.edge.org/conversation/jonathan_haidt-the-bright-future-of-post-partisan-social-psychology">Text</a>, <a href="http://www.authorstream.com/Presentation/jhaidt-819710-haidt-postpartisan-social-psychology/">Slides</a>
<p>
Article in the New York Times: <a href="https://campaignstops.blogs.nytimes.com/2012/03/17/forget-the-money-follow-the-sacredness/">Forget the Money, Follow the Sacredness</a>
<p>
<h3>Possible non-bias causes of the "gender gap"</h3>
Second, after acknowledging that biases hold people back, the author goes into possible causes of a gender gap in tech that are not bias, and
may even be biological in nature. There he primarily goes into the gender differences in the <a href="https://en.wikipedia.org/wiki/Big_Five_personality_traits">Big Five</a> personality traits.<p>
<img src="https://upload.wikimedia.org/wikipedia/commons/8/86/Bigfive_en.png" alt="" title="" border="0" width="400" height="" />
<p>
As far as I can tell, the empirical solidity of the Big Five and the findings around them are largely undisputed; the criticism listed on the Wikipedia page
is mostly about it not being enough, being "just empirical". One thing to note is that terms like "neuroticism" in this context appear to be
different from their everyday use. So someone with a higher "neuroticism" score is not necessarily less healthy than one with a lower score.
Seeing these terms without that context appears to have stoked a significant part of the anger aimed at the paper and the author.<p>
Jordan Peterson has a <a href="https://www.youtube.com/watch?v=nbzAynn80SU">video</a> on the same topic, and here are some papers that
show cross-cultural (hinting at biological causes) and straight biologically caused gender differences in these personality traits:
<ul>
<li><a href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3149680/">Gender Differences in Personality across the Ten Aspects of the Big Five</a></li>
<li> "Gender differences in personality tend to be larger in gender-egalitarian societies than in gender-inegalitarian societies" – <a href="http://onlinelibrary.wiley.com/doi/10.1111/j.1751-9004.2010.00320.x/abstract">Gender Differences in Personality and Interests: When, Where, and Why?</a>
</li>
<li> Confirmed a few years later: "Previous research suggested that sex differences in personality traits are larger in prosperous, healthy, and egalitarian cultures in which women have more opportunities equal with those of men. In this article, the authors report cross-cultural findings in which this unintuitive result was replicated across samples from 55 nations (N = 17,637)." – <a href="https://www.ncbi.nlm.nih.gov/pubmed/18179326">Why can't a man be more like a woman? Sex differences in Big Five personality traits across 55 cultures. </a>
<li>"...the emergence of sex differences was similar across culture." <a href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4327943/">The Emergence of Sex Differences in Personality Traits in Early Adolescence: A Cross-Sectional, Cross-Cultural Study </a> </li>
<li> <a href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3166361/">Gendered Occupational Interests: Prenatal Androgen Effects on Psychological Orientation to Things Versus People</a> </li>
</ul>
So yes, there are <em>statistical</em> gender differences. None of these say anything about an individual, just like most physical differences: yes,
men are statistically taller than women. Yet there are a lot of women who are taller than a lot of men. Same with the psychological
traits, where the overlap is great and there is also no simple goodness scale attached to the traits.<p>
As a matter of fact, it appears that one reason women choose tech less often than men is that women who have high math ability also tend
to have high verbal ability, whereas men with high math ability tend to have just the high math ability. So women have more options,
and apparently people of either gender with options tend to avoid tech: <a href="https://www.psychologytoday.com/blog/rabble-rouser/201707/why-brilliant-girls-tend-favor-non-stem-careers">Why Brilliant Girls Tend to Favor Non-STEM Careers</a> <p>
Of course, the whole idea that there are <em>no</em> biological reasons for cognitive differences is the <a href="https://en.wikipedia.org/wiki/Tabula_rasa">Blank Slate</a> hypothesis, which was
pretty thoroughly debunked by Steven Pinker in his book of the same title: <a href="https://en.wikipedia.org/wiki/The_Blank_Slate">The Blank Slate</a>.
What's interesting is that he documents the same sort of witch hunt we've seen here. This is not a new phenomenon.<p>
Even more topical, there was also the <a href="https://www.edge.org/event/the-science-of-gender-and-science-pinker-vs-spelke-a-debate">Pinker/Spelke debate</a> "...on the research on mind, brain, and behavior that may be relevant to gender disparities in the sciences, including the studies of bias, discrimination and innate and acquired difference between the sexes."<br>
<iframe width="640" height="360" src="https://www.youtube.com/embed/9bTKRkmwtGY" frameborder="0" allowfullscreen></iframe>
<p>
This covers a lot of the ground alluded to in the "manifesto", with Pinker providing tons and tons of interlocking evidence for there being
gender-specific traits and preferences that explain the gaps we see. Almost more interestingly, he makes a very good case that the opposite thesis <em>makes incorrect predictions</em>.
<p>
There is lots and lots more to this. One of my favorite accessible (and funny!) intros is the Norwegian Documentary <a href="https://gendertrap.wordpress.com/2013/08/04/the-gender-equality-paradox/">The Gender Equality Paradox</a>. The
documentary examines why in Norway, which is consistently at the top of world-wide country rankings for gender equality, professions are much <em>more</em> segregated than in less egalitarian countries, not less.<p>
<h3>Empathy</h3>
I was surprised to find this, but what he writes about is exactly the thesis of Paul Bloom's recent book <a href="http://bostonreview.net/forum/paul-bloom-against-empathy">Against Empathy</a>. (<a href="https://www.amazon.com/Against-Empathy-Case-Rational-Compassion-ebook/dp/B01CY2LCZI/ref=tmm_kin_swatch_0?_encoding=UTF8&qid=&sr=">amazon</a>, <a href="https://www.goodreads.com/book/show/29100194-against-empathy">goodreads</a>, <a href="https://www.nytimes.com/2016/12/06/books/review-against-empathy-paul-bloom.html?_r=0">New York Times</a>).
<blockquote>
Brilliantly argued, urgent and humane, AGAINST EMPATHY shows us that, when it comes to both major policy decisions and the choices we make in our everyday lives, limiting our impulse toward empathy is often the most compassionate choice we can make.
</blockquote>
One small example he gives is that empathy tends to make us give more weight to a single individual being harmed than to many people being harmed, which is
a somewhat absurd outcome when you think about it. There's a lot more; it's a fascinating read that forces you to think and question some
sacred beliefs.
<h3>Microaggressions</h3>
<a href="http://journals.sagepub.com/doi/full/10.1177/1745691616659391">Microaggressions: Strong Claims, Inadequate Evidence</a>:
<blockquote>
I argue that the microaggression research program (MRP) rests on five core premises, namely, that microaggressions (1) are operationalized with sufficient clarity and consensus to afford rigorous scientific investigation; (2) are interpreted negatively by most or all minority group members; (3) reflect implicitly prejudicial and implicitly aggressive motives; (4) can be validly assessed using only respondents’ subjective reports; and (5) exert an adverse impact on recipients’ mental health. A review of the literature reveals negligible support for all five suppositions.
</blockquote>
<p>
<a href="https://blogs.scientificamerican.com/observations/the-science-of-microaggressions-its-complicated/">The Science of Microaggressions: It’s Complicated</a>:
<blockquote>
Subtle bigotry can be harmful, but research on the concept so far raises more questions than answers.<br>
[..]
<br>
Still, the microaggression concept is so nebulously defined that virtually any statement or action that might offend someone could fall within its capacious borders.<br>
[..]<br>
The science aside, it is crucial to ask whether conceptualizing the interpersonal world in terms of microaggressions does more good than harm. The answer is “We don’t know.” Still, there are reasons for concern. Encouraging individuals to be on the lookout for subtle, in some cases barely discernible, signs of prejudice in others puts just about everyone on the defensive. Minority individuals are likely to become chronically vigilant to minor indications of potential psychological harm whereas majority individuals are likely to feel a need to walk on eggshells, closely monitoring their every word and action to avoid offending others. As a consequence, microaggression training may merely ramp up already simmering racial tensions.
</blockquote>
<h3>Conclusion</h3>
I hope this gives a bit of background and that I haven't misrepresented the author's intent.
<h2>So I wrote a book about performance...</h2>
<p><em>March 4, 2017</em></p>
...specifically <a href="https://www.amazon.com/gp/product/0321842847/ref=as_li_tl?ie=UTF8&camp=1789&creative=9325&creativeASIN=0321842847&linkCode=as2&tag=metaobject-20&linkId=4f7c355096938fece9d7c880e6916ce8">iOS and macOS Performance Tuning: Cocoa, Cocoa Touch, Objective-C, and Swift</a>. Despite, or maybe because of, this truly being a labor of love
(and immense time: the first time Addison-Wesley approached me about this was almost ten years ago), I truly expected it to remain
just that: a labor of love sitting in its little niche. So imagine my surprise to see the little badge today:<p>
<a href="https://www.amazon.com/gp/product/0321842847/ref=as_li_tl?ie=UTF8&camp=1789&creative=9325&creativeASIN=0321842847&linkCode=as2&tag=metaobject-20&linkId=4f7c355096938fece9d7c880e6916ce8"><img src="https://lh3.googleusercontent.com/-DkSNLsBbXf0/WLshdr85g2I/AAAAAAAAAhM/iarSb7fHzwE/ios-macos-number1.png?imgmax=1600" alt="Ios macos number1" title="ios-macos-number1.png" border="0" width="279" height="470" " /></a> <p>
Wow! #1 new release in <a href="https://www.amazon.com/gp/new-releases/books/6134007011/ref=zg_b_hnr_6134007011_1">Apple Programming</a> (my understanding is that this link will change over time). And yes, I checked: it wasn't the <em>only</em> release in Apple books for the period; there were a good dozen. In fact, iOS and macOS Performance Tuning took both the #1 and the #4 spots:
<img src="https://lh3.googleusercontent.com/-oUS5XC0RZJ4/WLslnLoj13I/AAAAAAAAAhY/IQimLfimgjQ/apple-releases.png?imgmax=1600" alt="Apple releases" title="apple-releases.png" border="0" width="333" height="600" />
Oh, and it's also taken #13 overall in Apple programming books.<p>
So a big THANK YOU to everyone who helped make this happen: the people I was allowed to learn from, Chuck, who instigated the project, and
Trina, who saw it through despite me.<p>
Anyway, now that the book is wrapped up, I can publish more performance-related information on this blog. Oh, the source code for the book
is on <a href="https://github.com/mpw/iOS-macOS-performance">GitHub</a>.<p>
UPDATE (March 5th, 2017): Now taking both the #1 and #2 spots in Apple new releases and the print edition is in the top 10 for Apple, with the Kindle edition in the top 20. Second update: now at #5 and #21 in overall Apple and #1/#3 in new releases. Getting more amazing all the time. I should probably take a break...<p>
<h2>Concept Shadowing and Capture in MVC and its Successors</h2>
<p><em>March 4, 2017</em></p>
In a previous <a href="http://blog.metaobject.com/2015/04/model-widget-controller-mwc-aka-apple.html">post</a>, I noted that Apple's
definition of MVC does not actually match the original definition, and that it is more like Taligent's Model View Presenter (MVP) or
what I like to call Model Widget Controller. Matt Gallagher's <a href="https://www.cocoawithlove.com/blog/mvc-and-cocoa.html">look at Model-View-Controller in Cocoa</a> makes a very similar point.
<p>
So who cares? After all, a rose by any other name is just as thorny, and the question of how the 500-pound gorilla gets to
name things is also moot: however it damn well pleases.<p>
The problem with using the same name is <a href="https://en.wikipedia.org/wiki/Variable_shadowing">shadowing</a>: since the
names are the same, accessing the original definition is now hard. Again, this wouldn't really be a problem if it
weren't for the fact that the old MVC solved exactly the kinds of problems that plague the new MVC.
<p>
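In programming-language terms, the effect is exactly that of variable shadowing, in miniature. A purely illustrative C snippet (not related to any actual MVC code):
<blockquote><pre>
#include &lt;stdio.h&gt;

int main(void) {
    int mvc = 1979;              /* the original Smalltalk-80 meaning      */
    {
        int mvc = 2017;          /* Apple's meaning, under the same name   */
        printf("%d\n", mvc);     /* prints 2017; within this scope the     */
                                 /* 1979 definition can no longer be named */
    }
    return 0;
}
</pre></blockquote>
<p>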
However, having to say "the problems of MVC are solved by MVC" is less than ideal, because, well, you sound a bit like a lunatic.
And that <em>is</em> a problem, because it means that MVC is not considered when trying to solve the problems of MVP/MVC. And that,
in turn, is a shame, because it solves them quite nicely, IMHO much more nicely than a lot of the other suggested patterns.<p>
It turns out that MVC is, just like Algol, an <a href="http://web.eecs.umich.edu/~bchandra/courses/papers/Hoare_Hints.pdf">improvement on most of its successors</a>.
<h2>mkfile(8) is severely syscall limited on OS X</h2>
<p><em>February 12, 2017</em></p>
When I got my brand-new MacBook Pro (late 2016), I was interested in testing out the phenomenal SSD performance I had been reading about,
reportedly around 2GB/s. Unix geek that I am, I opened a Terminal window and tapped out the following:
<hr>
<blockquote><pre>
mkfile 8G /tmp/testfile
</pre></blockquote>
<hr>
To my great surprise and consternation, both the <code>time</code> command and an <code>iostat 1</code> running in another window showed a measly 250MB/s throughput. That's not much faster than a spinning rust disk, and certainly much, much slower than previous SSDs, never mind the speed demon that the MBP 2016's SSD is supposed to be.<p>
So what was going on? Were the other reports false? At first, my suspicions fell on FileVault, which I was now using for the first time. It didn't make any sense, because what I had heard was that FileVault had only a minimal performance impact, whereas this was roughly a factor 8 slowdown.<p>
Alas, turning FileVault off and waiting for the disk to be decrypted did not change anything. Still 250MB/s. Had I purchased a lemon? Could I return the machine because the SSD didn't perform as well as advertised? Except, of course, that the speed wasn't actually advertised anywhere.<p>
It never occurred to me that the problem could be with <code>mkfile(8)</code>.
Of course, that's exactly where the problem was. If you check the mkfile <a href="https://opensource.apple.com/source/system_cmds/system_cmds-496/mkfile.tproj/mkfile.c">source code</a>, you will see that it writes to disk in 512 byte chunks. That doesn't actually affect the I/O path, which will coalesce those writes. However, you are spending one syscall per 512 bytes, and that turns out to be the limiting factor.
Upping the buffer size increases throughput until we hit 2GB/s at a 512KB buffer size. After that throughput stays flat.<p>
<img src="https://lh3.googleusercontent.com/-L-NRjwJDjMY/WKBEkq_vylI/AAAAAAAAAgQ/0zP91Q6O0o8/mkfile-ssd-throughput.png?imgmax=1600" alt="Mkfile ssd throughput" title="mkfile-ssd-throughput.png" border="0" width="461" height="261" />
X-Axis is buffer size in KB. The original 512 byte size isn't on there because it would be 0.5KB or the entire axis would need to be bytes,
which would also be awkward at the larger sizes. Also note that the X-Axis is logarithmic.
<p>
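If you want to reproduce the measurement, a minimal test program (not mkfile's actual source; the command-line buffer-size argument and the binary name below are just my experimental setup) looks roughly like this:
<blockquote><pre>
#include &lt;fcntl.h&gt;
#include &lt;stdio.h&gt;
#include &lt;stdlib.h&gt;
#include &lt;unistd.h&gt;

/* Write 8GB of zeros to /tmp/testfile using the buffer size given on the
   command line, so the effect of the syscall count on throughput can be
   timed with time(1) or watched with iostat(8). */
int main(int argc, char *argv[]) {
    size_t bufsize = argc &gt; 1 ? strtoul(argv[1], NULL, 10) : 512;
    long long total = 8LL * 1024 * 1024 * 1024;      /* like `mkfile 8G` */
    char *buffer = calloc(1, bufsize);
    int fd = open("/tmp/testfile", O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (!buffer || fd &lt; 0) { perror("setup"); return 1; }
    for (long long written = 0; written &lt; total; written += bufsize) {
        if (write(fd, buffer, bufsize) != (ssize_t)bufsize) {
            perror("write");
            return 1;
        }
    }
    close(fd);
    free(buffer);
    return 0;
}
</pre></blockquote>
Timing it with <code>time ./writetest 512</code> and <code>time ./writetest 524288</code> should reproduce the two ends of the curve above.<p>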
Radar filed: 30482489. I did not check on other operating systems, but my guess is that the results would be similar.
<p>
UPDATE: In the <a href="https://news.ycombinator.com/item?id=13627875">HN</a> discussion, a number of people interpreted this as saying that syscall speed is slow on OS X. AFAIK that
is no longer the case, and is in any case not the point. The point is that the hardware has changed so dramatically that even seemingly
extremely safe and uncontroversial assumptions no longer hold. Heck, 250MB/s would be perfectly fine if we still had spinning rust,
but SSDs in general, and particularly the scorchingly fast ones Apple has put in these laptops, have changed the equation so that
something that used to not factor into it at all, such as syscall performance, can now be the deciding factor.<p>
In the I/O tests I did for my iOS/macOS performance book (see sidebar), I found that CPU nowadays generally dominates over actual
hardware I/O performance. This was a case where I just wouldn't have expected that, and it took me over a day to find
the culprit, because the CPU should be doing virtually nothing. But once again, assumptions just got trampled by hardware advancements.<p>
So check your assumptions.<p>
<h2>What's Missing in the Discussion about Dynamic Swift</h2>
<p><em>May 21, 2016</em></p>
There have been some great posts recently on the need for dynamic features
in Swift.<p>
I think <a href="http://shapeof.com/archives/2016/5/dynamic_swift.html">Gus Muller</a>
really nails it with his description of adding an "Add Layer Mask" menu item to
Acorn that directly talks to the <code>addLayerMask:</code> implementation:
<blockquote>
With that in place, I can add a new menu item named "Add Layer Mask" with an action of <code>addLayerMask:</code> and a target of <code>nil</code>. Then in the layer class (or subclass if it's only specific to a bitmap or shape layer) I add a single method named <code>addLayerMask:</code>, and I'm pretty much done after writing the code to do the real work.<p>
[..]<p>
What I didn't add was a giant switch statement like we did in the bad old days of classic Mac OS programming. What I didn't add was glue code in various locations setting up targets and actions at runtime that would have to be massaged and updated whenever I changed something. I'm not checking a list of selectors and casting classes around to make the right calls.<p>
</blockquote>
The last part is the important bit, I think: no need to add boiler-plate glue code.
IMHO, glue code is what is currently killing us, productivity-wise. It is sort of
like software's dark matter, largely invisible but making up 90% of the mass of
our software universe.<p>
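To make the mechanism Gus describes concrete, here is a minimal Objective-C sketch; the class name and where the method lives are hypothetical, not Acorn's actual code:
<blockquote><pre>
#import &lt;Cocoa/Cocoa.h&gt;

// A responder that picks up an "Add Layer Mask" menu item purely by
// implementing the matching action method.
@interface LayerController : NSViewController
- (IBAction)addLayerMask:(id)sender;
@end

@implementation LayerController
- (IBAction)addLayerMask:(id)sender {
    NSLog(@"adding a layer mask");    // the real work would go here
}
@end

// Menu setup: the target is nil, so AppKit walks the responder chain at
// runtime to find whichever object responds to addLayerMask:. There is no
// switch statement and no glue code registering targets and actions.
static NSMenuItem *AddLayerMaskItem(void) {
    NSMenuItem *item = [[NSMenuItem alloc] initWithTitle:@"Add Layer Mask"
                                                  action:@selector(addLayerMask:)
                                           keyEquivalent:@""];
    item.target = nil;
    return item;
}
</pre></blockquote>
Menu validation rides the same chain: the item enables itself whenever some responder implements the action, so adding the capability to a new class requires no extra bookkeeping.<p>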
After giving some <a href="http://inessential.com/2016/05/17/responder_chain_followup">great</a> &nbsp;<a href="http://inessential.com/2016/05/20/updating_local_objects_with_server_objec">examples</a>, Brent Simmons <a href="http://inessential.com/2016/05/18/what_im_doing_with_these_articles">spells it out</a>:
<blockquote>
In case it’s not clear: with recent and future articles I’m documenting problems that Mac and iOS developers solve using the dynamic features of the Objective-C runtime.
My point isn’t to argue that Swift itself should have similar features — I think it should, but that’s not the point.
The point is that these problems will need solving in a possible future world without the Objective-C runtime. These kinds of problems should be considered as that world is being designed. The answers don’t have to be the same answers as Objective-C — but they need to be good answers, better than (for instance) writing a script to generate some code.
</blockquote>
Again, that's a really important point: it's not that we old Objective-C hands are
saying "we must have dynamic features", it's that there are problems that need
solving that are solved really, really well with dynamic features, and really, really
poorly in languages lacking such dynamic features.<p>
However, many of these dynamic features are definitely hacks, with various issues,
some of which I talk about in <a href="http://blog.metaobject.com/2014/03/the-siren-call-of-kvo-and-cocoa-bindings.html">The Siren Call of KVO and (Cocoa) Bindings</a>.
I think everyone would like to have these sorts of features implemented in a way
that is not hacky, and that the compiler can help us with, somehow.<p>
I am not aware of any technology or technique that makes this possible, i.e. that
gives us the power of a dynamic runtime when it comes to building these types
of generic architectural adapters in a statically type-safe way. And the
<a href="http://www.manton.org/2016/05/apples-mindset-on-swift-dynamic-features.html">direction</a> that Objective-C has been going, and that Swift follows with a
vengeance, is to remove those dynamic features in favor of static features.<p>
So that's a big worry. However, an even bigger worry, at least for me, is that
Apple will take Brent's concern extremely literally, and provide static solutions
for exactly the specific problems outlined (and maybe a few others they can think
of). There are some early indicators that this will be the case, for example that
you can <em>use</em> CoreData from Swift, but you couldn't actually build it in Swift.<p>
And that would be missing the point of dynamic languages almost completely.<p>
The truly amazing thing about KVC, CoreData, bindings, HOM, NSUndoManager
and so on is that none of them were known when Objective-C was designed,
and none of them needed specific language/compiler support to implement.<p>
Instead, the language was and is sufficiently malleable that its <em>users</em>
can think up these things and then go and implement them. So instead of being
an afterthought, a legacy concern or a feature grudgingly and minimally
implemented, the metasystem should be <em>the</em> primary focus of new
language development. (And unsurprisingly, that's the case in <a href="http://objective.st">Objective-Smalltalk</a>).<p>
To <a href="http://c2.com/cgi/wiki?AlanKayOnMessaging">quote</a> Alan Kay:
<blockquote>
If you focus on just messaging - and realize that a good metasystem can
late bind the various 2nd level architectures used in objects - then much
of the language-, UI-, and OS based discussions on this thread are really
quite moot.<p>
[..]<p>
I think I recall also pointing out that it is vitally important not just to
have a complete metasystem, but to have fences that help guard the crossing
of metaboundaries. [..assignment..] I would say that a system that allowed other metathings to be done
in the ordinary course of programming (like changing what inheritance
means, or what is an instance) is a bad design. (I believe that systems
should allow these things, but the design should be such that there are
clear fences that have to be crossed when serious extensions are made.)<p>
[..]
<p>
I would suggest that more progress could be made if the smart and talented
Squeak list would think more about what the next step in metaprogramming
should be - how can we get great power, parsimony, AND security of meaning?
<p>
</blockquote>
Note that Objective-C's metasystem <em>does</em> allow changing meta-things in
the normal course of programming, and it is rather ad-hoc about which things
it allows us to change and which it does not. It is a bad design, as far as metasystems
are concerned. However, even a bad metasystem is better than no metasystem.
Or to <a href="http://www.cs.virginia.edu/~evans/cs655/readings/steele.pdf">quote</a> Guy Steele (<a href="http://blog.metaobject.com/2014/09/no-virginia-swift-is-not-10x-faster.html">again</a>):
<blockquote>
This is the nub of what I want to say. A language design can no longer be a thing. It must be a pattern—a pattern for growth—a pattern for growing the pattern for defining the patterns that programmers can use for their real work and their main goal.
</blockquote>
The solution to bad metasystems is not to ban metasystems; it is to
design better metasystems that allow these things in a more disciplined <em>and</em>
more flexible way.<p>
As usual, discussion here or on <a href="https://news.ycombinator.com/item?id=11744211">HN</a>.<p>
<h2>Feindenpeinlichkeit</h2>
<p><em>March 23, 2016</em></p>
Sam Harris asks:<p>
<blockquote class="twitter-tweet" data-lang="en"><p lang="en" dir="ltr">Surely there&#39;s a German word for the embarrassment I feel over the quality of my enemies... <a href="https://t.co/buv03fbRFm">https://t.co/buv03fbRFm</a></p>&mdash; Sam Harris (@SamHarrisOrg) <a href="https://twitter.com/SamHarrisOrg/status/712433479808516096">March 23, 2016</a></blockquote> <script async src="//platform.twitter.com/widgets.js" charset="utf-8"></script>
I think the word should be <em>Feindenpeinlichkeit</em>.
<h2>Jitterdämmerung</h2>
<p><em>October 6, 2015</em></p>
So, Windows 10 has just been released, and with it the Ahead-of-Time (AOT) compilation feature <a href="http://www.anandtech.com/show/9661/windows-10-feature-focus-net-native">.NET Native</a>. Google also recently introduced <a href="https://source.android.com/devices/tech/dalvik/index.html">ART</a> for
Android, and I just discovered that Oracle <a href="https://www.youtube.com/watch?v=Xybzyv8qbOc">is planning</a> an AOT compiler for mainstream Java.<p>
With Apple doggedly sticking to Ahead of Time Compilation for Objective-C and now their new Swift, JavaScript
is pretty much the last mainstream hold-out for JIT technology. And even in JavaScript, the state-of-the-art
for achieving maximum performance appears to be asm.js, which largely eschews JIT techniques by acting as
object-code in the browser represented in JavaScript for other languages to be AOT-compiled into.<p>
I think this shift away from JITs is not a fluke but was inevitable; in fact, the big question is why
it has taken so long (probably industry inertia). The benefits were always less than advertised,
the costs higher than anticipated. More importantly though, the inherent performance characteristics
of JIT compilers don't match up well with most real world systems, and the shift to mobile has only
made that discrepancy worse. Although JITs are not going to go away completely, they are fading
into the sunset of a well-deserved retirement.<p>
<h4>Advantages of JITs less than promised</h4>
I remember reading the <a href="http://researchweb.watson.ibm.com/journal/sj39-1.html">IBM Systems Journal on Java Technology</a> back in 2000, I think. <a href="http://researchweb.watson.ibm.com/journal/sj39-1.html"><img src="http://researchweb.watson.ibm.com/journal/images/sj39-1x1.jpg" alt="" title="" border="0" width="90" height="" style="float:right; padding:2mm;" /></a> It had a bunch of research articles describing
super amazing VM technology with world-beating performance numbers. It also had a single real-world report
from IBM's San Francisco project. In the real world, it turned out, performance was a bit more "mixed" as
they say. In other words: it was terrible and they had to do an incredible amount of work for the system
to be even remotely usable.<p>
There was also the experience of the New Typesetting System (NTS), a rewrite of TeX in Java. Performance
was atrocious; the team took it with humor and chose a snail as their logo.<p>
<img src="https://lh3.googleusercontent.com/-1REq2UE4c6Q/VhROjIVc-CI/AAAAAAAAAVA/_sNxyShIm0g/nts-at-full-speed.png?imgmax=800" alt="Nts at full speed" title="nts-at-full-speed.png" border="0" height="135" style="float:left; padding:2mm;" />
One of the reasons for this less than stellar performance was that JITs were invented for highly dynamic
languages such as Smalltalk and Self. In fact, the Java Hotspot VM can be traced in a direct line to
Self via the <a href="http://strongtalk.org">Strongtalk</a> system, whose creator, <a href="https://en.wikipedia.org/wiki/Strongtalk">Animorphic Systems</a>, was purchased by Sun in order to acquire the VM technology.<p>
However, it turns out that one of the biggest benefits of JIT compilers in dynamic languages is figuring
out the actual types of variables. This is a problem that is theoretically intractable (equivalent to
the halting problem) and practically fiendishly difficult to do at compile time for a dynamic language.
It is trivial to do at runtime, all you need to do is record the actual types as they fly by. If you
are doing Polymorphic Inline Caching, just look at the contents of the caches after a while. It is
also largely trivial to do for a statically typed language at compile time, because the types are right
there in the source code!<p>
So gathering information at runtime simply isn't as much of a benefit for languages such as C# and Java
as it was for Self and Smalltalk.<p>
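For a sense of how little machinery "recording the actual types as they fly by" takes, here is a purely illustrative sketch of the recording half of a polymorphic inline cache (simplified; a real VM attaches one cache per call site and also caches the looked-up method):
<blockquote><pre>
#import &lt;objc/runtime.h&gt;

// One cache per call site in a real system; a single static one here.
typedef struct { Class seen[4]; int count; } CallSiteCache;
static CallSiteCache cache;

static void recordReceiver(id receiver) {
    Class cls = object_getClass(receiver);
    for (int i = 0; i &lt; cache.count; i++) {
        if (cache.seen[i] == cls) return;     // class already recorded
    }
    if (cache.count &lt; 4) {
        cache.seen[cache.count++] = cls;      // a newly observed receiver class
    }
}
</pre></blockquote>
After the program has warmed up, the cache contents <em>are</em> the type information that static analysis of a dynamic language struggles to derive.<p>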
<h4>Significant Costs</h4>
The runtime costs of a JIT are significant. The obvious cost is that the compiler has to be run
alongside the program to be executed, so time spent compiling is not available for executing. Apart
from the direct costs, this also means that your compiler is limited in the types of analyses
and optimizations it can do. The impact is particularly severe on startup, so short-lived
programs such as TeX/NTS are severely impacted and can often run slower overall
than interpreted byte-code.<p>
In order to mitigate this, you start having to have multiple compilers and heuristics
for when to use which compilers. In other words: complexity increases dramatically,
and you have only mitigated the problem somewhat, not solved it.<p>
A less obvious cost is an increase in VM pressure, because the code-pages created by the JIT
are "dirty", whereas executables paged in from disk are clean. Dirty pages have to be written
to disk when memory is required, clean pages can simply be unmapped. On devices without a
swap file like most smartphones, dirty vs. clean can mean the difference between a few unmapped
pages that can be swapped in later and a process getting killed by the OS.<p>
VM and cache pressure is generally considered a much more severe performance problem than a little extra
CPU use, and often even than a lot of extra CPU use. Most CPUs today can multiply numbers
in a single cycle, yet a single main memory access has the CPU stalled for a hundred cycles or more.<p>
In fact, it could very well be that keeping non-performance-critical code as compact interpreted
byte-code may actually be better than turning it into native code, as long as the code-density
is higher.
<h4>Security risks</h4>
Having memory that is both writable and executable is a security risk. And forbidden on iOS,
for example. The only exception is Apple's own JavaScript engine, so on iOS you simply
can't run your own JITs.<p>
<h4>Machines got faster</h4>
On the low-end of performance, machines have gotten so fast that pure interpreters are often
fast enough for many tasks. Python is used as-is, and PyPy isn't really taking
the Python world by storm. Why? I am guessing it's because on today's machines, plain old
interpreted Python is often fast enough. Same goes for Ruby: it's almost comically slow
(in my measurements, serving http via <a href="http://www.sinatrarb.com">Sinatra</a> was almost 100 times slower than using <a href="https://www.gnu.org/software/libmicrohttpd/">libµhttp</a>),
yet even that is still 400 requests per second, exceeding the needs of the vast majority of web-sites
including my own blog, which until recently didn't see 400 visitors per <em>day</em>.<p>
The first JIT I am aware of was <a href="http://c2.com/cgi/wiki?PeterDeutsch">Peter Deutsch's</a>
PS (<a href="http://c2.com/cgi/wiki?PortableSmalltalk">Portable Smalltalk</a>), but only about a decade later Smalltalk was fine doing multi-media
with just a <a href="http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.103.99">byte-code</a>&nbsp;<a href="http://sdmeta.gforge.inria.fr/FreeBooks/CollectiveNBlueBook/formatted-btf-once-more.pdf">interpreter</a>. And native primitives.<p>
<h4>Successful hybrids</h4>
The technique used by Squeak (interpreter + C primitives for heavy lifting, for example for multi-media
or cryptography) has been applied successfully in many different cases. This hybrid approach was described
in detail by John Ousterhout in <a href="http://web.stanford.edu/~ouster/cgi-bin/papers/scripting.pdf">Scripting: Higher-Level Programming for the 21st Century</a>: high level "scripting" languages are used to glue together
high performance code written in "systems" languages. Examples include Numpy, but the ones I found most
impressive were "<a href="http://homepages.cwi.nl/~robertl/papers/1998/fgcs1/paper.pdf">computational steering</a>" systems apparently used in supercomputing facilities such as
Oak Ridge National Laboratories. Written in Tcl.<p>
What's interesting with these hybrids is that JITs are being squeezed out at both ends: at the "scripting"
level they are superfluous, at the "systems" level they are not sufficient. And I don't believe that this
idea is only applicable to specialized domains, though there it is most noticeable. In fact, it seems
to be an almost direct manifestation of the observations in Knuth's famous(ly misquoted) quip about
"Premature Optimization":<p>
<blockquote>
Experience has shown (see [46], [51]) that most of the running time in non-IO-bound programs is concentrated in about 3 % of the source text.<p>
[..]
The conventional wisdom shared by many of today's software engineers calls for ignoring efficiency in the small; but I believe this is simply an overreaction to the abuses they see being practiced by penny-wise-and-pound-foolish programmers, who can't debug or maintain their "optimized" programs. In established engineering disciplines a 12 % improvement, easily obtained, is never considered marginal; and I believe the same viewpoint should prevail in software engineering. Of course I wouldn't bother making such optimizations on a one-shot job, but when it's a question of preparing quality programs, I don't want to restrict myself to tools that deny me such efficiencies.
<p>
There is no doubt that the grail of efficiency leads to abuse. Programmers waste enormous amounts of time thinking about, or worrying about, the speed of noncritical parts of their programs, and these attempts at efficiency actually have a strong negative impact when debugging and maintenance are considered. We should forget about small efficiencies, say about 97% of the time: <b>premature optimization is the root of all evil</b>.<p>
Yet we should not pass up our opportunities in that critical 3 %. A good programmer will not be lulled into complacency by such reasoning, he will be wise to look carefully at the critical code; but only after that code has been identified. It is often a mistake to make a priori judgments about what parts of a program are really critical, since the universal experience of programmers who have been using measurement tools has been that their intuitive guesses fail. After working with such tools for seven years, I've become convinced that all compilers written from now on should be designed to provide all programmers with feedback indicating what parts of their programs are costing the most; indeed, this feedback should be supplied automatically unless it has been specifically turned off.
<p>
[..]
<p>
(Most programs are probably only run once; and I suppose in such cases we needn't be too fussy about even the structure, much less the efficiency, as long as we are happy with the answers.)
When efficiencies do matter, however, the good news is that usually only a very small fraction of the code is significantly involved.
<footer>
<cite>
<a href="http://dl.acm.org/citation.cfm?doid=356635.356640">Structured Programming with go to Statements</a>, Donald Knuth, 1974</cite>
</footer>
</blockquote>
For the 97%, a scripting language is often sufficient, whereas the critical 3% are both critical enough
as well as small and isolated enough that hand-tuning is possible and worthwhile.<p>
I agree with Ousterhout's critics who say that the split into scripting languages and systems languages
is arbitrary; Objective-C, for example, combines both approaches in a single language, though one that is
very much a hybrid itself.
The "Objective" part is very similar to
a scripting language in both performance and ease/speed of development, despite the fact that it is compiled ahead of time,
while the C part does the heavy
lifting of a systems language. Alas, Apple has worked continuously and fairly successfully at destroying
both of these aspects and turning the language into a bad caricature of Java. However, although the
split is arbitrary, the competing and diverging requirements are real; see Erlang's split into a
functional language in the small and an object-oriented language in the large.<p>
<h4>Unpredictable performance model</h4>
The biggest problem I have with JITs is that their performance model is extremely unpredictable. First,
you don't know when optimizations are going to kick in, or when extra compilation is going to make you
slower. Second, predicting which bits of code will actually be optimized well is also hard and a moving
target. Combine these two factors, and you get a performance model that is somewhere between unpredictable
and intractable, and therefore at best statistical: on average, your code will be faster. Probably.<p>
While there may be domains where this is acceptable, most of the domains where performance matters at all
are not of this kind; they tend to be (soft) real time. In real-time systems, average performance matters
not at all; predictably meeting your deadline does. As an example, delivering 80 frames in 1 ms each and
20 frames in 20 ms each (480 ms total) means failure, because you missed your 60 fps target 20% of the time,
whereas delivering 100 frames in 10 ms each (1000 ms total) means success, because you met your 60 fps target 100% of the time,
despite the fact that the first scenario is more than twice as fast on average.<p>
I really learned this in the '90s, when I was doing pre-press work and delivering highly optimized
RIP and Postscript processing software. I was stunned when I heard about daily newspapers switching
to pre-rendered, pre-screened bitmap images for their workflows. This is the most inefficient format
imaginable for pre-press work, with each page typically taking around 140 MB of storage uncompressed,
whereas the Postscript source would typically be between 1/10th and 1/1000th of the size. (And at
the time, 140MB was a <em>lot</em> even for disk storage, never mind RAM or network capacity.)<p>
The advantage of pre-rendered bitmaps is that your average case is also your worst case. Once you
have provisioned your infrastructure to handle this case, you know that your tech stack will be able to
deliver your newspaper on time, no matter what the content. With Postscript (and
later PDF) workflows, your average case is much better (and your best case ridiculously so), but you
simply don't get any bonus points for delivering your newspaper early. You just get problems
if it's late, and you are not allowed to average the two.<p>
<blockquote>
Eve could survive and be useful even if it were never faster than, say, Excel. The Eve IDE, on the other hand, can't afford to miss a frame paint. That means Imp must be not just fast but predictable - the nemesis of the SufficientlySmartCompiler.
<footer>
<cite><a href="https://github.com/jamii/imp/blob/master/diary.md#operators">Eve blog</a> </cite>
</footer>
</blockquote>
I also saw this effect in play with Objective-C and C++ projects: despite the fact that Objective-C's
primitive operations are generally more expensive, projects written in Objective-C often had better
performance than comparable C++ projects, because Objective-C's performance model was so much
simpler, more obvious, and more predictable.<p>
When Apple was still pushing the Java bridge, Sun engineers did a stint at a WWDC to explain how
to optimize Java code for the Hotspot JIT. It was comical. In order to write fast Java code,
you effectively had to think of the assembler code that you wanted to get, then write the
Java code that you thought might net that particular bit of machine code, taking into
account the various limitations of the JIT. At that point, it is a <em>lot</em> easier to just
write the damn assembly code. And vastly more predictable: what you write is what you get.<p>
Modern JITs are capable of much more sophisticated transformations, but what the creators of these
advanced optimizers don't realize is that they are making the problem <em>worse</em> rather than
better. The more they do, the less predictable the code becomes.<p>
The same, incidentally, applies to <a href="http://prog21.dadgum.com/40.html">SufficientlySmart</a> AOT compilers such as the one for the Swift
language, though the problem is not quite as severe as with JITs because you don't have the
dynamic component. All these things are well-intentioned but all-in-all counter-productive.<p>
<h4>Conclusion</h4>
Although the idea of Just-in-Time Compilers was very good, their area of applicability, which was
always smaller than imagined and/or claimed, has shrunk ever further due to advances in technology,
changing performance requirements and the realization that for most performance critical tasks,
predictability is more important than average speed. They are therefore slowly being phased
out in favor of simpler, generally faster and more predictable AOT compilers. Although they
are unlikely to go away completely, their significance will be drastically diminished.<p>
Alas, the idea that writing high-level code without any concessions to performance (often
justified by misinterpreting or simply just misquoting Knuth) and then letting a sufficiently
smart compiler fix it lives on. I don't think this approach to performance is viable, more
predictability is needed and a language with a hybrid nature and the ability for the programmer
to specify behavior-preserving transformations that alter the performance characteristics of
code is probably the way to go for high-performance, high-productivity systems. More on that
another time.<p>
What do you think? Are JITs on the way out or am I on crack? Should we have a more manual
way of influencing performance without completely rewriting code or just trusting the
SmartCompiler?<p>
<h4>Update: Nov. 13th 2017</h4>
The Mono Project has just <a href="http://www.mono-project.com/news/2017/11/13/mono-interpreter/">announced</a> that they are adding a byte-code interpreter: "We found that certain programs can run faster by being interpreted than being executed with the JIT engine."
<h2>Are Objects Already Reactive?</h2>
<p><em>October 4, 2015</em></p>
TL;DR: Yes, obviously.<br><br>
My post from last year titled <a href="http://blog.metaobject.com/2014/03/the-siren-call-of-kvo-and-cocoa-bindings.html">The Siren Call of KVO and Cocoa Bindings</a> has been one of my
most consequential so far. Apart from being widely circulated and discussed, it has also been a
focal point of my ongoing work related to <a href="http://objective.st">Objective-Smalltalk</a>.
The ideas presented there have been central to my <a href="https://www.youtube.com/watch?v=B5hGmXFHQS4">talks</a>&nbsp;<a href="http://oleb.net/blog/2015/06/uikonf-2015-talks/">on</a>&nbsp;<a href="https://github.com/mpw/mobile-optimized-2015">software</a>&nbsp;<a href="http://objcgn.com/videos/">architecture</a>, and I have
finally been able to <a href="http://blog.metaobject.com/2015/09/very-simple-dataflow-constraints-with.html">present</a> some early results I find very promising.<p>
Alas, with the good always comes the bad, and some of the reactions (sic) have not been quite so
positive. For example, consider the following I wrote:<p>
<blockquote>
[..] Adding reactivity to an object-oriented language is, at <a href="http://khanlou.com/2014/02/reactive-cocoa/">first blush</a>, non-sensical and certainly causes confusion [because] whereas functional programming, which is per definition static/timeless/non-reactive, really needs something to become interactive, reactivity is already inherent in OO. In fact, reactivity is the quintessence of objects: all computation is modeled as objects reacting to messages.
</blockquote>
This seemed quite innocuous, obvious, and completely uncontroversial to me,
but apparently caused a bit of a stir with at least some of the creators of ReactiveCocoa:<p>
<blockquote class="twitter-tweet" lang="en"><p lang="en" dir="ltr"><a href="https://twitter.com/Javi">@Javi</a> <a href="https://twitter.com/kyleve">@kyleve</a> lol, that’s the guy who said “OO is already reactive, so who needs FRP”</p>&mdash; Justin Spahr-Summers (@jspahrsummers) <a href="https://twitter.com/jspahrsummers/status/560109820062072833">January 27, 2015</a></blockquote> <script async src="//platform.twitter.com/widgets.js" charset="utf-8"></script>
Ouch! Of course I never wrote that "nobody" needs FRP: Functional Programming definitely needs
FRP or something like it, because it isn't already reactive like objects are. Second, what I wrote is
that this is non-sensical "at first blush" (so "<a href="https://en.wiktionary.org/wiki/at_first_blush">on first impression</a>"). Idiomatically, this phrase is usually sets up a "
...but on closer examination", and lo-and-behold, almost the entire rest of the post
talks about how the related concepts of dataflow and dataflow-constraints are highly desirable.<p>
The point was and is (obviously?) a
terminological one, because the existing term "reactivity" is being overloaded
so much that it confuses more than it clarifies. And quite frankly, the idea
of objects being "reactive" is (a) so self-evident (you send a message, the object reacts
by executing a method, which usually sends more messages) and (b) so deeply ingrained and
basic that I didn't really think about it much at all. So obviously, it could very
well be that I was wrong and that this was "common sense" to me in the Einsteinian
sense.<p>
I will explore the terminological
confusion more in later posts, but suffice it to say that Conal Elliott contacted
the ReactiveCocoa guys to <a href="https://github.com/ReactiveCocoa/ReactiveCocoa/issues/1342">tell them</a> (politely) that whatever ReactiveCocoa was, it certainly <em>wasn't</em> FRP:
<blockquote>
I'm hoping to better understand how the term "Functional Reactive Programming" gets applied to systems that are so far from the original definition and principles (continuous time with precise & simple mathematical meaning)
</blockquote>
He also wrote/talked more about this confusion in his 2015 talk "<a href="https://github.com/conal/talk-2015-essence-and-origins-of-frp">Essence and Origins of FRP</a>":
<blockquote>
The term has been used incorrectly to describe systems like Elm, Bacon, and Reactive Extensions.
</blockquote>
Finally, he seems to agree with me that the term "reactive" wasn't really well chosen for
the concepts he was going after:<p>
<a href="https://vimeo.com/6686570"><img src="https://lh3.googleusercontent.com/-d12jVwDUrvA/VhFdBFRGWxI/AAAAAAAAAUo/L-8B3wFYtDU/What-is-Functional-Reactive-Programming.png?imgmax=800" alt="What is Functional Reactive Programming: Something of a misnomer. Perhaps Functional temporal programming" title="What-is-Functional-Reactive-Programming.png" border="0" width="" height="344" /></a>
<p>
So having established that the term "reactive" <em>is</em> confusing when applied to whatever
it is that ReactiveCocoa is or was trying to be, let's have a look at how and whether it is
applicable to objects. The <a href="http://cacm.acm.org">Communication of the ACM</a> "<a href="http://dl.acm.org/citation.cfm?id=219717">Special issue on object-oriented experiences and future trends</a>" from 1995 has the following to say:
<blockquote>
A group of leading experts from industry and academia came together last fall at the invitation of IBM and ACM to ponder the primary areas of future needs in software support for object-based applications.
<p>
[..]
<p>
In the future, as you talk about having an economy based on these entities (whether we call them “objects” or we call them something else), they’re going to have to be more proactive. Whether they’re intelligent agents or subjective objects, you enable them with some responsibility and they get something done for you. That’s a different view than we have currently where <b>objects are reactive</b>; you send it a message and it does something and sends something back.
<footer>
— <cite><a href="http://dl.acm.org/citation.cfm?doid=226239.226247">The promise and the cost of object technology: a five-year forecast</a></cite>
</footer>
</blockquote>
But lol, that's only a group of leading researchers invited by IBM and the ACM writing in
arguably one of the most prestigious computing publications, so what do they know?
Let's see what the <a href="http://stephane.ducasse.free.fr/FreeBooks/BlueBook/Bluebook.pdf">Blue Book</a> from 1983 has to say when defining what objects are:<p>
<img src="http://sdmeta.gforge.inria.fr/FreeBooks/BlueBook/blueBook.jpg" align="left" alt="" title="" width="150" height="" hspace=10 />
<blockquote>
The set of messages to which an object can respond is called its interface with the rest of the system. The only way to interact with an object is through its interface. A crucial property of an object is that its private memory can be manipulated only by its own operations. A crucial property of messages is that they are the only way to invoke an object's operations. These properties insure that the implementation of one object cannot depend on the internal details of other objects, only on the messages to which they <b>respond</b>.
<footer>
— <cite><a href="http://stephane.ducasse.free.fr/FreeBooks/BlueBook/Bluebook.pdf">Smalltalk 80, The Language and It's Implementation</a>, p. 6</cite>
</footer>
</blockquote>
So the crucial definition of objects according to the creators of Smalltalk is that they <em>respond</em>
to messages. And of course if you check a dictionary or thesaurus, you will find that <em>respond</em>
and <em>react</em> are synonyms. So the fundamental definition of objects is that they react to messages.
Hmm...that sounds familiar somehow.<p>
While those are seemingly pretty influential definitions, maybe they are uncommon? No.
A simple google search reveals that this definition is extremely common, and has been
around for at least the last 30-40 years:<p>
<blockquote>
A conventional statement of this principle is that a program should never declare that a given object is a SmallInteger or a LargeInteger, but only that it <b>responds</b> to integer protocol.
<footer>
— <cite><a href="http://www.cs.virginia.edu/~evans/cs655/readings/smalltalk.html">Design Principles Behind Smalltalk</a>, Dan Ingalls 1981</cite>
</footer>
</blockquote>
But lol, what do Adele Goldberg, David Robson or Dan Ingalls know about Object Oriented
Programming? After all, we have <em>one of the creators of ReactiveCocoa here</em>!
(Funny aside: LinkedIn once asked me "Does Dan Ingalls know about Object Oriented
Programming?" Alas there wasn't a "Are you kidding me?" button, so I lamely clicked "Yes").
<p>
Or maybe it's only those crazy dynamic typing folks that no-one takes seriously these days? No.
<blockquote>
So the only relevant thing for typing purposes is how an object
reacts to messages.
<footer>
— <cite><a href="https://books.google.de/books?id=PCx8GcVKF3QC&pg=PA65&lpg=PA65&dq=%22object+reacts+to+messages%22&source=bl&ots=GPZps7Nl1C&sig=Gwe1vnI5ZCbRCl3M1PEzkfTqv34&hl=en&sa=X&ved=0CCEQ6AEwAGoVChMIibnktOimyAIVC10sCh1s4wcT#v=onepage&q=%22object%20reacts%20to%20messages%22&f=false">Foundation of Object-Oriented Languages</a> de Bakker, de Roever, 1990, p. 65 </cite>
</footer>
</blockquote>
Here's a section from the Haiku/BeOS documentation:
<blockquote>
A <a class="link" href="BHandler.html" title="BHandler"><code class="classname">BHandler</code></a>
object responds to messages that are handed to it by a
<a class="link" href="BLooper.html" title="BLooper"><code class="classname">BLooper</code></a>. The
<a class="link" href="BLooper.html" title="BLooper"><code class="classname">BLooper</code></a> tells the
<a class="link" href="BHandler.html" title="BHandler"><code class="classname">BHandler</code></a> about a message by invoking the
<a class="link" href="BHandler.html" title="BHandler"><code class="classname">BHandler</code></a>'s
<a class="link" href="BHandler.html#BHandler_MessageReceived" title="MessageReceived()"><code class="methodname">MessageReceived()</code>
</a> function.
<footer>
— <cite><a href="https://www.haiku-os.org/legacy-docs/bebook/BHandler_Overview.html">The Be Book - System Overview - The Application Kit</a></cite>
</footer>
</blockquote>
A book on OO graphics:<p>
<blockquote>
The draw object reacts to messages from the panel, thereby creating an IT
to cover the canvas.
<footer>
— <cite><a href="https://books.google.de/books?id=4rerCAAAQBAJ&pg=PA60&lpg=PA60&dq=%22object+reacts+to+messages%22&source=bl&ots=AqAWjkbuYQ&sig=obbp5FIwyNMquYlsOcdGs6_4obI&hl=en&sa=X&ved=0CCYQ6AEwAWoVChMIibnktOimyAIVC10sCh1s4wcT#v=onepage&q=%22object%20reacts%20to%20messages%22&f=false">Advances in Object-Oriented Graphics I</a>edited by Edwin H. Blake, Peter Wisskirchen, 1991</cite>
</footer>
</blockquote>
CS lecture on OO:
<blockquote>
Properties implemented as "fields" or "instance variables"
<ul>
<li>constitute the "state" of the object</li>
<li>affect how object reacts to messages</li>
</ul>
<footer>
— <cite><a href="http://cs.williams.edu/~andrea/cs136/Lectures/Lec2.html">CS 136 lecture</a>, cs.williams.edu </cite>
</footer>
</blockquote>
Heck, even the Apple Cocoa/Objective-C docs speak of "objects responding to messages"; it's
almost like a conspiracy.
<blockquote>
By separating the message (the requested behavior) from the receiver (the owner of a method that can <b>respond</b> to the request), the messaging metaphor perfectly captures the idea that behaviors can be abstracted away from their particular implementations.
<footer>
— <cite><a href="https://developer.apple.com/library/ios/documentation/Cocoa/Conceptual/OOP_ObjC/Articles/ooObjectModel.html#//apple_ref/doc/uid/TP40005149-CH5-SW4">Object-Oriented Programming with Objective-C</a> -- Apple</cite>
</footer>
</blockquote>
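To make the point concrete, here is what that separation of message and receiver looks like in a few lines of Objective-C. This is just a minimal sketch for illustration: the classes and the <code>draw</code> message are made up, only <code>respondsToSelector:</code> is the actual runtime check.
<blockquote><pre>
#import &lt;Foundation/Foundation.h&gt;

// Two unrelated classes that happen to respond to the same (made-up) -draw message.
@interface Circle : NSObject
- (void)draw;
@end
@implementation Circle
- (void)draw { NSLog(@"drawing a circle"); }
@end

@interface Label : NSObject
- (void)draw;
@end
@implementation Label
- (void)draw { NSLog(@"drawing a label"); }
@end

int main(void) {
    @autoreleasepool {
        // The sender knows only the message; each receiver supplies its own behavior.
        for (id thing in @[[Circle new], [Label new]]) {
            if ([thing respondsToSelector:@selector(draw)]) {
                [thing draw];
            }
        }
    }
    return 0;
}
</pre></blockquote>
<p>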
Book on OO Analysis and Design:
<blockquote>
As the object structures are identified and modeled, basic processing
requirements for each object can be identified. How each object
responds to messages from other objects needs to be defined.
<footer>
— <cite> <a href="https://books.google.de/books?id=8v9vZMPV3scC&pg=PA75&lpg=PA75&dq=%22object+responds+to+messages%22&source=bl&ots=mdki9dxhBq&sig=RppV-cb5vp2CyUhY8n_0yXHuRR8&hl=en&sa=X&ved=0CB8Q6AEwADgKahUKEwjm5dyM7abIAhUFFiwKHeKQAyE#v=onepage&q=%22object%20responds%20to%20messages%22&f=false">Object-Oriented Information Engineering: Analysis, Design, and Implementation</a>, 1994</cite>
</footer>
</blockquote>
<blockquote>
An object's behavior is defined by its message-handlers (handlers). A message-handler for an object responds to messages and performs the required actions.
<footer>
— <cite><a href="http://home.agh.edu.pl/~ligeza/wiki/clips:object">CLIPS - object-oriented programming</a></cite>
</footer>
</blockquote> <p>
Or maybe this is an old definition from the 80ies and early 90ies that has fallen out of use? No.
<blockquote>
The behavior of related collections of objects is often defined by a class,
which specifies the state variables of an object (its instance variables)
and how an object responds to messages (its instance methods).
<footer>
— <cite><a href="https://books.google.de/books?id=xpT6AQAAQBAJ&pg=PA362&lpg=PA362&dq=%22object+responds+to+messages%22&source=bl&ots=xhbea2owpO&sig=_gwjuSjbSZzUb1yEAivIT8rMwOA&hl=en&sa=X&ved=0CDwQ6AEwB2oVChMIg43J_OymyAIVRRUsCh0tiAsh#v=onepage&q=%22object%20responds%20to%20messages%22&f=false">Design Concepts in Programming Languages</a>, 2008 p 362</cite>
</footer>
</blockquote>
<blockquote>
Methods: Code blocks that define how an object responds to messages.
Optionally, methods can take parameters and generate return values.
<footer>
— <cite> <a href="https://books.google.de/books?id=gMrLLqFboE0C&pg=PT86&lpg=PT86&dq=%22object+responds+to+messages%22&source=bl&ots=fYSqD7OR1C&sig=Pq7N8ombmFkrJGiLjDWfxTvNOOI&hl=en&sa=X&ved=0CC8Q6AEwA2oVChMIg43J_OymyAIVRRUsCh0tiAsh#v=onepage&q=%22object%20responds%20to%20messages%22&f=false">Cocoa</a>, by Richard Wentk, 2010</cite>
</footer>
</blockquote>
<p>
<blockquote>
The main difference between the State Machine and the immutable is the way the <b>object reacts</b> to messages being sent (via methods invoked on the public interface). Whereas the State Machine changes its own state, the Immutable creates a new object of its own class that has the new state and returns it.
<footer>
— <cite> <a href="http://www.artima.com/interfacedesign/Immutable.html">Interface Design</a> by Bill Venners </cite>
</footer>
</blockquote>
So to sum up: classic OOP is definitely reactive. FRP is not, at least according to the guy
who invented it. And what exactly things like ReactiveCocoa and Elm etc. are, I don't think
anyone really knows, except that they are not even FRP, which wasn't, in the end, reactive.<p>
Tune in for "What the Heck is Reactive Programming, Anyway?"<p>
As always, comments welcome here or on <a href="https://news.ycombinator.com/item?id=10328180">HN</a><p>
Marcel Weiherhttp://www.blogger.com/profile/11651004661887001433noreply@blogger.com0tag:blogger.com,1999:blog-8397311766319215218.post-23164541988933236442015-09-26T08:59:00.001-07:002015-09-26T22:56:35.719-07:00Very Simple Dataflow Constraints with Objective-SmalltalkEarly last year, I wrote a lengthy <a href="http://blog.metaobject.com/2014/03/the-siren-call-of-kvo-and-cocoa-bindings.html">piece</a> on the connection between Apple
technologies such as Key Value Observing (KVO) and Bindings and general
Computer Science concepts such as constraint solving, particularly
dataflow constraints (aka. Spreadsheet Constraints).<p>
I also wrote that I was working on something, and despite being somewhat
distracted with becoming a father, joining a startup and being acquired,
I now have <a href="https://github.com/mpw/TemperatureConstraints">working code</a>.<p>
The sample application contains two examples, one a classic temperature
converter that I will cover later, the other an implementation of the
ReactiveCocoa password validation <a href="https://github.com/ReactiveCocoa/ReactiveCocoa/tree/master/Documentation/Legacy">example</a>. The basic
idea is super-simple: we want to enable a login button when the password
field and the confirmation field contain the same value, expressed as
follows in Objective-C:<p>
<hr>
<blockquote><pre>
loginButton.enabled = [password.stringValue isEqual:passwordConfirm.stringValue];
</pre></blockquote>
<hr>
Again, this is super simple to write down, but it's not the entire story, because
we want to evaluate this statement continuously as the password field and the
passwordConfirm field change. The mess of callbacks required to keep those
states in sync vastly exceeds the one-time evaluation, as explained in
a very good <a href="http://nshipster.com/reactivecocoa/">article</a> on Reactive
Cocoa by <a href="http://nshipster.com">NSHipster</a>. That article uses a slightly more elaborate example, the
one in the ReactiveCocoa <a href="https://github.com/ReactiveCocoa/ReactiveCocoa/tree/master/Documentation/Legacy">documentation</a> is the following:<p>
<hr>
<blockquote><pre>
RAC(self, createEnabled) = [RACSignal
combineLatest:@[ RACObserve(self, password), RACObserve(self, passwordConfirmation) ]
reduce:^(NSString *password, NSString *passwordConfirm) {
return @([passwordConfirm isEqualToString:password]);
}];
</pre></blockquote>
<hr>
What's noticeable, apart from the macros that are necessary, is the <a href="http://blog.metaobject.com/2009/01/semantic-noise.html">semantic noise</a> apparently inherent in this solution. Instead
of focusing on what we want to accomplish (hidden inside the last return),
the focus is on generic RAC classes such as <code>RACSignal</code> and methods
such as <code>combineLatest:</code> and <code>reduce:</code>. I didn't really
want to combine and reduce, I just wanted to keep some different states in sync,
and with Objective-Smalltalk, I can do just that.<p>
Let's first recast the original Objective-C expression into Objective-Smalltalk.
Since Smalltalk is not burdened by the syntactic legacy of C, we can lose the square
brackets. Because we have binary selectors (a bit like operators) and use ':=' for
assignment, we can use '=' to check for equality instead of having to write out
'isEqual:'. The dots get replaced by slashes because Polymorphic Identifiers use
URI syntax, and finally we use periods instead of semicolons at the end of sentences,
er, statements.
<hr>
<blockquote><pre>
loginButton/enabled := password/stringValue = passwordConfirm/stringValue.
</pre></blockquote>
<hr>
Again, this is semantically the same statement as the original Objective-C, it
assigns the right hand side to the left hand side. This can be viewed as a
one-way constraint: the left hand side should be the same as the right hand side.
The constraint has a fundamental flaw, though, because it is only maintained
instantaneously as the line of code is executed. After that, left hand side
and right hand side can diverge again. What we want is for that constraint
to be maintained indefinitely: whenever the right hand side changes, the
left hand side should be updated. In Objective-Smalltalk, you can now
express this by replacing the ":=" assignment operator (technically: connector),
with the "|=" constraint connector:
<hr>
<blockquote><pre>
loginButton/enabled |= password/stringValue = passwordConfirm/stringValue.
</pre></blockquote>
<hr>
A single character change, so no syntactic and no semantic noise.<p>
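For comparison, here is roughly what keeping that constraint maintained by hand looks like in plain Cocoa, with the text fields' delegate callbacks driving re-evaluation. This is only a sketch with assumed outlet names (<code>password</code>, <code>passwordConfirm</code>, <code>loginButton</code>) on a view controller that adopts <code>NSTextFieldDelegate</code>; it is not code from the sample project:
<hr>
<blockquote><pre>
// Re-evaluate the condition and push the result to the button.
- (void)updateLoginButton
{
    self.loginButton.enabled =
        [self.password.stringValue isEqual:self.passwordConfirm.stringValue];
}

// NSTextFieldDelegate: sent on every keystroke in either field.
- (void)controlTextDidChange:(NSNotification *)notification
{
    [self updateLoginButton];
}

- (void)viewDidLoad
{
    [super viewDidLoad];
    self.password.delegate = self;
    self.passwordConfirm.delegate = self;
    [self updateLoginButton];   // establish the initial state as well
}
</pre></blockquote>
<hr>
Three methods and two delegate hookups to express what the constraint connector states in a single line, and that is for the simplest possible example.<p>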
Marcel Weiherhttp://www.blogger.com/profile/11651004661887001433noreply@blogger.com4tag:blogger.com,1999:blog-8397311766319215218.post-12603721373931949882015-09-13T09:00:00.001-07:002015-09-13T09:00:10.623-07:00Why Software Is Hard
A while ago, the guys from the "<a href="http://atp.fm">Accidental Tech Podcast</a>" had an <a href="http://atp.fm/episodes/54-goto-fail">episode</a> about the goto fail; disaster and seemed to be struggling a bit with why software is hard / complex, "the most complex man made thing". Although the fact that it's all built is an important aspect, I think that that is a fairly small part.<p>
Probably the biggest problem is the state-space. Software is highly non-linear and discontinuous, unlike for example a bridge, or most other physical objects. If you change or remove a single bolt from a bridge, it is still the same bridge and its characteristics are largely the same. You need to remove quite a number of bolts
for that to change, and the effects become noticeable before that (though they do
get catastrophically non-linear at some point!). If you change one bit in a piece of software, the behavior is completely unpredictable. It could be the same, it could just crash, it could quietly corrupt data. That's why all those corner cases in the layers matter so much. Again, coming back to the bridge, if one beam has steel that has a slightly odd edge-case, it doesn't matter so much, you don't have to know everything about every beam, as long as they are within rough tolerances. And there are tolerances, and you can improve your odds by making things with
tighter tolerances than required. Again, with software this isn't really the
case: discrete problems are much harder than continuous ones.<p>
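The goto fail; bug itself is a nice illustration of just how non-linear software is. The snippet below is paraphrased from memory, not the verbatim Apple source, but the shape is right: a single duplicated line and the entire signature check becomes dead code.
<blockquote><pre>
if ((err = SSLHashSHA1.update(&amp;hashCtx, &amp;serverRandom)) != 0)
    goto fail;
if ((err = SSLHashSHA1.update(&amp;hashCtx, &amp;signedParams)) != 0)
    goto fail;
    goto fail;   /* duplicated line: always taken, with err still 0 ("success") */
if ((err = SSLHashSHA1.final(&amp;hashCtx, &amp;hashOut)) != 0)
    goto fail;
err = sslRawVerify(...);     /* never reached, so any signature "verifies" */

fail:
    ...
    return err;
</pre></blockquote>
<p>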
You can see this at work in optimization problems. As long as you have linear equations of real values, there are efficient algorithms for solving such an optimization problem (simplex typically runs in linear time, interior point methods are polynomial). Intuitively, restricting the variables to take only integer values should be easier/quicker, but the reverse is true, and in a big way: once you have integer programming or mixed-integer programming, everything becomes NP-hard. <p>
In fact, I just saw this in action during Joel Spolsky's talk "<a href="https://www.youtube.com/watch?v=0nbkaYsR94c">You suck at Excel</a>":
he turned on goal-seeking (essentially a solver), and it diverged dramatically.
The problem is that he was rounding the results. Once he turned rounding off,
the solver converged to a solution fairly quickly.<p>
The second part that they touched upon is that it is all abstract, which I think is what they were getting at with the idea that it is 100% built. Software being abstract means that we have no intuitions from physical objects to guide us. When building a house, everyone has an idea of how difficult it will be to build a single room vs. the whole house, how much material it will take etc. With software, not so much: this one seemingly little sub-function can potentially be more complex than the entire rest of the program. Even when navigating a hierarchical file-system, there is no indication of how much is hidden behind each directory entry at a particular level.<p>
The last part is related to the second, in that there are no physical or geometric constraints to the architecture and connection complexity. Again, in a physical system we know that something in one corner has very limited ways of influencing something in a different corner, and whatever effect there is will be attenuated by distance in a very predictable way. Again, in software we cannot generally know this. Good software architecture tries to impose artificial constraints to make construction and understanding tractable.<p>
Marcel Weiherhttp://www.blogger.com/profile/11651004661887001433noreply@blogger.com0tag:blogger.com,1999:blog-8397311766319215218.post-41879160633142102562015-08-26T02:39:00.001-07:002017-02-18T04:03:55.451-08:00What Happens to OO When Processors Are Free?A while ago, I presented as a crazy thought experiment the <a href="http://blog.metaobject.com/2007/09/or-transistor.html">idea</a> of using Montecito's transistor budget
for creating a chip with tens of thousands of ARM cores. Well, it seems the idea wasn't so crazy after
all: The <a href="http://apt.cs.manchester.ac.uk/projects/SpiNNaker/project/">SpiNNaker project</a> is trying to build a system with a million ARM CPUs, and it is designing
a custom <a href="http://apt.cs.manchester.ac.uk/projects/SpiNNaker/SpiNNchip/">chip</a> with lots of ARM cores on it.<p>
Of course they only have 1/6th the die area of the Montecito and are using a conservative <a href="http://spectrum.ieee.org/computing/hardware/lowpower-chips-to-model-a-billion-neurons">130nm process</a> rather
than the 90nm of the Montecito or the 14nm that is state of the art, so they have a much lower
transistor budget. They also use the later ARM 9 core and add 54 SRAM banks with 32KB each (from the die
picture, 3 per core), so in the
end they "only" put 18 cores on the chip, rather than many thousands. Using a state of the art
14nm process would mean roughly 100 times more transistors, a Montecito-sized die another factor
of six. At that point, we would be at 10000 cores per chip, rather than 18.<p>
One of the many interesting features of the SpiNNaker project is that <em>"the micro-architecture assumes
that processors are ‘free’: the real cost of computing is energy."</em> This has interesting consequences
for potentially simplifying object- or actor-oriented programming. Alan Kay's original idea of
objects was to scale down the concept of "computer", so every object is essentially a self-contained
computer with CPU and storage, communicating with its peers via messages. (Erlang is probably the
closest implementation of this concept).<p>
In our core-scarce computing environments, this had to
be simulated by multiplexing all (or most) of the objects onto a single von Neumann computer, usually
with a shared address space. If cores are free and we have them in the tens of thousands, we can
start entertaining the idea of no longer simulating object-oriented computing, but rather of
implementing it directly by giving each object its own core and attached memory. Yes, utilization
of these cores would probably be abysmal, but with free cores low utilization doesn't matter, and
low utilization (hopefully) means low power consumption.<p>
Even at 1% utilization, 10000 cores would still mean throughput
equivalent to 100 ARM 9 cores running full tilt, and I am guessing pretty low power consumption
if the transistors not being used are actually off. More important than 100 core-equivalents running is
probably the equivalent of 100 bus interfaces running at full tilt. The aggregate on-chip memory
bandwidth would be staggering.<p>
You could probably also run the whole
thing at lower clock frequencies, further reducing power. With each object having around 96KB
of private memory to itself, we would probably be looking at coarser-grained objects, with pure
data being passed between the objects (Objective-C or Erlang style) and possibly APL-like
array extensions (see <a href="http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.3.6196">OOPAL</a>).
Overall, that would lead to de-emphasis of expression-oriented programming models, and a more
architectural focus.<p>
This sort of idea isn't new, the <a href="https://en.wikipedia.org/wiki/Transputer">Transputer</a> got there in
the late 80ies, but it was conceived when Moore's law didn't just increase transistor counts, but also
clock-frequencies, and so Intel could always bulldozer away more intelligent architectures with better
fabs. This has stopped, clock-frequencies have been stagnant for a while and even geometries are starting
to <a href="http://www.theverge.com/2015/7/16/8976223/intel-q2-2015-earnings-moores-law-skylake-kaby-lake-cannonlake">stutter</a>. So maybe now the time for intelligent CPU architectures has finally come, and
with it the impetus for examining our assumptions about programming models.<p>
As always, comments welcome here or on <a href="https://news.ycombinator.com/item?id=13405772">Hacker News</a>.<p>
UPDATE: The kilo-cores are here:<ul>
<li><a href="http://vcl.ece.ucdavis.edu/pubs/2016.06.vlsi.symp.kiloCore/2016.vlsi.symp.kiloCore.pdf">Kilocore</a>: 1000 processors,
1.78 <em>Trillion</em> ops/sec, and at 1.78pJ/Op super power-efficient, so at 150 GOps/s only uses 0.7 watts. On a 32nm process, so
not yet maxed out.</li>
<li><a href="http://fpga.org/2017/01/12/grvi-phalanx-joins-the-kilocore-club/">GRVI Phalanx joins The Kilocore Club</a>: 1680 cores.</li>
</ul>
No reports of any of them running actors, but <a href="http://stefan-marr.de/renaissance/">ensembles</a> might work :-)<p>
Marcel Weiherhttp://www.blogger.com/profile/11651004661887001433noreply@blogger.com1tag:blogger.com,1999:blog-8397311766319215218.post-10039524960570213952015-07-07T13:00:00.001-07:002015-07-10T04:26:01.757-07:00Greek Choices: A German's PerspectiveMy <span style="text-decoration:line-through">6Wunderkinder</span> Microsoft colleague James Duncan Davidson penned some beautiful notes
on the <a href="https://medium.com/duncan-davidson/an-impossible-choice-for-greece-630b53e73825">Impossible Greek Choices</a> that happened effectively during his wedding and honeymoon (congratulations!!), to the people he has grown to know and, in some cases, love.
It's a wonderful piece that I highly recommend. There were some calls for a German perspective,
which is by necessity going to be less personal, but I hope it can provide some additional background.<p>
<h3>The Referendum</h3>
The Greek Referendum was an odd one, because it asked voters whether to accept or not accept an offer
by the 18 other Euro states and the IMF that was, in fact, no longer on the table. So that's at
least a little weird, as there really wasn't a choice to make, and from here it looked like pure
grandstanding / political theater.<p>
I don't know how it was perceived in Greece, but from here it looked like the government presented the
choices as "bow to German dictates" vs. "we can do without them". And the capital controls imposed
by the Greek government because the banks were effectively insolvent were described as "terrorism"
perpetrated by the EU. That's a bit rich when you take into account that the only
thing keeping the country afloat was EU funds and the money that is still
available is coming from EU emergency funds. If "terrorists" are people who give you money,
I want some more terrorists around me.<p>
Even weirder is the fact that Tsipras seems to think that the No vote will strengthen his
position in future negotiations, that now he will be able to negotiate a better deal. I
have a feeling that this will go down as one of the biggest political miscalculations in history,
but I may be wrong and he will now get everything he wanted.<p>
<h3>The "German" View</h3>
The German perspective on the Greek debt crisis is really quite simple: you can't live beyond
your means indefinitely. You can do it temporarily by taking on debt, but at some point that
debt has to be repaid. At that point you have to not only revert to living within your
original means, but significantly below because you have the debt to repay.<p>
I think this is completely obvious and non-controversial, as it relies on no more than
basic arithmetic. But then again, I <em>am</em> German.<p>
This doesn't mean you can never take on debt. For example, it makes sense to time-shift
availability of things. Or to invest in things that increase your earning potential.
Or one-time events, especially with a strong time component. But never to lift your
general standard of living. There is no way that can work.<p>
<h3>The Keynesian View</h3>
Keynesians say that governments are different from private individuals, and the debt they
take on is different from normal household debt. What matters here is the role of government debt
in a recession: the government should take on more debt to minimize economic contraction
and spur growth. The worst thing a government can do in a recession is start saving and
imposing "austerity": the economy will contract even more, and apart from hurting
citizens, this actually tends to make the debt problem <em>worse</em>. The reason
is that government debt is generally measured relative to GDP, because GDP/economic output
is a pretty direct indicator of a country's ability to pay back its debt.<p>
While not quite as uncontroversial, I think this is also largely trivially true.<p>
Although these two views are often described as contradictory, for example
Krugman criticizes the "Austerians" by showing how Keynesian predictions
turn out to be true, I don't think they are. After all, you can pick up
debt in a crisis and then pay back the debt when the economy is doing well,
and IIRC, that is exactly what Keynes said governments should do, behave
anti-cyclically.<p>
Of course, this requires discipline: why should I do something uncomfortable
when things are going well? After all, the fact that things are going well
proves that what I am doing is right, right? Empirically, governments do
not seem to pay back debt in any significant way, shape, or form, for reasons
I do not fully understand. Part appears to be simple expediency, why cut
spending (=happy voters) when you don't absolutely have to? Part might be
that our pensions systems tend to depend on having "safe" government debt
to invest in. Financial markets also appear to be fine with a small-ish
permanent fiscal deficit.<p>
I personally think this is hugely problematic and undermines democratic
principles, because governments sell out their sovereign (the voters)
to the financiers for a little bit of extra spending money. Panem et
circenses.<p>
<h3>A Third Way (I)</h3>
Of course, there <em>is</em> a way of living beyond your means without having to suffer
the consequences, which is to have someone else pick up the tab, for example by not
repaying your debt. The most obvious way is a default, but if you are a country and the
debt is in your own currency, you get more subtle options
such as devaluation, where you just lessen the value of the debt (and of all
the money in circulation, but hey...) or
high inflation, which has the same effect but stretches it out over time so
it is less noticeable.<p>
In the past, at least Italy and Greece used to do this all the time, but of course this
means that people are much less willing to lend you money, and they will demand much
higher interest payment, pricing in actual or potential inflation and risk of devaluation
or default. In the worst case, people will stop lending you money altogether.<p>
Without control over a currency and attached printing press for said currency, inflating or devaluing
your way out of trouble is no longer an option, which is why joining the Euro area was
contingent on meeting "convergence criteria" on inflation and public debt.<p>
Whereas Italy made a real effort to meet the criteria, Greece never really did. Even
the official figures were at best marginal, but these had been <a href="http://www.spiegel.de/international/europe/greek-debt-crisis-how-goldman-sachs-helped-greece-to-mask-its-true-debt-a-676634.html">doctored</a> by US
money house and <a href="http://www.rollingstone.com/politics/news/the-vampire-squid-strikes-again-the-mega-banks-most-devious-scam-yet-20140212">vampire squid</a> Goldman Sachs. <p>
However, private money lenders were unaware (possibly willfully) of this, and lent
Greece money at Euro-group rates, way, way below previously attainable rates.
Greece, freed from the difficulty and expense of obtaining debt, went on a
debt-fueled spending spree:
GDP skyrocketed, from Euro introduction in 2001 to the start of the financial
crisis in 2008 by a factor of 2.3!<p>
<iframe width="400" height="325" frameborder="0" scrolling="no" marginwidth="0" marginheight="0" src="https://www.google.de/publicdata/embed?ds=d5bncppjof8f9_&amp;ctype=l&amp;strail=false&amp;bcs=d&amp;nselm=h&amp;met_y=ny_gdp_pcap_cd&amp;scale_y=lin&amp;ind_y=false&amp;rdim=region&amp;idim=country:GRC:TUR:DEU&amp;ifdim=region&amp;hl=en_US&amp;dl=en&amp;ind=false"></iframe>
Did the Greek economy become super-competitive in this time, did exports soar, did tourism? As far
as I know, the answer to these questions is "No", the rise in GDP was largely fueled by cheap
debt. The Euro area in general did well during this time, but for example Germany's GDP
in 2008 was only 50% higher than the high point in the mid 90ies.<p>
Greece <a href="http://de.statista.com/statistik/daten/studie/200550/umfrage/staatseinnahmen-und-staatsausgaben-in-griechenland/">piled</a> on public debt in this period at >10% of the budget.<p>
<h3>The Greek Problem</h3>
When the financial crisis hit, the toxicity of all the debt held by pretty much everyone became
evident, but for some reason, Greece was particularly hard hit, despite the fact that their overall
debt levels were not <em>that</em> much worse than everyone else's. However, how bad existing debt is is
very much influenced by your creditworthiness, because debt is constantly refinanced and a bad
credit rating means that what used to be sustainable levels of debt can become unsustainable, a
self-fulfilling prophecy.<p>
Well, our favorite vampire-squid Goldman Sachs announced that something was "fishy" with Greece.
How did they know this? Easy, as we saw above they were the ones who had helped Greece cook
the books in the first place! Suddenly Greece's credit load was unsustainable, because rates
skyrocketed. Well, Greece's rates reverted to pre-Euro levels, because it became
known that the information that had been used to justify low Euro-area rates (meeting
the "convergence criteria") had been falsified.<p>
Without those falsifications, Greece's borrowing costs would have been much higher, and
those high borrowing costs would have prohibited taking on as much debt as had been taken
on to fuel Greece's GDP bubble. Added factors were that Greece no longer had the option
of devaluing creditors' assets by devaluing or inflating debt away, but at the core is
trustworthiness: do I, as a creditor, believe that my debtor will pay back their debt
to me in full? If I am 100% certain, rates are low, with every bit of doubt rates rise
until they are infinite, i.e. you can't borrow.<p>
Greece simply was never trustworthy, and for most of history this was well known, except
for the period between 2001 (Euro introduction) and 2008 (financial crisis). Of course,
it <em>could</em> have been known if the banks lending Greece money had bothered to look.
But they didn't look, which was reckless, beyond the lie that Goldman Sachs had helped
Greece sell, which was criminal.<p>
So what I think should have happened (aside from the debt never accumulating in the first
place because of people and institutions not lying and doing their jobs properly) is that
Greece should have defaulted on those debts, the banks that recklessly lent
that money should have taken the hit they deserved and then sued Goldman Sachs and the
Greek government(s) and the officials for the damages.<p>
What happened instead was the typical and sickening socialization of private losses,
while profits from those same transactions continued and continue to be privatized.<p>
At this point, we could have taken the economically correct Keynesian approach of
helping the Greek economy recover with the help of possibly even more debt, but
debt that becomes sustainable because GDP rises faster than debt and thus the
all important debt/GDP ratio falls.<p>
Instead we got the mess we're in now, with GDP down 25% from peak, though still
significantly higher than before the Euro. Why? As far as I can tell, a large
part of the reason is that no one in the EU trusts pretty much any Greek
government with a scheme that entails "you get stuff now, and you will
do the right things later". No one.<p>
The reason for this is that, in the context of the EU, successive Greek governments
have flouted every rule, broken every agreement, ignored every regulation on
the books. Consistently. Over the last 30 plus years. All while receiving
billions in EU funds that are contingent on compliance with those rules. Rules
that every other EU country had to abide by, and when non-compliance was
detected, funds were withheld. This includes the old countries as well
as the newest. Except Greece.<p>
For example, one of the largest pieces of the budget is the Common Agriculture
Policy (CAP), a system of farm subsidies. Every country is required to have
specific electronic reporting systems in order to receive CAP funds. The
new states all had to install them, even the poorest, and when they did
not, CAP funds were withheld. Except Greece, which to this day does
not have such a system.<p>
The plastic olive trees used to obtain additional farm subsidies are
legendary, but sadly not mythical. Having better controls would make
this sort of creative farming (and accounting) much more difficult.<p>
There are also requirements for good governance and tax collection standards,
for example you need to have a land registry in order to collect property
tax (and protect property rights). Greece does not have such a registry,
which is pretty novel for an ostensibly developed nation.<p>
They were required to have one. Didn't bother successive Greek governments.
At one point they got significant EU funds to build such a system (€100m
IIRC UPDATED: I initially recalled incorrectly that it was €400m, the correct figure is €100m). After a couple of years, an EU official was sent to check. At
first, the Greek authorities didn't know what he was talking about. Then
they remembered. Of course, not a thing had been done to create a registry.<p>
After being reminded that they had received (and accepted) €100m in funds
specifically dedicated to the purpose of building such a registry, they
offered to "refund" half the amount, €50m. The mind boggles.<p>
Of course, tax collection is a huge domestic problem in Greece, with
often huge wealth and income taxed at best theoretically. Real estate
plays a huge role in this, and this is why a land registry is not
popular with the elites. When the tax collectors come to these
quite fabulous houses, nobody there knows who the houses belong to.<p>
I still really like <a href="http://www.volker-pispers.de">Volker Pispers's</a> idea of tax collection via bulldozer:
knock at the first house on the block. If an owner can't be found, knock
it down. I am pretty certain tax compliance will improve markedly.<p>
Anyway, as far as I can see, Greek governments see flouting EU rules as
their birthright, as something that cannot possibly have consequences,
a view that has never been challenged until now. So I think their
indignation at actually being held accountable is quite real, and they
do see enforcement of rules as cruel and certainly unusual punishment,
because for them it is just that: unusual, unexpected, unfamiliar.<p>
And of course, EU institutions that let them get away with, well, everything
over so many years certainly share some of the blame. But only some,
not nearly as much as Greek politicians would like everyone to believe.<p>
When Syriza was voted into power, I was hopeful that things would
improve, but they almost immediately started placing their
relatives in well-paid positions, and apparently their proposals
are just as vague about finally taxing the actual significant wealth that
exists, and just as concrete about squeezing lower incomes to
enrage public opinion and effectively use these people as human
shields to protect the wealthy. I think I've used the word
"sickening" before.<p>
<h3>A Third Way (II)</h3>
Duncan writes:
<blockquote>
What should be on the table is a decision by Europe to strengthen the economic union by sharing the eurozone’s debt. While the particulars of the Greek situation sent them over the edge first in the financial meltdown of 2008, sharing a currency between states without sharing debt is unsustainable in the long term for the entire eurozone. This isn’t news.
</blockquote>
I confirmed with him that he meant "debt + fiscal" union. Yes. It is the only way a monetary
union can work, in the long run. This has been pointed out many times, by many people. Alas,
he adds:<p>
<blockquote>
The only surprising thing at this point is that states like Germany insist on keeping Greece under the weight of a debt that can’t ever be repaid.
</blockquote>
I take exception to that on several levels:
<ol>
<li>Nobody forced Greece to take on loans (from private banks) that it knew it couldn't repay</li>
<li>Nobody forced Greece to cheat in order to obtain those loans, which it would never have had
access to without cheating.</li>
<li>Nobody forced Greece to avoid certain default, with all the awful consequences that we are now
only beginning to see, by accepting a bailout using public/taxpayer money. Which came with
certain conditions. If you don't like the conditions, don't take the money.</li>
</ol>
That said, I don't think anyone in the remaining Eurozone governments has any illusions about
Greece avoiding having to default on at least part of that debt. After all, said governments already negotiated
a partial default, with both public and private lenders taking a hit.<p>
However, those governments have a justified interest in the same thing not happening again
and again. The easiest way to do this would be to have a fiscal union, with member states
giving up some chunk of their sovereignty over spending. However, national governments
currently balk at this, most vehemently the Greek government with their referendum and
"national pride" demagoguery.<p>
Given that fiscal union is not on the table, the other alternative is stringent reforms.
And given the simple fact that Greek governments have not been the least bit trustworthy,
including this one, the only way to get those reforms is "under the gun". This is
unfortunate, and economically stupid, but there does not appear to be any alternative
other than letting Greece continue to spend as it wishes with others picking up the
tab from time to time. And that is also not an option.<p>
So yeah, it's complicated, and life sometimes sucks, especially for all the people
that get to suffer from the consequences of their governments' actions.<p>
<h3>The Seldon Crisis You Ordered Is Ready, Where Would You Like It Delivered?</h3>
Coming back to the Euro,
my view of the Euro project was always that it was never really about monetary union. The people who
instigated it were dedicated Europeans, a view I share. They wanted political union. They knew
that political union was not possible at the time. So they created monetary union, which was
politically possible, knowing that over time it would create a situation that would force
political union as the only reasonable alternative. A classic <a href="https://en.wikipedia.org/wiki/Seldon_Crisis">Seldon crisis</a>.
The only weird thing is that the crisis is here and yet it is not being used to create the
fiscal and political union as one would expect. There were <a href="http://www.spiegel.de/wirtschaft/soziales/spiegel-eu-plan-fuer-eine-echte-fiskalunion-a-837949.html">plans</a>, it seems, but these
were not acted upon. Is it because our current crop of leaders are too fearful, too used
to just waiting it out, whatever "it" is? Are they too timid to take the bold leap required
to create something great? Or maybe the crisis just isn't deep enough yet? I truly don't
know, and am completely flabbergasted.<p>
Of course, things are always a bit more complicated, for example the Euro was also a precondition
for German unification, demanded by various European "partners", particularly France in order
to limit German power post-unification. The fact that the Euro is now described as an evil
German master plan to subjugate the proud southern European nations makes this somewhat ironic,
or more likely demonstrates how off-base and populist those types of claims are.<p>
<h3>The US angle</h3>
You've probably noticed the central role of US investment bank and favorite vampire squid
(sorry for the repetition, I just can't resist writing that...) Goldman Sachs in this
saga: not only were they absolutely instrumental in creating the problem, they then
blabbed about it at the exact worst possible moment, in the middle of the worst financial
crisis the world has seen in the last 80 years or so.<p>
While I am not much into conspiracy theories, this is just too much of a coincidence to be...
a coincidence, especially when you keep in mind the revolving door between GS and various
parts of the US government and the fact that the US is not very fond of the EU, and
absolutely hates the Euro.<p>
Taken together, the EU is the world's largest economic power, with more people than the US
and a greater GDP. If and when it gets its political act together, it will also be a significant
political power, with 2 seats on the security council, a nuclear arsenal, unparalleled economic
might and much less hatred for it in the world than the US. Having the EU get its political
act together is not in the US's national interest, and as the NSA affair amply demonstrated
(not that it was ever in dispute), there are no friendships at this level, just national interests
that may coincide from time to time.<p>
While the EU as such is a problem for US interests, the Euro presents an actual real danger.
Not only is it a central piece of getting to political union (not just, but especially if
my idea that it was intended to hasten the process is true), it has started to become an alternative
to the dollar as a petro-, trade- and reserve-currency.<p>
Why is this problematic? One of the mysteries of the US economy is how it has been able to
have huge trade deficits, expand the money supply ("print money") and do all sorts of other
things, without ever being punished by the usual consequences: high inflation and high
interest rates.<p>
While part of the answer is the "confidence fairy" that Professor Krugman ridicules in
other contexts (people just think that the US will continue to be a good investment),
a huge part is the use of dollars by other countries, which soaks up the dollars you
just printed and spent.<p>
An alternative reserve-/trading-currency means that those dollars will not just no
longer be soaked up, but even worse existing reserves will likely be released, so
all those dollars flood the market and cause the effects that were previously
avoided. Which could be financially disastrous.<p>
Marcel Weiherhttp://www.blogger.com/profile/11651004661887001433noreply@blogger.com1tag:blogger.com,1999:blog-8397311766319215218.post-54183621649678409442015-07-02T23:25:00.001-07:002015-07-02T23:25:29.692-07:00When Is My Unit Test Coverage Adequate?<ol>
<li>When you are not afraid of changing any of the code.</li>
<li>When you are comfortable with releasing as soon as the tests are green (i.e. always).</li>
<li>Tertium non datur :-)</li>
</ol>
<h4>Why unit tests and not integration test?</h4>
<a href="https://vimeo.com/80533536">Integrated Tests Are a Scam</a> [vimeo]
<h4>EOM</h4>
Marcel Weiherhttp://www.blogger.com/profile/11651004661887001433noreply@blogger.com0tag:blogger.com,1999:blog-8397311766319215218.post-32560660831129861992015-06-26T06:54:00.001-07:002015-06-26T06:54:21.254-07:00Guys, "guys" is perfectly fine for addressing diverse groupsWith the Political Correctness police gaining momentum again after being laughed out of the 80ies, the word "guys"
has apparently come under attack as being "non-inclusive". After discussing the topic a bit on twitter,
I saw Peter Hosey's <a href="http://boredzo.org/blog/archives/2015-06-19/alternatives-to-guys">post</a>
declaring the following:<p>
<blockquote>
"when you’re addressing a mixed-gender or unknown-gender group, you should not use the word 'guys'."
</blockquote>
As evidence, he references a <a href="http://jvns.ca/blog/2013/12/27/guys-guys-guys/">post</a> by Julia Evans
purportedly showing that for most uses, people perceive "guys" to be gender-specific. Here is the graph
of what she found:<p>
<a href="http://jvns.ca/blog/2013/12/27/guys-guys-guys/"><img src="http://jvns.ca/images/guys-guys-guys-chart.png" alt="" title="" border="0" width="500" height="" /></a>
<p>
What I find interesting is that the data show exactly the opposite of Peter's claim. Yes, most of the usage
patterns are perceived as gender-specific by more people than not, but all of those are third person.
The one case that is second person plural, the case of addressing a group of people, is <em>overwhelmingly</em>
perceived as being gender neutral, with women perceiving it
as gender neutral slightly more than men, but both groups at over 90%.<p>
This matches with my intuition, or to be more precise, I find it somewhat comforting that my intuition about
this appears to still match reality. I find "hey guys" neutral (2nd person plural), whereas
"two guys walked into a store" is male.<p>
<h3>Prescription vs. Description</h3>
Of course, they could have just checked their friendly local dictionary, for example Webster's online:
<blockquote>
guy (noun)<br>
Definition of GUY
<ol>
<li>
often capitalized : a grotesque effigy of Guy Fawkes traditionally displayed and burned in England on Guy Fawkes Day</li>
<li>
chiefly British : a person of grotesque appearance </li>
<li>
a : man, fellow<br>
b : person —used in plural to refer to the members of a group regardless of sex &lt;saw her and the rest of the guys&gt;</li>
<li>
individual, creature &lt;the other dogs pale in comparison to this little guy&gt;</li>
</ol>
</blockquote>
So there we go: "used in plural to refer to the members of a group regardless of sex". It is important
to note that unlike continental dictionaries (German, French), which prescribe correct usage, the
anglo-saxon tradition is descriptive, meaning actual use is documented. In addition, my recollection
is that definitions are listed chronologically, with the older ones first and the newer ones last. So the
word's meaning is shifting to be more gender neutral. This is called progress.<p>
What I found interesting is that pointing out the dictionary definition was perceived as prescriptive,
that I was trying to force an out-of-touch dictionary definition on a public that perceives the
word differently. Of course, the opposite is the case: a few people are trying to force their
perception based on outdated definitions of the word on a public and a language that has moved on.<p>
<h3>Language evolution and the futility of PC</h3>
Speaking of Anglo-Saxons and language evolution: does anyone feel the oppression when ordering
<em>beef</em> or <em>pork</em>? Well, you should. These words for the meat of certain animals
were introduced to English in 1066 with the conquering Normans. The french words for the animals
were now used to describe the food the upper class got served, whereas the anglo-saxon words shifted
to denote just the animals that the peasants herded. Yeah, and medieval oppression was actually
real, unlike some other "oppression" I can think of.<p>
Of course, we don't know about that today, and the words don't have those associations anymore,
because language just shifts to adapt to and reflect reality. Never the other way around,
which is why the PC brigade's attempts to affect reality by policing language is so misguided.<p>
Take the long history of euphemisms for "person with disability". It started out as "cripple",
but that word was seen as stigmatizing, so it was replaced with "handicapped", because it wasn't
a defect with the person, but a handicap they had. Then that word got to be stigmatized
and we switched to "disabled". Then "person with disabilities", "special", "challenged", "differently
abled". And so on and so forth. The problem is that it never works: the stigma moves to the new
word that was chosen because it was so far stigma-free, so nowadays calling someone "special" is
no longer positive. And calling homeless people "the temporarily underhoused" because "home is
wherever you are" also never helped.<p>
So leave language be and focus on changing the underlying reality instead. All of this does not mean
that you can't
be polite: if someone feels offended by being addressed in a certain way, by all means accommodate
them and/or come to some understanding.<p>
Let the Hunting begin :-))<p>
Marcel Weiherhttp://www.blogger.com/profile/11651004661887001433noreply@blogger.com2tag:blogger.com,1999:blog-8397311766319215218.post-1239295176752202912015-06-16T16:25:00.001-07:002015-06-16T22:05:11.615-07:00Protocol-Oriented Programming is Object-Oriented ProgrammingCrusty here, I just saw that my young friend Dave Abrahams gave a <a href="https://developer.apple.com/videos/wwdc/2015/?id=408">talk</a> that was based on a little keyboard session we had just a short while ago. Really sharp
fellow, you know, I am sure he'll go far someday, but that's the problem with young folk these days: they
go rushing out telling everyone what they've learned when the lesson is only one third of the way through.<p>
You see, I was trying to impart some wisdom on the fellow using the old Hegelian dialectic: thesis, antithesis,
synthesis. And yes, I admit I wasn't completely honest with him, but I swear it was just a little white lie
for a good educational cause. You see, I presented ADT (Abstract Data Type) programming to him and called
it OOP. It's a little ruse I use from time to time, and decades of Java, C++ and C# have gone a long way
to making it an easy one.<p>
<h3>Thesis</h3>
So the thesis was simple: we don't need all that fancy shmancy OOP stuff, we can just use old fashioned
structs 90% of the time. In fact, I was going to show him how easy things look in MC68K assembly
language, with a few macros for dispatch, but then thought better of it, because he might have seen
through my little educational ploy.<p>
Of course, a lot of what I told him was nonsense; for example, OOP isn't at all about subclassing. The
guy who coined the term, Alan I think, <a href="http://userpage.fu-berlin.de/~ram/pub/pub_jf47ht81Ht/doc_kay_oop_en">wrote</a>: "So I decided to leave out inheritance as a built-in feature until I understood it better." So not only is inheritance <em>not</em> the defining feature of OOP as I let on, it actually
wasn't even in the original conception of the thing that was first called "object-oriented programming".<p>
Absolute reliance on inheritance and therefore structural relationships is, in fact, a defining feature
of ADT-oriented programming, particularly when strong type systems are involved. But more on that later.
In fact, OOP best practices have always (since the late 80ies and early 90ies) called for composition
to be used for known axes of customization, with inheritance used for refinement, when a component needs
to be adapted in a more ad-hoc fashion. If that knowledge had filtered down to young turks writing
their <a href="http://www.metaobject.com/papers/Diplomarbeit.pdf">master's thesis</a> back in what, 1997,
you can rest assured that the distinction was well known and not exactly rocket science.<p>
Anyway, I kept all that from Dave in order to really get him excited about the idea I was peddling to
him, and it looks like I succeeded. Well, a bit too well, maybe.<p>
<h3>Antithesis</h3>
Because the idea was really to first get him all excited about not needing OOP, and then turn around
and show him that all the things I had just shown him in fact <em>were</em> OOP. And still are,
as a matter of fact. Always have been. It's that sort of confusion of conflicting, true-seeming
ideas that gets the gray matter going. You know, "sound of one hand clapping" kind of stuff.<p>
The reason I worked with him on a little graphics context example was, of course, that I had written
a graphics context wrapper on top of CoreGraphics a good three years ago. In Objective-C. With a <a href="https://github.com/mpw/MPWDrawingContext/blob/master/DrawingContext/MPWDrawingContext.h">protocol</a>
defining the, er, <a href="http://www.cs.virginia.edu/~evans/cs655/readings/smalltalk.html">protocol</a>. It's called <a href="https://github.com/mpw/MPWDrawingContext">MPWDrawingContext</a>
and lives on github, but I also <a href="http://blog.metaobject.com/2012/06/pleasant-objective-c-drawing-context.html">wrote</a> about it, <a href="http://blog.metaobject.com/2012/10/coregraphics-patterns-and-resolution.html">showed</a> how protocols combine with blocks to make CoreGraphics patterns
easy and intuitive to use and how to combine this type of drawing context with a more advanced
OO language to make live <a href="https://www.youtube.com/watch?v=sypkOhE-ufs">coding/drawing</a> possible.
<iframe width="540" height="290" src="https://www.youtube.com/embed/sypkOhE-ufs" frameborder="0" allowfullscreen></iframe>
And of course this is real <em>live</em> programming, not the "not-quite-instant replay" programming that
is all that Swift playgrounds can provide.
<p>
The simple fact is that actual Object Oriented Programming <em>is</em> Protocol Oriented Programming,
where <em>Protocol</em> means a set of messages that an object understands. In a true and pure object
oriented language like Smalltalk, it is all that can be, because the only way to interact with an
object is to send messages. Even if you do simple metaprogramming like checking the class, you are
still sending a message. Checking for object identity? Sending a message. Doing more intrusive
metaprogramming like "directly" accessing instance variables? Message. Control structures like
<b>if</b> and <b>while</b>? Message. Creating ranges? Message. Iterating? Message.
Comparing object hierarchies? I think you get the drift.<p>
So all interacting is via messages, and the set of messages is a protocol. What does that make
OO? Say it together: Protocol Oriented Programming.<p>
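To make "protocol" concrete in Objective-C terms: a protocol is nothing more than a named set of messages, and client code can be written purely against those messages. Here is a minimal sketch; the protocol and class names are made up for illustration, this is not the actual MPWDrawingContext protocol:
<blockquote><pre>
#import &lt;Foundation/Foundation.h&gt;

// A protocol is just a named set of messages an object promises to respond to.
@protocol Drawable
- (void)drawOn:(id)context;
@end

// Any class can adopt it, no matter where it sits in the inheritance hierarchy.
@interface Circle : NSObject &lt;Drawable&gt;
@end
@implementation Circle
- (void)drawOn:(id)context { NSLog(@"circle drawing on %@", context); }
@end

// Client code depends only on the protocol, i.e. on messages, not on classes.
static void render(id &lt;Drawable&gt; shape, id context)
{
    [shape drawOn:context];
}
</pre></blockquote>
Whether conformance is checked statically by the compiler (as with the <code>id &lt;Drawable&gt;</code> annotation) or dynamically with <code>respondsToSelector:</code> is an implementation detail; the programming model is messages either way.<p>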
<h3>Synthesis</h3>
So we don't need objects when we have POP, but at the same time POP is OOP. Confused? Well,
that's kind of the point of a good dialectic argument.<p>
One possible solution to the conflict could be that we don't need any of this stuff. C, FORTRAN
and assembly were good enough for me, they should be good enough for you. And that's true to
a large extent. Excellent software was written using these tools (and ones that are much, much
worse!), and tooling is not the biggest factor determining success or failure of software projects.<p>
On the other hand, if you want to look beyond what OOP has to offer, statically typed ADT programming
is not the answer. It is the <em>question</em> that OOP answered. And statically typed ADT programming
is not Protocol Oriented Programming, OOP is POP. Repeat after me: OOP is POP, POP is OOP.<p>
To go beyond OOP, we actually need to go beyond it, not step back in time to the early 90ies, forget
all we learned in the meantime and declare victory. My personal take is that our biggest challenges
are in "the big", meaning programming in the large. How to connect components together in a meaningful,
tractable and understandable fashion. Programming the components is, by and large, a solved problem,
making it a tiny bit better may make us feel better, but it won't move the needle on productivity.<p>
Making architecture malleable, user-definable and thus a first class citizen of our programming
notation, now that is a worthwhile goal and challenge.<p>
Crusty out.<p>
As always, comments welcome here and on <a href="https://news.ycombinator.com/item?id=9729305">HN</a>.<p>
Marcel Weiherhttp://www.blogger.com/profile/11651004661887001433noreply@blogger.com9tag:blogger.com,1999:blog-8397311766319215218.post-37731169938712271972015-06-07T12:12:00.001-07:002015-06-07T14:32:58.100-07:00Steve Jobs on SwiftNo, there is no actual evidence of Steve commenting on Swift. However, he did say something about the
road to <a href="http://blog.metaobject.com/2014/04/sophisticated-simplicity.html">sophisticated simplicity</a>.<p>
In short, at first you think the problem is easy because you don't understand it. Then you begin to understand
the problem and everything becomes terribly complicated. Most people stop there, and Apple used to make fun
of the ones that do.<p>
<iframe width="420" height="260" src="https://www.youtube.com/embed/IH_NlWbHrIk" frameborder="0" allowfullscreen></iframe> <p>
To me this is the perfect visual illustration of the <a href="http://www.quora.com/Which-features-overcomplicate-Swift-What-should-be-removed/answer/Rob-Rix">crescendo of special cases</a> that is Swift.<p>
The answer to this, according to Steve, is "[..] a few people keep burning the midnight oil and finally understand the underlying principles of the problem and come up with an elegantly simple solution for it. But very few people go the distance to get there."<p>
Apple used to be very much about going that distance, and I don't think Swift lives up to that standard.
That doesn't mean
it's all bad or that it's completely irredeemable, there are good elements. But they stopped at sophisticated
complexity. And "well, it's not all bad" is not exactly what Apple stands for or what we as Apple customers
expect and, quite frankly, deserve. And had there been a Steve in Dev Tools, he would have said: do it
again, this is not good enough.<p>
As always, comments welcome here or on <a href="https://news.ycombinator.com/item?id=9675997">HN</a><p>
Marcel Weiherhttp://www.blogger.com/profile/11651004661887001433noreply@blogger.com0tag:blogger.com,1999:blog-8397311766319215218.post-77166621407298610182015-05-23T12:42:00.001-07:002015-05-23T14:33:15.580-07:00I am jealous of SwiftReally, I am. They get to do everything wrong there is in language design and yet the results get fawned upon and
the obvious flaws not just overlooked but turned into their opposite.<p><h4>Language Design</h4>
What do I mean? Well, primarily this:
<blockquote>
Swift is a crescendo of special cases stopping just short of the general; the result is complexity in the semantics, complexity in the behaviour (i.e. bugs), and complexity in use (i.e. workarounds).
<footer>
- <cite> Rob Rix, <a href="http://www.quora.com/Which-features-overcomplicate-Swift-What-should-be-removed/answer/Rob-Rix">Which Features Overcomplicate Swift</a></cite></footer></blockquote>
The list Rob compiled is impressively well-researched. Although "special cases stopping just short of the general"
is for me enough on its own (it is THE cardinal sin of language design), I would add "needlessly replacing the keyword message syntax
at exactly the point where it was no longer an issue and adding it back as an abomination of accidental complexity
the world has never seen before". Let's see what Gilad Bracha, an actual programming language designer, has to say
on the keyword syntax:
<blockquote>
This notation makes it impossible to have an arity error when calling a method. In a dynamically typed language, this is a huge advantage.<p>
I am keenly aware that this syntax is unfamiliar to most programmers, and is a potential barrier to adoption. However, it improves usability massively. Furthermore, a growing number of programmers are learning this notation because of its use in Objective-C (e.g., the iOS APIs).
<footer>
- <cite> Gilad Bracha, <a href="http://bracha.org/newspeak-spec.pdf">Newspeak Language Specification (2015)</a></cite></footer></blockquote>
Abandoning keyword syntax at this point in time takes "snatching defeat from the jaws of victory" to a whole
new and exciting level!<p>
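For the record, here is what keyword syntax buys you, using the stock <code>NSMutableArray</code> API (just a small fragment for illustration):
<blockquote><pre>
id newEntry = @"new entry";
NSMutableArray *entries =
    [NSMutableArray arrayWithObjects:@"a", @"b", @"c", nil];

// Keyword syntax: the selector names every argument, so the call site documents
// itself, and getting the arity wrong produces a different, unknown selector.
[entries insertObject:newEntry atIndex:1];

// The positional equivalent would read roughly entries.insert(newEntry, 1):
// nothing at the call site says what the 1 means or which argument is which.
</pre></blockquote>
<p>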
Or the whole idea of having every arithmetic operation be a potential crash point, despite the fact that
proper <a href="http://en.wikipedia.org/wiki/Numerical_tower">numeric towers</a> have been around for many decades and decently optimized (certainly no slower than unoptimized Swift).<p>
And yet, Rob for example writes that the main culprit for Swift's complexity is Objective-C, which I find
somewhat mind-boggling. After all, the requirement for Objective-C interoperability couldn't exactly
have come as a last minute surprise foisted on an existing language. Folks: if we're designing a
replacement language for Apple's Cocoa frameworks, Objective-C compatibility needs to be designed
in from the beginning and not added in as an afterthought. And if you don't design your language
to be at odds with the frameworks you will be supporting, you will discover that you can get a
much cleaner design.<p><h4>Performance</h4>
The situation is even more bizarre when it comes to performance. For example, here's a talk
titled <a href="http://realm.io/news/swift-summit-joseph-lord-performance/">How Swift is Swift</a>.
The opening paragraph declares that "Swift is designed to be fast, very fast", yet a few
paragraphs (or slides) down, we learn that debug builds are often 100 times slower than
optimized builds (which themselves don't really rival C).<p>
Sorry, that's not the sign of a language that's "designed to be fast". Those are the
characteristics of a language design that is inherently super, super slow, and that
requires leaning heavily on the optimizer to get performance to an acceptable level.<p>
And of course, the details bear that out: copy semantics are usually expensive, so they need the
optimizer to elide those copies in the majority of cases. Same with ARC, which is built
in and also requires the optimizer to be effectively clairvoyant (and: on) in order not
to suffer <a href="http://blog.metaobject.com/2014/06/compiler-writers-gone-wild-arc-madness.html">30x regressions</a>.<p>
Apart from the individual issues, the overriding one is that Swift's performance model is extremely
opaque (100x for turning the optimizer on). Having the optimizer do a heroic job of optimizing
code that we don't care about is of no use if we can't figure out why the code we do care about
is slow or how to make it go fast.
<h4>Jealousy</h4>
So what makes little amateur language designer me jealous is that I really do try to get these things
right and make sure the design is parsimonious, while these guys just joyfully ignore every rule
in the book, and then trample on said book in ways that should get the language designers' guild
to revoke their license, and yet there is almost universal fawnage.<p>
Whoever said life was fair?<p>
As always, comments welcome here or on <a href="https://news.ycombinator.com/item?id=9594091">HN</a><p>
Marcel Weiherhttp://www.blogger.com/profile/11651004661887001433noreply@blogger.com7tag:blogger.com,1999:blog-8397311766319215218.post-35074018492361217522015-04-05T04:29:00.001-07:002015-04-05T04:33:39.497-07:00React.native isn'tWhile we're on the subject of terminological <a href="http://blog.metaobject.com/2015/04/model-widget-controller-mwc-aka-apple.html">disasters</a>, Facebook's <a href="http://www.reactnative.com">react.native</a> seems to be doing a good job of muddling the waters.<p>
While some parts make use of native infrastructure, a lot do not:<p><ol><li>uses views as drawing results, rather than as drawing sources,</li><li>has a parallel component hierarchy,</li><li>ListView isn't UITableView (and from what I <a href="https://github.com/facebook/react-native/issues/143">read</a>, can't be),</li><li>even buttons aren't UIButton instances,</li><li>doesn't use the responder chain, but implements something "similar", and finally,</li><li>oh yes, JavaScript</li></ol><p>
None of this is necessarily bad, but whatever it is, it sure ain't "native".<p>
What's more, the rationale given for React and the Components framework that was also just released
echoes the <a href="http://blog.metaobject.com/2015/04/model-widget-controller-mwc-aka-apple.html">misunderstandings</a> Apple shows about the MVC pattern:<p><a href="https://youtu.be/XhXC4SKOGfQ?t=1762"><img src="http://lh6.ggpht.com/-PGprWtcg2f8/VSEcoLFQNFI/AAAAAAAAARs/WJtDeeQUGvQ/mvc-data-event-flow-fb-components.png?imgmax=800" alt="Mvc data event flow fb components" title="mvc-data-event-flow-fb-components.png" border="0" width="416" /></a><p>
Just as a reminder: what's shown here, with controllers pushing data to views at any time, is <em>not</em> MVC, unless you use that to mean "Massive View Controller".<p>
In Components and react.native, this "pushing of mutable state to the UI" is supposed to be replaced by "a (pure) function of the model".
That's what a View (UIView or NSView) is, and what <code>drawRect:</code> does. So next time you
are annoyed by pushing data to views, instead of creating a whole new framework, just drag a Custom View
from the palette into your UI and then implement <code>drawRect:</code>. Creating views as a result
of drawing (and/or turning components into view state mutations) is <em>more</em> stateful than <code>drawRect:</code>,
not less.<p>
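To make that concrete, here is a minimal sketch (assuming ARC; the GaugeModel/GaugeView names are made up for illustration, this is not any existing API): the view holds a reference to its model, <code>drawRect:</code> derives the pixels from the model on demand, and "something changed" is nothing more than <code>setNeedsDisplay</code>:
<blockquote><pre>
#import &lt;UIKit/UIKit.h&gt;

// Illustrative sketch: the view is a (re)drawing function of its model.
@interface GaugeModel : NSObject
@property (nonatomic) CGFloat level;    // 0.0 .. 1.0
@end
@implementation GaugeModel
@end

@interface GaugeView : UIView
@property (nonatomic, strong) GaugeModel *model;
@end

@implementation GaugeView
- (void)drawRect:(CGRect)rect {
    // Pull the data from the model and render it; nothing is pushed in.
    CGRect bar = self.bounds;
    bar.size.width *= self.model.level;
    [[UIColor blueColor] setFill];
    UIRectFill(bar);
}
- (void)modelChanged {
    // "Something changed" -- just ask to be redrawn.
    [self setNeedsDisplay];
}
@end
</pre></blockquote>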
Again, that doesn't mean it's bad or useless, it just means it isn't what it says on the tin. And that's
a problem. From what I've heard so far, the most enthusiastic response to react.native has come from
web developers who can finally code "native" apps without learning Objective-C/Swift or Java. That may
or may not be useful (past experience suggests not), but it's something completely different from what
the claims are.<p>
Oh and finally, the "react" part seems to refer to "one-way reactive data flow", an even bigger
terminological disaster that I will examine in a future post.<p>
As always, comments welcome here or at <a href="https://news.ycombinator.com/item?id=9323749">HN</a><p>Marcel Weiherhttp://www.blogger.com/profile/11651004661887001433noreply@blogger.com0tag:blogger.com,1999:blog-8397311766319215218.post-36921114202904866752015-04-03T05:42:00.001-07:002017-11-23T05:23:46.241-08:00Model Widget Controller (MWC) aka: Apple "MVC" is not MVCI probably should have taken more notice that time that after my question about why a specific piece
of the UI code had been structured in a particular way,
one of my colleagues at 6wunderkinder informed
me that Model View Controller meant the View must not talk to the model, and instead the Controller
is responsible for mediating all interaction between the View and the Model. It certainly didn't
match the definition of MVC that I knew, so I checked the Wikipedia page on MVC just in case I had gone
completely senile, but it checked out with what I remembered:<ol><li>the controller updates the model,</li><li>the model notifies the view that it has changed, and finally</li><li>the view updates itself by talking to the model</li></ol>
(The labeling on the graphic on the Wikipedia page is a bit misleading, as it suggests that the model updates
the view, but the text is correct.)<p>
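Sketched in Objective-C (assuming ARC; the classes and the notification name are made up for illustration, this is not any particular framework's API), the three steps look roughly like this:
<blockquote><pre>
#import &lt;Foundation/Foundation.h&gt;

static NSString * const ModelDidChangeNotification = @"ModelDidChangeNotification";

@interface CounterModel : NSObject
@property (nonatomic) NSInteger count;
@end
@implementation CounterModel
- (void)setCount:(NSInteger)count {
    _count = count;
    // 2. the model notifies observers that it has changed
    [[NSNotificationCenter defaultCenter] postNotificationName:ModelDidChangeNotification
                                                        object:self];
}
@end

@interface CounterView : NSObject
@property (nonatomic, strong) CounterModel *model;   // the view knows its model
@end
@implementation CounterView
- (instancetype)initWithModel:(CounterModel *)model {
    if ((self = [super init])) {
        _model = model;
        [[NSNotificationCenter defaultCenter] addObserver:self
                                                 selector:@selector(modelChanged:)
                                                     name:ModelDidChangeNotification
                                                   object:model];
    }
    return self;
}
- (void)modelChanged:(NSNotification *)note {
    // 3. the view updates itself by asking the model for the data it needs
    NSLog(@"view redraws: count is now %ld", (long)self.model.count);
}
- (void)dealloc {
    [[NSNotificationCenter defaultCenter] removeObserver:self];
}
@end

int main(void) {
    @autoreleasepool {
        CounterModel *model = [CounterModel new];
        CounterView *view = [[CounterView alloc] initWithModel:model];
        // 1. the controller updates the model (main plays controller here)
        model.count = 42;
        (void)view;
    }
    return 0;
}
</pre></blockquote>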
What I <em>should</em> have done, of course, is keep asking "<a href="http://en.wikipedia.org/wiki/5_Whys">Why</a>?", but I didn't, my excuse being that we were
under pressure to get the Wunderlist 3.0 release out the door. Anyway, I later followed up on some of
my confusion about both React.native and ReactiveCocoa (more on those in a later post) and found the
following incorrect diagram in a Ray Wenderlich tutorial on ReactiveCocoa and MVVM.<p><a href="http://www.raywenderlich.com/74106/mvvm-tutorial-with-reactivecocoa-part-1"><img src="https://koenig-media.raywenderlich.com/uploads/2014/06/MVCPattern-2.png" alt="" title="" border="0" width="400" height="" /></a><p>Hmm...that's the same confusion that my colleague had. The plot thickens as I re-check Wikipedia just
to be sure. Then I had a look at the original <a href="https://heim.ifi.uio.no/~trygver/themes/mvc/mvc-index.html">MVC papers</a> by Trygve Reenskaug, and yes:<p><blockquote>
A view is a (visual) representation of its model. It would ordinarily highlight certain attributes of the model and suppress others. It is thus acting as a presentation filter.
A view is attached to its model (or model part) and gets the data necessary for the presentation from the model by asking questions.
</blockquote>
<p>The 1988 JOOP article "MVC Cookbook" also confirms:<p><img src="http://lh6.ggpht.com/-aivUpwPPMDQ/VR6Ks4Nka7I/AAAAAAAAARU/nAQ7qzCVbv0/MVC-Interaction-Krasner-88.png?imgmax=800" alt="MVC Interaction Krasner 88" title="MVC-Interaction-Krasner-88.png" border="0" width="420" /><p>
So where is this incorrect version of MVC coming from? It turns out, it's in the <a href="https://developer.apple.com/library/mac/documentation/General/Conceptual/DevPedia-CocoaCore/MVC.html">Apple documentation</a>, in the overview section!<p><a href="https://developer.apple.com/library/mac/documentation/General/Conceptual/DevPedia-CocoaCore/MVC.html"><img src="http://lh6.ggpht.com/-4gNltHJOeOA/VR6KsbR-QyI/AAAAAAAAARM/3p9b4g8LTVI/model_view_controller.jpg?imgmax=800" alt="Model view controller" title="model_view_controller.jpg" border="0" width="400" /></a><p>I have to admit that I hadn't looked at this at least in a while, maybe ever, so you can imagine my surprise
and shock when I stumbled upon it. As far as I can tell, this architectural style comes from having
self-contained widgets that encapsulate very small pieces of information such as simple strings, booleans
or numbers. The MVC architecture was not intended for these kinds of small widgets:
<blockquote>
MVC was conceived as a general solution to the problem of users controlling a large and complex data set.
</blockquote>
If you look at the examples, the views are large both in size and in scope, and they talk to a complex model.
With a widget, there is no complex model, no filtering being done by the view. The widget contains its own
data, for example a string or a number. An advantage of widgets is that you can meaningfully assemble them in a tool like Interface Builder; with a more MVC-like large view, all you have in IB is a large blank space labeled
'Custom View'. On the other hand, I've had very good experiences with "real" (large view) MVC in creating
high performance, highly responsive user interfaces.<p>
Model Widget Controller (MWC), as I like to call it, is more tuned for forms and database programming, and
has problems with more reactive scenarios. As Josh Abernathy <a href="http://joshaber.github.io/2015/01/30/why-react-native-matters/">wrote</a>:<p><blockquote>
Right now we write UIs by poking at them, manually mutating their properties when something changes, adding and removing views, etc. This is fragile and error-prone. Some tools exist to lessen the pain, but they can only go so far. UIs are big, messy, mutable, stateful bags of sadness.<p></blockquote>
To me, this sadness is almost entirely a result of using MWC rather than MVC. In MVC, the "V" <em>is</em>
essentially a function of the model, you don't push or poke at it, you just tell it "something changed" and
it redraws itself.<p>
And so the question looms: is react.native just a result of (Apple's) misunderstanding (of) MVC?<p>
As always, your comments are welcome here or on <a href="https://news.ycombinator.com/item?id=9315518">HN</a>.
Marcel Weiherhttp://www.blogger.com/profile/11651004661887001433noreply@blogger.com12tag:blogger.com,1999:blog-8397311766319215218.post-40674129703672947512015-03-18T23:51:00.001-07:002015-03-19T03:30:53.688-07:00Why overload operators?One of the many things that's been puzzling me for a long time is why operator overloading
appears to be at the same time problematic and attractive in languages such as C++ and
now Swift. I know I certainly feel the same way, it's somehow very cool to massage the
language that way, but at the same time the thought of having everything redefined underneath
me fills me with horror, and what little I've seen and heard of C++ with heavy overloading
confirms that horror, except for very limited domains. What's really puzzling is that
binary messages in Smalltalk, which are effectively the same feature (special characters like *,+ etc. can be used as message names taking
a single argument), do not seem to have
either of these effects: they are neither particularly attractive to Smalltalk programmers,
nor are their effects particularly worrisome. Odd.<p>
Of course we simply don't have that problem in C or Objective-C: operators are built-in
parts of the language, and neither the C part nor the Objective part has a comparable
facility, which is a large part of the reason we don't have a useful number/magnitude
hierarchy in Objective-C and numeric/array libraries aren't that popular: writing
<code>[number1 multipliedBy:number2]</code> is just too painful.<p>
Some recent articles and talks that dealt with operator overloading in Apple's new
Swift language just heightened my confusion. But as is often the case, that
heightened confusion seems to have been the last bit of resistance that pushed through
an insight.<p>
Anyway, here is an example from <a href="https://twitter.com/mattt">NSHipster</a> Matt Thompson's excellent post on <a href="http://nshipster.com/swift-operators/">Swift Operators</a>,
an operator for exponentiation wrapping the <code>pow()</code> function:
<blockquote><pre>
func ** (left: Double, right: Double) -> Double {
return pow(left, right)
}
</pre></blockquote>
This is introduced as "the arithmetic operator found in many programming languages, but missing in Swift [is **]".
Here is an example of the difference:
<blockquote><pre>
pow( left, right )
left ** right
pow( 2, 3 )
2 ** 3
</pre></blockquote>
How come this is seen as an improvement (and to me it does seem like one)? There are two candidates for what the difference
might be: the fact that the operation is now written in infix notation and that it's using
special characters. Do these two factors contribute evenly or is one more important than
the other? Let's look at the same example in Smalltalk syntax, first with a normal keyword
message and then with a binary message (Smalltalk uses <code>raisedTo:</code>, but let's stick
with <code>pow:</code> here to make the comparison similar):
<blockquote><pre>
left pow: right.
left ** right.
2 pow: 3.
2 ** 3.
</pre></blockquote>
To my eyes at least, the binary-message version is no improvement over the keyword message,
in fact it seems somewhat worse to me. So the attractiveness of infix notation appears to
be a strong candidate for why operator overloading is desirable. Of course, having to use
operator overloading to get infix notation is problematic, because special characters generally
do not convey the meaning of the operation nearly as well as names, conventional arithmetic
aside.<p>
Note that dot notation for message sends/method calls does not really seem to have the same effect, even though it could technically also be considered
an infix notation:
<blockquote><pre>
left.pow( right)
left ** right
2.pow( 3 )
2 ** 3
</pre></blockquote>
There is more anecdotal evidence. In Chris Eidhof's <a href="http://realm.io/news/functional-programming-swift-chris-eidhof/">talk on functional swift</a>, scrub to around the 10 minute mark. There you'll find the
following code with some nested and curried function calls:
<blockquote><pre>
let result = colorOverlay( overlayColor)(blur(blurRadius)(image))
</pre></blockquote>
"This does not look to nice [..] it gets a bit unreadable, it's hard to see what's going on" is the quote.
Having a special compose function doesn't actually make it better
<blockquote><pre>
let myFilter = composeFilters(blur(blurRadius),colorOverlay(overlayColor))
let result = myFilter(image)
</pre></blockquote>
Infix to the rescue! Using the <code>|&gt;</code> operator:
<blockquote><pre>
let myFilter = blur(blurRadius) |&gt; colorOverlay(overlayColor)
let result = myFilter(image)
</pre></blockquote>
Chris is very fair-minded about this: he mentions that, due to the special characters involved,
you can't really infer what |&gt; means from looking at the code, you have to know, and having
many of these sorts of operators makes code effectively incomprehensible. Or as one Twitter
user put it:
<blockquote class="twitter-tweet" lang="en"><p>Every time I pick up Scala I think I&#39;m being trolled. How many different possible meanings of _=&gt;.+-&gt;_ can one language have??</p>&mdash; kellan (@kellan) <a href="https://twitter.com/kellan/status/510885623352553472">September 13, 2014</a></blockquote><script async src="//platform.twitter.com/widgets.js" charset="utf-8"></script>
Like most things
in engineering, it's a trade-off, though my guess is the trade-off would shift if we had
infix without requiring nonsensical characters.<p><h5>Built in</h5>
I do believe that there is another factor involved, one that is more psychologically subtle
having to do with the idea of language as a (pre-defined) thing vs. a mechanism for building
your own abstractions that I mentioned in my previous <a href="http://blog.metaobject.com/2014/09/no-virginia-swift-is-not-10x-faster.html">post on Swift performance</a>.<p>
In that post, I mentioned BASIC as the primary example of the former, a language as a
collection of built-in features, with C and Pascal as (early) examples of the latter,
languages as generic mechanisms for building your own features. However, those
latter languages don't treat all constructs equally. Specifically, all the operators
are built-in, not user-definable or -overridable. They also correspond closely
to those operations that are built into the underlying hardware and map to
single instructions in assembly language. In short: even in languages with
a strong "user-defined" component, there is a hard line between "user-defined"
and "built-in", and that line just happens to map almost 1:1 to the operator/function
boundary.<p>
Hackers don't like boundaries. Or rather: they love boundaries, the overcoming of.
I'd say that overloaded operators are particularly attractive (to hacker mentalities,
but that's probably most of us) in languages where this boundary between user-defined
and built-in stuff exists, because those overloaded operators let you cross
that boundary and do things normally reserved for language implementors.<p>
If you think this idea is too crazy, listen to John Siracusa, Guy English and Rene Ritchie
discussing Swift language features and operator overloading on <a href="http://www.imore.com/debug-49-siracusa-round-2">Debug Podcast Number 49, Siracusa Round 2</a>, starting at 45:45. I've transcribed a bit
below, but I really recommend you listen to the podcast, it's very good:
<ul><li>45:45 Swift is a damning comment on C++ [benefits without the craziness]</li><li>46:06 You can't do what Swift did [putting basic types in the standard library]
without operator overloading. [That's actually not true, because in Swift the operators are just syntax -> but it is exactly the idea I talked about earlier] </li><li> 47:50 If you're going to add something like regular expressions to the language ...
they should have some operators of their own. That's a perfect opportunity for
operator overloading </li><li> 48:07 If you're going to add features to the language, like regular expressions or so [..]
there is well-established syntax for this from other languages.
</li><li> 48:40 ...or range operators. Lots of languages have range operators these days.
Really it's just a function call with two different operands. [..]
You're not trying to be clever.
All you're trying
to do is make it natural to use features that exist in many other languages.
The thing about Swift is you don't have to add syntax to the language to do it.
Because it's so malleable.
If you're not adding a feature, like I'm adding regular expressions to the language.
If you're not doing that, don't try to get clever. Consider the features as existing
for the benefit of the expansion of the language, so that future features look natural
in it
and not bolted on even though technically everything is in a library. Don't think of
it as in my end user code I'm going to come up with symbols that combine my types in
novel ways, because what are you even doing there?
</li><li> 50:17 if you have a language like this, you need new syntax and new behavior to
make it feel natural. [new behavior strings array] and it has the whole struct thing. The
basics of the language, the most basic things you can do, have to be different,
look different and behave different for a modern language.
</li><li> 51:52 "using operator overloading to add features to the language" [again, not actually true]
</li></ul>
The interesting thing about this idea of a boundary between "language things" and "user things" is
that it does not align with the "operators" and "named operators" in Swift, but apparently it still
feels like it does, so "extending the language" is seen as roughly equivalent to "adding
some operators", with all the sound caveats that apply.<p>
In fact, going back to Matt Thompson's article from above, it is kind of odd that he talks
about the exponentiation operator as missing from the language, when in fact the operation is
available in the language. So if the operation crosses the boundary from function to
operator, then and only then does it become part of the language.<p>
In Smalltalk, on the other hand, the boundary has disappeared from view. It still exists in the
form of primitives, but those are well hidden all over the class hierarchy and not something
that is visible to the developer. So in addition to having infix notation available for
named operations, Smalltalk doesn't have the notion of something being "part of the language"
rather than "just the library" just because it uses non-sensical characters. Everything
is part of the library, the library is the language and you can use names or special
characters as appropriate, not because of asymmetries in the language.<p>
And that's why operator overloading is a thing even in languages like Swift, whereas it
is a non-event in Smalltalk.<p>
Marcel Weiherhttp://www.blogger.com/profile/11651004661887001433noreply@blogger.com2tag:blogger.com,1999:blog-8397311766319215218.post-61296864439490702702014-09-11T11:06:00.001-07:002014-09-18T23:08:25.539-07:00iPhone 6 Plus and The End of PixelsIt's been a long time coming. NeXTStep in 1989 featured DisplayPostscript, and therefore
a device independent imaging model that meant you did not specify graphics in pixels, but
rather in physical units. The default was a variant of the printer's point at 1/72nd of
an inch, which happened to be close to the typical pixel resolution of displays at the
time. However, 1 point never meant 1 pixel, it meant 1/72nd of an inch,
and the combination of floating point coordinates and transformation matrices
meant you could use pretty much any unit you wanted. When NeXT bought Apple, it
brought this imaging model with it, although with some modifications due to Adobe
intransigence about licensing and the addition of anti-aliasing.<p>
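As a small sketch of what "any unit you wanted" means in practice (assuming a Quartz context handed to you in the usual point-based coordinate system, for example inside <code>drawRect:</code>): scale the CTM once and draw in your unit of choice, here millimeters:
<blockquote><pre>
#import &lt;UIKit/UIKit.h&gt;

// Sketch: draw in millimeters by scaling the current transformation matrix.
// Assumes the context starts out in points (1/72 inch), as Quartz contexts do.
static void DrawTickMarksInMillimeters(CGContextRef ctx) {
    const CGFloat pointsPerMillimeter = 72.0 / 25.4;
    CGContextSaveGState(ctx);
    CGContextScaleCTM(ctx, pointsPerMillimeter, pointsPerMillimeter);
    // From here on, coordinates are millimeters, not pixels and not points.
    CGContextSetLineWidth(ctx, 0.25);
    for (int mm = 0; mm &lt;= 50; mm += 10) {
        CGContextMoveToPoint(ctx, mm, 0);
        CGContextAddLineToPoint(ctx, mm, 5);
    }
    CGContextStrokePath(ctx);
    CGContextRestoreGState(ctx);
}
</pre></blockquote>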
However, despite the device-independent APIs, we still have pixel-based content,
and "pixel-accurate" graphics. This has made less and less sense over time,
with retina displays making pixel-accuracy moot (no more screen fonts!), scaled
modes making it impossible, and both iOS 7 and OS X 10.10 going for a more
geometric look. Still, the design community has resisted, talking about
@3x pixel art etc.<p>
No more.<p>
The iPhone 6 Plus has a 1920x1080 panel, but the simulator renders at 3x. These
two resolutions don't match and so the pixels will need to be downsampled to
the display resolution. Whether that is accomplished by downsampling pixel art
(which happens automagically with Quartz and the proper device transform set)
or as a separate step that downsamples the entire rendered framebuffer doesn't
matter (much). Either way, there are no more "pixel perfect" pre-rendered
designs.<p>
Device-independent graphics, here we come at last.
We're only a quarter century late.<p><b>Update</b>: "Its 401 PPI display is the first display I’ve ever used on which, no matter how close I hold it to my eyes, <em>I can’t perceive the pixels.</em> " - <a href="http://daringfireball.net/2014/09/the_iphones_6">John Gruber</a> (emphasis mine)
Marcel Weiherhttp://www.blogger.com/profile/11651004661887001433noreply@blogger.com0tag:blogger.com,1999:blog-8397311766319215218.post-7639422749244452522014-09-10T11:16:00.001-07:002014-09-16T04:13:31.054-07:00collect is what for doesI recently stumbled on Rob Napier's explanation of the <a href="http://robnapier.net/maps?utm_content=buffer1e521&utm_medium=social&utm_source=twitter.com&utm_campaign=buffer">map
function in Swift</a>. So I am reading along yadda yadda when suddenly I wake up and
my eyes do a double take:
<blockquote>
After years of begging for a map function in Cocoa [...]
</blockquote>
Huh? I rub my eyes, probably just a slip up, but no, he continues:
<blockquote>
In a generic language like Swift, “pattern” means there’s a probably a function hiding in there, so let’s pull out the part that doesn’t change and call it map:
</blockquote>
Not sure what he means by a "<em>generic language</em>", but here's how we would implement a map function in Objective-C.
<hr><blockquote><code><pre>
#import &lt;Foundation/Foundation.h&gt;

typedef id (*mappingfun)( id arg );

static id makeurl( NSString *domain ) {
    return [[[NSURL alloc] initWithScheme:@"http" host:domain path:@"/"] autorelease];
}

NSArray *map( NSArray *array, mappingfun theFun )
{
    NSMutableArray *result=[NSMutableArray array];
    for ( id object in array ) {
        id objresult=theFun( object );
        if ( objresult ) {
            [result addObject:objresult];
        }
    }
    return result;
}

int main(int argc, char *argv[]) {
    NSArray *source=@[ @"apple.com", @"objective.st", @"metaobject.com" ];
    NSLog(@"%@",map(source, makeurl ));
}
</pre></code></blockquote><hr>
This is less than 7 non-empty lines of code for the mapping function, and took me less
than 10 minutes to write in its entirety, including a trip to the kitchen for an
extra cookie, recompiling 3 times and looking at the <code>qsort(3)</code> manpage
because I just can't remember C function pointer declaration syntax (though it took
me less time than usual, maybe I am learning?). So really, years of "begging" for
something any mildly competent coder could whip up between bathroom breaks or
during a lull in their twitter feed?<p>
Or maybe we want a version with blocks instead? Another 2 minutes, because I am a klutz:
<hr><blockquote><code><pre>
#import &lt;Foundation/Foundation.h&gt;

typedef id (^mappingblock)( id arg );

NSArray *map( NSArray *array, mappingblock theBlock )
{
    NSMutableArray *result=[NSMutableArray array];
    for ( id object in array ) {
        id objresult=theBlock( object );
        if ( objresult ) {
            [result addObject:objresult];
        }
    }
    return result;
}

int main(int argc, char *argv[]) {
    NSArray *source=@[ @"apple.com", @"objective.st", @"metaobject.com" ];
    NSLog(@"%@",map(source, ^id ( id domain ) {
        return [[[NSURL alloc] initWithScheme:@"http" host:domain path:@"/"] autorelease];
    }));
}
</pre></code></blockquote><hr>
Of course, we've also had <code><a href="http://en.wikipedia.org/wiki/Higher_order_message">collect</a></code> for a good <a href="http://www.metaobject.com/papers/HOM-Presentation.pdf">decade</a>&nbsp;<a href="http://www.macdevcenter.com/pub/a/mac/2004/07/16/hom.html?page=last&x-order=date">or</a>&nbsp;<a href="http://www.metaobject.com/papers/Higher_Order_Messaging_OOPSLA_2005.pdf">so</a>, which turns the client code into the following,
much more readable version (<a href="http://objective.st">Objective-Smalltalk</a> syntax):
<hr><code><pre>
NSURL collect URLWithScheme:'http' host:#('objective.st' 'metaobject.com') each path:'/'.
</pre></code><hr>
As I wrote in my <a href="http://blog.metaobject.com/2014/09/no-virginia-swift-is-not-10x-faster.html">previous post</a>, we seem to be regressing to a mindset about computer
languages that harkens back to the days of BASIC, where everything was baked into the
language, and things not baked into the language or provided by the language vendor do not exist.<p>
Rob goes on to write "The mapping could be performed in parallel [..]", for example like <a href="https://github.com/mpw/MPWFoundation/blob/master/Collections.subproj/MPWDelimitedTable.m">parcollect</a>? And then "This is the heart of good functional programming." No. This is the heart of good <em>programming</em>.<p>
Having processed that shock, I fly over a discussion of filter (select) and stumble over
the next whopper:
<blockquote><h2>It’s all about the types</h2></blockquote>
Again...huh?? Our map implementation certainly didn't need (static) types for the list, and
all the Smalltalkers and LISPers that have been gleefully using higher order
techniques for 40-50 years without static types must also not have gotten the memo.<p><blockquote>
We [..] started to think about the power of functions to separate intent from implementation. [..] Soon we’ll explore some more of these transforming functions and see what they can do for us. Until then, stop mutating. Evolve.
</blockquote><em>All</em> modern programming separates intent from implementation. Functions are a
fairly limited and primitive way of doing so. Limiting power in this fashion can be
useful, but please don't confuse the power of higher order programming with the
limitations of functional programming; they are quite distinct.<p>
Marcel Weiherhttp://www.blogger.com/profile/11651004661887001433noreply@blogger.com0tag:blogger.com,1999:blog-8397311766319215218.post-84660421075316061352014-09-09T08:57:00.001-07:002015-06-03T22:56:49.744-07:00No Virginia, Swift is not 10x faster than Objective-CAbout a month ago, Jesse Squires published a post titled <a href="http://www.jessesquires.com/apples-to-apples-part-two/">Apples to Apples</a>, documenting benchmark results that he
claims show Swift now with a roughly 10x performance advantage over Objective-C.
Although completely bogus,
the post was retweeted by Chris Lattner (who should know better, and was supposedly
mostly interested in highlighting the improvements in the Swift optimizer, rather than
the bogus comparison) and has now been <a href="http://rustyshelf.org/2014/08/07/thoughts-on-swift-from-an-idiot">referenced</a> a number of times as background
knowledge as to the state of Swift. More importantly, though the actual mistake
Jesse makes is pretty basic and not that interesting, it does point to some deeper
misunderstandings about performance and language that I at least do find interesting.<p>
So what's the mistake? Ironically, given the post's title, it is that he is comparing
apples to oranges, so to speak. The following table, which shows the time to sort
an array of 10000 numbers 10 times in milliseconds, illustrates the problem:
<table><tr><th>time (ms)</th><th>NSNumber</th><th>native integer</th></tr><tr><td>Objective-C</td><td bgcolor="d0d0d0">6.04</td><td>0.97</td></tr><tr><td>Swift</td><td>11.92</td><td bgcolor="d0d0d0">0.8</td></tr></table>
Jesse compared the two highlighted versions, that is, native Swift integers with Objective-C
NSNumber object wrappers.
All times are for binaries with optimization enabled; the machine was a 13" Retina MacBook Pro with a
2.9 GHz Intel Core i7 and 8GB of RAM. The integer sort was done using a C integer
array and the system <code>qsort()</code> function. When you compare apples to apples,
Objective-C has a roughly 2x edge with NSNumbers and is around 18% slower for native
integers, at least when using <code>qsort()</code>.<p>
Why the 18% disadvantage? The <code>qsort()</code> function is made generically applicable to
different types of arrays using a function pointer parameter for the comparison function
that itself is parametrized using pointers to the elements to be compared. This means
there is an overhead of one function call and two pointer dereferences
per comparison. That overhead overwhelms the actual comparison operation, which is
a single machine instruction on most processors.<p>
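For reference, this is roughly what the <code>qsort()</code> path looks like (a sketch, not the exact benchmark code): the comparison function is called through a pointer and receives pointers to the elements, which is where that per-comparison overhead comes from:
<blockquote><pre>
#include &lt;stdio.h&gt;
#include &lt;stdlib.h&gt;

// qsort() is generic: it knows nothing about the element type, so every
// comparison goes through a function pointer and two pointer dereferences.
static int compare_ints(const void *a, const void *b) {
    int x = *(const int *)a;
    int y = *(const int *)b;
    return (x &gt; y) - (x &lt; y);
}

int main(void) {
    int numbers[] = { 42, 7, 23, 4, 16 };
    size_t count = sizeof numbers / sizeof numbers[0];
    qsort(numbers, count, sizeof numbers[0], compare_ints);
    for (size_t i = 0; i &lt; count; i++) {
        printf("%d ", numbers[i]);
    }
    printf("\n");
    return 0;
}
</pre></blockquote>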
Swift, on the other hand, appears to produce a version of the sort function that is
specialized to
the integer type, with the comparison function inlined to the generated function
so there is no function call or pointer dereference overhead.
That is very clever and a Good Thing™ for performance. Sort of. The drawback is
that this breaks separate compilation, because the functions actually have to be
combined during the compile/link process every time they are used (I assume there is
caching going on, so we only get one specialization per type combination).<p>
Apart from making the compiler/linker slower, possibly significantly so
(like C++ headers, though I presume they use LLVM bitcode to optimize the process),
it also likely bloats the executable, causing cache and memory pressure. So it's
a tradeoff, as usual, and while I think having the ability to specialize at
compile-time is good, not being able to control it is not.<p>
Objective-C doesn't have this ability to automagically adapt a function or method
to its parameters; if you want inlining, the relationship has to be known at definition time,
not at the point of use. However, if the benefit of inlining is only 21% for the
most primitive type, a machine integer, then it is clear that the set of
types for which compile-time specialization is beneficial at all is small.<p>
Cocoa of course already provides specialized collection classes for the <code>byte</code> and
<code>unichar</code> types, <code>NSData</code> and <code>NSString</code> respectively.
I never quite understood why this wasn't extended to the other primitive types, particularly
integer and float/double. On the other hand, the omission never bothered me much, I
just implemented those classes myself in <a href="https://github.com/mpw/MPWFoundation">MPWFoundation</a>. <a href="https://github.com/mpw/MPWFoundation/blob/master/Collections.subproj/MPWRealArray.h">MPWRealArray</a> even has support for DisplayPostscript
binary object sequences, it's that old!<p>
Both MPWRealArray and the corresponding <a href="https://github.com/mpw/MPWFoundation/blob/master/Collections.subproj/MPWIntArray.m">MPWIntArray</a> classes are small and fairly trivial to implement, and once I have them,
using a specialized integer or real array is at least as convenient as using an <code>NSArray</code>, just a lot faster. They could also be quite a bit smaller than they are, sharing
code either via subclassing or poor-man's generic programming via include files. Once
I have a nice OO interface, I can swap out the implementation for something really quick
like a dual-pivot integer sort I found in Java-land and adapted to C. (It is surprising
just how similar they are at that level). With that sort, the test time drops to 0.56 ms,
so 42% faster than the Swift version and almost twice as fast as the system <code>qsort()</code>
function.<p>
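To give an idea of what "small and fairly trivial to implement" means, here is a sketch of a specialized integer array (manual retain/release; the class and method names are made up for illustration and are not the actual MPWIntArray interface):
<blockquote><pre>
#import &lt;Foundation/Foundation.h&gt;
#include &lt;stdlib.h&gt;

// Illustrative sketch of a specialized integer array, not the MPWFoundation API.
@interface IntegerArray : NSObject {
    NSInteger *values;
    NSUInteger count, capacity;
}
- (void)addInteger:(NSInteger)value;
- (NSInteger)integerAtIndex:(NSUInteger)index;
- (NSUInteger)count;
- (void)sort;
@end

@implementation IntegerArray
- (instancetype)init {
    if ((self = [super init])) {
        capacity = 16;
        values = malloc(capacity * sizeof *values);
    }
    return self;
}
- (void)addInteger:(NSInteger)value {
    if (count == capacity) {
        capacity *= 2;
        values = realloc(values, capacity * sizeof *values);
    }
    values[count++] = value;
}
- (NSInteger)integerAtIndex:(NSUInteger)index { return values[index]; }
- (NSUInteger)count { return count; }

static int compareIntegers(const void *a, const void *b) {
    NSInteger x = *(const NSInteger *)a, y = *(const NSInteger *)b;
    return (x &gt; y) - (x &lt; y);
}
- (void)sort {
    // Same OO interface even if this is later swapped for something faster,
    // e.g. a dual-pivot quicksort specialized to NSInteger.
    qsort(values, count, sizeof *values, compareIntegers);
}
- (void)dealloc {
    free(values);
    [super dealloc];
}
@end
</pre></blockquote>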
So the takeaway is that if you are using NSNumber objects in performance-sensitive code: stop.
This is always a mistake. The native number types for Objective-C are <code>int</code>,
<code>float</code>, <code>double</code> and friends, not NSNumber. After all, how
do you perform arithmetic? Either directly on a primitive or by unboxing the NSNumber
and then performing arithmetic on the primitive and then reboxing. Use primitive scalar
types as much as possible where they make sense.<p>
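A trivial illustration of why (a sketch, not a benchmark): with NSNumber every addition means unboxing both operands, adding, and allocating a fresh NSNumber for the result, whereas the primitive version is a single add:
<blockquote><pre>
#import &lt;Foundation/Foundation.h&gt;

int main(void) {
    @autoreleasepool {
        // Boxed: unbox both operands, add, box the result into a new object.
        NSNumber *boxedA = @3, *boxedB = @4;
        NSNumber *boxedSum = @([boxedA integerValue] + [boxedB integerValue]);

        // Primitive: one add instruction, no allocation.
        NSInteger a = 3, b = 4;
        NSInteger sum = a + b;

        NSLog(@"%@ %ld", boxedSum, (long)sum);
    }
    return 0;
}
</pre></blockquote>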
A second takeaway is that the question "which language is faster" doesn't really make
sense; a more relevant and interesting question is "does this language make it
hard/possible/easy to write fast code". Objective-C lets you write really fast code,
if you want to, because it has the low-level chops and an understandable performance
model. Swift so far can achieve reasonable performance at times, ludicrously bad
at other times (especially with the optimizer turned off, which hardly fazes Objective-C),
with as far as I can tell fairly little predictability or control. Having 10% faster
(or slower) performance for code I don't particularly care about is not worth nearly
as much as the knowledge that I can get the 1-5% of code that I do care about in
shape no matter what. Swift is definitely not there yet, and given the direction
it is taking I am not sure whether it will allow that kind of control, at least in
comprehensible ways.<p>
A third point is something more general about language. The whole argument that
NSNumber and NSArray are "built in" somehow and int is not displays a lack of
understanding of Objective-C that to me seems staggering. Even more so, the
whole idea that you must only use what comes provided with Cocoa and are not
allowed to build your own flies in the face of modern language design, throwing
us back to the times of BASIC (<a href="http://www.folklore.org/StoryView.py?project=Macintosh&story=MacBasic.txt">Arthur Luehrmann</a>, in the comments):<p><blockquote>
I had added graphics primitives to Dartmouth Basic around 1976 and developed an X-Y pen-plotter to carry out graphics commands mixed in with the text being sent to Teletype terminals.
</blockquote>
The idea is that a language is a bundle of features, or to put it linguistically,
a language is a list of words to be used as is.<p>
Both C and Pascal introduced me to a new notion: that languages are not lists of
words, but means of constructing your own words. For example, C did/does not have
I/O as a language feature. I/O was just another set of functions placed in a
library that you included just like any of your own functions/libraries.
And there were two sets of them, the stdio package and the raw Unix
I/O. <p>
At around the same time I was introduced to both top-down and bottom-up
programming. Both assume there is a recursive de-composition of the
problem at hand (assuming the problem is sufficiently complex to warrant it).<p>
In bottom-up programming, you build up the vocabulary (the procedures and functions) that are necessary to succinctly describe your top-level problem,
and then you describe your program in terms of that vocabulary you created.
In top-down programming, you start at the other end and write your top-level
program in terms of the vocabulary you wish you had to optimally describe
the problem. Then you fill in the blanks.<p>
In both, you define your own language to fit the problem, then you solve the
problem using the language you defined. You would not add plotting commands
to the language; you would either add plotting commands as a library or, if
that were not possible, a way
of adding plotting commands as a library. You would not look at whether plotting
comes with the "standard library" or not. To quote Guy Steele in <a href="http://www.cs.virginia.edu/~evans/cs655/readings/steele.pdf">Growing a Language</a>:
<blockquote>
This is the nub of what I want to say. A language design can no longer be a thing. It must be a pattern—a pattern for growth—a pattern for growing the pattern for defining the patterns that programmers can use for their real work and their main goal.
</blockquote>
So build your own libraries, your own abstractions. It's easy, fun and useful.
It's the heart of <a href="http://books.google.com/books/about/Domain_Driven_Design.html?id=hHBf4YxMnWMC&redir_esc=y">Domain Driven Design</a>, probably the most productive and effective
software construction technique we as an industry have come up with to date.
See what abstractions you can build easily and which ones are hard. Analyze
the latter and you have started on the road to modern language design.<p>
<p>
CORRECTION (June 4th 2015): I misattributed the Dartmouth BASIC quote to Cathy Doser, when
the comment line on the Macintosh folklore entry clearly said Arthur Luehrmann. (Cathy's
comment was a bit earlier).
<p>
Marcel Weiherhttp://www.blogger.com/profile/11651004661887001433noreply@blogger.com2tag:blogger.com,1999:blog-8397311766319215218.post-51879331656144108442014-08-30T13:16:00.001-07:002014-09-11T06:14:52.020-07:00So how are those special Swift initializers working out?Splendidly:
<blockquote>
If you're building a UIView subclass that needs to set up a mess of subviews this can get old really quick. Best option I've found so far? Just initialize them with a default value like you would a regular variable. Now the compiler's off your back and you can move on with your life, or at least what's left of it after choosing software development as a career.
</blockquote><a href="http://themainthread.com/blog/2014/08/swift-lesson-of-the-day.html">Justin Driscoll</a><p>
This is something people who create <a href="http://blog.metaobject.com/2014/06/remove-features-for-greater-power-aka.html">elaborate mechanisms</a> to force people to "Do the Right Thing"
never seem to understand: they hardly ever achieve what they are trying to achieve.
Instead, people will do the minimal amount of work to get the compiler off their backs.
Compare Java's checked exceptions.<p>
Marcel Weiherhttp://www.blogger.com/profile/11651004661887001433noreply@blogger.com1tag:blogger.com,1999:blog-8397311766319215218.post-59895356503745890292014-07-11T03:54:00.001-07:002014-07-11T03:56:55.182-07:00OverspeccingI just took my car to its biennial TüV inspection and apart from the tires that had simply
worn out, everything was A-OK, nothing wrong at all. Kind of surprising for a 7-year-old
mechanical device that has been used: daily commute from Mountain View to Oakland, tight
cornering in the foothills, shipped across the Atlantic twice and now that it is back in its
native country, occasional and sometimes prolonged sprints at 200 km/h. All that with not all
that much maintenance, because the owner is not exactly a car nut.<p>
Cars used to not be nearly this reliable, and getting there wasn't easy, it took the
industry both plenty of time and a lot of effort. It's not that the engineers
didn't know how to build reliable cars, but making them reliable <em>and</em> keeping them
affordable <em>and</em> still allowing car companies to turn a profit, <em>that</em>
was hard.<p>
One particular component is the alternator belt, which had to be changed so frequently
that engine compartments were specially designed to make the belt easily accessible.
That's no longer the case, and the characteristic screeching sound of a worn belt
is one that I haven't heard in a long time.<p>
My late dad, who was in the business, told me how it went down, at least at Volkswagen.
As other problems had been whittled away over the decades, alternator belts were becoming
a real issue on the reliability reports compiled by motoring magazines, and the engineers
were tasked with the job of fixing the problem. And fix it they did: they came up with
a design that would "never" break or wear out, and no I don't know the details of how
that was supposed to work.<p>
Problem was: it was a tad expensive. Much more expensive than the existing solution
and simply too expensive for the price bracket they were aiming for (this may seem
odd to outsiders considering the total cost of a car, but pennies matter). Which of course
was one reason why they had put up with unreliable belts for so long. Then word came in that
the Japanese had solved the problem as well, and were offering it on their cheap(er)
models. At the next auto show, they went to the booth of one of those Japanese companies
and popped the hood.<p>
The engineers scoffed: the Japanese design was cheaper because it was much, much
more primitive than the one they had come up with, and it would, in fact, also wear
out much more quickly. But exactly how much more quickly would it wear out? In other
words, what was the expected lifetime of this cheaper, inferior alternator belt design?<p>
About the expected lifetime of the car.<p>
Ahh. As far as I can tell, the Japanese design or variants thereof conquered the world. I can't
recall the last time I heard the screech of a worn-out belt; engine compartments
these days are not designed with accessibility in mind, and cars are still affordable,
although changing the belt if it does break will cost more in labor because of the
less accessible placement.<p>
What do alternator belts have to do with software development? Probably nothing, but
to me at least, the situation reminds me of the one I write about in <a href="http://blog.metaobject.com/2014/06/the-safyness-of-static-typing.html">The Safyness of Static Typing</a>. I am actually with those commenters who scoffed at the idea that the safety
benefit of static typing is only around 2%, because theoretically having a tight specification
of possible values checked at compile-time absolutely <em>should</em> bring a greater
benefit.<p>
For example, when static typing and protocols were introduced to Objective-C, I absolutely
expected them to catch my errors, so I was quite surprised when it turned out that in practice they didn't: because
I could actually compile/run/test my code without having to specify static types, by the time
I added static types the code simply no longer had type errors, because the vast majority
of those were caught by running it. The dynamic safety also
helped, because instead of a random crash, I got a nice clean error message
"object abc doesn't understand message xyz".
<p>
My suspicion is that although dynamic typing and the practices that go with it may only be,
let's say, 50% as good at catching type errors as a good static type system, they are
actually 98% effective at catching real world type errors. So if static type systems
are twice as good, they would be 196% effective at catching real world type errors, which
just like the perfect, german-engineered alternator belts, is simply more than is
actually needed (96% more with my hypothetical numbers).<p>
There are obviously other factors at play, but I think this may account for a good part of
the perceived discrepancy.<p>
What do you think? Comments welcome here or on <a href="https://news.ycombinator.com/item?id=8019545">Hacker News</a>.<br>
Marcel Weiherhttp://www.blogger.com/profile/11651004661887001433noreply@blogger.com2