Tohono Consulting

Wednesday, March 26, 2014

After working flawlessly for a long time, my local Leiningen script suddenly failed to upgrade Leiningen to the latest version. Instead, it spat out the following error message:

The script at /usr/local/bin/lein will be upgraded to the latest stable version.
Do you want to continue [Y/n]? y
Upgrading...
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   137  100   137    0     0    121      0  0:00:01  0:00:01 --:--:--   182
curl: (35) error:14077458:SSL routines:SSL23_GET_SERVER_HELLO:reason(1112)
Failed to download https://github.com/technomancy/leiningen/raw/stable/bin/lein
It's possible your HTTP client's certificate store does not have the
correct certificate authority needed. This is often caused by an
out-of-date version of libssl. Either upgrade it or set HTTP_CLIENT
to turn off certificate checks:
export HTTP_CLIENT="wget --no-check-certificate -O" # or
export HTTP_CLIENT="curl --insecure -f -L -o"
It's also possible that you're behind a firewall haven't yet
set HTTP_PROXY and HTTPS_PROXY.

The Leiningen script seems to be (incorrectly) guessing at the problem, which had me spinning my wheels for a short time. The fix I found online was to add a '-3' (or '--sslv3') flag to the 'curl' command within the lein script, forcing curl to use SSLv3 for the download.
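The fix can be sketched as follows (hedged: the exact curl invocation differs between Leiningen versions, so the line below is illustrative rather than the script's actual contents):

```shell
# Illustrative only -- the key change is adding -3 (--sslv3) to the curl
# command that lein uses for downloads. The same effect can be had without
# editing the script, by setting HTTP_CLIENT (a variant of the script's own
# suggested setting, with -3 added):
export HTTP_CLIENT="curl -3 -f -L -o"
```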

Tuesday, November 13, 2012

OK, just kidding...my wife and I don't really believe that the world is coming to an end at the end of the current Mayan Great Cycle. And to demonstrate our optimism that we'll still be here after December 21st of this year, we are finally releasing the first version of our Mayan calendar generation program (written in Clojure, of course). Version 1.0.0 is available on GitHub at https://github.com/hickst/mayancal.

The Mayancal program generates a PDF file containing an illustrated Mayan calendar for the specified year (the default is 2012...just in case). The earliest year available is 1900, but this should allow most users to generate a calendar for their birth year.

The Maya developed a sophisticated calendar based on the intersection of various cycles, especially the 260-day Tzolkin (ritual calendar) and the 365-day Haab (a rough solar calendar). The Tzolkin named each day, like our days of the week. There were 20 day names, each represented by a unique symbol. The days were also numbered from 1 to 13 (the Trecena cycle); after the count of thirteen was reached, the next day was numbered 1 again. Since 13 and 20 have no common divisors, this system gives each of the 260 (13*20) days of the sacred year a unique number and day-name combination.

The Haab was a rough solar year of 365 days. The Haab year contained named months called Uinals: 18 regular months of 20 days each and one special five-day month called Uayeb. Days of the Haab months were numbered 0 to 19. Each day thus had a number and day name from the 260-day Tzolkin as well as a number for its day of the Haab month (the Veintena cycle). Using the intersections of these cycles, each day can be identified by a four-tuple: [Tzolkin number, Tzolkin day, Haab number, Haab month].

Using Clojure's infinite lazy sequences, these interacting cycles can easily be generated and combined together with an infinite Gregorian date sequence (see the mcal.clj file in the source code). To simplify the program, all sequences are synchronized to start at the Gregorian date of 1/1/1900. Any given point is then found by dropping the appropriate number of elements from the heads of the synchronized sequences.
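The same lazy-cycle idea can be sketched outside Clojure as well. The snippet below (Python, and not the actual mcal.clj code; the day-name spellings follow one common Yucatec convention) builds the Tzolkin as the intersection of the Trecena and day-name cycles, then finds a day by dropping elements from the head:

```python
from itertools import cycle, islice

# The 20 Tzolkin day names (one common spelling convention).
TZOLKIN_NAMES = ["Imix", "Ik", "Akbal", "Kan", "Chicchan", "Cimi", "Manik",
                 "Lamat", "Muluc", "Oc", "Chuen", "Eb", "Ben", "Ix", "Men",
                 "Cib", "Caban", "Etznab", "Cauac", "Ahau"]

def tzolkin():
    """Infinite stream of (Trecena number, day name) pairs."""
    return zip(cycle(range(1, 14)), cycle(TZOLKIN_NAMES))

# Because 13 and 20 share no divisors, the pairs repeat only every
# 13 * 20 = 260 days:
first260 = list(islice(tzolkin(), 260))
assert len(set(first260)) == 260

# A given day is reached by dropping elements from the head of the
# sequence, just as the synchronized sequences are consumed from 1/1/1900:
day_100 = next(islice(tzolkin(), 100, None))   # the 101st day
```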

Some miscellaneous notes: Because of the many great public domain images and icons used to illustrate the calendar, the output PDF file tends to be rather large, so you probably don't want to email calendars to all your relatives for Christmas. Also, we've tried viewing the calendar in Preview 5.5.3, Adobe Reader 10.0.1, Skim 1.3.22, and GoodReader for iPad 3.18.6, and we've seen a few differences between these viewers. Only Adobe and GoodReader, for example, were able to follow the embedded links, and GoodReader had trouble displaying the links on the last page.

As I said, this is version 1, so if you encounter any issues please let us know. We hope that this toy provides you with some fun and diversion from the more serious, real-world uses of Clojure.

Thursday, September 6, 2012

Randy Kahle has recently written a blog entry which artfully expresses the thinking behind the yearnings for a new ROC (Resource Oriented Computing) language. I suspect that most ROC users would be happy just to see a better syntax for module creation but, as Randy points out, a ROC language should provide a better way to conceptualize the information flow through the system.

I've thought for some time now that what's missing in NetKernel is an abstraction of the higher level patterns of information flow through spaces; an implementation of EIPs (Enterprise Integration Patterns) for ROC. EIPs in ROC could be made manifest in a couple of ways, the simplest being to express the pattern as a particular composition of existing space elements. This approach, however, strikes me as analogous to expressing a pattern in an assembly language without macros: too detailed and hard to repeat correctly. A more powerful solution would be to express some of the simpler EIPs in ROC by encoding them directly as new types of overlays. The most recent overlays, such as the Pluggable overlay and the Branch-Merge overlay, seem to be attempts to encapsulate such patterns of usage. Sadly, these fall far short of the simplicity and beauty that they could have if they were not mired in the grammar-less verbosity of XML. (1)

Please note that by "patterns" (the P inherent in EIP above) I mean something more abstract and more encompassing than the existing recipes for the connection and interaction of spaces, which are labeled as "Module / Space Patterns" in the NetKernel documentation. While these recipes are a step in the right direction, their level of granularity seems too small to express anything but the simplest EIPs. In addition, their descriptions focus on the mechanics of space interconnection, and it is very hard to see how they can be composed into EIPs.

A new ROC language built around Enterprise Integration Patterns would allow ROC programmers to concentrate on solving their application problems by focusing on the high-level, logical flow of information through the system. Such a language would include, at a minimum, the ability to compose, connect, and visualize some base set of EIPs. An additional ability to implement arbitrary EIPs would be extremely powerful, but might require that the existing facilities to build modules and factories be enhanced, simplified, and canonized into a clean API (re: this forum discussion fragment). If the new ROC language were to eschew XML and to rely on a simple syntax, I feel this would be a huge win. Finally, I believe that a new language built around EIPs would greatly contribute to the usability and adoption of NetKernel and ROC.

1. It's interesting to speculate on the reasons why there are not more higher-level patterns and why they are not easy to spot in NetKernel's standard module. I think the principle of Linguistic Relativity is at work here: the idea that the structure of a language affects the ways in which its users conceptualize their world.

Friday, March 9, 2012

This occurred to me after reading several NetKernel forum entries crying out for help with NK's "declarative syntax". I think there are some great ideas in NetKernel but the idea of burying your programming language within XML is not one of them.

Friday, May 27, 2011

I'm reposting this post from Cosmin Stejerean (offbytwo.com) as a reminder to myself about how to solve this problem that's plagued me for years when connecting to the U via SSH.

"If you are having problems with your SSH connection getting dropped after a certain amount of time (usually caused by NAT firewalls and home routers), you can use the following setting to keep your connection alive:

Host *
    ServerAliveInterval 180

You can place this either in ~/.ssh/config for user level settings or in /etc/ssh/ssh_config for machine level settings. You may also replace * with a specific hostname or something like *.example.com to use on all machines within a domain. This is the cleanest way of making sure your connections stay up and doesn’t require changes to the destination servers (over which you may not have control)."

Wednesday, June 16, 2010

An update to my previous post about Manning: they've now officially cancelled the CouchDB in Action book. To their credit, they are taking good care of customers (like me) who had already ordered the MEAP edition. We have been offered the choice of (1) getting our money back or (2) a replacement book or eBook (depending on our original order) AND another eBook free. I am very happy with this arrangement and have already taken the replacement offer, choosing other eBooks. I did, however, check the replacement eBooks' starting and (projected) publication dates: I picked books which had at least 4 or 5 chapters already available, and I made sure that each book was being actively worked on. Manning seems to have several books that have drifted off into the figurative weeds (for example, see Taming Text, started in June 2008!).

Saturday, May 22, 2010

Great op-ed piece in the NY Times about how Obama is blowing the opportunity to use the Gulf oil disaster to lead the country to real, long-range solutions (which would help prevent disasters like this in the future):

Thursday, April 22, 2010

I'm becoming more and more disappointed with Manning Publications. They used to be a great source of eBooks on cutting-edge technologies by leaders in the field, and they are the publisher of some of the leading tech references: books such as Spring In Action, Groovy In Action, and Ant In Action are tech "classics".

Lately, however, I've noticed that their book development times are greatly increasing, author quality is decreasing, authors are unknown in the community, books are being threatened with cancellation, there is more and more advertising of "vaporware" (books with only 1 or 2 small chapters), and eBook releases are poorly screened for even minimal formatting quality.

Some examples:

Just days ago I received an email from Manning describing how the authors of CouchDB in Action have fallen so far behind in their progress that the content is already out of date. Manning is debating whether to proceed with the existing content, entirely rewrite it, or cancel the book. There is no mention of what happens to customers who purchased the early access (MEAP) version (as I did) if the book is cancelled.

Then, this morning, I received an update to Spring Integration in Action, usually a good and welcome thing. Unfortunately, the formatting of this version has some serious problems that were not present in the previous version. The text size varies wildly from chapter to chapter and, in those chapters where it is greatly increased, many of the figures obscure adjacent text and several of the figures are not visible at all.

Now, of course, it must be acknowledged that this is an Early Access version of the eBook, and various formatting, font, and figure problems are to be expected in these drafts. However, to be useful at all, there must be some minimal standard of readability, which there was in the first MEAP version of the eBook that I received. The loss of this basic readability in the update gives the impression that no one is even reviewing the product before releasing it.

And, finally, to add insult to injury, Manning sent me a link to an online survey asking frequent customers for feedback. I patiently and completely filled out the form but when I tried to submit it, it claimed that I had not answered a couple of questions and refused to take my submission. Rechecking the form showed that all questions had been completely answered! Manning remains completely oblivious to my disappointments with them.

Manning used to be great but, in my opinion, they are going downhill fast!

Sunday, January 24, 2010

A couple weeks ago I gave the monthly presentation at the Tucson JUG on the programming language Clojure. Only six JUG members showed up but they were very interested in the language and kept me talking for over an hour beyond my initially allotted hour.

There are many reasons why Clojure has really caught on in the last year or so. For me, it's a well-designed and pragmatic amalgam of Lisp, concurrent techniques, and functional programming built on the JVM. It also helps that there's a great book, a friendly and helpful community, and dozens of enthusiastic side projects.

Just look around you when you're at your office, favorite coffee shop, in an airplane seat, a classroom, or even a library. There are almost always glaring light sources above and behind you. What were Apple's designers thinking? Don't they use their own products?

James Leigh, in a recent blog post, makes a couple of good comments on the importance of code readability and the presence of redundancy.

The accompanying poll question, however (which asks if easily readable code is important), raises a deeper question: of course easily readable code is extremely important, but the real question is how to achieve it.

For example, using abbreviations in identifier names is a poor way to make the names shorter and more concise.

Abbreviated names suffer several problems including:

1) ambiguity: is 'getReq' short for getRequest, getRequirement, or getRequisition?

2) cognitive burden: abbreviations require much more mental effort to remember which fragment of a word is being employed. This "ideolexical" design makes the API seem much more complex and daunting than it should be.

As an example, is the abbreviation for 'declareDescription' going to be 'declDesc', 'decDescr', or 'dclDescrip'?

3) lack of consistency: even with only one programmer creating the abbreviated identifier names, it seems highly probable that inconsistencies will creep into the naming scheme, making it harder to use.

4) loss of readability and documentation: longer names are often clearer and document the code better than abbreviations (or shorter names).

In these days of IDEs there is little reason not to use longer, clearer, self-documenting names: it is trivial to start a name and then hit the appropriate completion key. Even if you program outside an IDE (as I do; I use Emacs a lot of the time), the importance of good names as documentation cannot be over-emphasized, and good names are well worth a tiny bit of extra typing.

Thursday, October 9, 2008

"In October 1958, John McCarthy published one in a series of reports about his then ongoing effort for designing a new programming language that would be especially suited for achieving artificial intelligence. That report was the first one to use the name LISP for this new programming language. 50 years later, Lisp is still in use. This year we are celebrating Lisp's 50th birthday."

Thursday, September 11, 2008

On this 7th anniversary of 9/11 it is time to finally admit what most Americans already know in their hearts: that the terrorists fully achieved their objectives, even beyond their own wildest dreams. And, since then, unscrupulous men, corporations, and our own government have helped the terrorists to continue their success.

Seven years after 9/11, our country has turned against its own ideals and principles in the name of security, while ironically justifying its actions as preserving "freedom".

Seven years after 9/11, we are saddled with a costly and pointless war against a country which had nothing to do with the attacks. We have a massive new homeland security bureaucracy which is hard at work trying to impose a mandatory national ID system. The privacy of millions of American phone conversations and emails has been secretly and illegally violated by the government and submissive corporations. Citizens of other countries have been arbitrarily labeled as terrorists and jailed for years without formal charges or a trial. Freedom of travel has been restricted by secret and erroneous government "watch lists", to which there is no judicial appeal. And through it all, government agencies, such as the INS and DHS, have simply declared themselves to have sweeping new powers.

Joseph Stalin is reputed to have said "When we hang the capitalists they will sell us the rope", but the terrorists of 9/11 have turned us against ourselves in a much more insidious manner; they played upon our fear of death. And the possibility of death by terrorism is being exaggerated by those within our country who seek to maintain or expand their power. The truth is that you are thousands of times more likely to be killed by a traffic accident than by a swarthy foreigner with a bomb. Over a quarter of a million people have died in traffic accidents in the U.S. since 9/11, and yet there is no massive new Department of Automotive Security.

The terrorists of 9/11 won by instilling fear in the populace, causing us to give up some of our fundamental freedoms and rights. It is time to awake from our long national nightmare and to put our fears into perspective. It is time to stop being overly afraid and to reject the erosion of our hard-won liberties. It is time to stop letting the terrorists win.

Saturday, August 16, 2008

Some friends of mine at 1060 Research recently sent me a new version of some software they are working on. After reading one of the XML configuration files, I asked if they had an XML Schema for it (which would define the grammar for legal configurations). The answer was that they did not, as they were moving away from formal grammars and towards a rule-based approach like Schematron, which uses a set of pattern assertions (rules) for XML validation.

When I thought about this approach, I realized that it is duck typing for data. In object-oriented programming, the use of duck typing means that an object's behavior, rather than its class or inheritance structure, determines its interpretation and usage. The application of rule-based systems to categorize a data file or message is a data-oriented form of duck typing. Using "data duck typing", data is categorized (in this case validated) by having the right elements in the right locations.

Data duck typing means that a data file does not have to fully conform to a specific, rigid grammar as long as some of its parts meet the requirements of the particular rule set used for categorization. Thus, data messages for an application can come in all shapes and sizes as long as they contain the essential required elements with the right structural relationships. Applications which use this approach embody the design principle which says "be lenient in the messages that you accept" and will be much more flexible than applications based on rigid adherence to formal grammars.
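A minimal sketch of this rule-based "data duck typing" (the element names and rules below are hypothetical examples, not Schematron itself):

```python
import xml.etree.ElementTree as ET

# Hypothetical Schematron-style rules: each is a (message, assertion) pair
# that checks only the essential elements, not a full grammar.
RULES = [
    ("order must have an id",
     lambda root: root.get("id") is not None),
    ("order must contain a customer",
     lambda root: root.find("customer") is not None),
    ("every item needs a sku",
     lambda root: all(item.get("sku") for item in root.iter("item"))),
]

def quacks_like_order(xml_text):
    """Return the list of failed-rule messages (empty means it 'quacks')."""
    root = ET.fromstring(xml_text)
    return [msg for msg, rule in RULES if not rule(root)]

# Extra, unanticipated elements (like <note/>) are fine; only the
# essential elements in the right places matter:
good = '<order id="42"><customer>Ann</customer><item sku="a1"/><note/></order>'
assert quacks_like_order(good) == []
```

A document with extra or reordered parts still validates as long as the essentials are present, which is exactly the leniency described above.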

Saturday, July 26, 2008

Randy Kahle and I are writing a series of articles on Resource Oriented Computing (ROC), which you can think of as REST principles applied to application software development. The first article of the series was posted a couple of days ago on TheServerSide.com.

Perhaps the article's title was somehow misleading, since it immediately engendered a passionate (and not always civil or complimentary) debate on various aspects of REST, most of which was off-topic. As a long-time reader of TSS, I expected that something like this could happen. My attitude is to encourage rational discussion, clarify misunderstood points, and ignore misbehavior. (This is, BTW, an approach used successfully to deal with patients at mental hospitals.)

"Interesting", I thought, "but why didn't you guys just use CORBA and get it over with?"

A snippet in the blog post seems to have anticipated that question:

"OK, I know what you're thinking: "Yet another IDL?" Yes, you could call it that. But, IDLs in general have earned a reputation for being hopelessly complicated."

Complexity sounds like a strawman here... the major problem with IDLs is that they are built upon a shared definition, which requires all parties to update and recompile when the definition changes. And once you recompile, you've lost the ability of the system to handle the old message format (so versioning is a serious problem unless you plan for it from the beginning).

Of course, these problems are ameliorated when: 1) the IDL is for internal use only, and 2) you control both ends of the conversation, as Google does...er...did up until now.

I also wonder why Google didn't just use an existing protocol like Hessian:

Google does tend to favor technology we invent ourselves ...[snip]... OTOH, some of the systems we've built ourselves have been blockbuster hits that enable much of what you know as "Google" today.

In response to your comment about backward compatibility, Protocol Buffers are actually explicitly designed so that you can add fields and whatnot and still be able to read in records stored in the old format.
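That design point can be sketched generically (this is not Protocol Buffers code; the record layout and field names are made up): a decoder that supplies defaults for fields added later can still read records written in the old format.

```python
# A generic sketch of version-tolerant decoding (not Protocol Buffers itself):
# fields added in a newer schema version fall back to defaults, so records
# written by an old encoder remain readable.
SCHEMA_V2_DEFAULTS = {"id": 0, "name": "", "email": ""}  # "email" added in v2

def decode(record):
    """Decode a record dict against the v2 schema, defaulting missing fields."""
    decoded = dict(SCHEMA_V2_DEFAULTS)   # start from the defaults
    decoded.update(record)               # old records simply lack "email"
    return decoded

old_record = {"id": 7, "name": "Ann"}    # written by a v1 encoder
assert decode(old_record) == {"id": 7, "name": "Ann", "email": ""}
```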

I have to admit, the "WOW" factor on some of Google's software has inspired competition and innovation. So, the next time I'm looking for a binary wire protocol, I'll take a harder look at Protocol Buffers.