Friday, May 9, 2014

Sometimes TheHackerCIO does something on a front-end Web site. In this case, I wanted to get an unusual font into the mix on a web page. Stack Overflow held the immediate answer, but when I tried it out, there was no change on the browser page.

After an hour's frustration, further googling, and tweaking, I took a break. I washed my face. Then came the necessary reflection.

What are you doing? You're trying to combine multiple learning streams. On the one hand, you're learning the new touch gestures of Windows 8 and where everything has been moved around. On the other hand, you're attempting to actually DO something with WindoZe. So then it clicked. One of the posters HAD said, "any decent browser ..."

And that was the problem. I was using what came installed on the box. I wasn't using a decent product.

So I immediately returned to the laptop, downloaded Firefox, and the font showed. Just for the heck of it, I downloaded all the others -- Chrome, Opera, Safari. All fine. I'd just wasted an hour with WindoZe.

But it's important to have continual reminders. Luckily, I didn't spend that much time on it. I can amortize the time against learning the Windows 8 Touch gestures and re-org.

But you do have to stand in awe at how nothing out-of-the-ordinary ever works in a WindoZe environment.

After all, why should Microsoft support what every other browser on the face of the planet does?

Wednesday, May 7, 2014

RESTful Web APIs is the current book, and we had a near overflow turnout for the discussion of chapters 2 and 3.

The book is turning out to be good. But it's the give-and-take of the group that really makes the club work!

Some members have worked ahead, which gives them a bit of an unfair advantage, but TheHackerCIO won't hold that against them. :-)

I particularly liked the fact that the author jumped right in with an actual API to take a look at. Chapter 2, "A Simple API," presents a micro-blogging API that he uses to illustrate his points. He suggested using wget as a command-line tool to play with it.

I found that I don't have wget available on my MacBook Pro Retina.

As I mentioned to the group: when someone asks you how long something is going to take, always ask them, "Is this an estimate, or a commitment?" Because management needs to be reminded of this. A lot! Anyway, I curl-ed wget, but I couldn't get it to build because my Xcode was a version or two out of date, and that download/install seemed to be taking plenty of time. What should have been a 10-minute diversion threw me off for half an hour.

I switched over to simply using Advanced Rest Client, a Chrome extension I highly recommend! You can issue whatever RESTful calls you wish, with any verb you please, graphically from within your browser, and you can even keep them organized in a file structure for reuse. A very handy tool.

There were too many take-aways from the discussion to list them all, but some of the biggest for me were:

1. Being Liberated by Constraints. As the author notes, constraining your REST design, for instance, by using his "personal standard," which requires the JSON to follow some structure, can indeed keep your developers from going wild.
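The liberating effect of a constraint is easy to sketch. Suppose a hypothetical "personal standard" (an illustrative micro-format, not the one from the book) that pins every representation to one fixed shape; a few lines of validation then make the constraint enforceable rather than aspirational:

```python
import json

# A hypothetical "personal standard": every representation must be a JSON
# object with exactly a "self" link and an "items" list, and every item
# must carry "id" and "text" fields. (Illustrative only; not the book's
# actual format.)
REQUIRED_TOP = {"self", "items"}
REQUIRED_ITEM = {"id", "text"}

def validate(document: str) -> bool:
    """Return True if the JSON document obeys the constrained structure."""
    data = json.loads(document)
    if not isinstance(data, dict) or set(data) != REQUIRED_TOP:
        return False
    if not isinstance(data["self"], str):
        return False
    return all(
        isinstance(item, dict) and REQUIRED_ITEM <= set(item)
        for item in data["items"]
    )

good = '{"self": "/posts", "items": [{"id": 1, "text": "hello"}]}'
bad  = '{"items": [{"id": 1}], "extra": true}'
print(validate(good))  # True
print(validate(bad))   # False
```

With a gate like this in the build, a developer can't "go wild" with ad-hoc shapes even if tempted.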

2. I never knew about LINK and UNLINK, but they seem worthwhile. We talked for a while about how POST should probably be used to create the resource, perhaps with an embedded link to whatever other resource is the resource-target. But then LINK can be used to create a further relationship, going the other direction, from target back to the dependent. (I may be missing something here, because the author tells me he's going to fill me in in Chapter 11.)

3. PATCH -- looks like a performance enhancement. I'd say to avoid it until absolutely necessary, because it's not idempotent. You're only going to patch a portion of a representation, rather than replacing the whole thing.
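The non-idempotence is easy to see with a toy patcher. This sketch supports just two made-up operations (the real diff formats, like RFC 6902 JSON Patch, are richer); replaying a "replace" patch happens to be harmless, but replaying an "append" patch keeps changing the resource:

```python
import copy

# Minimal sketch of patch-style partial updates. Only "replace" on a
# top-level key and "append" to a top-level list are supported; both
# operation names are illustrative, not from any standard.
def apply_patch(resource, ops):
    out = copy.deepcopy(resource)
    for op in ops:
        if op["op"] == "replace":
            out[op["key"]] = op["value"]
        elif op["op"] == "append":
            out[op["key"]].append(op["value"])
    return out

doc = {"title": "draft", "tags": ["rest"]}

replace = [{"op": "replace", "key": "title", "value": "final"}]
once  = apply_patch(doc, replace)
twice = apply_patch(once, replace)
print(once == twice)   # True: this particular patch is safe to replay

append = [{"op": "append", "key": "tags", "value": "http"}]
once  = apply_patch(doc, append)
twice = apply_patch(once, append)
print(once == twice)   # False: replaying "append" changes the result again
```

A retried PUT of the whole representation can't misfire this way, which is the safety you give up for PATCH's smaller payloads.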

4. OPTIONS -- we need to ALL be using this, and straight off the bat, to get the full "billboard" of what we can and should be able to do with our API. No one does. But that doesn't mean we shouldn't start!
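The "billboard" lives in the Allow response header. Here's a self-contained sketch (a throwaway local server standing in for a real API; the path and method list are invented for the demo) of asking a resource what you're allowed to do with it:

```python
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from http.client import HTTPConnection

# A stand-in API that advertises its supported methods via OPTIONS.
class Handler(BaseHTTPRequestHandler):
    def do_OPTIONS(self):
        self.send_response(204)                       # no body needed
        self.send_header("Allow", "GET, POST, OPTIONS")
        self.end_headers()

    def log_message(self, *args):                     # keep the demo quiet
        pass

server = HTTPServer(("127.0.0.1", 0), Handler)        # port 0 = pick any free port
threading.Thread(target=server.serve_forever, daemon=True).start()

conn = HTTPConnection("127.0.0.1", server.server_port)
conn.request("OPTIONS", "/posts")                     # "/posts" is hypothetical
resp = conn.getresponse()
print(resp.getheader("Allow"))                        # GET, POST, OPTIONS
server.shutdown()
```

One OPTIONS call up front, and a client knows the verbs on offer before guessing.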

5. Overloaded POST -- The author points out how, in the web APIs we use, POST is overloaded with anything and everything. As he puts it:

The HTTP specification says that POST can be used for:

Providing a block of data, such as the result of submitting a form, to a data-handling process.

That 'data-handling process' can be anything. It's legal to send any data whatsoever as part of a POST request, for any purpose at all. The definition is so vague that a POST request really has no protocol semantics at all, POST doesn't really mean 'create a new resource'; it means, 'whatever.' [p. 41]

And "whatever" is never a good thing to mean.

So don't mean that.

And restrict your POST to creation of new resources with newly-created identifiers.
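In-memory sketch of that discipline (class and path names are invented for illustration): the server mints the identifier and reports it back, and replaying the same POST mints another one, which is exactly why POST, unlike PUT, is not idempotent:

```python
import itertools

# "POST means create": the server assigns the new identifier and answers
# with 201 Created plus a Location. Names here are illustrative.
class PostCollection:
    def __init__(self):
        self._ids = itertools.count(1)
        self._store = {}

    def post(self, representation):
        """Create a new resource; return (status, location)."""
        new_id = next(self._ids)
        location = f"/posts/{new_id}"
        self._store[location] = representation
        return 201, location

api = PostCollection()
status, location = api.post({"text": "hello"})
print(status, location)          # 201 /posts/1

# The same request again creates a *second* resource.
print(api.post({"text": "hello"}))  # (201, '/posts/2')
```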

Tuesday, May 6, 2014

Seeing a lot of hardware hackers at the AT&T Wearables Hackathon, back at year's end, was partly a reminder.

But as one member noted: hardware is how we started.

Several interesting themes emerged from the roundtable. The crucial need to find new strategies for keeping up with technology. The Radar isn't enough. It needs augmentation. The lab we have is a wonderful augmentation. We need to figure out ways to capitalize on it. Rick pointed out that with the advances in virtualization technology, we can now use a lab in ways that a decade ago simply weren't possible. We can practically learn/design/plan/test/build a virtual datacenter with totally agnostic/fungible kit: Cisco, Dell, IBM, Oracle, Juniper, ... whatever. We can build it out with one set of physical kit and swap it for another later. And a major theme I raised was the crucial nature of fighting with the bugs.

The problems need to be highlighted, rather than worked through. If anything, it's the problem areas where the learning/growth is going to take place. We need to figure out strategies in the lab to track the issues and problems, and get others to face them as well! That's counter to the way it normally happens, isn't it?

But it's precisely the contesting with actual concrete problems that brings the abstract designs back to the reality-point. That's what "Closes the loop."

And that, by the way, was the other major theme of my presentation. Every time I've heard grand abstractions presented, and I've been able to force through an actual physical, concrete implementation example, the disconnect between the theory/abstraction and the concrete/implementation has been immense. Enormous. Totally surprising. So much so, that I now almost completely discount abstractions presented in the absence of any supporting test example or demonstration.

One of our further goals for the group, in conjunction with our partner Meetup, the L.A. Cloud Engineering Group, will be our attempt to produce a crowd-sourced eval platform -- probably with a Geeky/Social approach -- where, in my view, capturing such "proof points," "test cases," "demonstrations," or even "benchmarks" in a repeatable, verifiable way will be a central feature. I might even create a widget/button called "Prove it, dammit!"

Monday, May 5, 2014

Kyle Kingsbury is doing an amazing job with his Jepsen project. TheHackerCIO has long been disturbed by the tendency for people to make assertions and claims without the experimental evidence to back them up or provide an assessment basis for them.

Especially in the database world.

Here are a handful of the problems:

I can't tell you how many times I've heard, "Oh, in an inner join using RDBMS X, a nested-loop algorithm will of course perform better or worse depending on which table is the outer and which is the inner."

No doubt.

But these DBMSs have an optimizer. They have tables full of statistics about the data, presumably updated on a regular basis. These vendors have had 20 years to tweak optimizations. Yet the documentation gives no indication as to whether their "optimizer" can pick the right outer table and inner table, or whether you must explicitly pick the right one yourself.

So lots of people just assume that the optimizer can/will do this. Which isn't unreasonable.
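Why the choice matters at all falls out of the standard textbook cost model for a block nested-loop join: read every outer page once, and rescan the whole inner relation for each outer page. (Real optimizers model buffer pools, indexes, and caching; this is the bare-bones version, with made-up page counts.)

```python
# Simplified block nested-loop join cost, in page reads, assuming a
# single-page buffer per relation: scan the outer once, and rescan the
# entire inner relation for each outer page.
def join_page_reads(outer_pages: int, inner_pages: int) -> int:
    return outer_pages + outer_pages * inner_pages

small, big = 100, 10_000  # hypothetical table sizes, in pages

print(join_page_reads(small, big))  # 1000100: small table as outer
print(join_page_reads(big, small))  # 1010000: big table as outer
```

Putting the smaller table on the outside wins; the question is whether the optimizer reliably figures that out for you, and the docs should say so.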

But the days have come where things need to be specified tighter.

We simply need clear black/white, preferably not greatly hedged, statements in the documentation. Statements that can be tested. Verified. Proven. Or disproven.

The newer world of NoSql is no exception to this rule or problem.

But Kyle has been there.

Kyle got interested in understanding the issues around the NoSql databases. But he did things the right way: he set up a controlled environment, and began systematically testing, examining, and proving out how the CAP theorem implications actually work in a partitioning environment. This led to a number of surprises for the vendors ... not to mention the users!

To get a proper sense for this correct, test-based approach, I recommend reading his work. Here are just a few enticing flavor notes, taken from a section that deserves your most careful attention, entitled "Testing Partitions":

Theory bounds a design space, but real software may not achieve those bounds. We need to test a system's behavior to really understand how it behaves.

To cause a partition, you'll need a way to drop or delay messages: for instance, with firewall rules.

Running these commands repeatably on several hosts takes a little bit of work.

Work might be a necessary evil. But understanding isn't going to come without it. Or without actual, experimental testing.
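Jepsen cuts links between real nodes with firewall rules; as a purely in-process toy (nothing like the actual harness), you can still see the essential phenomenon, two replicas diverging the moment replication messages stop flowing:

```python
# Toy "partition": two replicas sync writes to each other over a link
# that we can cut, very loosely mimicking what firewall rules do to a
# real cluster. All names are illustrative.
class Replica:
    def __init__(self, name):
        self.name, self.data, self.peer = name, {}, None

    def write(self, key, value, partitioned=False):
        self.data[key] = value
        if self.peer and not partitioned:   # replicate unless the link is cut
            self.peer.data[key] = value

a, b = Replica("a"), Replica("b")
a.peer, b.peer = b, a

a.write("x", 1)                     # healthy network: both sides agree
print(a.data == b.data)             # True

a.write("x", 2, partitioned=True)   # link cut: writes stop propagating
b.write("x", 3, partitioned=True)
print(a.data, b.data)               # {'x': 2} {'x': 3} -- divergence
```

What a store does when the link heals (last-write-wins? vector clocks? data loss?) is exactly what Kyle's experiments expose.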

In his article, you will see exactly what to set up to get started with your own multi-node, partitionable, experimental test-bed, within which you can see how your NoSql is going to behave.