Java/J2EE on Ted Neward's Blog
http://blogs.tedneward.com/platforms/java/j2ee/

2017 Tech Predictions
Mon, 02 Jan 2017 20:54:34 -0800
http://blogs.tedneward.com/post/2017-tech-predictions/
<p>It&rsquo;s that time of the year again, when I make predictions for the upcoming year.
As has become my tradition now for nigh-on a decade, I will first go back over last year&rsquo;s
predictions, to see how well I called it (and keep me honest), then wax prophetic on what I
think the new year has to offer us.</p>
<p>As per previous years, I&rsquo;m giving myself either a <strong>+1</strong> or a <strong>-1</strong> based on
purely subjective and highly-biased evaluation criteria as to whether it actually happened
(or in some cases at least started to happen before 31 Dec 2016 ended).</p>
<p>Bear with me for a moment, though. This is just too good.</p>
<h2 id="in-2015:816dfc775e64d848cd23e6c10e386ae4">In 2015&hellip;</h2>
<p>&hellip; <a href="http://blogs.tedneward.com/post/2015-tech-predictions/">I said</a>:</p>
<blockquote>
<p>Microsoft acquires Xamarin.</p>
</blockquote>
<p>Oh, baby. Off by a year. I should go back and give myself a <strong>+1</strong> for this one. It was really
surprising that they hadn&rsquo;t. As a matter of fact, if Microsoft had listened to me and done it in
2015, they&rsquo;d probably have saved themselves a TON of money compared to what they actually paid for
Xamarin in 2016. But they made the acquisition, Xamarin is now part of the Microsoft family, and
(finally!) .NET developers have access to the Xamarin toolchain and can build native iOS and Android
apps without having to shell out additional cash to do so. &lsquo;Bout time, Microsoft. (I suspect this
had everything to do with Satya, to be honest.)</p>
<p>OK, gloat over.</p>
<h2 id="in-2016:816dfc775e64d848cd23e6c10e386ae4">In 2016&hellip;</h2>
<p>&hellip; <a href="http://blogs.tedneward.com/post/2016-tech-predictions/">I said</a>:</p>
<ul>
<li><strong>Microsoft will continue to roll out features on Azure, and start closing the gap between it and AWS.</strong>
Calling this one a <strong>+1</strong>; it doesn&rsquo;t take much research to see this has definitely been happening in 2016.
However, it&rsquo;s not necessarily a true statement that they&rsquo;ve been closing the gap; Amazon keeps adding
stuff as well, and the feature-parity lists are starting to get ridiculous. Whether these features are
actually <em>of use</em>, however, is an important distinction, and something for the second half of this post.</li>
<li><strong>(X)-as-a-Service providers will continue to proliferate.</strong> Oh my, yes, Ted gets another <strong>+1</strong> for this.
When even running a gaming convention has its own (X)-as-a-Service (seriously, <a href="https://tabletop.events/">here</a>),
then you know the proliferation is in full swing. PaaS providers are exploding everywhere, and while
a few have disappeared (farewell, Parse!), it&rsquo;s clear that this was the gold rush of 2016.</li>
<li><strong>Apple will put out two, maybe three new products, and they&rsquo;ll all be &ldquo;meh&rdquo; at best.</strong> I should&rsquo;ve
broken this into two predictions: one about Apple&rsquo;s &ldquo;meh&rdquo; products, and one about wearables. If I&rsquo;d
done that, I&rsquo;d have scored two <strong>+1</strong>&rsquo;s for it, because not only have wearables not really gone very
far (show me somebody wearing a smart watch, and I&rsquo;ll show you a geek with too much time on their
hands and not enough &ldquo;discrimination&rdquo; in their discriminatory income), but Apple&rsquo;s product releases
have been&hellip; &ldquo;meh&rdquo;! I&rsquo;m looking at you, iPhone 7, and I&rsquo;m <em>really</em> looking at you, MacBook Pro.
(When Consumer Reports doesn&rsquo;t give the MBP its top rating, you know the luster has faded.) More on
Apple in the second half.</li>
<li><strong>iOS 10 will be called iOSX.</strong> Dangit. Such an opportunity wasted. <strong>-1</strong></li>
<li><strong>Android N will be code-named &ldquo;Nougat&rdquo;.</strong> Why, hello there, Android 7.0 Nougat. So pleased to make
your acquaintance. <strong>+1</strong></li>
<li><strong>Java9 will ship.</strong> As I noted last year, @olivergierke <a href="https://twitter.com/olivergierke/status/684642273561329664">pointed out</a>
that Java9 had already slipped to 2017, so this one was already a <strong>-1</strong>. Sigh. And I called it a
&ldquo;no duh&rdquo; event, too&mdash;I&rsquo;m going to let this one cancel out the extra +1 I&rsquo;d have given myself for
the Apple/wearables thing, just to keep the math safe (and my ego relatively sized).
<a href="http://www.infoworld.com/article/3011445/java/java-9-delayed-by-slow-progress-on-modularization.html">The article he cited</a>
says that Oracle &ldquo;blamed the delay on complexities in developing modularization&rdquo;, a la Project
Jigsaw.</li>
<li><strong>Facebook will start looking for other things to do.</strong> Welllllllll, it&rsquo;d be really tempting to say
that Facebook&rsquo;s new &ldquo;thing to do&rdquo; was &ldquo;Be the deciding factor in who gets elected by passively
encouraging the widespread dissemination of fake news and outright falsehoods!&rdquo;, but seriously,
who would&rsquo;ve believed that even if I had predicted it? Which I didn&rsquo;t. <strong>-1</strong></li>
<li><strong>Google will continue to quietly just sort of lay there.</strong> A year ago, I wrote, &ldquo;Google, for all
that they are on the top of everybody&rsquo;s minds since that&rsquo;s the search engine most of us use, hasn&rsquo;t
really done much by way of software product invention recently. &hellip; I suspect the same will be true of
2016&ndash;they will continue to do lots of innovative things, but it&rsquo;ll all be &ldquo;big&rdquo; and &ldquo;visionary&rdquo;
stuff, like the Google Car, that won&rsquo;t have immediate impact or be something we can use in 2016
(or 2017).&rdquo; And&hellip; yeah. <strong>+1</strong>. There&rsquo;s been more emphasis around the existing products they&rsquo;ve built, but as a company,
they&rsquo;ve clearly spent most of 2016 on the Alphabet/Google restructure (which accomplished&hellip; what,
exactly?), and anything new has been either way quiet or way removed from the business.</li>
<li><strong>Oracle will quietly continue to work on Java.</strong> A year ago, I wrote, &ldquo;[Oracle is not] going to
kill it, but there&rsquo;s really not a whole lot of need to go around preaching its message, either.
So they let the evangelists go, and they&rsquo;ll just keep on keepin&rsquo; on.&rdquo; Score a <strong>+1</strong> for the
long-haired geek in Seattle; they just keep posting new code.</li>
<li><strong>C# 7 will be a confused morass.</strong> If you&rsquo;ll permit me the freedom to call it &ldquo;.NET Core&rdquo; instead
of just &ldquo;C# 7&rdquo;, then wow do I get a <strong>+1</strong> on this one. Even if I just constrain my prediction
to C# 7/Roslyn, I still score one, but once you throw in the CoreCLR and &ldquo;dotnetcore&rdquo; and the
different profiles and&hellip;. Holy spaghetti web browser history, Batman! The demarcation lines
of the different project teams working on this whole thing are starting to become <em>really</em>
clear as the different OSS projects each look really consistent within themselves, but then,
when you get to the borders, things just&hellip;. fall apart.</li>
<li><strong>Another version of Visual Basic will ship, and nobody will really notice.</strong> Alas, there was
no new version of Visual Basic, since it would be in lockstep with the release of C# 7 (which
didn&rsquo;t ship), but nobody really noticed. Or cared. Still, nothing shipped, so <strong>-1</strong>.</li>
<li><strong>Apple will now learn the &ldquo;joys&rdquo; of growing a language in the public as well.</strong> First there
was Swift 2, which was itself source-incompatible with Swift 1, and then during the summer,
Apple shipped Swift 3, which was&hellip; source-incompatible with Swift 2, owing to some language
changes that the community effectively decided was necessary. <strong>+1</strong>. (And thanks for that, by
the way&mdash;made teaching iOS this Fall a royal PITA.)</li>
<li><strong>Ted will continue to layer in a few features into the blog engine.</strong> You&rsquo;ve got comments!
And I&rsquo;ll take that <strong>+1</strong>, thank you very much.</li>
</ul>
<p>Nine up (ten, if we count my Xamarin prediction from 2015), four down. Not bad. But now, we move
on to the more interesting part of the post: 2017.</p>
<h2 id="2017-predictions:816dfc775e64d848cd23e6c10e386ae4">2017 Predictions</h2>
<p>The calendar year 2017 is going to be a wild one for the tech industry, largely owing to the
rather large orange elephant in the room&mdash;Donald Trump&rsquo;s election to President of the United
States is a huge wildcard whose randomness simply cannot be overstated. The man <em>thrives</em> on
being unpredictable, and like most industries, the tech industry (for all that it cherishes
&ldquo;innovation&rdquo; and &ldquo;disruption&rdquo;) thrives on predictability. His collection of &ldquo;tech titans&rdquo; at
Trump Tower last month yielded absolutely zero positive traction that I can see, and I suspect
that the various corporate tech leaders (Nadella, Bezos, Cook, etc) are all looking at him right
now the way humans do a rogue elephant&mdash;he could be good for them, so long as he doesn&rsquo;t go
wild and start trampling everything in his path out of spite, anger, fear, or any other of a half-
dozen emotions. There&rsquo;s no prediction here, though&mdash;just a &ldquo;wow, this is an X-factor&rdquo; that
in turn makes predictions that much harder.</p>
<p>But on that note&hellip;.</p>
<ul>
<li><strong>The Congress will call for an investigation into the &lsquo;hacking&rsquo; of the 2016 US election.</strong>
(0.8 probability) To be honest, I&rsquo;m not sure if anybody knows exactly what we mean when we say
&ldquo;the Russians &lsquo;hacked&rsquo; the US election&rdquo; in casual conversation. There&rsquo;s no clear evidence that
the voter machines themselves were cracked or tampered with, but it&rsquo;s fairly easy to see a
correlation with the DNC hacks and Wikileaks disclosures and Trump&rsquo;s corresponding favorability
gains in the polls. That said, though, the five-hundred or so US politicians that make up the Congress
(and excluding Trump himself and his transition team) are not comfortable with the idea that
somebody outside the US engaged in some kind of manipulation of the election, and they are
going to want answers. Just yesterday or the day before, though, Trump made the comment that
hacking is &ldquo;extremely hard to prove&rdquo;, and he&rsquo;s right about that&mdash;without some kind of &ldquo;smoking
gun&rdquo; found in a Russian government employee&rsquo;s possession, it&rsquo;s going to remain a major point of
contention in the coming year, and it&rsquo;s not going to go away regardless of what any
investigation finds.</li>
<li><strong>Security becomes a HUGE deal for the industry.</strong> (0.8) The election is just the tip of the iceberg;
consumers may have gotten used to (and complacent about) corporate security disclosures, but the
idea that the election could be hacked is sending shivers down the collective spines of anyone
who does anything online. The downside is that it&rsquo;s such a complex topic, it&rsquo;s hard for anyone
who&rsquo;s not a computer security expert to really understand what to do; even among experts, there&rsquo;s
a fair amount of disagreement, even on simple issues like scope (how widespread is it) or actual
facts vs hype. Pair that with the paranoia that is inherent in any security professional (if you
think computer security types are paranoid, try talking to physical security professionals for a
while), and you have an industry that&rsquo;s ripe for a lot of snake oil and hyperbole. My prediction,
then, is that <strong>the industry starts to see the first set of &ldquo;security snake oil&rdquo; products</strong>
somewhere within the calendar year 2017. And by that, I mean products that claim to provide
security for your interactions online but in fact do nothing of the sort. (Late-night
infomercials about downloadable web pages that can &ldquo;clean your system&rdquo; of viruses and malware,
move over&mdash;it&rsquo;s time for late-night infomercials about downloadable web pages that can &ldquo;secure
you against even the most determined attacker&rdquo;!)</li>
<li><strong>Apple continues to plummet.</strong> (0.7) Their products this year were merely slightly-enhanced copies
of the previous line of products (iPhone 7 vs iPhone 6) or contained gimmicky &ldquo;enhancements&rdquo;
while the core of the product remained essentially unchanged from prior generations (MacBook Pro).
Sorry, folks, the TouchBar does not qualify as &ldquo;disruption&rdquo; or &ldquo;innovation&rdquo;; it&rsquo;s a strip of
touch-sensitive glass from an iPad designed to start prepping you for the idea that Apple can
remove the keyboard entirely, replace it with a touchpad, and then put a hinge in between two
iPad Pros and call it a &ldquo;MacBookPad Pro&rdquo; and charge you $10k for it. (And by the way, if you&rsquo;re
thinking about one of the new MacBook Pro machines, make sure you go into an Apple Store and
try it out&ndash;the keyboard is definitely not the same as it&rsquo;s been for years. It feels like they
took about half of the keys&rsquo; &ldquo;press depth&rdquo; away, and it totally changes the &ldquo;touch&rdquo; on the
keyboard. I imagine somebody could get used to it in time, but&hellip; ugh.)</li>
<li><strong>Apple doesn&rsquo;t introduce any new products this year.</strong> (0.6) And by new, I mean something that&rsquo;s not
an incremental improvement on what they&rsquo;ve already got. Heck, I&rsquo;ll even go so far as to say that this
means that there are no new form factors in the existing product line. (Meaning, no new-sized iPad
or iPhone or laptop.)</li>
<li><strong>PC manufacturers double their efforts to build a MacBook Pro.</strong> (0.8) The MBP is vulnerable, for
the first time in a half-decade, and PC manufacturers are going to look for ways to capitalize on
that. Somebody is going to put out a similarly-sized, similarly-weighted non-touch-screen Windows 10
laptop with 32GB of RAM and a 1 or 2 TB SSD, the usual collection of ports, and price it around the
same as a MacBook Pro ($2k to $4k), and developers will start buying them. (Bonus points to that
manufacturer if they offer Linux as an out-of-the-box option.) I know I will&hellip;.</li>
<li><strong>Apple rumors about Tim Cook&rsquo;s departure begin.</strong> (0.6) Cook has proven that he&rsquo;s no Steve Jobs;
in fact, the comparisons between his reign and Steve Ballmer&rsquo;s at Microsoft are proving eerily
apt. Both basically took companies that were defining the marketplace and shepherded
them into a position of trying to manage the cost structures and find better price-points, and in
doing so, killed off much of the mojo that drove both firms. Ballmer took close to a decade to be
run out of Microsoft (and even then, it took BillG&rsquo;s intervention behind the scenes, from what I
can tell), but I don&rsquo;t think the Apple Board is going to wait that long&mdash;I think by the end of
2017 we&rsquo;re going to start hearing serious rumors about Cook being offered a golden parachute to give
up the center chair and let somebody else in to run the show.</li>
<li><strong>Oracle will continue to just write Java.</strong> (0.7) Oracle, despite the best efforts of media and
journalists everywhere, just refuses to get drawn into &ldquo;techno-drama&rdquo;. Java hasn&rsquo;t been the Trojan
Horse into corporate pocketbooks that all the Java-doomsdayers were predicting back when Oracle
acquired Sun, and releases of Java just keep coming through both commercial and OSS channels.
There&rsquo;s really no reason at this point to expect Oracle to do anything but continue
down that path. Make no mistake, I&rsquo;m sure they&rsquo;re looking for ways to monetize Java in some way
so that they can try to earn back the cash they spent to buy Sun, but I don&rsquo;t think it&rsquo;s going to
be through selling or charging for the JDK or JRE anytime soon.</li>
<li><strong>Oracle Cloud emerges onto the cloud scene in a big splash.</strong> (0.6) IBM now has Bluemix and
Watson, and they were really the last of the &ldquo;big-iron&rdquo; holdouts around the cloud. (What I mean by
that is that all companies have been quietly flirting with cloud, but some push it loud and clear,
a la Microsoft or Google, and some were playing it very quietly for a while.) With IBM acquiring
StrongLoop (the company behind LoopBack, a Node.js server-side framework) last year, it&rsquo;s clear that IBM is going to push JavaScript
as their main cloud development play, essentially ceding the Java-cloud development ground to
somebody else. Amazon has historically been the place that Java developers have gone to run their
Java code in the cloud, but if Oracle can build a compelling offering (particularly with a more
generous free tier than AWS currently offers), this could be a relatively big splash. Given Oracle&rsquo;s reputation
in the database world, if they have a solid &ldquo;stack&rdquo; offering that basically makes a Java-based
back-end a snap to start up, Oracle could essentially claim the Java-favored cloud play from
Amazon. (Yes, Heroku is out there and holds a fair amount of Java and Scala love, but now that
they&rsquo;re owned by Salesforce I suspect the Java-leaning flavor of Heroku to wane a bit.)</li>
<li><strong>Salesforce makes a major database acquisition.</strong> (0.5) Salesforce is growing, and they&rsquo;re
clearly interested in expanding their cloud to be more than just the CRM. With Heroku, they have
a Platform that developers can feel comfortable on, but they don&rsquo;t have a big-name database
(relational or otherwise) that complements that play. They currently are sitting on a ton of
cash, and <a href="http://talkincloud.com/saas-software-service/10-salesforce-acquisitions-2016">last year&rsquo;s crop of acquisitions</a>
didn&rsquo;t include a big database storage name. There&rsquo;s not a ton of players left out there, but I
could see them making a strong push to get something like Cassandra or Couchbase. (Yes, they
have Data.com, but that doesn&rsquo;t seem to be making much headway in the developer mindset space.)</li>
<li><strong>Salesforce releases a new programming language.</strong> (0.4) Let&rsquo;s call a spade a spade: Apex is
a Java knock-off, and it shows a lot of warts, particularly since it hasn&rsquo;t really kept up with
what few improvements Java-the-language has made in recent years. The last company to be in this
position&mdash;a red-hot platform but a language feeling a little creaky at the corners and just plain
&ldquo;old&rdquo; everywhere else&mdash;was Apple right before they released Swift. Salesforce has the engineering
power, they are looking to command more of the developer mindshare, and they have a ton of cash
to blow, so&hellip;. Whether this happens this year, next year, or 2019, I&rsquo;m not sure, but if it
doesn&rsquo;t happen this year, the odds go up each year after that.</li>
<li><strong>LinkedIn Learning starts to make a serious dent in online developer training.</strong> (0.5) Between
the fact that LinkedIn Learning (formerly Lynda.com) is growing out its library to a pretty
respectable degree, and the fact that Microsoft now owns LinkedIn, it&rsquo;s pretty reasonable to
assume that Microsoft is going to start making this available to its developer community in
various ways. This may happen in 2018, though, depending on how swiftly Microsoft moves to
incorporate LinkedIn assets across the rest of the firm; if they bought LinkedIn solely for the
CRM data to go with Dynamics, for example, then this probably won&rsquo;t happen for a few years.</li>
<li><strong>Swift doesn&rsquo;t go to 4.</strong> (0.7) Swift 3 held breaking changes from Swift 2, and the folks at
Apple are not stupid. Swift 4 will be far, far down the horizon for a few years yet, given that
each major version number bump has heralded incompatibilities. Apple will not want to call anything
&ldquo;Swift 4&rdquo; and dredge up memories of incompatibilities in their customers&rsquo; minds for a while.
Swift might get a 3.1 in the summer, but that&rsquo;s as far as it&rsquo;ll go.</li>
<li><strong>Microsoft ships C# 7.</strong> (0.8) Roslyn needs to ship in 2017 if Microsoft is going to be able to
call this open-source process a success; otherwise, a lot of people will start grumbling. (Yes,
a new version of Visual Basic will come with it, and it will make basically no news.)</li>
<li><strong>No new Android version.</strong> (0.4) Android N is still slowly making its way through the networks,
and while we&rsquo;ll probably start hearing rumors of what Android 8 (Oreo?) will include, it will carry
a targeted ship date of 2018, probably 1Q or 2Q.</li>
<li><strong>Twitter will continue its slide into irrelevancy.</strong> (0.5) Let&rsquo;s face it, Twitter&rsquo;s days are
numbered. If you&rsquo;re holding Twitter stock, now&rsquo;s a good time to sell&mdash;when Twitter was left out
of Trump&rsquo;s &ldquo;tech summit&rdquo; last month, the stated reason was that it was &ldquo;too small&rdquo;. Put that into
your brain-pan and circulate for a while&mdash;the service that invented microblogging and is one of
the core founders of &ldquo;social media&rdquo; was &ldquo;too small&rdquo; for the PEOTUS&rsquo; time. Twitter hasn&rsquo;t really
done anything &ldquo;new&rdquo; or &ldquo;interesting&rdquo;, but simply continued to be the 140-character microblogging
platform it&rsquo;s always been. It&rsquo;s reaching commodity status, in fact. That&rsquo;s not a good sign for
a company that wants to be more than it is. I suspect Jack Dorsey gets tossed on his can, the
company starts looking for a new CEO, and the &ldquo;new vision&rdquo; will start to take shape by the end
of the year (2017), and then in 2018 we find out that the &ldquo;new vision&rdquo; is terrible, takes them
out of their &ldquo;core business&rdquo;, and the slide accelerates. But nobody buys them this year, not yet.</li>
<li><strong>The &ldquo;Internet of Things&rdquo; continues to draw hype, and continues to fail to deliver.</strong> (0.6)
It&rsquo;s been how many years we&rsquo;ve heard about IoT now, and how it will revolutionize our lives,
and all we&rsquo;ve really seen thus far is the wide variety of Internet-enabled devices being subverted
for a widespread DDoS attack. Wearables, &ldquo;smart refrigerators&rdquo; and other IP-enabled devices are
proliferating, but&mdash;to perhaps everybody&rsquo;s surprise but mine&mdash;nobody&rsquo;s quite sure what to DO
with these things once you have them. Your thermostat is online; terrific. Does it have an API
that will let me query meter usage? No, that&rsquo;s a different thing, and a different API, and a
different connection endpoint, and&hellip;. Oh, and be careful, somebody could remote-hack your
thermostat and <a href="http://motherboard.vice.com/read/internet-of-things-ransomware-smart-thermostat">hold your house hostage</a>.
Because that&rsquo;s worth the risk.</li>
<li><strong>Tech &ldquo;unicorns&rdquo; will start to watch the bubble pop.</strong> (0.3) Uber, Lyft, all these companies that are
valued at double-digit billions with zero profits, major losses, and no real assets to sell in
the event of a bankruptcy&hellip;. All of this is going to start to make some investors nervous,
particularly when they look around and realize that the tech sector has been carrying the
country&rsquo;s economy through its &ldquo;recovery&rdquo; (yes, we&rsquo;ve been in a recovery for the last half-decade!).
All it takes is a few small stones to start the avalanche.</li>
<li><strong>Voice-controlled fart apps will emerge.</strong> (0.6) Seriously. As Alexa and Siri and these other
voice-activated systems start to move into stationary devices in your home, and as the SDKs for
these systems start to become more widespread, the first thing developers will do is build some
kind of ridiculously silly app (it would be a kindness to call it a game) that will somehow
sweep everybody&rsquo;s sense of humor into the toilet. (Seriously. Imagine it. &ldquo;Alexa, did you have
beans for dinner?&rdquo; &ldquo;Yes, I did, and&ndash; BRAAAAAAAAAAP!&rdquo; It&rsquo;s exactly the kind of thing that would
get people giggling for hours on end, particularly in a weed-induced state. Did I mention I live
in Seattle?)</li>
<li><strong>Facebook will find that preventing &lsquo;fake-news sites&rsquo; is a lot easier said than done.</strong> (0.8) As a
result, they&rsquo;ll put some kind of &ldquo;AI&rdquo; filter on linked sites, declare a victory, and try to get
out of the political game entirely. It&rsquo;s a lose-lose scenario for them: one man&rsquo;s &ldquo;fake news&rdquo;
site is another man&rsquo;s &ldquo;revolutionary take&rdquo; backed by the First Amendment, and Facebook does not
want to be anywhere near a court trying to justify their actions against Free Speech. (Old-timers
like me will remember Prodigy, <a href="http://www.techrepublic.com/blog/classics-rock/prodigy-the-pre-internet-online-service-that-didnt-live-up-to-its-name/">an online service</a>
that started censoring content, which started its slide into doom.) Zuckerberg doesn&rsquo;t want to be
held responsible for swaying important political events one way or another, but neither does he
want to be the target of numerous political activist lawsuits (from all directions). As Joshua
(the AI in the WOPR, back in the &rsquo;80s movie <em>WarGames</em> that every geek my age openly worshipped) learned,
Zuck will discover that sometimes &ldquo;the only winning move is not to play&rdquo;.</li>
<li><strong>A driverless car will kill somebody.</strong> (0.5) It&rsquo;s only a matter of time. The circumstances
may not be the software&rsquo;s fault&mdash;and in fact it&rsquo;s likely that it won&rsquo;t be, when the final analysis
comes back&mdash;but the headlines will scream, and the widespread fear of a human &ldquo;not being in the loop&rdquo;
will set driverless cars back by years. Expert testimony and repeated demonstrations will do
nothing to shake the public&rsquo;s fear that a computer-driven car could &ldquo;hit a bug and kill me&rdquo;.</li>
<li><strong>The topic of ethics and programming will begin to become fashionable.</strong> (0.3) Somewhere alongside
the driverless car&rsquo;s first fatality, people will start asking how the car&rsquo;s programming makes
decisions that most humans make in a split-second without even thinking about it. Case in point: the
car detects that a motorcycle rider has had a problem and the rider has laid the bike down in the
road right in front of the car. (For discussion purposes, there is no room left to brake; the rider
is too close.) The car can either swerve to the side to avoid the now-helpless rider, potentially
causing a major accident involving multiple people; or the car can simply continue forward, running
over (and very likely killing) the motorcycle rider but avoiding the possibility of multiple fatalities
from a larger accident. Most humans would swerve&mdash;but is that the &ldquo;right&rdquo; decision? More to the
point, what should the software be programmed to do? Once the public gets wind of these kinds of
decisions being made by geeks behind flat-screen LCDs, it&rsquo;s going to cause a major outcry. (And yes,
these kinds of decisions are going to be encoded in the software, somewhere.)</li>
<li><strong>&ldquo;The cloud&rdquo; continues to grow, even as consumers wonder what the hell it is.</strong> (0.7) Let&rsquo;s be
clear&mdash;as of right now, the cloud is basically a developer thing. My parents really don&rsquo;t &ldquo;get&rdquo;
the cloud, largely because there&rsquo;s really nothing they get from it. Sure, one can argue that GMail
is the world&rsquo;s most popular cloud email service&hellip;. but your email is just stored on a server that
Google owns, as opposed to a server that your ISP owns. (If that&rsquo;s your definition of &ldquo;cloud&rdquo;, then
pretty much all client-server computing is &ldquo;cloud&rdquo; in your world.) People are looking at
more online services for things like bill payment, true, but those are basically services being
offered by vendors with whom these people are already doing business&ndash;again, that&rsquo;s not &ldquo;cloud&rdquo;.
Cloud offerings have basically found a home in the developer world, but general-purpose cloud,
the way that cloud was first being sold, is losing its window of opportunity to get hold of
general consumers&rsquo; minds. (I lose this prediction if my parents are suddenly smitten with a product
that stores or computes for them and isn&rsquo;t a vendor they already have a relationship with.)</li>
<li><strong>&ldquo;Blockchain&rdquo; remains the most opaque &lsquo;thing&rsquo; of the year.</strong> (0.8) Everybody will go on and on about its
huge technical advantages and obvious benefits, while never actually describing what it is or how it
could work to change the world it&rsquo;s so clearly destined to change. It&rsquo;s the ultimate hype machine,
and it will show no signs of slowing down until maybe the end of the year. By that time, something
will emerge out of it (the way blockchain itself emerged out of Bitcoin and cryptocurrency) that will
carry forward the legacy of &ldquo;changing the world&rdquo; without actually changing anything. (For all
the hype, the core data structure is simple enough to sketch&mdash;see the code after this list.)</li>
<li><strong>Artificial intelligence will continue to remain a &lsquo;future&rsquo; thing.</strong> (0.8) Part of the reason I say
this is because AI is like magic&mdash;if you can understand it, it&rsquo;s not interesting anymore and it&rsquo;s just
an implementation detail. We&rsquo;ve had rules engines and natural language processing for years. When
Amazon started doing &ldquo;predictive analysis&rdquo; of what you would like to buy, we pulled &ldquo;data science&rdquo;
and &ldquo;behavioral analytics&rdquo; out of the &ldquo;AI&rdquo; world and into its own category. When AI figured out how
to make the spoken word make sense, we called it &ldquo;speech-to-text&rdquo; and it was a feature on Android
already back in the v2 days. (Marry speech-to-text up with a natural language parser, and you have
Siri&mdash;which, remember, was its own company before Apple acquired them.) No, Alexa is not going to
revolutionize the world any more than Siri did&mdash;the act of talking to a machine is not particularly
new, and it&rsquo;s only as good as the services that sit behind the parser and can &ldquo;hook in&rdquo; to the
parsed text. &ldquo;Cortana, fire up StarCraft 2&rdquo; is easy to parse and start an application; &ldquo;Cortana,
fire up StarCraft 2, and find me a random Hard co-op match as Artanis&rdquo; requires not just firing
up an application, but also &ldquo;hooking&rdquo; inside the application to know how to carry out the rest of
the request. That requires an API platform that all applications can hook into, provide, and describe
(in natural-text terms) to the voice-control system. That is not going to be easy to define, adopt,
or test. (A sketch of what such an intent-hook API might look like also follows this list.)</li>
</ul>
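<p>(On that blockchain point: stripped of the hype, the core data structure is just a sequence of
records where each record carries the hash of the one before it, so tampering with any earlier
block breaks every hash after it. Here&rsquo;s a minimal, purely illustrative sketch in Java&mdash;the
payloads and names are made up, and a real chain adds distributed consensus, proof-of-work, and
much more:)</p>
<pre><code>import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

// Illustrative only: each "block" commits to its predecessor by hashing
// the previous block's hash together with its own payload.
public class TinyChain {
    static String sha256(String text) throws Exception {
        MessageDigest md = MessageDigest.getInstance("SHA-256");
        byte[] digest = md.digest(text.getBytes(StandardCharsets.UTF_8));
        StringBuilder sb = new StringBuilder();
        for (byte b : digest) sb.append(String.format("%02x", b));
        return sb.toString();
    }

    public static void main(String[] args) throws Exception {
        String[] payloads = { "genesis", "Alice pays Bob 5", "Bob pays Carol 2" };
        String prevHash = "0";
        for (String payload : payloads) {
            String hash = sha256(prevHash + payload);
            System.out.println(payload + " : " + hash);
            prevHash = hash; // the next block commits to this one
        }
    }
}</code></pre>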
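<p>(And on the voice-assistant point, here&rsquo;s the shape of that &ldquo;hook in&rdquo; problem as a sketch.
None of this is a real Cortana/Alexa/Siri API&mdash;every name below is hypothetical&mdash;but it shows
what each application would have to register and describe for &ldquo;find me a random Hard co-op match
as Artanis&rdquo; to actually work:)</p>
<pre><code>import java.util.HashMap;
import java.util.Map;

// Hypothetical: an application registers the intents it can carry out,
// and the voice system dispatches each parsed request to its owner.
interface IntentHandler {
    void handle(Map<String, String> slots); // e.g. {difficulty=Hard, commander=Artanis}
}

public class VoiceIntentRegistry {
    private final Map<String, IntentHandler> handlers = new HashMap<>();

    public void register(String intentName, IntentHandler handler) {
        handlers.put(intentName, handler); // an app declares what it can do
    }

    public void dispatch(String intentName, Map<String, String> slots) {
        IntentHandler h = handlers.get(intentName);
        if (h == null) { System.out.println("No app handles " + intentName); return; }
        h.handle(slots);
    }

    public static void main(String[] args) {
        VoiceIntentRegistry registry = new VoiceIntentRegistry();
        // StarCraft 2 (hypothetically) registers the co-op-match intent...
        registry.register("starcraft2.findCoopMatch",
            slots -> System.out.println("Queueing a " + slots.get("difficulty")
                + " co-op match as " + slots.get("commander")));
        // ...and the parsed voice request dispatches to it.
        Map<String, String> slots = new HashMap<>();
        slots.put("difficulty", "Hard");
        slots.put("commander", "Artanis");
        registry.dispatch("starcraft2.findCoopMatch", slots);
    }
}</code></pre>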
<p>On a personal note, several predictions come to mind:</p>
<ul>
<li><strong>Ted will celebrate his one-year anniversary at Smartsheet in September.</strong> I&rsquo;m optimistic about these
guys, and the things we can do together. I&rsquo;m looking forward to taking them into the developer limelight
in a variety of different ways.</li>
<li><strong>Ted will do less speaking this year.</strong> My new role actually encourages me to help develop new talent
for my employer to go out and do the actual speaking, so while I&rsquo;m definitely down for doing a few
conferences this year, it&rsquo;s not going to be more than 12, total, for the calendar year. I enjoy speaking,
but I&rsquo;m looking to be a lot more careful about where I speak now.</li>
<li><strong>Ted will not be renewed as a Microsoft MVP.</strong> Actually, this appears to be fact, not a prediction.
MVP renewals for the January cycle went out already, and I didn&rsquo;t receive one. Fortunately, most of
the stuff I care about in the Microsoft world is all open-source (or moving that way) anyway, and
while it&rsquo;s been nice being on the MVP mailing lists, there&rsquo;s really been nothing there that&rsquo;s been
all that insightful or amazing. (And, fortunately, living in Redmond makes it trivially easy to get
together with anybody on a product team if I really want or need to, and I am privileged to call many
of the people on those teams &ldquo;friend&rdquo;.) It would&rsquo;ve been 14 years, but as we Stoics say, &ldquo;All good things,
in time, must come to an end.&rdquo;</li>
<li><strong>Ted will look to engage with other tech companies beyond Microsoft.</strong> Google just started a new
MVP-like program, and I&rsquo;ve been teaching Android and Angular and some Google Cloud Platform stuff for
a while, so perhaps they&rsquo;ll welcome me into their fold.</li>
<li><strong>Ted will continue to teach at UW.</strong> I&rsquo;ve been guest-lecturing at UW for the past three years now,
and I&rsquo;m loving it. The students are bright, eager, and a helluvalot smarter than I was at that age.
They&rsquo;re an incredible joy to teach.</li>
<li><strong>Ted will look to publish a few mobile apps.</strong> I&rsquo;ve had a few ideas floating around for a while, but
just never really made the time to do it. Even if they never turn a dime in profit, I&rsquo;m long overdue
for having a few apps in the respective mobile stores.</li>
<li><strong>Ted will continue to write for various tech &lsquo;zines.</strong> I love having the back-page editorial at
CODE Magazine, the column in MSDN, and the various series on developerWorks, among others. I fully
intend to keep all that going at full speed. (And I&rsquo;m always looking for new outlets, if anybody has
any leads on paid technical content gigs!)</li>
<li><strong>And finally, Ted will try to blog more.</strong> The perennial projection. I&rsquo;ve got much to blog about,
including the patterns series, as well as some interesting themes and ideas floating around the ol&rsquo;
brain pan.</li>
</ul>
<p>Happy Holidays, and thanks for reading!</p>
2016 Tech Predictions
Mon, 04 Jan 2016 20:54:34 -0800
http://blogs.tedneward.com/post/2016-tech-predictions/
<p>As has become my tradition now for nigh-on a decade, I will first go back over last year&rsquo;s
predictions, to see how well I called it (and keep me honest), then wax prophetic on what I
think the new year has to offer us.</p>
<h2 id="in-2015:a4c6fac861e72246054ac04aa0f1f2b1">In 2015&hellip;</h2>
<p>As per previous years, I&rsquo;m giving myself either a <b>+1</b> or a <b>-1</b> based on
purely subjective and highly-biased evaluation criteria as to whether it actually happened
(or in some cases at least started to happen before 31 Dec 2015 ended).</p>
<p>In 2015, I said:</p>
<ul>
<li><b>"Big data", "Big data", "Big data". You will get sick of this phrase.</b>
I can't speak for everybody, but I can tell that the end is near for the term, because
suddenly everybody is using the term, and they're using it to mean anything and everything.
"Big data" is not just about doing deep data science analysis on petabytes of data; it's
about any analysis (even simple reporting) on any collection of data (no matter how large
or small) for any reason. <b>+1</b>
</li>
<li><b>"Internet of Things". You will get sick of this phrase, too.</b> This hasn't
quite happened yet, but we're close. IoT is also starting to fray at the edges as a
definition, and when that happens, it's immediately ripe for abuse and marketing. More
importantly, though, lots of people are starting to realize that IoT is not the huge
"automatic win" that we all sort of assumed it would become. <b>+1</b>
</li>
<li><b>"Internet of Medicine" or "Big Med".</b> Well, nobody's started using the term yet,
but certainly they're spending a lot of time in this space. They just don't like my term
yet. I'm pouting over it, but it's still a <b>-1</b>.
</li>
<li><b>"Tech bubble" becomes a "thing".</b> Oh, this one came down to the very wire, but
as December of 2015 rolled in, concerns over the actual valuations of the so-called "unicorns"
were starting to show up, and lots of people were beginning to openly wonder if Silicon Valley
and Wall Street were experiencing a falling-out. It took a hail-mary pass to do it, but I'm
claiming my <b>+1</b>.
</li>
<li><b>C# and Java will both make big announcements.</b> C#6 shipped, but Java9 didn't, leaving
me sort of confused as to how to score this one. However, I did say, "Those who care will
take note, those who don’t, won’t. Really, we’re kind of past the point where either of
those languages are going to be interesting to anyone who’s not already in that space", and
frankly, if you weren't a C# or Java developer, you probably didn't even hear a whisper about
either one (pro/shipping or con/not-shipping), either way. <b>+1</b>
</li>
<li><b>Go is going to either take off, or crash and burn.</b> The point of this one was that
Go was reaching an inflection point, and while I think it's gathering some momentum (including
with me personally--this new blog is using Go to do the site generation), I can't really tell
if it reached an inflection point, so <b>-1</b> to me.
</li>
<li><b>Microsoft acquires Xamarin.</b> Oh, as much as I thought (and still think) that it would
be a great story for both sides, it didn't happen, and probably never will. *sigh* <b>-1</b>
</li>
<li><b>Amazon just quietly keeps churning.</b> I dunno how I would measure this one, but
in some ways, as long as Amazon just keeps churning out new feature after new feature on AWS,
and keeps making money selling stuff on their main web property--which they continue to do--then
I think pretty much anything here qualifies as a <b>+1</b>. But it was kind of a lame
prediction to begin with, now that I re-read it.
</li>
<li><b>Google continues to throw sh*t against the wall, looking for their Next Big Thing.</b> I
believe the exact phrase I used was, "Expect a lot of announcements, a lot of "beta"s, and
none of it with any kind of realistic or even well-planned business model behind it--including
the Google Car." And, sure enough, we've heard a ton about the Google Car, among other
initiatives, but nothing has stepped up as a product yet to even come close to acting as a
second line of income for the firm. I call it an easy <b>+1</b>.
</li>
<li><b>Web use on mobile devices decreases in favor of apps.</b> In particular, I said,
<blockquote>This is going to happen whether the public wants it or not, because companies have
figured out that it behooves them to have you "trapped" inside their app (where they can
control all the content) rather than on their website. More and more websites are going to try
and redirect you to inside their app, rather than allow you to casually browse on their site,
because then they think they "own" your eyeballs. The only way this changes is if/when some
firm gets crushed in the court of public opinion by doing something really stupid...
and that won't happen in 2015. Wait for it in 2016.
</blockquote>
Various "clickbait" sites were the ones I was thinking about in particular, and while some of
them (I'm looking at you, Uberfacts) have floated a mobile app out there, the apps themselves
don't seem to be lighting any fires in the mobile marketplaces. I'll talk more about this
in a bit, but for now, I'm giving myself a <b>-1</b>.
</li>
<li><b>Hipster "Uber for X" apps will be all the rage.</b> Have you been to San Francisco
recently? Talked to anybody on the street there? This one was a slam-dunk <b>+1</b>.
</li>
<li><b>Mark Zuckerberg grows up a little.</b> Zuckerberg will never admit it, but now that he's
married and starting a family and all, he's starting to grow up. His paternity leave step was
a big one, and signals that maybe he's finally ready to "adult" now. If so, he's in good
company--it took Bill Gates getting married and having kids to come in out of the rain, too,
and ever since that time, Bill's become a philanthropist of the highest order. <b>+1</b>
</li>
<li><b>Larry Ellison buys a sports team.</b> Didn't happen yet. <b>-1</b></li>
<li><b>Perl makes one final gasp at relevancy, fails, and begins to decompose.</b> Oh, this one
is funny; how could I be so right, and so wrong at the same time? Wrong, because Perl 6 actually
<a href="https://perl6advent.wordpress.com/2015/12/25/christmas-is-here/">finally shipped</a>. And yet,
so right because... well, how many people do you know using it? Or were even paying attention
when it came out? Or.... Yeah. <b>+1</b>
</li>
</ul>
<p>Nine up, five down. Not bad.</p>
<p>That was the easy part. Now, on to the&hellip;.</p>
<h2 id="2016-predictions:a4c6fac861e72246054ac04aa0f1f2b1">2016 Predictions</h2>
<p>In no particular order:</p>
<ul>
<li><strong>Microsoft will continue to roll out features on Azure, and start closing the gap between it and AWS.</strong>
This one is not hard to imagine. Microsoft is committed to making Azure a core part of their company
success and survival, and Amazon has a list of features that Azure lacks, so it really boils down to
&ldquo;take one down, cross it off the list, lather, rinse, repeat&rdquo;.</li>
<li><strong>(X)-as-a-Service providers will continue to proliferate.</strong> We&rsquo;re seeing a huge surge in these various
companies that are providing some vertical thing as a service, and for most of those they&rsquo;re tech-related
(such as Database-as-a-Service, Container-as-a-Service, and so on). Part of that is because if there&rsquo;s one
thing software developer geeks know, it&rsquo;s what they wish they&rsquo;d had when they were working that
last project. This coming year will mark the
high-water mark of companies that provide *aaS products to the developer community, and then they&rsquo;ll all
start cannibalizing each other and some shutdowns, acquisitions and partnerships will kick in.</li>
<li><strong>Apple will put out two, maybe three new products, and they&rsquo;ll all be &ldquo;meh&rdquo; at best.</strong> Let&rsquo;s be frank,
folks, the luster is off the shiny Apple logo on the side of the building. Tim Cook is no Steve Jobs,
and Apple of 2015 was not the Apple of 2005 or 2010. The Apple Watch is interesting, but it certainly
hasn&rsquo;t taken off. No watch seems to have, in fact, become the &ldquo;it&rdquo; thing. I don&rsquo;t see many of them (or
their Android competitors, to be fair) at tech conferences, and casually glancing around the airport
doesn&rsquo;t show a ton of them in use. I don&rsquo;t think this is going to change any time soon, either. For
most people, the wearable just hasn&rsquo;t really offered up that compelling reason yet, and I don&rsquo;t think
2016 is going to see one, either.</li>
<li><strong>iOS 10 will be called iOSX.</strong> Just because they can, and because it would confuse the hell out of
people, and because Steve Jobs is not here anymore to tell that VP of Marketing to sit down, shut up,
and let the grown ups do this.</li>
<li><strong>Android N will be code-named &ldquo;Nougat&rdquo;.</strong> They might go with &ldquo;Nutella&rdquo;, but that would involve
copyright and trademark issues, which I imagine they&rsquo;d want to avoid.</li>
<li><strong>Java9 will ship.</strong> This is a &ldquo;no-duh&rdquo; prediction, but I&rsquo;m not above claiming a few of those. Really,
the bigger question there will be <em>what</em> will ship that Oracle calls &ldquo;Java9&rdquo;, and my personal feeling
is that modules/Jigsaw/whatever-we&rsquo;re-calling-them-now won&rsquo;t be in it. Slamming a module mechanism
on top of a platform that&rsquo;s a decade old, millions of programmers wide and a billion lines of code
high is not easy, and I don&rsquo;t think Oracle really has the energy, motivation or need to push them
through the morass of headaches that will stem from imposing a module system into place.
<strong>UPDATE</strong>: @olivergierke <a href="https://twitter.com/olivergierke/status/684642273561329664">points out</a>
that Java9 had already slipped to 2017, so this one is automatically going to be a &ldquo;miss&rdquo; next
January. <a href="http://www.infoworld.com/article/3011445/java/java-9-delayed-by-slow-progress-on-modularization.html">The article he cites</a>
says that Oracle &ldquo;blamed the delay on complexities in developing modularization&rdquo;, a la Project
Jigsaw. Honestly, I&rsquo;m going to stand by this prediction, because it would not surprise me in the
slightest if Oracle comes back at some point in 2016 and says, &ldquo;You know what? Fuck it.&rdquo; and ships
Java9 without modularization in place&ndash;I don&rsquo;t really think Java9 needs it at this point, and I&rsquo;m
not entirely sure that shipping <em>with</em> it will make Java all that much better. Time will
tell&hellip; (For those who haven&rsquo;t seen what a Jigsaw module declaration actually looks like,
there&rsquo;s a sketch after this list.)</li>
<li><strong>Facebook will start looking for other things to do.</strong> Yes, Facebook has been ridiculously
successful to date; it claims more population than most nations on Earth, in fact. But the company
is led by a classic Type-A personality, and the softening of his character by the birth of his
firstborn notwithstanding, this is when Zuckerberg comes back from leave and says, &ldquo;OK, boys and
girls, it&rsquo;s time to take us down a new path!&rdquo; and charges off into who-the-Hell-knows-what. I won&rsquo;t
hinge the prediction on <em>what</em> that would be, I just think it&rsquo;ll be something outside of the social
media realm (or tied to it just a little bit).</li>
<li><strong>Google will continue to quietly just sort of lay there.</strong> Google, for all that they are on the top
of everybody&rsquo;s minds since that&rsquo;s the search engine most of us use, hasn&rsquo;t really done much by way of
software product invention recently. Google+, Google Hangouts, yeah, sure, that was so 2013, but
what have you done for us lately? And honestly, what have they done recently, in 2015? Casting back
through my memory (and setting Android off to the side, since I consider that more or less an
independent effort in a lot of ways), I came up with nothing. I suspect the same will be true of
2016&ndash;they will continue to do lots of innovative things, but it&rsquo;ll all be &ldquo;big&rdquo; and &ldquo;visionary&rdquo;
stuff, like the Google Car, that won&rsquo;t have immediate impact or be something we can use in 2016
(or 2017).</li>
<li><strong>Oracle will quietly continue to work on Java.</strong> Oracle took a bit of a PR hit this year when they
fired/let go a number of &ldquo;Java evangelists&rdquo;, and that set the newsstands aflame with hints and
rumors that Oracle was getting ready to abandon Java. Frankly, if I&rsquo;m Larry Ellison (or the VP
that has Java under my umbrella), I&rsquo;m asking a very fundamental question: What the hell does Java
need with evangelists at this point? Everybody more or less knows what it is already, there&rsquo;s
nothing to sell in of itself, and that money could probably be put to better use hiring people to
work on the codebase itself, or putting the cash back into the rest of the firm to hire a few more
Oracle Database salespeople. Oracle didn&rsquo;t acquire Java because they saw it as a way to inflict
the Oracle Database upon the world&ndash;quite the opposite. Oracle acquired Java because they <em>use</em>
Java, all over the place, and this way they had control over a technology that they had &ldquo;bet the
farm&rdquo; on in a variety of ways. They&rsquo;re not going to kill it, but there&rsquo;s really not a whole lot
of need to go around preaching its message, either. So they let the evangelists go, and they&rsquo;ll
just keep on keepin&rsquo; on.</li>
<li><strong>C# 7 will be a confused morass.</strong> Microsoft is now striding boldly into that open-source world
it timidly courted just a few years ago. But in a lot of ways, this is highly uncharted territory
for the software giant, and for the OSS world as well. Sure, Linus has been releasing Linux kernel
after Linux kernel for years, but with himself as the autocrat in charge of it all. Microsoft wants
to make use of the open source man-hours to help advance the cause of C# 7, but whether they&rsquo;ve
smoothed out what that process will look like and/or how they will deal with the inevitable
conflicts between committers and company isn&rsquo;t yet clear. (Oracle is in this same boat, in a lot
of ways, and there&rsquo;s a lot of people who think that Java is too much Oracle, not enough OSS, so
to speak.) I think the C# 7 release will be one of the first that the world gets to see take
shape in a purely public forum, and people will be a bit confused and surprised at how chaotic
a product release can really be. (Yes, C# 6 was sort of in that same boat, but only a handful
of folks were really paying attention.)</li>
<li><strong>Another version of Visual Basic will ship, and nobody will really notice.</strong> Actually, that
already happened&ndash;remember when C# 6 shipped? They shipped a new version of VB then, too.
Alas, the ship has sailed on VB, and frankly, at this point, it&rsquo;s really just a husk of its
former self. Most of the VB luminaries are all speaking and/or writing in C# these days, and
only staunch loyalty to their fond memories of the language is what keeps it at all in the
conversation anymore. Sad, but&hellip; Oh, well.</li>
<li><strong>Apple will now learn the &ldquo;joys&rdquo; of growing a language in the public as well.</strong> Swift is now
open-source, and that will bring with it the same pains as what Oracle and Microsoft are feeling.
Enjoy, guys!</li>
<li><strong>Ted will continue to layer in a few features into the blog engine.</strong> For example, right now
I have no comments feature, and I suspect people will want to start telling me how incredibly
<strong>wrong</strong> I am about so many of these. So, on the docket already, Disqus or Discourse or some
other JavaScript-based comment-engine integration. Plus, I want to tweak the template I&rsquo;m using
for the blog&rsquo;s look and feel a little (although keeping it way simple, especially compared
to what I had before), so there&rsquo;s likely to be more than a few tweaks here and there. (Again,
not really a hard prediction to make, but I always like to close on a prediction that I have
roughly a .9 probability of hitting.)</li>
</ul>
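<p>(Since Jigsaw came up above: for anyone who hasn&rsquo;t seen it, this is roughly what
modularization looks like in practice&mdash;a <code>module-info.java</code> at the root of each
module, declaring what it needs and what it exposes. The module and package names here are made
up for illustration:)</p>
<pre><code>// A Jigsaw module declaration (module-info.java); names are hypothetical.
module com.example.orders {
    requires java.sql;              // depend on a platform module
    requires com.example.billing;   // depend on another application module
    exports com.example.orders.api; // only this package is visible to consumers
}</code></pre>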
<p>Happy Holidays, and thanks for reading!</p>
Peoples be talkin'...
Wed, 04 Jun 2014 04:13:43 -0800
http://blogs.tedneward.com/post/peoples-be-talkin/
<p>
"Ted, where the hell did you go?"
</p>
<p>
I've been getting this message periodically over a variety of private channels, asking
if I've abandoned my blog and/or if I'm ever going to come back to it. No, I haven't
abandoned it, yes, I'm going to come back to it, but there's going to be a few changes
to my online profile that I'll give you a heads-up around... if anybody cares. :-)
</p>
<p>
First of all, <a href="http://blogs.tedneward.com/2013/12/10/On+Endings.aspx">as I
mentioned before</a>, LiveTheLook and I parted ways back at the end of 2013. Sad,
but every cloud has a silver lining in that I found a new home as the CTO of <a href="http://www.itrellis.com">iTrellis</a>,
a custom software development and IT continuous improvement consultancy. And therein...
lies the root of my problem.
</p>
<p>
Truth time: I'm ridiculously busy. And even more ridiculously happy.
</p>
<p>
For years now, almost a full decade in fact, people have been asking me when I was
going to start up a consulting company. (In fact, before I jumped in to LiveTheLook,
I interviewed for a job at GitHub, and Phil Haack, who's known me for years, expressed
outright surprise at the idea. "I've always pictured you as the consummate consultant--what
makes you want to go work at a product company?" And truth was, he was right--the
idea of working for a product company (like GitHub) wasn't really a strong appeal.
What was appealing was the idea of growing a team, managing a group of developers,
making them better as a group in a variety of manager-y ways. That was a large part
of the attraction of LiveTheLook, though I never got to the point of hiring anyone
to work with me there.) My response to people has always been the same: I believe
that a company needs a triumvirate of people at the top--one to handle sales/marketing/business
development, one to handle the technology, and one to handle the operations. I could
never seem to find a great biz-dev guy, nor a great ops guy, and so thoughts of building
a consulting firm were pretty far off in the distance.
</p>
<p>
But after LtL, a mutual acquaintance heard that I was looking, and he knew two guys
who were looking for a CTO for this new consulting company they were spinning up.
Chris (CEO) and Paul (CFO) and I met a few times. Chris and I in particular spent
a fair amount of time talking, weighing the mutual decision to jump into this thing
together, because it was obvious from the very beginning that he and I would need
to be able to work well together--if he was going to go off and do biz-dev, he had
to trust that I could carry the implementation through, and I needed to trust that
he wasn't going to sell a bill of goods that was impossible for me to deliver while
he did it, and so on and so on and so on.
</p>
<p>
Six months later, we're at four current clients (with a fifth one scheduled to spin
up in July), five billable consultants (including Chris and me, working together to
do an IT assessment project for a $10bn business unit of a $100bn company out on the
East Coast), and there's strong evidence to suggest that we'll crest the $1mn mark
in our first year of existence.
</p>
<p>
Yeah... it's been a fun ride so far. :-) And neither Chris nor I have any intention
of slowing down any time soon.
</p>
<p>
But, what I'm finding is that between billable hours, biz-dev meetings, implementation
meetings, one-on-ones with my people, speaking, and writing for the various publications
I still write for, I have almost no energy left to blog. At least, for now.
</p>
<p>
I have plans, though. Here's what I'm looking to do:
</p>
<ul>
<li>
First, we're going to stand up an iTrellis blog, and a lot of technical content I
write will be hosted in both places (there and here), where and when it makes sense.
Maybe, over time, the content will shift in quantity to over there, but I'll probably
always keep this channel open in some fashion.</li>
<li>
Second, I want to spin up a "personal blog", one in which I feel more comfortable
expressing completely non-technical ideas and topics, including politics and such.
That way, those who are interested in just the technical content can still get that,
and those who want to hear what I think about the rest of the world can tune in on
a separate channel.</li>
<li>
Third, I'll likely migrate this content into a new technical blog over at the "new"
professional website I'm slowly building out for myself, at <a href="http://www.newardassociates.com">www.newardassociates.com</a>.
That will eventually, over time, become the only technical channel I use, but I'll
set something up at this domain to redirect links to the corresponding blog entries
over there. That is going to be the real PITA in all of this, because I really want
to preserve the old links without having to stand up the same blog system over there.
(I'm "done" with the idea of a server-side processed blog--the blog entries should
be just plain ol' HTML, generated from whatever source I choose to write in, a la
Jekyll and its ilk. Plus, I never again want a blog with anything other than tech-agnostic
URLs; the whole ".../On+Endings.aspx" thing is soooooo 1997. Why should you--or I--care
what the underlying implementation is?) A sketch of the kind of redirect shim I have in mind
follows this list.</li>
</ul>
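<p>
(If it does come to writing that redirect shim by hand, the shape of it is simple enough. Here's a
minimal sketch using the JDK's built-in com.sun.net.httpserver--the mapping table entry is
hypothetical, and a real table would be generated from the old blog's archive:)
</p>
<pre><code>import com.sun.net.httpserver.HttpServer;
import java.net.InetSocketAddress;
import java.util.HashMap;
import java.util.Map;

// Map old dasBlog-style links to new permalinks, answering each with a 301.
public class LegacyRedirector {
    public static void main(String[] args) throws Exception {
        Map<String, String> legacy = new HashMap<>();
        legacy.put("/2013/12/10/On+Endings.aspx",                      // old dasBlog URL
                   "http://www.newardassociates.com/blog/on-endings"); // hypothetical new home

        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        server.createContext("/", exchange -> {
            String target = legacy.get(exchange.getRequestURI().getPath());
            if (target == null) {
                exchange.sendResponseHeaders(404, -1);   // unknown old link
            } else {
                exchange.getResponseHeaders().set("Location", target);
                exchange.sendResponseHeaders(301, -1);   // permanent redirect
            }
            exchange.close();
        });
        server.start();
    }
}</code></pre>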
Of course, like all plans, this is subject to change based on whatever obstacles pop
up to distract me. ("Want to make God laugh? Tell Him your plans." --old Yiddish proverb)
<p>
(By the way, if you have any experience with taking a dasBlog blog and redirecting
the links over to a new site, please email me how you did it and/or what tools you
used to do it. I'd really prefer to not have to write that redirect handler myself,
if I can help it. I don't even care too much about the comments--it's the entry links
I really want to preserve. I'm even willing to discuss payment measured in bottles
of Scotch... :-) )
</p>
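<p>(For the morbidly curious, the thing I'm hoping to avoid writing looks roughly like this--a naive ASP.NET sketch that handles only the title-style permalinks, and that assumes a destination URL scheme I haven't actually settled on yet:)</p>
<pre>
// A naive sketch, not a tested solution: 301-redirect old dasBlog
// title-style permalinks ("/On+Endings.aspx") to new static-site slugs.
// It ignores dasBlog's GUID-style permalinks and comment links entirely;
// those would need a lookup table exported from the old content files.
// The destination URL pattern below is a placeholder, not the real one.
using System;
using System.IO;
using System.Web;

public class Global : HttpApplication
{
    protected void Application_BeginRequest(object sender, EventArgs e)
    {
        string path = Request.Url.AbsolutePath;
        if (path.EndsWith(".aspx", StringComparison.OrdinalIgnoreCase))
        {
            // "On+Endings.aspx" -> "on-endings"
            string title = Path.GetFileNameWithoutExtension(path);
            string slug = title.Replace("+", "-").ToLowerInvariant();
            Response.RedirectPermanent(
                "http://www.newardassociates.com/blog/" + slug + "/");
        }
    }
}
</pre>
<p>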
I will, at a minimum, promise to keep up the Tech Predictions, though, no matter what
else happens. That's an eight-year tradition that I have absolutely no intention of
ever giving up. Even when I'm old and crotchety and every prediction reads, "I remember
when Swift was first released... you young'uns have NO IDEA what it was like to actually
type your code into an editor. It was hard! It was painful on the fingers! And WE
LIKED IT!"
</p>
Seattle (and other) GiveCampshttp://blogs.tedneward.com/post/seattle-and-other-givecamps/
Thu, 29 Aug 2013 12:19:45 -0700http://blogs.tedneward.com/post/seattle-and-other-givecamps/<p>Too often, geeks are called upon to leverage their technical expertise (which, from most non-technical people's perspective, is an all-encompassing uni-field, meaning if you are a DBA, you can fix a printer, and if you are an IT admin, you know how to create a cool HTML game) on behalf of their friends and family, often without much in the way of gratitude. But sometimes, you just gotta get your inner charitable self on, and what's a geek to do then? Doctors have "Doctors Without Borders", and lawyers can always do work "pro bono" for groups like the Innocence Project and so on, but geeks...? Sure, you could go and join the Peace Corps, but that's hardly going to really leverage your skills, and Lord knows, there's a ton of places (charities) that could use a little IT love while you're off in a damp and dismal jungle somewhere.</p>
<p>(Not you, Seattle. You're just damp today. Dismal won't be for another few months, when it's raining for weeks on end.)</p>
<p>(As if in response, the rain comes down even harder.)</p>
<p>About five or so years ago, a Microsoft employee realized that geeks didn't really have an outlet for their desires to volunteer and help out in their communities through the skills they have patiently mastered. So Chris created <a href="http://givecamp.org/">GiveCamp</a>, an organization dedicated to hosting "GiveCamps" all over the US, bringing volunteer developers, designers, and other IT professionals together with charities that need some IT love, whether that's in the form of a new mobile app, some touch-up on the website, a port from a Microsoft Access app to something even remotely more modern, or whatever.</p>
<p><a href="http://www.seattlegivecamp.org/">Seattle GiveCamp</a> is coming up, October 11-13, at the Microsoft Commons. No technical bias is implied by that--GiveCamp isn't an evangelism event, it's a "let's help people" event. Bring your Java, PHP, Python, and yes, maybe even your Perl, and create some good karma for groups that are doing good things. And for those of you not local to Seattle, there's lots of other GiveCamps being planned all over the country--consider volunteering at one nearby.</p>
On speakers, expenses, and stipendshttp://blogs.tedneward.com/post/on-speakers-expenses-and-stipends/
Mon, 26 Aug 2013 20:09:01 -0700http://blogs.tedneward.com/post/on-speakers-expenses-and-stipends/<p>In the past, I've been asked about <a href="http://channel9.msdn.com/Shows/HanselminutesOn9/Hanselminutes-on-9-The-Death-of-the-Professional-Conference-Speaker">my thoughts on conferences and the potential "death" of conferences</a>, and the question came up again more recently in a social setting. It's been a while since I commented on it, and if anything, my thoughts have only gotten sharper and clearer.</p>
<h3>On speaking professionally</h3>
<p>When you go to the dentist's office, who do you want holding the drill--the "enthused, excited amateur", or the "practiced professional"?</p>
<p>The use of the term "professional" here, by the way, is not in its technical use of the term, meaning "one who gets paid to perform a particular task", but more in a follow-on to that, meaning, "one who takes their commitment very seriously, and holds themselves to the same morals and ethics as one who would be acting in a professional capacity, particularly with an eye towards actually being paid to perform said task at some point". There is an implicit separation between someone who plays football because they love it, for example, going out on Sunday afternoons and body-slamming other like-minded individuals just because of the adrenaline rush and the male bonding, and those who go out on Sunday afternoons and command a rather decently-sized salary ($300k at a minimum, I think?) to do so. Being a professional means that not only is there a paycheck associated with the activity, but a number of responsibilities--this means not engaging in stupid activity that prevents you from being able to perform your paid activity. In the aforementioned professional athlete's case, this means not going out and doing backflips on a dance floor (*ahem*, Gronkowski) or playing some other sport at a dangerous level of activity. (In the professional speaker's case, it means arranging travel plans to arrive at the conference at least a day before your session--never the day of--and so on.)</p>
<p>For a lot of people, speaking at an event is an opportunity for them to share their passion and excitement about a given topic--and I never want to take that opportunity away from them. By all means, go out and speak--and maybe in so doing, you will find that you enjoy it, and will be willing to put the kind of time and energy required into doing it well.</p>
<p>Because, really, at the end of the day, the speakers you see in the industry that are very, very good at what they do, they weren't just "born" that way. They got that way the same way professional athletes got that way, by doing a lot of preparation and work behind the scenes. They got that way because they got a lot of "first team reps", speaking at a variety of events. And they continue to get better because they continue to speak, which means continuously putting effort and energy into new talks, into revising old talks, and so on.</p>
<p>But all of that time can't be for free, or else people won't do it.</p>
<p>Go back to the amateur athlete scenario: the more time said athlete has to work at a different job to pay the bills, the less time they have to prep and master their athletic skills. This is no different for speakers--if someone is already spending 8 hours a day working, and another 6 to 8 hours a day sleeping, then that's 8 to 10 hours in the day for everything else, including time spent with the family, eating, personal hygiene, and so on, including whatever relaxation time they can carve out. (And yes, we all need some degree of relaxation time.) When, exactly, is this individual, excited, passionate, enthused (or not), supposed to get those "first team reps" in? By sacrificing something else: time with the family, sleep, a hobby, whatever.</p>
<p>Don't you think that they deserve some kind of compensation for that time?</p>
<p>I know, I know, the usual response is, "But they're giving back to the community!" Yes, I know, you never really figured anything out on your own, you just ran off to StackOverflow or Google and found all the code you needed in order to learn the new technology--it was never any more effort on your own part than that. You OWE the community this engagement. And, by the way, you should also owe them all the code you ever write, for the same reason, because it's not like your employer ever gave you anything for that code, and it's not like you did all that research and study for the code you work on for them.</p>
<p>See, the tangled threads of "why" we do something are often way too hard to unravel. So let's instead focus on the "what" you did. You submitted an abstract, you created an outline, you concocted some slides, you built some demos, you practiced your talk, you delivered it to the audience, and you submitted yourself to "life's slings and arrows" in the form of evaluations. And for all that, the conference organizers owe you nothing? In fact, you're required to pay for the privilege of doing all that?</p>
<h3>On "professional" conferences</h3>
<p>One dangerous trend I see in conferences, and it's not the same one I saw in 2009, is that the main focus of a conference is shifting; no longer is it a gathering of like-minded professionals who want to improve their technical skills by learning from others. Instead, it's turning into a gathering of people who want to party, play board games, gorge themselves on bacon, drink themselves into a stupor, play in a waterpark, or go catch a Vegas show with naked women in it. Somehow, "professional developer conference" has taken on all the overtones of a Bacchanalian orgy, all in the name of "community".</p>
<p>Don't get me wrong--I think it can be useful to blow off some steam during a show, particularly because for most people, absorbing all this new information is mentally exhausting, and you need time to process it, both socially (in the form of hallway conversations) and physically (meaning, go give your body something to do while your mind is churning away). But when the focus of the conference shifts from "speakers" to "bacon bar", that's a dangerous, dangerous sign.</p>
<p>And you know what the first sign is that the conference doesn't think its principal offering is the technical content? When they won't even cover the speakers' costs to be at that event.</p>
<p>Seriously, think about it for a moment: if the principal focus of this event is the exchange of intellectual and industrial information, through the medium of a lecture given by an individual, then where should your money go? The bacon bar? Or towards making sure that you have the best damn lecturers your budget can afford?</p>
<p>When a conference doesn't offer to pick up airfare and hotel, then in my mind that conference is automatically telling the world, "We're willing to bring in the best speakers that are willing to do this all for free!" And how many of you would be willing to eat at a restaurant that said, "We're willing to bring in the best chefs that are willing to cook for free!"? Or go to a hospital that brings in "the best doctors that are willing to operate for free!"?</p>
<p>And how many of you are willing to part with your own money to go to it?</p>
<p>For community events like CodeCamps, it's an understood proposition that this is more about the networking and community-building than it is about the quality of the information you're going to get, and frankly, given that the CodeCamp is a free event, there's also an implicit "everybody here is a volunteer" that goes with it that explains--and, to my mind, encourages--people who've never spoken before to get up and speak.</p>
<p>But when you're a CodeMash, a devLink, or some of these other shows that are charging you, the attendee, a non-trivial amount of money to attend, and they're not covering speakers' expenses at a minimum, then they're telling you that your money is going towards bacon bars and waterparks, not the quality of the information you're receiving.</p>
<p>Yes, there are some great speakers who will continue to do those events, and Gods' honest truth, if I had somebody to cover my mortgage and/or paid me to be there, I'd love to do that, too. But many of those people who are paid by a company to be speaking at events are called "evangelists" and "salespeople", and developers have already voted with their feet often enough to make it easy to say that we don't want a conference filled with "evangelists" and "salespeople". You want an unbiased technical view of something? You want people to talk about a technology that don't have an implicit desire to sell it to you, so that they can tell you both what it's good for and where it sucks? Then you want speakers who aren't being paid by a company to be there; instead, you want speakers who can give you the "harsh truth" about a technology without fear of reprisal from their management. (And yes, there are a lot of evangelists who are very straight-shooting speakers, and I love 'em, every one. But there's a lot more of them out there who aren't.)</p>
<p>In many cases, for the conference to deliver both the bacon bar and the speakers' T&E, it would require your attendance fee to go up some. By rough back-of-the-napkin calculations, probably about $50 for each of you, depending on the venue, the length of the conference, the number of speakers (and the number of talks they each do), and the total number of attendees. Is it worth it?</p>
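<p>(Purely illustrative numbers, since every show differs: a conference fielding 40 speakers at roughly $1,250 apiece in airfare and hotel is looking at $50,000 in speaker travel costs; spread that across 1,000 attendees, and there's the $50.)</p>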
<p>When you go to the dentist's office, do you want the "excited, enthused amateur", or the "practiced professional"?</p>
On startupshttp://blogs.tedneward.com/post/on-startups/
Mon, 26 Aug 2013 14:37:25 -0700http://blogs.tedneward.com/post/on-startups/<p>Curious to know what Ted's been up to? Head on over to <a href="http://signup.livethelook.com">here</a> and sign up.</p>
<p>Yes, I'm a CTO of a bootstrap startup. (Emphasis on the "bootstrap" part of that--always looking for angel investors!) And no, we're not really in "stealth mode", I'll be happy to tell you what we're doing if you drop me an email directly; we're just trying to "manage the message", in startup lingo.</p>
<p>We're only going to be under wraps for a few more weeks before the real site is live. And then.... *crossing fingers*</p>
<p>Don't be too surprised if the tone of some of my blog posts shifts away from low-level tech stuff and starts to include some higher-level stuff, by the way. I'm not walking away from the tech, by any stretch, but becoming a CTO definitely has opened my eyes, so to speak, that the entrepreneur CTO has some very different problems to think about than the enterprise architect does.</p>
Programming Interviewshttp://blogs.tedneward.com/post/programming-interviews/
Mon, 19 Aug 2013 21:30:55 -0700http://blogs.tedneward.com/post/programming-interviews/<p>Apparently I have become something of a resource on programming interviews: I've had three people tell me they read the last two blog posts, one because his company is hiring and he wants his people to be doing interviews right, and two more expressing shock that I still get interviewed--which I don't really think is all that fair, more on that in a moment--and relief that it's not just them getting grilled on areas that they don't believe to be relevant to the job--and more on that in a moment, too.</p>
<p>A couple of things have emerged in the last few weeks since the saga described earlier, so I thought I'd wrap the thing up with a final post. Besides, I like things that come in threes.</p>
<p><b>First, go see <a href="http://channel9.msdn.com/Events/ALM-Summit/ALM-Summit-3/Technical-Interviewing-You-re-Doing-it-Wrong">this video</a>.</b> Jonathan pinged me about it shortly after the second blog post came out, and damn if he and Mitch don't nail a bunch of things directly on the head. Specifically, I want to call out two lists they put into their slides (which I can't find online, or I'd include a link, sorry).</p>
<p>One, what are the things you're trying to answer in an interview? They call it out as three questions an interviewer or interview team is seeking to answer:
<ol>
<li>Can they do the job?</li>
<li>Will they be motivated?</li>
<li>Would they get along with the team?</li>
</ol>
Personally, #2 to me is a red herring--frankly, I expect that if you, the candidate, take a job with my company, then either you have determined that you will be motivated to work here, or else you can force yourself to be. I don't really expect you to be the company cheerleader (unless, of course, I'm hiring you for that role), but I do expect professionalism: that you will be at work when you are scheduled or expected to be, that you will do quality work while you are there, and that you will look to make the best decisions possible given the information you have at the time. Motivation is not something I should be interviewing for; it's something you should be bringing.</p>
<p>But the other two? Spot-on.</p>
<p>And this brings me to my interim point: <b>I'm not opposed to a programming test.</b> I think I gave the impression to a number of readers that I think that I'm too good or too famous or whatever to be tested on my skills; that's the furthest thing from the truth. I think you most certainly should be verifying that I have the technical chops to do the job you want me to do; what I do want to suggest, however, is that for a number of candidates (myself included), there are ways to determine my technical chops without forcing me to stand at a whiteboard and code with a pen. For some candidates, you can examine their GitHub profile and see how many repos they have that're public (and have a look through some of the code they wrote). In fact, what I think would be a <i>great</i> interview question would be to look at a repo they haven't touched in a year, find some element of the code inside there, and ask them to explain what they were thinking when they wrote it. If it's well-documented, or if it's simple code, they'll be able to do that fairly quickly (once they context-swap to the codebase--got to give them time to remember, after all). If it's a complex or tricky bit, and they can't explain it...</p>
<p>... well, you just learned something about the code they write, now didn't you?</p>
<p>In my case, I have no public GitHub profile to choose from, but I'm an edge case, in that you can also watch my videos, and/or read my books and articles. Granted, there's a chance that I have amazing editors who save me from incredible stupidity and make me look good... but what are the chances that somebody is doing that for over a decade, across several technology platforms, and all without any credit? Probably pretty close to nil, IMHO. I'm not unique in this case--there's others whose work more or less speaks for itself, and I think you're disrespecting the candidate if you don't do your homework on the interview first.</p>
<p>Which, by the way, brings up another point: As an interviewer, you have a responsibility to do your homework on the candidate before they walk in the door, particularly if you're expecting them to have done their homework on your firm. Don't waste my time (and yours, particularly since yours is probably a LOT more expensive than mine, considering that a lot of companies are doing "interview loops" these days with a team of people, and all of their time adds up). If you're not going to take my candidacy seriously, why should I take your job or job offer or interview seriously?</p>
<p>The second list Jon and Mitch call out is their "interviewing antipatterns" list:
<ul>
<li>The Riddler</li>
<li>The Disorienter</li>
<li>The Stone Tablet</li>
<li>The Knuth Fanatic</li>
<li>The Cram Session</li>
<li>Groundhog Day</li>
<li>The Gladiator</li>
<li>Hear No Evil</li>
</ul>
I want you to watch the video, so I'm not going to summarize each here; go watch it. If you're in a position of doing hiring, ask yourself how many of those you yourself are perpetrating.</p>
<p><b>Second, go read <a href="http://firstround.com/article/The-anatomy-of-the-perfect-technical-interview-from-a-former-Amazon-VP">this article</a>.</b> I don't like that he has "Dig into algorithms, data structures, code organization, simplicity" as one of his takeaways, because I think most interviewers are going to see "algorithms" and "data structures" and stop there, but the rest seems pretty spot-on.</p>
<p><b>Third, ask yourself the critical question: What, exactly, are we doing wrong?</b> You think you're an agile organization? Then ask yourself how much feedback you get on your interviewing process, and how you would know if you screwed it up. Yes, you will know if you hire a bad candidate, but how will you know if you're letting good candidates go? Maybe you're the hot company that everybody wants to work at, and you can afford to throw some wheat out with the chaff a few times, but you're not going to stay in that position for long if you do--and more importantly, you're not going to stay in that position forever, period. If you don't start trying to improve your hiring process now, by the time you need to, it'll be too late.</p>
<p><b>Fourth, practice!</b> When unit-testing came out, many programmers said, "I don't need to test my code, my code is great!", and then everybody had a good laugh at their expense. Yet I see a lot of companies say essentially the same thing about their hiring and interview practices. How do you test an interview process? Easy--interview yourselves. Work with known-good conditions (people you know, people who work with you already, and so on), and run them through the process, but with the critical stipulation that <i>you must treat them exactly as you would a candidate</i>. If you look at your tech lead and say, "Yeah, this is where I'd ask you a technical question, but I already know...", then unless you're prepared to do that for your candidates, you're cheating yourself on the feedback. It's exactly like saying, "Yeah, this is where I'd write a test checking to see how we handle a null in that second parameter, but I already know...". If you're not prepared to do the latter, don't do the former. (And if you are prepared to do the latter, then I probably don't want to work with you anyway.)</p>
<p><b>Fifth, remember: Interviewing is not easy!</b> It's not easy on the candidates, and it shouldn't be on you. It would be great if you could just test somebody on one dimension of themselves and call it good, but as much as people want to pretend that a programmer is just a code-spewing cog in a machine, they're not. If you want well-rounded candidates, then you must interview all aspects of that well-roundedness to determine if they are or not.</p>
<p>Whatever you interview for, that's what you will get.</p>
On "Exclusive content"http://blogs.tedneward.com/post/on-exclusive-content/
Mon, 19 Aug 2013 19:17:56 -0700http://blogs.tedneward.com/post/on-exclusive-content/<p>Although it seems to have dipped somewhat in recent years, periodically I get requests from conferences or webinars or other presentation-oriented organizations/events that demand that the material I present be "exclusive", usually meaning that I've never delivered said content at any other organized event (conference or what-have-you). And, almost without exception, I refuse to speak at those events, or else refuse to abide by the "exclusive" tag (and let them decide whether they still want me to speak for them).</p>
<p>People (by which I mean "organizers"--most speakers seem to get it intuitively if they've spoken at more than five or so conferences in their life) have expressed some surprise and shock at my attitude. So, I decided to answer some of the more frequently-asked questions that I get in response to this, partly so that I don't have to keep repeating myself (yeah, right, as if said organizers are going to read my blog) and partly because putting something into a blog is a curious form of sanity-check, in that if I'm way off, commenters will let me know posthaste.</p>
<p>Thus...:
<ul>
<li><b>"Nobody will come to our conference/listen to our webinar if the content is the same as elsewhere."</b> This is, by far, the first and most-used reaction I get, and let me be honest: if people came to your conference or fired up your webinar solely for the information contained, they would never come at all--because the information is already out there. The Internet is huge. Mind-staggeringly huge. Anything you could possibly ever want to know, about any topic you could ever possibly imagine, is captured somewhere. (There's a corollary to that, too; I call it "Whittington's Law", which states, "Anything you can possibly imagine, the Internet not only has it, but a porn site version of it, as well".) You will never have exclusive content, because unless I invented the damn thing, and I've never shown it to anybody or ever used it before, somebody will likely have used it, written a blog post or a video tutorial or what-have-you, and posted it to the Internet. Therefore, by definition, it can't be exclusive.</li>
<li>But even on top of that first point, no presentation given by the same guy using the same slides is ever exactly the same. Anybody who's ever seen me give a talk twice knows that a lot of how I give my presentations is extremely ad-hoc; I like to write code on the fly, incorporate audience feedback and participation, and sometimes I even get caught up in a tangent that we explore along the way. None of my presentations are ever scripted, such that if you filmed two of them and played them side-by-side, you'd see marked and stark differences between them. And frankly, if you're a conference organizer, you should be quite happy about this, because one of the first rules of presenting is to "Know thy audience", but if you can't know your audience ahead of time, what course is left to you but to poll the audience when you first get started, and adjust your presentation based on that?</li>
<li><b>"Sure, the experience won't be as great as if they were in the room at the time, but if they can get the content elsewhere, why should they come to our conference?"</b> Well.... Honestly, that question really needs to be rephrased: "Given all the vast amounts of information out there on the Internet, why should someone come to your conference, period?" If you and your fellow organizers can't answer that question, then my content isn't going to help you in the slightest. TechEd and other big conferences that stream all of their content to the Web seem to be coming to the realization that there is something about the in-person experience that still creates value for attendees, so maybe you should be thinking about that, instead. Yes, you will likely lose a few ticket sales from people watching the content online, but if those numbers are staggeringly large, it means that your conference offered nothing but content in the first place, and you were going to see those numbers drop off significantly anyway once the majority of your audience figured out that the content is available elsewhere. And for free, no less.</li>
<li><b>"But why is this so important to you?"</b> Because, my friends, everything gets better with practice, and that includes presentations. When I taught for <a href="http://www.develop.com">DevelopMentor</a> lo those many years ago, one of the fundamental rules was that "You don't really know a deck until you've delivered it five times". (I call it "Sumida's Law", after the guy who trained me there.) What's more, the more often you've presented on a subject, the more easily you see the "right" order to the topics, and better ways of explaining and analogizing those topics occur to you over time. ("Halloway's Corollary to Sumida's Law": "Once you've delivered a deck five times, you immediately want to rewrite it all".) To be quite honest with you all, the first time I give a talk is much like the beta release of any software product: it takes user interaction and feedback before you start to see the non-obvious bugs.</li>
</ul>
</p>
<p>I still respect the conference or webinar host that insists on exclusive content, and I wish you well finding your next speaker.</p>
More on the Programming Tests Sagahttp://blogs.tedneward.com/post/more-on-the-programming-tests-saga/
Thu, 25 Jul 2013 14:19:48 -0700http://blogs.tedneward.com/post/more-on-the-programming-tests-saga/<p>A couple of people had asked how the story with the company that triggered the "I Hate Programming Tests" post ended, so I figured I'd follow up with the rest of that story, and some thoughts.</p>
<p>After handing in the disjoint-set solution I'd come up with, the VP pondered things for a bit, then decided to bring me in for an in-person interview loop with a half-dozen of the others who work there. I said I'd be happy to, and came in, did a brief meet-and-greet with the group of folks I'd be interviewing with (plus, I think, a few others), and then we got to the first interview, mano-a-mano, and after a brief "Are you familiar with MVC?", we get into...</p>
<p>... another algorithm challenge. A walk-up-to-the-whiteboard-and-code-this challenge.</p>
<p>OK, whatever. I already said I'm not great with algorithmic challenges like this, but maybe this guy didn't get the memo or he's just trying to see how I reason things through. So, sure, let's attack this, even though I haven't done this kind of problem in like twenty years. (One of the challenges was "How do you sort a file of integer numbers when you can't store the entire collection of numbers in memory?", which wasn't an unfair challenge, just not something that I generally have to mess with. Honestly, in the working world, I'll start by going through the file number by number--or do chunks of the file in parallel using actors, if the file is large enough--and shove them into a database that's indexed on that number. But, of course, as with all of these kinds of challenges, the interviewer continues to throw constraints at the problem until we either get to the solution he wants or Ted runs out of imagination; in this case, I think it was the latter.) End result: not a positive win.</p>
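<p>(For the record, the answer such interviewers are usually fishing for is an external merge sort: sort memory-sized chunks, spill each to a temp file, then merge the sorted chunk files. A rough C# sketch, assuming one integer per line in the file; a real version would use a priority queue for the merge instead of the linear scan:)</p>
<pre>
// Phase 1: sort fixed-size chunks in RAM and spill each to a temp file.
// Phase 2: k-way merge the sorted chunk files into the output file.
using System.Collections.Generic;
using System.IO;
using System.Linq;

static class ExternalSort
{
    public static void Sort(string inputPath, string outputPath, int chunkSize)
    {
        var chunkFiles = new List<string>();
        using (var reader = new StreamReader(inputPath))
        {
            var chunk = new List<int>(chunkSize);
            string line;
            while ((line = reader.ReadLine()) != null)
            {
                chunk.Add(int.Parse(line));
                if (chunk.Count == chunkSize) { chunkFiles.Add(Spill(chunk)); chunk.Clear(); }
            }
            if (chunk.Count > 0) chunkFiles.Add(Spill(chunk));
        }

        // Merge: repeatedly emit the smallest "head" value among all chunk files.
        var readers = chunkFiles.Select(f => new StreamReader(f)).ToList();
        var heads = readers.Select(r => int.Parse(r.ReadLine())).ToList();
        var done = new bool[readers.Count];
        using (var writer = new StreamWriter(outputPath))
        {
            while (done.Contains(false))
            {
                int min = -1;
                for (int i = 0; i < readers.Count; i++)
                    if (!done[i] && (min == -1 || heads[i] < heads[min])) min = i;
                writer.WriteLine(heads[min]);
                string next = readers[min].ReadLine();
                if (next == null) done[min] = true; else heads[min] = int.Parse(next);
            }
        }
        readers.ForEach(r => r.Dispose());
        chunkFiles.ForEach(File.Delete);
    }

    static string Spill(List<int> chunk)
    {
        chunk.Sort();                              // in-memory sort of one chunk
        string path = Path.GetTempFileName();
        File.WriteAllLines(path, chunk.Select(n => n.ToString()));
        return path;
    }
}
</pre>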
<p>Next interviewer walks in, he wasn't there for the meet-and-greet, which means he has even less context about me than the guy before me, and he immediately asks... another algorithmic challenge. "If you have a tree of nodes, and you want to get a list of the nodes in rank order" (meaning, a breadth-first search, where each node now gets a "sibling" pointer pointing to the sibling on its right in the tree, or null if it's the rightmost node at that depth level) "how would you do it?" Again, a fail, and now I'm getting annoyed. I admitted, from the outset, that this is not the kind of stuff I'm good at. We've already made that point. I accept the "F" on that part of my report card. What's even more annoying, the interviewer keeps sighing and drumming his fingers in an obvious state of "Why is this bozo wasting my time like this, I could be doing something vastly more important" and so on, which, gotta say, was kind of distracting. End result: total fail.</p>
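<p>(For reference, the answer being fished for here is almost certainly a queue-based level-order traversal; a minimal sketch, with a hypothetical node type:)</p>
<pre>
// Standard breadth-first walk, linking each node to the one dequeued
// after it on the same level. The rightmost node of each level keeps
// its Sibling as null.
using System.Collections.Generic;

class Node
{
    public Node Left, Right;   // children
    public Node Sibling;       // to be filled in: the node to this one's right
}

static class TreeLinker
{
    public static void LinkSiblings(Node root)
    {
        if (root == null) return;
        var queue = new Queue<Node>();
        queue.Enqueue(root);
        while (queue.Count > 0)
        {
            int levelSize = queue.Count;   // nodes remaining in the current level
            Node previous = null;
            for (int i = 0; i < levelSize; i++)
            {
                Node node = queue.Dequeue();
                if (previous != null) previous.Sibling = node;
                previous = node;
                if (node.Left != null) queue.Enqueue(node.Left);
                if (node.Right != null) queue.Enqueue(node.Right);
            }
        }
    }
}
</pre>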
<p>By this point, I'm really annoyed. The VP comes to meet me, asks how it's going, and I tell him, flatly, "Sucks." He nods, says, yeah, we're going to kill the interview loop early, but I want to talk to you over lunch (with another employee along for company) and then have you meet with one more person before we end the exercise.</p>
<p>Lunch goes quite well, actually, and the last interview of the day is with their Product Manager, who then presents me with a challenge: "Suppose I want to build an online system for ordering pizzas. Customers can order pizzas, in other words. Build for me either the UI or the data model for this system." OK, this is different. I choose the data model, and build a ridiculously simple one-to-many relationship of customers to orders, and a similar one-to-many for orders to pizzas. She then proceeds to complicate the model step by step, sometimes in response to my questions, sometimes out of the blue, until we have a fairly complex roughly-sketched data model on the whiteboard. Result: win.</p>
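<p>(To make that concrete, the first pass amounted to roughly the following--class and property names here are my after-the-fact reconstruction, not what actually went on the whiteboard:)</p>
<pre>
// First pass at the data model: a customer places many orders, and each
// order holds one or more pizzas.
using System;
using System.Collections.Generic;

class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
    public List<Order> Orders { get; set; }
}

class Order
{
    public int Id { get; set; }
    public DateTime PlacedAt { get; set; }
    public List<Pizza> Pizzas { get; set; }
}

class Pizza
{
    public string Size { get; set; }
    public List<string> Toppings { get; set; }
}
</pre>
<p>(Every complication added from there forces a decision about where a new entity or relationship belongs--which is exactly what makes it a good exercise.)</p>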
<p>The VP at this point is on the horns of a dilemma: two of the engineers in the interview loop are convinced I'm an idiot. They're clearly voting no on this. But he's read my articles, he's seen some of my presentations, he knows I'm not the idiot the others assume me to be, and he's now trying to figure out what his next steps are. He takes a week to think about it, then emails me yesterday to say that it's not going to work.</p>
<p>Here's my thoughts, and folks, if you interview people or are part of an interview process, I'm trying to generalize this beyond this one experience to take it into a larger context:
<ul>
<li><b>Know what you want to prove with your interview.</b> I get the feeling that this interview loop was essentially a repeat of every interview loop they've ever done before, with no consideration to the candidate himself. An interview is a chance for the company to get to know the candidate better, in order to make a well-informed decision. In this particular case, trying to suss out my skills around algorithms was a wasted effort--I'd already conceded that point. Therefore, find new questions! Find new areas in which to challenge the candidate to see what their skills are. (If you can't think of something else to ask, then you're not really thinking about the interview all that hard, and you're just going through the motions.)</li>
<li><b>Look for the proof you seek in other areas.</b> With the growth of things like Github and open source projects in general, it's becoming easier and easier to prove to yourself as a company that a candidate does or does not have the coding skills you're looking for. Did this guy submit some pull requests to a project? Did this guy post some blogs about interesting technical tidbits? (Or, Lord help us, write articles for major publications?) Did this guy author an open-source project, or work on a project that other people know about? Look at it this way: If Anders Hejlsberg, Bjarne Stroustrup or James Gosling walk through the door, are you going to put them through the same interview questions you put the random recruiter-found candidate through? Or are you willing to consider their established body of work and call it covered? As an interviewer, it behooves you to look for that established body of work, so that you can spend the interview loop looking at other things.</li>
<li><b>Be clear in what you want.</b> One of the things the VP said to me was that he was looking for somebody who had a similar skillset to what he had; that is, had an architectural view of things and an interest in managing the people involved. By then submitting my candidacy to a series of tests that didn't really test for those things, he essentially torpedoed whatever chances it might have had.</li>
<li><b>Be willing to assert your authority.</b> If you're the VP of the company, and the people who work for you disagree with your decisions, sometimes the Right Thing To Do is to simply overrule them. Yes, I know, it's not all politically correct to do that, and if you do it too often you'll ruin whatever sense of empowerment that you want your employees to have within the company, but there are times when you just need to assert that authority and say, "You know what? I appreciate y'all's input, but this is one of those cases where I think I have a different enough perspective that I am going to just overrule and do it anyway." Sometimes you'll be right, yay, and sometimes you'll be wrong, boo, but there is a reason you're the VP or the Director or the Team Lead, and not the others. Leadership means making hard decisions sometimes.</li>
<li><b>Be willing to change up the process.</b> So your candidate comes in, and they're a junior programmer who's just graduated college, with zero experience. Do you then start asking them questions about their experience? That would be a waste of their time and yours. So you'll have to come up with new questions and a new approach. Not all interviews have to be carbon copies of each other, because certainly your candidates aren't carbon copies of each other. (At least, you'd better hope not, or else you're going to end up with a pretty single-dimensional staff.) If they've proven their strength in some category, or admitted a lack in another, then drop your standard set of questions, and go to something different. There is no honor in asking the exact same questions of every candidate.</li>
<li><b>Be willing to hire somebody that offers complementary skills.</b> If your company already has a couple of engineers who know algorithms really well, then hire somebody for a different skillset. Likewise, if your company already has a couple of people who are really good with customers, you don't need another one. Look for people that have skills that fall outside the realm of what you currently have, and trust that when that individual is presented with a problem that attacks their weakness, they'll turn to somebody else in the firm to help them with it. When presented with an algorithmic challenge, you're damn well sure that I'm going to turn to somebody next to me and say, "Hey, dude? Help me walk through this for a bit, would you?" And, in turn, if that engineer has to give a presentation to a customer, and they turn to me and say, "Hey, dude? Help me work on this presentation, would you?", I'm absolutely ready to chip in. That's how teams are built. That's why we have teams in the first place.</li>
</ul>
In the end, this is probably the best of all possible scenarios, not working for them, particularly since I have some other things brewing that will likely consume all of my attention in the coming months, but there's that part of me that hates the fact that I failed at this. That same part of me is now going back through a few of the "interview challenges" books that I picked up, ironically, for my eldest son when he goes out and does his programming interviews, just to work through a few of the problems because I HATE feeling inadequate to a challenge.</p>
<p>And that, in turn, raises my next challenge: I want to create a website, just a static thing, that has a series of questions that, I think, are far better coding challenges than the ones I was given. I don't know when or if I'm going to get to this, but I gotta believe that any of the problems out of the book "Programming Challenges" (by Skiena and Revilla, Springer-Verlag, 2003) or the website from which those challenges were drawn, would be a much better test of the candidate's ability, particularly if you look at the ancillary parts of the challenge: do they write tests, how do they write their tests, do they pair well with somebody, and so on. THOSE are the things you really care about, not how well they remember their college lessons, which are easily accessible over Google or StackOverflow.</p>
<p>Bottom line: Your time is precious, people. Interview well, or just don't bother.</p>
Programming Testshttp://blogs.tedneward.com/post/programming-tests/
Tue, 09 Jul 2013 00:02:11 -0700http://blogs.tedneward.com/post/programming-tests/<p>It's official: I hate them.</p>
<p>Don't get me wrong, I understand their use and the reasons why potential employers give them out. There's enough programmers in the world who aren't really skilled enough for the job (whatever that job may be) that it becomes necessary to offer some kind of litmus test that a potential job-seeker must pass. I get that.</p>
<p>And it's not like all the programming tests in the world are created equal: some are pretty useful ways to demonstrate basic programming facilities, a la the FizzBuzz problem. Or some of the projects I've seen done, a la the "Robot on Mars" problem that ThoughtWorks handed out to candidates (a robot lands on Mars, which happens to be a Cartesian grid; assuming that we hand the robot these instructions, such as LFFFRFFFRRFFF, where "L" is "turn 90 degrees left", "R" is "turn 90 degrees right", and "F" is "go forward one space", please write control code for the robot such that it ends up at the appropriate-and-correct destination, and include unit tests), are good indicators of how a candidate could/would handle a small project entirely on his/her own.</p>
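<p>(A passing answer might look something like the following C# sketch--the starting square and heading are my assumptions, since the real problem statement would have spelled those out, and the unit tests are left as part of the exercise:)</p>
<pre>
// A sketch of the Mars-robot control code; assumes the robot starts at a
// known grid square, facing north.
using System;

class Robot
{
    // Headings as (dx, dy): index 0 = north; +1 steps clockwise (E, S, W).
    static readonly int[,] Headings = { { 0, 1 }, { 1, 0 }, { 0, -1 }, { -1, 0 } };

    public int X { get; private set; }
    public int Y { get; private set; }
    int heading;   // index into Headings

    public void Execute(string instructions)
    {
        foreach (char c in instructions)
        {
            switch (c)
            {
                case 'L': heading = (heading + 3) % 4; break;   // turn 90 degrees left
                case 'R': heading = (heading + 1) % 4; break;   // turn 90 degrees right
                case 'F': X += Headings[heading, 0];            // forward one space
                          Y += Headings[heading, 1]; break;
                default: throw new ArgumentException("Unknown instruction: " + c);
            }
        }
    }
}
// new Robot().Execute("LFFFRFFFRRFFF") ends at (-3, 0), facing south.
</pre>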
<p>But the ones where the challenge is to implement some algorithmic doodad or other? *shudder*.</p>
<p>For example, one I just took recently asks candidates to calculate the "disjoint sets" of a collection of sets; in other words, given sets of { 1, 2, 3 }, { 1, 2, 4 } and { 1, 2, 5 }, the result should be sets of {1,2},{3},{4}, and {5}. Do this and calculate the big-O notation for your solution in terms of time and of space/memory.</p>
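<p>(One way to see it: two numbers belong to the same output set exactly when they appear in exactly the same input sets, so you can group every element by its "membership signature". A minimal C# sketch--though not necessarily the solution the company had in mind:)</p>
<pre>
// Groups elements by which input sets they appear in; elements with
// identical membership end up in the same output set.
using System.Collections.Generic;
using System.Linq;

static class DisjointSets
{
    public static List<List<int>> Split(IList<HashSet<int>> sets)
    {
        var groups = new Dictionary<string, List<int>>();
        foreach (int element in sets.SelectMany(s => s).Distinct())
        {
            // Signature = indices of the sets containing this element, e.g. "0,1,2"
            string signature = string.Join(",",
                Enumerable.Range(0, sets.Count).Where(i => sets[i].Contains(element)));
            List<int> bucket;
            if (!groups.TryGetValue(signature, out bucket))
                groups[signature] = bucket = new List<int>();
            bucket.Add(element);
        }
        return groups.Values.ToList();
    }
}
</pre>
<p>(Feeding it { 1, 2, 3 }, { 1, 2, 4 }, { 1, 2, 5 } yields {1,2}, {3}, {4}, {5}; with hashing, it runs in roughly O(n·k) time, with up to O(n·k) space for the signatures, for n distinct elements across k sets.)</p>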
<p>I hate to say this, but in twenty years of programming, I've never had to do this. Granted, I see the usefulness of it, and granted, it's something that, given large enough sets and large enough numbers of sets, will make a significant enough difference that it bears examination, but honestly, in times past when I've been confronted with this problem, I'm usually the first to ask somebody next to me how best to think about this, and start sounding out some ideas with them before writing any bit of code. Unit tests to test input and its expected responses are next. Then I start looking for the easy cases to verify before I start attacking the algorithm in its entirety, usually with liberal help from Google and StackOverflow.</p>
<p>But in a programming test, you're doing this alone (which already takes away a significant part of my approach, because being an "external processor", I think by talking out loud), and if it's timed (such as this one was), you're tempted to take a shortcut and forgo some of the setup (which I did) in order to maximize the time spent hacking, and when you end up down a wrong path (such as I did), you have nothing to fall back on.</p>
<p>Granted, I screwed up, in that I should've stuck to my process and simply said, "Here's how far I got in the hour". But when you've been writing code for twenty years, across three major platforms, for dozens of Fortune 500 companies and architected platforms that others will use to build software and services for thousands of users and customers, you feel like you should be able to hack something like this out fairly quickly.</p>
<p>And when you can't, you feel like a failure.</p>
<p>I hate programming tests.</p>
<p><b>Update:</b> By the way, as always, I would love some suggestions on how to accomplish the disjoint-set problem. I kept thinking I was close, but was missing one key element. I particularly would LOVE a nudge in doing it in an object-functional language, like F# or Scala (I've only attempted it in C# so far). Just a nudge, though--I want to work through it myself, so I learn.</p>
<p><b>Postscript</b> An analogy hit me shortly after posting this: it's almost as if, in order to test a master carpenter's skill at carpentry, you ask him to build a hammer. After all, if he's all that good, he should be able to do something as simple as affix a metal head to a wooden shaft and have the result be a superior device to anything he could buy off the shelf, right?</p>
<p><b>Further update:</b> After writing this, I took a break, had some dinner, played a game of Magic: The Gathering with my wife and kids (I won, but I can't be certain they didn't let me win, since they knew I was grumpy about not getting this test done in time), and then came back to it. I built up a series of little steps, backed by unit tests to make sure I was stepping through my attempts at reasoning out the algorithm correctly, backed up once or twice with a new approach, and finally solved it in about three hours, emailing it to the company at 6am (0600 for those of you reading this across the Atlantic or from a keyboard marked "Property of US Armed Forces"), just for grins. I wasn't expecting to get a response, since I was grossly beyond the time allotted, but apparently it was good enough to merit a follow-up interview, so yay for me. :-) Upshot is, though, I have an implementation that works, though now I find myself wondering if there's a way to do it in a functional/no-side-effect/persistent-data-structure kind of way....</p>
<p>I still hate them, though, at least the algorithm-based ones, and in a fleeting moment of transparent honesty, I will admit it's probably because I'm not very good at them, but if you repeat that to anyone I'll deny it as outrageous slander and demand satisfaction, Nerf guns at ten paces.</p>
More on Typeshttp://blogs.tedneward.com/post/more-on-types/
Wed, 01 May 2013 02:54:21 -0700http://blogs.tedneward.com/post/more-on-types/<p>With my most recent blog post, some of you were a little less than impressed with the idea of using types. One reader, in particular, suggested that:</p> <blockquote> <p>Your encapsulating type aliases don't... encapsulate :|</p> </blockquote> <p>Actually, it kinda does. But not in the way you described.</p> <blockquote> <p>using X = qualified.type;</p> <p>merely introduces an alias, and will consequently (a) not prevent assignment of <br />a FirstName to a LastName (b) not even be detectible as such from CLI metadata <br />(i.e. using reflection).</p> </blockquote> <p>This is true—the using statement only introduces an alias, in much the same way that C++’s “typedef” does. It’s not perfect, by any real means.</p> <blockquote> <p>Also, the alias is lexically scoped, and doesn't actually _declare a public name_ (so, it would need to be redeclared in all 'client' compilation units.</p> <p>(This won't be done, of course, because the clients would have no clue about <br />this and happily be passing `System.String` as ever).</p> <p>The same goes for C++ typedefs, or, indeed C++11 template aliases:</p> <p>using FirstName = std::string; <br />using LastName = std::string;</p> <p>You'd be better off using BOOST_STRONG_TYPEDEF (or a roll-your-own version of this thing that is basically a CRTP pattern with some inherited constructors. When your compiler has the latter feature, you could probably do without an evil MACRO).</p> </blockquote> <p>All of which is also true. Frankly, the “using” statement is a temporary stopgap, simply a placeholder designed to say, “In time, this will be replaced with a full-fledged type.” (A sketch of what that full-fledged type might look like appears at the end of this post.)</p> <p>And even more to the point, he fails to point out that my “Age” class from my example doesn’t really encapsulate the fact that Age is, fundamentally, an “int” under the covers—because Age possesses type conversion operators to convert it into an int on demand (hence the “implicit” in that operator declaration), it’s pretty easy to get it back to straight “int”-land. Were I not so concerned with brevity, I’d have created a type that allowed for addition on it, though frankly I probably would forbid subtraction, and most certainly multiplication and division. (What does multiplying an Age mean, really?)</p> <p>See, in truth, I cheated, because I know that the first reaction most O-O developers will have is, “Are you crazy? That’s tons more work—just use the int!” Which is both fair, and an old argument—the C guys said the same thing about these “object” things, and how much work it was compared to just declaring a data structure and writing a few procedures to manipulate them. Creating a full-fledged type for each domain—or each fraction of a domain—seems… heavy.</p> <p>Truthfully, this is <strong>much</strong> easier to do in F#. And in Scala. And in a number of different languages. Unfortunately, it is much harder in C#, Java, and even C++ (and frankly, I don’t think the use of an “evil MACRO” is unwarranted there, if it doesn’t promote bad things). The fact that “doing it right” in those languages means “doing a ton of work to get it right” is exactly why nobody does it—and suffers the commensurate loss of encapsulation and integrity in their domain model.</p> <p>Another poster pointed out that there is a <em>much</em> better series on this at <a href="http://www.fsharpforfunandprofit.com">http://www.fsharpforfunandprofit.com</a>. 
In particular, check out the series on <a href="http://fsharpforfunandprofit.com/series/designing-with-types.html">&quot;Designing with Types&quot;</a>—it expresses everything I wanted to say, albeit in F# (where I was trying, somewhat unsuccessfully, to example-code it in C#). By the way, I suspect that almost every linguistic feature he uses would translate pretty easily/smoothly over to Scala (or possibly Clojure) as well.</p> <p>Another poster pointed out that doing this type-driven design (TDD, anyone?) would create some serious havoc with your persistence. Cry me a river, and then go use a persistence model that fits an object-oriented and type-oriented paradigm. Like, I dunno, an <a href="http://www.db4o.com">object database</a>. Particularly considering that you shouldn’t want to expose your database schema to anyone outside the project anyway, if you’re concerned about code being tightly coupled. (As in, any other code outside this project—like a reporting engine or an ETL process—that accesses your database directly now is tied to that schema, and is therefore a tight-coupling restriction on evolving your schema.)</p> <p>Achieving good encapsulation isn’t a matter of trying to hide the methods being used—it’s (partly) a matter of allowing the type system to carry a significant percentage of the cognitive load, so that you don’t have to. Which, when you think on it, is kinda what objects and strongly-typed type systems are supposed to do, isn’t it?</p>
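<p>(And, as promised: here's roughly what "replacing the alias with a full-fledged type" looks like in C#--a minimal sketch of a strong typedef. Unlike the using alias, each wrapper is a distinct type to the compiler:)</p>
<pre>
// Unlike "using FirstName = System.String;", these are distinct types:
// the compiler now rejects cross-assignment between them.
struct FirstName
{
    private readonly string value;
    public FirstName(string value) { this.value = value; }
    public override string ToString() { return value; }
}

struct LastName
{
    private readonly string value;
    public LastName(string value) { this.value = value; }
    public override string ToString() { return value; }
}

class Person
{
    public FirstName First { get; set; }
    public LastName Last { get; set; }
}

// person.First = person.Last;   // compile error: cannot convert LastName to FirstName
</pre>
<p>With this, assigning a LastName where a FirstName is expected simply fails to compile--which is the whole point.</p>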
On Typeshttp://blogs.tedneward.com/post/on-types/
Fri, 26 Apr 2013 17:59:12 -0700http://blogs.tedneward.com/post/on-types/<p>Recently, having been teaching C# for a bit at Bellevue College, I’ve been thinking more and more about the way in which we approach building object-oriented programs, and particularly the debates around types and type systems. I think, not surprisingly, that the way in which the vast majority of the O-O developers in the world approach types and when/how they use them is flat wrong—both in terms of the times when they create classes when they shouldn’t (or shouldn’t have to, anyway, though obviously this is partly a measure of their language), and the times when they should create classes and don’t.</p> <p>The latter point is the one I feel like exploring here; the former one is certainly interesting on its own, but I’ll save that for a later date. For now, I want to think about (and write about) how we often don’t create types in an O-O program, and should, because doing so can often create clearer, more expressive programs.</p> <h3>A Person</h3> <p>Common object-oriented parlance suggests that when we have a taxonomical entity that we want to represent in code (i.e., a concept of some form), we use a class to do so; for example, if we want to model a “person” in the world by capturing some of their critical attributes, we do so using a class (in this case, C#):</p> <p><font face="Consolas">class Person <br />{ <br />&#160;&#160;&#160; public string FirstName { get; set; } <br />&#160;&#160;&#160; public string LastName { get; set; } <br />&#160;&#160;&#160; public int Age { get; set; } <br />&#160;&#160;&#160; public bool Gender { get; set; } <br />}</font></p> <p>Granted, this is a pretty simplified case; O-O enthusiasts will find lots of things wrong with this code, most of which have to do with dealing with the complexities that can arise.</p> <p>From here, there’s a lot of ways in which this conversation can get a lot more complicated—how, where and when should inheritance factor into the discussion, for example, and how exactly do we represent the relationship between parents and children (after all, some children will be adopted, some will be natural birth, some will be disowned) and the relationship between various members who wish to engage in some form of marital status (putting aside the political hot-button of same-sex marriage, we find that some states respect “civil unions” even where no formal ceremony has taken place, many cultures still recognize polygamy—one man, many wives—as Utah did up until the mid-1800s, and a growing movement around polyamory—one or more men, one or more women—looks like it may be the next political hot-button around marriage) definitely depends on the business issues in question…</p> <p>… but that’s the whole point of encapsulation, right? That if the business needs change, we can adapt as necessary to the changed requirements without having to go back and rewrite everything.</p> <h4>Genders</h4> <p>Consider, for example, the rather horrible decision to represent “gender” as a boolean: while, yes, at birth, there are essentially two genders at the biological level, there are some interesting birth defects/disorders/conditions in which a person’s gender is, for lack of a better term, screwed up—men born with female plumbing and vice versa. The system might need to track that. Or, there are those who consider themselves to have been born into the wrong gender, and choose to live a lifestyle that is markedly different from what societal norms suggest (the transgender crowd). 
Or, in some cases, the gender may not have even been determined yet: fetuses don’t develop gender until about halfway through the pregnancy.</p> <p>Which suggests, offhand, that the use of a boolean here is clearly a Bad Idea. But what suggests itself as a replacement? Certainly we could maintain an internal state string or something similar, using the get/set properties to verify that the strings being set are correct and valid, but the .NET type system has a better answer: Given that there is a finite number of choices to gender—whether that’s two or four or a dozen—it seems that an enumeration is a good replacement:</p> <p><font face="Consolas">enum Gender <br />{ <br />&#160;&#160;&#160; Male, Female, <br />&#160;&#160;&#160; Indeterminate, <br />&#160;&#160;&#160; Transgender <br />}</font></p> <p><font face="Consolas">class Person <br />{ <br />&#160;&#160;&#160; public string FirstName { get; set; } <br />&#160;&#160;&#160; public string LastName { get; set; } <br />&#160;&#160;&#160; public int Age { get; set; } <br />&#160;&#160;&#160; public Gender Gender { get; set; } <br />}</font></p> <p>Don’t let the fact that the property and the type have the same name be too confusing—not only does it compile cleanly, but it actually provides some clear description of what’s being stored. (Although, I’ll admit, it’s confusing the first time you look at it.) More importantly, there’s no additional code that needs to be written to enforce only the four acceptable values—or to extend the list, when that becomes necessary.</p> <h4>Ages</h4> <p>Similarly, the age of a person is not an integer value—people cannot have a negative age, nor do they usually age beyond a hundred or so. Again, we could put code around the get/set blocks of the Age property to ensure the proper values, but it would again be easier to let the type system do all the work:</p> <p><font face="Consolas">struct Age <br />{ <br />&#160;&#160;&#160; int data; <br />&#160;&#160;&#160; public Age(int d) <br />&#160;&#160;&#160; { <br />&#160;&#160;&#160;&#160;&#160;&#160;&#160; Validate(d); <br />&#160;&#160;&#160;&#160;&#160;&#160;&#160; data = d; <br />&#160;&#160;&#160; }</font></p> <p><font face="Consolas">&#160;&#160;&#160; public static void Validate(int d) <br />&#160;&#160;&#160; { <br />&#160;&#160;&#160;&#160;&#160;&#160;&#160; if (d &lt; 0) <br />&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160; throw new ArgumentException(&quot;Age cannot be negative&quot;); <br />&#160;&#160;&#160;&#160;&#160;&#160;&#160; if (d &gt; 120) <br />&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160; throw new ArgumentException(&quot;Age cannot be over 120&quot;); <br />&#160;&#160;&#160; }</font></p> <p><font face="Consolas">&#160;&#160;&#160; // implicit int to Age conversion operator <br />&#160;&#160;&#160; public static implicit operator Age(int a) <br />&#160;&#160;&#160; { return new Age(a); }</font></p> <p><font face="Consolas">&#160;&#160;&#160; // implicit Age to int conversion operator <br />&#160;&#160;&#160; public static implicit operator int(Age a) <br />&#160;&#160;&#160; { return a.data; } <br />}</font></p> <p><font face="Consolas">class Person <br />{ <br />&#160;&#160;&#160; public string FirstName { get; set; } <br />&#160;&#160;&#160; public string LastName { get; set; } <br />&#160;&#160;&#160; public Age Age { get; set; } <br />&#160;&#160;&#160; public Gender Gender { get; set; } <br />}</font></p> <p>Notice that we’re still having to write the same code, but now the code is 
embodied in a type, which is itself intrinsically reusable—we can reuse the Age type in other classes, which is more than we can say if that code lives in the Person.Age property getter/setter. Again, too, now the Person class really has nothing to do in terms of ensuring that age is maintained properly (and by that, I mean greater than zero and less than 120). (The “implicit” in the conversion operators means that the code doesn’t need to explicitly cast the int to an Age or vice versa.)</p> <p>Technically, what I’ve done with Age is create a restriction around the integer (System.Int32 in .NET terms) type; were these XSD Schema types, I could do a derivation-by-restriction to restrict an xsd:int to the values I care about (0 – 120, inclusive). Unfortunately, no O-O language I know of permits derivation-by-restriction, so it requires work to create a type that “wraps” another, in this case, an Int32.</p> <h4>Names</h4> <p>Names are another problem area, in that there’s all kinds of crazy cases that (as much as we’d like to pretend otherwise) turn out to be far more common than we’d like—not only do most people have middle names, but sometimes women will take their husband’s last name and hyphenate it with their own, making it sort of a middle name but not really, or sometimes people will give their children multiple middle names, Japanese names put the family name first, sometimes people choose to take a single name, and so on. This is again a case where we can either choose to bake that logic into property getters/setters, or bake it into a single type (a “Name” type) that has the necessary code and properties to provide all the functionality that a person’s name represents.</p> <p>So, without getting into the actual implementation, if we want to represent names in the system, then we should have a full-fledged “Name” class that captures the various permutations that arise:</p> <p><font face="Consolas">class Name <br />{&#160;&#160; <br />&#160;&#160;&#160; public Title Honorific { get { ... } } <br />&#160;&#160;&#160; public string Individual { get { ... } } <br />&#160;&#160;&#160; public string Nickname { get { ... } } <br />&#160;&#160;&#160; public string Family { get { ... } } <br />&#160;&#160;&#160; public string Full { get { ... } } <br />&#160;&#160;&#160; public static Name Parse(string incoming) { ... }&#160; <br />}</font></p> <p>See, ultimately, everything will have to boil back to the core primitives within the language, but we need to build stronger primitives for the system—Name, Title, Age, and don’t even get me started on relationships.</p> <h4>Relationships</h4> <p>Parent-child relationships are also a case where things are vastly more complicated than just the one-to-many or one-to-one (or two-to-one) that direct object references encourage; in the case of families, given how complex the modern American family can get (and frankly, it’s not any easier if we go back and look at medieval families, either—go have a look at any royal European genealogical line and think about how you’d model that, particularly Henry VIII), it becomes pretty quickly apparent that modeling the relationships themselves often presents itself as the only reasonable solution.</p> <p>I won’t even begin to get into that example, by the way, simply because this blog post is too long as it is. 
I may explore the idea further in a later blog post, but I think the point is made.</p> <h3>Summary</h3> <p>The object-oriented paradigm often finds itself wading in tens of thousands of types, so it seems counterintuitive to suggest that we need more of them to make programs clearer. I agree, many O-O programs are too type-heavy, but part of the problem there is that we’re spending too much time creating classes that we shouldn’t need to create (DTOs and the like) and not enough time thinking about the actual entities in the system.</p> <p>I’ll be the first to admit, too, that not all systems will need to treat names the way that I’ve done—sometimes an age is just an integer, and we’re OK with that. Truthfully, though, it seems more often than not that we’re later adding the necessary code to ensure that ages can never be negative, have to fall within a certain range, and so on.</p> <p>As a suggestion, then, I throw out this idea: <strong><em>Ensure that all of your domain classes never expose primitive types to the user of the system.</em></strong> In other words, Person never exposes an “int” for Age, but only an “Age” type. C# makes this easy via “using” alias declarations, like so:</p> <p><font face="Consolas">using FirstName = System.String; <br />using LastName = System.String;</font></p> <p>which can then, if you’re thorough and disciplined about using the FirstName and LastName types instead of “string”, evolve into fully-formed types later in their own right if they need to. (Bear in mind that these aliases are per-file in C#, so the declaration has to appear in each source file that uses them.) C++ provides “typedef” for this purpose—unfortunately, Java lacks any such facility, making this a much harder prospect. (This is something I’d stick at the top of my TODO list were I nominated to take Brian Goetz’s place at the head of Java9 development.)</p> <p>In essence, encapsulate the primitive types away so that when they don’t need to be primitives, or when they need to be more complex than just simple holders of data, they don’t have to be, and clients will never know the difference. That, folks, is what encapsulation is trying to be about.</p>
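<p>To make that evolution concrete, here is a minimal sketch (hypothetical, but in the same spirit as the Age type above) of what promoting the FirstName alias into a real type might look like. Because the conversions are implicit, client code that was written against the alias, such as p.FirstName = &quot;Ted&quot;, should keep compiling unchanged:</p> <p><font face="Consolas">// hypothetical replacement for &quot;using FirstName = System.String;&quot; <br />struct FirstName <br />{ <br />&#160;&#160;&#160; string data; <br />&#160;&#160;&#160; public FirstName(string d) <br />&#160;&#160;&#160; { <br />&#160;&#160;&#160;&#160;&#160;&#160;&#160; // whatever first-name validation you need lives here now <br />&#160;&#160;&#160;&#160;&#160;&#160;&#160; data = d; <br />&#160;&#160;&#160; }</font></p> <p><font face="Consolas">&#160;&#160;&#160; // implicit conversions keep alias-era client code compiling <br />&#160;&#160;&#160; public static implicit operator FirstName(string s) <br />&#160;&#160;&#160; { return new FirstName(s); } <br />&#160;&#160;&#160; public static implicit operator string(FirstName f) <br />&#160;&#160;&#160; { return f.data; } <br />}</font></p>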
Say that part about HTML standards, again?http://blogs.tedneward.com/post/say-that-part-about-html-standards-again/
Sat, 13 Apr 2013 01:30:45 -0700http://blogs.tedneward.com/post/say-that-part-about-html-standards-again/<p>In incarnations past, I have had debates, public and otherwise, with friends and colleagues who have asserted that HTML5 (by which we really mean HTML5/JavaScript/CSS3) will essentially become the platform of choice for all applications going forward—that, <em>this</em> time, standards will win out, and that companies which try to subvert the open nature of the web by creating their own implementations, with their own extensions and proprietary features that aren’t part of the standards, will lose.</p> <p>Then, I read the Wired news post about <a href="http://www.wired.com/wiredenterprise/2013/04/blink/" target="_blank">Google’s departure from WebKit</a>, and I’m a little surprised that the Internet (and by “the Internet”, I mean “the very people who get up in arms about standards and subverting them and blah blah blah”) hasn’t taken more issue with some of the things cited therein:</p> <blockquote> <p>Google’s decision is in tune with its overall efforts to improve the infrastructure of the internet. When it comes to browser software and other web technologies that directly effect the how quickly and effectively your machine grabs and displays webpages, the company likes to use open source technologies. That way, it can feed their adoption outside the company — and ultimately improve the delivery of its many online services (including all important advertisements). But if it believes the rest of the web is moving too slowly, it has no problem starting up its own project.</p> </blockquote> <p>Just to be clear, Google is happy to use open-source technologies, so it can feed adoption of those technologies, but if it’s something that Google thinks is being adopted too slowly—like, say, Google’s extensions to the various standards that aren’t being picked up by its competitors—then Google feels the need to kick off its own thing. Interesting.</p> <blockquote> <p>… [T]he trouble with WebKit is that is used different “multi-process architecture” than its Chrome browser, which basically means it didn’t handle concurrent tasks in the same way. When Chrome was first released in 2008 WebKit didn’t have a multi-process architecture, so Google had to build its own. WebKit2, released in 2010, adds multi-process features, but is quite different from what Google had already built. Apple and Google don’t see eye to eye on the project, and it became too difficult and too time-consuming for the company juggle the two architectures. “Supporting multiple architectures over the years has led to increasing complexity for both [projects],” the post says. “This has slowed down the collective pace of innovation.”</p> </blockquote> <p>So… Google tried to use some open-source software, but discovered that the project didn’t work the way they built the rest of their application to work. (I’m certain that’s the first time that has happened, ever.) When the custodians of the project did add the feature Google wanted, the feature was implemented in a manner that still wasn’t in lockstep with the way Google wanted things to work in their application. This meant that “innovation” was being “slowed down”.</p> <p>(As an aside, I find it fascinating that whenever a company adopts open-source, it’s to “foster interoperability and open standards”, but when they abandon open-source, it’s to “foster innovation and faster evolution”. 
And I’m sure it’s entirely accidental that most of the time, a company adopts “open standards” when it is way behind on the technology curve for a given thing, and adopts “faster innovation” when it thinks it has caught up the distance or surged ahead of its competitors in that space.)</p> <p>Of course, a new implementation has its risks of bugs and incompatibilities, but Google has a plan for that:</p> <blockquote> <p>“Throughout this transition, we’ll collaborate closely with other browser vendors to move the web forward and preserve the compatibility that made it a successful ecosystem,” the announcement reads.</p> </blockquote> <p>Ah, there. See? By collaborating closely with their competitors, they will preserve compatibility. Because when Microsoft did that, everybody was totally OK with that…. uh, and… yeah… it worked pretty well, too, and….</p> <p>Look, it seems pretty reasonable to assume that even if the tags and the DOM and the APIs are all 100% unchanged from Chrome v.Past to v.Next, there’s still going to be places where they optimize differently than WebKit does, which means that developers will now need to learn (and implement) optimizations in their Web-based applications differently. And frankly, the assumption that Chrome’s Blink and WebKit will somehow be bug-for-bug compatible/identical with each other is a pretty steep bar to accept blindly, considering the history.</p> <p>Once again, we see the cycle coming around: in the beginning, when a technology is fleshing out, companies yearn for standards in order to create adoption. After a certain tipping point of adoption, however, the major players start to seek ways to avoid becoming a commodity, and start introducing “extensions” and “innovations” that for some odd reason their competitors in the standards meetings don’t seem all that inclined to adopt. That’s when they start forking and shying away from staying true to the standard, and eventually, the standard becomes either a least-common-denominator… or a joke.</p> <p>Anybody want to bet on which outcome emerges for HTML5?</p> <p>(Before you reach for the “Comment” link to flame me all to Hell, yes, even an HTML5 standard that is 80% consistent across all the browsers is still pretty damn useful—just as a SQL standard that is 80% consistent across all the databases is useful. But this is a far cry from the utopia of interconnectedness and interoperability that was promised to us by the HTMLophiles, and it simply demonstrates that the Circle of TechnoLife continues, unabated, as it has ever since PC manufacturers—and the rest of us watching them—discovered what happens to them when they become a commodity.)</p>
Programming language "laws"http://blogs.tedneward.com/post/programming-language-laws/
Tue, 19 Mar 2013 18:32:43 -0700http://blogs.tedneward.com/post/programming-language-laws/<p>As is pretty typical for that site, Lambda the Ultimate has <a href="http://lambda-the-ultimate.org/node/4698">a great discussion</a> on some insights that the creators of Mozart and Oz have come to, regarding the design of programming languages; I repeat the post here for convenience:
<blockquote>
Now that we are close to releasing Mozart 2 (a complete redesign of the Mozart system), I have been thinking about how best to summarize the lessons we learned about programming paradigms in CTM. Here are five "laws" that summarize these lessons:
<ol>
<li>A well-designed program uses the right concepts, and the paradigm follows from the concepts that are used. [Paradigms are epiphenomena]</li>
<li>A paradigm with more concepts than another is not better or worse, just different. [Paradigm paradox]</li>
<li>Each problem has a best paradigm in which to program it; a paradigm with less concepts makes the program more complicated and a paradigm with more concepts makes reasoning more complicated. [Best paradigm principle]</li>
<li>If a program is complicated for reasons unrelated to the problem being solved, then a new concept should be added to the paradigm. [Creative extension principle]</li>
<li>A program's interface should depend only on its externally visible functionality, not on the paradigm used to implement it. [Model independence principle]</li>
</ol>
Here a "paradigm" is defined as a formal system that defines how computations are done and that leads to a set of techniques for programming and reasoning about programs. Some commonly used paradigms are called functional programming, object-oriented programming, and logic programming. The term "best paradigm" can have different meanings depending on the ultimate goal of the programming project; it usually refers to a paradigm that maximizes some combination of good properties such as clarity, provability, maintainability, efficiency, and extensibility. I am curious to see what the LtU community thinks of these laws and their formulation.
</blockquote>
This resonates very neatly with my own very brief and very informal investigation into multi-paradigm programming (drawing on James Coplien’s multi-paradigm C++ work from a decade-plus ago). I think they really have something interesting here.
</p>
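<p>To make the fifth law a little more concrete, here is a small sketch in C# (the choice of language is arbitrary here, and the names are invented for illustration; the principle itself is language-agnostic): two implementations of the same interface, one imperative and one declarative, and callers are none the wiser about which paradigm sits behind it.</p>
<p><font face="Consolas">using System.Linq; <br /><br />interface IWordCounter <br />{ <br />&#160;&#160;&#160; int Count(string text); <br />} <br /><br />// an imperative implementation... <br />class LoopingCounter : IWordCounter <br />{ <br />&#160;&#160;&#160; public int Count(string text) <br />&#160;&#160;&#160; { <br />&#160;&#160;&#160;&#160;&#160;&#160;&#160; int n = 0; <br />&#160;&#160;&#160;&#160;&#160;&#160;&#160; foreach (var w in text.Split(' ')) <br />&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160; if (w.Length &gt; 0) n++; <br />&#160;&#160;&#160;&#160;&#160;&#160;&#160; return n; <br />&#160;&#160;&#160; } <br />} <br /><br />// ...and a declarative one; per the model independence <br />// principle, clients depending on IWordCounter can't tell <br />// the difference <br />class LinqCounter : IWordCounter <br />{ <br />&#160;&#160;&#160; public int Count(string text) <br />&#160;&#160;&#160; { return text.Split(' ').Count(w =&gt; w.Length &gt; 0); } <br />}</font></p>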
Ted Neward on Java 8 adoptionhttp://blogs.tedneward.com/post/ted-neward-on-java-8-adoption/
Mon, 18 Mar 2013 18:46:36 -0700http://blogs.tedneward.com/post/ted-neward-on-java-8-adoption/<p>Every once in a while, there is a moment in your life when inspiration just BAM! strikes out of nowhere, telling you what your next blog post is.</p> <p>Then, there’s this one.</p> <p>This blog post wasn’t inspired by any sort of bolt from the blue, or even a conversation with a buddy that led me to think, “Yeah, this is something that I should share with the world”. No, this one comes directly to you, from you. You see, I was cruising through my blog logs, and in particular looking at the Google Search queries that led to the blog site, and yesterday apparently two different Google Searches, both titled “Ted Neward on Java 8 adoption”, came in twice each.</p> <p>I take that as a sign that y’all are kinda curious what my thoughts on Java 8 adoption are. Consider the message received: from your fingers to my eyes, as the old saying (slightly rephrased) goes.</p> <h2>Java 8: Overview</h2> <p>For those of you who’ve been too busy to track what’s going on with the Java language recently, the upcoming release of the JDK, JavaSE 8, marks a fairly significant moment in Java’s history, one that ranks right up there with Java 5, in that the language is going to get a significant “bump” in functionality. Historically, Sun tried very hard to avoid such changes: Java 1.1 introduced inner classes, Java 1.4 introduced “assert”, and beyond that the language was the same language we’d been using since 1996 or so. The JVM saw some huge growth, by leaps and bounds, and the Java libraries grew exponentially, it seemed, but the language itself remained pretty static until Java 5. With Java 5 we got generics, enumerations, annotations, enhanced for loops, variable argument declarations, and a few other things besides; with Java 7 (the last release) we got a couple of trivial changes that really didn’t ruffle anybody’s hair, much less blow anybody’s socks off.</p> <p>Java 8 represents another Java 5-like “sea change” kind of release. Not because there’s a ton of new features, like Java 5 had, but because the introduction of lambdas—anonymous function literals—will change a lot of the ways we can express concepts in Java, and that’s going to ripple throughout the language and the ecosystem. (Well, over time, it will—it’s hard to say exactly how much things will change in the days and months immediately following 8’s release.)</p> <p>I won’t go into the details of Java 8’s new syntax—that’s not only still being finalized, but it’s also been pretty well-documented and discussed elsewhere (including a forthcoming Java Magazine issue from the Oracle Technology Network on the subject that’s been written by yours truly), and I only have a few minutes to write this in between flights home from a conference, to boot. 
For those who are familiar with lambdas, suffice to say that Java lambdas will look astonishingly like Scala or C# lambdas, partly because there’s really only a few ways you can make lambdas look in a C-style language, and partly because the folks writing the new features want the syntax to look familiar to programmers, and borrowing somebody else’s syntax (or at least big chunks of it) is a good way to do that.</p> <h2>Java 8: Adoption</h2> <p>When we talk about “adoption” of a given Java release, there’s a couple of different concepts we should tease out and examine individually: those customers who will deploy their non-Java8-written code on top of the Java8 JVM; those customers who will start using libraries written using Java8 features; and those customers who will start writing their own designs and implementations in the Java8 syntax and style.</p> <p><strong>Customers deploying Java8 for the JVM.</strong> Frankly, I expect this to happen relatively quickly, in line with the Java releases before this one. The JVM gets better and better with each release, and there’s no reason to assume that this release will be any different. Once Oracle and the JVM itself have demonstrated that there’s little to no risk in dropping the new JVM into the production data center and firing up your current version of JBoss or Tomcat or whatever on top of it, customers will begin to take a hard look at the risks involved in doing so (if any) and make that transition. It’s really a high-win-low-cost thing to do, again, once the Java8 JVM has some actual production miles under its belt, so to speak. (This isn’t a new rewrite of the JVM, by the way—customers just don’t want to be the first one to discover stupid bugs. My Dad once summarized this attitude this way: “Pilots never want to fly the ‘A’ model of any aircraft.”) I give it about a year, maybe as early as six months, after the Java8 release before customers start putting Java8 into production.</p> <p><strong>Customers using libraries written using Java8 features.</strong> And let’s be clear, by “Java8 features” we’re talking about lambdas and virtual extension methods (a.k.a. “defender methods” from earlier draft specs), and by “libraries”, we’re talking about major open-source favorites like Spring, Hibernate, Commons Collections and so on. Essentially, the reason this is important as a category centers on the idea that Java developers, like a lot of developers, aren’t going to adopt the language features of the new Java until they see them in action—passing lambdas into Spring for execution inside a database transaction, for example, or passing a lambda into a collection for execution across its elements. The timeline here will be somewhat dependent on the library, and on the commitment of the developers around those libraries, but I’m a little less optimistic here—many of the open-source committers have historically been the loudest to cry foul over some of the changes Sun made to the language, and I’m not convinced yet that they have come around to embrace Oracle’s intentions regarding the language’s evolution. (In many ways, the image that strikes me is that of a large number of grumpy old men sitting around the office, gruffly tossing off one-liners like “Didn’t work like that in MY day” and “Don’t these kids realize that sometimes the old ways are the best ways?”.) 
I’m guessing that this transition will take longer, like two years at the minimum, and some libraries will never actually make the transition at all, choosing instead to remain “pre-Java8 compatible”, in the same way that some libraries chose to remain “pre-Java5 compatible” (and, IMHO, essentially put themselves out to pasture as a result).</p> <p><strong>Customers writing their own designs and implementations in Java8.</strong> And really, what I mean here is “how long before they start creating classes that utilize lambdas in the domain object design”? Interestingly enough, I think this is tangentially related to how quickly the open-source community adopts Java8 (the previous point), because then customers will begin to see some design patterns and idioms that they can copy/follow/embrace/extend, but even if the open-source community roundly rejects Java8, I still see customers starting to design and build code using lambdas by 2015 or ’16. Some will jump on it early, or be able to transition their existing anonymous-inner-class-based (that is, “poor man’s lambda”) code over to lambdas within months of Java8’s release, but it will take longer to percolate through the rest of the industry—there are more than a few companies out there still running Java6, for example, and those folks aren’t going to accelerate their use of Java8 just to get lambdas.</p> <h2>Java 8: Perception</h2> <p>Having said all that, though, I think the overall perception of Java8’s adoption will be entirely dependent on how well Oracle addresses some of the recent “security flaws” in Java that have been reported in the press. Even though the security flaws all seem to be applet- or client-side Java related, the perception that Java is somehow insecure likely has Microsoft chuckling internally—it certainly has Microsoft’s community (of which I and a number of my friends are a part) giggling and roaring and engaging in a few “Neener-neener-neener” moments; after all the crap that Java guys gave the Microsoft community back in the days of Bill Gates’ famous Security Memo, I can’t say that it’s unwarranted.</p> <p>Aside from that, though, I think there’s no real reason not to expect adoption of Java8 to follow the same broad-strokes path that previous Java releases have enjoyed, and thus within three years I fully expect that wide-scale adoption will be well under way.</p>
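<p>(Postscript: for readers who haven’t yet seen the lambda style in action, here is a small before-and-after sketch. It’s written in C#, whose lambda syntax, as noted above, Java 8’s will closely resemble; the names are invented for illustration, and it assumes the usual System.Linq and System.Collections.Generic imports.)</p> <p><font face="Consolas">var names = new[] { &quot;Ted&quot;, &quot;Charlotte&quot;, &quot;Michael&quot; }; <br /><br />// before: the behavior lives in an explicit loop <br />var shortNames = new List&lt;string&gt;(); <br />foreach (var n in names) <br />&#160;&#160;&#160; if (n.Length &lt;= 4) <br />&#160;&#160;&#160;&#160;&#160;&#160;&#160; shortNames.Add(n); <br /><br />// after: the behavior is passed in as a lambda <br />var shortNames2 = names.Where(n =&gt; n.Length &lt;= 4).ToList();</font></p>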