http://origin.jrj.org/jrjBlog2016-11-30T21:46:09-07:00The personal blog of Joseph R. Jones, with an emphasis on security and privacy, technology and the future.JRJhttp://jrj.org/Jekyllhttp://origin.jrj.org/2016/11/28/VR-vs-AR-Immersive-technologies/Which Will Be Dominant: Augmented Reality (AR) or Virtual Reality (VR)?2016-11-28T05:11:00-07:00JRJhttp://jrj.org/I answered a question on Quora that's common for folks just starting to explore immersive technologies...<p>Someone on Quora asked the question “What will be more dominant in the next few years, Augmented Reality (AR) or Virtual Reality (VR)?” I’m reposting <a href="https://www.quora.com/What-will-be-more-dominant-in-the-next-few-years-augmented-reality-or-virtual-reality/answer/Joseph-R-Jones?srid=oWkM&amp;share=605d933c">my response</a> here on the blog.</p>
<p>Google’s Clay Bavor managed to explain this perfectly in &lt;140 characters; it’s difficult for me to improve on his response:</p>
<p><a href="https://twitter.com/claybavor/status/704853268393025541"><img src="/images/bavor-tweet.png" alt="VR vs. AR cheat sheet: VR can take you anywhere. AR can bring anything to you. Both are important. Neither will win." /></a></p>
<p>These are different technologies which, while related, are neither mutually exclusive nor entirely competitive. In the long term, I expect the line between the two to blur (an AR device that can generate sufficient opacity can emulate VR, a VR system with cameras to bring in the outside world can emulate AR) but in the near-term they have distinctly different missions, as Bavor so elegantly describes.</p>
<p>However, in 2017 I expect to see more VR than AR, mostly because the problems involved in delivering a compelling VR experience at a (high-end) consumer price point are largely solved, while that’s not yet the case for AR: computer vision and image recognition need to become faster and more accurate, characteristics like FOV (Field of View) need to improve, and price points need to come down significantly before we see consumer AR systems from companies like Microsoft and Magic Leap (and, of course, Meta… but I wouldn’t bet on their long-term future.) I expect first-generation consumer AR devices to arrive around the time we see the NEXT generation of VR devices, within 2–3 years.</p>
<p>Right now, VR is better for a lot of entertainment experiences because it’s so immersive, and AR tends to be better for casual experiences and productivity scenarios because it doesn’t take over the user’s sensory inputs completely, and doesn’t have the same vestibular discomfort issues.</p>
2016-11-28T05:11:00-07:00http://origin.jrj.org/2016/03/14/Enter-the-Void-VR-Holodeck/Enter the Void: Extreme VR2016-03-14T19:45:00-06:00JRJhttp://jrj.org/I had an opportunity to try out humanity's current best attempt at building the Holodeck from Star Trek... it was everything I hoped it would be.<p>Reading about <a href="http://www.roadtovr.com/we-take-a-sneak-peek-at-the-voids-ted2016-experience/">The Void deploying a set of experiences for TED attendees</a> reminded me that I was fortunate enough to participate in an early beta experience at <a href="https://thevoid.com">The Void</a> back in November (and again in December) and, although I shared my thoughts on Facebook at the time, I forgot to write it up here.</p>
<p>What The Void is creating is impressive, a completely immersive experience that’s on an entirely different level from consumer-grade room-scale VR. There were a couple of brief moments during which I genuinely forgot I was in a simulated environment.</p>
<p>In terms of technology, when I went through they were using Oculus DK2 units as stand-ins, but <a href="http://www.roadtovr.com/the-void-rapture-vr-headset-2k-curved-oled-display/">their proprietary HMD is significantly higher resolution and with a wider field of view</a>. I don’t think they have released detailed specifications (beyond saying that it’s a “2K per eye resolution” and “180 degree FoV”) but the HMD looks to be very impressive.</p>
<p>More important than the HMD tech is that <strong>they are creating a physical environment that matches (perfectly) the virtual environment from a tactile perspective– if you reach out and touch a virtual wall it has substance.</strong> When you reach out to use the virtual touch screen controls to open a door, it works. The effect is compelling, and succeeds in creating a thoroughly convincing illusion.</p>
<iframe width="576" height="324" src="https://www.youtube.com/embed/cML814JD09g" frameborder="0" allowfullscreen=""></iframe>
<p>Their body/motion tracking is spot-on.<sup id="fnref:1"><a href="#fn:1" class="footnote">1</a></sup> During one experience, while holding a rifle, I was able to accurately aim it using the sights. The virtual gun was exactly in the location and orientation of the plastic toy I was holding.</p>
<p>The moment of maximum presence for me: As I walked near a door in a science fiction environment it slid open, and I felt a rush of cold air from “outside.” I walked through the door and was on a narrow walkway on an alien world. It was markedly cooler. I could look up and see the alien sky, and I could look down and see a ~500 foot drop with no guardrail. I stepped toward the edge, and felt the edge with my foot. (I later talked to the attendant… it’s a drop of only a few inches– just enough so you’d feel an edge under your foot, but low enough that you weren’t likely to trip and fall if you tried to commit virtual suicide.) <strong>In that moment, I was on that alien world, and the little flying sentry pods that were approaching me felt like a real threat.</strong> When I succeeded in shooting each one down, I felt <em>relief</em> not <em>simple gaming accomplishment</em>. <strong>For that brief moment, it was real to me.</strong></p>
<p>I look forward to trying their proprietary headset when they get it worked out. Higher resolution and higher frame rates will make the experience even more convincing. There were definitely glitches (they had to restart the simulation a few times when the system locked up) but what they have accomplished is clear even in this very early beta form. I think this has potential to be hugely successful when they launch later this year, and since it will be ~3 miles from my house I have a feeling I’m going to spend WAY too much money there.</p>
<p>This is a great reminder to me that we live in the future. It’s worth watching <a href="https://www.youtube.com/watch?v=cML814JD09g&amp;feature=youtu.be">their promotional video</a>. They aren’t overstating it at all– the lab scenes with monsters and giant sci-fi spiders show exactly what they have working today. I hope the experiences go beyond gaming, and first-person shooters specifically<sup id="fnref:2"><a href="#fn:2" class="footnote">2</a></sup>; the technology has much more potential than that.</p>
<p><a href="https://www.youtube.com/watch?v=oCXthgLTj3Q&amp;feature=youtu.be">Video of someone walking through an earlier version of one of the two experiences</a></p>
<p>They can simulate a space of unlimited size using techniques like redirected walking (where you think you’re walking in a straight line but are really tracing a huge circle), so I hope they are ambitious.</p>
<p>However, I’m sure the early stuff will be mostly repurposed video game paradigms.</p>
<div class="footnotes">
<ol>
<li id="fn:1">
<p>Oh, and zero vestibular discomfort (except for a beta-induced glitch where the system was dropping frames.) Zero motion sickness or dizziness. <a href="#fnref:1" class="reversefootnote">&#8617;</a></p>
</li>
<li id="fn:2">
<p>My hope is they are more creative than that. One of the experiences was definitely a first person shooter kind of thing, but the other decidedly was not. (More like being Indiana Jones exploring ruins.) Because of how inexpensive it will be to refit the physical spaces for new experiences, I’m hoping there will be a lot of variety. <a href="#fnref:2" class="reversefootnote">&#8617;</a></p>
</li>
</ol>
</div>
2016-03-14T19:45:00-06:00http://origin.jrj.org/2016/02/17/Apple-Encryption-Fight-iOS-iPhone-FBI/Apple Fights for Encryption2016-02-17T20:45:00-07:00JRJhttp://jrj.org/Apple is playing chicken with the FBI over their right to build software and hardware that securely protects user data...<p>As you’re probably already aware, <a href="http://www.apple.com/customer-letter/">Apple is refusing a demand from the FBI (and corresponding court order) to enable the decryption of an iOS device</a> recovered from one of the San Bernardino shooters. I won’t bother going over the basics of the story– I’ll assume you’ve already read some of the <a href="http://www.theguardian.com/technology/2016/feb/17/inside-the-fbis-encryption-battle-with-apple?CMP=share_btn_tw">mainstream accounts</a> so I can focus on some of the more esoteric (and, at least to me, more interesting) technical matters.</p>
<p>First, a couple assertions that I believe to be correct, but cannot prove:</p>
<ul>
<li>Apple and the FBI are both willing and able to see this through to the end, and it will ultimately be decided by the Supreme Court.</li>
<li>The FBI is not seeking data on this phone, but rather the precedent for future use.</li>
</ul>
<p>While I can’t prove either of those assertions, I believe that the information that’s publicly available <em>is</em> sufficient to conclusively determine that <strong>it’s not possible for Apple to decrypt this one phone without making such an attack possible against all similar phones.</strong> Apple has said as much, but I believe that the published information about the iOS security architecture (and third party penetration testing to date) conclusively validates that claim.</p>
<h2 id="well-documented-security-architecture">Well-documented security architecture</h2>
<p>Apple does not depend on “security through obscurity” to protect iOS devices. They are sufficiently confident in the security architecture that they have been quite open about it, <a href="https://www.apple.com/business/docs/iOS_Security_Guide.pdf">publishing detailed information about the protection techniques used</a>. The architecture has been widely vetted by the security community.</p>
<p>To summarize it, the data on a modern iOS device is AES encrypted with a unique key per file. That key is stored in the file’s metadata<sup id="fnref:1"><a href="#fn:1" class="footnote">1</a></sup> encrypted with a <code class="highlighter-rouge">file system key</code>. The <code class="highlighter-rouge">file system key</code> is stored in the device’s secure enclave,<sup id="fnref:2"><a href="#fn:2" class="footnote">2</a></sup> and the key can be securely deleted by the user interactively (“erase all content and settings”) or in response to a security event such as a “remote wipe” or, if the user has so configured, after 10 consecutive incorrect login attempts. Without this <code class="highlighter-rouge">file system key</code>, the encrypted data is unrecoverable.<sup id="fnref:3"><a href="#fn:3" class="footnote">3</a></sup></p>
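<p>To make that key hierarchy concrete, here’s a toy sketch in Python. It is strictly illustrative– a real device does all of this with AES and hardware key wrapping inside dedicated silicon, while this stand-in fakes the cipher with a SHA-256 keystream so it runs anywhere with only the standard library:</p>

```python
# Toy model of the Data Protection key hierarchy described above.
# Illustrative only: real devices use AES and RFC 3394 key wrapping in
# hardware; this sketch simulates "encryption" with a SHA-256 keystream.
import os, hashlib

def keystream_xor(key: bytes, data: bytes) -> bytes:
    """XOR data with a SHA-256-derived keystream (stand-in for AES)."""
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, out))

class Device:
    def __init__(self):
        self.file_system_key = os.urandom(32)  # held by the device
        self.files = {}                        # name -> (wrapped_key, ciphertext)

    def write(self, name: str, plaintext: bytes):
        file_key = os.urandom(32)              # unique key per file
        wrapped = keystream_xor(self.file_system_key, file_key)  # in metadata
        self.files[name] = (wrapped, keystream_xor(file_key, plaintext))

    def read(self, name: str) -> bytes:
        if self.file_system_key is None:
            raise PermissionError("file system key erased; data unrecoverable")
        wrapped, ciphertext = self.files[name]
        file_key = keystream_xor(self.file_system_key, wrapped)
        return keystream_xor(file_key, ciphertext)

    def secure_erase(self):
        # "Erase all content and settings" / remote wipe: only the 32-byte
        # key is destroyed, instantly orphaning every encrypted file.
        self.file_system_key = None

d = Device()
d.write("notes.txt", b"meet at noon")
assert d.read("notes.txt") == b"meet at noon"
d.secure_erase()  # ciphertext is still on disk, but no longer decryptable
```

<p>The sketch captures the property that matters for this story: the ciphertext itself never has to be touched– destroying one small key is enough to make every file on the device unreadable.</p>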
<p>Since the <code class="highlighter-rouge">file system key</code> is securely deleted after 10 unsuccessful login attempts, it’s impractical to brute-force even a simplistic 4-digit numeric passcode, because the cost of the 10th incorrect guess is unrecoverable data loss. However, <strong>without that protection in place, brute-forcing a typical numeric passcode could be reliably accomplished in the time it takes for Domino’s to deliver a pizza, so the FBI is demanding that Apple make it possible for them to defeat that mechanism so they can attack the weakest link in the security chain: the user’s passcode.</strong> It’s worth noting that this technique would not be effective on a device with a strong password, and using a strong password is quite practical on a modern iOS device thanks to the Touch ID sensor. I do it, and you should too.</p>
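<p>The pizza math is easy to check. Apple’s security guide says the passcode key derivation is calibrated to take roughly 80 ms per attempt; taking that figure at face value:</p>

```python
# Back-of-the-envelope: brute-forcing a passcode once the 10-try erase
# is disabled. The ~80 ms per attempt comes from Apple's published
# key-derivation calibration; all attempts must run on-device.
ATTEMPT_SECONDS = 0.08

def worst_case_minutes(digits: int) -> float:
    """Minutes to try every possible passcode of the given length."""
    return (10 ** digits) * ATTEMPT_SECONDS / 60

print(f"4-digit passcode: {worst_case_minutes(4):.1f} minutes")    # ~13 minutes
print(f"6-digit passcode: {worst_case_minutes(6) / 60:.1f} hours") # ~22 hours
```

<p>Thirteen minutes, worst case, for a four-digit code– the pizza comparison holds up.</p>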
<p>So why can’t the FBI simply “hack” the OS themselves? Because Apple’s security features prevent a modified OS image from being installed. It’s possible to update the operating system on an iOS device via USB without the passcode, but only if the OS image is digitally signed by Apple, a restriction which is enforced in hardware via Apple’s Secure Boot chain.</p>
<p>So, while it’s possible to create a new OS that won’t delete the encryption key after 10 unsuccessful login attempts, and it’s possible to install such an OS on an iOS device without first logging in, Apple would need to digitally sign that OS image. This is what the FBI is demanding: they want Apple to create (and, more importantly, digitally sign) an OS image that can be installed on the phone in question. With the erase-after-10-bad-logins functionality disabled, the FBI could then simply brute-force the passcode (which is typically just a 4 or 6 digit number.)<sup id="fnref:4"><a href="#fn:4" class="footnote">4</a></sup> Apple is refusing to comply with this demand.</p>
<h2 id="what-does-this-all-mean">What does this all mean?</h2>
<p>There are a couple of interesting facets here.</p>
<p>First of all, the phone in question is a 5c (which is, essentially, an iPhone 5 in a colorful plastic case.) The device pre-dates the inclusion of the “secure enclave,” which was introduced with the 5S and its A7 SoC. Based on my reading of Apple’s security documentation, it’s not clear to me that such a technique would be effective on a newer device, since the secure enclave is designed to maintain the integrity of Data Protection even if the main OS and kernel have been compromised. However, even though this specific technique is likely not applicable, it’s highly probable that <a href="http://techcrunch.com/2016/02/17/why-apple-is-fighting-not-to-unlock-iphones-for-the-government/">some approach is possible with newer devices even though the technical demands would be different</a>. With the precedent set, the FBI could demand Apple create a new circumvention tool the next time around.</p>
<p>Second, the FBI is being very strategic here. They’ve chosen a case that’s a.) highly incendiary/emotional, and b.) involves parties who are all dead. It’s very difficult politically for Apple to fight this battle, but they’re doing it anyway. It’s risky, and I’m not convinced they can win it, but I’m impressed they are trying.</p>
<p>The question here is simple (and mirrors <a href="https://en.m.wikipedia.org/wiki/Crypto_Wars">a similar battle that the law enforcement community lost in the 90s</a>.) <strong>Is it legal for a company to produce a product which protects data so well that the company itself cannot recover it in response to a court order?</strong> If courts find that the answer is “no” there will be far-reaching effects. For example, I use cloud-based backup because I was able to validate that the data is encrypted locally, and the encryption key is never sent to the cloud. I would not be able to use a cloud-based backup solution without such protection, since if the company is compromised my data would be at risk. There are dozens (if not hundreds) of similar solutions that would no longer be feasible to implement.</p>
<p>Another way to frame the same question: <a href="http://www.macworld.com/article/3034355/ios/why-the-fbis-request-to-apple-will-affect-civil-rights-for-a-generation.html">Should companies be legally obligated to integrate security circumvention technologies into their products?</a></p>
<p>For some reason, one simple fact seems to not be obvious to some people: U.S. law only applies within our borders. If you prevent companies like Apple from implementing secure solutions (and/or force them to create circumvention functionality that bypasses their own security) you advantage international competitors. When faced with a choice between a known-compromised/backdoored device from an American company or a known-secure solution from a foreign competitor, which one do you think people will choose? Yes, you can mandate such solutions in the U.S., but we represent a small part of the global market. Even if we COULD compel companies outside our borders (hint: we can’t) there’s still the small problem that solutions, many of them open source, already exist to encrypt data. The encryption cat is already out of the bag. (I’m just guessing, but I don’t think terrorists would select the known backdoored solutions either– they have a nasty habit of not cooperating with the preferences of law enforcement.)</p>
<p><strong>Personally, I’m on the side of Apple, the <a href="https://www.aclu.org/news/aclu-comment-fbi-effort-force-apple-unlock-iphone">ACLU</a>, the <a href="https://www.eff.org/deeplinks/2016/02/eff-support-apple-encryption-battle">EFF</a>, and others on this fight. I’m not confident they can win it, but I commend them for trying.</strong></p>
<p>Update: I wanted to add a couple links that do a great job of explaining some of the technical and legal issues here. First, <a href="http://www.wired.com/2016/02/apples-fbi-battle-is-complicated-heres-whats-really-going-on/">one in plain English from Wired</a>, and the second <a href="http://blog.cryptographyengineering.com/2014/10/why-cant-apple-decrypt-your-iphone.html">more focused on security and crypto nerds by Matt Green</a>.</p>
<div class="footnotes">
<ol>
<li id="fn:1">
<p>Files are encrypted with AES 256 (Cipher Block Chaining) using a unique key per file. The initialization vector is calculated with a block offset into the file, encrypted with a SHA-1 hash of the file key. The key is then wrapped (using AES key wrapping compliant with RFC 3394) and stored in the file’s metadata. <a href="#fnref:1" class="reversefootnote">&#8617;</a></p>
</li>
<li id="fn:2">
<p>The “Secure Enclave” is a separate security coprocessor that is part of Apple’s System on a Chip (SoC) A7 and above. It has its own secure boot, registers, and processing capability that is separate and explicitly segmented from the main CPU and memory, and it performs cryptographic and security operations on behalf of the user without making information available to the rest of the system. The concept of a secure enclave is not new– it’s included in the ARM specifications for TrustZone– but Apple’s implementation is subtly different and optimized for slightly different scenarios. The Secure Enclave has a feature called “Effacable Storage” designed to mitigate the difficulty of securely erasing flash storage so that keys can be securely erased at the request of the user or in response to a security event. <a href="#fnref:2" class="reversefootnote">&#8617;</a></p>
</li>
<li id="fn:3">
<p>I’m using “Unrecoverable” as a simplification. To be more accurate, recovery of the data is computationally infeasible without the decryption key, and guessing the decryption key through brute force is also infeasible. With a modern computer, it would take 3×10<sup>51</sup> years; even accounting for routine doubling of computational power via Moore’s Law, you are still flirting with the timescale on which our Sun will burn out, and we’ll have larger concerns if we’re still around. Quantum computing will make this problem more tractable, but cannot mount a practical attack against a 256-bit AES key in the foreseeable future. <a href="#fnref:3" class="reversefootnote">&#8617;</a></p>
</li>
<li id="fn:4">
<p>The passcode is entangled with the device’s UID, forcing brute force attacks to take place on the device being attacked (i.e. preventing offline attacks.) On newer devices (5S and above) time delays are enforced in hardware extending a theoretical brute force attack from ~30 minutes to multiple years unless Apple is forced to update the firmware in the Secure Enclave. <a href="#fnref:4" class="reversefootnote">&#8617;</a></p>
</li>
</ol>
</div>
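<p>Footnote 3’s “computationally infeasible” is easy to put on a scale. The guess rate below– 10<sup>18</sup> keys per second– is my own deliberately generous assumption, far beyond any real machine:</p>

```python
# Rough scale of exhaustively searching a 256-bit AES keyspace,
# assuming an (absurdly optimistic) 10**18 guesses per second.
GUESSES_PER_SECOND = 10 ** 18
SECONDS_PER_YEAR = 60 * 60 * 24 * 365

keys = 2 ** 256
years = keys / GUESSES_PER_SECOND / SECONDS_PER_YEAR
print(f"{years:.1e} years")  # on the order of 10**51 years
```

<p>That lands right around the 3×10<sup>51</sup>-year figure from the footnote– the universe-scale number is not hyperbole.</p>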
2016-02-17T20:45:00-07:00http://origin.jrj.org/2015/12/18/science-is-not-a-body-of-knowledge/Science is not a Body of Knowledge2015-12-18T18:45:00-07:00JRJhttp://jrj.org/A brief story from my youth that taught me a lesson about science...<p>A typical “internet argument” recently reminded me of a story.</p>
<p>When I was a little kid, I was really interested in astronomy. I had a huge poster of the solar system on my wall. I loved it, and read everything I could get my hands on.</p>
<p>I recall a specific instance in which we were studying the planets of our solar system in an elementary school science class. Later that week, the following question was on a test:</p>
<p><em>“How many moons does Jupiter have?”</em> I answered <em>“17”</em> and was marked wrong.</p>
<p>I went to the teacher and said <em>“Why is this wrong? Jupiter has 17 moons.”</em></p>
<p>The teacher said <em>“No, Jupiter has 14 moons”</em> and showed me the page in the textbook showing 14 moons. I was really confused, because I had just read a wonderful picture-filled article from National Geographic about the Voyager missions, and it said 17.<sup id="fnref:1"><a href="#fn:1" class="footnote">1</a></sup> I could even name them in order of distance from the planet.</p>
<p><strong>Turns out the textbook was published the same year I was born.</strong></p>
<p>The really frustrating part? The grade stuck– the teacher argued that I was supposed to learn the “fact” that was in the textbook, and was unswayed by my argument that it was incorrect. Obviously, most teachers (particularly <em>science teachers</em>) would not have been so stupid– indeed, most of my later science teachers did not have that mindset. However, it illustrates my main point nicely: Science is not a body of knowledge. It’s not a list of facts. It’s a method of inquiry. Trying to teach it as a bunch of unchanging facts is folly.</p>
<div class="footnotes">
<ol>
<li id="fn:1">
<p>As of now, we are aware of 67 Jovian moons. <a href="#fnref:1" class="reversefootnote">&#8617;</a></p>
</li>
</ol>
</div>
2015-12-18T18:45:00-07:00http://origin.jrj.org/2015/08/05/android-vulnerability-shines-light-on-carriers-oems/Android Stagefright Vulnerability Shines Light on Carriers and OEMs2015-08-05T12:18:00-06:00JRJhttp://jrj.org/Android users need to do some serious soul searching about security, because wireless carriers and device OEMs aren't helping...<p>Lorenzo Franceschi-Bicchierai, who covers the security beat for Vice/Motherboard, is <a href="http://motherboard.vice.com/read/goodbye-android">reluctantly moving away from Android in response to the latest security vulnerability in Google’s mobile operating system</a>. I can’t blame him: absent rapid security updates in response to issues like this, no operating system can remain secure in the face of the modern security climate. I think his piece is a must-read for anyone who uses an Android device.</p>
<p>The “Stagefright” vulnerability is just the latest and loudest example, but the concept is nothing new. The lack of security updates made available to Android users is a serious problem in the face of modern vulnerabilities and exploits. (Singular exception: Nexus devices purchased directly from Google.)</p>
<p>The wireless carriers are the primary obstacle, but they’re not the only problem. OEMs, who are far more interested in selling you a new phone than updating your old one deserve some of your scorn as well.</p>
<p>There are four parties involved, and all of them have to come together to update a phone:</p>
<ol>
<li>Google has to update the OS. (They are VERY good about doing this– they are quick to respond to security vulnerabilities, and do a good job, but none of that matters without the other 3.)</li>
<li>Device manufacturers (OEMs) have to take Google’s update and apply it to the image associated with specific device models. With new devices, most do a reasonable (not great) job of this, but with phones more than 12 months old they tend to conveniently forget they exist.</li>
<li>On the off chance your device’s OEM steps up and provides an update, carriers have to “certify” it before it can be made available to consumers. Carriers are awful about this, and the few updates they do certify lag by 6-9 months or more. It’s inexcusable.</li>
<li>End users need to apply updates when they are made available. Users aren’t great at doing this in a timely fashion, but they are nowhere near being the long pole in this scenario.</li>
</ol>
<p>Or, you could buy an iPhone and get updates on day one. It sucks that those are your options, but there’s no denying that, unless something changes, Android users are going to remain at risk as new vulnerabilities are discovered.</p>
2015-08-05T12:18:00-06:00http://origin.jrj.org/2015/05/17/ai-bots-revenge-poker-and-artificial-intelligence/Revenge of the Bots: Poker &amp; AI2015-05-17T22:18:00-06:00JRJhttp://jrj.org/A look at trends in poker AI, and predictions on impact over the next decade…<!---
![Poker AI](/assets/postheads/pokerai.png "Poker AI")
-->
<p>Early on in my poker playing “career” (before legislation passed in my then home state of Washington that made playing online poker a felony) I played frequently online. One of the trends I started to notice was that a small trickle of poker “bots” were starting to be used. Players would deploy software to play on their behalf, and these “bots,” simply through a combination of very tight play and the horrible loose-passive opponents at micro-limits, could apparently eke out small win rates (which, when multi-tabling 24/7 could presumably add up to enough money to offset the cost of the bot.) However, their play was atrocious. I remember reading articles at the time with tons of hand-wringing about these bots, but they didn’t bother me. Indeed, I maintained a list of players that I suspected were bots and sought them out—they were predictable and highly exploitable. I would find one playing, and wait for the seat to its left to open up, and grind out nice profits at their expense. It was fun, and I enjoyed knowing that I was harming the already pathetic win-rate of unscrupulous players who were using software that was clearly against the terms of service of the site (AKA “Cheating.”) None of the accounts lasted very long, so I suspect that the bots were actually losing money in the long run, and consequently fell into disuse. Also, poker rooms started deploying countermeasures and looking for evidence of bot use, and suspending accounts. Fast forward almost a decade, and what was once an amusing and exploitable novelty may now be morphing into a legitimate threat.</p>
<p>2015 has been a momentous year for the development of artificial intelligence and the game of poker with huge accomplishments by researchers. Unfortunately, as is so often the case with complex subjects, the mainstream media has done a horrible job of reporting these developments, usually vastly overstating them. Publications focused on AI have done a poor job of reporting these developments because of a lack of poker understanding, and poker writers have done an even worse job due to a lack of general AI knowledge. I thought I might share some thoughts on the current state of poker AI research, and how I expect it will impact the game of poker in the coming decade.</p>
<p>In late 2014 and early 2015, the Internet was hyperventilating over <a href="http://www.sciencemag.org/content/347/6218/145.abstract">a paper published by the AAAS journal Science</a>. Most of the confusion came from the published title of the paper (“Heads-Up Limit Hold’em Poker is Solved”) which I believe to be misleading. Since nobody reads the actual articles (let alone academic papers) the mainstream press pressed hard on the hyperbole accelerator pedal, stating <a href="http://www.slate.com/blogs/future_tense/2015/01/14/two_player_heads_up_limit_texas_hold_em_poker_weakly_solved_by_ai_researchers.html">that poker had been solved</a>, often <a href="http://www.cbc.ca/news/technology/heads-up-limit-texas-hold-em-poker-solved-by-university-of-alberta-scientists-1.2893987">comparing it to IBM’s Deep Blue</a>’s stunning <a href="http://en.wikipedia.org/wiki/Deep_Blue_versus_Garry_Kasparov">victory over chess champion Garry Kasparov in the late 90s</a>. However, nothing like that has actually occurred. Yet.</p>
<h1 id="specific-and-esoteric-variation-of-poker">Specific and esoteric variation of Poker</h1>
<p>First of all, this was a very specific game type: Heads-Up Limit Hold’Em (which I will abbreviate from here on out as HU-LHE.) This is a game that, by and large, humans don’t play. Even before this paper was published, it would have been impossible to find this game being spread in any bricks-and-mortar casino, and you’d likely be unable to even find such a game actively played on any of the myriad online poker sites. Heads Up (2-player) Limit (structured betting with fixed limits) Hold’Em (a community-card variation with 2 private hole cards and 5 shared cards) just isn’t a game humans play in the real world. It’s essentially the simplest form of poker—artificially so. It’s a convenient simplification that exists for two reasons:</p>
<ul>
<li>
<p>To facilitate research exactly like this—by evaluating a simplified subset of the game of poker, researchers (and the game engine they create) have a more tractable problem to solve. By limiting to two players, you drastically simplify the game—easily by an order of magnitude or more. Similarly, by using structured betting (typically a small blind that is ½ of the big blind, a “small bet” during the first 2 betting rounds equal to the big blind, and a “big bet” in the later betting rounds of double the big blind) instead of pot limit or no-limit, you’re combinatorially limiting the number of possible actions per betting round. Anyone with even a passing familiarity with poker would say that HU-LHE is several orders of magnitude less complex than the more common 6-10 player games with no structured betting limits (No Limit Hold’Em, abbreviated NLHE.)</p>
</li>
<li>
<p>To create poker machines for casinos to replace the familiar video poker that plays outdated draw poker with something that looks more like what modern players see on TV without exposing the casino to undue risk.<sup id="fnref:1"><a href="#fn:1" class="footnote">1</a></sup></p>
</li>
</ul>
<h1 id="solved-not-quite">Solved? Not quite.</h1>
<p>So the stories that showed up in the first half of this year suggesting that online poker would no longer be playable were wrong, even if judged solely on the title of the paper in question. However, when you dig into the abstract (<a href="http://www.sciencemag.org/content/347/6218/145.abstract">still available without paying for the journal</a>, by the way) you find that even restricting the statement to just the one game (that, again, humans don’t really play) isn’t enough to eliminate the hyperbole.</p>
<p>To quote the abstract: (emphasis added)</p>
<blockquote>
<p>“…Whereas many perfect-information games have been solved (e.g., Connect Four and checkers), no nontrivial imperfect-information game played competitively by humans has previously been solved. Here, we announce that heads-up limit Texas hold’em <em>is now essentially weakly solved</em>”</p>
</blockquote>
<p><em>“Essentially weakly solved.”</em> That sounds like a lot of wiggle-words, but if you read the paper they are reasonably well qualified and defined. However, very few of the mainstream articles I read used this language in the title or even the first few paragraphs. Most of them repeated the paper title’s flawed assertion that the game was “solved.”</p>
<p>So what does this mean? Well, first let’s start with the objective, mathematical term “weakly solved,” which comes from a branch of mathematics called <a href="http://en.wikipedia.org/wiki/Game_theory">Game Theory</a>.</p>
<p>A game that is <a href="http://en.wikipedia.org/wiki/Solved_game#Weak">weakly solved</a> means that an algorithm exists that can ensure a win or a draw for a player from the start of the game—the “weak” solution ensures that the player will not lose. However, a weak solution does not ensure optimal play against an imperfect opponent, nor from arbitrary mid-game positions.</p>
<p>A <a href="http://en.wikipedia.org/wiki/Solved_game#Strong">strongly solved</a> game means that an algorithm exists that can produce perfect play, even if mistakes have already been made in earlier moves/plays by either side. “Strong” proofs are often generated and/or proven by brute force analysis,<sup id="fnref:2"><a href="#fn:2" class="footnote">2</a></sup> where a computer exhaustively searches an entire game tree to evaluate all possible branches. Perfect play punishes mistakes from the other player(s) and ensures no exploitable weakness on the part of the algorithm player. Blackjack is a strongly solved game, as is the aforementioned Tic Tac Toe.</p>
<p>In a simple game like Tic-Tac-Toe, the difference between weakly and strongly solved games is largely academic. However, the difference is huge in a complex game. Optimized play against an imperfect opponent can mean the difference between losing money and earning it in a game like poker.</p>
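<p>To make the “brute force” idea concrete, here is a minimal sketch (my own illustration, not from the paper) of strongly solving Tic-Tac-Toe: a memoized minimax search that returns the value of <em>any</em> position under perfect play, which is exactly what a strong solution requires.</p>

```python
from functools import lru_cache

# Brute-force "strong" solution of Tic-Tac-Toe: from ANY position,
# return the game value under perfect play (+1 = X wins, 0 = draw, -1 = O wins).
WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
             (0, 3, 6), (1, 4, 7), (2, 5, 8),
             (0, 4, 8), (2, 4, 6)]

def winner(board):
    for a, b, c in WIN_LINES:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return 1 if board[a] == 'X' else -1
    return 0

@lru_cache(maxsize=None)
def solve(board, to_move):
    w = winner(board)
    if w or ' ' not in board:
        return w                      # terminal position: win, loss, or draw
    values = []
    for i, cell in enumerate(board):
        if cell == ' ':
            child = board[:i] + to_move + board[i + 1:]
            values.append(solve(child, 'O' if to_move == 'X' else 'X'))
    # X maximizes, O minimizes: this yields perfect play from any state
    return max(values) if to_move == 'X' else min(values)

print(solve(' ' * 9, 'X'))  # prints 0: perfect play from the start is a draw
```

Because it answers from any position, not just the opening, this meets the “strong” bar; a weak solution would only need to handle positions its own strategy can reach.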
<p>So HU-LHE is weakly solved? No, not quite. They said “<em>Essentially</em> weakly solved.” Unfortunately, that wiggle-word is a bit more subjective. The authors of the paper take care to define it thusly:</p>
<blockquote>
<p>“The result is that our new program Cepheus is beatable for […] 0.05 BB/100. Even if you knew the perfect counter-strategy and could play it flawlessly, it’d take 60 million games to overcome the variance due to luck in order to actually have 95% confidence that you were winning. It’s essentially solved: not quite perfect, but closer than any human could distinguish within a lifetime of play.”</p>
</blockquote>
<p>This is pretty dense prose, but it’s clear and unambiguous. The current algorithm is technically beatable by perfect play, but with such a small win rate that, given expected variance, no human could reliably beat it within the number of hands that can reasonably be played in a single human lifetime. While a lot of poker experts quibble about this terminology, I think it’s reasonable and well defined. Hence, the “essentially” wiggle-word is academically interesting and important from a scientific perspective, but is not important from a practical perspective.</p>
<p>Put another way, there’s no approximation in math—if X=1 but you get the answer 1.000000000000001, your algorithm is objectively wrong. However, how meaningful that precision is in practice depends on the application. What the authors are saying is that while their algorithm can technically be beaten (X!=1) it can only be beaten “in the long run” where “long” is equal to a duration greater than a human lifetime. Their algorithm does not technically achieve the <a href="http://en.wikipedia.org/wiki/Nash_equilibrium">Nash equilibrium</a> required for a true solution to a game of imperfect information, but it gets close enough that it doesn’t matter from a practical perspective against human players.<sup id="fnref:3"><a href="#fn:3" class="footnote">3</a></sup></p>
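<p>The “60 million games” figure is easy to sanity-check. With a true edge of μ big blinds per hand and a per-hand standard deviation of σ, a one-sided 95% confidence test needs roughly n &gt; (1.645σ/μ)² hands. A quick sketch, where the σ of 2.4 big blinds per hand is my own illustrative assumption for heads-up limit hold’em, not a figure from the paper:</p>

```python
# Sanity check of the "60 million hands" claim: how many hands n does a
# player with true edge mu (in BB per hand) need before a one-sided 95%
# confidence test shows a real win rate? Solve mu*n > z*sigma*sqrt(n) for n.
def hands_needed(mu_per_hand, sigma_per_hand, z=1.645):
    return (z * sigma_per_hand / mu_per_hand) ** 2

mu = 0.05 / 100   # the paper's exploitability bound: 0.05 BB per 100 hands
sigma = 2.4       # ASSUMED per-hand standard deviation in big blinds (illustrative)
print(f"{hands_needed(mu, sigma):,.0f} hands")  # on the order of tens of millions
```

With those numbers the answer comes out around 60 million hands, which is consistent with the authors’ claim.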
<p>OK, so where does that leave us? Is poker solved? No. A highly specialized (and simplified) version of poker that humans don’t really play is essentially weakly solved, to the extent that no human player can reliably beat it—but neither will it extract money from the best players. It’s not playing an exploitative game, and does not know how to maximally exploit suboptimal play from its opponent.</p>
<p>Another tidbit that I think is important is that the researchers acknowledge that the algorithm does not take its opponents’ past actions into account. Each hand is essentially an independent game, with no history. At the risk of oversimplifying, the algorithm simply looks up the current hand and board combination in a giant database<sup id="fnref:4"><a href="#fn:4" class="footnote">4</a></sup> and gets the answer of how to act, with no regard for its opponent’s past actions. This makes sense, since they are seeking a Game Theoretical Optimal (GTO) solution: they want actions that are not exploitable. However, it’s a terrible way to maximize winnings at the poker table.</p>
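<p>The “regret” machinery mentioned in the footnotes deserves a quick illustration. Cepheus is built on counterfactual regret minimization (CFR); a full implementation is far beyond a blog post, but the core regret-matching idea fits in a few lines. Here is my own toy version on rock-paper-scissors: accumulate regret for each action you didn’t take, then play in proportion to positive regret.</p>

```python
import random

# Toy regret-matching demo (my own sketch, not the paper's code).
ACTIONS = 3  # 0 = rock, 1 = paper, 2 = scissors

def payoff(a, b):
    # +1 if action a beats b, -1 if it loses, 0 on a tie
    return 0 if a == b else (1 if (a - b) % 3 == 1 else -1)

def strategy_from_regret(regret):
    positive = [max(r, 0.0) for r in regret]
    total = sum(positive)
    if total == 0:
        return [1.0 / ACTIONS] * ACTIONS  # no positive regret yet: play uniformly
    return [p / total for p in positive]

def train(opponent_mix, iters=100_000, seed=1):
    random.seed(seed)
    regret = [0.0] * ACTIONS
    strategy_sum = [0.0] * ACTIONS
    for _ in range(iters):
        strat = strategy_from_regret(regret)
        mine = random.choices(range(ACTIONS), weights=strat)[0]
        opp = random.choices(range(ACTIONS), weights=opponent_mix)[0]
        # Regret: how much better each alternative would have done this round
        for alt in range(ACTIONS):
            regret[alt] += payoff(alt, opp) - payoff(mine, opp)
        for a in range(ACTIONS):
            strategy_sum[a] += strat[a]
    total = sum(strategy_sum)
    return [s / total for s in strategy_sum]  # time-averaged strategy

# Against an opponent who over-plays rock, the average strategy
# shifts heavily toward paper.
print(train([0.5, 0.25, 0.25]))
```

Note that, like Cepheus, this learner keeps no model of any individual opponent’s history: it just minimizes its own regret, which is why the resulting strategy is defensive rather than maximally exploitative.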
<h1 id="moving-the-research-forward">Moving the Research Forward</h1>
<p>Where does this research go from here? Presumably in two directions in parallel, ideally converging in a few years.</p>
<p>First, poker playing algorithms need to develop the ability to play No Limit games. This will geometrically increase the size of an already massive dataset, but probably keep it within a size manageable by modern research-grade computing. (Going from ~17 terabytes to a few petabytes doesn’t make this problem intractable to researchers.) Indeed, the researchers already have a no-limit version that can beat a typical novice player, and it will continue to improve. It has been beaten in a match by strong players, but with too small a sample size to be statistical proof.<sup id="fnref:5"><a href="#fn:5" class="footnote">5</a></sup> I suspect that they will be able to claim that HU-NLHE is “essentially weakly solved” within 1-3 years by simply applying their current HU-LHE methodology, with moderate but achievable increases in storage and CPU requirements to handle the additional permutations; playing a few trillion more hands to better tweak the algorithm will also help.</p>
<p>Second, poker playing algorithms need to get to a point where they can play against multiple opponents. Playing against 10 opponents is obviously (and precisely) an order of magnitude more complex than playing against one. However, I would argue that the subtleties of the game, and the actions of each player impacting the decisions of players left to act, make 10-handed Limit Hold’em (a game that is popular and commonplace in both online and brick-and-mortar poker rooms) significantly more than simply 10X more complex than play against a single opponent: players interact with one another in ways that start approximating <a href="http://en.wikipedia.org/wiki/Graph_theory">graph theory</a>. If you accept the graph of those interactions at face value, it’s not 10X more complicated, <strong>it’s theoretically 1,000 times more complicated.</strong> However, it’s probably not quite that much of an explosion in practice<sup id="fnref:6"><a href="#fn:6" class="footnote">6</a></sup>, and a well designed algorithm can probably compress that significantly. I’m guessing somewhere in the 20-60X range. That’s not a problem within the reach of current researchers without significant increases in funding or computational power at a given price point. Assuming funding as a constant, that’s 5-7 doublings of computational power, or ~7-10 years of Moore’s Law if you assume the researchers’ current techniques.</p>
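<p>The Moore’s Law arithmetic in that estimate is simple to check: a 20-60X increase in required computation needs log2(20) ≈ 4.3 to log2(60) ≈ 5.9 doublings, and at roughly 18 months per doubling that lands in the same ballpark as the 7-10 year figure above.</p>

```python
import math

# 20-60x more computation, with compute doubling every ~18 months (Moore's Law)
for factor in (20, 60):
    doublings = math.log2(factor)
    years = doublings * 1.5  # ~18 months per doubling
    print(f"{factor}x -> {doublings:.1f} doublings, ~{years:.1f} years")
```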
<p>However, I strongly suspect that the application of <em>deep machine learning</em> techniques can be successful much faster. Rather than a simple “regret score”<sup id="fnref:7"><a href="#fn:7" class="footnote">7</a></sup> on each of trillions of hands, machine learning can extrapolate meaningful insights beyond simple brute force trial and error, and with less computational power.<sup id="fnref:8"><a href="#fn:8" class="footnote">8</a></sup> Of course, even a strongly solved algorithm would be vulnerable to collusion on the part of its human opponents. Even theoretically perfect play can only beat multiple opponents who are not colluding against the computer, even implicitly.<sup id="fnref:9"><a href="#fn:9" class="footnote">9</a></sup></p>
<p>My back-of-the-envelope calculations put us at 1-3 years away from HU-NLHE being “essentially weakly solved,” and probably another 3-5 years on top of that for full ring games of 6-10 players (with a small possibility it could take as much as 10 years.) So how does that impact the poker world?</p>
<h1 id="can-online-poker-stop-bots">Can online poker stop bots?</h1>
<p>Even today, every online poker room I am aware of prohibits the use of poker algorithms that play the game or instruct the player what actions to take. However, I don’t see how a poker room could effectively enforce a ban on artificial intelligence software.</p>
<ul>
<li>
<p><strong>HU-LHE:</strong> From a practical perspective, Heads Up Limit Hold’em is already weakly solved. It will be impossible for even the best player in the world to beat a well-designed algorithm. Hence, human players should steer clear of online HU-LHE games, which they already do. So this is academically interesting, but has no real practical impact beyond creating a bunch of press.</p>
</li>
<li>
<p><strong>HU-NLHE:</strong> Heads Up No Limit Hold’em will likely be essentially weakly solved within the next couple of years. Current algorithms, while not quite at the level of game theory optimal, can likely beat typical low-limit players. Unless you are a world-class player, you should probably consider gradually moving away from heads-up online play (except at the end of final-table play in a tournament.) Within a couple of years, that recommendation will apply to expert and world-class players as well. Heads-up online play has a short shelf life.</p>
</li>
<li>
<p><strong>LHE:</strong> 6-10 handed Limit Hold’em is probably safe for now, but I expect typical players to be beatable by bots within the next year or two, and world-class players in 3-5 years.</p>
</li>
<li>
<p><strong>NLHE:</strong> 6-10 handed No Limit Hold’em, often referred to as the “Cadillac of Poker,” is the game that decides the World Series of Poker Main Event. Based on the pace of current research, I expect this game to be “essentially weakly solved” in the next 5-7 years. This means that online poker may not be able to survive the bot onslaught beyond that point with its game integrity intact. I will be surprised if online poker still exists at the current scale in 7 years, positively shocked if it lasts 10 years. However, I sincerely hope I’m wrong about that.</p>
</li>
</ul>
<p>These estimates are for how long it will take research-grade computing to reach the described levels of play, which means fairly substantial computing resources. Keep in mind, though, that through cloud hosting platforms like Amazon’s AWS and Microsoft’s Azure, virtually unlimited computing power is available to anyone for a price. However, that price is fairly high today—perhaps adding a couple of years to each of my estimates, for high-end desktop computers to catch up with the resources required, would be reasonable.</p>
<div class="footnotes">
<ol>
<li id="fn:1">
<p>When you play poker in a casino, you aren’t playing against the house, you’re playing against the other players. The house takes a small amount out of each pot, called the “rake.” This means that poker represents zero risk to the casino; they make money on every hand. That’s why with video poker machines you are playing a variant of poker with no other players, simply trying to achieve a 5-card poker hand: it presents a simple statistics problem, so the house can maintain a standard and predictable edge. A solved HU-LHE allows them to create video poker machines with a game that looks more like the game players see on TV, but where the casino can ensure it can’t be consistently beaten by skilled players. <a href="#fnref:1" class="reversefootnote">&#8617;</a></p>
</li>
<li id="fn:2">
<p>One of the best ways for someone with even a beginner’s programming knowledge to understand game theory brute force analysis is to take a look at the <a href="https://github.com/glinscott/Garbochess-JS">GarboChessJS engine</a>, which is an incredibly strong chess algorithm implemented in <a href="https://github.com/glinscott/Garbochess-JS/blob/master/js/garbochess.js">~2500 lines of surprisingly readable JavaScript</a>. Very fun to read, if you’re into that sort of thing. You’ll be amazed (and, if you play chess, humbled) by how simplistic it is. <a href="#fnref:2" class="reversefootnote">&#8617;</a></p>
</li>
<li id="fn:3">
<p>Also, the algorithm is playing without a “rake,” making it a zero-sum game. Add a rake and the algorithm would not be able to force a win/draw, but it would ABSOLUTELY be able to prevent the other player from winning money sufficient to beat the rake. <a href="#fnref:3" class="reversefootnote">&#8617;</a></p>
</li>
<li id="fn:4">
<p>No, I mean <em>really</em> big. Through compression and elimination of redundant card combinations that are equivalent from a practical perspective (for example, it needn’t care if your AQs is spades or hearts) <strong>they were able to reduce the data set down to 11 terabytes for counterfactual data, and 6 terabytes for main strategy data.</strong> Yeah. That’s a lot of ones and zeroes. Using it in practice required 200 computational nodes, each with 24 CPU cores and 32 gigabytes of RAM. Don’t expect to see an iPhone version until a few dozen more doubling cycles of Moore’s Law have kicked in. <a href="#fnref:4" class="reversefootnote">&#8617;</a></p>
</li>
<li id="fn:5">
<p>The win rate had the human players positively crushing the computer with a win rate of over 9BB/100 hands, but with only 80,000 hands played, statisticians are rightfully calling “LOLsamplesize.” <a href="#fnref:5" class="reversefootnote">&#8617;</a></p>
</li>
<li id="fn:6">
<p>A player’s decision when holding middle-strength cards is profoundly impacted by the actions of other players and players yet to act, but extremely weak or strong holdings often play the same regardless of what your opponents do in full ring games. <a href="#fnref:6" class="reversefootnote">&#8617;</a></p>
</li>
<li id="fn:7">
<p>The Cepheus algorithm is largely designed around minimizing its regret: it reviews every decision and then learns which ones paid off, and which ones cost it money. However, unlike human players, this analysis is not colored by selective memory or faulty statistics. <a href="#fnref:7" class="reversefootnote">&#8617;</a></p>
</li>
<li id="fn:8">
<p>Some interesting deep learning resources: Google’s <a href="http://www.wired.com/2015/02/google-ai-plays-atari-like-pros/">DeepMind</a>, IBM’s <a href="http://en.wikipedia.org/wiki/Watson_(computer)">Watson</a>. Also, check out <a href="https://www.youtube.com/watch?v=xx310zM3tLs">this TED talk for a great visual introduction</a> to the concept. (OK, Watson is arguably not technically deep learning, but it uses substantially similar techniques from the perspective of how one might train a poker bot.) <a href="#fnref:8" class="reversefootnote">&#8617;</a></p>
</li>
<li id="fn:9">
<p>Even a “strong” solution algorithm could only beat players who are not playing together/colluding. If you assume the use of an algorithm/bot is cheating (and it is) then players could cheat in response. However, the resulting game is completely lacking in integrity, and loses all interest. <a href="#fnref:9" class="reversefootnote">&#8617;</a></p>
</li>
</ol>
</div>
2015-05-17T22:18:00-06:00http://origin.jrj.org/2015/05/06/apple-watch-review-and-early-thoughts/Apple Watch: Early Thoughts2015-05-06T21:21:00-06:00JRJhttp://jrj.org/Initial thoughts after my first week and a half with the Apple Watch…<!---
![Apple Watch: Early Thoughts](/assets/postheads/applewatch.png "Apple Watch")
-->
<p>My Apple Watch (a 42mm Stainless Steel with “Milanese Loop” band) arrived on launch day, so I’ve had a little time to play with it and see how it integrates into my workflow. An unusually large number of people asked me what I think of the device<sup id="fnref:1"><a href="#fn:1" class="footnote">1</a></sup>, and I’ve been responding that I needed more time to form an opinion. I needed more than a week, with a mixture of weekend and work days, and even a well-timed business trip to fully evaluate the device. Now that I’ve used it for a while, I wanted to share my initial thoughts on this enigmatic little wearable computer.</p>
<p>First, I’ll talk about the device itself— the physical hardware. Next, I will cover the main features of the watch in my perceived order of importance/impact:</p>
<ol>
<li>Timepiece</li>
<li>Notifications</li>
<li>Fitness Tracking</li>
<li>Communication</li>
<li>Apps</li>
</ol>
<p>Finally (after touching on a few odds and ends, and weighing in on a few common criticisms) I’ll provide a recommendation for whether or not you should buy one. <strong>The TLDR is that most users should probably wait for the second generation, but gadget freaks won’t regret the early adopter tax of buying the first generation.</strong> When people—especially strangers— ask me about the device on my wrist, I always answer with “I like it, but don’t love it— most people should wait until the second generation.”</p>
<p>Why’s that? Read on.</p>
<p><img src="/images/galleries/applewatch/watch-in-case.jpg" alt="The Apple Watch in all its glory" /></p>
<h1 id="the-device-itself">The Device Itself…</h1>
<!--- need to figure out a strategy for images.
[caption id="attachment_1543" align="alignright" width="150"]<a href="http://jrjblog.constellationofideas.com/wp-content/uploads/sites/9/2015/05/IMG_0490.jpg"><img class="size-thumbnail wp-image-1543" src="http://jrjblog.constellationofideas.com/wp-content/uploads/sites/9/2015/05/IMG_0490-150x150.jpg" alt="apple watch in box" width="150" height="150" /></a> The Apple Watch and the box it came in[/caption]
-->
<p>Watches are fashion, which is inherently subjective. Personally, <strong>I think the stainless steel watch is great looking, particularly with the Milanese Loop or Link Bracelet.</strong> I’m disappointed in all of the leather bands, except for the Modern Buckle, which is lovely but a bit feminine (and only available on the smaller 38mm watch.) The Milanese Loop (which, as I mentioned earlier, is the one I selected) is the best watch band I’ve ever owned. I love the way it looks, the way it feels, and the magnetic attachment. I’m extremely pleased with it. The Sport band (which I have on order, so I’ve only tried it on) is fine, but not exceptional in my view. I think the link bracelet is beautiful and uniquely modern, but a bit too expensive for my tastes.</p>
<!---
[gallery ids="1543,1544,1545,1546,1547,1548,1549,1550"]
-->
<div>
<ul class="clearing-thumbs small-block-grid-4" data-clearing="">
<li><a href="/images/galleries/applewatch/watch-in-case.jpg"><img data-caption="The Apple Watch and the box it came in" src="/images/galleries/applewatch/watch-in-case-thumb.jpg" /></a></li>
<li><a href="/images/galleries/applewatch/watch-on-wrist.jpg"><img data-caption="Apple Watch on my wrist, before pairing with the phone" src="/images/galleries/applewatch/watch-on-wrist-thumb.jpg" /></a></li>
<li><a href="/images/galleries/applewatch/watch-on-wrist2.jpg"><img data-caption="Close-up of the watch on my wrist with the screen off" src="/images/galleries/applewatch/watch-on-wrist2-thumb.jpg" /></a></li>
<li><a href="/images/galleries/applewatch/watch-on-wrist-podcast.jpg"><img data-caption="Close-up of the watch while listening to a podcast in Overcast" src="/images/galleries/applewatch/watch-on-wrist-podcast-thumb.jpg" /></a></li>
<li><a href="/images/galleries/applewatch/screen-podcasts.jpg"><img data-caption="The first version of Overcast, which has since been replaced with a much better version" src="/images/galleries/applewatch/screen-podcasts-thumb.jpg" /></a></li>
<li><a href="/images/galleries/applewatch/screen-tweets.jpg"><img data-caption="The Twitteriffic app is much better than Twitter's first party app" src="/images/galleries/applewatch/screen-tweets-thumb.jpg" /></a></li>
<li><a href="/images/galleries/applewatch/screen-activity.jpg"><img data-caption="Concentric circles of the activity glance" src="/images/galleries/applewatch/screen-activity-thumb.jpg" /></a></li>
<li><a href="/images/galleries/applewatch/screen-mickey.jpg"><img data-caption="Mickey is here, tapping his feet in unison with every other Apple Watch on the planet" src="/images/galleries/applewatch/screen-mickey-thumb.jpg" /></a></li>
<li><a href="/images/galleries/applewatch/screen-modular.jpg"><img data-caption="The Modular face is my daily driver during the work-day." src="/images/galleries/applewatch/screen-modular-thumb.jpg" /></a></li>
<li><a href="/images/galleries/applewatch/sport-clasp.jpg"><img data-caption="The sport band is comfortable and durable/weather-resistant." src="/images/galleries/applewatch/sport-clasp-thumb.jpg" /></a></li>
<li><a href="/images/galleries/applewatch/watch-stand.jpg"><img data-caption="The third party stand I bought is kind of a piece of crap. Skip it." src="/images/galleries/applewatch/watch-stand-thumb.jpg" /></a></li>
</ul>
</div>
<p>I strongly prefer the polished stainless steel “Apple Watch” to the satin finish on the aluminum “Apple Watch Sport.” <strong>When paired with the right band, the steel watch can feel either dressy or casual, unlike the Sport, which can’t be effectively dressed up.</strong> (Pairing the Sport with the Milanese loop or link bracelet looks bad in my opinion, due to the mismatched metals and the satin finish on the aluminum watch.) However, opinions on this will vary widely— again, fashion is highly subjective.</p>
<p>What isn’t subjective, however, is fit-and-finish, and nobody can argue with the fact that <strong>this device is beautifully made.</strong> The manufacturing tolerances seem impossible, likely due in part to Apple’s innovation in machine vision based part matching, rather than traditional manufacturing techniques focused on simply rejecting parts beyond certain tolerances. This methodology both increases yields AND improves fit and finish. I would be interested in spending a bit more time with the aluminum Sport watch, but my overall sense from a brief try-on is that it feels less substantial and jewelry-like, but is built to the same exacting standards.</p>
<p><strong>The screen is incredibly good</strong>— I’ve used OLED displays before (for example, on my Moto X) and love the inky blacks the technology enables. Under most angles and lighting conditions, you can’t see where the screen ends, which would not be the case with an LCD’s backlight. Under direct sunlight the screen is washed out but readable; under any other conditions (including partially shaded outdoor light) it’s bright, perfectly saturated, and all-around gorgeous. Black pixels consume no energy on an OLED display, and Apple has obviously taken that into account in designing the user experience, which is very black-heavy rather than following the white/pastel tendencies of iOS on phones and tablets. I’m not convinced that OLED makes sense on larger displays yet at current manufacturing costs, but on a small device like this it’s wonderful, and this is a particularly good OLED display.</p>
<p>The “Digital Crown” is getting rave reviews from press and users, but personally I’m not as impressed. It’s fine, but I feel like the reviewers lavishing hyperbolic praise on its lubriciousness are overstating things a bit. It works, and it works well. I can’t really find fault with it, but <strong>it’s a tiny knob. It turns. Get over it.</strong> That said, I do think it’s better for a lot of scenarios than using the touchscreen, since your finger obscures easily ⅓ of the screen when scrolling on the multitouch display. <strong>I think Apple did the right thing including a physical control but enabling touch-based controls to still work.</strong></p>
<h1 id="timepiece">Timepiece</h1>
<p>It goes without saying, but yes, the Apple Watch is, indeed, a watch. You can tell time with it. However, there’s quite a bit here worth discussing.</p>
<p>It’s a historical irony to me that people started out with pocket watches, and then suddenly realized that <em>“Hey, we could just strap this thing to our wrist, and we wouldn’t need to pull a device out of our pockets to know what time it is!”</em> Pocket watches fell into disuse almost immediately, as the wristwatch was obviously so much more convenient. Fast-forward a hundred years or so, and wristwatches have fallen into disuse because everyone has a smartphone in their pocket… it took Pebble (and later Samsung, then Google, and now Apple) to remind us that <em>“Hey, we could just strap this thing on your wrist, and you wouldn’t need to pull a device out of your pocket to know what time it is!”</em> As they say, history doesn’t repeat itself, but it often rhymes.</p>
<p>If your primary use for a smartwatch is to tell time, the Apple Watch (and most other smart watches, including Android Wear and Samsung devices) probably isn’t the right device for you. A Pebble has an always-on e-ink display like a Kindle’s, and the battery life is almost Kindle-like as well. <strong>If you want a smartwatch 95% to tell time, and 5% for notifications and other stuff, the Pebble is probably a better choice for you.</strong> However, the Apple Watch (like its Android and Samsung smartwatch brethren) isn’t primarily about telling time. It’s mostly about the other stuff— notifications, fitness, communication, and apps— but that doesn’t let it off the hook as a timepiece.</p>
<p>And the Apple Watch is a good (but not exceptional) timepiece. It’s extremely accurate (to within 50 milliseconds, which is about as accurate as a non-atomic clock can be.) However, the display, unlike the aforementioned Pebble (or a traditional watch) is not always on (a concession to battery life.) That means it has to activate when you want to know what time it is. It tries to do this automatically, and it mostly succeeds. When I lift my wrist to look at the watch, it almost always turns on and displays the watch face. Almost. However, when I am in a position where I can already see the watch face, I need to explicitly activate it— either by needlessly (and conspicuously) lifting it to my face, or by tapping the screen. That’s unfortunate. <strong>It’s still dramatically less obtrusive than pulling my phone out of my pocket, but it’s not as convenient as a wristwatch that always displays the time… you know, like a watch.</strong> Exacerbating this is a brief, almost imperceptible, delay. I’m guessing it’s around 100-200 milliseconds, but that’s enough to be slightly off-putting to someone used to wearing a watch. If you don’t already wear a watch, you won’t notice it at all; it will seem instantaneous. However, if you are used to wearing a traditional watch, the delay is annoying. On a traditional wristwatch, you are subconsciously starting to read the analog face before the watch is fully in view, and your brain will notice the difference at a subconscious level. It’s not a big deal, but it takes some time to get used to it.</p>
<p>The watch faces themselves are great, and most can be customized with “complications” that add little bits of data (like battery level, date, next appointment, etc.) However, Apple doesn’t allow third party watch faces, an omission I hope they correct (<a href="http://daringfireball.net/2015/04/custom_watch_faces">but I am not confident they will</a>.) More likely to come (and come sooner) is the ability to create third party complications, which would solve more than 80% of the problem for me.</p>
<!---
[caption id="attachment_1550" align="alignright" width="150"]<a href="http://blog.jrj.org/2015/05/06/apple-watch-review-and-early-thoughts/img_0530/" rel="attachment wp-att-1550"><img class="size-thumbnail wp-image-1550" src="http://jrjblog.constellationofideas.com/wp-content/uploads/sites/9/2015/05/IMG_0530-150x150.jpg" alt="Modular watch face" width="150" height="150" /></a> The "Modular" face is my daily driver during the work day.[/caption]
-->
<p><img src="/images/galleries/applewatch/screen-modular.jpg" align="right" style="margin-left: 15px; margin-bottom: 10px; width: 150px;" alt="Modular Face Screen Shot" title="The Modular face is my daily driver during the work-day" /></p>
<p>The usage pattern that has evolved for me is fairly simple: I use the extremely practical but less attractive “modular” watch face during the workday. I’ve configured it to show my next appointment, the current temperature, current Pacific time zone (where most of my colleagues are located) and the current Dow Jones Industrial Average. When 5:00 rolls around, and on weekends, I use different faces, with a particular affinity for the astronomy and chronograph faces.</p>
<h1 id="notifications">Notifications</h1>
<p>My primary use for the Apple Watch is notifications, and I view it as the most important aspect of the experience. Fortunately, this is an area that Apple has absolutely nailed, but with too much onus on the user to carefully hand-tweak settings. Users who accept the defaults will likely not be happy with the watch, and that’s a huge mistake on Apple’s part. “Tyranny of the Default” is definitely a thing, and most users will never change the settings. That’s too bad, because <strong>when properly configured it’s a huge improvement over a phone, but if I had to live with the default I probably wouldn’t want to wear one.</strong></p>
<p>The notification settings on my iPhone were already pared down to the bare essentials. I don’t get notifications for every email, and only a small handful of apps have permission to bubble up a notification. As a result, I was already pre-configured for near-optimal watch usage. However, on the watch I pared it down even further— I won’t go into the specifics (which are only useful for my exact needs) but I will describe the principles I use in selecting notification settings:</p>
<ol>
<li>
<p>No notification of any kind for email, except from a small handful of VIPs.</p>
</li>
<li>
<p>Text messages notify on the watch and phone.</p>
</li>
<li>
<p>Informational alerts (news, stocks, etc.): I only allow things that I would actually want to be interrupted for, which is almost nothing. However, the few that survive that funnel go to the watch.</p>
</li>
<li>
<p>Actionable alerts (things that I need to act on) generally only go to the watch if I can do something about them on the watch. For example, a text message I can reply to. I’m willing to let apps with alert actions enabled flow to the watch. However, any alert that would require me to pull out my phone to act on doesn’t go to the watch. There’s no point— the phone will alert me, and since the alert isn’t on my watch I know that it’s something I need the phone for.</p>
</li>
</ol>
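<p>Those principles amount to a simple routing function. Here’s a toy sketch of the decision logic: the function and parameter names are entirely my own invention for illustration, not any real Apple API.</p>

```python
# Toy sketch of my notification-routing principles. All names here are
# hypothetical/illustrative -- this is not a real Apple API.
def route(kind, sender_is_vip=False, interrupt_worthy=False, actionable_on_watch=False):
    """Return the set of devices that should alert for a notification."""
    if kind == "email":
        # Rule 1: no email alerts at all, except from a handful of VIPs
        return {"watch", "phone"} if sender_is_vip else set()
    if kind == "text":
        # Rule 2: text messages notify on both watch and phone
        return {"watch", "phone"}
    if kind == "informational":
        # Rule 3: only interruption-worthy items survive, and they go to the watch
        return {"watch"} if interrupt_worthy else set()
    if kind == "actionable":
        # Rule 4: watch only if I can act on it there; otherwise phone only
        return {"watch", "phone"} if actionable_on_watch else {"phone"}
    return set()

print(route("email"))  # set(): regular email alerts go nowhere
print(route("actionable", actionable_on_watch=True))
```

The useful side effect of rule 4 is the implicit signal it creates: if the watch taps you, you can handle it from the wrist; if only the phone buzzes, you know the phone is required.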
<p>The way I have things configured, I get a small trickle of alerts on my watch, and they tend to be important and I’m happy to receive them there. There are a few notifications that go to my phone but not my watch, and I quickly adapted to understand that and subconsciously categorize them. I will only pull out my phone when it’s appropriate to do so, and as a consequence, I’m finding I pull my phone out dramatically less than I used to, and when I do I tend to have a smaller number of notifications queued up that I need to deal with. I’m genuinely more present, and have less anxiety about what’s lurking behind the latest vibration from my pocket. Bottom line, <strong>if your watch taps you on the wrist for every email and every time someone “likes” your latest Facebook post, you’re doing it wrong.</strong></p>
<p>The way the notifications themselves work on the watch is nearly perfect. I get a subtle but noticeable tap on the wrist (<strong>definitely more of a tap than a buzz</strong>— the difference between the watch’s linear actuator and the phone’s vibration motor is significant) but the screen doesn’t light up. If I raise my arm to look at the watch the notification is displayed full screen (though the time is still visible in the corner.) A second or so later (assuming I’m still interested enough to hold my wrist up) more information is displayed. If I wish to interact with the notification (reply to a text, etc.) I can do so, but if I simply lower my arm the notification is dismissed on the watch AND on my phone. No double notifications. In practice, this works extremely well, and is intuitive. It works the way I expect it to.</p>
<p>There are some neat nuances. For example, if I forgot to silence my phone and get an unwanted notification, I simply place my palm over the watch, and both the watch and phone will be put on silent mode. Brilliant. (It’s worth noting that I have alert sounds all set to zero on the watch— the haptic feedback is sufficient, and I prefer to be silently notified. I couldn’t do that on my phone because I might not always feel it, but with the watch it’s touching your skin.)</p>
<p>Notifications are at once the best reason to buy an Apple Watch AND the thing that, if you don’t go out of your way to configure them properly, will drive you crazy and make you regret the purchase. Pared down to only essential notifications, though, the watch makes your interaction with the notifications easier and less intrusive. I think Apple made a mistake by having notifications on full-blast out of the box. The default should have been iMessage and VIP emails only, with everything else off by default. By going opt-out with notifications instead of opt-in, Apple has ensured that the watch experience for most users will be one of frustration. Surprising, given how well thought out the rest of the notification experience and implementation is.</p>
<h1 id="fitness-tracking">Fitness Tracking</h1>
<p>I have owned multiple Jawbone Up devices, and loved them. However, they were incredibly unreliable— I had 3 warranty replacements, and bought 2 more out of warranty. The functionality was sufficiently compelling (and the purchase price sufficiently affordable) that I was willing to put up with the constant device failures in order to keep tracking my activity.</p>
<p>The Apple Watch is a much better fitness tracker than the Jawbone in almost every way, but with a couple of (big) deficiencies. First, let’s talk about the good parts.</p>
<p><img src="/images/galleries/applewatch/screen-activity.jpg" align="right" style="margin-left: 15px; margin-bottom: 10px; width: 150px;" alt="Activity App Screen Shot" title="Concentric circles of the activity glance" /></p>
<p>Throughout the day, the watch uses its sensors (and, if available, the sensors on your phone) to track your movement. The basic interface for fitness tracking is the “Activity” glance. It shows 3 concentric circles. The outermost ring represents movement, and you set goals for how much. Think of this as a basic pedometer, and setting goals for steps/distance each day. The middle ring represents exercise, and the watch will try to persuade you to get 30 minutes of real exercise per day. (Good luck with that, Apple. Geeks are a sedentary bunch.) It goes beyond a simple pedometer, and includes your heart rate as part of the algorithm. <strong>Turns out the Apple Watch is among the most accurate heart rate monitors available to consumers</strong>— <a href="http://www.consumerreports.org/cro/news/2015/04/the-science-behind-smartwatch-scratch-resistance/index.htm">Consumer Reports found it matched their top-rated chest strap monitor</a>. I was surprised by this. The innermost circle shows how often you get off your butt and move around, with a default goal of standing up at least once per hour and moving around a bit. The interface is intuitive and attractive, and has just the right amount of gamification. You can score achievements for hitting your goals, and you’d be amazed how much a ring that is 90% full will bug the OCD side of you into completing it.</p>
<p>The other interface is the Workouts app, which allows you to monitor various workouts. It’s really only useful for cardio, and doesn’t work well for non-cardio workouts like yoga or weightlifting. Fortunately for me, I’m not into weightlifting. (Those things are heavy!) You select the type of workout and set a goal (in the form of time, distance, calories burned, etc.) The watch will track your movement, and give you quick glanceable status information. The information is available to you on your phone later.</p>
<p><strong>This is all extremely well implemented, and is the part of the watch that feels the least like a 1.0 product.</strong> Apple clearly learned from the other fitness bands on the market, and completely outclasses all of them except the excellent Microsoft Band, which is a dedicated fitness band that’s about ⅔ the price of the Apple Watch. If fitness is your primary use, you might consider the Band instead. However, the Apple Watch does a great job, and is perfect for me, since I don’t want to wear multiple devices.</p>
<p>I mentioned weaknesses relative to other fitness bands. First and foremost is battery life. The Apple Watch needs to be charged every day, just like your phone. (It does not, in my experience, run out of juice before I do, though— I’ve been above 25% battery when I went to bed every night without exception.) My Jawbone Up lasted a week or two on a charge. That’s a HUGE difference. However, the functionality of the Apple Watch relative to my Jawbone (or to a similar Fitbit or Nike band) is so much better that I would be willing to accept the nightly charging if it weren’t for the second deficiency: sleep tracking. <strong>Because the watch needs to be charged every night, you can’t use it to track your sleep like a Jawbone Up or Fitbit.</strong> This is a real bummer, and I don’t see how Apple can resolve it in the next few years. I’m going to need to find another solution for sleep tracking. If this is important to you, you might consider a Jawbone Up (which I think is slightly better for sleep tracking than the Fitbit.)</p>
<p>Other than the lack of sleep tracking functionality, I’m extremely happy with the Apple Watch as a fitness tracker, and am impressed by how well they implemented this first version. It’s only going to get better as they add more sensors in future versions (and, perhaps, activate the <a href="https://www.ifixit.com/Teardown/Apple+Watch+Teardown/40655">dormant blood oxygen sensor in the current watch</a>!)</p>
<h1 id="communication">Communication</h1>
<p>Apple obviously thought communication was a key feature of the Apple Watch— in a notoriously anti-button company, they have allocated an entire button to the “friends” UI. Personally, I think this button is wasted.<sup id="fnref:2"><a href="#fn:2" class="footnote">2</a></sup> The friends UI is well-designed, but doesn’t get used. You’re not going to initiate communication on the watch (except with Siri); you’ll be replying. I never go to the friends UI, so the only time I press the “side button” is to trigger Apple Pay.</p>
<p>There are four ways you can use the watch to communicate:</p>
<ul>
<li>iMessage and SMS/Text</li>
<li>“Digital Touch”</li>
<li>Telephone calls</li>
<li>Limited email support</li>
</ul>
<p>Of the four, the <strong>iMessage/Text functionality is the only one that people will actually use a week after buying the watch.</strong> The digital touch stuff is cute, and (again) demos well… but drawing on a 1” screen is laughable. Sending your heartbeat is only applicable with your spouse, and even then only as a novelty. The tapping I could see being mildly useful, but not in its current implementation. Bottom line, ignore the digital touch functionality.</p>
<p>Even sillier is the ability to make and receive phone calls, <strong>basically using your watch as the world’s most expensive and least useful Bluetooth headset</strong>. Don’t do this, seriously. It’s dumb. I did it once to try it out, and suspect it will never happen again (with a possible exception of answering the phone literally to say “I don’t have my phone on me, I’ll call you back in 2 minutes.”)</p>
<p>However, messaging is very well implemented, and is something I find myself using constantly. The canned messages that Apple provides with a single touch are eerily well-chosen based on context. (When someone asks a question, yes/no are at the top. When someone asks an either/or question, the two options are usually your first two selections. When I selected a friend to send them a text message on their birthday, the watch had “Happy Birthday!” as the first selection. Seriously, it’s really good.) You can dictate text if none of the canned responses fit, and it works extremely well. For a reason I can’t explain, speech recognition on the watch has been noticeably better than on the phone. It almost never makes a mistake. It’s so good that on the third day I disabled the feature that sends a voice recording instead of dictated text— the dictation was accurate enough that I didn’t want the extra step of confirming each message before sending.</p>
<p>Finally, there are the emoticons. You have a choice between a surprisingly silly 3D animated smiley face that can relay a wide range of equally annoying expressions, a 3D heart that’s actually kind of fun, but probably only makes sense with one other person in your life, and a mildly creepy but surprisingly useful 3D white gloved hand that can perform a bunch of gestures. I find these animated emoticons neat but gaudy, a rare example of poor taste from Cupertino. I think this is the feature that we’ll look back on with the most disdain in a few years. <strong>Fortunately, you can also select and use normal emoji</strong>, which are reasonably easy to use, and recently used emoji are easily accessible.</p>
<p>Ultimately, quick text messages and responses are easy to do on the watch, and I find the functionality useful and well executed. For longer, more involved messages you can and should pull out your phone. However, in my experience that’s the small minority of messages I’m sending.</p>
<p>The watch also receives email, and you can mark items as read or flagged, or delete them from the watch. You can’t reply— for that you need your phone (but it’s easy to do through handoff.) I find the email functionality of limited usefulness, though it’s nice to be able to quickly glance at an email and know whether it’s worth pulling out my phone. This is not the device you’re going to use for triage, let alone processing and replying.</p>
<h1 id="apps">Apps</h1>
<p>There are three types of Apple Watch Apps:</p>
<ul>
<li><strong>Watch Kit apps</strong>, which execute primarily on the phone, but project user interface to the watch. All third party apps are currently this type.</li>
<li><strong>Native Apps</strong>, which execute primarily on the watch, but can leverage phone resources and sensors (like, for example, internet connectivity or GPS) when needed. Only the built-in apps that ship on the watch are this type, and only Apple can currently build them. (It’s widely anticipated that Apple will release an SDK enabling third parties to build native apps at WWDC this summer.)</li>
<li><strong>Glances</strong> are like the today screen extensions on your iPhone: quick glancable bits of data that are displayed on demand.</li>
</ul>
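<p>To make the Watch Kit model concrete, here is a minimal sketch of what a third-party interface controller looks like. The class and outlet names are hypothetical, and the code is written in modern Swift rather than the Swift 1.2 of the launch-era SDK— treat it as an illustration of the architecture, not the exact API of the day. The key point is that this code runs in an extension on the phone; only the rendered interface is shipped to the watch.</p>

```swift
import WatchKit
import Foundation

// Hypothetical example of a Watch Kit interface controller. In watchOS 1,
// this class executes in an app extension on the iPhone; WatchKit
// serializes each UI change and sends it to the watch for display.
class GlanceableStatusController: WKInterfaceController {

    // Outlet wired up in the watch storyboard (name is illustrative).
    @IBOutlet weak var statusLabel: WKInterfaceLabel!

    override func awake(withContext context: Any?) {
        super.awake(withContext: context)
        // Each setter call below is a round-trip to the watch, so
        // batching updates (and avoiding large images) keeps it responsive.
        setTitle("Status")
        statusLabel.setText("Synced from the phone")
    }
}
```

<p>This execution model is what produces the latency I describe below: every text or image update is marshalled over Bluetooth, which is why the better-behaved Watch Kit apps minimize round-trips and image payloads.</p>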
<p>The native apps work well. Everything from messaging to calendaring to fitness tracking is fast and seamless. Apple’s built-in apps are high quality, and show what the first generation hardware can really do. They still rely heavily on the phone (without it, there’s no internet connectivity, for example) but the user doesn’t notice. They “just work” and truly show what the device is capable of.</p>
<p><img src="/images/galleries/applewatch/screen-tweets.jpg" align="right" style="margin-left: 15px; margin-bottom: 10px; width: 150px;" alt="Twitteriffic Screen Shot" title="The Twitteriffic app is much better than Twitter's first-party app" /></p>
<p>Watch Kit apps, on the other hand, are limited in functionality and nearly unusably slow. First execution is often measured in tens of seconds, and subsequent execution of most apps isn’t much faster. You’ll use the apps you download a time or two and then forget they exist, because it’s almost always quicker and easier to simply pull out your phone. It really is that bad. There are two Watch Kit apps that are sufficiently useful that I’m willing to actually use them on a semi-regular basis: Day One and OmniFocus. Both have been smart about UI design optimized for the watch, and about minimizing data transfer. (No massive PNG files flying back and forth.) However, even these are really slow. For every other app, I simply pull out my phone. These apps will improve, with larger touch targets and fewer, smaller image assets… but improvement will occur within the constraints and limitations of the platform. There’s only so good they can get without moving to native apps.</p>
<p>Glances actually work reasonably well, even in their current form. They don’t update in the background, which is unfortunate, but they execute quickly and are quite usable. You will want to pare your glances down to a manageable number (I have 6, and am trying to identify one or two more to remove) but once you do, it’s a great user experience, and one that would be made nearly perfect if glances updated in the background every 15-30 minutes.</p>
<p>The app springboard interface (the amorphous blob of bubble icons, as I call it) demos well, and is pretty… but come on, this is silly. <strong>Nobody’s fingers are small enough to accurately touch those little icons</strong>, and trying to find the icon you want is difficult at best. You can configure the arrangement of icons on the watch or in the partner app on the phone… in so doing, Apple has created the world’s first and second most frustrating puzzle game on the watch and phone respectively. Seriously, this is form over function, and no matter how cool it looks, it simply doesn’t work in practice.</p>
<p>However, you won’t use it. You’ll find that launching apps with Siri works perfectly every time. Personally, there are only a couple apps I use on a regular basis (though I suspect there will be more when native app support comes) and telling Siri “Launch OmniFocus” is quick and easy.</p>
<h1 id="odds-and-ends">Odds and Ends</h1>
<p>There are a few features or characteristics of the device that didn’t fit neatly into the major areas, but I thought were worth discussing. I’ve provided information here in no particular order:</p>
<h2 id="apple-pay">Apple Pay</h2>
<p>You can use the Apple Watch for contactless payments (whether or not they are branded Apple Pay. Apple wasn’t the first one to do NFC-based mobile payments, and their implementation is fully compatible with most earlier systems.) I use Apple Pay on the phone all the time, and wish more businesses supported it. Using a watch instead of my phone is slightly more convenient, but there’s only a material difference in one situation: when I drive, my phone is usually in a dock plugged into the car. Hence, when I stop for gas it’s a bit of a pain to pull the phone out of its dock and plug it back in when done. In this case, the watch is significantly more convenient than using Apple Pay on the phone. In all other scenarios, pulling my phone out of my pocket is not a big deal. Apple Pay (and other NFC-based mobile payment solutions) is great, though, and it’s a welcome addition to the watch. It’s much more secure than using a card, and will reduce likelihood of fraud.</p>
<h2 id="authentication">Authentication</h2>
<p>If you use Apple Pay, you must have a PIN configured on your watch. (Indeed, I strongly recommend setting up a PIN regardless of whether or not you plan to use Apple Pay.) This sounds like a huge pain in the butt, and it would be… except Apple has implemented authentication extremely well on the watch.</p>
<p>You only need to authenticate once, when you first put the watch on. It stays “logged in” as long as you are wearing it— only when you take it off at the end of the day (i.e. the watch loses contact with your skin) does it force a login. Even better, you don’t need to type the PIN, since doing a TouchID authentication on your phone while wearing the watch logs you in. I’ve only typed a PIN once the entire time I’ve been wearing the watch— in the morning, I put the watch on and unlock my phone (with TouchID.) The watch stays logged in until I take it off at the end of the day. Perfect. Indeed, it’s so good that I chose a longer passcode than the default four digits.</p>
<h2 id="battery-life">Battery Life</h2>
<p>I was worried about battery life, but I didn’t need to be. The watch works reliably for a full day, and has, without exception, had at least 25% remaining on the battery when I set it in its charger before going to bed. I suspect the difference between me and people who are closer to 10% at the end of the day is that I have my notifications pruned. I use the watch as much as I want to and never worry about battery life (beyond, of course, needing to charge it each night, when I am, presumably, otherwise engaged.)</p>
<p>The inductive charger is great— I love it. I have one on my nightstand and one in my bag for travel. It’s well executed and works well. People are reporting that it’s semi-compatible with Qi chargers, but I wouldn’t depend on that. I plan to buy a dock at some point.</p>
<h2 id="pairing-with-the-iphone">Pairing with the iPhone</h2>
<p>This was a painless process. You turn on the watch, it tells you to open the watch app on your phone. Then you point your camera at the watch and it instantly recognizes the watch and pairs. It’s the best pairing experience I’ve seen. Apple could have made it 1% easier by using NFC instead of their proprietary-but-damn-attractive QR code, but that would have meant compatibility only with the iPhone 6 and above. However, the pairing was so easy and seamless that I think they made the right call.</p>
<h2 id="hey-siri">Hey, Siri!</h2>
<p>Siri is here on the watch, and seems to work better than on the phone. I’m not sure why speech recognition seems so improved— I don’t know if the engine is different (unlikely, since it’s a cloud service) or because the microphone is somehow better or naturally positioned more optimally. Regardless of the reason, dictation is more accurate for me, and Siri understands my commands pretty much without fail.</p>
<p>Also, Siri is way more convenient on the watch than the phone. Instead of pulling the phone out of my pocket and holding down the home button for a second, I simply raise my wrist to look at the watch and say “Hey Siri, add take over world to my OmniFocus list.” It just works, and it becomes second-nature.</p>
<p>The combination of better recognition (which I’m convinced is not a placebo effect) and greater convenience means I use Siri dramatically more than I used to. This was the unexpected win of the watch.</p>
<p>There are a few limitations— every once in a while, Siri will inform me that it can’t perform a given operation on the watch, and will instruct me to pull out my phone for the result. I’m sure this will improve with time, but the command is usually still understood, it’s just that Siri doesn’t yet know how to format the result for the tiny watch display. Fortunately, “hand-off” works great, and when I pull out my phone I don’t need to repeat the command, the information is ready for me on the screen.</p>
<h2 id="passbook">Passbook</h2>
<p>I wish more companies supported Passbook, but the ones that matter most do: airlines. Having Passbook on the watch is neat, but not necessary— it’s well implemented on the phone such that you don’t need to log in to display a boarding pass at the airport, so the difference between pulling out your phone and using your watch is negligible. I think supporting it on the watch was the right call, but it’s not a reason to buy the watch. I did go through two airports using the watch as my boarding pass, and it worked fine both times.</p>
<h2 id="find-my-phone">Find My Phone</h2>
<p>I’m dumb, and I frequently misplace things. Being able to press a button on my watch to make my phone emit loud noises (even when in silent mode) is extremely useful if you are as dumb as me. I’m embarrassed to admit I have used this 3 times in the short time I’ve had the watch, and it’s a lot easier than asking my wife to call me (which doesn’t work if the phone is on silent— then I’d have to dig up my iPad or use my computer.) It’s a small feature, but a useful one.</p>
<h2 id="switching-bands">Switching Bands</h2>
<p>The mechanism Apple designed for attaching the watch to bands is the best I’ve seen on a watch. It’s easy to use yet feels secure, and it’s a real pleasure to take one band off and put another on. Apple has announced a program for officially sanctioned third-party bands, and I’m sure there will be a ton of unsanctioned ones as well. The mechanism is great, and I’m confident it will last for several generations of watch. I’ll be shocked if the Apple Watch 3 years from now isn’t compatible with gen-1 bands, and frankly I suspect it will last longer than that. I think it will be more like the lifespan of the 30-pin connector (which Apple replaced with Lightning after 10+ years) than something that is replaced every other generation like iPhone cases. I might be wrong, but I’m putting the over/under at 5 years. Regardless, the mechanism is well designed, easy to switch, and confidence-inspiring.</p>
<h2 id="companion-iphone-app">Companion iPhone App</h2>
<p>Most of the more fine-grained configuration takes place in the companion app on the iPhone. However, it’s not always obvious which settings you change on the watch vs. which settings you change on the phone. (Hint: a good way to think about it is that you can change almost everything from the phone, but a few key settings are duplicated on the watch for convenience.) The iPhone app is generally well designed, but it shows one of the key weaknesses of the watch: Apple’s probably trying to do too much with the first generation. This results in a complex app with a thousand things to configure, toggle, and tweak. I don’t really see how they could have managed this level of complexity much better, but perhaps they could have shipped fewer features in this first version and let their users tell them what to add over time.</p>
<h2 id="bluetooth-headphone-support">Bluetooth Headphone Support</h2>
<p>You can sync a playlist with a couple of gigabytes of music to your watch and listen to it with Bluetooth headphones, even if you leave your phone at home. I’m sure this will be useful to some users (runners come to mind) but I always have my phone with me. I can confirm it works (I have a pair of BT headphones, which paired easily) but I don’t think I’ll wind up using this feature. Instead, I prefer to listen to music, podcasts, and audiobooks from my phone.</p>
<h2 id="watch-as-remote">Watch as Remote</h2>
<p>You can use the watch as a remote control for your phone’s audio playback, which is surprisingly useful. I use it frequently to pause a podcast (or skip a boring portion) on my daily walks. It works perfectly, and the UI is always available as a glance. I never pull my phone out on a walk anymore except if I want to take a picture.</p>
<p>You can also use the watch as a remote for the Apple TV, but this feels like a novelty to me rather than a useful feature. Maybe if I didn’t have a universal remote in the living room I’d feel differently about it. However, I have used it a few times to control the Hue lights in my living room or to turn off the porch light when my phone was in a different room. Not a huge deal, but useful on occasion.</p>
<h2 id="handoff-is-incredibly-useful">Handoff is incredibly useful</h2>
<p>Apple introduced Handoff as an iOS 8/Mac OS X Yosemite feature, but it really shines with the watch. Starting an interaction on the watch and switching to the phone is easy and natural. <a href="http://www.imore.com/true-magic-apple-watch-what-it-cant-do">Serenity Caldwell has a great post on the subject of handoff</a> (and what the watch doesn’t do.)</p>
<h2 id="force-tapp-everything">Force Tap Everything</h2>
<p>When Windows 95 first shipped, I always recommended that users “right-click everything” as the best way to learn software on that platform. The same applies here: force-tap EVERYTHING. You’ll find lots of features that are otherwise undiscoverable.</p>
<h1 id="common-criticisms">Common Criticisms</h1>
<p>In reading other people’s assessments of the Apple Watch (especially those of people who have never actually used one) a few common criticisms are continually raised. I wanted to weigh in on a few of them.</p>
<h2 id="the-apple-watch-is-too-expensive">The Apple Watch is too expensive!</h2>
<p>Yep, it really is, for most users. This is a luxury device, and if you aren’t comfortable spending $350-1000 on a device you don’t actually need (this is a want, not a need) then it’s not a reasonable purchase. This is a disposable income purchase. If you have to go into credit card debt to buy it, you can’t afford it. Personally, I don’t have a problem with that. I don’t think $350 is wildly expensive for a device of this calibre, but I recognize it’s out of reach for a lot of people. It’s ~40% more expensive than most of its competition, which is often the case for Apple products. It’s extremely high quality, and it’s beautifully designed and manufactured. For some, that’s worth the price of admission. However, that fact doesn’t negate the criticism that it’s expensive.</p>
<h2 id="the-battery-life-is-poor">The battery life is poor!</h2>
<p>I wish the Apple Watch had multi-day battery life, as this would open up new use cases (like, for example, sleep tracking.) However, unless you can get a wearable device up to 7+ days of battery life like a Jawbone Up or Pebble, the difference is largely academic. Ultimately, you either have to charge a device at the end of the day or you don’t, and a device with 36-48 hour battery life still needs to be charged at the end of the day. I’ve yet to have a situation where the watch’s battery was below 25% when I went to bed, and it’s usually closer to 50%, so I don’t see this as a real criticism unless you want multi-day scenarios like sleep tracking (which I do, but I recognize that nothing with a color screen is going to deliver that in the real world.)</p>
<h2 id="it-only-works-with-an-iphone">It only works with an iPhone!</h2>
<p>This seems like a real red herring to me. I’ll split this into two pieces: only works with a phone, and only works with an iPhone. Yes, the Apple Watch must be paired with a phone, and most of its functionality is only available if it’s within range of a phone. To me, this is not a real deficiency. It’s like saying that your display is useless without your computer. Of course it is. I don’t want another device to manage and sync, I want an external display for the device I already carry. I’m not going to stop carrying my phone, so what would be the point of making this device stand on its own? Yes, there are a few scenarios where leaving your phone at home is useful, like going for a run… but the watch supports these already. You can listen to music and track a workout while untethered. People complaining that the device doesn’t stand on its own without the phone don’t understand the device. As for only supporting iOS devices, well, duh. Android Wear requires Android. The Apple Watch requires iOS. It makes sense, not only from a business perspective, but from a quality-and-completeness-of-experience perspective, to support a single platform. (I suspect Pebble will redouble efforts on Android support, and gradually reduce emphasis on iOS.) If you don’t have an iPhone this device isn’t for you, but I don’t see that as a valid criticism— it’s not designed for you.</p>
<h2 id="its-too-bigbulky">It’s too big/bulky!</h2>
<p>Most people saying this are drawing their opinions from photographs and videos of the watch, not impressions drawn in the real world. The watch is surprisingly small, and looks smaller on the wrist than it does in pictures. I tried on a 38mm, and it felt comically small on my wrist; the 42mm is just right for me. It’s thin enough to fall comfortably under my sleeve when I wear a tailored button-down shirt. (Meaning that when I hold my arm straight down, the cuff falls naturally over the watch on its own via gravity without needing to be coerced.) It’s smaller than most of the watches I’ve owned over the years. Are there smaller watches? Sure… but I would suggest that the 42mm Apple Watch is on the small-to-middle range of men’s watches, though I concede that the 38mm is probably on the large side of women’s wristwatches.</p>
<h2 id="its-too-complicated">It’s too complicated!</h2>
<p>This is the only criticism that I agree with absent any caveats. I think Apple took on too much in the first generation, and could have used a year of refinement before adding so much functionality. This is not yet a device for a casual user; it’s for geeks and power users who are willing (even eager) to change their way of doing things to accommodate a new device, and aren’t afraid of spending a bit of time tweaking esoteric settings to get something dialed in just right. In a few years, this device will likely be better suited to a casual user, but it’s far too complicated today.</p>
<h1 id="should-you-buy-one">Should you buy one?</h1>
<p>If you’re even asking the question then my answer is simple: you should probably wait. This is definitely a 1.0 product, and I suspect the second generation device will solve a lot of the issues next year. It’s too expensive for a typical end user to buy a device that will be obsolete so quickly. A good analogy is the iPad: the first generation iPad was replaced the next year with the iPad 2, which was better in every way. OS support for the first generation iPad ended quickly, but the 2 is supported to this day. I suspect the watch will be similar, in that early adopters will pay a disproportionate obsolescence tax. If you’re a crazy early adopter like me, you aren’t asking whether or not to buy— you already ordered one. You won’t regret it, it’s a neat device. It’s well made, and has enough well-thought-out functionality to offset its 1.0 limitations. However, typical end users will want to sit the first generation out.</p>
<p>The Apple Watch (obviously) only works with the iPhone. If you’re an Android user, the Apple Watch is absolutely NOT worth switching for. If you’re interested in a smartwatch, you should look at the Moto 360, which is a really compelling device. I like the Apple Watch better, but not enough to warrant a platform switch.</p>
<p>If you decide you need to buy one (and really, you shouldn’t) I think most users will be fine with the Sport, but I can totally understand wanting to spend the extra cash on the stainless steel model, especially if you find yourself in a suit or business-casual attire on a regular basis. I’m extremely confident that the bands will be forward compatible for at least the next 3-5 years (possibly more) because the mechanism they designed is extremely flexible and elegant. As the Apple Watch gets lighter and thinner there’s no reason they will need to change the band attachment. That’s one of the reasons I was comfortable buying the steel watch with the Milanese Loop: I am comfortably certain that the band will be compatible with the replacement watch I buy next year (and the year after, and probably the year after that.) Assuming this prediction holds true (I’d be willing to pay 5:1 on a bet) I may buy a link bracelet next time around.</p>
<p>That being said, I’m completely sold on the smartwatch as a platform and form-factor. Having notifications on your wrist is wonderful (once you tone them down a bit) and quickly reading and responding to text messages without digging out your phone is a delight as implemented on the Apple Watch. Not needing a second device for fitness tracking is great, and Apple does a better job than any device on the market other than the excellent Microsoft Band. I think wearables in general will continue to be a huge growth area, and the watch is the ideal initial form, since they are unobtrusive and familiar. In 10 years, we’ll have displays built into glasses and contact lenses (or something equally unobtrusive) but for now, a watch is the best place to get notifications and display glanceable information.</p>
<p>Apple’s implementation is arguably better than any currently available smartwatch, but the competitors are very close behind, and Apple’s edge is probably not sufficient to offset the cost. However, if you’re in the iOS ecosystem, they’re the only game in town. Pebble is neat, but lack of deep integration with iOS will always limit its usefulness on the platform. If you’re an Android user, Android Wear and Galaxy Gear devices are quite good, and getting better every day. At some point, there will be a device that matches your needs and taste if there isn’t one already. To pretend that Apple needs to compete with Android Wear ignores the fact that the potential audience has already made their mobile platform selection, and a smartwatch is insufficiently compelling to make them switch. From a purchasing perspective, though, Apple’s competition isn’t Android Wear. Its competition isn’t the Pebble. It’s not even a non-smart watch. The competition is nothing at all. That’s right: for most potential buyers, it’s competing with not wearing ANY device on your wrist. The Apple Watch, like any smartwatch, needs to earn its spot on your wrist. As John Gruber says in his recent post, nobody needs a smartwatch; you either WANT one, or you don’t. Does the watch reach the bar of “want?” Yeah, I think the product is sufficiently compelling to cross that line, but for most people, not at such a high entry price combined with first-generation obsolescence speed.</p>
<div class="footnotes">
<ol>
<li id="fn:1">
<p>I suspect that the higher volume of inquiries was indicative not so much of a lot of interest in the device, but rather that reviews have been mixed/hedged, so people are interested in hearing an opinion. <a href="#fnref:1" class="reversefootnote">&#8617;</a></p>
</li>
<li id="fn:2">
<p>I can’t think of a better use for the side button, but I would probably have considered releasing the watch with only the Digital Crown, placed in the center, and no other buttons. The only real challenge that would create is how best to quickly signal intent for Apple Pay. <a href="#fnref:2" class="reversefootnote">&#8617;</a></p>
</li>
</ol>
</div>
2015-05-06T21:21:00-06:00
http://origin.jrj.org/2015/04/27/your-password-sucks/
Your Password Sucks
2015-04-27T13:07:00-06:00
JRJ
http://jrj.org/
Your password sucks, but every character counts…
<!---
![Your Password Sucks](/assets/postheads/passwordsucks.png "Your Password Sucks")
-->
<p><a href="http://blog.codinghorror.com/your-password-is-too-damn-short/">Jeff Atwood has a great piece on Passwords</a>, with a heavy emphasis on length. He uses <a href="https://www.grc.com/haystack.htm">Steve Gibson’s Password Crack Checker</a> to estimate the amount of time needed to crack passwords of varying lengths, and the numbers will probably surprise you… and keep in mind, this is for a truly random password (like <code class="highlighter-rouge">Uhs&amp;81Aj</code>); anything with variations of dictionary words, like <code class="highlighter-rouge">M0nk3y!89</code>, would be MUCH faster. Indeed, that password would be cracked by most brute force cracking tools within a few seconds.</p>
<p><img src="/assets/your-password-is-too-damn-short.jpg" alt="Your Password is too Damn Short!" title="Your password is too damn short!" /></p>
<p>A random 8-character password using all character types takes over two thousand centuries to crack in the “online attack scenario,” which assumes 1,000 guesses per second over a network. (That’s probably a bit optimistic; few network attacks can sustain that many requests per second, so figure 3-5X longer in most real-world online attacks.) An 8-character password holds up fine in that scenario. <strong>However, that’s not how passwords are typically attacked.</strong> A more realistic attack is that someone downloads a bunch of leaked password hashes from an online breach and attacks them locally. On a typical computer, depending on the hashing technique used, this can be very fast: as in a hundred billion guesses per second fast. Using this technique, cracking our hypothetical 8-character password takes 18.62 hours. Obviously, this is unacceptable: the password can be trivially cracked in under a day using a typical PC workstation.</p>
<p>The calculator also estimates for a scenario it calls a “massive cracking array.” Most people think of this as “how fast can the NSA crack my password.” However, keep in mind that virtually unlimited computing resources are available to anyone willing to pay for them via services like Amazon AWS or Microsoft’s Azure. Instead of building out a room with racks upon racks of servers (a huge capital expenditure), you can rent these resources by the hour: simply spin up a few thousand machines, crack away, and then spin them down after an hour, at a much more approachable cost. Think tens or hundreds of dollars.</p>
<p>Using this type of resource against our hypothetical 8-character password? 1.12 minutes. So you could spin up a bunch of AWS machines, and crack ~50 of them in an hour, then spin down.</p>
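<p>The arithmetic behind these estimates is easy to reproduce. Here’s a quick Python sketch (my own back-of-the-envelope model, not GRC’s code) using a 95-character printable-ASCII alphabet; note that GRC’s calculator also counts every shorter length in its search space, so its figures run slightly higher than these:</p>

```python
# Back-of-the-envelope crack-time math for a truly random 8-character
# password drawn from the 95 printable ASCII characters. The three rates
# mirror the scenarios described above; GRC's calculator counts the
# cumulative search space, so its numbers are slightly larger.
ALPHABET = 95
keyspace = ALPHABET ** 8
SECONDS_PER_YEAR = 3600 * 24 * 365.25

scenarios = {
    "online attack (1,000 guesses/sec)": 1e3,
    "offline attack (100 billion guesses/sec)": 1e11,
    "massive cracking array (100 trillion guesses/sec)": 1e14,
}

for name, rate in scenarios.items():
    seconds = keyspace / rate
    print(f"{name}: {seconds:,.0f} seconds (~{seconds / SECONDS_PER_YEAR:.3g} years)")
```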
<p>Does this bother you? It should. However, increasing the length of a password makes it only a tiny bit more difficult to remember, but exponentially more difficult to crack. The “Massive Cracking Array” estimates for a few different password lengths are instructive:</p>
<table>
<thead>
<tr>
<th> </th>
<th>8 char</th>
<th>10 char</th>
<th>12 char</th>
<th>16 char</th>
<th>24 char</th>
</tr>
</thead>
<tbody>
<tr>
<td><strong>MCA crack time</strong></td>
<td>1.1 minutes</td>
<td>1 week</td>
<td>174 years</td>
<td>1.41 hundred million centuries</td>
<td>9.38 hundred billion trillion centuries</td>
</tr>
</tbody>
</table>
<p>So going from 8 characters to 10 characters is the difference between a little over a minute and a week to crack.</p>
<p>Adding 2 more characters (for a total of 12) gets you to 174 years.</p>
<p>Willing to go to 16 characters? 1.41 hundred million centuries.</p>
<p>24 characters? We start flirting with the eventual heat-death of the universe.</p>
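<p>That exponential growth falls directly out of the keyspace math: each added character multiplies the search space by 95. Here’s a rough sketch of the “massive cracking array” row (my own approximation, assuming a flat 100 trillion guesses per second, so the results land near GRC’s figures rather than matching them exactly):</p>

```python
# Approximate "massive cracking array" times from the table above:
# keyspace = 95^length, attacked at a flat 100 trillion guesses per second.
RATE = 1e14  # guesses per second
SECONDS_PER_YEAR = 3600 * 24 * 365.25

for length in (8, 10, 12, 16, 24):
    seconds = 95 ** length / RATE
    if seconds < 3600 * 24:
        print(f"{length} chars: {seconds / 60:,.1f} minutes")
    elif seconds < SECONDS_PER_YEAR:
        print(f"{length} chars: {seconds / (3600 * 24):,.1f} days")
    else:
        print(f"{length} chars: {seconds / SECONDS_PER_YEAR:,.3g} years")
```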
<p>Of course, this is with current computers… and they get faster every year, so these numbers will get smaller every year, in an exponential manner.</p>
<p>Bottom line? Download a password manager (I recommend <a href="http://1password.com">1Password</a> for Apple-centric users and <a href="http://lastpass.com">LastPass</a> for Windows/Android-centric users) and start generating a different strong password for every site. When you do, remember that the cost of increasing the length of your passwords is suddenly essentially free, since you are neither remembering nor typing them. Personally, I use 16-character passwords for most sites and 24 for high-security sites (like my work and banking passwords), but 12 and 16 would probably be perfectly reasonable for most users.</p>
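<p>If you ever need to generate a password like this outside a password manager, Python’s standard-library <code class="highlighter-rouge">secrets</code> module is one reasonable sketch (the <code class="highlighter-rouge">make_password</code> name is my own for illustration, not an API from any tool mentioned above):</p>

```python
import secrets
import string

# Generate a random password from letters, digits, and punctuation using
# the standard library's cryptographically secure RNG.
def make_password(length: int = 16) -> str:
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(make_password(16))  # everyday sites
print(make_password(24))  # high-security sites (work, banking)
```

<p>Some sites reject certain punctuation characters, so you may need to trim the alphabet; a password manager handles those quirks for you, which is another point in its favor.</p>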
2015-04-27T13:07:00-06:00
http://origin.jrj.org/2014/09/16/local-ios-backup-locations/
Local iOS Backup Locations
2014-09-16T13:21:00-06:00
JRJ
http://jrj.org/
Changing the location where iTunes backs up your iOS devices…
<!---
![Local iTunes Backups](/assets/postheads/itunesbackuploc.png "iTunes Backup Locations")
-->
<p>It’s an interesting technological paradox: storage on devices (like phones and tablets) is growing at a geometric rate, but (for the first time in memory) storage on desktop and laptop PCs and Macs has been shrinking for years as they transition from hard drives to solid state disks. This becomes intensely frustrating if you want to maintain a local backup of all your devices.</p>
<p>Yes, I use iCloud backup as well, but I like having a local backup. Restoring is lightning fast (no <a href="http://en.wikipedia.org/wiki/Lightning_(connector)">pun</a> intended) and reliable. Also, local backups include copies of media and apps so you don’t have to download them separately. While I still strongly recommend iCloud backup to everyone (despite recent events, which would have been prevented by a strong password) that doesn’t mean that local backups aren’t useful.</p>
<p>The problem is that my iMac has a 128GB SSD, my iPad has 128GB of storage, and my iPhone has 64GB of storage. See the problem? Fortunately, I also have a 3TB HD in my iMac for storage; the 128GB SSD holds only the OS and applications. However, the location to which iTunes backs up your devices is hard-coded, and there’s no way to change it. So, much like my solution for iTunes Media storage, I decided to set up a symbolic link. A few simple steps and I’m up and running with local device backups:</p>
<ol>
<li>
<p>Make sure the iTunes application on your Mac is closed</p>
</li>
<li>
<p>Move the folder called <strong>~/Library/Application Support/MobileSync/Backup/</strong> to the new location you want backups to live. In my case, this is a secondary internal hard drive, but it could just as easily be an external drive or a share on a NAS device.</p>
</li>
<li>
<p>Launch Terminal (<code class="highlighter-rouge">Applications/Utilities/Terminal</code>) and enter the following command:</p>
</li>
</ol>
<p><code class="highlighter-rouge">
ln -s /Volumes/JRJHDD/Backup/ ~/Library/Application\ Support/MobileSync/Backup
</code></p>
<p>(Replace <code class="highlighter-rouge">JRJHDD</code> with the name of your hard drive.)</p>
<p>Now, when you run iTunes and do a local backup of your iPhone or iPad it will go to the new location. You can accomplish the same on Windows using a tool called “<a href="http://technet.microsoft.com/en-us/sysinternals/bb896768.aspx">Junction</a>.”</p>
2014-09-16T13:21:00-06:00
http://origin.jrj.org/2014/04/16/heartbleed-openssl-oversight-adult-supervision/
Heartbleed Shows a Need for Oversight
2014-04-16T18:49:00-06:00
JRJ
http://jrj.org/
Heartbleed demonstrated that projects critical to the net’s infrastructure need help…
<!---
![Heartbleed Vulnerability](/assets/postheads/heartbleed.png "Heartbleed OpenSSL Vulnerability")
-->
<p>Heartbleed happened while I was on my vacation and not really keeping up with things, hence I haven’t really commented much. <a href="http://dankaminsky.com/2014/04/10/heartbleed/">Dan Kaminsky (and, by extension, Matthew Green) pretty much summed up my thoughts perfectly</a>:</p>
<blockquote>
<p>“…The answer is that we need to take Matthew Green’s advice, start getting serious about figuring out what software has become Critical Infrastructure to the global economy, and dedicating genuine resources to supporting that code.”</p>
</blockquote>
<p><strong>Exactly right.</strong> We can’t have fulcrum points of Internet infrastructure held together by chewing gum and baling wire in the form of code written by hobbyists. When an open source project reaches a certain critical mass, it requires professional oversight and rigorous security audits. Not to put too fine a point on it: “adult supervision.”</p>
<p><img src="/images/jrj-adultsupervision.png" alt="Critical Infrastructure Needs Adult Supervision" /></p>
<p>I’m not discounting the contribution of open source developers, nor am I minimizing the benefit of the transparency an open source model provides. I’m simply stating that it is not a full replacement for the input of security professionals and comprehensive audits. To some degree this can be crowd-funded (example: the <a href="http://istruecryptauditedyet.com">TrueCrypt audit</a>, though much more needs to be done here), but for the most critical components of internet infrastructure, like OpenSSL, the companies that benefit most need to pay a share. Oracle and Google are the first names that come to mind, but there are dozens of companies that could (and should) contribute to a consortium working to ensure the integrity of internet infrastructure. I usually don’t blog about work, but I will say that if such a consortium existed I would advocate loudly inside Adobe to contribute, and I suspect that call would be met with enthusiasm.</p>
<p><a href="http://dankaminsky.com/2014/04/10/heartbleed/">Kaminsky’s post</a> is a bit long, but very much worth reading.</p>
2014-04-16T18:49:00-06:00