I can’t recall a game that so quickly had me invested in the story and characters. There’s the gut-punch of the opening choices you make to set the stage, brilliantly presented as a series of simple text-based prompts that still manage to be emotionally powerful. This leads to wonderfully written dialogue between the game’s two main characters, bolstered by the fantastic voice performances—I would go so far as to say the best in a game, ever—by actors Rich Sommer and Cissy Jones.

The game cultivates this immeasurable and invaluable quality of being wilderness. The thing it creates is this sense of truly being out not just in the Shoshone National Forest but in any untamed expanse of land. You latch onto landmarks like a weird-shaped rock or a particular cluster of trees. You manage to orient yourself, never feeling lost but also never quite having a complete grasp on the world around you.

The places are so pretty and the plot is so immediately arresting that in no time at all you’re deep in Firewatch’s thrall. The profound sadness that pervades the whole experience, the isolation, the reasons you went out into the forest in the first place… it’s an experience not to be missed.

While the climactic (or, for some, anti-climactic) ending of Firewatch left plenty of players happy to leave the Thorofare behind, a second playthrough could still have plenty to offer. Depending on how thorough you are, there may still be a number of interesting conversations, events and even locations that you missed on your first time around.

The best computer games combine screenwriting, painting, music, and more into something hard to describe.

Congratulations, Campo Santo, on receiving the accolades you so thoroughly deserve.

With some frequency, I find that my down-time thoughts are filled with worry. Are my children OK? Am I doing enough for them? Am I saving enough for retirement? Am I taking care of my health? When will the next big earthquake hit the Bay Area? Will there ever be peace in the Middle East?

The folks at Alameda Point Harbor are pretty stoked: The “seal monitors” counted a record number of harbor seals floating on a concrete dock on Tuesday, beating last year's high set on Christmas Day.

“That’s a record for the year and surpasses last year’s record of 38,” which was set on Dec. 25, 2016, the group wrote on its Facebook page. “It looks pretty much like a full house now.”

2016 was a tremendous year for tabletop gaming. The tireless members of the Ars Cardboard crew spent a lot of time playing, replaying, and dissecting the year's new titles, and we're ready to tell you what we enjoyed most.

A caveat: because so many games appear each year, and because the board game release calendar is so heavily weighted toward the latter half of the year, we didn't (and simply couldn't) play absolutely everything. But we gave it a good shot!

So here, in no particular order, are our 20 favorite tabletop games of 2016—along with a few runners-up and notable new editions.

He created a service where physicists could post their preprints as "e-prints" accessible to anyone with an Internet connection. The idea caught on, submissions multiplied, and subject matter expanded to include mathematics, astrophysics, computer science and, most recently, biology and statistics.

Eleven years ago Ginsparg joined the Cornell faculty, bringing what is now known as arXiv.org with him. (Pronounce it "archive." The X represents the Greek letter chi.) It is managed by Cornell University Library, allowing Ginsparg to devote more time to his research.

you don’t even know me, but I wanted to take a minute to tell you that what matters is that you like your own hat, hat-wearing female dog. Who is this guy anyway, some sort of dog hat expert?? Who cares what he thinks??? Wear a hat you love

This new feature takes your hand and walks you through the mistakes you made in a game. It lets you figure out a better move for each of them. And finally, if you request it, lichess tells you what the best move was.

Instead of telling you right away what you should have played, this feature gives you a chance to rethink the position by yourself. That's how we learn.

In a story where engineers are more central than Jedi or Sith, Rogue One breaks new ground for the franchise, both in its characters and in the ethical territory it covers. Not to diminish the character arcs of Jyn Erso and Cassian Andor, but the core ethical arc of the film is one man’s decision to engineer the Death Star in such a way as to prevent its use for galactic domination. One could fairly retitle the movie ‘Rogue One: An Engineering Ethics Story.’

From the moment the three of them start tapping their feet until they collapse in a laughing heap on an overturned couch, there are only nine cuts in more than three minutes—nine unforgiving head-to-toe shots which left no room for error. I’ve seen Singin’ in the Rain many times, but I’ve watched “Good Morning” closer to a hundred—it was one of the things I used to teach my daughter how to watch movies when she was too young to watch anything longer—and it’s as close to perfection as movies get.

Wednesday, December 28, 2016

One challenge with a company like Uber or GitHub is that they are private, so they don't have to release their financials, which makes it hard to understand how well (or poorly) they are doing.

Though the name GitHub is practically unknown outside technology circles, coders around the world have embraced the software. The startup operates a sort of Google Docs for programmers, giving them a place to store, share and collaborate on their work. But GitHub Inc. is losing money through profligate spending and has stood by as new entrants emerged in a software category it essentially gave birth to, according to people familiar with the business and financial paperwork reviewed by Bloomberg.

"Profligate spending?" "Stood by as new entrants emerged?" These are strong words, indeed.

Newcomer also takes the expected swing at the management scandals at GitHub, and even drops a rumor of a recent layoff.

As I said, it's hard to know what to make of such an article, given that the company is private and we just have to take people's word for things. Still, this article definitely smacked of being a bit of a hit piece, particularly since it spoke so glowingly of GitHub's upstart competitor, GitLab:

The issue took on a new sense of urgency in 2014 with the formation of a rival startup with a similar name. GitLab Inc. went after large businesses from the start, offering them a cheaper alternative to GitHub. “The big differentiator for GitLab is that it was designed for the enterprise, and GitHub was not,” says GitLab CEO Sid Sijbrandij. “One of the values is frugality, and this is something very close to our heart. We want to treat our team members really well, but we don’t want to waste any money where it’s not needed. So we don’t have a big fancy office because we can be effective without it.”

Plassnig criticizes Newcomer, justifiably I think, for focusing too much on 2014, and not enough on 2016:

GitHub was the darling of the developer community, but 2014 (harassment scandal, Tom Preston-Werner resigning) and 2015 (slower progress, increased competition) were challenging, and it suddenly wasn’t set in stone anymore that GitHub would dominate and own their vertical in the same way that, for example, Amazon Web Services owns theirs.

The September 2015 ARR number of $90M seems to reflect that. But, if they were struggling in 2015, they blew it out of the water in 2016 and went from $90M in September 2015 to $140M in August 2016.

And, perhaps more importantly, Plassnig tries to redirect attention away from GitHub's free hosted services and toward their booming enterprise business:

GitHub offers three different products as of December 2016:

github.com personal plan

github.com organizational plan

GitHub Enterprise (on-premise/VPC product)

All of the growth in the last two years came from GitHub’s organization plans or GitHub Enterprise. The revenue from their organization plans roughly doubled and the revenue of GitHub Enterprise tripled over the course of 23 months (Sep’14 — Aug’16) while the revenue of their personal plans stagnated according to the numbers Bloomberg published.

50% of GitHub’s ARR came from GitHub Enterprise as of August 2016 compared to 35% back in September 2014. Their efforts to get into larger organizations and become more of a traditional enterprise software vendor also explains the higher burn rate. Historically, most of GitHub’s revenue came from their github.com offering which was completely self-serve while their GitHub Enterprise customers require more handholding and a more traditional enterprise sales process.

Those are, indeed, remarkable numbers.

Newcomer, too, is aware of the dramatic size and spectacular growth of the GitHub Enterprise numbers, although he chooses to frame it a different way, as a core tension in the company rather than an engine of growth:

GitHub says it has 18 million users, and its Enterprise service is used by half of the world’s 10 highest-grossing companies, including Wal-Mart Stores Inc. and Ford Motor Co.

Some longtime GitHub fans weren’t happy with the new direction, though. More than 1,800 developers signed an online petition, saying: “Those of us who run some of the most popular projects on GitHub feel completely ignored by you.”

The backlash was a wake-up call, Wanstrath says. GitHub is now more focused on its original mission of catering to coders, he says. “I want us to be judged on, ‘Are we making developers more productive?’” he says.

It's not obvious to me that being "focused on its original mission of catering to coders" is at all counter to the strategy of making money by selling GitHub Enterprise; as far as I know, the features that are made available to hobbyists and open source teams on hosted GitHub are also present in GitHub Enterprise, so improvements to, e.g., the code review process benefit all.

To me, the more telling and fascinating finding, which neither Newcomer nor Plassnig seem to discuss much at all, is the counter-industry-trends success of GitHub Enterprise.

In this era of the cloud, when company after company is moving everything possible onto Amazon Web Services or Microsoft Azure, the fact that so many organizations are still choosing to make massive deployments of SCM services inside their own corporate data center is astonishing to me.

I'm not sure what is driving this particular aspect of the business, although I have a few theories.

One theory is that the IP stored in an SCM system is still viewed as the most precious, private, and critical IP within an organization, and many organizations still aren't willing to trust the cloud with these materials.

Another, perhaps more likely, theory is that deployments like GitHub Enterprise are displacing older SCM systems within organizations, and those older SCM systems are almost always deployed in the corporation's main internal data center, so it is natural to replace like with like.

Finally, inside large corporations, SCM systems are rarely used "out of the box" as-is; rather, they are foundational pieces of infrastructure, with considerable flexibility and extensibility, and they are typically deployed as a base atop which the organization builds their own proprietary workflow and development tools, integrates with their own proprietary security system, etc.

In such a configuration, you may well need to have near-total control over the SCM deployment inside your own data center in order to port your existing development toolchain to it and extend that tooling over time.

This last argument, while it might well be the basis of the current choice to deploy tools like GitHub in their on-premise "enterprise" editions, seems like it has nothing fundamental to it in the long run, so I still think that, eventually, most organizations, even giant ones, will find themselves running their SCM services in the cloud.

It's just that, as these current GitHub financials appear to show, we aren't there just yet.

Monday, December 26, 2016

Derek Carr's broken fibula was a devastating moment for an entire franchise and fanbase that waited 14 years for a season like this. They rightfully had Super Bowl champion dreams with Carr. With fourth-year pro Matt McGloin now under center, any playoff win will feel like overachieving. Yet it's worth noting McGloin joins a situation that couldn't be better.

The Raiders have one of the NFL's best pass-protecting offensive lines. (In a cruel bit of bad luck, Carr was injured on the only QB hit Oakland gave up on Sunday.) They have one of the best starting wide receiver duos in football. They have an emerging running game and an offense that gets receivers wide open, often taking short passes a long way.

Carr's talent and leadership were a massive part of making all of the above work.

Thursday, December 22, 2016

If (like me) you were dimly aware of what OxyContin was, but had never really learned much about it, you'll certainly want to spend some time with this monumental, mesmerizing, harrowing, Pulitzer-worthy series that's been running in the L.A. Times:

Narcotic painkillers work differently in different people. Some drug companies discuss that variability on their product labels and recommend that doctors adjust the frequency with which patients take the drugs, depending on their individual response.

The label for Purdue’s MS Contin, for instance, recommends that doctors prescribe the drug every eight or 12 hours to suit the patient. The morphine tablet, Kadian, manufactured by Actavis, is designed to be taken once a day, but the label states that some patients may need a dose every 12 hours.

Despite the results of the clinical trials, Purdue continued developing OxyContin as a 12­-hour drug. It did not test OxyContin at more frequent intervals.

To keep the OxyContin flowing, Lake Medical needed people. Lots of them. Age, race and gender didn’t matter. Just people whose time was cheap. For that, there was no place better than skid row.

Low-level members of the Lake Medical ring known as cappers would set up on Central Avenue or San Pedro Street. The stench of urine was everywhere. People were lying in doorways, sleeping in tents, fighting, shooting up. Who wants to make some money, the cappers would shout.

For as little as $25, homeless people served as straw patients and collected prescriptions for 80s. It required just a few hours at the clinic, filling out a few forms and sitting through a sham examination. They were then driven, often in groups, to a pharmacy, where a capper acting as a chaperone paid the bill in cash. He then took the pills back to the Lake Medical ring leaders who packaged them in bulk for sale to drug dealers.

In this global drive, the companies, known as Mundipharma, are using some of the same controversial marketing practices that made OxyContin a pharmaceutical blockbuster in the U.S.

In Brazil, China and elsewhere, the companies are running training seminars where doctors are urged to overcome “opiophobia” and prescribe painkillers. They are sponsoring public awareness campaigns that encourage people to seek medical treatment for chronic pain. They are even offering patient discounts to make prescription opioids more affordable.

U.S. Surgeon General Vivek H. Murthy said he would advise his peers abroad “to be very careful” with opioid medications and to learn from American “missteps.”

Kerr credits his father for his demeanor on the sideline as an N.B.A. coach: calm and quiet, mostly, and never one to berate a player. Kerr was not always that way.

“When I was 8, 9, 10 years old, I had a horrible temper,” Kerr said. “I couldn’t control it. Everything I did, if I missed a shot, if I made an out, I got so angry. It was embarrassing. It really was. Baseball was the worst. If I was pitching and I walked somebody, I would throw my glove on the ground. I was such a brat. He and my mom would be in the stands watching, and he never really said anything until we got home. He had the sense that I needed to learn on my own, and anything he would say would mean more after I calmed down.”

His father, Kerr said, was what every Little League parent should be. The talks would come later, casual and nonchalant, conversations instead of lectures.

“He was an observer,” he said. “And he let me learn and experience. I try to give our guys a lot of space and speak at the right time. Looking back on it, I think my dad was a huge influence on me, on my coaching.”

Kerr played for some of the best basketball coaches in history — Olson at Arizona, Phil Jackson with the Chicago Bulls and Gregg Popovich with the San Antonio Spurs among them. By the standards of basketball coaches, they were worldly men with interests far beyond the court.

I can't believe I never connected Steve Kerr with Malcolm Kerr, nor even knew that Steve Kerr was born in Beirut.

Wednesday, December 21, 2016

You might call Hamilton the founding mother of software engineering. In fact, she coined the very term. She concluded that the way forward was rigorously specified design, an approach that still underpins many modern software engineering techniques—“design by contract” and “statically typed” programming languages, for example. But not all engineers are on board with her vision. Hamilton’s approach represents just one side of a long-standing tug-of-war over the “right way” to develop software.
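The "design by contract" idea is easy to illustrate with a toy (this is my own Python sketch, nothing to do with Hamilton's actual tooling): state the precondition explicitly, and have the machinery check it before the function body ever runs.

```python
def requires(pre):
    """Decorator enforcing a stated precondition: a tiny design-by-contract sketch."""
    def wrap(fn):
        def inner(*args, **kwargs):
            assert pre(*args, **kwargs), f"precondition failed for {fn.__name__}"
            return fn(*args, **kwargs)
        return inner
    return wrap

@requires(lambda xs: len(xs) > 0)
def average(xs):
    # The body can now rely on the contract: no empty-list division by zero.
    return sum(xs) / len(xs)

print(average([1, 2, 3]))  # 2.0
```

The point isn't the decorator; it's that the contract is written down and enforced by the machine, rather than living in a comment.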

Formal methods are very powerful, but also very challenging to use in practice. However, I do believe that they are making progress. For example, in an area that I (somewhat) follow, that of "distributed consensus," there has been quite a lot of progress recently, including

I think you have to be a big, well-funded organization to be able to employ techniques like this, but I'm excited to see their use growing, and hopefully they will become more accessible to practicing software engineers who have fewer resources available to them.

In the meantime, I've still got Valgrind, and the Address Sanitizer, and my compiler, and my TDD process, so I'll try to emulate Hamilton as best I can...

Tuesday, December 20, 2016

One of my early holiday escapes was Emily St. John Mandel's utterly enthralling Station Eleven.

It's a little hard to write about Station Eleven, though, because it's a book that's a bit hard to pin down.

For a book so full of death, it's a book full of life.

For a book so full of horror, it's a book full of love.

For a book so full of desolation, it's a book full of art.

For a book so full of disease, it's a book full of healing.

Station Eleven's hook is simple, yet thoroughly effective: a worldwide pandemic has almost completely extinguished the human race, but left the rest of the world unaltered. Scattered around the world, tiny snatches of survivors here and there make do.

In and about the area that was once Lower Michigan, a small group of musicians and performers have banded together, and formed a traveling entertainment group, minstrels as it were, who go from place to place, performing Shakespeare and Beethoven for their livelihood.

The company's motto:

Because survival is insufficient

Using a fairly obvious approach, the book tells the story of several of the members of The Traveling Symphony, both Before, and After, interspersing flashbacks with sequential narrative.

Mandel's touch is careful and subtle, accomplishing her purpose without shoving herself in your face. Take, for example, this passage, in which the breathless speed of the calamity and the total enormity of the transformation are conveyed via the simple device of a run-on, no-time-for-a-break, hurry-up-what-can-we-do-?, everything-all-thrown-together-in-a-heap paragraph in a single sentence:

There was the flu that exploded like a neutron bomb over the surface of the earth and the shock of the collapse that followed, the first unspeakable years when everyone was traveling, before everyone caught on that there was no place they could walk to where life continued as it had before and settled wherever they could, clustered close together for safety in truck stops and former restaurants and old motels.

Meanwhile, there is a villain.

There are Ordinary Heroes.

But through all of that, Mandel's enduring theme is that, well, life goes on:

The problem with The Traveling Symphony was the same problem suffered by every group of people everywhere since before the collapse, undoubtedly since well before the beginning of recorded history. Start, for example, with the third cello: he had been waging a war of attrition with Dieter for some months following a careless remark Dieter had made about the perils of practicing an instrument in dangerous territory, the way the notes can carry for a mile on a clear day. Dieter hadn't noticed. Dieter did, however, harbor considerable resentment toward the second horn, because of something she'd once said about his acting. This resentment didn't go unnoticed -- the second horn thought he was being petty -- but when the second horn was thinking of people she didn't like very much, she ranked him well below the seventh guitar -- there weren't actually seven guitars in the Symphony, but the guitarists had a tradition of not changing their numbers when another guitarist died or left, so that currently the Symphony roster included guitars four, seven, and eight, with the location of the sixth presently in question, because they were done rehearsing A Midsummer Night's Dream in the Walmart parking lot, they were hanging the Midsummer Night's Dream backdrop between the caravans, they'd been in St. Deborah by the Water for hours now and why hadn't he come to them? Anyway, the seventh guitar, whose eyesight was so bad that he couldn't do most of the routine tasks that had to be done, the repairs and hunting and such, which would have been fine if he'd found some other way to help out but he hadn't, he was essentially dead weight as far as the second horn was concerned.

Isn't that passage simply PERFECT? The juxtaposition of the normal and the abnormal, the human and the inhuman, the typical and the bizarre is just so delightful that it takes my breath away each time I read it.

The story moves through arcs and events, but really, Station Eleven isn't about the story, it's about the storytelling; the title itself is taken from the title of a series of graphic novels penned by one of the characters, who herself is telling a story not for a purpose, but because she feels compelled to tell a story.

In the end, says Mandel, isn't this what humans basically, fundamentally, do: we come together, and we tell each other stories.

Miranda discarded fifteen versions of this image before she felt that she had the ghost exactly right, working hour upon hour, and years later, at the end, delirious on an empty beach on the coast of Malaysia with seabirds rising and plummeting through the air and a line of ships fading out on the horizon, this was the image she kept thinking of, drifting away from and then toward it and then slipping somehow through the frame: the captain is rendered in delicate watercolors, a translucent silhouette in the dim light of Dr. Eleven's office, which is identical to the administrative area in Leon Prevant's Toronto office suite, down to the two staplers on the desk. The difference is that Leon Prevant's office had a view over the placid expanse of Lake Ontario, whereas Dr. Eleven's office window looks out over the City, rocky islands and bridges arching over harbors. The Pomeranian, Luli, is curled asleep in a corner of the frame. Two patches of office are obscured by dialogue bubbles.

Let's obscure those offices by dialogue bubbles.

Let's work hour upon hour to get the image exactly right, and render it in delicate watercolors.

Let's slip, somehow, through the frame, and immerse ourselves in creativity, in imagination, in communication, in human-ness.

If you should ever find your way to Mandel's Station Eleven, I hope you enjoy it as much as I did.

Many people have described Sorcerer to the Crown as "Harry Potter meets Jane Austen," which certainly captures the idea nicely, but I think an alternate description might be "Jonathan Strange and Mr. Norrell from an Asian-feminist perspective."

Cho is a Londoner of Malaysian heritage, and she brings that background to bear quite nicely, introducing a certain grace as well as a certain exotic flair to her story that lifts it quite above the smoky sturdiness of Susanna Clarke's wonderful work.

As with any fine work of literature, Sorcerer to the Crown isn't really what it appears to be on the surface. As it tells its story of young warlocks and witches making their way through finishing school, spring ball, and London society, it is really exploring many deeper issues of caste, race, gender, religion, and (of course: this is a novel about England) Imperialism.

So our hero, the Sorcerer to the Crown himself, is a freed slave; our heroine is a young East Asian woman; and our villains include stuffy old-money snobs, misogynist dictators, and jealous rivals.

Some fantasies would make the magic the center point, with lots of spell-weaving and dramatic displays. Cho instead wisely wields this tool with a very light touch, using such events only a handful of times, and spending most of her energy on simple, human interactions and interests:

"I desire to speak to your King," said Mak Genggang. "You had best bring me to him straightaway -- and no dillydallying, if you please, for the fate of the nation depends on it!"

"Good gracious," said Prunella, staring. "But what dreadful thing is it that is going to befall us?"

"I have befallen you," said Mak Genggang. "I was not referring to Britain, however. I was speaking of what is of rather more importance: the fate of my nation, which your King seeks to bully!"

"If you will permit me to say so, ma'am, I believe there is a misunderstanding," said Zacharias. "Our King has no wish to alienate you, and I am sure would regret any inadvertent offence."

"If he had no wish to offend, he ought not to have lent his ear to Raja Ahmad!" retorted Mak Genggang. "A sovereign ought to learn better judgment of character. It must be clear to anyone with their wits about them that the raja is a fool. But then again" -- her eyes gleamed -- "I suppose it serves your King's purpose to treat with fools!"

All of this could have been right out of any work of Austen or Dickens, were it not for the fact that Mak Genggang is the sort-of Mother Superior of the witches' coven of the Malaysian island of Janda Baik, but as you can see you hardly even notice that during the exchange.

So when Cho does decide to deploy her tools of magic, they come through as wondrous interludes that delight and amaze, even if they are still somehow solidly rooted in the peculiar dignities of English social customs:

Lord Burrow gave him an incredulous look, but with the advent of the thunder-monster the sea had been thrown into even greater tumult. The sheets of rain falling unbroken from the sky seemed as though they would cause a second Flood. The strivings of Mrs. Midsomer and the thunder-monster so infused the place with magic that every wave bore a crest of green foam, every magician was outlined in light and the opaque vault of the sky was a livid green, reflecting the unearthly glow of the battle below.

"Damn your impudence!" said Lord Burrow. "Do you mean to blackmail me at such a time as this?"

There may be magicians everywhere, but a stuffy aristocrat is still a stuffy aristocrat. Damn your impudence, indeed.

Cho's book is a delight, and she is a wondrous talent. Every writer needs a spark, and every great novel needs various mechanisms and stage props to tell its story, so I certainly can't condemn her for resorting to a bit of a gimmick ("what if I told a classic English novel of manners, updated with modern concerns, but the main characters were actual magicians?") to frame her work.

Still, I must admit to the slightest bit of disappointment: she's clearly capable of something truly great, and Sorcerer to the Crown, while thoroughly charming and enjoyable and undeniably well-performed, is just not that work of true greatness.

Congratulations on your great start, Ms Cho. Now, please, show us what you are really capable of, next.

Saturday, December 17, 2016

In a Tale of Three Safeties, we discussed three kinds of safety: type, memory, and concurrency. In this follow-on article, we will dive deeper into the last, and perhaps the most novel yet difficult, one. Concurrency-safety led me to the Midori project in the first place, having spent years on .NET and C++ concurrency models leading up to joining. We built some great things that I’m very proud of during this time. Perhaps more broadly interesting, however, are the reflections on this experience after a few years away from the project.

There was no inside-the-data center ground fault and that is exactly my point. The facility did not have a problem but the switchgear incorrectly locked out the backup power. The customer called in the utility to investigate and they reported the facility experienced a switch fault that locked out the backup generator.

Under rare circumstances the switchgear incorrectly determines there is a problem and does not transfer the load to the generator. When this happens, the generators are running but not taking load due to the switchgear lock-out, and the critical load is dropped when the UPSs are exhausted.

Cherami is a distributed, scalable, durable, and highly available message queue system we developed at Uber Engineering to transport asynchronous tasks. We named our task queue after a heroic carrier pigeon with the hope that this system would be just as resilient and fault-tolerant, allowing Uber’s mission-critical business logic components to depend on it for message delivery.

Unfortunately there’s a small period between when this process last calls accept() and when it calls close() where the kernel will still route some new connections to the original socket. The code then blindly continues to close the socket, and all connections that were queued up in that LISTEN socket get discarded (because accept() is never called for them).

For small-scale sites, the chance of a new connection arriving in the few microseconds between these calls is very low. Unfortunately, at the scale we run HAProxy, a customer-impacting number of connections would hit this issue each and every time we reload HAProxy.
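The window is easy to reproduce locally (my own toy Python, not the quoted team's actual HAProxy setup): let a connection finish its TCP handshake into the listen socket's accept queue, then close the listener without ever calling accept().

```python
import socket

# "Old process": a listening socket with a backlog.
srv = socket.socket()
srv.bind(("127.0.0.1", 0))  # any free port
srv.listen(8)
port = srv.getsockname()[1]

# A client connects; the handshake completes in the kernel and the
# connection sits in the accept queue, waiting for an accept() that
# will never come.
cli = socket.socket()
cli.connect(("127.0.0.1", port))

# The reloading process closes the listener -- exactly the window
# described above. The queued connection is discarded.
srv.close()

# The client sees a reset or an EOF on its next read.
cli.settimeout(2.0)
dropped = False
try:
    dropped = (cli.recv(1) == b"")  # EOF
except OSError:
    dropped = True                  # reset (or left dangling until timeout)
print("connection dropped:", dropped)
cli.close()
```

At a handful of connections per second this race is invisible; at tens of thousands, some unlucky clients land in the queue during every reload.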

The master is not an isolated entity. It has replicas. These replicas continuously poll the master for incoming changes, copy those changes and replay them. They have their own retry count/interval setup. When orchestrator looks for a failure scenario, it looks at the master and at all of its replicas. It knows what replicas to expect because it continuously observes the topology, and has a clear picture of what it looked like the moment before failure.

orchestrator seeks agreement between itself and the replicas: if orchestrator cannot reach the master, but all replicas are happily replicating and making progress, there is no failure scenario. But if the master is unreachable to orchestrator and all replicas say: “Hey! Replication is broken, we cannot reach the master”, our conclusion becomes very powerful: we haven’t just gathered input from multiple hosts. We have identified that the replication cluster is broken de-facto. The master may be alive, it may be dead, may be network partitioned; it does not matter: the cluster does not receive updates and for all practical purposes does not function.
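That decision rule reads nicely as a toy function (my own restatement, not orchestrator's actual code): declare a master failure only when orchestrator itself cannot reach the master *and* every replica also reports broken replication.

```python
def is_master_failure(master_reachable, replica_states):
    """Toy version of the holistic check described above: a master is
    declared dead only when we can't reach it AND every replica agrees
    that replication is broken."""
    if master_reachable:
        return False  # we can see the master ourselves; no failure
    if not replica_states:
        return False  # no corroborating witnesses; stay cautious
    return all(state == "broken" for state in replica_states)

# Network partition between orchestrator and the master only:
print(is_master_failure(False, ["replicating", "replicating"]))  # False
# The master is down for everyone:
print(is_master_failure(False, ["broken", "broken", "broken"]))  # True
```

The attraction is that the second case doesn't just aggregate opinions about liveness; it observes directly that the cluster has stopped receiving updates, which is the condition that actually matters.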

The fundamental problem is not storing bits safely for the long term, it is paying to store bits safely for the long term. With an unlimited budget an unlimited amount of data could be stored arbitrarily reliably indefinitely. But in the real world of limited budgets there is an inevitable tradeoff between storing more data, and storing the data more reliably.

Historically, this tradeoff has not been pressing, because the rate at which the cost per byte of storage dropped (the Kryder rate) was so large that if you could afford to keep some data for a few years, you could afford to keep it "forever". The incremental cost would be negligible. Alas, this is no longer true.

The place where this is used is performance-critical -- the "semi-sorted" (a type of compression) cuckoo filter has to sort the contents of a 4-element bucket any time an element is inserted. The sorting network works well on our target x86 platform because it exploits the inherent parallelism of modern processors (they can issue multiple instructions per cycle, if those instructions are independent). The entirely inlined implementation avoids a lot of unnecessary function call and setup overhead from a more general-purpose sorting algorithm.
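For illustration, here is the classic five-comparator sorting network for four elements, sketched in Python (the real implementation is inlined code on x86; this only shows the structure that makes the parallelism possible):

```python
def sort4(a, b, c, d):
    """Five compare-exchange steps; the pairs within each stage are
    independent, so a superscalar CPU can issue them in parallel."""
    a, b = min(a, b), max(a, b)   # stage 1 (independent)
    c, d = min(c, d), max(c, d)   # stage 1 (independent)
    a, c = min(a, c), max(a, c)   # stage 2 (independent)
    b, d = min(b, d), max(b, d)   # stage 2 (independent)
    b, c = min(b, c), max(b, c)   # stage 3
    return a, b, c, d
```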

Basically Postgres calculates a hash for each of your column values and then stores some bits out of each hash as one index entry, together with the row’s physical location info (as with every other index). This “merging” of many column values into one index entry, the signature in our Bloom context, is where this index type shines: it can save you a lot of disk space. Instead of 10 separate normal B-tree indexes you can now have a single Bloom index. It is lossy, meaning it won’t give you perfect accuracy (matched values always need to be re-checked against the table), but from a probabilistic viewpoint it is “good enough” to be useful.
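A toy model of the idea, assuming nothing about Postgres internals: hash each column=value pair, OR a few bits per value into one signature word, and treat matches as candidates that must still be re-checked:

```python
import hashlib

BITS = 64  # signature width; real Bloom indexes are configurable

def signature(row: dict) -> int:
    """Toy Bloom-style signature: hash each column=value pair and OR two
    bits of each hash into a single word."""
    sig = 0
    for col, val in row.items():
        h = int.from_bytes(
            hashlib.sha256(f"{col}={val}".encode()).digest()[:8], "big")
        for i in range(2):
            sig |= 1 << ((h >> (i * 8)) % BITS)
    return sig

def might_match(row_sig: int, query_sig: int) -> bool:
    """Lossy test: a row can match only if every query bit is set;
    candidate rows must still be re-checked against the table."""
    return row_sig & query_sig == query_sig
```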

The language Amazon uses around Aurora is really weird – they talk about “MySQL compatibility” and “PostgreSQL compatibility”. At an extreme, one might interpret that to mean that Aurora is a net-new database providing wire- and function-level compatibility to the target databases. However, in the PostgreSQL case, the fact that they are additionally supporting PostGIS, the server-side languages, really the whole database environment, hints strongly that most of the code is actually PostgreSQL code.

The problem is that class inheritance (by extension, the `extends` keyword in JavaScript) forces you to inherit everything from the parent class. This problem is easily avoided using object composition instead of class inheritance.
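The excerpt is about JavaScript's `extends`, but the principle is language-agnostic; here is a minimal sketch in Python, with invented class names:

```python
class Barker:
    def bark(self):
        return "woof"

class Driver:
    def drive(self):
        return "vroom"

class RobotDog:
    """Compose only the behaviors you need, rather than inheriting
    everything a parent class happens to carry."""
    def __init__(self):
        self.barker = Barker()
        self.driver = Driver()

    def bark(self):
        return self.barker.bark()

    def drive(self):
        return self.driver.drive()
```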

Many vi users have an epiphany when they realize that vi does not just provide a set of modes making various text editing shortcuts easier to type, but actually provides a text editing language.

Commands compose to express complex changes: dw in vi is not just a shortcut to delete a word, it is the combination of a verb, d for delete, with an object, w for word. There are more complex objects, like ib (inside block), which refers to the contents of the parentheses surrounding the cursor, so yib would yank (copy) the text inside the surrounding parentheses.

This language allows the programmer to express their intent much more closely than in other editors; most editors can express "delete the word after the next parenthesis", but more often than not, expressing that intent is more cumbersome than simply doing an ad-hoc edit. Text editing as a language changes that, by making clearly expressing your intent the fastest and easiest way to do your edit.

While we often worry about sophisticated digital attacks, the most common attacks for accessing news organizations’ accounts depend on only a few simple weaknesses. These weaknesses are usually a combination of predictable passwords, phishing emails designed to steal login credentials, and malicious file attachments in email and elsewhere. While the attacks are simple, so are the defenses. This collection of resources and learning materials will walk you through practices recommended by security specialists for defending your newsroom against common attacks on your accounts.

I still have real complaints with the software. These include fundamentally different concepts merged under the same label, and the fact that commands may do many different things depending on how you call them. Because the concepts are not clear, this is worse than a learning-curve issue: one cannot have a good grasp of what git is doing behind the scenes, because this is not always clear.

No one should be surprised that unscrupulous buyers use eBay to commit fraud on unsuspecting sellers. What surprised me was the extent to which eBay now facilitates this fraud through its “buyer protection program”. In October this year I listed a very slightly used iPhone 6S for sale on eBay and was quite satisfied when it eventually sold for $465. This satisfaction was short-lived, however, as I came to realize that I had been taken in by an eBay scammer.

So impressed was Wiener that he promised Pitts a Ph.D. in mathematics at MIT, despite the fact that he had never graduated from high school—something that the strict rules at the University of Chicago prohibited. It was an offer Pitts couldn’t refuse. By the fall of 1943, Pitts had moved into a Cambridge apartment, was enrolled as a special student at MIT, and was studying under one of the most influential scientists in the world. It was quite a long way from blue-collar Detroit.

KVH was founded as Sailcomp back in 1982 by Arent Kits van Heyningen and his sons Robert and Martin. Their initial product offering was a digital compass for use in racing sailboats. Nearby Newport, Rhode Island was at the time a hot-bed for racing sailboats and, although America’s Cup racing no longer takes place there, sail racing remains an important part of Newport.

KVH has evolved to focus on satellite communications/guidance and stabilization for both military and civilian applications but, walking through the factory, you can see that the descendant of the original KVH product remains a part of the now much broader product line.

When it comes to software engineering, I'm a very practical sort. Knowing all the theory is great, and I do try to keep my knowledge updated and accurate, but what really matters to me is writing solid, reliable, efficient, USEFUL software.

So here are two books that were surprisingly better than I had thought they might be, principally because they are, at their core, eminently practical:

The SRE book is rather an odd bird. It's really 34 separate chapters, written, in total, by about 50 different people, all of whom are or once were Google SREs.

Each chapter has a topic, and the overall topics are collected into themes and roughly grouped and ordered, but each chapter is independent and you can easily skip around and read different things in different orders, as your interests and needs dictate.

Everything about this book is practical: it's nothing but hard-earned knowledge from a group of people who have been spending their entire professional careers down in the bowels of Google's production systems, keeping them running night and day.

Sometimes the advice is rather basic ("automate everything!"); sometimes the advice is quite advanced (adjusting the overload multiplier in the equation which governs client request throttling behaviors). In all cases, however, it's useful and well-explained.

This book has been "the hotness" for years now, and for years I had been avoiding it.

I feared it would be nothing but cheap tricks and short-cuts, a cram-class for people trying to pretend they're something they're not.

But I recently read a very favorable blog article about CTCI, and decided to give it a try.

I wish I'd done this years ago.

Although it's certainly useful for its stated purpose (at-home preparation for new graduates who are hoping to enter the workforce), it's much more broadly useful to the practicing professional programmer.

Again, as with SRE, CTCI is divided into many separate chapters, each with a very specific topic, gathered into themes, and roughly grouped and ordered. And, again, you can jump around however you like, as your interests and needs dictate.

Each chapter contains a very clear, very concise, and very useful summary of the current received wisdom about some aspect of software engineering, including the major topics, algorithms, approaches, etc., as well as lots of references for people who want to dig deeper.

More, each chapter then contains a fascinating set of quiz questions, each one suitable for you to sit down with a pencil and pad of paper, a strong cup of coffee, and a quiet hour, to work through.

Even better, each question comes with one or more "hints", organized in a very clever fashion so that you can choose to look at a single hint without "spoiling" things by accidentally seeing all the other hints at the same time. I found that the hints, in many cases, were the best part of the book, as they often sparked insightful rumination about how to look at common problems in significantly different ways.

And, as the title promises, each question contains a detailed answer, explaining not only how to solve the problem properly using the techniques from that chapter, but also, in many cases, notes about common pitfalls and errors that you'll want to avoid.

One quibble: whatever semi-automated process the CTCI team are using to update their book from edition to edition suffers from some sort of cross-referencing flaw, as many of the questions would indicate that the answer could be found on page NNN, when in fact the actual answer was on a different page number. This little annoyance was easy to work around, though, as the answers were in the same order as the questions.

Both SRE and CTCI are surprisingly "deep" books, with a LOT of material. I've been, on-and-off, working my way through each of them over a period of months, and I imagine I might still be digging into them months from now.

Tuesday, December 6, 2016

If rapid growth could not drive major margin improvements between 2012 and 2016, there is no reason to believe that Uber will suddenly find billions in scale economies going forward. Fundamentally digital companies like Amazon, EBay, Google and Facebook had massive operating scale economies because the marginal cost of expanded operations was close to zero. Aggressive pricing fueled the growth that drove major margin improvements and also created major consumer welfare benefits.

By contrast, in the hundred years since the first motorized taxi, there has been no evidence of significant scale economies in the urban car service industry. That explains why successful operators never expanded to other cities and why there was no natural tendency towards concentration in individual markets. Drivers, vehicles and fuel account for 85% of urban car service costs. None of these costs decline significantly as companies grow. As the P&L data above demonstrates, Uber has not discovered a magical new way to drive down unit costs.

Every other transport industry depends on highly centralized management using highly sophisticated systems to ensure that capital assets are highly utilized and tightly scheduled around market demand. The Uber business model implies that all these industries are horribly wrong; decentralizing asset purchasing, maintenance and scheduling to isolated low-wage workers would not only reduce costs, but create an efficiency gain large enough to drive all incumbent operators out of business. No one has produced any economic evidence demonstrating that the Uber view might be correct.

Hundreds of other consumer industries have migrated from telephone ordering to smartphone and internet ordering (pizza delivery, airline booking), but there is not a single case where this had any material impact on industry competition, much less created tens of billions of dollars in corporate value. The major emphasis on the app in pro-Uber articles appears to be symbolic; the app implies the existence of magically new “on-demand” efficiencies (just push a button and your car appears).

Highlighting the app also implies that Uber is a “technology company” that has completely “disrupted” industry economics, and is not simply a traditional company like Domino’s Pizza that is utilizing smartphone ordering. Needless to say, none of these articles are written by anyone with actual expertise in ecommerce or urban transportation, and none provide any evidence supporting the claim that the app represents breakthrough technology that gives Uber a powerful competitive advantage.

From its earliest days, Uber’s investors and managers have always recognized that investor returns would require global industry dominance, and the elimination (or effective nullification) of longstanding laws and regulations designed to protect competition, and to protect consumers from the risks of anti-competitive market power[1]. This presumes that urban car services can be turned into a “winner-take-all-game”, where the winner can earn sustainable rents once quasi-monopoly industry dominance has been achieved. Dominance would also allow Uber to leverage its platform in order to expand into other markets that it could not otherwise profitably enter.

Saturday, December 3, 2016

It can take up to 180 milliseconds for data traveling by undersea cables at nearly the speed of light to cross the Pacific Ocean. Data traveling across the Atlantic can take up to 90 milliseconds. This travel time is compounded by the way TCP works. To establish a reliable connection for uploads, the client initiates what’s called a slow start. It sends a few packets of data, then waits for an ACK (or acknowledgement), confirming that the data has been received. The client will then send a larger group of packets and await confirmation, repeating this process until ultimately transmitting data at the user’s full available link capacity. Given the limitations we encounter here—the distance across the Pacific Ocean, and the speed of light—there are only so many optimizations we can make before physics stands in the way.
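A back-of-envelope sketch of why this matters: with a congestion window that doubles every round trip, each doubling costs a full RTT, and at 180 ms across the Pacific those round trips add up. The initial window of 10 segments below is a common modern default, not a figure from the article:

```python
def rtts_to_reach(target_segments: int, initial_window: int = 10) -> int:
    """How many round trips until a doubling congestion window reaches
    target_segments? (Classic slow start; real stacks are subtler.)"""
    rtts, window = 0, initial_window
    while window < target_segments:
        window *= 2
        rtts += 1
    return rtts

# e.g. reaching a 640-segment window takes 6 round trips; at 180 ms each,
# that is over a second before the link is fully utilized.
```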

What exactly is Slicer then? It has two key components: a data plane that acts as an affinity-aware load balancer, with affinity managed based on application-specified keys; and a control plane that monitors load and instructs application processes as to which keys they should be serving at any one point in time. In this way, the decisions regarding how to balance keys across application instances can be outsourced to the Slicer service rather than building this logic over and over again for each individual back-end service. Slicer is focused exclusively on the problem of balancing load across a given set of backend tasks.
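A toy version of such a control-plane decision (this is my own greedy sketch, not Slicer's actual algorithm): assign the hottest key slices first, each to the currently least-loaded task:

```python
def assign_slices(loads: dict, tasks: list) -> dict:
    """Greedy sketch: hottest slices first, each to the least-loaded
    task, so every application key maps to exactly one backend task."""
    assignment = {}
    task_load = {t: 0.0 for t in tasks}
    for slice_key, load in sorted(loads.items(), key=lambda kv: -kv[1]):
        target = min(tasks, key=lambda t: task_load[t])
        assignment[slice_key] = target
        task_load[target] += load
    return assignment
```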

Distributed Systems are difficult to build and test for two main reasons: partial failure & asynchrony. These two realities of distributed systems must be addressed to create a correct system, and oftentimes the resulting systems have a high degree of complexity. Because of this complexity, testing and verifying these systems is critically important. In this talk we will discuss strategies for proving a system is correct, like formal methods, and less strenuous methods of testing which can help increase our confidence that our systems are doing the right thing.

For the processing part, a master is elected among the cluster members. Zookeeper could be used for leader/master election, but since BigBen already uses Hazelcast, we used the distributed lock feature to implement a Cluster Singleton. The master then schedules the next bucket and reads the event counts. Knowing the event count and shard size, it can easily calculate how many shards there are in total. The master then creates pairs of (bucket, shard_index) and divides them equally among the cluster members, including itself. In case of unequal division, the master tries to take the minimum load on itself.
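The division step can be sketched as follows (names are mine; round-robin with the master placed last is one plausible way to give the master the minimum load on uneven splits):

```python
def divide_shards(bucket, shard_count, members, master):
    """Round-robin (bucket, shard_index) pairs across members, with the
    master last so any remainder lands on the other members first."""
    pairs = [(bucket, i) for i in range(shard_count)]
    order = [m for m in members if m != master] + [master]
    plan = {m: [] for m in order}
    for idx, pair in enumerate(pairs):
        plan[order[idx % len(order)]].append(pair)
    return plan
```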

If you have programmed applications in Java, you have probably worked with concurrency primitives like the synchronized statement (the intrinsic lock) or the concurrency library that was introduced in Java 5 under java.util.concurrent, such as Executor, Lock and AtomicReference.

This concurrency functionality is useful if you want to write a Java application that uses multiple threads, but the focus here is to provide synchronization in a single JVM and not distributed synchronization over multiple JVMs. Luckily, Hazelcast provides support for various distributed synchronization primitives such as the ILock, IAtomicLong, etc. Apart from making synchronization between different JVMs possible, these primitives also support high availability: if one machine fails, the primitive remains usable for other JVMs.

Today we are launching AWS Step Functions to allow you to do exactly what I described above. You can coordinate the components of your application as a series of steps in a visual workflow. You create state machines in the Step Functions Console to specify and execute the steps of your application at scale.

Each state machine defines a set of states and the transitions between them. States can be activated sequentially or in parallel; Step Functions will make sure that all parallel states run to completion before moving forward. States perform work, make decisions, and control progress through the state machine.

btree nodes are log structured, with multiple sorted sets of keys. In memory, we sort/compact as needed so that we never have more than three different sets of keys: the lookup and iterator code has to search through and maintain pointers into each sorted set of keys, so we don't want to deal with too many. Having multiple sorted sets of keys ends up being a performance win, since the result is that only the newest and smallest is being modified at any given time, and the rest are constant - we can construct lookup tables for the constant sets of keys that are drastically more efficient for lookup, but wouldn't be possible to update without regenerating the entire lookup table.
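A toy model of that lookup path, not bcachefs code: keep several sorted runs and probe the newest first, so a recent write shadows older ones:

```python
import bisect

class BtreeNode:
    """Toy node holding multiple sorted sets of (key, value) pairs,
    ordered oldest to newest; only the newest set is ever modified."""
    def __init__(self):
        self.sets = []

    def insert_set(self, pairs):
        """Append a freshly written run (sorted on insertion)."""
        self.sets.append(sorted(pairs))

    def lookup(self, key):
        for pairs in reversed(self.sets):          # newest first
            i = bisect.bisect_left(pairs, (key,))  # binary search per set
            if i < len(pairs) and pairs[i][0] == key:
                return pairs[i][1]
        return None
```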

Probabilistic data structures store data compactly with low memory and provide approximate answers to queries about stored data. They are designed to answer queries in a space-efficient manner, which can mean sacrificing accuracy.

Like Bloom filters, the Cuckoo filter is a probabilistic data structure for testing set membership. The ‘Cuckoo’ in the name comes from the filter’s use of the Cuckoo hash table as its underlying storage structure. The Cuckoo hash table is named after the cuckoo bird because its design mimics the bird’s brood-parasitic behavior: cuckoos are known to lay eggs in the nests of other birds, and once an egg hatches, the young bird typically ejects the host’s eggs from the nest. A Cuckoo hash table behaves similarly when an item must be inserted into an occupied ‘bucket’.
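A minimal cuckoo-hashing insert, sketched with two candidate buckets of one slot each (real cuckoo filters store short fingerprints in multi-slot buckets; this only shows the eviction behavior):

```python
def cuckoo_insert(table, key, hash1, hash2, max_kicks=32):
    """Try the key's two candidate buckets; if both are full, evict the
    resident (the 'cuckoo' step) and re-home it, up to max_kicks times."""
    b = hash1(key)
    if table[b] is None:
        table[b] = key
        return True
    b = hash2(key)
    for _ in range(max_kicks):
        if table[b] is None:
            table[b] = key
            return True
        table[b], key = key, table[b]            # evict the resident
        # the evicted key moves to its *other* candidate bucket
        b = hash2(key) if b == hash1(key) else hash1(key)
    return False                                  # table needs a rehash
```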

So what, exactly, goes into a design document for a problem domain? What makes these docs so detailed and rigorous? I believe that the hallmark of these designs is an extremely thorough assessment of risk.

As the owner of a problem domain, you need to look into the future and anticipate everything that could go wrong. Your goal is to identify all of the possible problems that will need to be addressed by your design and implementation. You investigate each of these problems deeply enough to provide a useful explanation of what they mean in your design document. Then you rank these problems as risks based on a combination of severity (low, medium, high) and likelihood (doubtful, potential, definite).
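One way to sketch that ranking (the numeric scoring product below is my own simplification of the severity/likelihood grid):

```python
SEVERITY = {"low": 1, "medium": 2, "high": 3}
LIKELIHOOD = {"doubtful": 1, "potential": 2, "definite": 3}

def rank_risks(risks):
    """Order problems by severity x likelihood, highest first, so the
    design document addresses the scariest risks first."""
    return sorted(
        risks,
        key=lambda r: SEVERITY[r["severity"]] * LIKELIHOOD[r["likelihood"]],
        reverse=True,
    )
```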

Still, for all the success Microsoft has had with Office 365, the real giant of cloud computing — which is to say the future of enterprise computing — is, as is so often the case, a company no one saw coming: the same year Google decided to take on Microsoft Amazon launched Amazon Web Services. What makes AWS so compelling is the way that it reflects Amazon itself: it is built for scale and with clearly-defined and hardened interfaces. Customers — first Amazon but also companies around the world — access “primitives” that can be mixed-and-matched to build a more efficient, scalable, and secure back-end than nearly any company could build on its own.

...

Where Kubernetes differs from Borg is that it is fully portable: it runs on AWS, it runs on Azure, it runs on the Google Cloud Platform, it runs on on-premise infrastructure, you can even run it in your house. More relevantly to this article, it is the perfect antidote to AWS’ ten year head-start in infrastructure-as-a-service: while Google has made great strides in its own infrastructure offerings, the potential impact of Kubernetes specifically and container-based development broadly is to make irrelevant which infrastructure provider you use. No wonder it is one of the fastest growing open-source projects of all time: there is no lock-in.

Mapping today is dominated by data freaks, obsessed with being scientifically rigorous and statistically significant. But as a data freak I’ve come to realize that not all maps have to involve equations. I want to take a break, be a little unscientific, and put the human element back on the map. Ultimately, cities and neighborhoods are collections of people, and I wanted to map their experiences. As it turns out, these unscientific maps are just as charming, thorough and thought-provoking as any other.

And you can lose hours trying to reproduce the process that Trubetskoy must have followed, wandering around Urban Dictionary to find pages like Oakland

City east of SF Bay, aka "tha town". Separated into 3 parts (North, West, and East Oakland). There is no south. North Oakland is the hills. West Oakland has downtown, lake merritt, chinatown, and jack london square. East Oakland has the airport, coliseum, and the zoo. Deep East Oakland is where you can find the sideshows, people actin' a fool and gettin' hyphy, goin stupid doo doo dumb retarded, smokin perk and chewy, sippin' on some heem or yak, and slappin' hard in they box chevs.

Many of these jargon terms are completely unfamiliar to me. For example, Fruitvale has always been called Fruitvale to me (though I'm an oldster, not hip at all). I've certainly never heard it called East Side Oakland (ESO), though perhaps that term is describing the area where 98th meets East 14th, a bit farther away.

The city of Hayward, California. It is known as the "heart" of the bay. The city was founded by a man named William Hayward, who came to California to seek his fortune in the California Gold Rush (Began in 1848, I believe). He bought some forty acres from some Rancher, and in a few years sprouted into a town. Was at one point misspelled into "Haywood". Haystack is a slang term when refering to this city.

Before the match, I spent a fair amount of time describing Carlsen's astonishing endurance and ability to sustain his concentration over a six, seven, or even eight hour chess game.

But his skill on shorter time frames is even greater.

And, although Karjakin was every bit Carlsen's equal during the standard time control games, today was all Carlsen.

So we move on. As I said, I don't think anyone is pleased that it had to go to tie breaks, but those are the rules and that's the way the match was organized; it was not a surprise that this was a possibility.

Everybody is going to have their own opinions about the match, but overall I was pleased. It was fun chess to watch, and I can't wait for the next match! (Of course, not everyone shares my opinion.)

To probe these subtle shifts, scientists combined multiple radar scans from the Copernicus Sentinel-1 twin satellites of the same area to detect subtle surface changes – down to millimetres. The technique works well with buildings because they better reflect the radar beam.

Over the weekend, my wife and I were walking along the shore of the bay, approximately 7 miles from downtown, with a very clear view on the day after a big storm, and my wife wondered if it was possible to tell which tower was the Millennium Tower from our perspective.

We had a lot of rain in November, and the reservoirs are filling up fast. New Melones and Pine Flat are still dramatic outliers at less than 25% capacity, but the Big Three (Shasta, Oroville, and Trinity) are filling up fast. Let's go, rain!

As it was making its slow descent, Schiaparelli’s Inertial Measurement Unit (IMU) went about its business of calculating the lander’s rotation rate. For some reason, the IMU calculated a saturation-maximum period that persisted for one second longer than what would normally be expected at this stage. When the IMU sent this bogus information to the craft’s navigation system, it calculated a negative altitude. In other words, it thought the lander was below ground level.

...

Encouragingly, this behavior was replicated in computer simulations, which means mission planners stand a good chance of correcting the anomaly. The exact cause of the IMU’s miscalculation was not disclosed, but if it was tripped by some kind of mechanical problem, that would be bad news. The ESA is planning a similar mission in 2020, which doesn’t leave much time for an engineering overhaul. A software glitch, on the other hand, would likely prove to be an easier fix.

The bigger question at this point is how NATS Streaming will tackle scaling and replication (a requirement for true production-readiness in my opinion). Kafka was designed from the ground up for high scalability and availability through the use of external coordination (read ZooKeeper). Naturally, there is a lot of complexity and cost that comes with that. NATS Streaming attempts to keep NATS’ spirit of simplicity, but it’s yet to be seen how it will reconcile that with the complex nature of distributed systems. I’m excited to see where Apcera takes NATS Streaming and generally the NATS ecosystem in the future since the team has a lot of experience in this area.

We downsized the team working on core components (the transactional, distributed key-value store), composed of five engineers with the most familiarity with that part of the codebase. We even changed seating arrangements, which felt dangerous and counter-cultural, as normally we randomly distribute engineers so that project teams naturally resist balkanization.

...

Relocating team members for closer proximity felt like it meaningfully increased focus and productivity when we started. However, we ended up conducting a natural experiment on the efficacy of proximity. First two, and then three, out of the five stability team members ended up working remotely. Despite the increasing ratio of remote engineers, we did not notice an adverse impact on execution.

...

The smaller stability team instituted obsessive review and gatekeeping for changes to core components. In effect, we went from a state of significant concurrency and decentralized review to a smaller number of clearly delineated efforts and centralized review.

Somewhat counter-intuitively, the smaller team saw an increase in per-engineer pull request activity.

Concepts fight obsolescence. Even when ASP.NET inevitably dies, the concepts I've learned from programming in it for ten plus years will still be useful. Concepts have a longer shelf life than details, because details change. Languages are born and die, frameworks become unpopular overnight, companies go out of business, support will end. But the thoughts, the ideas, the best practices? They live forever.

Learn about SOLID. Learn KISS, DRY, and YAGNI. Learn how important naming is. Learn about proper spacing, functional vs object-oriented, composition vs. inheritance, polymorphism, etc. Learn soft skills like communication and estimation. Learn all the ideas that result in good code, rather than the details (syntax, limitations, environment, etc.) of the code itself. Mastering the ideas leads to your mind being able to warn you when you are writing bad code (as you will inevitably do).

What do modern applications look like? We are seeing the combination of rapid cloud based provisioning, a DevOps culture transformation, and the journey from waterfall through agile to continuous delivery product development processes give rise to a new application architecture pattern called microservices. This shares the same principles as the service oriented architecture movement from 10–15 years ago, but in those days, machines and networks were far slower, and XML/SOAP messaging standards were inefficient. The high latency and low messaging rates meant that applications ended up composed of relatively few large complex services. With much faster hardware and more efficient messaging formats, we have low latency and high messaging rates. This makes it practical to compose applications of many simple single function microservices, independently developed and continuously deployed by cloud native automation.

The AWS Well-Architected Framework documents a set of foundational questions that allow you to understand if a specific architecture aligns well with cloud best practices. The framework provides a consistent approach to evaluating systems against the qualities you expect from modern cloud-based systems, and the remediation that would be required to achieve those qualities.

Something that stood out in almost all of the presentations at DOES16 was the vast number of tools and point solutions companies are using to achieve their goals. Many speakers at some point in their presentations even listed the hodge-podge of vendors and tools they have in their toolchains. Different teams within an organization use a variety of different tools, and the resulting complexity can become overwhelming for enterprises.

Instead of racing to the bottom as the market plummets, Apple appears to be taking the “high road”, in a sense: They’re taking refuge at the high end of the market by introducing new, more expensive MacBook Pros, with a visible differentiating feature, the Touch Bar. This is known, inelegantly, as milking a declining business, although you shouldn’t expect Apple to put it that way.

Facebook has for some time allowed advertisers to create news stream ads with the option of not publishing them to the news feed, but it’s still a fairly untapped play for the moment.

By employing this tactic the advertiser mentioned above could run all four product ads as sponsored posts, target different audiences, split-test headlines and even create personalized messages for demographic and geographic targets – literally run dozens of ads all on the same day – without a single ad showing in their own news stream.

The modern era of autonomous driving began in the 1980s. The US and Germany were the two nations at the forefront in this line of research. In the US, the research was largely funded by DARPA (Defense Advanced Research Projects Agency). In Germany, large automotive companies such as Mercedes-Benz funded research. The leading projects utilized computer vision-based systems, lidar, and autonomous robotic control. The decision-making systems were essentially driven by optimizing if-then-else algorithms (e.g., optimize speed subject to not exceeding the speed limit and not hitting anything; “if it is raining, then slow down…”, “if a pedestrian approaches within 5 feet on the left, then swerve right”). In other words, the systems were algorithmic, that is, codifying an algorithm describing the connection between road conditions and decisions related to speed and steering.
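A caricature of such an if-then-else policy (all thresholds invented for illustration), showing why every condition must be anticipated and codified by hand:

```python
def rule_based_policy(speed_limit, raining, pedestrian_within_5ft_left):
    """1980s-style hand-coded decision rules: each situation the car may
    encounter needs its own explicit branch."""
    target_speed = speed_limit
    if raining:
        target_speed *= 0.8          # "if it is raining, then slow down"
    steer = "right" if pedestrian_within_5ft_left else "straight"
    return target_speed, steer
```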

However, to be able to drive on roads in unstructured and unpredictable environments, including ones where other drivers (human or autonomous) also exist, requires predicting the outcomes of a large number of possible actions. Codifying the set of possible outcomes proved too challenging. In the early 2000s, however, several groups began using primitive versions of modern machine-learning techniques. The key difference between the new method and the old method is that while the old method centered around optimizing a long list of if-then-else statements, the new method instead predicts what a human driver would do given the set of inputs (e.g., camera images, lidar information, mapping data, etc). This facilitated significant improvements in autonomous driving performance.

Whereas the utopian view has argued that blockchain technology will affect every market by reducing the need for intermediation, we argue that it is more likely to change the scope of intermediation both on the intensive margin of transactions (e.g., by reducing costs and possibly influencing market structure) and on the extensive one (e.g., by allowing for new types of marketplaces). Furthermore, for the technology to have any impact in a specific market, verification of transaction attributes (e.g., status of a payment, identity of the agents involved, etc.) by contracting third parties needs to currently be expensive; or network operators must be enjoying uncompetitive rents from their position as trusted nodes, above and beyond their added value in terms of market design.

Economically speaking, the American health care system is not built for patients, because patients aren’t the ones paying for it directly. Insurance companies are.

See, health care in the U.S. is mostly a B2B business. It is only B2C where insurance doesn't cover the patient's expenses. And even then, insurance still often ends up paying when patients can't or don't.

Thursday, November 24, 2016

You could just sort of tell, I think, that Carlsen wanted this game, badly.

On move 19, Karjakin offered an exchange of bishops, and re-took with his f-pawn, to open up the possible lanes for an attack.

But Carlsen quickly exchanged off the queens, and by move 30 the game had entered what became an extraordinarily complex endgame. Both sides had a pair of rooks and a knight, and all 16 pawns were still on the board.

But one of those pawns was doubled: Karjakin's f-pawn, offered back on move 19, became the focus of the entire game.

Carlsen maneuvered and maneuvered, patiently and carefully, taking his sweet, sweet time, as he is so willing to do.

Karjakin defended superbly, for 50 more moves, as the game stretched past the second time control, and entered its seventh hour.

And then, in a blink, it was over.

So now there are just two games to go in the match. Both sides have demonstrated they can win.

Wednesday, November 23, 2016

As we draw closer to the conclusion of the match, things have really heated up!

With his back against the wall, Carlsen is NOT going quietly.

In what appeared to be a very classic, very vanilla Ruy Lopez, Carlsen, with the black pieces, sacrificed a pawn in the opening for initiative, rapid development, and a quite threatening attack. By move 17, Karjakin's king was open and exposed, and Carlsen was lining up the big guns.

Moves 20 through 40 were, frankly, as exciting as chess ever gets, with the advantage see-sawing back and forth, pieces en prise all around, sacrifices, advanced passed pawns, each player's king being chased around the board, both barely avoiding disaster...

Then, just after both players had made the 40-move time control, all the pieces were suddenly off the board, leaving each player with just Queen and Bishop.

Karjakin had an extra pawn, but it was doubled, and Carlsen's pawns were connected, while Karjakin's were not.

Karjakin pressed hard, hard, hard for 30 more moves, but Carlsen was equally tenacious, and there was no breakthrough to be found by either player.

I'm really looking forward to the final three games. The match may have started slowly, but once blood was drawn, it's been as vibrant and vivid as I could have possibly hoped for.

But I think it is also like trying to count all the grains of sand on the beach.

There are so many.

And more keep washing up after each storm.

Trying to identify the bad guys and keep them out seems fundamentally flawed. It strikes me as somewhat analogous to the computer security debates about "white-listing" vs "black-listing". That is, you can try to enumerate all the things you don't want, but that's a long list. Perhaps better just to make a very short list of the sources you do trust.
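The white-listing vs. black-listing contrast can be shown in a few lines. The sources named below are made up for illustration; the point is how each approach fails when something new appears:

```python
# Illustrative white-list vs. black-list comparison (all names invented).

BLACKLIST = {"knownbadsite.example", "spamfarm.example"}      # the long list
WHITELIST = {"trustedpaper.example", "wireservice.example"}   # the short list

def allowed_by_blacklist(source):
    # Fails open: anything not yet enumerated as bad gets through.
    return source not in BLACKLIST

def allowed_by_whitelist(source):
    # Fails closed: anything not yet vetted is blocked.
    return source in WHITELIST

print(allowed_by_blacklist("brandnewfake.example"))   # True: the blacklist misses it
print(allowed_by_whitelist("brandnewfake.example"))   # False: the whitelist blocks it
```

The asymmetry is the whole argument: a blacklist has to keep pace with every new grain of sand washing up, while a whitelist only has to name the sources you actually trust.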

Or, even better, just to educate people about the need to "understand the context; understand the source".

Even, perhaps especially, us "simple code monkeys" who, in the end, build these algorithms and deploy them on these computers.

I had the opportunity, recently, to watch Shattered Glass, the now-15-years-old dramatization of the story of Stephen Glass and the fall of The New Republic, once perhaps the most respected magazine in all of journalism. It's not the greatest movie ever made, but it's an important story, and worth watching (or at least learning about). I thought the movie did a particularly good job of showing how so many different people were complicit, in so many different ways, in what happened.

I'm not sure where the answer lies.

But I'm very happy that the discussion is considerably more lively than it has been.