Saturday, January 30, 2016

As January draws to a close, the effects of the month's rains and snows are clear in these charts of the water levels for the 3 massive reservoirs in far northern California: Lake Shasta, Trinity Lake, and Lake Oroville.

Check out those dramatic changes in just the last 2 weeks! Trinity Lake is now 25% full, Lake Oroville is now 40% full, and Lake Shasta is now 50% full!

And it's not just been rain; the snowpack report is solid, too. The Northern Sierra snowpack, which is actually the most important, is at 124% of normal; the Central Sierra snowpack is also above normal; and even the Southern Sierra snowpack is close, at 93% of normal.

We all know that this year's El Nino has been compared to 1997-1998 in strength, and as a benchmark of what it may bring. As recently as last week, I documented that this rainfall season has so far not been typical of strong El Nino years.

This winter was supposed to be dry in the Northwest, but precipitation amounts are 30 to 60 percent above normal. It was supposed to be a wet year in Southern California, but rainfall so far has been 30 to 40 percent below normal.

30 to 40 percent below normal, of course, is a LONG way from the 85 percent below normal that we saw the last two years. But, it's certainly not been a record-breaking wet year, so far.

Easy to spot are the very warm waters around the equatorial Eastern Pacific with our current strong El Nino. There is also a core of warmer-than-normal water stretching from Asia to the West Coast of the U.S. between 20 and 40 degrees north. In between, however, is a core of cooler-than-normal water running across much of the Pacific between 10 and 20 degrees north.

Well, as the great Yogi Berra noted, "It's tough to make predictions, especially about the future."

But, for now at least, the creeks are rising, and the snow is falling.

Thursday, January 28, 2016

Over the last few years I’ve written over forty blog posts that discuss ETW/xperf profiling. I’ve done this because it’s one of the best profilers I’ve ever used, and it’s been woefully undersold and under-documented by Microsoft. My goal has been to let people know about this tool, to make it easier for developers and users to record ETW traces, and to make it as easy as possible for developers to analyze ETW traces.

If your Windows computer is running slowly – if a program takes a long time to launch, if a game has a poor frame rate, or if an idle application uses too much CPU time – the best way to investigate is to record an Event Tracing for Windows (ETW) trace. An ETW trace records a wealth of information (CPU sampling, context switches, disk I/O, custom data, and much more) that allows most performance problems to be understood by a trained expert. If you’re not a trained expert then you can still record an ETW trace, and then share it with somebody who is.
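For the simplest capture path, Windows ships a command-line recorder, Windows Performance Recorder, that can do this without any extra tooling. A minimal sketch (the trace file name here is just an example, and this must run from an elevated prompt):

```
rem Start recording with the general-purpose profile
wpr -start GeneralProfile

rem ... reproduce the slowdown here ...

rem Stop recording and save the trace for later analysis
wpr -stop slow_launch.etl
```

The resulting .etl file is exactly the kind of trace that can be shared with, and analyzed by, a trained expert.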

Monday, January 25, 2016

The firm is a fundamental economic unit of contemporary human societies. Studies on the general quantitative and statistical character of firms have produced mixed results regarding their lifespans and mortality. We examine a comprehensive database of more than 25 000 publicly traded North American companies, from 1950 to 2009, to derive the statistics of firm lifespans. Based on detailed survival analysis, we show that the mortality of publicly traded companies manifests an approximately constant hazard rate over long periods of observation. This regularity indicates that mortality rates are independent of a company's age. We show that the typical half-life of a publicly traded company is about a decade, regardless of business sector. Our results shed new light on the dynamics of births and deaths of publicly traded companies and identify some of the necessary ingredients of a general theory of firms.
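The arithmetic behind that claim is worth spelling out (this calculation is mine, not the paper's): a constant, age-independent hazard rate h implies exponential survival S(t) = exp(-h t), so the half-life t_half = ln(2)/h is the same no matter how old a firm already is.

```python
import math

# Constant hazard rate => exponential survival => age-independent half-life.
def half_life(hazard_rate):
    """Half-life (years) implied by a constant hazard rate (per year)."""
    return math.log(2) / hazard_rate

# Back out the hazard implied by the paper's ~10-year half-life: about 6.9%/yr.
h = math.log(2) / 10
print(round(h, 4))                         # → 0.0693

# Memorylessness: the chance of surviving the *next* decade is 50% at any age.
survival_next_decade = math.exp(-h * 10)
print(round(survival_next_decade, 3))      # → 0.5
```

This is the "memoryless" property of the exponential distribution, and it is exactly what "mortality rates are independent of a company's age" means.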

Hmmm... So, the typical company seems to survive for about a decade, more or less?

That does indeed seem to match my experience, though I confess I thought it was the extreme rate of technological change in my little corner of the industrial world that had a lot to do with that.

I found this paragraph from the paper particularly interesting:

A third perspective suggests that mortality rates increase as companies age. This idea is based upon two related concepts: the first is liability of senescence, the idea that as companies age, they accumulate rules and stagnating relationships with consumers and input markets that render them less agile and that re-configuration is increasingly expensive [32]. Arguing instead for a liability of obsolescence, Sorenson & Stuart [33] suggest that environmental requirements change over time and that, although firms may improve in competence and efficiency with age by becoming more specialized, these specific adaptations also increase the companies’ risk to new kinds of external shocks that will inevitably beset them.

In my own experience, I've definitely seen that a company's relationships with its existing customers can make it a challenge to behave in different ways with new customers: "that's not the way we do things around here".

The net effect is that a company finds it hard to change direction, or even to evolve.

Friday, January 22, 2016

Over the first three quarters of 2015, Uber lost $1.7 billion on $1.2 billion in revenue. For perspective, during Amazon.com’s worst-ever four quarters, in 2000, it lost $1.4 billion on $2.8 billion in revenue. CEO Jeff Bezos responded by firing more than 15 percent of his workforce.

Offering shares to retail investors with no financial disclosure is nothing new! Companies used to do it all the time! Then there was a Great Depression, and financials became rather strongly expected (by which I mean, legally required). In the subsequent decades, more expectations grew up, many of them enshrined in law, expectations about things like shareholder rights and corporate formalities. The public corporation was standardized around a model that worked pretty well. And pretty much the only way to be a big company was to be a public company, so that standard model imposed itself broadly.

But in recent years it's become much easier to get pretty much whichever advantages of the public corporation you want -- bigness, name recognition, investment-banker attention, regular access to massive amounts of capital from mutual funds and retail investors, liquidity for employees and early investors -- without the things that you don't want -- activists, short sellers, volatility and, sure, financial disclosure. The rules that everyone thought were binding aren't binding anymore. You want to raise billions of dollars from investors but keep control of your company, limit financial disclosure and have approval rights over who gets to buy? Sure, you can do that now.

Saturday, January 16, 2016

Why has Bitcoin failed? It has failed because the community has failed. What was meant to be a new, decentralised form of money that lacked “systemically important institutions” and “too big to fail” has become something even worse: a system completely controlled by just a handful of people. Worse still, the network is on the brink of technical collapse. The mechanisms that should have prevented this outcome have broken down, and as a result there’s no longer much reason to think Bitcoin can actually be better than the existing financial system.

In my simple mind I liken it to this: should Bitcoin be Gold, or should Bitcoin be Visa? If it is Gold, it’s a store of wealth and something to peg value to. If it is Visa, then it’s a transactional network that can move wealth around the globe in a nanosecond.

The dispute — which grew out of a question about the number of transactions the Bitcoin network can handle — may sound like something of interest only to the most die-hard techies. But it has exposed fundamental differences about the basic aims of the Bitcoin project, and how online communities should be governed. The two camps have broadly painted each other as, on one side, populists who are focused on expanding Bitcoin’s commercial potential and, on the other side, elitists who are more concerned with protecting its status as a radical challenger to existing currencies.

Clever engineers will find ways to work around the limit; whether that is ‘extension blocks’, the lightning network, or a sidechain that everybody moves their coins to doesn’t really matter. I’d prefer a nice, simple, clean solution, but I’m old enough to know that most of the world’s great technologies are built on top of horrifying piles of legacy cruft, and they work just fine pretty much all of the time.

This block size debate ultimately comes down to competing economic and system survival theories. One theory is that a free market range exists for block size, in absence of a hard limit. Another theory is that a hard limit is required to forcibly constrain the free market. Stalling on core block size changes the former to the latter — uncharted territory for bitcoin.

These developers claim that the “schism” is more like two rogue developers against everyone else (and then there are a few out there, like Bitcoin core developer Jeff Garzik, who seem in the middle). Nick Szabo, who some suspect is Bitcoin's original creator, called it a “reckless act to be performing on a $4 billion system,” sided with a more conservative fork, and posted a photo of a heartbreaking space shuttle disaster with the caption, “What happens when the managers and investors ignore the engineers and scientists....”

The operator of a full node, and only the operator of a full node, decides which consensus rules that full node applies. As such, full node operators decide what kind of transactions they want to accept and, therefore, what kind of Bitcoin they want to use. Full nodes grant individual autonomy to their operators.

But operating a full node is not only empowering from an individual perspective; it’s also empowering in a more democratic sense, as full nodes carry out social influence through network effects. Full node operators are incentivized to apply the same consensus rules as other full node operators, since that allows them to transact. So, as a full node operator decides to use a specific set of consensus rules, the incentive to adhere to these consensus rules becomes stronger for everyone else, too.

This social influence can currently be witnessed by the fact that some full node operators would individually prefer to increase the block-size limit to produce and accept bigger blocks – but don’t.
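That incentive structure can be illustrated with a toy sketch (this is emphatically not the real protocol; the node model and numbers are mine): a node that unilaterally relaxes its block-size rule forks itself away from everyone it wants to transact with.

```python
# Toy model of consensus rules: each full node applies its own block-size
# rule and simply refuses blocks that violate it.
class FullNode:
    def __init__(self, max_block_size=1_000_000):  # the historical 1 MB limit
        self.max_block_size = max_block_size
        self.chain = []

    def accept(self, block_size):
        if block_size > self.max_block_size:
            return False               # violates this node's consensus rules
        self.chain.append(block_size)
        return True

# Nine nodes keep the 1 MB rule; one node would privately prefer 2 MB blocks.
nodes = [FullNode() for _ in range(9)] + [FullNode(max_block_size=2_000_000)]

# When a 2 MB block appears, only the lone permissive node accepts it --
# cutting itself off from the majority it needs in order to transact.
accepted = sum(node.accept(2_000_000) for node in nodes)
print(accepted)   # → 1
```

Hence the observed behavior: the operator who would prefer bigger blocks still enforces the smaller limit, because accepting blocks nobody else accepts buys it nothing.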

Friday, January 15, 2016

The National Park Service said today it will rename many well-known spots in Yosemite, as part of an ongoing legal dispute with an outgoing concessionaire that has trademarked many names in the world-famous park.

“While it is unfortunate that we must take this action, changing the names of these facilities will help us provide seamless service to the American public during the transition to the new concessioner,” Yosemite National Park Superintendent Don Neubacher said today.

Among the changes: Yosemite Lodge at the Falls will become Yosemite Valley Lodge; The Ahwahnee Hotel will become the Majestic Yosemite Hotel; Curry Village will become Half Dome Village; Wawona Hotel will become Big Trees Lodge; and Badger Pass Ski Area will become Yosemite Ski & Snowboard Area.

So, here's the backstory: part of the deal between the park and Delaware North in 1993 was that the company had to buy the assets of the previous operator, the Yosemite Park & Curry Company. That contract also noted that any new concession firm that took over from Delaware North would then have to purchase the assets from Delaware North.

Java Serialization is insecure, and is deeply intertwingled into Java monitoring (JMX) and remoting (RMI). The assumption was that placing JMX/RMI servers behind a firewall was sufficient protection, but attackers use a technique known as pivoting or island hopping to compromise a host and send attacks through an established and trusted channel. SSL/TLS is not a protection against pivoting.

This means that if a compromised host can send a serialized object to your JVM, your JVM could also be compromised, or at least suffer a denial of service attack. And because serialization is so intertwingled with Java, you may be using serialization without realizing it, in an underlying library that you cannot modify.
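One widely used mitigation is "look-ahead" deserialization: inspect each class descriptor before the object is instantiated, and reject anything not on an explicit whitelist. A minimal sketch (the class name and the whitelist contents here are mine, purely illustrative):

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.InvalidClassException;
import java.io.ObjectInputStream;
import java.io.ObjectStreamClass;
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

// A look-ahead ObjectInputStream: resolveClass() is called for every class
// descriptor in the stream *before* that class is instantiated, so rejecting
// it here stops a gadget chain before any attacker-controlled code can run.
class WhitelistObjectInputStream extends ObjectInputStream {
    private static final Set<String> ALLOWED = new HashSet<>(Arrays.asList(
            "java.util.ArrayList", "java.lang.Integer", "java.lang.Number"));

    WhitelistObjectInputStream(InputStream in) throws IOException {
        super(in);
    }

    @Override
    protected Class<?> resolveClass(ObjectStreamClass desc)
            throws IOException, ClassNotFoundException {
        if (!ALLOWED.contains(desc.getName())) {
            throw new InvalidClassException(
                    "Unexpected class in stream: " + desc.getName());
        }
        return super.resolveClass(desc);
    }
}
```

This doesn't fix the libraries that deserialize on your behalf, but wherever you do control the ObjectInputStream, it sharply narrows the attack surface.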

One of the things that stands out in the Java Serialization exploit is that once a server side Java application is compromised, the next step is to gain shell access on the host machine. This is known as a Remote Code Execution, or RCE for short.

The interesting thing is that Java has had a way to restrict execution and prevent RCE almost since Java 1.1: the SecurityManager. With the SecurityManager enabled, Java code operates inside a far more secure sandbox that prevents RCE.

Most sandbox mechanisms involving Java’s SecurityManager do not contain mechanisms to prevent the SecurityManager itself from being disabled, and are therefore “defenseless” against malicious code. Use a SecurityManager and a security policy as a system property on startup to cover the entire JVM, or use an “orthodox” sandbox as described below.
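A minimal sketch of that startup configuration (the file name and the single granted permission are mine, purely illustrative):

```
// example.policy -- grant only what the application actually needs; anything
// not granted here, including permission to replace the SecurityManager,
// is denied.
grant {
    permission java.util.PropertyPermission "*", "read";
};
```

```
java -Djava.security.manager -Djava.security.policy==example.policy -jar app.jar
```

Note the doubled ‘=’ in -Djava.security.policy==: it tells the JVM to use this policy instead of, rather than in addition to, the default policy, which is what makes the sandbox cover the entire JVM from startup.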

None of these articles are easy reading: they are dense and detailed, and astonishingly replete with links to yet more information.

But they are very interesting, and very informative, and you shouldn't call yourself a Java systems programmer without being extremely conversant with the topics mentioned here.

Tuesday, January 12, 2016

The WGA’s Videogame Writing Award honors the best qualifying script from a videogame published in the previous year.

...

The Writers Guild Awards honor outstanding writing in film, television, new media, videogames, news, radio, promotional, and graphic animation categories. Competitive awards will be presented at both the Los Angeles ceremony at the Hyatt Regency Century Plaza and the New York ceremony at the Edison Ballroom. The Los Angeles and New York ceremonies take place concurrently on February 13, 2016.

Assassin’s Creed Syndicate, Pillars of Eternity, Rise of the Tomb Raider and The Witcher III: Wild Hunt have been nominated for the WGA’s Videogame Writing Award.

Well, I can vouch for the fact that two of them, for sure, have extremely good scriptwriters. In fact, the writing in Pillars of Eternity is so good that I might actually have to admit there's possibly a category where some other video game might just slightly have outdone The Witcher.

As far as I can ascertain, RethinkDB’s safety claims are accurate. You can lose updates if you write with anything less than majority, and see assorted read anomalies with single or outdated reads, but majority/majority appears linearizable.

Rethink’s defaults prevent lost updates (offering linearizable writes, compare-and-set, etc), but do allow dirty and stale reads. In many cases this is a fine tradeoff to make, and significantly improves read latency. On the other hand, dirty and stale reads create the potential for lost updates in non-transactional read-modify-write cycles. If one, say, renders a web page for a user based on dirty reads, the user could take action based on that invalid view of the world, and cause invalid data to be written back to the database. Similarly, programs which hand off state to one another through RethinkDB could lose or corrupt state by allowing stale reads. Beware of sidechannels.
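That read-modify-write hazard is easy to reproduce in a toy simulation (plain Python dictionaries standing in for a primary and a lagging replica; no RethinkDB involved):

```python
# Two clients each try to increment a counter once. Client B reads from a
# stale replica that has not yet seen A's write, so B's write clobbers it.
primary = {"counter": 0}
stale_replica = {"counter": 0}    # replication lag: still shows the old value

# Client A: read-modify-write through the primary.
a = primary["counter"]            # reads 0
primary["counter"] = a + 1        # writes 1 (replica not yet caught up)

# Client B: reads from the stale replica, then writes through the primary.
b = stale_replica["counter"]      # reads 0 -- stale!
primary["counter"] = b + 1        # writes 1, silently discarding A's update

print(primary["counter"])         # → 1, after two increments: a lost update
```

The database never misbehaved here; each individual read and write succeeded. The update was lost purely because the application composed a stale read with a write, which is exactly why majority reads (or avoiding read-modify-write cycles) matter.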

Where these anomalies matter, RethinkDB users should use majority reads. There is no significant availability impact to choosing majority reads, though latencies rise significantly. Conversely, if read availability, latency, or throughput are paramount, you can use outdated reads with essentially the same safety guarantees as single–though you’ll likely see continuous, rather than occasional, read anomalies.

Although this sounds like nuanced, even faint praise, when it comes to Jepsen this is as strong an endorsement as it (and Kingsbury) has ever delivered, so this is certainly interesting.

Somewhat as an aside, while reading Kingsbury's article, I was interested to read about Gavin Lowe's Testing for Linearizability work, which I hadn't seen before. This, too, will be fun to dig into and absorb.

These are MONSTER reservoirs. Together, they have a storage capacity of almost 11 MILLION acre feet.

And, currently, they contain less than 3 million AF, so just those 3 reservoirs alone are hoping to collect 8 million acre feet of water, if they can.

And that's true across the state. The reservoirs that most affect my daily life are the ones in the Mokelumne Watershed, such as Camanche Reservoir, as that's where East Bay Municipal Utility District draws its supplies.

Sunday, January 3, 2016

Even though the San Jose Mercury-News annoyingly keeps mis-spelling the acronym of the shipping company in their article, they still ran a pretty nifty story about the new Benjamin Franklin:

The future of Pacific shipping loomed large on the bay Thursday as a giant container ship docked at the Port of Oakland.

At 1,310 feet in length, the CMA-CMG Benjamin Franklin would stand 50 feet higher than the Empire State Building. It can carry 18,000 20-foot-long cargo containers. Most such ships coming into U.S. ports carry 14,000 containers.

The ship, which came up the coast from Los Angeles, is the world's tenth largest and represents a drive to economize in shipping by building ever bigger, more efficient cargo carriers.

The U.S. Army Corps of Engineers finished dredging 870 acres of Bay floor last month. The work cleaned up a 50-foot-deep channel leading massive container ships into 50-foot-deep berths at the Port. That’s the desired clearance for thousand-foot-long vessels that could carry up to 14,000 20-foot cargo containers.

14,000; 18,000; what's the difference? Any way you look at it, these ships are enormous.

Indeed, the port's press release gives its ecological responsibilities almost equal weight with its commercial responsibilities:

The Corps’ challenge: finding beneficial use of the residue -- river-borne sediment and shifting sands that sweep in with the tide. The answer in this case: the Montezuma Wetlands Restoration Project. Barges transported all of the dredged material 52 nautical miles northeast to this 2,400-acre marsh on Suisun Bay. Under regulations governing the Port, only 80% must actually be reclaimed.

Privately owned Montezuma Wetlands LLC is overseeing a project to restore the marsh with 1.75 million cubic yards of fill. The goal is to restore the site’s original surface height. The Montezuma Wetlands have subsided 10 feet since being diked and drained a century ago. With a fresh topcoat, the wetlands should provide a more inviting habitat for shorebirds and other wildlife.

It's not easy to be a coastal sea bird on the popular West Coast of North America.

If you haven't already done so, you might enjoy watching Pelican Dreams, a nicely-presented documentary about the complex life of the California Brown Pelican. It's a beautiful movie; I really enjoyed it.

We get lots of pelicans around our neighborhood, at the right times of the year, and this is one such time. On a recent walk around town, we saw not only pelicans but more than a dozen other migratory sea birds, taking the chance to rest in these gentle waters before resuming their task.

And as the shifting sands continue to sweep in and out with the tide, I'm sure the balance will swing back and forth as well.

But on a beautiful winter's afternoon as we walked along the bluffs at Point Pinole Regional Park, watching a White-Tailed Kite soar and swoop over the meadows, we talked about how beautiful this world was, and how lucky we were to live in it.