Monday, August 31, 2015

RED is full of artists, wild dreamers and people crazy about what they do (and sometimes just plain crazy). We lose sleep over that particular colour the sun has when it sets over Velen, and argue over arranging the furniture in a house the majority of gamers will probably never see. We’re not the kind of people who are easily satisfied and we always strive for more. I’d like you to know that.

We host in our own datacenters. We actually have an amazing provisioning story: we can basically provision hardware as if it were the cloud. We have a really small, but incredibly dedicated, physical infrastructure team, and they do phenomenal work in providing the services the rest of us build on.

If I need a new host, I can basically tell our chatbot, Hubot, that I need X hosts of this class on these chassis, and it will just build and deploy them in minutes. We have this incredibly flat, flexible, yet physical infrastructure. As someone who consumes that infrastructure, it's phenomenal, and watching it work is brilliant.

A virtual or a physical machine is neither durable nor cloud-native. Neither is a container. But a cluster of Kubernetes pods is a durable and declarative abstraction. To a lesser extent, a Marathon managed cluster of containers is also durable and declarative.

A name collision occurs when an attempt to resolve a name used in a private name space (e.g. under a non-delegated Top-Level Domain, or a short, unqualified name) results in a query to the public Domain Name System (DNS). When the administrative boundaries of private and public namespaces overlap, name resolution may yield unintended or harmful results.

The Sophia database and its architecture were born out of research into, and a reconsideration of, the primary algorithmic constraints of increasingly popular log-based data structures, such as the LSM-tree, B-tree, etc.

Most log-based databases organize their file storage as a collection of sorted files that are periodically merged. Thus, without some key-filtering scheme (such as a Bloom filter), finding a single key requires the database to probe every file, which can take up to O(files_count * log(file_key_count)) in the worst case. Range scans are even worse off, because a Bloom filter cannot exploit key order.

Sophia was designed to improve this situation by providing faster reads while still benefiting from an append-only design.
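To make that lookup cost concrete, here is a minimal sketch (not Sophia's code; the runs and keys below are invented) of a point lookup across several immutable sorted runs. Without a Bloom filter, every run must be binary-searched, giving the O(files_count * log(file_key_count)) worst case described above:

```python
import bisect

# Toy model of an append-only store: several immutable sorted runs,
# newest first, as produced by periodic merges.
def make_run(items):
    items = sorted(items)
    return [k for k, _ in items], [v for _, v in items]

runs = [
    make_run([(i, "new%d" % i) for i in range(0, 90, 3)]),   # newest
    make_run([(i, "mid%d" % i) for i in range(0, 90, 2)]),
    make_run([(i, "old%d" % i) for i in range(0, 90, 1)]),   # oldest
]

def get(key):
    # Without a Bloom filter every run must be probed, so a point
    # lookup costs O(run_count * log(keys_per_run)) comparisons.
    for keys, vals in runs:            # newest run wins on duplicates
        i = bisect.bisect_left(keys, key)
        if i < len(keys) and keys[i] == key:
            return vals[i]
    return None
```

A Bloom filter per run would let most of these probes be skipped for point lookups, but, as noted above, it cannot help a range scan, which must still merge across all runs in key order.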

This June, we released our latest improvements. We started doing dynamic pricing—that is, offering new price tips daily based on changing market conditions. We tweaked our general pricing algorithms to consider some unusual, even surprising characteristics of listings. And we’ve added what we think is a unique approach to machine learning that lets our system not only learn from its own experience but also take advantage of a little human intuition when necessary.

The same goes for BitTorrent. You can only download chunks from peers who have those chunks. That's the current problem with the AshMad dump: everyone combined has only 85% of all possible chunks. The remaining 15% of the chunks haven't been uploaded to the swarm yet. Nobody has a complete copy. The original tracker is seeding at a rate of 37 kilobytes/second, handing off the next chunk to a random person in the swarm, who quickly exchanges it with everyone else in the swarm.

We are deploying Remote Direct Memory Access (RDMA) technology in Microsoft’s datacenters to provide ultra-low latency and high throughput to applications, with very low CPU overhead. With RDMA, network interface cards (NICs) transfer data in and out of pre-registered memory buffers at both end hosts. The networking protocol is implemented entirely on the NICs, bypassing the host networking stack. The bypass significantly reduces CPU overhead and overall latency. To simplify design and implementation, the protocol assumes a lossless networking fabric.

We start by using fiber maps provided by tier-1 ISPs and major cable providers to construct a map of the long-haul US fiber-optic infrastructure. We also rely on previously under-utilized data sources in the form of public records from federal, state, and municipal agencies to improve the fidelity of our map. We quantify the resulting map’s connectivity characteristics and confirm a clear correspondence between long-haul fiber-optic, roadway, and railway infrastructures. Next, we examine the prevalence of high-risk links by mapping end-to-end paths resulting from large-scale traceroute campaigns onto our fiber-optic infrastructure map.

When we set out to build X-Stream and subsequent systems our aim was to really provide a great system and computation model for implementing such algorithms. The fundamental *systems* takeaway from the paper was that doing sequential scanning is a great way to deal with graphs because the gap between sequential and random access bandwidth means that you still win over sorting the data and then doing random access to fetch edges attached to active vertices.

Chaos is a new scalable system, due to appear at SOSP 2015. It isn't yet public, but my understanding is that it is basically a beast at sequentially streaming through edge data, across as many machines as you can swing.

Order was the subject of the recent blog post that stirred up this brouhaha: can something as simple as sorting empower the lowly laptop to compete with the scalable systems? Order isn't actually the name of a system, but it should be.

During the same period, the phenomenon of SOA was well popularized, but there was no clear and distinct best practice for a concrete implementation. Many implementations did succeed, but some were very difficult and failed. Some had services that were just too large and monolithic, while others had so many smallish services (almost microservice-like) that it became difficult to achieve good performance. The concept of SOA was there, but designers and implementers failed to understand the full lifecycle of a service and the impact of its granularity and scalability on other services, and therefore paid a huge price during implementation.

What Gogo does in the sky is, indeed, different from what wireless companies do on terra firma. It uses an air-to-ground system that functions similarly to traditional cell service, but its radio towers point up, not down. Gogo’s towers are anywhere from 50 to 200 feet tall and can be located in rather remote locations, such as atop peaks in the Rocky Mountains or deep in the Alaskan tundra. The tower signal is received by a device on the plane’s belly that looks a bit like those antennas you used to see on stretch limos. The signal is routed to an onboard server about the size of an old-fashioned tower PC and then continues to the cabin.

Friday, August 28, 2015

In April, we accepted 1,051 university students from 73 countries. These students wrote code for 137 mentoring organizations. We also had 1,918 mentors from 70 countries help them out.

Unfortunately, I haven't kept close enough records, but I believe this was my 7th year of participating in GSoC, although one year doesn't count because I messed up the paperwork.

This year, Abhinav Gupta and I worked together on some very interesting and complex projects in Derby. Notably:

We addressed several security vulnerabilities in the Derby XML processing logic, which could be exploited by malware to attempt information disclosure attacks on computers running Derby software. (DERBY-6807)

We re-designed and re-implemented the data structures and algorithms which Derby uses to track which columns in which database tables are referenced by the database triggers that are in effect. This became more complex recently when Derby added support for trigger "WHEN clauses", which introduce new opportunities for database table references to need to be resolved during trigger execution. (DERBY-6783)

We re-factored the DRDA exception-passing logic in the Derby client-server libraries so that client and server code could cooperate on the encoding of exceptions without needing to duplicate code in the underlying code base. As a result, we added the ability for Derby to throw Derby-specific sub-classes of SQLException with additional information available for user applications to access. (DERBY-6773)

I really enjoyed working with Abhinav on these projects, and I hope he enjoyed learning more about Derby and more about the Open Source development process.

The optimization above relied on the property that writing a byte and then reading it back produces the byte originally written. But this doesn't hold for video memory, because of the weird way video memory works. When the decompression engine tried to read what it thought was the uncompressed data, it was actually asking the video controller to perform some strange operations, corrupting both the decompressed data and the video data.

If you have 16 colors, then you need four bits per pixel. You would think that the encoding would have each byte of video memory encode two pixels, one in the bottom four bits and one in the top four. But for technical reasons, the structure of video memory was not that simple.
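As a concrete illustration of that hypothetical "simple" encoding (which, as the text notes, is not how the video memory actually worked), here is a sketch that packs two 4-bit pixels per byte; putting the first pixel of each pair in the low nibble is an assumption of this sketch:

```python
def pack(pixels):
    """Pack 4-bit pixels two per byte: first pixel of each pair in the
    low nibble, second in the high nibble."""
    assert len(pixels) % 2 == 0 and all(0 <= p < 16 for p in pixels)
    return bytes((hi << 4) | lo for lo, hi in zip(pixels[::2], pixels[1::2]))

def unpack(data):
    """Recover the pixel list from the packed bytes."""
    out = []
    for b in data:
        out.append(b & 0x0F)   # low nibble: first pixel of the pair
        out.append(b >> 4)     # high nibble: second pixel of the pair
    return out

data = pack([1, 15, 0, 7])     # -> bytes [0xF1, 0x70]
```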

Tuesday, August 25, 2015

One of the more subtle underlying issues with the rise of Uber is the company’s slow siphoning of the political will to fix existing—or build new—public transit infrastructure in major cities. In Affluence and Influence: Economic Inequality and Political Power in America, Princeton Professor of Politics Martin Gilens shows that—as he put it in an article with Northwestern Professor of Decision Making Benjamin Page—“economic elites and organized groups representing business interests have substantial independent impacts on U.S. government policy, while mass-based interest groups and average citizens have little or no independent influence.” As the wealthy—and, as the prices of Uber and Lyft fall, the slightly less so—essentially remove themselves from the problems of existing mass transit infrastructure with Uber and other services, the urgency to improve or add to it diminishes. The people left riding public transit become, increasingly, the ones with little or no political weight to demand improvements to the system.

This week in San Francisco, Uber took a first step toward realizing the vision that Kalanick described. The ride-hail company began experimenting with a new ride option called Smart Routes. The idea is drivers will be able to both pick up and drop off passengers along a specific route, which in turn allows them to quickly pick up their next passenger. For now the company is experimenting with only two routes: Fillmore Street between Haight and Bay, and Valencia Street between 15th and 26th.

Uber is setting up a new self-driving car project at the University of Arizona, according to an email sent out today to university employees. The new project will focus on self-driving car technology, particularly the mapping and optics challenges involved in developing a fully autonomous vehicle. The news comes just months after a major hiring push for Uber's Pittsburgh center, which many complained had hired so many experts away from the local robotics lab that they had effectively gutted competing projects.

On November 13 and 14, the New School in New York City will host a coming-out party for the cooperative Internet, built of platforms owned and governed by the people who rely on them. The program will include discussion sessions, screenings, monologues, legal hacks, workshops, and dialogues, as well as a showcase of projects, both conceptual and actual, under the purview of celebrity judges. We’ll learn from coders and worker cooperatives, scholars and designers. Together, we’ll put their lessons to work as we work toward usable apps and structural economic change.

Ride hailing companies continue to face pressure from courts and politicians who say drivers should be treated as employees rather than independent contractors. Labor unions are pushing this view, while ignoring that many ride hailing drivers are drawn to the flexibility of being independent contractors. (Meanwhile, taxicab drivers in many cities are also considered independent contractors, a fact that is rarely mentioned in these debates.)

But those days can wait, because next year Marcin Iwiński will be 40 years old, and it will be 20 years since his CD Projekt adventure began. He dared in those car parks all those years ago, and he has achieved so much. He did not do it alone, he is at pains to point out - at every twist and turn he had help, be it from Michal Kiciński or his brother Adam Kiciński, or Piotr Nielubowicz or Adam Badowski. Without them and many more he wouldn't be here today, sitting before me, wearing a blue hoodie and jeans and a relaxed stubbly smile, surrounded by a company not only continuing to set an example for Poland, but now the wider world as well.

He pays attention to what The Witcher fans are saying, and the number-one concern he's seen about The Witcher 3 is that fans think the traditionally tight stories of the series will be sacrificed to fit an open world.

Not so. "We don't want to make any compromises in storytelling," he told me. "We simply needed to come up with a larger-scale story. That's it. The world is bigger so we need to fill it with good stories.

As if the Polish Prime Minister wasn't enough, day two had begun with the CD Projekt board having presidential breakfast with Bronislaw Komorowski - remarkable, given that I don't believe Adam Badowski has slept just yet. He disappears for a lie down later when a procession of models and cosplayers from last night's festivities work their way through the office to the accompaniment of drums, delivering invitations to everyone for next week's party. Some 250 people, plus partners, will get together and celebrate their collective achievement. "This will be a time for emotions," Iwiński says.

Sunday, August 23, 2015

One of the things I find to be curious about these failure modes is that when I talked about what I found with other folks, at least one person told me that each process issue I found was obvious. But these “obvious” things still cause a lot of failures. In one case, someone told me that what I was telling them was obvious at pretty much the same time their company was having a global outage of a multi-billion dollar service, caused by the exact thing we were talking about. Just because something is obvious doesn’t mean it’s being done.

In this simple example, I end up with many possibilities. But a real query can have other relational operators like OUTER JOIN, CROSS JOIN, GROUP BY, ORDER BY, PROJECTION, UNION, INTERSECT, DISTINCT … which means even more possibilities.

So, how does a database do it?

Dynamic programming, greedy algorithms, and heuristics

A relational database tries the multiple approaches I’ve just described. The real job of an optimizer is to find a good solution in a limited amount of time.

Most of the time an optimizer doesn’t find the best solution but a “good” one.

For small queries, doing a brute force approach is possible. But there is a way to avoid unnecessary computations so that even medium queries can use the brute force approach. This is called dynamic programming.
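As a sketch of the idea (not any real optimizer's code; the cost model and cardinalities below are invented), here is Selinger-style dynamic programming over left-deep join orders: instead of enumerating all n! orders, it memoizes the best plan for every subset of tables and extends it one table at a time:

```python
from itertools import combinations

def best_join_order(tables, card, join_cost):
    """tables: list of table names; card[t]: row estimate for t;
    join_cost(left_set, right_table): cost of joining them.
    Returns (total_cost, order) minimizing cost, reusing the best
    plan for every subset instead of re-enumerating all n! orders."""
    best = {frozenset([t]): (0, (t,)) for t in tables}
    for size in range(2, len(tables) + 1):
        for subset in map(frozenset, combinations(tables, size)):
            plans = []
            for t in subset:                     # t is the last table joined
                rest = subset - {t}
                cost, order = best[rest]         # reuse memoized sub-plan
                plans.append((cost + join_cost(rest, t), order + (t,)))
            best[subset] = min(plans)
    return best[frozenset(tables)]

# Invented statistics and a naive cost model for illustration:
card = {"a": 10, "b": 1000, "c": 100}

def join_cost(rest, t):
    left = 1
    for r in rest:
        left *= card[r]                          # size of left input
    return left * card[t]                        # nested-loop-ish cost

cost, order = best_join_order(["a", "b", "c"], card, join_cost)
# Joining the small tables first ("a" then "c" then "b") wins.
```

A real optimizer prunes far more aggressively and falls back to greedy or randomized search once the table count makes even this subset enumeration too expensive.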

For those who haven't kept up with bcache, the bcache codebase has been evolving/metastasizing into a full-blown, general-purpose POSIX filesystem - a modern COW filesystem with checksumming, compression, multiple devices, caching, and eventually snapshots and all kinds of other nifty features.

At a high level, bcache's btree is a copy on write b+ tree. The main difference between bcache's b+ tree and others is the nodes are very large (256k is typical) and log structured. Like other COW b+ trees, updating a node may require recursively rewriting every node up to the root; however, most updates (to both leaf nodes and interior nodes) can be done with only an append, until we've written to the full amount of space we originally reserved for the node.
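A toy model of that append-until-full behavior might look like the following (an illustration of the idea only, not bcache's actual on-disk format; sizes and names are invented):

```python
class LogStructuredNode:
    """Toy model of the append-until-full trick: a node has a fixed
    space reservation; updates are appended (later entries shadow
    earlier ones), and the node is rewritten only when the reserved
    space is exhausted.  Entries stand in for bytes here; bcache's
    real nodes are ~256k and live on disk."""
    def __init__(self, reserved_entries=8):
        self.reserved = reserved_entries
        self.log = []          # appended (key, value) updates
        self.rewrites = 0      # times the whole node had to be COW'd

    def insert(self, key, value):
        if len(self.log) == self.reserved:
            self._rewrite()    # in the real tree this allocates a new
                               # node and may recursively update
                               # parents all the way up to the root
        self.log.append((key, value))

    def _rewrite(self):
        merged = dict(self.log)            # last write wins
        self.log = sorted(merged.items())  # compacted, sorted copy
        self.rewrites += 1
        # (A real tree would split the node if compaction alone
        # didn't free enough space.)

    def lookup(self, key):
        for k, v in reversed(self.log):    # newest entry wins
            if k == key:
                return v
        return None
```

The point of the sketch is the ratio: many cheap appends per expensive rewrite, which is what lets a COW b+ tree avoid recursively rewriting up to the root on every update.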

Cake instead schedules packets based on time deficits. If no deficit exists when a packet is requested, it can be sent immediately. The transmit time of the following packet is then calculated, and until that time the shaper is placed in deficit mode. While in deficit mode, packets are scheduled using a watchdog timer whenever a request arrives too soon, and transmission times are calculated for a continuous packet train. This continues until the queue drains; if a packet is requested, but none are available and the next transmission time has been reached, the shaper returns to the quiescent state in which the next packet can be sent immediately.

Deficit mode makes the burst size dependent only on hardware and kernel latency (including timer resolution), and minimises bursts without requiring manual tuning. Cake's shaper can therefore be set much closer to the actual link speed without jeopardising latency performance. Modern hardware can achieve sub-millisecond bursts in most cases.
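The state machine described above can be sketched roughly as follows, using a virtual clock instead of real watchdog timers (an illustration of the idea, not Cake's actual code):

```python
class DeficitShaper:
    """Toy model of deficit-mode shaping: when quiescent, a packet is
    sent immediately and the next transmit time is computed; while in
    deficit mode, packets wait for their scheduled time; once the
    current time passes the scheduled time with nothing queued, the
    shaper is effectively quiescent again.  rate is bytes/second."""
    def __init__(self, rate):
        self.rate = rate
        self.next_tx = None          # None means quiescent

    def send(self, now, nbytes):
        """Return the time at which a packet of nbytes may go out."""
        if self.next_tx is None or now >= self.next_tx:
            tx_time = now            # quiescent: send immediately
        else:
            tx_time = self.next_tx   # deficit mode: wait for the slot
        # Schedule the following packet's earliest transmit time.
        self.next_tx = tx_time + nbytes / self.rate
        return tx_time

shaper = DeficitShaper(rate=1000.0)  # 1000 bytes/second
t0 = shaper.send(0.0, 500)   # quiescent: goes out at t=0.0
t1 = shaper.send(0.0, 500)   # requested too soon: scheduled for t=0.5
t2 = shaper.send(2.0, 500)   # link sat idle past next_tx: immediate
```

Because the deficit is tracked in time rather than tokens, the burst size depends only on how late the scheduler runs, which is the property the excerpt credits for Cake's low-latency shaping.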

tl;dr: A recent NSDI paper argued that data analytics stacks don’t get much faster at tasks like PageRank when given better networking, but this is likely just a property of the stack they evaluated (Spark and GraphX) rather than generally true. A different framework (timely dataflow) goes 6x faster than GraphX on a 1G network, and improves by a further 3x on a 10G network, to 15-17x faster than GraphX.

Oftentimes technology vendors advertise scale-out as a way to achieve high performance. It is a proven approach, but it is often used to mask single-node inefficiencies. Without a well-balanced system in which CPU, memory, network, and local storage are properly matched, this is simply what we call “throwing hardware at the problem”. Hardware that, virtual or not, customers pay for.

To demonstrate this, we decided to check Helium’s performance on a single node on Google Cloud Platform with a workload similar to the one previously used to showcase Aerospike and Cassandra (200 byte objects and 100 million operations). With Cassandra, the data store contained 3 billion indices.

Why? Well, a lot of the fun engineering problems in trading are caused by it actually not being reliably the case that you send in an order and it gets unproblematically matched at “the price.” Markets are distributed systems. The exchange’s view of reality and your trading system’s view of reality are, by necessity, separated by the great firewall known as “physics.” For maximum possible results, you have to be able to do things like accurately predict what the future state of the exchange is, because the order you’re composing right now will arrive in the future not the present, while being cognizant that your present view of the exchange’s state is actually the exchange’s past.

Saturday, August 22, 2015

The problem is that any change, no matter how obvious, can be nixed entirely if it becomes “controversial”, meaning another person with commit access objects. As there are five committers and many other non-committers who can also make changes “controversial” this is a recipe for deadlock. The fact that the block size was never meant to be permanent has ceased to matter: the fact that removing it is debated, is, by itself, enough to ensure it will not happen. Like a committee with no chairman, the meeting never ends. To quote the committer who has pushed hardest for stasis, “Bitcoin needs a leader like a fish needs a bicycle”.

We started evaluating standard compression techniques to find the best way to compress this data to 200 bytes. Unfortunately, simply entropy encoding the image with, say, zlib gets you only a factor of 2. Still too big. We then evaluated a bunch of nonstandard techniques, but we decided it was better to leverage other code/libraries that we already had. So we looked at JPEG image encoding, which is a very popular image codec. Especially since our image is going to be blurred heavily on the client, which band-limits the image data, JPEG should compress this image quite efficiently for our purposes. Unfortunately, the standard JPEG header is hundreds of bytes in size. In fact, the JPEG header alone is several times bigger than our entire 200-byte budget. However, excluding the JPEG header, the encoded data payload itself was approaching our 200 bytes. We just needed to figure out what to do about that pesky header!
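The trick that follows from this observation: if every thumbnail is encoded with identical dimensions and quantization settings, the header bytes are identical across all images, so the server can strip them and the client can re-prepend a single cached copy. A sketch of the idea (the header bytes below are placeholders, not a real JPEG header):

```python
# Because every preview is encoded with the same dimensions and
# quality settings, the JPEG header bytes are identical for every
# image, so they can ship once in the client binary instead of once
# per image.  These bytes are a stand-in, not a real JPEG header.
SHARED_HEADER = bytes(range(64)) * 10        # ~640-byte fixed header

def strip_header(jpeg_bytes):
    """Server side: drop the constant header before storing/sending."""
    assert jpeg_bytes.startswith(SHARED_HEADER)
    return jpeg_bytes[len(SHARED_HEADER):]

def restore(payload):
    """Client side: re-prepend the cached header to get a full JPEG."""
    return SHARED_HEADER + payload

original = SHARED_HEADER + b"\x12" * 180     # ~180-byte encoded payload
wire = strip_header(original)                # now fits the 200-byte budget
assert len(wire) <= 200 and restore(wire) == original
```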

An example of this occurs when Chrome is showing an animation on a web page. The animation will update the screen at 60 FPS, giving Chrome around 16.6 ms of time to perform the update. As such, Chrome will start work on the current frame as soon as the previous frame has been displayed, performing input, animation and frame rendering tasks for this new frame. If Chrome completes all this work in less than 16.6 ms, then it has nothing else to do for the remaining time until it needs to start rendering the next frame. Chrome’s scheduler enables V8 to take advantage of this idle time period by scheduling special idle tasks when Chrome would otherwise be idle.

Take MySQL, for example. The database has changed hands a few times, with Sun acquiring MySQL AB in 2008, then Oracle picking up the asset through its acquisition of Sun the following year. But MySQL, Sun, and Oracle have collectively made a heck of a lot less -- orders of magnitude less -- by selling MySQL-related services than Amazon Web Services has made by selling MySQL as a service (that is, Relational Database Service).

Nor will MySQL be the last open source project to be more heavily monetized by a cloud giant than by the original developers who brought it into the world.

What does it mean for companies to know everything about us, and for computer algorithms to make life and death decisions? Should we worry more about another terrorist attack in New York, or the ability of journalists and human rights workers around the world to keep working? How much free speech does a free society really need?

How can we stop being afraid and start being sensible about risk? Technology has evolved into a Golden Age for Surveillance. Can technology now establish a balance of power between governments and the governed that would guard against social and political oppression? Given that decisions by private companies define individual rights and security, how can we act on that understanding in a way that protects the public interest and doesn’t squelch innovation? Whose responsibility is digital security? What is the future of the Dream of Internet Freedom?

It follows, then, that to the degree that governments answer to the people, effective control and regulation of these companies will be even more difficult than regulating the monopolies of old: that’s why Google got its first deal, and it’s why Uber was able to stare down de Blasio. What changed in Google’s case, though, was the Axel Springer article and the widespread attention it received. Similarly, while Amazon is not being accused of antitrust (for now anyways), at least in some small way the company was this weekend forced to respond in a way they usually avoid because of an article. Meanwhile, Uber, seemingly in a worse position politically, emerged from its crisis stronger than ever, confident in its ability to wield the collective influence of its customers to accomplish its political ends.

The p-value reveals almost nothing about the strength of the evidence, yet a p-value of 0.05 has become the ticket to get into many journals. “The dominant method used [to evaluate evidence] is the p-value,” said Michael Evans, a statistician at the University of Toronto, “and the p-value is well known not to work very well.”

Scientists’ overreliance on p-values has led at least one journal to decide it has had enough of them. In February, Basic and Applied Social Psychology announced that it will no longer publish p-values. “We believe that the p < .05 bar is too easy to pass and sometimes serves as an excuse for lower quality research,” the editors wrote in their announcement. Instead of p-values, the journal will require “strong descriptive statistics, including effect sizes.”
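A quick numerical illustration of why a bare p-value says little about the strength or importance of an effect: under a two-sided z-test with known unit variance, a practically negligible effect measured on a huge sample yields a far smaller p-value than a substantial effect measured on a small one (the numbers below are invented for illustration):

```python
import math

def z_test_p(effect, n, sigma=1.0):
    """Two-sided p-value for an observed mean `effect` under
    H0: mean 0, known standard deviation `sigma`, sample size n."""
    z = effect / (sigma / math.sqrt(n))
    return math.erfc(z / math.sqrt(2))       # P(|Z| >= z)

# Large effect, small sample: 0.5 sigma with n = 16 -> p just under 0.05.
p_big_effect = z_test_p(0.5, 16)

# Tiny effect, huge sample: 0.02 sigma with n = 1_000_000 -> p far
# below any significance threshold, despite a negligible effect size.
p_tiny_effect = z_test_p(0.02, 1_000_000)
```

Both results clear the p < .05 bar, which is exactly why the editors quoted above want effect sizes reported alongside (or instead of) p-values.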

When I was on the C# design team, several times a year we would have "meet the team" events at conferences, where we would take questions from C# enthusiasts. Probably the most common question we consistently got was "Are there any language design decisions that you now regret?" and my answer is "Good heavens, yes!"

This article presents my "bottom 10" list of features in C# that I wish had been designed differently, with the lessons we can learn about language design from each decision.

Version 0 contains a database, compiler, query runtime, data editor, and query editor. Basically, it's a database with an IDE. You can add data either manually or by importing a CSV, and then you can create queries over that data using our visual query editor.

...

Our original goal was to build a "better programming," one that enabled more people to build software. To that end we set out to find a simpler foundation, a language with few parts that could still produce everything from your vacation planner to machine learning algorithms. We ultimately found our answer in research out of the BOOM lab at Berkeley and took off trying to prove that with such a simple language you could still build real software. We've built compilers, editors, Turing machines, even a clone of Foursquare to prove that our strategy is workable.

In this post I want to try and paint a picture of what it means to have a field that respects the laws of quantum mechanics. In a previous post, I introduced the idea of fields (and, in particular, the all-important electric field) by making an analogy with ripples on a pond or water spraying out from a hose. These images go surprisingly far in allowing one to understand how fields work, but they are ultimately limited in their correctness because the implied rules that govern them are completely classical. In order to really understand how nature works at its most basic level, one has to think about a field with quantum rules.

This post introduces eigenvectors and their relationship to matrices in plain language and without a great deal of math. It builds on those ideas to explain covariance, principal component analysis, and information entropy.
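For a flavor of the eigenvector idea in that post, here is a minimal pure-Python power iteration: repeatedly applying a matrix to a vector and renormalizing converges to the dominant eigenvector, which for a covariance matrix is the first principal component (the matrix below is an invented covariance-like example):

```python
import math

def power_iteration(matrix, steps=100):
    """Find the dominant eigenvector of a symmetric 2x2 matrix by
    repeatedly applying it to a vector and renormalizing."""
    v = [1.0, 0.0]
    for _ in range(steps):
        w = [matrix[0][0] * v[0] + matrix[0][1] * v[1],
             matrix[1][0] * v[0] + matrix[1][1] * v[1]]
        norm = math.hypot(w[0], w[1])
        v = [w[0] / norm, w[1] / norm]
    return v

# Covariance-like matrix with eigenvalues 3 (direction (1, 1)) and
# 1 (direction (1, -1)); iteration converges to the (1, 1) direction.
cov = [[2.0, 1.0],
       [1.0, 2.0]]
v = power_iteration(cov)
```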

Univ of MD at College Park will have 2100 students in the CS program next year. That's... a lot! CS is up across the country, which is mostly a good thing but does raise some logistical questions. How are your schools handling the increase in the number of CS students? Here are some options I've heard people use.

The exchange between India and Bangladesh means that the world will not only lose one of its most unusual borders, but it will also lose the only third-order enclave in the world – an enclave surrounded by an enclave surrounded by an enclave surrounded by another state.

Set in a cluster of old storage lockers just one block off Telegraph Avenue, the tiny retail corridor feels like it could have sprung from a different century, yet it's also redolent of the handcrafted, pastoral ethos that's characterized new development in Oakland. Temescal Alley has been designated a hipster hotspot in the press and become a go-to destination for First Friday Art Murmur.

The city of La Paz, Bolivia, has long struggled with transportation issues. Steep terrain, high density, and narrow streets have resulted in years of traffic nightmares for fleets of minibuses and private taxis. In the past two years, the government has worked to alleviate this by building the largest urban cable-car system in the world. Currently La Paz has three urban ropeway lines in operation, stretching over 10 kilometers, with plans to triple the size of the network. The city recently announced six new lines, which will extend the aerial system to 30 kilometers and carry up to 27,000 passengers an hour.

Sunday, August 16, 2015

Based on our field analysis of how flash memory errors manifest when running modern workloads on modern SSDs, this paper is the first to make several major observations: (1) SSD failure rates do not increase monotonically with flash chip wear; instead they go through several distinct periods corresponding to how failures emerge and are subsequently detected, (2) the effects of read disturbance errors are not prevalent in the field, (3) sparse logical data layout across an SSD’s physical address space (e.g., non-contiguous data), as measured by the amount of metadata required to track logical address translations stored in an SSD-internal DRAM buffer, can greatly affect SSD failure rate, (4) higher temperatures lead to higher failure rates, but techniques that throttle SSD operation appear to greatly reduce the negative reliability impact of higher temperatures, and (5) data written by the operating system to flash-based SSDs does not always accurately indicate the amount of wear induced on flash cells due to optimizations in the SSD controller and buffering employed in the system software.

It appears that PostgreSQL blocks new attempts to take a shared lock while an exclusive lock is wanted. (This sounds bad, but it's necessary in order to avoid writer starvation.) However, the exclusive lock was itself blocked on a different shared lock held by the autovacuum operation. In short: the autovacuum itself wasn't blocking all the data path queries, but it was holding a shared lock that conflicted with the exclusive lock wanted by the "DROP TRIGGER" query, and the presence of that "DROP TRIGGER" query blocked others from taking shared locks. This explanation was corroborated by the fact that during the outage, the oldest active query in the database was the "DROP TRIGGER". Everything before that query had acquired the shared lock and completed, while queries after that one blocked behind it.
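The queueing behavior can be sketched with a toy lock-table model (an illustration of the described behavior only, not PostgreSQL's actual lock manager; the transaction names are invented):

```python
class LockTable:
    """Toy model: shared locks are granted freely, but once any
    request is waiting (e.g. an exclusive lock), new shared requests
    queue behind it -- the anti-starvation rule described above."""
    def __init__(self):
        self.shared_holders = set()
        self.exclusive_holder = None
        self.queue = []                      # FIFO of (txid, mode)

    def request(self, txid, mode):
        grantable = (self.exclusive_holder is None and not self.queue
                     and (mode == "shared" or not self.shared_holders))
        if grantable:
            if mode == "shared":
                self.shared_holders.add(txid)
            else:
                self.exclusive_holder = txid
            return "granted"
        self.queue.append((txid, mode))      # even new shared requests
        return "waiting"                     # queue behind the writer

    def release(self, txid):
        self.shared_holders.discard(txid)
        if self.exclusive_holder == txid:
            self.exclusive_holder = None
        self._grant_from_queue()

    def _grant_from_queue(self):
        while self.queue:                    # grant in FIFO order
            txid, mode = self.queue[0]
            if mode == "exclusive" and (self.shared_holders
                                        or self.exclusive_holder):
                return                       # head still blocked
            if mode == "shared" and self.exclusive_holder:
                return
            if mode == "exclusive":
                self.exclusive_holder = txid
            else:
                self.shared_holders.add(txid)
            self.queue.pop(0)

lt = LockTable()
lt.request("autovacuum", "shared")           # long-running shared lock
lt.request("drop_trigger", "exclusive")      # blocked by autovacuum
r = lt.request("reader", "shared")           # queued behind DROP TRIGGER
```

One long-running shared holder plus one waiting exclusive request is enough to park every subsequent reader, which matches the outage described above: the autovacuum wasn't blocking the data-path queries directly, but everything queued behind the "DROP TRIGGER".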

First we ran step 2a on all nodes in parallel. The command completed instantly, and we saw a spike of 125,000 lines (250*500) on the Splunk graph. That might seem like a lot of logging, but it isn't anything the system can't handle, especially in small bursts like this. Next we ran step 2b in the same way. This was where something curious happened. The correct number of log lines did show up in Splunk (The logs do show something!), but the command did not appear to return as immediately as it did for step 2a. In fact it took a few minutes before the console became responsive again, and the returned data indicated that several nodes did not respond in time. Looking over the status of the cluster, those nodes were now showing as dead. Somehow this innocent log line had managed to cause these nodes to time-out and drop out of the cluster.

Recently we wrote about the now-famous hack of a Jeep Cherokee. At Black Hat USA 2015, a large security conference, researchers Charlie Miller and Chris Valasek finally explained in detail how exactly that hack happened.

Automotive security research, for the most part, began in 2010 when researchers from the University of Washington and the University of California San Diego showed that if they could inject messages into the CAN bus of a vehicle (believed to be a 2009 Chevy Malibu) they could make physical changes to the car, such as controlling the display on the speedometer, killing the engine, as well as affecting braking. This research was very interesting but received widespread criticism because people claimed there was not a way for an attacker to inject these types of messages without close physical access to the vehicle, and with that type of access, they could just cut a cable or perform some other physical attack.

It's becoming more and more common to see malware installed not at the server, desktop, laptop, or smartphone level, but at the router level. Routers have become quite capable, powerful little computers in their own right over the last 5 years, and that means they can, unfortunately, be harnessed to work against you.

Davidson scolded customers who performed their own security analyses of code, calling it reverse engineering and a violation of Oracle's software licensing. She said, "Even if you want to have reasonable certainty that suppliers take reasonable care in how they build their products—and there is so much more to assurance than running a scanning tool—there are a lot of things a customer can do like, gosh, actually talking to suppliers about their assurance programs or checking certifications for products for which there are Good Housekeeping seals for (or “good code” seals) like Common Criteria certifications or FIPS-140 certifications."

The claim that Oracle can, on its own, find all the vulnerabilities in its products is nonsense. No tech company in the world is equal to the task of shipping bug-free code. The idea that no one outside of Oracle could have the expertise or ability to find relevant exploitable coding errors in the company’s products is similarly ridiculous: Independent security researchers routinely find important vulnerabilities in commercial products made by companies they don’t work for. And while it is no doubt true that some of the reports Davidson and her team receive are false alarms, the notion that assessing and responding to these concerns is a waste of her time demonstrates a fundamental misunderstanding of the value provided by people who devote their time and energy to finding and reporting software vulnerabilities.

But people find it hard to trust what they don’t understand. And nobody understands how Metascores are computed.

One of Doyle’s other big policies has also been in the news recently: Metacritic’s refusal to change an outlet’s first review score, no matter what happens. It’s a policy they’ve had for a while now, Doyle told me. He enacted it because during the first few years of Metacritic, which launched in 2001, reviewers kept changing their scores for vague reasons that Doyle believes were caused by publisher pressure.

For years this has been discussed in more academic circles as “context collapse.” You have an identity and a set of ideas about the world that exists and is understood in one social context. You want to bring it to another place and not have to do a five-minute introduction about who you are and what you value every time you say anything. Other people don’t share the same preset understandings and may read more into what you are saying than you think you put there. Your jokes fall flat, or cause offense. Conversation devolves into side discussions and arguments about first principles and word definitions. People start citing the dictionary and Wikipedia and angrily talking past each other.

The attack occurred about 4 a.m. when the man walked onto his porch and was ambushed by the bear in Midpines, a community on the edge of Yosemite National Park.

The bear was feeding on a bag of trash left 20 feet from the man’s front door, Stoots said. The bear tackled the man and attacked him. But the man fought back, using his legs and arms, and eventually escaped back into his house.

A bag of trash 20 feet from his door? Sheesh. Some of these failures are easier to fix than others...

Saturday, August 15, 2015

Fundamentally, we believe this allows us more management scale, as we can run things independently that aren’t very related. Alphabet is about businesses prospering through strong leaders and independence. In general, our model is to have a strong CEO who runs each business, with Sergey and me in service to them as needed. We will rigorously handle capital allocation and work to make sure each business is executing well. We'll also make sure we have a great CEO for each business, and we’ll determine their compensation. In addition, with this new structure we plan to implement segment reporting for our Q4 results, where Google financials will be provided separately from those for the rest of the Alphabet businesses as a whole.

There are two ways of looking at Google right now. The first is that it’s a hugely successful search company which is frittering away its money on crazy projects like self-driving cars and next-generation contact lenses. The second is that it’s a hugely successful search company which is making smart, high-risk, long-term bets which, if they pay off, could be worth trillions of dollars. Either way, it’s a search company. And Google wants to be more than that.

He’s stepping back. Google’s Employee No. 11 returned last year as chief business officer when Nikesh Arora left for SoftBank. Worth noting: When Page announced Kordestani’s return, he said it was “for now.” With Pichai running the revenue-generating products, Kordestani becomes adviser to Google and Alphabet, and the “CBO” title vanishes. That wasn’t mentioned in Page’s announcement; you had to find it in the SEC filing.

Frankly, the whole thing seems to be leaving a lot of people scratching their heads (myself included). It may turn out to be nothing beyond just a different take on a corporate restructuring -- or it may be a prelude to the company doing something much bigger that would fit much more readily into this holding company structure.

The result won't change Google's tax bill much (R&D expenses are deductible under either structure), but the liability concerns are real, and many of the lawyers I spoke to were surprised that Google had kept the X projects so exposed for so long. "You would have thought that each new business would have been set up as its own subsidiary, but apparently that was not the case," said Duane Morris' David Feldman, who has written on the advantages of corporate restructuring in the past. Feldman said he likes the simplicity of Alphabet, as an unusually direct way of getting the protection of subsidiaries. "It's a very clean structure, and it works."

"I know there is a feeling on Wall Street that this maneuver was somehow tax motivated, but the consensus among tax professionals is just the opposite," says Bob Willens, one of the best-known corporate tax advisers on Wall Street for decades. "We do not see any tax advantage to be gained from forming a holding company."

In fact, in my book on The Future of Work I make the argument that many organizations are already too big. These large organizations have survived simply because of their massive resources, but this will change. The larger an organization becomes, the more sluggish it gets, and agility and adaptability seem farther out of reach. Gary Hamel has actually been talking about this for many years now.

Google has been focused on diversifying their business for a long time, even before their IPO. In August of 2003, they posted a job listing on Craigslist looking for a manager to run their collection of Googlettes, which were essentially startups within Google.

The way I see it, Google is the cash cow that finances all the big bets Larry and Sergey are making inside Alphabet. The public markets get the transparency of seeing how the cash cow is performing and how the entire holding company is performing.

The problem for Page, though, is that he is not a strategy and business nerd. Page is, for lack of a better description, a change-the-world nerd, and it seems clear that he found the day-to-day business of managing a very profitable utility to be not only uninteresting but a distraction from what he truly wanted to do. Page declared in Google’s 2004 Founders IPO Letter that “We aspire to make Google an institution that makes the world a better place”, a rather large departure from aspiring to capture a greater share of global advertising, and I suspect the strongest driver behind this change was that in Page’s mind “making the world a better place” was increasingly in conflict with “Google the institution”. With the establishment of Alphabet, Page has prioritized the former, leaving the continued making and maintenance of the institution Google has become to the very capable hands of Sundar Pichai.

But the shift to mobile has caused some observers to wonder about the company’s future rate of prosperity. As Internet users abandon desktop computers and flock to mobile devices, search results become harder to monetize, either because there’s less screen real estate or because users are searching in distinct apps rather than on the open web.

But there’s a better comparison than Buffett: it’s John Malone, the “mastermind” who built the cable TV powerhouse Liberty Media. Malone is a brilliant financial engineer, who creates separate capital structures — each with a unique stock — for his different lines of business. Liberty Media, Malone’s holding company, owns a portion of the stock in each business. This approach allows Malone to attract equity and debt investors whose preferences regarding risk and payoff horizon match those of the business in question.

If what it takes to operate the businesses and make them successful are fundamentally similar, then it will be helpful to have a unifying corporate culture so that knowledge and learning get shared. But if they are really different, then any attempts at corporate synergy will simply frustrate people. My guess is that in businesses that depend on truly great creative talent, there are fewer economies of scale or scope than one might think.

This year we headed north, far north, about as far north as you can go and still stay in the state of California, to the Russian Wilderness, a small and not-well-known Wilderness Area tucked between the larger and better known Trinity Alps and Marble Mountain wildernesses in California's Salmon Range.

The best way to get to the Russian Wilderness, at least from where I live, is to drive north on Interstate 5 until you are nearly in Oregon. Just before leaving the state, you'll find yourself in the small town of Yreka, where you should stop and spend the night.

Sadly, the marvelous tale about the naming of Yreka, from Mark Twain's autobiography, is not true, though you may be like me and choose to believe it anyway!

There are a number of ways to arrive at Paynes Lake, but we chose to take the trail from the Paynes Lake Trailhead, which is on the eastern edge of the Russian Wilderness, up French Creek Road from Etna.

Now, sometimes trails are interesting things in and of themselves, for the trail-maker may have various puzzles to solve and obstacles to overcome in the process of deciding how to arrange the trail.

In the case of the Paynes Lake trail, no such complication was in order: the trail-maker simply snapped a chalk line along the ridgeline between two canyons and sent the trail accordingly, resulting in an easy-to-follow and obvious trail which takes you from the 4,400 foot trailhead to the 6,500 foot Paynes Lake in a mere 2.4 miles of trail-walking.

Which is not to say that the trail has no personality. As I walked it, I could see that it was divided into segments, as follows:

First it was difficult,

then strenuous,

followed by severe,

then unrelenting,

which led to punishing,

after which there was a 100-yard level section with a view, where we ate lunch,

and then moved on to the devastating section,

which rapidly became vicious,

and, finally, was followed by the heart-breaking last segment.

If you've done the math, you can tell that climbing 2,100 vertical feet in barely 12,500 horizontal feet is an average grade of over 15%, which is quite the climb.

The trail is at least forested and visited by breezes, which is something good that I feel the need to say about it.

And it got us to Paynes Lake.

Paynes Lake itself is a perfect California mountain lake, a delightful 14-acre lake set in a gorgeous basin ringed by towering granite cliffs. It is wonderful for swimming, fishing, or just sitting by the shore and watching the time go by, depending on where your preference lies.

There are several very nice camping spots on the north side of the lake, safely away from the water but with nice lake views. The shoreline is forested and makes an excellent habitat for birds, squirrels, chipmunks, deer, and all sorts of other mountain creatures.

On our first afternoon at Paynes Lake we were treated to a spectacle I've never before seen in my backpacking trips: just around dusk, a bald eagle swooped down over the lake, passing barely 30 feet above the water surface. The eagle made several circuits of the lake, around and around, rising slowly until it disappeared over the ridge line, 300 feet above the lake surface. We never saw it again during our visit, but what a majestic bird that was!

If you've been paying much attention to California recently, you'll know a few things:

California is experiencing a severe period of prolonged drought.

Pretty much the entire state is impacted.

As a result, there is little water, and many fires are burning.

As a result, our first several days in Siskiyou County were heavily impacted by smoke from the forest fires. On our drive up, we could barely see two miles for the smoke, and we drove right past majestic Mount Shasta without even knowing it.

Several days into the trip, however, the smoke cleared, and our remaining time was blessed with superb visibility.

But we didn't get to see the Perseids, unfortunately. Can't have everything.

Paynes Lake, being both a Source Of Water as well as a delightful place to spend the night, is a frequent host to these most serious of backpackers, and we saw many of them during our visit.

Happily, though, these are folk who treasure the wilderness and care for it with the utmost concern, and Paynes Lake is remarkably well preserved for all the use it gets.

During our time in the Russian Wilderness, we spent a fair amount of time at the lake, but also made a few side trips:

Above Paynes Lake is Albert Lake, which the map shows with a nice trail which should have started essentially at our campsite. The trail, however, is nearly impossible to find, and we found ourselves lost, climbing higher and higher on the canyon wall until it became nearly impassable. Our fearless leader, and best athlete, made his way carefully across the top of the canyon, and later reported to us that he had in fact found Albert Lake, and that it was indeed beautiful. Meanwhile, we had a nice lunch by Paynes Lake after carefully climbing back down.

To the south of Paynes Lake, the PCT leads to Carter Summit, with a nice view of Lipstick Lake. The trail in this section goes through a large burned portion of the forest, perhaps the result of a recent fire?

To the north of Paynes Lake, the PCT heads to Etna Summit. We followed it as far as a saddle at about 6,900 feet, where we got delightful views of Taylor Lake to the west, and an unnamed mountain meadow to the east, 650 exhilarating vertical feet below us as the PCT hugs tightly to the spine of the Salmon Mountains.

The Russian Wilderness apparently came into being as part of the California Wilderness Act of 1984. I suspect that it has benefitted from being small, from being in a remote corner of the state, and from being sandwiched between several much larger and much-better-known wilderness areas.

Regardless, the Russian Wilderness is a treasure. Even though it was formed from reclaimed logging lands, it has healed nicely. This area is wild, healthy, and vibrant, and it stands as a vivid demonstration of why America's wilderness areas are an inadequately praised marvel.

My other gear change this year was to pick up a set of Pat's Backcountry Beverages gear. This was actually our second year using Pat's; my friend brought the supplies last year. I think this is what you can say: it works, as Pat says it does. And beer in the backcountry is dramatically superior to no beer in the backcountry. That said, I wish they had a wider range of Brew Concentrate.

Thursday, August 6, 2015

Federal investigators in Boston on Thursday released 25-year-old surveillance video showing a security guard admitting a man to the Isabella Stewart Gardner Museum the night before it was robbed of $500 million worth of art in the largest such heist in U.S. history.

The six-minute, 40-second video shows a white man, wearing glasses and apparently in his 50s or 60s, being let in by the guard through a rear entrance to the museum shortly after midnight on March 17, 1990, about 24 hours before the heist.

The (short) article doesn't mention why the FBI decided the time had finally come to release the video, nor why that time couldn't have come sooner.

Wednesday, August 5, 2015

In other words, the game is at its best when it stops being about saving the world and is instead about anything else, like finding a frying pan for a distraught old woman in a decrepit riverside village. It’s these particularities in a massive world that give it a pulse. And while the main questline is good fantasy storytelling, it wouldn’t carry weight or consequence without being grounded in mundanity first. Don’t get me wrong; I enjoy sliding through The Hero’s Journey time and time again, it’s just always been a conduit—especially in games—to live somewhere else and care about something new.

What I figured would just be a two- to three-hour hub for early quests bloomed into a massive domestic drama that washed away the majority of my concern for Geralt’s priorities—I wanted to help a family in ruins. It’s this focus on the particular, on something outside the typical spread of fantasy storytelling, that I found so refreshing. The Witcher has its fair share of ugly bad guys and mysticism and prophecy, but in Return to Crookback Bog, I was simply given the culmination of a small family’s story. Granted, it’s told using familiar fantasy props (how about those Crones?), but in such a fantastic world, it’s the attention to normalcy that most willfully suspends my disbelief elsewhere.

Doing what feels like the right thing in The Witcher 3 can always leave you feeling uncertain, because unlike in most fiction, noble intentions don't always lead to happy endings.

Tuesday, August 4, 2015

The price modifier was a particularly important addition, because it worked to reduce purchasing power over time. Essentially it meant that for particularly wealthy players, merchants would always sell for more and buy for less, a clever move that prevented a player from abusing the system and attaining infinite wealth. The final version of Steinke's system, which took everything he learned and wrapped it up in some software, was called "Reactor." Instead of setting a static value, Reactor calculated prices at run-time, making every trade unique and creating a living economy where every item had a different value based on the type of merchant the player was interacting with and the state of the world around them.
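The passage describes two mechanisms: a wealth-based price modifier, and run-time price computation per merchant and world state. The sketch below is purely illustrative of that idea; the function names, merchant types, and coefficients are invented, not Steinke's actual Reactor formulas.

```python
# Hypothetical sketch of run-time pricing in the spirit of "Reactor":
# the price of each trade is computed on the fly from the item's base
# value, the merchant type, and a wealth modifier that erodes a rich
# player's purchasing power. All names and numbers are illustrative.

MERCHANT_MARKUP = {"blacksmith": 1.2, "herbalist": 1.0, "fence": 0.8}

def trade_price(base_value, merchant_type, player_gold, selling_to_merchant):
    price = base_value * MERCHANT_MARKUP.get(merchant_type, 1.0)
    # Wealth modifier: the richer the player, the worse the rates,
    # capped so prices stay sane. This blocks infinite-wealth loops.
    wealth_penalty = 1.0 + min(player_gold / 100_000, 0.5)
    if selling_to_merchant:
        # Merchants buy low, and even lower from wealthy players.
        return price * 0.5 / wealth_penalty
    # Merchants sell high, and higher to wealthy players.
    return price * wealth_penalty

print(trade_price(100, "blacksmith", 200_000, False))  # wealthy player buys
print(trade_price(100, "blacksmith", 200_000, True))   # wealthy player sells
```

Because buying always costs more than selling returns, and the gap widens with wealth, round-tripping an item through a merchant is always a net loss.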

My computer struggles to play on regular settings, so I can only just sit and ogle these unbelievably gorgeous panoramas taken on Settings=Ultra: The Witcher 3 panoramas. Watch out for the harpy that guards this lighthouse!

Luke: I think the “no resolution” thing is what makes so many of these so special. Bloody Baron is obviously the same: there’s sadness and emptiness in that plotline regardless of how you decide it. This isn’t a game that’s going to give you “good” and “bad” results very often.

The Strenger family’s story — an admittedly tiny fraction of a colossal game — has two possible endings: sad, or the most depressing thing you can imagine. Even in its darkest moments it remains thoughtful and treats its characters like people with realistic motivations.

The Bloody Baron quest represents a big, daring creative decision. Honestly, it’s hard to read over a recap and believe all of that made it into a modern, big-budget game. At times, it’s even hard to believe it’s in The Witcher 3. That’s because — while often brilliant in its handling of sex, women characters, and even sexism — it’s a game frequently at odds with itself. It’s been called out for depicting a fantasy world where women are treated poorly to up its surface-level edginess factor, even as others have praised it for confronting real-world issues, offering players a mirror in which to view actual problems. It’s even been praised as a feminist game. Of course Witcher 3 has caused arguments between players. It sometimes feels like it’s arguing with itself.

Partway into Wild Hunt, Geralt attends a poetry reading by Priscilla, Dandelion’s writing partner and I-guess-girlfriend-it’s-sort-of-unclear. She sings some poetry for a rapt audience. The song goes on for approximately one hundred years, is extremely dorky, and is also wonderful.

Saturday, August 1, 2015

The results are so promising, in fact, that the research itself has changed. Instead of using two randomized groups of subjects—one that receives the vaccine immediately after potential exposure and one that receives the vaccine 21 days after—the researchers are now giving the vaccine to every subject immediately.

In retrospect, the absence of any physical evidence from the crash shouldn’t have been that much of a mystery. By the time the search shifted to the Indian Ocean 10 days after the jet disappeared, the flaperon was already on its way and riding the current towards Africa.

FBI court filings unsealed last week showed how Denise Huskins’ kidnappers used anonymous remailers, image sharing sites, Tor, and other people’s Wi-Fi to communicate with the police and the media, scrupulously scrubbing metadata from photos before sending. They tried to use computer spyware and a DropCam to monitor the aftermath of the abduction and had a Parrot radio-controlled drone standing by to pick up the ransom by remote control.

“At some point, that member of the team informed me that the person we had was Victim F and not [Victim M’s ex-fiance]. This threw a monkey wrench into our plans. Disagreement broke out among the three of us. I insisted that we should continue and carry out the operation, that it was a training mission anyhow, and that we needed the experience so that we could have successful missions later. So we continued,” the sender wrote.

To make a long story short, the one vulnerability mentioned in the title is CVE-2015-0093 (also dubbed CVE-2015-3052 by Adobe). What makes it unique is the fact that it provides an extremely powerful primitive, making it possible to perform arbitrary PostScript operations (e.g. arithmetic, logical, conditional and others) anywhere on the exploited thread’s stack, with full control over what is overwritten and how. This, in turn, could be used by an attacker to craft a self-contained malicious Type 1 font which, once loaded in the vulnerable environment, reliably and deterministically builds a ROP chain in the Charstring program, consequently defeating all modern exploit mitigation techniques such as stack cookies, DEP, ASLR, SMEP and so on. It also affected both Adobe Reader and the Windows kernel (32-bit), enabling the creation of a single PDF file which would first achieve arbitrary code execution within the PDF viewer’s process, and then escape the sandbox by exploiting the very same bug in the operating system, elevating a chosen process’s privileges and removing the associated job’s restrictions.

User experience is what Apple puts above pretty much everything else, and they’ve decided that they don’t like the experience available through the ad-supported web, and so they’re going to do something about it. Hence content blockers for Safari (and all web views) on iOS 9, which wasn’t announced onstage at WWDC but was one of those “Whoa!” moments on browsing through the Settings in the first iOS 9 beta. (Do read the link in the previous sentence, which explains what iOS 9 content blockers are, and are not.) Hence also Apple News, which is basically “all those sites but with the crap taken out”.

TQP, which was owned by well-known patent asserter Erich Spangenberg, claimed that the 5,412,730 patent covered any website using SSL together with the RC4 cipher, a common Web encryption scheme for retailers and other sites. Under Spangenberg's guidance, the TQP patent was used to sue more than 100 companies, garnering some $45 million in settlements by the time of the Newegg trial.

In post-trial motions, Newegg argued that it couldn't be found to infringe, because it doesn't change "key values" with each block that's transmitted. Gilstrap's new order embraces that argument, vacates the jury's verdict, and finds that Newegg doesn't infringe the patent.

We have broken down the malware communication process into five stages to explain how the tool operates, receives instructions, and extracts information from victim networks. The stages include information on what APT29 does outside of the compromised network to communicate with HAMMERTOSS and a brief assessment of the tool’s ability to mask its activity.

The attackers automatically rotate Twitter handles daily for sending commands to infected machines, embed encrypted command information in images, and then upload stolen information to cloud storage services. They also recruit legitimate web servers, which they infect, as part of the command and control infrastructure.
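The daily handle rotation works because both sides can derive the same name independently. The sketch below illustrates that general technique only; APT29's actual generation algorithm is not public, and the seed and handle format here are invented.

```python
import hashlib
from datetime import date

# Illustrative sketch (NOT APT29's actual algorithm): the operator and
# the implant share a seed, and each derives the day's Twitter handle
# by hashing the seed with the current date. The handle thus rotates
# daily with no direct communication between the two sides.

def daily_handle(day: date, seed: str = "example-seed") -> str:
    digest = hashlib.sha256(f"{seed}:{day.isoformat()}".encode()).hexdigest()
    # Truncate the hex digest into a short, plausible-looking handle.
    return digest[:8]

# The operator registers tomorrow's handle and posts a tweet pointing
# at an image with embedded commands; implants compute the same name
# and check it.
print(daily_handle(date(2015, 8, 1)))
```

A defender who recovers the seed from one sample can precompute every future handle, which is why such schemes are usually paired with per-sample seeds.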

Dat is a data collaboration tool. We think most people will use it to simplify the process of downloading and updating datasets, but we are also very excited about how people will use it to fork, collaborate on, and publish new datasets for others to consume.

We show using experiments with up to hundreds of machines on a Clos network topology that it provides excellent performance: turning on TIMELY for OS-bypass messaging over a fabric with PFC lowers 99th-percentile tail latency by 9X while maintaining near line-rate throughput. Our system also outperforms DCTCP running in an optimized kernel, reducing tail latency by 13X. To the best of our knowledge, TIMELY is the first delay-based congestion control protocol for use in the datacenter, and it achieves its results despite having an order of magnitude fewer RTT signals (due to NIC offload) than earlier delay-based schemes such as Vegas.
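TIMELY's core idea is to react to the *gradient* of RTT samples rather than to packet loss. The sketch below loosely paraphrases that control loop; the thresholds and constants are made up for illustration and are not the paper's tuned values.

```python
# Loose, illustrative sketch of TIMELY-style delay-gradient congestion
# control: grow the rate while RTTs are low or falling, back off when
# they rise. Constants are invented, not the paper's parameters.

class TimelySketch:
    def __init__(self, rate=10.0, delta=0.5, beta=0.8,
                 t_low=50.0, t_high=500.0):
        self.rate = rate        # sending rate (e.g. Gbps)
        self.delta = delta      # additive increase step
        self.beta = beta        # multiplicative decrease factor
        self.t_low = t_low      # RTT floor, microseconds
        self.t_high = t_high    # RTT ceiling, microseconds
        self.prev_rtt = None

    def on_rtt(self, rtt_us):
        if self.prev_rtt is None:            # first sample: just record
            self.prev_rtt = rtt_us
            return self.rate
        gradient = (rtt_us - self.prev_rtt) / self.prev_rtt
        self.prev_rtt = rtt_us
        if rtt_us < self.t_low:              # far from congestion: grow
            self.rate += self.delta
        elif rtt_us > self.t_high:           # hard ceiling: back off
            self.rate *= self.beta
        elif gradient <= 0:                  # queues draining: grow
            self.rate += self.delta
        else:                                # queues building: slow down
            self.rate *= max(0.5, 1 - self.beta * gradient)
        return self.rate
```

The appeal for datacenters is that RTT can be measured precisely by NIC hardware timestamps, so the signal arrives long before queues overflow and packets drop.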

There have been many recent advances in distributed systems that provide stronger semantics for geo-replicated data stores like those underlying Facebook. These research systems provide a range of consistency models and transactional abilities while demonstrating good performance and scalability on experimental workloads. At Facebook we are excited by these lines of research, but fundamental and operational challenges currently make it infeasible to incorporate these advances into deployed systems.

The moratorium would hit Chrome much harder than it would the other browsers, since it’s Google that is proposing most of the new features nowadays. That may not be entirely fair, but it’s an unavoidable consequence of Chrome’s current position as the top browser — not only in market share, but also in supported features. Also, the fact that Google’s documentation ranges from lousy to non-existent doesn’t help its case.

Java mixed-mode flame graphs provide a complete visualization of CPU usage and have just been made possible by a new JDK option: -XX:+PreserveFramePointer. We've been developing these at Netflix for everyday Java performance analysis as they can identify all CPU consumers and issues, including those that are hidden from other profilers.
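The flame graph pipeline profiles stacks, folds identical ones together, and renders the folded counts as an SVG. Below is a minimal sketch of just the folding ("stack collapse") step, with made-up sample data; real tooling reads profiler output instead.

```python
from collections import Counter

# Sketch of the "stack collapse" step of flame graph generation:
# identical call stacks from profiler samples are folded into
# "frameA;frameB;frameC count" lines, the folded format that the
# flame graph renderer consumes. The sample stacks are invented.

def collapse(samples):
    # Each sample is one call stack, listed root-first.
    folded = Counter(";".join(stack) for stack in samples)
    return [f"{stack} {count}" for stack, count in sorted(folded.items())]

samples = [
    ["java", "Spout.run", "crc32"],
    ["java", "Spout.run", "crc32"],
    ["java", "Spout.run", "compress"],
]
for line in collapse(samples):
    print(line)
# java;Spout.run;compress 1
# java;Spout.run;crc32 2
```

The -XX:+PreserveFramePointer option matters because it lets a system profiler walk Java frames at all; without intact frame pointers the stacks arrive broken and cannot be folded meaningfully.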

Since this was a mid-tier service, there was not much use of Apache. So, instead of tuning two systems (Apache and Tomcat), it was decided to simplify the stack and get rid of Apache. To understand why too many Tomcat threads got busy, let's look at the Tomcat threading model.
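The essence of that threading model is a bounded worker pool: each request occupies one worker thread for its full duration, so slow requests can pin every worker and force new requests to queue. The toy model below illustrates that effect; it is an analogy, not Tomcat's actual implementation.

```python
from concurrent.futures import ThreadPoolExecutor
import time

# Toy model of a thread-per-request server: a bounded pool (analogous
# to a Tomcat Connector's maxThreads) handles requests. Once every
# worker is busy on a slow request, further requests wait in the
# queue and observed latency climbs.

MAX_THREADS = 4  # stand-in for maxThreads

def handle_request(duration):
    time.sleep(duration)  # simulate slow downstream work
    return "ok"

def serve(requests):
    with ThreadPoolExecutor(max_workers=MAX_THREADS) as pool:
        start = time.monotonic()
        futures = [pool.submit(handle_request, d) for d in requests]
        results = [f.result() for f in futures]
        return results, time.monotonic() - start

# 8 slow requests on 4 workers need two "waves": total time is roughly
# double that of a single request, even though each request is cheap.
results, elapsed = serve([0.05] * 8)
```

This is why the interesting question in a Tomcat incident is usually not "how many threads exist?" but "what are the busy threads all blocked on?".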

In a series of three posts that summarize what I have learned since publishing that paper, I will try to stick to positive assertions, that is assertions about the facts, concerning this difference between the premises that freshwater economists take for granted and the premises that I and other economists take for granted.

My conjecture is that the fundamental problem in macro-economics, and the explanation for the puzzle I noted in my reply to Luis, is that a type of siege mentality encouraged people in this group to ignore criticism from the outside and fostered a definition of in-group loyalty that delegitimized the open criticism that is an essential part of the scientific method. Once this mentality got established, it fed on itself.

That is, the skipper does not actually begin the maneuver until every involved crew member has indicated they are ready. This prevents partial execution, people getting hit in the head with booms, and people getting knocked off the boat. It also implicitly distinguishes discussing a possible course change (e.g., “I think we should set course that direction”) from actually executing it (e.g., “Ready about”).

For those with CS degrees, the sailboat tack principle is a two-phase commit protocol, used commonly in distributed transaction processing systems.
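The analogy maps directly onto code: the skipper is the coordinator, the crew are participants, "Ready about?" is the prepare phase, and the maneuver itself is the commit phase. A minimal sketch (names chosen to echo the sailing analogy, not any particular transaction library):

```python
# Minimal two-phase commit sketch. Phase 1: the coordinator asks every
# participant to prepare and collects votes. Phase 2: only if ALL vote
# yes does anyone commit; a single "no" aborts for everyone, so there
# is never partial execution.

class Participant:
    def __init__(self, name, ready=True):
        self.name = name
        self.ready = ready
        self.state = "idle"

    def prepare(self):            # phase 1: "Ready about?"
        self.state = "prepared" if self.ready else "aborted"
        return self.ready

    def commit(self):             # phase 2: execute the maneuver
        self.state = "committed"

    def abort(self):
        self.state = "aborted"

def two_phase_commit(participants):
    votes = [p.prepare() for p in participants]
    if all(votes):
        for p in participants:
            p.commit()
        return "committed"
    for p in participants:
        p.abort()
    return "aborted"

crew = [Participant("bow"), Participant("mast"), Participant("helm")]
print(two_phase_commit(crew))   # → committed
```

The well-known weakness also has a sailing reading: if the skipper falls silent between the two phases, prepared crew members are stuck holding their lines, which is why real systems add timeouts and recovery logs.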

The state of Georgia hired LexisNexis to create these annotations, and LexisNexis then assigns the copyright that it receives on those annotations over to the state of Georgia. Part of the deal between Georgia and LexisNexis is that LexisNexis does the work and the state gets the copyright, but then LexisNexis gets to host the "official" copies of the laws of the state, while selling that annotated version (in both digital and paper versions). The state argues that this arrangement is actually more beneficial to consumers, because rather than relying on taxpayer funds to do this, LexisNexis gets to recoup the costs in the form of customer fees.

Windows 10 comes out July 29th, and it takes what was familiar about Windows 7 and what was great about Windows 8 and takes it forward. It's nice on a tablet, it's nice on a laptop, and I'm on my desktop with it now. Features like game streaming from an Xbox are amazing. The Office Touch apps look great.

A Polish research group claims there are still several outstanding vulnerabilities in Google App Engine for Java, including three complete Java sandbox escapes. After three weeks of radio silence from Google, the group decided on Friday to disclose the vulnerabilities, along with proof-of-concept code.

On the evening of March 14, 2013, a heavily armed police force surrounded my home in Annandale, Va., after responding to a phony hostage situation that someone had alerted authorities to at our address. I’ve recently received a notice from the U.S. Justice Department stating that one of the individuals involved in that “swatting” incident had pleaded guilty to a felony conspiracy charge.

The students loved their new lecturer as much for his mind as for his high jinks. He had a homely lecturing style, discussing abstract concepts in terms of trains and cars, cats and dogs. In lecturing on symmetry and the Platonic solids, he sometimes brought a large turnip and a carving knife to class, transforming the vegetable one slice at a time into an icosahedron with 20 triangular faces, eating the scraps as he went.

SolarCity, which focuses on putting solar panels on the roofs of homes and buildings, didn’t invent the solar panel. But, like Ford Motor Co. did a century ago, it has put together and perfected a combination of functions and disciplines—efficient assembly, economies of scale, vertical integration, and innovative financing techniques—that could make mass adoption possible. And it continually seeks and finds ways to expand its market.

Ernest Cline’s second book, Armada, is almost as wonderful as his first book, Ready Player One. While plenty of folks on Amazon are giving it mediocre ratings, I think it’s because they don’t understand what Cline really did here.

For decades, his groundbreaking designs and artwork for a variety of corporations, creative firms, and cinematic projects have become synonymous with looking forward. His film work alone, which includes Blade Runner, Aliens and TRON, gave a generation a glimpse into what technology and design may have in store. Mead says that he would use architecture as a sort of "magical background" in his work. Curbed spoke with him about his architectural influences and his current views on the future of urban design.

For 4 days and nights things were a blur. I was on the move, from place to place, it was all noise, and brief images flashed by and were gone. I tossed and turned and could not sleep.

Things were all new and different and unusual, but there was also a sameness, a constant, one who was by my side, me-but-not-me, someone I knew, had always known, but was yet new and different and unusual at the same time.

Then, unexpectedly, the shaking and rolling and flying ceased, and was gone, abated, finis.

I collapsed and slept for hours.

When I awoke it was as if nothing had happened, all was as it was.

But there was a piece of me somewhere different, in a new place, a new person, once again renewed, once again restarted, once again on a new path.