Did it work? Yes. It started with shapes, hung about 10 meters away. “I’m talking like the size of my hand,” Licina says. Before long, they were able to do longer distances, recognizing symbols and identifying moving subjects against different backgrounds. “The other test, we had people go stand in the woods,” he says. “At 50 meters, we could figure out where they were, even if they were standing up against a tree.” Each time, Licina had a 100% success rate. The control group, who were not dosed with Ce6, got them right only a third of the time.

Comcast has an initiative called Xfinity WiFi. When you rent a cable modem/router combo from Comcast (as one of my nearby neighbors apparently does), in addition to broadcasting your own WiFi network, it is kind enough to also broadcast “xfinitywifi,” a second “hotspot” network metered separately from your own.

By using his Buffalo WZR-HP-AG300H router’s extra radio, he can load-balance across both his own paid-for connection and the free Xfinity WiFi one. ;)

Within the context of a distributed system, you cannot have exactly-once message delivery. Web browser and server? Distributed. Server and database? Distributed. Server and message queue? Distributed. You cannot have exactly-once delivery semantics in any of these situations.
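What you can have is at-least-once delivery plus idempotent processing, which gets you the same end result. A minimal sketch of the pattern (hypothetical message shape, in-memory dedup store standing in for durable storage):

```python
# At-least-once delivery means the same message may arrive twice.
# Deduplicating on a message ID makes processing idempotent, which is
# the practical substitute for exactly-once delivery.

processed_ids = set()  # in production this would be durable storage
results = []

def handle(message):
    """Process a message at most once, even if it is delivered again."""
    if message["id"] in processed_ids:
        return  # duplicate delivery: drop it
    processed_ids.add(message["id"])
    results.append(message["payload"])

# The broker redelivers message 1 after a lost acknowledgement:
for msg in [{"id": 1, "payload": "a"}, {"id": 2, "payload": "b"},
            {"id": 1, "payload": "a"}]:
    handle(msg)
```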

On a recent call, Neha said “The most confusing behavior we have is how producing to a topic can return errors for a few seconds after the topic was already created.” As she said that, I remembered that indeed, this was once very confusing, but then I got used to it. Which got us thinking: what other things that Kafka does are very confusing to new users, but we got so used to them that we no longer even see the issue?

This is the second part of our guide on streaming data and Apache Kafka. In part one I talked about the uses for real-time data streams and explained our idea of a stream data platform. The remainder of this guide will contain specific advice on how to go about building a stream data platform in your organization.

The JVM by default exports statistics by mmap-ing a file in /tmp (hsperfdata). On Linux, modifying a mmap-ed file can block until disk I/O completes, which can be hundreds of milliseconds. Since the JVM modifies these statistics during garbage collection and safepoints, this causes pauses that are hundreds of milliseconds long. To reduce worst-case pause latencies, add the -XX:+PerfDisableSharedMem JVM flag to disable this feature. This will break tools that read this file, like jstat.

In the literature, Rahman et al. found that a very cheap algorithm actually performs almost as well as some very expensive bug-prediction algorithms. They found that simply ranking files by the number of times they’ve been changed by a bug-fixing commit will find the hot spots in a code base. Simple! This matches our intuition: if a file keeps requiring bug fixes, it must be a hot spot, because developers are clearly struggling with it.
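The ranking itself is almost trivial to compute if you have commit history as (message, files) pairs — a rough sketch, using a crude “message mentions fix” heuristic rather than whatever classifier Rahman et al. actually used:

```python
from collections import Counter

def hot_spots(commits):
    """Rank files by how often they appear in bug-fixing commits.

    `commits` is a list of (message, files) pairs; a commit counts as a
    bug fix if its message mentions "fix" (a crude but common heuristic).
    """
    counts = Counter()
    for message, files in commits:
        if "fix" in message.lower():
            counts.update(files)
    return [f for f, _ in counts.most_common()]

commits = [
    ("Fix null pointer in parser", ["parser.c"]),
    ("Add feature", ["api.c"]),
    ("fix parser crash on empty input", ["parser.c", "util.c"]),
]
print(hot_spots(commits))  # parser.c first: two bug-fix commits touched it
```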

The US wields secretive and indiscriminate powers to collect data, he said, and had never offered Brussels any commitments to guarantee EU privacy standards for its citizens’ data. On the contrary, said [Max Schrems' counsel] Mr Hoffmann, “Safe Harbour” provisions could be overruled by US domestic law at any time. Thus he asked the court for a full judicial review of the “illegal” Safe Harbour principles which, he said, violated the essence of privacy and left EU citizens “effectively stripped of any protection”. [Irish] DPC counsel Paul Anthony McDermott SC suggested that Mr Schrems had not been harmed in any way by the status quo. “This is not surprising, given that the NSA isn’t currently interested in the essays of law students in Austria,” he said. Mr Travers for Mr Schrems disagreed, saying “the breach of the right to privacy is itself the harm”.

A lawyer for the European Commission told an EU judge on Tuesday (24 March) he should close his Facebook page if he wants to stop the US snooping on him, in what amounts to an admission that Safe Harbour, an EU-US data protection pact, doesn’t work.

Working in a similar fashion – drawing small portions each day – it took Mr. Nomura about two months to complete his new maze. And in our humble opinion, it’s actually just as beautiful, if not more so. It’s not quite as dense, and the crisper lines make it easier to perceive the interesting patterns the maze forms. It’s stunning in graphic quality, but it’s also a functioning, solvable maze, just like its predecessor. Say hello to Papa’s Maze 2.0. It’s available as a print for $30.

The REST Proxy is an open source HTTP-based proxy for your Kafka cluster. The API supports many interactions with your cluster, including producing and consuming messages and accessing cluster metadata such as the set of topics and mapping of partitions to brokers. Just as with Kafka, it can work with arbitrary binary data, but also includes first-class support for Avro and integrates well with Confluent’s Schema Registry. And it is scalable, designed to be deployed in clusters and work with a variety of load balancing solutions. We built the REST Proxy first and foremost to meet the growing demands of many organizations that want to use Kafka, but also want more freedom to select languages beyond those for which stable native clients exist today. However, it also includes functionality beyond traditional clients, making it useful for building tools for managing your Kafka cluster. See the documentation for a more detailed description of the included features.
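For a flavor of what producing through it looks like: the binary format wraps base64-encoded values in a JSON envelope POSTed to a per-topic endpoint. A sketch that just constructs the request — the host, port, endpoint path, and content type here are assumptions from memory of the v1 API, so check the documentation before relying on them:

```python
import base64
import json

# Hypothetical produce request against a REST Proxy instance.
# Everything below the payload shape (host, port, path, content type)
# is an assumption; consult the REST Proxy docs for the real API.
topic = "test"
url = f"http://restproxy.example:8082/topics/{topic}"
headers = {"Content-Type": "application/vnd.kafka.binary.v1+json"}
body = {"records": [
    {"value": base64.b64encode(b"hello kafka").decode("ascii")},
]}
print(json.dumps(body))
```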

Hydra is a Nix-based continuous build system, released under the terms of the GNU GPLv3 or (at your option) any later version. It continuously checks out sources of software projects from version control systems to build, test and release them. The build tasks are described using Nix expressions. This allows a Hydra build task to specify all the dependencies needed to build or test a project. It supports a number of operating systems, such as various GNU/Linux flavours, Mac OS X, and Windows.

jemalloc(3) extensively uses madvise(2) to notify the operating system that it’s done with a range of memory which it had previously malloc’ed. The page size on this machine is 2MB because transparent huge pages are in use. As such, a lot of the memory which is being marked with madvise(…, MADV_DONTNEED) is within substantially smaller ranges than 2MB. This means that the operating system was never able to evict pages that had ranges marked as MADV_DONTNEED, because an entire huge page has to be unneeded before it can be reclaimed. Despite initially looking like a leak, the operating system itself was unable to free memory because of madvise(2) and transparent huge pages. This led to sustained memory pressure on the machine and redis-server eventually getting OOM killed.
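A tiny illustration of the mismatch (not jemalloc’s code, and Linux-specific): hint away 64 KB of a mapping that THP would back with a single 2 MB page, and the kernel has no whole page it can reclaim.

```python
import mmap

HUGE = 2 * 1024 * 1024   # transparent huge page size on x86-64
SMALL = 64 * 1024        # a jemalloc-sized range, far below 2 MB

# Private anonymous mapping, one huge page worth; touch every byte.
m = mmap.mmap(-1, HUGE, flags=mmap.MAP_PRIVATE | mmap.MAP_ANONYMOUS)
m[:] = b"\xff" * HUGE

# jemalloc-style hint: "done with this 64 KB". If THP backs the mapping
# with a single 2 MB page, hints like this cover only a fraction of the
# page, so the kernel cannot reclaim it until the *whole* range is hinted.
m.madvise(mmap.MADV_DONTNEED, 0, SMALL)
```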

‘inspires kids to explore and learn about science, engineering, and technology—and have fun doing it. Every month, a new crate to help kids develop a tinkering mindset and creative problem solving skills.’ aimed at ages 9-14+

Ag uses Pthreads to take advantage of multiple CPU cores and search files in parallel. Files are mmap()ed instead of read into a buffer. Literal string searching uses Boyer-Moore strstr. Regex searching uses PCRE’s JIT compiler (if Ag is built with PCRE >=8.21). Ag calls pcre_study() before executing the same regex on every file. Instead of calling fnmatch() on every pattern in your ignore files, non-regex patterns are loaded into arrays and binary searched.
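That last trick is easy to picture: split the patterns into literals (sorted, so membership is a binary search) and real globs (which still need fnmatch). A sketch of the idea, not Ag’s actual code:

```python
import bisect
from fnmatch import fnmatch

# Split ignore patterns: literal names go into a sorted list we can
# binary-search; only genuine glob patterns fall back to fnmatch().
patterns = ["node_modules", "*.o", ".git", "build", "*.pyc"]
literals = sorted(p for p in patterns if not any(c in p for c in "*?["))
globs = [p for p in patterns if any(c in p for c in "*?[")]

def is_ignored(name):
    i = bisect.bisect_left(literals, name)       # O(log n) for literals
    if i < len(literals) and literals[i] == name:
        return True
    return any(fnmatch(name, g) for g in globs)  # O(n) only for globs

print(is_ignored("main.o"), is_ignored(".git"), is_ignored("main.c"))
# True True False
```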

Thought-provoking article looking back to John Perry Barlow’s “A Declaration of the Independence of Cyberspace”, published in 1996:

Barlow once wrote that “trusting the government with your privacy is like having a Peeping Tom install your window blinds.” But the Barlovian focus on government overreach leaves its author and other libertarians blind to the same encroachments on our autonomy from the private sector. The bold and romantic techno-utopian ideals of “A Declaration” no longer need to be fought for, because they’re already gone.

TechCrunch, very down on the traditional big-O-and-whiteboard tech interview. See also https://news.ycombinator.com/item?id=9243169 for some good comments at HN. To be honest, I think a good comprehension of data structures and big-O is pretty vital though…

‘There’s a set of stairs on Greenwood Avenue that lead nowhere. At the top, a wooden fence at the end of someone’s back yard blocks any further movement, forcing the climber to turn around and descend back to the street. What’s remarkable about the pointless Greenwood stairs, which were built in 1959 as a shortcut to a now-demolished brickyard, is that someone still routinely maintains them: in winter, some kindly soul deposits a scattering of salt lest one of the stairs’ phantom users slip; in summer someone comes with a broom to sweep away leaves. These urban leftovers are lovingly called “Thomassons” after Gary Thomasson, a former slugger for the San Francisco Giants, Oakland As, Yankees, Dodgers, and, most fatefully, the Yomiuri Giants in Tokyo.’