May 13, 2015

At Reason, Ed Krayewski points out that you now have a way of discovering (and modifying) what Google’s search engine will reveal about you:

In January Google quietly rolled out the capability to view your entire search history with the online service, download a copy of it, and even to delete it from Google’s servers. The new feature wasn’t widely reported online until earlier this month when an unofficial Google blog publicized it.

You can check out your search history here, including web and image searches, and links and images you clicked on as a result. There’s also an option to download under settings (the gear button on the top left of the page), as well as one to “remove items,” including the ability to remove your recent search history or your entire search history.

April 16, 2015

Tim Worstall on how our traditional economic measurements are less and less accurate for the modern economic picture:

… in the developed countries there’s a problem which seems to me obvious (and Brad DeLong has even said that I’m right here which is nice). Which is that we’re just not measuring the output of the digital economy correctly. For much of that output is not in fact priced: what DeLong has called Andreessenian goods (and Marc Andreessen himself calls Mokyrian). For example, we take Google’s addition to the economy to be the value of advertising that Google sells, not the value in use of the Google search engine. Similarly, Facebook is valued at its advertising sales, not whatever value people gain from being part of a social network of 1.3 billion people. In the traditional economy that consumer surplus can be roughly taken to be twice the sales value of the goods. For these Andreessenian goods the consumer surplus could be 20 times (DeLong) or even 100 times (my own, very controversial and back of envelope calculations) that sales value.

We are therefore, in my view, grossly underestimating output. And since we measure productivity as the residual of output and resources used to create it we’re therefore also grossly underestimating productivity growth. We’re in error by using measurements of the older, physical, economy as our metric for the newer, digital, one.

In short, I simply don’t agree that growth is as slow as we are measuring it to be. Thus any predictions that rely upon taking our current “low” rate of growth as being a starting point must, logically, be wrong. And that also means that all the policy prescriptions that flow from such an analysis, that we must spend more on infrastructure, education, government support for innovation, must also be wrong.
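Worstall’s multiplier argument is easy to make concrete. The sketch below uses the 2x, 20x, and 100x surplus multipliers from the excerpt; the $50bn ad-sales figure is purely hypothetical, chosen only to show how far apart the resulting valuations land.

```python
# Back-of-envelope comparison of measured output vs. total value
# (measured sales plus consumer surplus), using the surplus
# multipliers quoted above. The ad-sales figure is illustrative.

def total_value(measured_sales, surplus_multiplier):
    """Total value = measured sales + consumer surplus,
    where surplus is taken as a multiple of measured sales."""
    return measured_sales + surplus_multiplier * measured_sales

ad_sales = 50.0  # hypothetical ad revenue, $bn

traditional = total_value(ad_sales, 2)    # physical-economy rule of thumb
delong      = total_value(ad_sales, 20)   # DeLong's estimate
worstall    = total_value(ad_sales, 100)  # Worstall's back-of-envelope

print(traditional, delong, worstall)  # 150.0 1050.0 5050.0
```

The point of the exercise is the spread: if the true multiplier is anywhere near the upper estimates, the measured figure captures only a few percent of the value actually delivered.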

February 24, 2015

Oh, I don’t mean the profession of teaching … I mean the actual practice of imparting knowledge. As Joanna Williams explains, it’s the practical part that’s in steep decline nowadays:

After almost two decades working in the British education system, I’m still shocked when I meet teachers and lecturers who recoil at the prospect of actually imparting knowledge to their students. I cringed when the headteacher at my daughter’s junior school gathered all the new parents together to watch a sharply edited film showing that knowledge was now so easily accessible and so quickly outdated that there was little point in teaching children anything other than how to Google. When I find myself discussing the purpose of higher education, my proposal that the pursuit and transmission of knowledge should be the primary concern of the university is mostly met by looks of incomprehension that swiftly turn to barely concealed horror.

Teaching knowledge, as has been discussed before on spiked, has rarely been popular among the Rousseau-inspired, supposedly child-centred progressives of the educational world. It began to go more seriously out of fashion in the 1970s. Today, when every 10-year-old has a smart phone in their back pocket, actually teaching them stuff is seen as an unnecessary imposition on their individual creativity, serving no other end than future pub-quiz success. Working with children, rather than teaching knowledge, is considered altogether nicer; what’s more, it conveniently avoids the need for complex decisions to be made about what is most important in any particular subject. Rather than imposing their authority on children, teachers can be simply ‘guides on the side’, creating a learning environment through which children can determine their own path. What lies behind many of these entrenched ideas is a fundamental misunderstanding of what knowledge actually is.

Unfortunately, as a few voices in the educational world are beginning to make clear, left to their own devices children generally learn little and creativity is stifled rather than unleashed. Michael Young has been making the case for ‘bringing knowledge back in’ for many years now. More recently, people like Daisy Christodoulou, Toby Young and Tom Bennett have joined those chipping away at the child-centred, anti-knowledge orthodoxy. This is definitely a trend to welcome. And when knowledge-centred teaching goes against everything the educational establishment stands for, it is important to get the arguments right.

William Kitchen’s book, Authority and the Teacher, is a useful addition to the debate. Kitchen makes a convincing case that ‘any education without knowledge transmission is not an education at all’. The central premise of his book is his claim that ‘the development of knowledge requires a submission to the authority of a master expert: the teacher’. Kitchen argues that it is the teacher’s authority that makes imparting knowledge possible; in the absence of authority, teaching becomes simply facilitation and knowledge becomes inaccessible. He is careful to delineate authority from power, and he locates teachers’ authority within their own subject knowledge, which in turn is substantiated and held in check through membership of a disciplinary community. Without ‘the authority of the community and the practice,’ he argues, the notion of ‘correctness’ loses its meaning and there is no longer any sense to the passing of educational judgements.

February 11, 2015

Google this week unveiled the latest iteration of its driverless-car experiment, a prototype vehicle with no steering wheel or brake pedal. Reactions came in three main modes: terror, in the case of traditional automobile manufacturers; gee-whiz enthusiasm among nerds such as yours truly; and, most important, withering criticism from assorted design and automotive critics, who denounced the vehicles’ 1998-iMac-on-wheels aesthetic as too cute by a (driverless) mile, an Edsel as reimagined by Jony Ive. Every wonder that comes (in this case literally) down the pike is met with a measure of scorn by various aficionados and sundry mavens, who are annoying but who also are, regardless of whether they intend or realize it, champions of civilization. The iterative, evolutionary process of product design and refinement is the main engine of progress in material standards of living on this planet, and every condescending, self-righteous snob who pronounces every innovation not quite good enough is making humanity better off.

The Google driverless car may turn out to be the iPod of the automotive world, or it may fail. It may be the case that another firm (though I would not bet on General Motors) will produce a better version; it is more likely that, as with traditional cars, dozens of firms will offer scores of competing products, each serving a different need or taste. It may be the case that the most economically consequential application of the technology is moving cargo rather than people. Or maybe people won’t like them; you never can tell. What is important is that the evolutionary process be allowed to play out with a minimum of political interference.

January 4, 2015

The Oatmeal got a chance to ride in one of Google’s self-driving cars, and learned six things from his experience:

2. Google self-driving cars are timid.

The car we rode in did not strike me as dangerous. It struck me as cautious. It drove slowly and deliberately, and I got the impression that it’s more likely to annoy other drivers than to harm them. Google can adjust the level of aggression in the software, and the self-driving prototypes currently tooling around Mountain View are throttled to act like nervous student drivers.

In the early versions they tested on closed courses, the vehicles were programmed to be highly aggressive. Apparently during these aggression tests, which involved obstacle courses full of traffic cones and inflatable crash-test objects, there were a lot of screeching brakes and roaring engines and terrified interns. Although impractical on the open road, part of me wishes I could have experienced that version as well.

In the Washington Post, Lindsey Kaufman recounts her experience when her workplace changed to the “open-office model”:

A year ago, my boss announced that our large New York ad agency would be moving to an open office. After nine years as a senior writer, I was forced to trade in my private office for a seat at a long, shared table. It felt like my boss had ripped off my clothes and left me standing in my skivvies.

Our new, modern Tribeca office was beautifully airy, and yet remarkably oppressive. Nothing was private. On the first day, I took my seat at the table assigned to our creative department, next to a nice woman who I suspect was an air horn in a former life. All day, there was constant shuffling, yelling, and laughing, along with loud music piped through a PA system. As an excessive water drinker, I feared my co-workers were tallying my frequent bathroom trips. At day’s end, I bid adieu to the 12 pairs of eyes I felt judging my 5:04 p.m. departure time. I beelined to the Beats store to purchase their best noise-cancelling headphones in an unmistakably visible neon blue.

Despite its obvious problems, the open-office model has continued to encroach on workers across the country. Now, about 70 percent of U.S. offices have no or low partitions, according to the International Facility Management Association. Silicon Valley has been the leader in bringing down the dividers. Google, Yahoo, eBay, Goldman Sachs and American Express are all adherents. Facebook CEO Mark Zuckerberg enlisted famed architect Frank Gehry to design the largest open floor plan in the world, housing nearly 3,000 engineers. And as a businessman, Michael Bloomberg was an early adopter of the open-space trend, saying it promoted transparency and fairness. He famously carried the model into city hall when he became mayor of New York, making “the Bullpen” a symbol of open communication and accessibility to the city’s chief.

These new floor plans are ideal for maximizing a company’s space while minimizing costs. Bosses love the ability to keep a closer eye on their employees, ensuring clandestine porn-watching, constant social media-browsing and unlimited personal cellphone use isn’t occupying billing hours. But employers are getting a false sense of improved productivity. A 2013 study found that many workers in open offices are frustrated by distractions that lead to poorer work performance. Nearly half of the surveyed workers in open offices said the lack of sound privacy was a significant problem for them and more than 30 percent complained about the lack of visual privacy. Meanwhile, “ease of interaction” with colleagues — the problem that open offices profess to fix — was cited as a problem by fewer than 10 percent of workers in any type of office setting. In fact, those with private offices were least likely to identify their ability to communicate with colleagues as an issue. In a previous study, researchers concluded that “the loss of productivity due to noise distraction … was doubled in open-plan offices compared to private offices.”

I work in the software industry, and it’s been nearly 20 years since I last had a private office. Every company I’ve worked for since then has either been consciously moving in the open-office direction or been unwilling to spend the money to partition whatever open space it had. Sometimes I even get nostalgic for cube farms…

Back in October, we noted that Spain had passed a ridiculously bad Google News tax, in which it required any news aggregator to pay for snippets and actually went so far as to make it an “inalienable right” to be paid for snippets — meaning that no one could choose to let any aggregator post snippets for free. Publishers have to charge any aggregator. This is ridiculous and dangerous on many levels. As we noted, it would be deathly for digital commons projects or any sort of open access project, which thrive on making content reusable and encouraging the widespread sharing of such content.

Apparently, it’s also deathly for Google News in Spain. A few hours ago, Google announced that due to this law, it was shutting down Google News in Spain, and further that it would be removing all Spanish publications from the rest of Google News. In short, Google went for the nuclear option in the face of a ridiculously bad law:

But sadly, as a result of a new Spanish law, we’ll shortly have to close Google News in Spain. Let me explain why. This new legislation requires every Spanish publication to charge services like Google News for showing even the smallest snippet from their publications, whether they want to or not. As Google News itself makes no money (we do not show any advertising on the site) this new approach is simply not sustainable. So it’s with real sadness that on 16 December (before the new law comes into effect in January) we’ll remove Spanish publishers from Google News, and close Google News in Spain.

Every time there have been attempts to get Google to cough up some money to publishers in this or that country, people (often in our comments) suggest that Google should just “turn off” Google News in those countries. Google has always resisted such calls. Even in the most extreme circumstances, it’s just done things like removing complaining publications from Google News, or posting the articles without snippets. In both cases, publishers quickly realized how useful Google News was in driving traffic and capitulated. In this case, though, it’s not up to the publishers. It’s entirely up to the law.

October 24, 2014

If you have a need for system icons and don’t want to create your own (or, like me, have no artistic skills), you might want to look at a recent Google Design set that is now open source:

Today, Google Design are open-sourcing 750 glyphs as part of the Material Design system icons pack. The system icons contain icons commonly used across different apps, such as icons used for media playback, communication, content editing, connectivity, and so on. They’re equally useful when building for the web, Android or iOS.

October 7, 2014

At Techdirt, Mike Masnick reports on the first New York Times articles to be removed from Google’s search indices under the European “right to be forgotten” regulations:

Over the weekend, the NY Times revealed that it is the latest publication to receive notification from Google that some of its results will no longer show up for searches on certain people’s names, under the whole “right to be forgotten” nuttiness going on in Europe these days. As people in our comments have pointed out in the past, it’s important to note that the stories themselves aren’t erased from Google’s index entirely — they just won’t show up when someone searches on the particular name of the person who complained. Still, the whole effort is creating a bit of a Streisand Effect in calling new attention to the impacted articles.

In this case, the NY Times was notified of five articles that were caught up in the right to be forgotten process. Three of the five involved semi-personal stuff, so the Times decided not to reveal what those stories were (even as it gently mocks Europe for not believing in free speech):

Of the five articles that Google informed The Times about, three are intensely personal — two wedding announcements from years ago and a brief paid death notice from 2001. Presumably, the people involved had privacy reasons for asking for the material to be hidden.

I can understand the Times’ decision not to reveal those articles, but it still does seem odd. You can understand why people might not want their wedding announcements findable, but they were accurate at the time, so it seems bizarre to have them no longer associated with your name.

CInsideMedia has done it again with exclusive access to the Royal Navy destroyer HMS Cavalier at the Historic Dockyard, Chatham.

The Ca-class destroyer celebrated her 70th anniversary on 5th April 2014, having served during the Second World War before being refitted in 1957, when her midship torpedo tubes were removed and replaced with anti-submarine Squid mortars.

Following on from the massive success of CInsideMedia’s Google Maps Business View tour of HMS Ocelot, The Historic Dockyard Chatham is making it possible for anyone in the world to ‘virtually visit’ this National Destroyer Memorial and last surviving Royal Navy Second World War Destroyer.

Over two days of exclusive access, the CInsideMedia team were guided around the destroyer by the Dockyard’s Scott Belcher (Duty Manager: Visitor Operations, Security and Health and Safety) and Chris Tutt (Marketing).

The team captured over 350 panoramas, meticulously covering almost every deck of the 363 ft, 1,710-ton former Arctic convoy vessel, including the gear room and engine room, which are normally accessible only on special request due to Health & Safety restrictions and limited access.

July 28, 2014

The National Journal’s Alex Brown talks about a federal agency facing the end of the line thanks to search engines like Google:

A little-known branch of the Commerce Department faces elimination, thanks to advances in technology and a snarkily named bill from Sens. Tom Coburn and Claire McCaskill.

The National Technical Information Service compiles federal reports, serving as a clearinghouse for the government’s scientific, technical, and business documents. The NTIS then sells copies of the documents to other agencies and the public upon request. It’s done so since 1950.

But Coburn and McCaskill say it’s hard to justify 150 employees and $66 million in taxpayer dollars when almost all of those documents are now available online for free.

Enter the Let Me Google That for You Act.

“Our goal is to eliminate you as an agency,” the famously grumpy Coburn told NTIS Director Bruce Borzino at a Wednesday hearing. Pulling no punches, Coburn suggested that any NTIS documents not already available to the public be put “in a small closet in the Department of Commerce.”

H/T to Jim Geraghty for the link. He assures us that despite any similarities to situations portrayed in his recent political novel The Weed Agency, he didn’t make this one up.

July 15, 2014

In Forbes, Tim Worstall ignores the slogans to follow the money in the Net Neutrality argument:

The FCC is having a busy time of it as their cogitations into the rules about net neutrality become the second most commented upon in the organisation’s history (second only to Janet Jackson’s nip-slip which gives us a good idea of the priorities of the citizenry). The various internet content giants, the Googles, Facebooks and so on of this world, are arguing very loudly that strict net neutrality should be the standard. We could, of course attribute this to all in those organisations being fully up with the hippy dippy idea that information just wants to be free. Apart from the obvious point that Zuckerberg, for one, is a little too young to have absorbed that along with the patchouli oil we’d probably do better to examine the underlying economics of what’s going on to work out why people are taking the positions they are.

Boiling “net neutrality” down to its essence the argument is about whether the people who own the connections to the customer, the broadband and mobile airtime providers, can treat different internet traffic differently. Should we force them to be neutral (thus the “neutrality” part) and treat all traffic exactly the same? Or should they be allowed to speed up some traffic, slow down other, in order to prioritise certain services over others?

We can (and many do) argue that we the consumers are paying for this bandwidth so it’s up to us to decide and we might well decide that they cannot. Others might (and they do) argue that certain services require very much more of that bandwidth than others, further, require a much higher level of service, and it would be economically efficient to charge for that greater volume and quality. For example, none of us would mind all that much if there was a random second or two delay in the arrival of a gmail message but we’d be very annoyed if there were random such delays in the arrival of a YouTube packet. Netflix would be almost unusable if streaming were subject to such delays. So it might indeed make sense to prioritise such traffic and slow down other to make room for it.

You can balance these arguments as you wish: there’s not really a “correct” answer to this, it’s a matter of opinion. But why are the content giants all arguing for net neutrality? What’s their reasoning?

As you’d expect, it all comes down to the money. Who pays more for what under a truly “neutral” model and who pays more under other models. The big players want to funnel off as much of the available profit to themselves as possible, while others would prefer the big players reduced to the status of regulated water company: carrying all traffic at the same rate (which then allows the profits to go to other players).
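The prioritisation Worstall describes can be sketched as a simple priority queue: latency-sensitive traffic (video, voice) jumps ahead of delay-tolerant traffic (email, bulk transfers). The traffic classes and priority values below are hypothetical, chosen only to illustrate the non-neutral model; under strict neutrality the queue would instead be plain first-in, first-out.

```python
# Minimal sketch of non-neutral packet scheduling: a priority queue
# that serves latency-sensitive traffic before bulk traffic.
# Traffic classes and priority values are hypothetical.
import heapq

PRIORITY = {"video": 0, "voice": 0, "email": 2, "bulk": 3}  # lower = sent sooner

def schedule(packets):
    """Return packet kinds in the order a prioritising router would send
    them; ties are broken by arrival order to keep the sort stable."""
    heap = [(PRIORITY[kind], arrival, kind) for arrival, kind in enumerate(packets)]
    heapq.heapify(heap)
    order = []
    while heap:
        _, _, kind = heapq.heappop(heap)
        order.append(kind)
    return order

print(schedule(["email", "video", "bulk", "video"]))
# → ['video', 'video', 'email', 'bulk']
```

The economic argument in the excerpt is precisely about who gets to set (and charge for) the values in that priority table.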

June 18, 2014

Tim Worstall asks when it would be appropriate for your driverless car to kill you:

Owen Barder points out a quite delightful problem that we’re all going to have to come up with some collective answer to over the driverless cars coming from Google and others. Just when is it going to be acceptable that the car kills you, the driver, or someone else? This is a difficult public policy question and I’m really not sure who the right people to be trying to solve it are. We could, I guess, given that it is a public policy question, turn it over to the political process. It is, after all, there to decide on such questions for us. But given the power of the tort bar over that process I’m not sure that we’d actually like the answer we got. For it would most likely mean that we never do get driverless cars, at least not in the US.

The basic background here is that driverless cars are likely to be hugely safer than the current human directed versions. For most accidents come about as a result of driver error. So, we expect the number of accidents to fall considerably as the technology rolls out. This is great, we want this to happen. However, we’re not going to end up with a world of no car accidents. Which leaves us with the problem of how do we program the cars to work when there is unavoidably going to be an accident?

[…]

So we actually end up with two problems here. The first being the one that Barder has outlined, which is that there’s an ethical question to be answered over how the programming decisions are made. Seriously, under what circumstances should a driverless car, made by Google or anyone else, be allowed to kill you or anyone else? The basic Trolley Problem is easy enough, kill fewer people by preference. But when one is necessary, which one? And then a second problem which is that the people who have done the coding are going to have to take legal liability for that decision they’ve made. And given the ferocity of the plaintiff’s bar at times I’m not sure that anyone will really be willing to make that decision and thus adopt that potential liability.

Clearly, this needs to be sorted out at the political level. Laws need to be made clarifying the situation. And hands up everyone who thinks that the current political gridlock is going to manage that in a timely manner?
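The unsettling thing about the “kill fewer people by preference” rule is how trivially it reduces to code. The sketch below is a toy, with hypothetical maneuvers and casualty counts; a real system would weigh probabilities and uncertainties, not certain outcomes, which is exactly where the hard liability questions begin.

```python
# Toy sketch of "kill fewer people by preference" as a cost function.
# Maneuver names and casualty counts are hypothetical.

def choose_maneuver(options):
    """options: dict mapping maneuver name -> expected casualties.
    Returns the maneuver minimizing expected casualties."""
    return min(options, key=options.get)

crash_options = {
    "swerve_left":  3,  # endangers a group of pedestrians
    "swerve_right": 1,  # endangers a single cyclist
    "brake_only":   2,  # endangers the car's occupants
}
print(choose_maneuver(crash_options))  # → swerve_right
```

Every number in that table is a judgment someone had to program in advance, and, as the excerpt notes, potentially answer for in court.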

Michael Geist talks about another court attempting to push local rules into other jurisdictions online — in this case it’s not the European “right to be forgotten” nonsense, it’s unfortunately a Canadian court pulling the stunt:

In the aftermath of the European Court of Justice “right to be forgotten” decision, many asked whether a similar ruling could arise in Canada. While a privacy-related ruling has yet to hit Canada, last week the Supreme Court of British Columbia relied in part on the decision in issuing an unprecedented order requiring Google to remove websites from its global index. The ruling in Equustek Solutions Inc. v. Jack is unusual since its reach extends far beyond Canada. Rather than ordering the company to remove certain links from the search results available through Google.ca, the order intentionally targets the entire database, requiring the company to ensure that no one, anywhere in the world, can see the search results. Note that this differs from the European right to be forgotten ruling, which is limited to Europe.

The implications are enormous since if a Canadian court has the power to limit access to information for the globe, presumably other courts would as well. While the court does not grapple with this possibility, what happens if a Russian court orders Google to remove gay and lesbian sites from its database? Or if Iran orders it to remove Israeli sites from the database? The possibilities are endless since local rules of freedom of expression often differ from country to country. Yet the B.C. court adopts the view that it can issue an order with global effect. Its reasoning is very weak, concluding that:

the injunction would compel Google to take steps in California or the state in which its search engine is controlled, and would not therefore direct that steps be taken around the world. That the effect of the injunction could reach beyond one state is a separate issue.

Unfortunately, it does not engage effectively with this “separate issue.”