from the don't-be-evil,-guys dept

Back in 2014, we wrote about a campaign by Yelp called "Focus on the User," in which it made a very compelling argument that Google was treating Yelp (and TripAdvisor) content unfairly. Without going into all of the details, Yelp's main complaint was that while Google uses its famed relevance algorithm to determine which content to point you to in its main search results, the "One Box" at the top of Google's results page draws only on Google's own content. Four years ago, the Focus on the User site presented strong evidence that Google's users actually had a better overall experience if the answers for things like local content (such as retailer/restaurant reviews) in the One Box were ranked according to Google's algorithm, rather than pulled solely from Google's own "Local" content (or whatever they call it these days).

As we noted then, the argument was pretty compelling, but we worried about Yelp using the site to ask the EU to force Google to change how its site functioned. As we wrote at the time:

... the results are compelling. Using Google's own algorithm to rank all possible reviews seems like a pretty smart way of doing things, and likely to give better results than just using Google's (much more limited) database of reviews. But here's the thing: while I completely agree that this is how Google should offer up reviews in response to "opinion" type questions, I still am troubled by the idea that this should be dictated by government bureaucrats. Frankly, I'm kind of surprised this isn't the way Google operates, and it's a bit disappointing that the company doesn't just jump on this as a solution voluntarily, rather than dragging it out and having the bureaucrats force it upon them.

So while the site is fascinating, and the case is compelling, it still has this problem of getting into a very touchy territory where we're expecting governments to design the results of search engines. It seems like Yelp, TripAdvisor and others can make the case to Google and the public directly that this is a better way to do things, rather than having the government try to order Google to use it.

It took four years, but it looks like Yelp is at least taking some of my advice. The company has relaunched the "Focus on the User" site, but positioned it more towards convincing Google employees to change how the site handles One Box content, rather than just asking the government to mandate the change. This is a good step, and I'm still flabbergasted that Google hasn't done this already. Not only would it give users better overall results, but it would undercut many of the antitrust arguments being flung at Google these days (mainly in the EU). It's a simple solution, and Google should seriously consider it.

That said, while Yelp has shifted the focus of that particular site, it certainly has not given up on asking the government to punish Google. Just as it was relaunching the site, it was also filing a new antitrust complaint in the EU, and I'm still concerned about this approach. It's one thing to argue that Google should handle aspects of how its website works in a better way. It's another to have the government force the company to do it that way. The latter approach creates all sorts of potential consequences -- intended or unintended -- that could have far-reaching reverberations on the internet, perhaps even the kind that would boomerang around and hurt Yelp as well.

Yelp makes a strong argument that Google's approach to the One Box is bad and doesn't deliver the best overall results for its users. I'm glad that it's repurposed its site to appeal to Google employees, and am disappointed that Google hasn't made this entire issue go away by actually revamping how the One Box works. But calling on the government to step in and determine how Google should design its site is still a worrisome approach.

from the fakey-fakey dept

Fake news stories are a scourge. Unlike parody outlets such as The Onion, these are operations that produce false news stories simply to rack up clickthroughs and generate advertising revenue. And it isn't just a couple of your Facebook friends and that weird uncle of yours who get fooled by these things; even incredibly handsome and massively intelligent writers such as myself are capable of being completely misled into believing that a bullshit news story is real.

Facebook is generally seen as a key multiplier in this false force of non-news, which is probably what led the social media giant to declare war on fake news sites a year or so back. So how'd that go? Well, the results, as analyzed over at BuzzFeed, seem to suggest that Facebook has either lost the war it declared or is losing badly enough that it might as well give up.

To gauge Facebook’s progress in its fight, BuzzFeed News examined data across thousands of posts published to the fake news sites’ Facebook pages, and found decidedly mixed results. While average engagements (likes + shares + comments) per post fell from 972.7 in January 2015 to 434.78 in December 2015, they jumped to 827.8 in January 2016 and a whopping 1,304.7 in February.

Some of the posts on the fake news sites’ pages went extremely viral many months after Facebook announced its crackdown. In August, for instance, an Empire News story reporting that Boston Marathon bombing suspect Dzhokhar Tsarnaev sustained serious injuries in prison received more than 240,000 likes, 43,000 shares, and 28,000 comments on its Facebook page. The incident was pure fiction, but still spread like wildfire on the platform. An even less believable September post about a fatal gang war sparked by the “Blood” moon was shared over 22,000 times from the Facebook page of Huzlers, another fake news site.

So, how did this war go so wrong for Facebook? To start, Facebook relied heavily on user-submitted reports flagging a link or site as fake news. That sounds great -- aggregating user feedback has worked quite well in other arenas -- but here it was doomed from the start. The purpose of fake news sites is, after all, to fool people, and fooled people obviously aren't reporting the links as fake. Even when a reader eventually figures out that a link was fake, perhaps after sharing it and watching the comments prove it false, how many of those people go back and report it? Not enough, clearly, as the fake news scourge marches on.

Another layer of the problem appears to be the faith and trust the general public puts in the famous people they follow, who have themselves been fooled with startling regularity.

Take D.L. Hughley, for example. The comedian, whose page is liked by more than 1.7 million people, showed up twice in the Huzlers logs. One fictitious Huzlers story he posted, about Magic Johnson donating blood, garnered more than 10,000 shares from his page. Hughley, who did not respond to BuzzFeed News’ request for comment, also shared four National Report links in 2015.

Radio stations also frequently post fake news. The Florida-based 93XFM was one of a number of radio stations BuzzFeed News discovered sharing Huzlers posts in 2015. Asked about one April post linking to a Huzlers story about a woman smoking PCP and chewing off her boyfriend’s penis, a 93XFM DJ named Sadie explained that fact-checking Facebook posts isn’t exactly a high priority.

In other words, people and organizations that the public assumes to be credible sources of information are sharing these fake news articles, and the public turns off its collective brain and assumes them to be true. After all, if we can't trust D.L. Hughley then, really, who can we trust? And when even major outlets such as the New York Times have included links to The National Report in their posts, do we really expect people to cast a wary eye toward such an established news peddler?

Well, we should, because the ultimate problem here is equal parts a polarized American public and a terrifying level of credulity. Many of these fake news pieces carry headlines for stories that some people want to believe, typically for ideological reasons. This is why a family party recently saw me trying to explain to my grandmother that, no, Michelle Obama probably does not in fact have a penis. That's a true story, friends, and it stemmed from a fake news article. The willingness to believe such a thing is extreme, certainly, but stories of the Boston Bomber getting beaten in prison fuel the same desire for the story to be true.

The war is lost. Fake news goes on unabated. Long live Michelle Obama's penis.

Rathi provides a hypothetical situation in which this algorithm might prove useful. A person with a rare medical condition they'd like to keep private visits a clinic that happens to be under investigation for fraud. The patient often calls a family member for medical advice -- an aunt who works at another clinic. The aunt's clinic is also under investigation.

When the investigation culminates in a criminal case, there's a good chance the patient -- a "non-target" -- may have their sensitive medical information exposed.

If the government ends up busting both clinics, there’s a risk that people could find out about your disease. Some friends may know about your aunt and that you visit some sort of clinic in New York; government records related to the investigation, or comments by officials describing how they built their case, may be enough for some people to draw connections between you, the specialized clinic, and the state of your health.

Even though this person isn't targeted by investigators, the unfortunate byproduct is diminished privacy. The algorithm, detailed in a paper published by the National Academy of Sciences, aims to add a layer of filtering to investigative efforts. As Kearns describes it, the implementation would both warn of potential collateral damage and inject "noise" to minimize the accidental exposure of non-targets.

For such cases where there are only a few connections between people or organizations under suspicion, Kearns’s algorithm would warn investigators that taking action could result in a breach of privacy for selected people. If a law were to require a greater algorithmic burden of proof for medical-fraud cases, investigators would need to find alternative routes to justify going after the New York clinic.

But if there were lots of people who could serve as links between the two frauds, Kearns’s algorithm would let the government proceed with targeting and exposing both clinics. In this situation, the odds of compromising select individuals’ privacy are lower.
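To make that gating step concrete, here's a minimal sketch of the idea in Python. To be clear, this is my own illustration of the behavior described above, not the actual algorithm from the paper: the contact-graph format, the bridge-count threshold, and the Laplace noise scale are all assumptions invented for the example.

import random

def count_bridges(contacts, clinic_a, clinic_b):
    # People connected to BOTH entities under investigation are the
    # ones whose privacy is at risk if the case becomes public.
    return sum(1 for links in contacts.values()
               if clinic_a in links and clinic_b in links)

def laplace_noise(scale):
    # The difference of two exponential draws is a Laplace(0, scale)
    # sample -- the usual "noise" in differential-privacy mechanisms.
    return random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)

def may_expose(contacts, clinic_a, clinic_b, threshold=50, scale=5.0):
    # Few bridging individuals means each one is easy to re-identify,
    # so the check warns investigators off; many bridges means the
    # case can proceed with little risk to any one person.
    noisy_count = count_bridges(contacts, clinic_a, clinic_b) + laplace_noise(scale)
    return noisy_count >= threshold

# Toy usage: only the aunt links the two clinics, so the (noisy)
# bridge count sits far below the threshold and the check almost
# surely returns False -- i.e., proceeding would likely expose her.
contacts = {
    "patient":  {"clinic_ny", "aunt"},
    "aunt":     {"clinic_ny", "clinic_other"},
    "stranger": {"clinic_other"},
}
print(may_expose(contacts, "clinic_ny", "clinic_other"))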

Potentially useful, but it suffers from a major flaw: the government.

Of course, if an investigation focused on suspected terrorism instead of fraud, the law may allow the government to risk compromising privacy in the interest of public safety.

Terrorism investigations will trump almost everything else, including privacy protections supposedly guaranteed by our Constitution. Courts have routinely sided with the government when it chooses to sacrifice its citizens' privacy for security.

It's highly unlikely investigative or intelligence agencies have much of an interest in protecting the privacy of non-targeted citizens, even in non-terrorist-related surveillance -- not if it means using alternate (read: "less effective") investigative methods or techniques. It has been demonstrated time and time again that law enforcement is more interested in the most direct route to what it seeks, no matter how much collateral damage is generated.

The system has no meaningful deterrents built into it. Violations are addressed after the fact, utilizing a remedy process that can be prohibitively expensive for those whose rights have been violated. On top of that, multiple layers of immunity shield government employees from the consequences of their actions and, in some cases, completely thwart those seeking redress for their grievances.

The algorithm may prove useful in other areas -- perhaps in internal investigations performed by private, non-state parties -- but our government is generally uninterested in protecting the rights it has granted to Americans. Too many law enforcement pursuits (fraud, drugs, terrorism, etc.) are considered more important than the rights (and lives) of those mistakenly caught in the machinery. If the government can't be talked out of firing flashbangs through windows or predicating drug raids on random plant matter found in someone's trash can, then it's not going to reroute investigations just because a piece of software says a few people's most private information might be exposed.

from the urls-we-dig-up dept

Artificial intelligence projects are making significant progress (even though humans seem to keep moving the goalposts for what qualifies as AI). We haven't created any self-aware computers yet, but some chips and software are mimicking how the human brain works more closely. There still isn't much agreement on how to measure intelligence, though if researchers keep working on different approaches to creating thinking machines, maybe we'll figure out more about both ourselves and how to make computers learn and interact like people.

from the urls-we-dig-up dept

Quantum computers are starting to become a commercial reality as multiple companies take advantage of the strange laws of quantum physics to solve complex mathematical problems. The hardware is difficult enough to build, but even once it exists, programmers have to figure out how to write software for qubits. Here are just a few links on these new computers that aren't quite ready to replace desktop PCs.

from the urls-we-dig-up dept

Some animals, like cats, can see much better in the dark than we humans can. (Cats can't see in total darkness, though; they just need about one-sixth of the light our eyes need.) Other animals, like bats, use echolocation to "see" without any light at all. Some people have figured out how to use echolocation, too, but until we start genetically engineering our eyeballs to be more like a cat's, we'll have to use special cameras and sensors to see in low-light situations (barring the use of flashlights). Here are just a few examples of night vision tech.

from the urls-we-dig-up dept

Trying to find a date using statistics and computers isn't exactly a new idea. (Punch cards were used in some of the earliest versions of computer dating.) As technology has improved, you might expect that dating has gotten better as well, but some modifications of the Drake equation show just how long the odds really are. Here are a few more data points in the realm of romantic relationships.
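That Drake-style estimate is easy to reproduce: start with a big population and multiply down through a chain of filtering fractions, in the spirit of Peter Backus's tongue-in-cheek "Why I Don't Have a Girlfriend" paper. Here's a back-of-the-envelope version in Python; every number below is an assumption picked for illustration, not a figure from any particular study.

# Multiply a metro population down through filters, Drake-style.
population = 8_000_000  # people in your metro area (assumed)
filters = {
    "right sex":         0.50,  # looking for one sex
    "right age range":   0.20,  # close enough in age
    "single":            0.50,  # not already attached
    "mutual attraction": 0.05,  # say 1 in 20, in both directions
    "compatible":        0.10,  # could actually get along long-term
}

candidates = float(population)
for name, fraction in filters.items():
    candidates *= fraction
    print(f"after '{name}': {candidates:,.0f} remain")

# Final tally: about 2,000 plausible partners out of 8 million --
# and that's before either of you manages to find the other.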

from the urls-we-dig-up dept

Some folks look at online dating services as a tool for meeting like-minded people (and, potentially, romantic partners). But as with almost anything on the internet, there are also people out there trying to game the system and figure out how the matchmaking algorithms work. (Though sometimes the process is a bit simpler.) Just to gear up for Valentine's Day later this week, here are just a few examples of people taking online dating to an extreme.

from the urls-we-dig-up dept

The first video games you could play at home were introduced in the early 1970s, and home gaming grew steadily more popular until the market crashed in the early '80s. Fortunately, video game consoles took off again with the introduction of the Nintendo Entertainment System in 1985, and we've all seen how popular video games are nowadays. Here are just a few links for remembering those old 8-bit/16-bit games that kids today might not even recognize.