from the industrial-chicanery dept

Shortly after the FCC voted to begin killing net neutrality earlier this month, we noted how a mysterious bot began spamming the FCC comment system with posts favoring the dismantling of net neutrality. Analysis of the bot indicates it has simply been pulling names from a hacked database of some kind, posting the same exact missive over and over again. The scale of the informational assault isn't subtle; one estimate suggests that more than 40% of the nearly 3 million comments filed so far are courtesy of this bot, the operator of which still hasn't been identified.

The original report detailing this bot activity actually managed to get a hold of many of the people whose names are being used, and confirmed that these folks never left comments at the FCC website -- and in many cases have no idea what net neutrality even is. Some of the supposed anti-net neutrality commenters are, well... no longer living:

It is uncertain how these individuals' personal information was obtained, but it appears that a significant portion of the names and addresses used to post these comments were culled from government files stolen during a number of different network breaches over the years. Many of the addresses associated with these people's names are outdated, and according to the digital rights group Fight for the Future, in at least two cases a comment was filed to the FCC's website by people who recently died.

People who aren't dead and had their names used this way aren't particularly happy about it. Net neutrality activist group Fight for the Future recently launched a website letting users check whether their name has been used, and followed that up with a letter to the FCC, signed by more than two dozen people whose names have been (ab)used, urging the FCC to discard the obviously fraudulent comments and help investigate who's behind the campaign:

"Based on numerous media reports [2], nearly half a million Americans may have been impacted by whoever impersonated us in a dishonest and deceitful campaign to manufacture false support for your plan to repeal net neutrality protections. While it may be convenient for you to ignore this, given that it was done in an attempt to support your position, it cannot be the case that the FCC moves forward on such a major public debate without properly investigating this known attack."

But that's precisely the problem. Because the phony bot comments support the FCC's frontal attack on net neutrality, there's every indication that the FCC intends to do nothing about any of this. And when the final vote comes to pass later this year, you can be sure that these comments will either be used as evidence of support for the FCC's policies serving the largest ISPs, or be used to suggest that the massive outpouring of support for the agency's 2015 rules should be disregarded entirely.

The FCC is scheduled to continue fielding comments on its plan to kill net neutrality until August 16. If you're a living, breathing human being, you can add your thoughts to the proceeding here.

from the faux-outrage dept

As previously noted, the FCC has begun fielding comments on its plan to dismantle net neutrality protections. As of the writing of this post, nearly 556,000 users have left comments on the FCC's plan to roll back the rules, which will begin in earnest with a likely 2-1 partisan vote on May 18. The lion's share of that comment total was driven by John Oliver's recent rant on HBO. Many others are the result of what I affectionately call "outrage-o-matic" e-mail campaigns by either net neutrality activists or think tanks that let people comment without having to expend calories on original thought. But observers quickly noticed one particular anti-net neutrality comment being filed over and over again, word for word:

"The unprecedented regulatory power the Obama Administration imposed on the internet is smothering innovation, damaging the American economy and obstructing job creation. I urge the Federal Communications Commission to end the bureaucratic regulatory overreach of the internet known as Title II and restore the bipartisan light-touch regulatory consensus that enabled the internet to flourish for more than 20 years."

This in and of itself didn't seem like that big a deal, given that the aforementioned campaigns often let commenters quickly file a form letter with the agency.

But if this was a form letter, it was notable that the people filling it out had magically organized themselves in perfect alphabetical order. And when ZDNet decided to do a deeper dive into these alphabetical duplicate comments, they found that they appear to be produced by a bot that's grabbing the names from somewhere (perhaps public voter registration records or a previous data breach). What's more, the reporter managed to get a hold of many of the folks who purportedly filed the comments, and found several who say they never filed the comments in question, and have no idea what net neutrality even is:

"We reached out to two-dozen people by phone, and we left voicemails when nobody picked up. A couple of people late Tuesday called back and confirmed that they had not left any messages on the FCC's website. One of the returning callers specifically said they didn't know what net neutrality was. A third person reached in a Facebook message Tuesday also confirmed that they had not left any comments on any website."

Numerous Reddit users also spotted the bot campaign, and noted the language used by the 128,000 (and counting) phony commenters was pulled from a 2010 press release by the Center for Individual Freedom, which does not appear to be driving the comments with a corresponding campaign. As of this writing, nobody has identified the driver of the bot, and the FCC has stated it doesn't comment on public proceeding input.
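
For a sense of how straightforward this kind of detection is, here is a rough sketch of the sort of analysis described above: group identical comment texts, then check whether a cluster's filers show up in alphabetical order, something no organic letter-writing campaign produces. The CSV file, column names and cluster threshold below are hypothetical stand-ins for a cleaned-up export of the FCC's comment data, not anyone's published script.

# Rough sketch: flag duplicated comment texts whose submitters appear in
# alphabetical order -- a telltale sign of a scripted submission run.
import csv
from collections import defaultdict

groups = defaultdict(list)
with open("filings.csv", newline="", encoding="utf-8") as f:   # hypothetical export
    for row in csv.DictReader(f):
        # Normalize whitespace so trivially reformatted copies still match.
        key = " ".join(row["text_of_comment"].split()).lower()
        groups[key].append((row["date_received"], row["name"]))

for text, filers in groups.items():
    if len(filers) < 1000:          # ignore small clusters
        continue
    filers.sort()                   # order the cluster by filing time
    names = [name.lower() for _, name in filers]
    if names == sorted(names):      # humans don't file in alphabetical order
        print(f"{len(filers)} identical comments filed A-to-Z: {text[:60]}...")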

ISPs do have a history of trying to artificially pad anti-net neutrality sentiment, since finding a critical mass of people who blindly support policies that only help companies like Comcast can be... difficult. As Vice News pointed out in 2014, a lobbying organization named the DCI Group (which receives funding from Verizon) paid individuals to flood websites and the FCC comment system with anti-net neutrality sentiment. Whether this is the work of a similar group, think tank, or other organization, you just know you have a quality argument when you need to pay people (or bot masters) to support your position.

from the outlawing-our-robot-overlords dept

Ticket scalpers have a bad rep. Critics deride them as "malicious" and "bad actors" and sometimes even deem them the primary cause of a purported nationwide "bot epidemic." Responding to fan complaints about the paucity of tickets, de facto monopolist Live Nation Entertainment points to the scourge of "bots" – software that allows scalpers to buy tickets en masse and resell them on secondary-market sites like SeatGeek and StubHub.

Instead of finding a solution on their own, ticket sellers want the federal government to do their policing for them. In July, Sens. Jerry Moran and Chuck Schumer introduced the BOTS Act, a bill that promises "equitable consumer access to tickets." The most recent Senate hearing on the issue featured compelling personal narratives about fans who weren't able to get cheap tickets to popular shows, such as the hit musical "Hamilton."

But closer analysis of the legislation's details casts doubt on whether it truly would benefit fans. Indeed, it clearly misses several crucial pieces of the puzzle.

A solution in search of a problem

The first question is obvious: are ticket-harvesting bots actually a significant problem? To be sure, those who seek to outlaw them are armed with anecdotes. Research from New York Attorney General Eric Schneiderman finds "at least tens of thousands of tickets per year" are acquired using bots. But given that Live Nation's Ticketmaster service sold 147 million tickets in 2012, even if bots acquired 100,000 tickets a year, that would still be significantly less than 1 percent of all tickets sold.

For its part, Ticketmaster estimates that "60 percent of the most desirable tickets for some shows" are purchased by bots. Leaving aside the profusion of qualifiers needed to make even that claim, it probably shouldn't be surprising that high-demand and underpriced tickets are the most likely to be resold. But this ignores that venues contribute to the problem by not making tickets available to the general public in the first place. For example, analysis of a 2013 Justin Bieber concert in Nashville, Tennessee, revealed that 92 percent of tickets were presold for credit-card promotions, to fan clubs, to VIP programs or to the artist. Even if bots bought 60 percent of the Bieber tickets released to the general public, it would represent only 4 percent of the show's seats.

'Unfairness' is in the eye of the beholder

The BOTS Act would punish digital ticket scalping as an "unfair and deceptive" practice. But the way tickets currently are sold is neither fair nor transparent. Schneiderman's report acknowledged that ticket sellers are complicit in limiting the general public's access to tickets. The investigation found a majority (56 percent, on average) of tickets are presold or put on hold for the most popular concerts. Given that managers and artists often resell these tickets to the highest bidders, it's clear why industry insiders support a crackdown on bots: the bots' operators compete with them directly in the resale market.

The culture long ago evolved to regard scalpers with moral repugnance, similar to the opprobrium reserved for price gougers, speculators and arbitrageurs. Indeed, Nobel laureate economist Alvin Roth has identified ticket scalping as a market in which repugnance discourages what would be otherwise efficient market activity. But norms can change. Usury – that is, charging interest on loans – also was once widely deemed morally repugnant, but modern financial markets could scarcely exist without it.

Writing in The New York Times, Harvard University economist Gregory Mankiw notes of his recent experience spending $2,500 for tickets to "Hamilton" that it "was only because the price was so high that I was able to buy tickets at all on such short notice." Mankiw's tale illustrates a frequently forgotten fact – namely, that tickets purchased by bots do end up in the hands of genuine fans. By making tickets available closer to the event date and by raising their perceived cost, scalpers also help ensure that venues fill.

When scalpers buy and resell tickets, they bear the risk of stale inventory so that primary ticket sellers don't have to. As with any investment, scalpers can lose money. When scalpers guess wrong, they have to sell tickets at below face value. This allows the market to clear and allows consumers to buy at a price that better matches how much they value the experience. Tightening controls on ticket resellers would expose primary ticket outlets to a liquidity and seat-inventory risk.

Scalpers also help provide crucial market information both to venues and to consumers. The prices paid in the secondary market signal to venues when tickets are underpriced and concerts are undervalued. Secondary-market vendors such as SeatGeek, for example, collect and share data on past ticket transactions to provide ticket cost analysis to vendors and fans. These services create value for consumers and shouldn't be suppressed.

Because it would raise the cost of using bots, the BOTS Act would leave fewer tickets available through services like StubHub and even Ticketmaster itself. It wouldn't kill the secondary market, but scalpers likely would raise prices to account for the higher risk that any given ticket will go unsold. It also would change the distribution of tickets to favor those willing to stand in line the longest, those who have the fastest internet connections or even just those who happen to have good timing or good luck. It's hard to see how any of this benefits fans.

Another expected effect of legislation like this would be to reduce innovation in ticket sales. Fueled by the demand for tickets, investors continue to fund new entrepreneurs in the event space. For example, the app Pogoseat allows existing ticket holders to browse and purchase potential seat upgrades from their smartphones while they are in the venue. Sites like SeatGeek provide information on the going prices of tickets, helping consumers gauge whether they are getting a good or bad deal. Punishing digital ticket resellers probably would scare away capital investments in apps that better match consumers with tickets, leading to worse outcomes for everyone.

The case against federal regulation

There are some subtle differences between the two versions of the BOTS Act currently wending their way through Congress. The House version – H.R. 5104, sponsored by Reps. Marsha Blackburn and Paul Tonko – would make the purchase and use of bots to acquire tickets a federal crime. The Senate bill is vaguer, prohibiting the "circumvention of control measures used by Internet ticket sellers to ensure equitable consumer access to tickets." The Senate version also could affect a broader range of user activities – for example, allowing primary ticket outlets to bar season-ticket holders from reselling their seats.

The law would empower the Federal Trade Commission to police compliance with the terms and conditions of private contracts. That sets a dangerous precedent. Under Sen. Schumer's vision, the FTC would target websites that assist in selling digitally scalped tickets, issue cease-and-desist orders and levy fines in the millions of dollars for unfair trade activities. In practice, the law would grant the entertainment industry a hammer to smash its competition in the resale market.

But little effort has been made to explain the case for federal involvement in an area in which state enforcement long has proven more than adequate. More than 30 states have scalping laws and 14 states ban the use of bots in ticket purchasing. Furthermore, ticket fraud and other coercive activities are already illegal under criminal law. A federal criminal statute would be both redundant and excessive.

For that matter, the industry appears perfectly capable of handling this issue on its own. Venues and primary ticket sellers can and do recover tickets from individuals who purchase them in violation of the terms and conditions. In 2007, Ticketmaster successfully sued software maker RMG Technologies for $18.2 million over programs designed to circumvent anti-scalping measures. Companies also spend big sums hiring machine-learning experts to outwit the bots. The BOTS Act would shift these enforcement costs to the federal government, and ultimately, to the taxpayers.

There are simpler solutions

Basic economics dictates that the easiest way to minimize scalping is either for venues to raise ticket prices or for artists to have many more concerts. The secondary market for tickets exists only because venues and artists routinely underprice and undersupply tickets. If artists truly want their fans to have access to lower ticket prices, they can hold concerts over consecutive nights or schedule them at larger venues. Increasing supply for the most popular concerts will shrink the secondary market.

Country singer Garth Brooks chose to add concerts to cities based on demand. His decision to disrupt the way concerts are scheduled made him the highest-paid country performer in 2016. Ticketmaster has also begun to price tickets based on supply and demand, and holds its own auctions. Major League Baseball instituted dynamic pricing in 2013. The Ultimate Fighting Championship circuit also uses dynamic pricing, making it more difficult for resellers to make a profit. These are much more direct ways to overcome inefficiencies in the ticket market.
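
None of these vendors publish their pricing formulas, but the underlying idea is simple enough to sketch: raise the face price as seats sell out and as the event approaches, shrinking the gap a scalper can capture. The function and coefficients below are made up purely for illustration; they are not any vendor's actual algorithm.

# Toy dynamic-pricing rule: price rises with sell-through and event proximity,
# so the primary seller captures demand that would otherwise go to resellers.
def dynamic_price(base_price, seats_sold, capacity, days_until_event):
    sell_through = seats_sold / capacity                  # 0.0 (empty) to 1.0 (sold out)
    demand_bump = 1.0 + 1.5 * sell_through ** 2           # accelerates as scarcity grows
    urgency_bump = 1.0 + 0.5 / max(days_until_event, 1)   # last-minute premium
    return round(base_price * demand_bump * urgency_bump, 2)

# Example: a $60 ticket with 80 percent of seats gone, two weeks before the show.
print(dynamic_price(60.00, 8000, 10000, 14))   # -> 121.8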

The BOTS Act would lock the industry into its current practices, effectively protecting insiders' business models at the expense of competitors and consumers. Live Nation controls about 85 percent of the primary ticket market. Without competitive pressure from other ticket sellers, secondary markets or customers, the firm has little incentive to improve how tickets are supplied.

Efforts to criminalize bots draw attention away from the larger conversation about how venues misallocate tickets in presales. They also detract from important policy questions about the role of government in enforcing private companies' terms and conditions. If Congress is genuinely interested in benefiting fans, it should allow entrepreneurs to find better ways to match consumer preferences and empower fans to choose how tickets are sold.

Anne Hobson is a technology policy fellow at the R Street Institute. Christopher Koopman is a senior research fellow with the Project for the Study of American Capitalism at the Mercatus Center at George Mason University.

from the please-don't-do-this dept

It's been really unfortunate to see various internet companies that absolutely should know better look to abuse the CFAA to attack people using tools to scrape public information off their websites. In the past few years, we've seen Facebook and Craigslist do this (with Facebook recently winning in court).

The latest lawsuit, this time from LinkedIn, appears to be more of the same, claiming that the scraping violates both the CFAA and the DMCA:

During periods of time since December 2015, and to this day, unknown persons and/or entities employing various automated software programs (often referred to as “bots”) have extracted and copied data from many LinkedIn pages. To access this information on LinkedIn’s site, the Doe Defendants circumvented several technical barriers employed by LinkedIn that prevent mass automated scraping, and have knowingly and intentionally violated various access and use restrictions in LinkedIn’s User Agreement, which they agreed to abide by in registering LinkedIn member accounts. In so doing, they have violated an array of federal and state laws, including the Computer Fraud and Abuse Act, 18 U.S.C. §§ 1030, et seq. (the “CFAA”), California Penal Code §§ 502 et seq., and the Digital Millennium Copyright Act, 17 U.S.C. §§ 1201 et seq. (the “DMCA”), and have engaged in unlawful acts of breach of contract, misappropriation, and trespass.

This is bullshit. Courts have directly held that violating a site's terms of service does not equate to a CFAA violation for "unauthorized access" or "exceeding authorized access." Here, it appears that LinkedIn is hoping that a terms of service violation, combined with attempts to get around technological protection measures, adds up to a CFAA violation.

I completely understand that LinkedIn may not like the fact that people are scraping its data, and that they've found ways around LinkedIn's attempts to block such scraping via technological means. But it's a dangerous slippery slope for a company to claim that a terms of service violation is a CFAA violation, and that getting around simple blocks is a DMCA 1201 anti-circumvention violation. Both theories are problematic: treating a terms of service violation as a CFAA violation is a stretch, and treating the circumvention of basic technical blocks as a DMCA violation -- even when no copyright infringement is involved -- is just as troubling.
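
For context, the "scraping" at issue here is technically mundane: a script fetches pages that any logged-out browser could load and pulls out the fields it wants. Here's a minimal, generic sketch of the technique; the URL and selectors are placeholders rather than LinkedIn's actual markup, and a real crawl would quickly run into rate limits, IP blocks and similar measures, presumably the kind of "technical barriers" the complaint describes.

# Minimal public-page scraper: fetch a page anyone can view and extract fields.
# The URL and CSS selectors below are placeholders for illustration only.
import requests
from bs4 import BeautifulSoup

resp = requests.get("https://example.com/public-profile/jane-doe",
                    headers={"User-Agent": "research-bot/0.1"}, timeout=10)
resp.raise_for_status()

soup = BeautifulSoup(resp.text, "html.parser")
record = {
    "name": soup.select_one("h1.profile-name").get_text(strip=True),
    "headline": soup.select_one("p.headline").get_text(strip=True),
}
print(record)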

Of course, this lawsuit, like the last one, is probably really designed to just sniff out who's running the bots, and to push them into a settlement where they'll stop doing so.

Still, this lawsuit seems particularly ridiculous coming just weeks after LinkedIn's founder and chairman, Reid Hoffman, funded a $250,000 disobedience award at MIT's Media Lab. The point of that award is to encourage people to engage in disobedience to change society in a positive way -- which is something that people often use scraping for. And yet, here his company is engaging in a legal battle that will make that kind of scraping much more risky. I know and like Hoffman, who is quite a smart, thoughtful and principled guy. And I have no idea if he even knew this lawsuit was going to be filed. But I think it sends the wrong message when he's encouraging useful hacking on the one hand, while his company (which, yes, was just sold to Microsoft) is, on the other, suing people for doing that very same kind of hacking.

from the gotta-be-a-better-way dept

I think most people agree that bots that drive up viewer/follower counts on various social media systems are certainly a nuisance, but are they illegal? Amazon-owned Twitch has decided to find out. On Friday, the company filed a lawsuit against seven individuals/organizations that are in the business of selling bots. There have been similar lawsuits in the past -- such as Blizzard frequently using copyright to go after cheater bots. Or even, potentially, Yelp suing people for posting fake reviews. When we wrote about the Yelp case, we noted that we were glad the company didn't decide to try a CFAA claim, and were even somewhat concerned about the claims that it did use, including breach of contract and unfair competition.

Unfortunately, Twitch's lawsuit not only uses those claims, but also throws in two very questionable ones: a CFAA claim and a trademark claim. I understand why Twitch's lawyers at Perkins Coie put that in, because that's what you do as a lawyer: put every claim you can think of into the lawsuit. But it's still concerning. The CFAA, of course, is the Computer Fraud and Abuse Act, which was put in place in the 1980s in response to the movie War Games (no, really!) and is supposed to be used to punish "hackers" who break into secure computer systems. However, over the years, various individuals, governments and companies have repeatedly tried to stretch that definition to include merely breaching a site's terms of service. And that appears to be the case here with Twitch:

To provide their services and with the goal of defrauding Twitch’s users, Defendants knowingly and intentionally used bot software that accessed Twitch’s protected computers without authorization or in excess of the authorization granted to them by the Terms. Also without authorization or in excess thereof, Defendants willfully, and with the intent to defraud, accessed Twitch’s protected computers by means of that fraud, and intended to and did use Twitch’s protected computers. For example, Defendants represent that they can access Twitch’s protected computers and circumvent Twitch’s security measures in order to provide their bot services without being detected by Twitch.

Except it's a pretty big stretch to argue that a bot accessing an open website that anyone can visit requires some kind of specific "authorization." Yes, cheating bots are annoying. And yes, they can be seen as a problem. But that doesn't mean that Twitch should be trying to expand the definition of the CFAA to include accessing an open website in a way the site doesn't like. As a company, Twitch has been on the right side of lots of important tech and policy issues. It was vocal in the SOPA fight. It even sponsored us here at Techdirt for our net neutrality coverage. It's generally viewed as a pretty good internet citizen.

So it's especially disappointing that the company has chosen to come down on the wrong side of another really important tech policy issue: abuse of the CFAA.

The trademark claim is somewhat less troubling, but it's still a huge stretch:

As described above for each Defendant, Defendants use the TWITCH mark in domain names and on their websites in connection with the provision of bot services. Defendants’ use of the TWITCH mark in commerce constitutes a reproduction, counterfeit, copy, or colorable imitation of a registered mark for which the use, sale, offering for sale, and advertising of their bot services is likely to cause confusion or mistake or lead to deception.

No one is visiting the sites of these bot makers and assuming that they're endorsed by Twitch. I mean, they're all pretty clear that their entire purpose is to inflate viewers/followers on Twitch, which is clearly something that Twitch is against. As we've noted over and over again, having a trademark does not mean that you get to block any and all uses of that word. Using a company's trademarked name in a way that refers to that company is generally seen as nominative fair use (basically using the trademark in a descriptive manner, rather than as a way of deceiving people into thinking that there's an endorsement).

There's a similar "anti-cybersquatting" claim in there as well, but that's basically just the trademark claim repeated for the domain names, so the same analysis applies.

Twitch doesn't need to use either of these claims, and it's disappointing that they and their lawyers have chosen to do so. This is not to say that bots and fake followers are okay. But these kinds of cases can set really bad precedents when a company like Twitch decides to overclaim things in a way that harms the wider tech and internet industry. I'm not even sold on the need to litigate these kinds of issues at all, preferring to think that a tech-based approach should be good enough. To be sure, Twitch notes that it's still mostly focused on technological and social moderation methods for stopping bots, but has decided to go the lawsuit path as a "third layer" of attack against bots.

Even if it felt it needed to go down that path, it really should have thought more carefully about bringing claims under the CFAA and trademark law. One hopes that the company will reconsider and perhaps drop those claims, even if it wants to pursue other claims, such as breach of contract.

from the that's-NOT-copyright-infringement dept

We've been here before a few times. Back in 2008, video game giant Blizzard initially won a very dangerous ruling against a World of Warcraft bot maker, saying that if (as most software companies do) the End User License Agreement (EULA) says that you've only licensed the product, rather than bought it, then any violation of the EULA can be a violation of copyright law. Copyright expert William Patry, at the time, pointed out how insane such a ruling was:

The critical point is that WoWGlider did not contributorily or vicariously lead to violating any rights granted under the Copyright Act. Unlike speed-up kits, there was no creation of an unauthorized derivative work, nor was a copy made even under the Ninth Circuit's misinterpretation of RAM copying in the MAI v. Peak case. How, one might ask, can there be a violation of the Copyright Act if no rights granted under the Act have been violated? Good question.

Thankfully, the Ninth Circuit mostly walked back this ruling (though with a bunch of other problems...), noting (as Patry did in discussing the earlier ruling) that nothing was done that actually violated copyright law. It might violate a contract, but not copyright. This ruling, however, has not stopped Blizzard from continuing to go after bot makers with copyright claims. It went after some StarCraft II cheat creators in 2010. And just last year it went after a few more StarCraft II cheat creators, using the same twisted copyright theory.

And now, as TorrentFreak first pointed out, it's done so yet again -- this time filing a lawsuit against James Enright, who had built up a series of gaming bots for use in World of Warcraft, Diablo and Heroes of the Storm. And, once again, Blizzard claims that it's a copyright violation, again arguing that violating the EULA is a form of copyright infringement.

Defendants have infringed, and are continuing to infringe, Blizzard’s copyrights by reproducing, adapting, distributing, and/or authorizing others to reproduce, adapt, and distribute copyrighted elements of the Blizzard Games without authorization, in violation of the Copyright Act.

More specifically, Blizzard is trying to make this a copyright claim by saying that he violated the EULA by reverse engineering its games to make his bots work. But that's not copyright infringement. It further claims that he's engaged in "tortious interference" because he's convincing other players to break their EULAs with his bots.

Now -- as in past such stories -- it's quite clear that many people are not happy about the use of cheats and bots in these games. It may be absolutely 100% true that they diminish the gaming experience for others and present a real problem for Blizzard. In all likelihood, they probably do violate the EULA that Blizzard uses on those games that forbids such activities.

But that shouldn't make it a copyright violation.

Blizzard can go after them for breach of contract. Or it can cut them off from its service. Or it can change how its games work to try to prevent bots. But that doesn't mean it gets to twist copyright law to use it against something that has absolutely nothing to do with copyright. This seems like yet another case of copyright overreach, where copyright law is used to go after "some bad thing" because it's such a powerful law with such powerful remedies. Blizzard has been doing this for nearly a decade now, and it's high time a court told them to knock it off.

from the am-I-my-bot's-keeper? dept

We've seen a partial answer to the question: "what happens if my Silk Road shopping bot buys illegal drugs?" In that case, the local police shut down the art exhibit featuring the bot and seized the purchased drugs. What's still unanswered is who -- if anyone -- is liable for the bot's actions.

This week, police in the Netherlands are dealing with a robot miscreant. Amsterdam-based developer Jeffry van der Goot reports on Twitter that he was questioned by police because a Twitter bot he owned made a death threat.

As van der Goot explained in his tweets (all of which can be viewed at the above link), he was contacted by an "internet detective" who had somehow managed to come across this bot's tweet in his investigative work. (As opposed to being contacted by a concerned individual who had spotted the tweet.)

So, van der Goot had to explain how his bot worked. The bot (which was actually created by another person but "owned" by van der Goot) reassembles chunks of his past tweets, hopefully into something approaching coherence. On this occasion, it not only managed to put together a legitimate sentence, but also one threatening enough to attract the interest of local law enforcement.
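
Van der Goot's tweets don't describe the bot's internals beyond that, but bots in this style are commonly built as simple Markov chains: pick a word, then repeatedly pick a word that actually followed it somewhere in the source tweets. A bare-bones sketch of that approach, purely for illustration and not the actual bot:

# Bare-bones Markov-chain text generator of the kind many "ebooks"-style
# Twitter bots use. Output is usually nonsense, occasionally coherent -- and,
# very occasionally, coherent in an unfortunate way.
import random
from collections import defaultdict

def build_chain(tweets):
    chain = defaultdict(list)
    for tweet in tweets:
        words = tweet.split()
        for current, following in zip(words, words[1:]):
            chain[current].append(following)
    return chain

def generate(chain, max_words=20):
    word = random.choice(list(chain))
    output = [word]
    for _ in range(max_words - 1):
        followers = chain.get(word)
        if not followers:
            break
        word = random.choice(followers)
        output.append(word)
    return " ".join(output)

tweets = ["a sample corpus of past tweets goes here", "past tweets get chopped up and reassembled"]
print(generate(build_chain(tweets)))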

The explanation didn't manage to completely convince the police of the bot's non-nefariousness. They ordered van der Goot to shut down the account and remove the "threatening" tweet. But it was at least convincing enough that van der Goot isn't facing charges for "issuing" a threat composed of unrelated tweets. The investigator could have easily decided that van der Goot's explanation was nothing more than a cover story for tweets he composed and issued personally, using a bot account to disguise their origin.

The shutdown of the account was most likely for law enforcement's peace of mind -- preventing the very occasionally evil bot from cobbling together algorithmically-derived threats sometime in the future. It's the feeling of having "done something" about an incident that seems alarming at first, but decidedly more banal and non-threatening by the end of the investigation.

The answer to the question of who is held responsible when algorithms "go bad" appears to be -- in this case -- the person who "owns" the bot. Van der Goot didn't create the bot, nor did he alter its algorithm, but he was ultimately ordered to kill it off. This order was presumably issued in the vague interest of public safety -- even though there's no way van der Goot could have stacked the deck in favor of bot-crafted threats without raising considerable suspicion in the Twitter account his bot drew from.

There will be more of this in the future and the answers will continue to be unsatisfactory. Criminal activity is usually tied to intent, but with algorithms sifting through data detritus and occasionally latching onto something illegal, that lynchpin of criminal justice seems likely to be the first consideration removed. That doesn't bode well for the bot crafters of the world, whose creations may occasionally return truly unpredictable results. Law enforcement officers seem to have problems wrapping their minds around lawlessness unmoored from the anchoring intent. In van der Goot's case, it resulted in only the largely symbolic sacrifice of his bot. For others, it could turn out much worse.

from the get-those-lawyers-ready dept

If you program a bot to autonomously buy things online, and some of those things turn out to be illegal, who's liable? We may be about to have the first such test case in Switzerland, after an autonomous buying bot was "seized" by law enforcement.

Two years ago, we wrote about the coming legal questions concerning liability and autonomous vehicles. Those vehicles are going to have some accidents (though likely fewer than human-driven cars) and then there are all sorts of questions about who is liable. Or what if they speed? Who gets the ticket? There are a lot of legal questions raised by autonomous vehicles. But, of course, it's not just autonomous vehicles raising these questions. With high-frequency trading taking over Wall Street, who is responsible if an algorithm goes haywire?

This question was raised in a slightly different context last month when some London-based Swiss artists, !Mediengruppe Bitnik, presented an exhibition in St. Gallen called The Darknet: From Memes to Onionland. Specifically, they had programmed a bot with some Bitcoin to randomly buy $100 worth of things each week via a darknet market, like Silk Road (in this case, it was actually Agora). The artists' focus was more about the nature of dark markets, and whether or not it makes sense to make them illegal:

The pair see parallels between copyright law and drug laws: “You can enforce laws, but what does that mean for society? Trading is something people have always done without regulation, but today it is regulated,” says Weiskopff.

“There have always been darkmarkets in cities, online or offline. These questions need to be explored. But what systems do we have to explore them in? Post Snowden, space for free-thinking online has become limited, and offline is not a lot better.”

But the effort also had some interesting findings, including that the dark markets were fairly reliable:

“The markets copied procedures from Amazon and eBay – their rating and feedback system is so interesting,” adds Smojlo. “With such simple tools you can gain trust. The service level was impressive – we had 12 items and everything arrived.”

“There has been no scam, no rip-off, nothing,” says Weiskopff. “One guy could not deliver a handbag the bot ordered, but he then returned the bitcoins to us.”

But, still, the much more interesting question is about liability in this situation. The Guardian reporter who wrote about this in December also spoke to law enforcement in the UK, which noted that the situation was "very unusual":

A spokesman for the National Crime Agency, which incorporates the National Cyber Crime Unit, was less philosophical, acknowledging that the question of criminal culpability in the case of a randomised software agent making a purchase of an illegal drug was “very unusual”.

“If the purchase is made in Switzerland, then it’s of course potentially subject to Swiss law, on which we couldn’t comment,” said the NCA. “In the UK, it’s obviously illegal to purchase a prohibited drug (such as ecstasy), but any criminal liability would need to assessed on a case-by-case basis.”

Now Swiss law enforcement has stepped in, seizing the bot and its purchases, as the artists explained in a statement:

On the morning of January 12, the day after the three-month exhibition was closed, the public prosecutor's office of St. Gallen seized and sealed our work. It seems, the purpose of the confiscation is to impede an endangerment of third parties through the drugs exhibited by destroying them. This is what we know at present. We believe that the confiscation is an unjustified intervention into freedom of art. We'd also like to thank Kunst Halle St. Gallen for their ongoing support and the wonderful collaboration. Furthermore, we are convinced, that it is an objective of art to shed light on the fringes of society and to pose fundamental contemporary questions.

It appears possible that, in this case, law enforcement was just looking to seize and destroy the contraband products that were purchased by the bot, and may not then seek further prosecution, but it still does raise some interesting questions. I'm not sure I buy the "unjustified intervention in the freedom of art" argument (though that reminds me of another, unrelated story, of former MIT lecturer Joseph Gibbons, who was recently arrested for robbing banks, but who is arguing that it was all part of an "art project").

Still, these legal questions are not going away and are only going to become more and more pressing as more and more autonomous systems start popping up in different areas of our lives. The number of different court battles, jurisdictional arguments and fights over who's really liable are likely to be very, very messy -- but absolutely fascinating.

from the urls-we-dig-up dept

Despite the staggering growth in computing power and capabilities over the history of the technology, there remains a line in the sand between what computers can do and what we think of as "true" artificial intelligence. This line has gotten blurrier as computers have succeeded in performing certain tasks that were formerly human-only, but even these instances often feel like a brute-force approach to simulating something our own brains seem to accomplish more genuinely and abstractly. On the flipside, the success of these simulations raises questions about what's really happening inside our own heads. Here are a few of the latest developments in artificial intelligence that try to approach that line:

from the urls-we-dig-up dept

More and more digital media is being edited and prioritized in datacenters by intangible algorithms. As usual, this can be good and bad, depending on how the technology is used. On the one hand, algorithms can do laborious tasks that humans don't want to do. But at the same time, algorithms might introduce all kinds of errors or inadvertent biases on a scale that no group of humans could ever accomplish without automation. Here are just a few links on bots tinkering with online content.

About half of all the edits on Wikipedia are made by bots. Algorithms keep spam links from flooding the site, and they also create whole entries based on online data, as well as perform tedious tasks such as grammar and spelling corrections. Not surprisingly, the biggest bot job on Wikipedia is detecting vandalism. [url]
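
The scale of that bot activity is easy to check for yourself: the standard MediaWiki API exposes a recent-changes feed that can be filtered to edits flagged as coming from bot accounts. A quick sketch against English Wikipedia; the share of bot edits you'll see varies by wiki and by day.

# Pull the last 100 bot-flagged edits from English Wikipedia's recent-changes
# feed via the standard MediaWiki API.
import requests

params = {
    "action": "query",
    "list": "recentchanges",
    "rcshow": "bot",               # only edits flagged as made by bot accounts
    "rcprop": "title|user|comment",
    "rclimit": 100,
    "format": "json",
}
resp = requests.get("https://en.wikipedia.org/w/api.php", params=params, timeout=10)
resp.raise_for_status()

for change in resp.json()["query"]["recentchanges"]:
    print(change["user"], "->", change["title"])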