Do you lose free speech rights if you speak using a computer?

Scholar argues computer-generated speech is not protected by the Constitution.

It took guts for the New York Times to publish an op-ed by Tim Wu, the Columbia law professor who coined the phrase "network neutrality," arguing that the First Amendment doesn't protect the contents of the New York Times website. A significant amount of the content on the Times website—stock tickers, the "most e-mailed" list, various interactive features—is generated not by human beings, but by computer programs. And, Wu argues, that has constitutional implications:

Protecting a computer’s "speech" is only indirectly related to the purposes of the First Amendment, which is intended to protect actual humans against the evil of state censorship. The First Amendment has wandered far from its purposes when it is recruited to protect commercial automatons from regulatory scrutiny.

OK, I fibbed. The target of Wu's op-ed was Google and Facebook, not the New York Times. But accepting Wu's audacious claim that computer-generated content doesn't deserve First Amendment protection endangers the free speech rights not only of the tech titans, but of every modern media outlet.

No one believes that the output of computer programs, as such, is protected by the First Amendment. It would be ridiculous, for example, to argue that the First Amendment barred the government from regulating a computer that controlled a nuclear power plant. But when a firm is in the business of providing information to the public, that information enjoys First Amendment protection regardless of whether the firm creates the information "by hand" or using a computer.

"Computer speech" at the New York Times

Wu's argument depends on drawing a sharp distinction between constitutionally protected human speech and computer speech that is unprotected by the First Amendment. But closer examination demonstrates how nonsensical this distinction is. To make the point, we don't need to look any further than the grey lady herself.

Articles published by the New York Times are often composed using word processors, and pages in the print newspaper are laid out using page layout software. The nytimes.com website is sent to readers by a Web server (a computer program) and rendered by a Web browser (also a computer program).

Of course, Wu isn't talking about those programs. He means programs that are directly involved in the production or selection of content. But the New York Times website has plenty of examples of those too. The home page features an automated stock ticker. A box on the right-hand side of the page shows "most e-mailed" and "recommended for you" stories—also generated automatically. The millions of ads the Times shows its readers every month are almost certainly chosen by computer algorithms.
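
To see how much human judgment hides inside even the simplest of these features, here is a minimal sketch of a "most e-mailed" box (a hypothetical illustration with invented names and data, not the Times' actual code). Even the decision to rank stories by e-mail count at all is an editorial choice somebody made:

    # Hypothetical sketch: an editorial judgment ("feature the stories
    # readers e-mail most") baked into a program. Names and data are
    # invented for illustration.
    from collections import Counter

    def most_emailed(share_events, top_n=5):
        """Rank article URLs by how often readers e-mailed them."""
        return [url for url, _ in Counter(share_events).most_common(top_n)]

    # Each entry is one "reader e-mailed this article" event.
    events = ["/budget-feature", "/eurozone", "/budget-feature",
              "/eurozone", "/eurozone", "/science/neutrinos"]
    print(most_emailed(events))  # ['/eurozone', '/budget-feature', ...]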

In 2010, the Times produced an interactive feature called "You Fix the Budget." Users were invited to try to balance the US federal budget by choosing a mix of spending cuts and tax increases. A January feature called "What Percent Are You?" invited readers to enter their household income to see how it compared with incomes in hundreds of metropolitan areas around the country. Features like this would be impossible to create "by hand."

On election night, the Times typically has an extensive section of its website featuring election results from around the country, complete with maps, charts, and poll results. These features are updated in real time, far too quickly for a human staff to keep them up to date.

The Times employs Nate Silver, a statistician who collects thousands of poll results and produces sophisticated mathematical models of election outcomes. These models are complex enough that his results could only be generated by a computer, and indeed even Silver himself can't always explain exactly why the model produces a particular outcome.
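
A toy sketch of the kind of judgment such a model encodes (fabricated numbers and a deliberately simplistic weighting scheme; Silver's actual model is proprietary and far more sophisticated):

    # Toy poll aggregator, for illustration only. The point is that every
    # constant below reflects a human judgment call, not a machine's choice.
    def poll_average(polls):
        """Weight each poll by sample size and recency."""
        total, weight_sum = 0.0, 0.0
        for candidate_share, sample_size, days_old in polls:
            weight = sample_size / (1 + days_old)  # weighting chosen by a person
            total += candidate_share * weight
            weight_sum += weight
        return total / weight_sum

    # (share %, sample size, age in days) -- fabricated example polls
    polls = [(52.0, 800, 2), (49.5, 1200, 6), (51.0, 600, 1)]
    print(round(poll_average(polls), 1))  # about 51.0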

Censoring computers means censoring people

Obviously, it would be ridiculous to say that the New York Times website doesn't get First Amendment protection because too much of the site is generated by software. It wouldn't make sense to say that the government can freely regulate the computer-generated sections of the site—like Silver's projections or the "most e-mailed" list—because they are software-generated. Nor would it make sense to say that a feature like the "most e-mailed" list would receive more First Amendment protections if a human being, rather than a computer, made the list of most e-mailed pages.

Silver was exercising his First Amendment rights when he designed his model and selected the polls that drive it. Similarly, the Times was exercising its First Amendment right when it decided that readers would be interested in stories that were frequently e-mailed by other readers. Regulating these sections of the site raises exactly the same First Amendment issues as regulating the content or placement of traditional news stories.

The same point applies to Google. The question isn't whether Google's computers have First Amendment rights. Obviously they don't. Rather, the people who own and operate Google's computers—its engineers, executives, and shareholders—have First Amendment rights. And regulating the contents of Google's website raises First Amendment issues regardless of how Google might have used software to generate that content.

Perhaps the most perverse aspect of Wu's argument is that it seems to offer the highest level of First Amendment protection to those aspects of Google's website that have attracted the most criticism. For example, critics have charged that it is anticompetitive for Google to give its own services, such as Google Books and Google Maps, a more prominent position in search results than competing services. But this has generated controversy precisely because a human Google employee overrode the default behavior of Google's software to move Google products to the top of the page. Similarly, some critics fault Google for manually removing search results it regards as too spammy. Again, Google's alleged sin is using too much human judgment, not too little.

Wu's theory seems to imply that manual manipulation of search results is entitled to a higher level of First Amendment protection than automated, organic search results. That doesn't make any sense.

Ars alum Julian Sanchez has pointed out that Wu's argument also seems to imperil free speech rights for video games. The Supreme Court recently ruled that video games are entitled to the same First Amendment protections as other forms of media. But the contents of a video game are generated by software in much the same way Google's search results are. Would the state of California have won the case if it had raised Wu's "computers don't have free speech rights" argument? I hope not.

So it's true that computers don't have First Amendment rights. As Paul Alan Levy points out, neither do printing presses. But people have free speech rights, and those rights apply even if we use computers to help us speak.

Timothy B. Lee
Timothy covers tech policy for Ars, with a particular focus on patent and copyright law, privacy, free speech, and open government. His writing has appeared in Slate, Reason, Wired, and the New York Times. Email: timothy.lee@arstechnica.com // Twitter: @binarybits

Interesting. I think the OpEd was touching on content aggregators, not content created with the aid of a computer. Content aggregation (such as displaying search results, or suggested articles) is governed by a machine algorithm, not by a specific user creating specific content.

However, the person creating the algorithm should be protected by the First Amendment. The machine itself is still only a tool.

Darn you, academic.sam! I was going to say the same thing! I do wonder how the First Amendment issue will play out with the first true AI machine -- one that's coming up with its own ideas and is not simply a proxy for a human.

And that's the thing. The computers and algorithms aren't creating anything on their own. They're working in the ways their human creators intended, on information given to them by human users. As was pointed out in the article, the printing press doesn't have 1st Amendment rights; the owner of the press does. So, the automated aggregator has no rights whatsoever, but the people who wrote the stories it is aggregating sure do.

The trouble is, not everyone wants their story aggregated. We really need fair use updated and widened for the times, lest information become monopolized to death.

The computer is my property. It cannot say anything except what I direct it to say. Therefore, the computer's owner or operator is speaking through the computer; the computer has no speech of its own, it is all mine. It functions as no more than a loudspeaker or bullhorn.

Mr. Lee writes, "Of course, Wu isn't talking about those programs." No, Wu ISN'T talking about programs that are used as tools to facilitate human-generated content, nor (despite that snarky, loaded comment by Lee) does Wu try to mislead you on that point. Wu writes, "The line can be easily drawn: as a general rule, nonhuman or automated choices should not be granted the full protection of the First Amendment, and often should not be considered “speech” at all. (Where a human does make a specific choice about specific content, the question is different.)" One can argue about where to draw the line to determine whether something is automated or nonhuman, but both the Ann Landers example in the op-ed and the video-games example in this article are clearly not automated, non-human work. I assert that any judge would interpret both scenarios similarly.

Contrary to the tone of this article, nobody is arguing that the government can just come in and censor internet polls, modify Google search results, or ban video games. Wu is simply arguing the government should be able to come in and "regulate" such content. There are still rules that have to be followed. As Wu points out, there already exist non-First Amendment protections for public, commercial content. You can't just arbitrarily modify or remove someone else's content, regardless of whether it has First Amendment protection or not.

That is my interpretation of Wu's Op-Ed piece. I honestly believe Mr. Lee and Mr. Wu are more in agreement about machines having First Amendment protection (they shouldn't) than this article would have you believe.

Our current government is more interested in censoring things along IP lines, not necessarily content-wise though. Sure, I always think the worst, but it's hard not to, given just how bad things are.

So if I print a sticker and stick it on my car, that's not free speech because the sticker was created by a machine? What about books? Papers? Magazines? Posters? They're all created by computers the same way a web site is, basically.

Our current government is more interested in censoring things along IP lines, not necessarily content-wise though. Sure, I always think the worst, but it's hard not to, given just how bad things are.

Didn't quite understand that sentence, but if you're saying the government may abuse its regulatory powers over content that lacks First Amendment protection, I would argue that's a different problem (one I agree exists), and not relevant to the First Amendment argument being made by Mr. Wu.

I read Wu's argument as saying that treating companies' computer-produced output as "speech" allows these companies to claim First Amendment protection against regulation. As one of his examples, he alludes to the Citizens United decision allowing unlimited political spending in the name of free speech. He does not say that all computer output is or is not speech, just that it can potentially lead to a big loophole.

Lots of things are already covered under the First Amendment. But there are exceptions to this. Google likely violated anti-competition laws when it promoted its own products over those of others. (And Google obviously does tweak its algorithm all the time. Even though I found the "Santorum" site to be hilarious, its excuse that it couldn't change the algorithm is ridiculous.) Facebook, despite its awful privacy policies, cannot freely share personal information about its users. The New York Times isn't necessarily a good example because the press is explicitly mentioned in the Amendment.

It seems like the error Wu is making may be in considering a program and the output of that program separately. He briefly touches on this, saying "Defenders of Google’s position have argued that since humans programmed the computers that are “speaking,” the computers have speech rights as if by digital inheritance," but he then goes on to disagree in a way that suggests he sees algorithm and output as completely separate entities. I'm not sure that makes sense.

An example that comes to mind is interactive digital art -- say, video art that incorporates live camera feeds. Would anyone suggest that it ceases to be protected because it's an algorithm integrating live data?

Moreover, the kinds of things Wu is fretting about -- privacy breaches as protected speech -- are not new concerns raised by automation. Records privacy and related concepts (arguably, something like attorney-client privilege) long predate computers, and there's already the precedent of "commercial speech" having limited protection. Perhaps automated commercial speech too should have limited protection -- but it doesn't mean it's not speech.

My point is that the distinction he's drawing between "human-generated content" and "automated content" doesn't make any sense. All digital content is produced via a combination of human judgment and computer automation. If a particular editorial judgment (say, "we should feature articles that get a lot of emails") would enjoy constitutional protection if made by a human being, it doesn't lose First Amendment protection if that judgment is baked into a computer program.

If Google hired an army of people to perform the PageRank algorithm by hand, that would clearly be constitutionally protected speech. So why should Google have fewer rights when it buys a computer to make the same sequence of decisions in a much faster and more accurate fashion?
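
(For readers unfamiliar with it, PageRank is, at bottom, a short human-specified recipe. A toy version over an invented three-page web, not Google's actual implementation -- every choice below, from the damping factor to the iteration count to the uniform starting scores, was made by a person:)

    # Toy PageRank sketch; invented example, not Google's code.
    def pagerank(links, damping=0.85, iterations=50):
        """links maps each page to the list of pages it links to."""
        pages = list(links)
        rank = {p: 1.0 / len(pages) for p in pages}
        for _ in range(iterations):
            new_rank = {p: (1 - damping) / len(pages) for p in pages}
            for page, outlinks in links.items():
                for target in outlinks:
                    new_rank[target] += damping * rank[page] / len(outlinks)
            rank = new_rank
        return rank

    web = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
    print(pagerank(web))  # "c" ends up with the highest rank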

There's no such thing as completely automated content generation. All content was ultimately generated by the decisions of one or more human beings, and has First Amendment protection for that reason.

nobody is arguing that the government can just come in and [...] modify Google search results [...]. Wu is simply arguing the government should be able to come in and "regulate" [Google search results].

It sounds like you might not be entirely clear on what it would mean for search engine results to be 'regulated' because these statements contradict each other. Consider how search engines in China are 'regulated' by the Chinese government.

And what's the point of 'regulation' that never modifies the thing being regulated?

Didn't quite understand that sentence, but if you're saying the government may abuse its regulatory powers over content that lacks First Amendment protection, I would argue that's a different problem (one I agree exists), and not relevant to the First Amendment argument being made by Mr. Wu.

Maybe I am thinking too much, but such research could easily be misconstrued and used to further the iron-fisted IP rules big biz wants so badly. Then again, I tend to take stuff slightly out of context by reading between the lines too much.

Okay, I agree with your assertion that the distinction between human-generated content and algorithmically generated content doesn't make sense, because the latter is a subset of the former. That didn't come across to me in the article (possibly poor comprehension on my part).

Er, I think you misunderstand, because that's what these systems do. They gather human generated content and format it as the provider (Google, et al) specifies. That's it. Trying to make a distinction between clicking a button to upload data you've collated by hand vs. letting the program you've specifically designed collate & upload the data is a wild goose chase.

Edit: I see you've replied to a similar comment above, so feel free to ignore this comment.

I think Wu is seriously misapprehending what it means to "choose". Computers, as they now exist, do not choose. They process data with an algorithm designed to reflect choices made by the programmer. The computer's output cannot be construed to constitute speech or choice or anything else. Although computers did the heavy lifting of collecting, sorting, and presenting the information on a Google search results page, the final appearance and organization of that material results from the decisions of humans--not just the humans that wrote the algorithm, but also the management people that okayed it for public presentation. The decisions of those people clearly qualify as speech on behalf of a commercial corporate entity and are therefore entitled to a precedent-defined subset of the protections of the First Amendment. When computers attain sentience, this will be a material issue, but until then, it's pretty much entirely covered under existing law.

geniekid wrote:

Contrary to the tone of this article, nobody is arguing that the government can just come in and censor internet polls, modify Google search results, or ban video games. Wu is simply arguing the government should be able to come in and "regulate" such content.

The topic of this article is the presentation of computer output by companies whose product is information. "Regulate" and "censor" are essentially synonymous in that context.

There's no such thing as completely automated content generation. All content was ultimately generated by the decisions of one or more human beings, and has First Amendment protection for that reason.

Wired ran a story on computer algorithms writing sports stories based on sports ticker feeds. Aside from the raw data collected perhaps being user-generated, the finished article is 100% written by a computer. While Mr. Wu didn't cite it as an example, I think that is perhaps the best example of a separation between user-generated speech and computer-generated speech.
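
(A minimal, hypothetical sketch of how such a generator works -- invented team names and templates; real systems are far more elaborate. Note that every phrase the "computer writes" was authored in advance by a person:)

    # Toy sports-recap generator: assembles a "computer-written" sentence
    # from templates a human wrote. All names and data are invented.
    import random

    TEMPLATES = [  # every phrase here was written by a person
        "{winner} beat {loser}, {ws}-{ls}.",
        "{winner} held off {loser} for a {ws}-{ls} win.",
    ]

    def write_recap(scores):
        """Turn one game's final score (team -> runs) into a recap."""
        winner, loser = sorted(scores, key=scores.get, reverse=True)
        template = random.choice(TEMPLATES)
        return template.format(winner=winner, loser=loser,
                               ws=scores[winner], ls=scores[loser])

    print(write_recap({"Cubs": 5, "Mets": 3}))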

I think you're still missing Wu's (admittedly subtle) point. My reading is that his argument is that the results of a pure algorithm (the presentation of content from other sources, not the actual 3rd party content) should not have the same free speech rights as the algorithm itself. Think of the algorithm as the speech, and the results as only the presentation or method of delivery. Methods of delivery are regulated, and few argue wholesale against that. You can't drive around at 3am playing a speech at 110dB in a suburban neighborhood and expect "free speech" to shield you from the noise regulation charges. Regulation of that presentation (like the unclear mixing of ads and search results from Google that Wu refers to) shouldn't be wholesale denied by a simple First Amendment defense, but rather examined from the perspective of whether the interest of the public good does overcome free speech concerns to allow said regulation.

WordTipping and achoo5000: what is the difference between that and a computer that picks which links to show in a "Most Popular" box? It's only the *amount* and *complexity* of automation. The computer is still 100% doing exactly what its creator has designed it to do. There is zero agency within the computer itself.

But that sports-story-writing algorithm was designed and implemented by humans. Thus the outputs the algorithm generates from the automated input are actually generated by the humans who implemented the algorithm.

I'm not saying Google's algorithms are protected speech. I'm saying the output of the algorithms is protected speech, because it's Google's expression. Telling Google it has to present its search results in a different order raises exactly the same First Amendment issues as telling the New York Times it has to put different articles on its front page. The courts would be highly skeptical of attempts to regulate the mix of ads and content on the front page of the New York Times, and should be equally skeptical of efforts to regulate Google search results.

Maybe I am thinking too much, but such research could easily be misconstrued and used to further the iron-fisted IP rules big biz wants so badly. Then again, I tend to take stuff slightly out of context by reading between the lines too much.

Not sure what you mean, but I think I agree with you. There is no such thing as IP, or there shouldn't be anyway. Most of the trouble in this system is caused by people thinking that they should be able to make a living on just their thoughts.

And that's the thing. The computers and algorithms aren't creating anything on their own. They're working in the ways their human creators intended, on information given to them by human users. As was pointed out in the article, the printing press doesn't have 1st Amendment rights; the owner of the press does. So, the automated aggregator has no rights whatsoever, but the people who wrote the stories it is aggregating sure do.

That was my exact reaction to the OpEd.

A lot of us here on Ars Technica are computer programmers. Computers don't do this stuff by themselves any more than a moveable type printing press wrote the articles for Benjamin Franklin.

Instead, the output of computers is the end result of an (often lengthy) process wherein humans decide what they will do and attempt to do it.