SAN FRANCISCO -- The Simon Wiesenthal Center's annual report highlighting hate and terror speech online will single out Twitter this year for a sharp increase in "hate-spewing hashtags and handles."

On Wednesday, the Jewish human rights group will assign the San Francisco social-media company a letter grade of F, citing a 30 percent increase in what it considers hate or terror activity as well as a lack of responsiveness when the instances are brought to its attention. In contrast, Google's YouTube earned a C-minus, while Facebook earned a spot at the head of the class, with an A-minus.

The report is a clear attempt to shame Twitter into taking a more aggressive role in censoring content on its service, a very bad idea stemming from the very best intentions.

It is, of course, news to precisely no one that people say and do incredibly offensive things on the Internet. But online services should follow the free speech rules of the land -- the product of centuries of debate between legislators, presidents and courts -- rather than crafting their own standards for society in corporate board rooms.

The Simon Wiesenthal Center plans to make a presentation at its New York Museum of Tolerance on Wednesday morning. The version The Chronicle reviewed in advance highlighted distressing speech on Twitter accounts like "Adolf_Hitler11," "I_Love_Racists," and one whose name translates to "The Jew Hunter."

There also appear to be active accounts for the United Klans of America, and various radical or fundamentalist Islamic groups, including affiliates of the Taliban and al Qaeda.

In addition, the presentation points to troubling materials elsewhere online, such as a request for anthrax on the "As Ansar Islamic Network," as well as numerous instructional manuals for making weapons, including "Setting Fires with Electrical Timers: An Earth Liberation Front Guide."

"We're just asking companies to do their share and to try to make it a little more difficult for the bad guys to operate," Rabbi Abraham Cooper, associate dean of the Simon Wiesenthal Center, said in an interview.

No reasonable person would stand up and cheer for this kind of online rhetoric, and it certainly underscores the ugly underbelly of our popular consumer services. All of it is vile and much of it may well be illegal.

But insofar as the Simon Wiesenthal Center is taking a poke at online service providers themselves, it's important to consider all this in the appropriate context.

For starters, while the Wiesenthal Center noted a 30 percent increase in hate-spewing in the last year, Twitter's total tweet volume soared 60 percent in a recent eight-month period -- so on a proportional basis, this kind of speech might actually be declining.

But the real issue is whether, as a general rule, we want our Internet services deciding what we should and shouldn't be allowed to say or do online.

Companies like Twitter and Google stress that they follow the speech rules of the nations in which they operate, and remove material when presented with lawful requests.

There are, of course, certain boundaries on free speech in the United States, including limits on words that constitute a genuine threat and lies that harm a person's reputation. European laws are often stricter on the issue of hate speech, in part due to the history of Nazism.

Twitter varies the content you see based on the laws of the country in which you're seeing it. An Adolf_Hitler11 tweet, for instance, could be visible in the United States but show up as "Tweet withheld" in Germany.

But unless it's brought to their attention, Twitter, Google and many other online companies purposefully avoid moderating the content that users post, be it videos or pictures or racist screeds.

U.S. law generally provides online services what is called safe harbor from the activities of their users -- and that, on balance, is a good thing. It's doubtful we'd see a Craigslist, YouTube or Facebook without such laws guaranteeing free and open communications online. Companies would otherwise adopt an overly conservative stance on the behavior of their users, taking down protected speech in the process.

A whole other set of considerations kicks in when it comes to terror-related activity online, but I'll just note that part of the trade-off there is between allowing the spread of information about bomb making, on the one hand, and keeping visible the tenor, scope and perhaps identity of the people engaging in this kind of behavior, on the other.

I'll bet everything in my pocket that the National Security Agency is monitoring these sites more closely than the Wiesenthal Center is.

If it sometimes seems to the organization that services like Twitter are ignoring its removal requests -- one factor on which it assigned that F grade -- it could be because they're demanding the removal of legitimate if distasteful content.

The courts would probably disagree, and so would I. With due respect, I believe Rabbi Cooper is confusing hate with a lack of civility.

If Rabbi Cooper or anyone else is unhappy with what the laws say about where free speech ends and hate speech begins, they have every right to push for changes among legislatures and courts. And if they believe speech already on Twitter or elsewhere is illegal, they can bring it to the attention of law enforcement.

But what I don't want is for Google, Facebook or Twitter -- gigantic communications platforms with enormous ability to tilt the way we discuss and think about issues -- to get into the business of unilaterally making that call for society.

And to their credit, neither do they.

Like so much else, this is about making the best choice among bad ones. But on balance, the unfettered speech unleashed by the Internet around the world has been a powerful force for good that's worth protecting.

That's why, on Thursday, the Radio Television Digital News Association will single out Twitter for a very different reason: handing the company its First Amendment Award.