Events today force me to add two more concepts to this list -- responsibility and hypocrisy. And while today's associated news is utterly unrelated to the Mideast, it is very much relevant to what we've been recently discussing.

This afternoon, FOX News was indulging in what has become standard eyeball-grabbing fare: live helicopter coverage of a prolonged, high-speed police pursuit.

Though it's well understood that the unspoken motive in these displays is the hope for some sort of dramatic ending, when the driver actually stopped and emerged from his vehicle, FOX (we learned in their after-the-fact on-air apology) inserted a five-second feed delay into their broadcast, presumably so that anything especially gruesome could be prevented from airing.

They failed miserably.

Apparently sensing what was about to occur, the panicked FOX anchor yelled repeatedly for his control room to break away, but fascinated as they were with what was transpiring, control was asleep at the switch, and well over a million viewers were treated to watching a man blow his brains out in full color on their big screen TVs.

FOX afterwards apologized profusely -- and I believe sincerely -- admitting that they had blown it big time.

Apology accepted.

But this actually is only the beginning of the story.

In the hours following this drama, sites around the Web lit up with predictable condemnations of FOX News for showing the death, and more generally for airing the chase in the first place.

Criticizing FOX News is like (no pun intended) shooting fish in a barrel -- and it's usually very well deserved by FOX. And condemnation of TV's fascination with high speed chases is practically a meme unto itself.

But a remarkable thing happened on some of the major Web sites engaging in this orgy of FOX bashing. Even while on one hand they loudly noted FOX News' faults in exquisite detail, many of these sites also posted and promoted the explicit video footage that they condemned FOX for airing in the first place.

The hypocrisy inherent in this situation seems not to have been entirely lost on these sites' editors. In some cases they've now released long-winded explanations and excuses for why "after much internal soul-searching," they've decided to publish the video -- after all "it's news," they claim, "it's educational."

Yet it's clear enough what's actually going on. Anyone who happened to capture that FOX footage could upload it to YouTube or various other venues, but when major Web sites engage in such behavior we know it's all about the eyeballs and the clicks.

Their claims of diligent deliberations ring as hollow as the faux-discussions in the satirical 1976 film "Network," where TV executives argue the ethics of killing off an erratic news anchor, live on air.

It's as if one spent years arguing against bullfighting, and then published and monetized the last few recorded moments of a hideous, bloody encounter in the bullring.

Unfortunately, the collateral damage of such behavior by major Web sites may go far beyond hypocrisy.

By behaving in what is essentially a duplicitous manner, by not showing even a modicum of self-control, they provide ready ammunition to those forces arguing for government-imposed crackdowns on Internet content and the horrendously ill-conceived calls for censorship that are part and parcel of these forces' sensibilities.

And while we know that Internet censorship cannot ever be entirely successful, it can certainly cause a lot of people a great deal of grief, even landing some in shackles and cells.

You'd be hard pressed, I believe, to find many persons more dedicated to Internet freedom of speech than I am, but freedom of speech does not mean freedom from responsibility -- it does not mean carte blanche dispensation to exploit tragedy and wallow in false editorial self-righteousness while simultaneously counting the ad clicks.

Perhaps there's another old, oft-forgotten concept that needs to be appended to the list, along with responsibility and hypocrisy.

That concept is shame. For if some of these sites looked at themselves honestly, at how they've behaved in this case and what the possible negative impacts of their behavior could be, they should be thoroughly ashamed of themselves.

September 26, 2012

A dangerous and decidedly false meme has been floating around in media and elsewhere in recent days. It's actually not a new concept at all, but we're now seeing calculated efforts being deployed to leverage recent world events toward the achievement of an ancient and evil goal -- the control of public and private speech in their various guises and forms.

I will not here and now discuss this particular case in much more detail, except to note that trying to understand the reactions to this video, without a comprehensive understanding of the geopolitical and social history of the Mideast, is like attempting to figure out how a smartphone works by staring intently at its miniaturized circuit board components.

Of great concern are the comments and editorial opining now appearing, suggesting that the U.S. puts too much stake in "free speech" concepts, that we must be "tolerant" of other countries' sensibilities about speech restrictions, and that perhaps global censorship of unpopular concepts and ideas can be justified in the name of community good and world peace.

Implicit (and sometimes explicit) in these arguments is the assumption that censorship leads to happier, more peaceful populations, where conflicts that would otherwise occur will instead be tempered or eliminated by the unavailability of particular types of information and content.

Attempts to impose such controls on speech are now of global extent, and have massively accelerated with the evolution of the Internet.

Some countries ban what they consider to be "sacrilegious" materials in a religious context. Others ban Nazi imagery, or negative comments about the ruling government or monarchs. In some nations, violations of associated speech laws can result in decades-long prison sentences. Even here in the U.S., multiple legislative attempts have been made to try to ban a wide variety of broadly defined content from the Net, on the grounds of it supposedly being "inappropriate" for children.

But the question that is hardly ever asked is fundamentally a simple one.

Ethical questions aside for the moment, does government-imposed censorship -- or government-inspired self-censorship -- actually have the "desired" results?

As a thought experiment, imagine that Google had acceded to demands that the anti-Islamic video be immediately blocked globally on YouTube, instead of taking what I believe was the appropriate course of implementing only highly targeted and narrow blocking.

Would global blocking have avoided the violence? Would the leaders calling for such blocking have then been satisfied?

The answer to both questions clearly appears to be no.

In fact, most of the violence in reaction to the video has been from persons who have not even seen it. Most don't even personally know anybody who has seen significant amounts of the actual video. Rather, they have "heard" about it -- second hand, third hand, characterizations, rumors, bits and pieces from other sources.

This is a clue to the Very Big Lie of censorship.

Censorship is not actually about preventing violence, or keeping people happy, or even improving the economy.

Censorship is essentially a *political* act. It is a mechanism of political control and political empowerment of existing leaders, not an effective mechanism for improving people's lives -- other than the lives of rulers and politicians themselves.

If YouTube had blocked the video in question globally, various leaders would have crowed that they had bullied Google into submission, but so long as the video existed anywhere, in any form, protests and violence would continue, with many of these leaders tacitly or even directly urging protesters on, fanning the flames of emotion.

For it is the very *existence* of information, not *access* to information per se, that is at the heart of censorship demands.

And in the age of the Internet, information has become much like energy itself. It can be hidden or changed in form, but it has become virtually indestructible. And like a chain reaction in a pit of uranium-235, the suppressed energy of information can explode across the Internet in a relative instant, impossible to control around the planet.

Demands to censor the Net, to somehow limit or marginalize free speech as some sort of American aberration, are ultimately doomed.

Censorship proponents dream of the days before the Net -- before television, radio, newspapers, and the printing press, when information could not be easily duplicated, transmitted, and widely disseminated.

When the printing press was invented, church leaders in particular were horrified. Much like politicians and leaders today, they knew that the technology could serve them well, but the last thing they wanted was such communications powers in the hands of the common folk.

The Internet of today has become the fulfillment of would-be censors' worst nightmares. It provides the ability for virtual "nobodies" to reach vast audiences with unapproved ideas of all sorts, at any time, in all manner of ways -- written, audio, video.

Without the Internet, you would obviously not be reading these words, nor would you likely even be aware of my existence. Multiply this effect by millions -- that's the technological marvel that is a terror to those who would control information, communications, speech, and ideas themselves.

The U.S. has plenty of problems when it comes to its own handling of free speech. Related government hypocrisies are as old as the union, and largely independent of which political parties are ascendant at any given time.

But the Founding Fathers, fresh from the repression of monarchy, wrote words of genius when they created the First Amendment to the Constitution, and ensconced freedom of speech firmly into the fabric of their new nation. That their foresight, in a largely agrarian society, is even more valid and important today, in a time of instantaneous global communications within a highly technological milieu, is a wonder of the ages.

We must firmly reject the claims of persons who assert that there's too much free speech, that perhaps censorship isn't so bad, that the world at large must cower to the lowest common denominator of narrow minds and political expediencies.

They are wrong, and unless they're willing to cut themselves off from the Internet entirely -- and perhaps not even then -- the Net will ultimately foil their efforts to impose "dark ages" sensibilities onto our world of now.

We're all well into the 21st century -- not the 13th.

Get used to it -- or learn the lessons of history the very hard way indeed.

Controversies surrounding that video and YouTube are continuing unabated, however.

The New York Times yesterday, in "Free Speech in the Age of YouTube," discussed Columbia professor Tim Wu's recent proposal for a sort of "external oversight board" to make tough decisions about controversial YouTube video takedown situations.

I have great respect for Tim, and we both agree that Google was correct in their targeted blocking of the anti-Islamic video in the current case.

And to his credit, Tim lays out in his proposal some of the reasons why such a plan might not succeed.

Unfortunately, I feel forced to go further down this latter path, and suggest that the proposal -- if implemented -- would not only fail, but also could make the situation regarding YouTube and censorship questions far more complex and problematic, potentially leaving us in a much worse place than where we started.

One reason for this should be pretty obvious. An external group making such decisions likely wouldn't have any significant "skin in the game" in a legal sense. They could make whatever decisions they wanted, with deference to free speech and high ideals, yet Google would ultimately be the party legally vulnerable to any negative results of those decisions.

It's difficult in the extreme to imagine Google being willing to cede such decision-making authority to an outside group. I'd certainly be unwilling to do so if I were in Google's position.

Even if such a group operated only in an advisory capacity, so long as its decisions were officially made public (or leaked publicly), there would be enormous pressure on Google to conform to such decisions, even when internal Google data and knowledge indicated that this would be unwise for any number of reasons not publicly known.

The logistics of Tim's proposal also seem largely unworkable. He acknowledges that (for example) trying to herd YouTube users into some sort of advisory capacity could be difficult, but invokes Wikipedia as an example of a possible model.

Though I admire much of what Wikipedia has accomplished, it seems like a very poor model for a YouTube advisory system. Wikipedia is often accused -- with considerable merit -- of not taking responsibility for inaccuracies in its materials. Battles over edits and changes are legendary, even while all manner of known errors persist throughout its corpus. Political and commercial battles are fought by proxy among Wikipedia editors, usually by anonymous parties of unknown expertise operating under pseudonyms.

This is not the kind of structure appropriate for making decisions about YouTube videos in the sort of charged atmosphere we're dealing with today, with potential immediate and long-term real-world consequences for lives, property, and more.

The proper venue for these decisions is internally at Google. They are in the best position to determine how to react to these situations, including demands from governments, other organizations, and individuals. Google ultimately is the party with any legal exposure from these decisions, so the decisions must be theirs.

If Google feels it appropriate to privately reach out to external experts and observers for input relating to these situations, all well and good. Perhaps they're already doing that. But this can't reasonably take place in the hothouse of the public sphere, where every comment or speculation will likely trigger endless arguing, exploitation, and trolling that will only inflame passions, not lead toward reasoned decisions.

Personally, I have faith that Google will do their best to weigh the many involved factors honestly, and make the best possible decisions in an area where we are faced almost entirely with shades of gray, rarely with black and white certainties. Not only does YouTube's overall trajectory over time suggest this path, but it's also in Google's own best interests to navigate this course with all possible diligence and care.

And frankly, I believe that any moves toward external decision-making in these regards would also tend to inevitably open the door to a slippery slope of outside pressures that would ultimately take aim at effective control of more than just YouTube, attempting to also gain strangleholds on search results and other affiliated services as well -- a nightmare scenario we must avoid at all costs.

September 14, 2012

Piece by piece -- and thanks to the Internet with fair speed now -- the provenance of the hideous anti-Islamic video playing a role in current Mideast violence is becoming increasingly clear.

It is now obvious that the amateurish production -- given the known history of the region -- was created specifically to instigate violence. But the more we learn, the more evil the undertaking is revealed to be.

The apparent producer, operating under a long list of assumed names, appears to have an extensive criminal record for fraud and other offenses, including multiple prison incarcerations.

When the current violence brought his video into public focus, he reportedly identified himself, falsely, as Jewish and Israeli, in an obvious and insidious attempt to trigger violence against those groups. Apparently he is neither -- it is currently reported that he is actually an Egyptian Coptic Christian, engendering fears in that group of possible retaliations.

Actors in the production have reported that at no time were they told that the subject matter related to Mohammad. Rather, they were hired for a production about life in the ancient world, and the main character was called "Master George" -- in retrospect just the right number of syllables for dubbing in "Mohammad" later. And in fact, the actors assert that all of the inflammatory dialogue was dubbed in post-production without their knowledge or consent.

It's difficult to imagine a more sordid scheme to inflame known passions and trigger terrible violence.

It is inappropriate at this time to argue about whether or not such a film should actually trigger such reactions. Comparisons to comedic, satiric, or even controversial dramatized films or television programs regarding various religions are largely orthogonal now. None of the examples being mentioned in some quarters were totally without any artistic or scholarly merit -- unlike this production -- nor were they designed (as the video was) specifically to set fire to an area of the world already on an emotional knife's edge for many reasons.

The Internet's question of the moment appears to be whether or not Google's YouTube was justified in its targeted blocking of access to the video in question (specifically in Libya and Egypt), where the related violence has been most serious so far. Some observers, including people I much respect, have been critical of this decision.

I'm forced to respectfully disagree with those critics.

Anyone who knows me knows that you'll be hard pressed to find anyone more dedicated to freedom of speech than I am. And I've long asserted that attempts to effectively censor material on the Internet are doomed to failure and often counterproductive even to the stated intentions of the would-be censors.

But there are no absolutes in life other than death, and here we have a prime example.

It's a well-known principle that purposely and falsely yelling "fire" in a crowded theater is not an acceptable exercise of free speech.

But if a natural gas leak is feeding the flames in a burning building, putting lives genuinely at risk, it can be entirely appropriate to temporarily cut off that gas supply. It may not stop the fire entirely, but it will stop feeding the conflagration. This doesn't mean you're cutting off the gas for the entire world. It doesn't mean you'll never turn the gas back on to that building again after it is repaired.

It simply means you're taking steps to help save lives right now in an extraordinary situation.

This in my view is what Google has responsibly done in this case.

I've worked enough with Google to have some inkling about the rigorous discussions that internally drive Google policies, especially regarding controversial or particularly complex issues. And I feel safe in assuming that their decision to block the video in those specific countries for now was only made after due deliberation, keeping in mind Google's long-standing goal of maximizing the availability of content while staying in accordance with relevant laws, and their work to achieve the greatest possible public transparency regarding any related actions.

There will be time later to argue the philosophical questions surrounding this evil video, its impacts, and the reactions that have occurred.

For now though, our number one concern should be minimizing the loss of human life. We need to help the situation evolve toward a state where reasoned discussion can again take place, and the broader issues brought back into focus.

Google is doing the right thing by deploying tightly limited blocking of specific content in this emergency situation. I would do exactly the same thing.

There's an old saying that "the exception proves the rule" -- and in situations like this, knowing when those exceptions need to be employed is a hallmark of being a caring and thoughtful member not only of the Internet community, but also of the even larger global community itself.

I've been flooded with messages from upset YouTube users since then, many of whom took the time to write up -- sometimes in extremely lengthy detail -- their battles with YouTube relating to this area.

Since my posting last month, there have been a couple of other high profile events relating to the issues of automated video takedowns as well.

The science fiction Hugo Awards live stream on Ustream was suddenly cut off when Ustream's third-party content scanner, Vobile, claimed to have found "forbidden" content. Since then, Ustream and Vobile have been pointing fingers of blame at each other.

Clearly this situation hit a nerve at Ustream. Their blog posting attempting to apologize for the disruption attracted a large number of comments -- mostly polite but many angry -- which Ustream suddenly and unceremoniously deleted en masse, and they then blocked further comments on that posting.

I fully support the right of blog authors to determine their comments policies and whether they want to support comments at all -- but to delete such a large quantity of on-topic, already posted comments without explanation does seem to smell of censorship, not reasonable moderation.

Then a few days ago, YouTube's live stream of the Democratic National Convention ended with a copyright warning claiming that a whole long list of Content ID partners had filed a claim against the stream. Google says (and there's no reason to disbelieve) that this was an error that occurred when the stream ended normally, but it's understandable that people already sensitized to takedowns became immediately concerned.

Overall, it's these higher profile cases that are usually going to be the most straightforward to resolve going forward.

Not necessarily so for the sorts of situations that ordinary YouTube uploaders described in their missives filling my inbox for the last month.

By and large, most of their complaints fell into the kinds of categories I described in my August posting as noted above. Public domain audio and/or video clips being pulled down. Their own original clips being pulled down when they were incorporated into Content ID partner clips without permission. Completely off-the-wall, obviously erroneous takedowns from unscrupulous YouTube exploiters. You name it -- it happens -- and apparently in large numbers. It has happened to me, too.

Then there are the folks who cannot understand why a recording of Mendelssohn's "Wedding March" played at their wedding results in ads appearing on their wedding video, in the entire audio track being deleted, or even in the video being blocked in various locales or globally.

There is also the sense that many YouTube users faced with notifications of Copyright Strikes or Content ID "hits" are very confused about whether or not they have any real recourse.

YouTube has made great strides in improving their notification dispute forms related to these events, but they are still apparently confusing many people, who just throw up their hands and give up in resignation. And for people who do submit the dispute forms, often they simply result in a "dispute denied" return with no obvious path for appeal.

To YouTube's credit, when a dispute is filed against a takedown, the usual procedure is for the video to be provisionally returned to viewable status, until the dispute is "answered" by the claiming party.

Yet unless or until a takedown dispute claim is filed, it's also the case that when a Content ID or Copyright claim is made, videos are immediately subjected to sanctions -- third party ads, audio deletions, geographically limited or global takedowns, etc.

More confusing is the fact that even when a video uploader has gone through this entire process once, claimants can change their minds at any time, resulting in new or altered sanctions against the same uploaded videos later!

I have previously discussed the legal, logistical, and practical reasons why systems like Content ID have become necessary, especially in the context of their being desirable alternatives to the untenable, impractical, and dangerous concept of requiring all uploaded videos to be pre-screened by humans before becoming publicly available (see my posting from last month at the link above for more).

Yet the ongoing status quo is also increasingly untenable. As the quantity of video uploaded rapidly expands, and use of systems like Content ID greatly accelerates, we're weaving a net of restrictions so tightly that ever more legitimate content will be erroneously caught in its grasp, and given the "guilty until proven innocent" nature of these takedown systems, the results aren't going to be pretty.

Can we do better?

Yes, I believe that we can.

I have no knowledge about the internal workings of Content ID beyond publicly available information, but here are some thoughts for consideration.

A database of public domain video clips and related materials needs to be established, and used as a counter-signal to help prevent erroneous takedowns to the greatest extent practicable. It is unreasonable for Content ID partner videos that include PD materials to trigger the takedowns of other YouTube users who have included the same PD visual footage or audio.

Ordinary YouTube users need some mechanism to protect their own original video materials from incorporation into Content ID partner videos in ways that will later be used to take down the original clips. Ordinary users may seem like specks in the wind to the MPAA and RIAA giants, but these ordinary users have content ownership rights as well -- and also deserve protection.

In instances where video blocking would currently immediately take place based on a Content ID claim, there should be more "friction" in the system, at least when relatively low numbers of video views have been involved. In such a case, the video uploader could be notified and file their dispute, but the video would not be proactively taken down during this period. Delaying the video takedown may arguably be less appropriate when large numbers of video views have quickly occurred, since this may be a signal of "gaming" the system. And when only third-party ads are involved, not takedowns, the situation is overall less urgent in most cases.

Content ID hits that involve relatively constrained segments of video or audio should not typically result in knocking out the entire audio tracks or blocking the entire involved videos. If a short segment of video is claimed to be offending, that segment could be blacked out with an appropriate visible note -- without blocking the entire video. Similarly, music-related Content ID hits could mute only that section of audio, rather than the entire audio track.
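To make this segment-level idea concrete, here is a purely hypothetical sketch in Python. The actual internals of Content ID are not public, so every name and structure below is illustrative only: it simply shows how sanctions could be scoped to the claimed time spans, muting a claimed stretch of audio or blacking out a claimed stretch of video, while leaving the rest of the upload untouched.

```python
# Hypothetical sketch only -- the real Content ID mechanisms are not
# publicly documented. This illustrates applying sanctions to claimed
# segments of a video rather than to the entire upload.

def segment_actions(duration, claims):
    """Given a video duration (in seconds) and a list of claims, each a
    (start, end, kind) tuple with kind in {"audio", "video"}, return a
    list of (start, end, action) tuples covering only the claimed spans.

    "mute" would silence just that stretch of audio; "black_out" would
    replace just that stretch of video with a visible notice. Content
    outside the claimed spans is left untouched.
    """
    actions = []
    for start, end, kind in sorted(claims):
        # Clamp each claim to the actual bounds of the video.
        start, end = max(0, start), min(duration, end)
        if start >= end:
            continue  # ignore empty or entirely out-of-range claims
        action = "mute" if kind == "audio" else "black_out"
        actions.append((start, end, action))
    return actions

# A 300-second video with a 20-second music claim and a 10-second
# footage claim: only those two spans are sanctioned.
print(segment_actions(300, [(40, 60, "audio"), (120, 130, "video")]))
```

Even a crude scheme like this would be a gentler default than today's all-or-nothing takedowns, though the hard part in practice would obviously be the accuracy of the match boundaries, not the bookkeeping.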

Not everything I've suggested above may be practical within the current mechanisms of YouTube's Content ID and related systems. Nor does every YouTube user with a supposedly "offending" video receive exactly the same treatment even now. The details of how Content ID makes its determinations are not publicly known, and inconsistencies in deployed penalties are also frequently mentioned in the notes I've received on these topics.

That said, as noted I've faced erroneous Content ID claims myself, and I've run various ... experiments ... on both YouTube and via Google's "Hangouts On Air" (which appears to feed content through the Content ID system) to try to get a feel for what it takes to trigger the alarms. I have some pretty good data from these tests -- but for now let's just say that inconsistency is indeed a matter of considerable and quite relevant concern.

Even if we choose to ignore my specific points and conceptual suggestions above, it is still undeniable that continuing down the current path appears to be heading toward something of a slow-speed train wreck.

The combination of expansive content rights with automated content analysis systems -- unable to really deal appropriately with public domain materials and fair use -- has created a tightening noose that could ultimately squeeze much of the life out of ordinary user-created video content. Even if we stipulate that the current apparent skewing of these systems toward the powerful content giants is the result of practical and technical considerations, rather than any particular policy imperatives, such a viewpoint doesn't help us escape from this rapidly coagulating, stultifying dilemma.

We can do better. We must do better. And the sooner that open dialogue really gets going toward dealing with these issues in the name of all stakeholders, from the teenager creating their own video masterpiece in their bedroom, to the largest of the studios here in L.A., the greater the chances that we'll be able to avoid the nightmarish day that "This video is no longer available." becomes the standard-bearer of what were once our technological dreams.