Was it only in 2004 that the pundits posed the question as to whether Microsoft was the new IBM? How time flies. Last week, freelance journalist Erik Sherman raised the question “Is Google the new Microsoft?” as he reviewed Google’s latest behaviors and market dominance. For evidence, Sherman listed several similarities:

Asks users to trust it with an explanation that amounts to “because we say so” and “because we care”

Dominates search almost to the extent that Microsoft dominated the desktop and laptop markets

Acts first, asks later – after the lawsuits start hitting the fan, as with its book-scanning project

Ignores antitrust concerns, drawing government attention in the U.S. and Europe

What do Google’s dominance and actions mean to your privacy and safety?

Google’s services have the ability to collect, data mine and resell information – whether it be your content, your location, or elements of your identity – to a far greater extent than an operating system or productivity tools ever could.

This means that transparency, consumer choice, and the ability to opt out of features, services, or to have your information erased entirely are more critical than ever. In the face of these risks, the points noted above are more than a little concerning.

Trust must be earned, on a user-by-user basis; and trust-but-verify applies even when trust has been earned. Every user should be able to see the information collected, stored, or shared about them at any time – and be able to have it removed.

Monopolies become dictatorships – benevolent or otherwise. At the end of the day, companies and corporations are responsible to their bottom line. The actions taken by Google that have drawn governments’ attention should concern every internet user. Their seeming disregard for antitrust concerns only heightens the unease.

Congratulations to Digg. In an effort to reduce the amount of link spam on Digg, a social news website where people can discover and share content online, the company announced a change in their policy towards questionable links.

Spammers use sites like Digg to post their links in an attempt to drive lots of traffic to their sites. In addition to direct clicks by users, the spammers know that search engines are likely to rate their link as more important if their URL is found on Digg.

By adding a rel=”nofollow” attribute to every link that Digg doesn’t trust to be legitimate, the company effectively instructs search engines to ignore the link, so that it doesn’t boost the link’s ranking and push it higher in the search results consumers see. This undercuts the effectiveness of some types of search engine spam and improves the quality of the search results you receive. The nofollow policy is applied to questionable links in stories, profiles and comments.
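As a rough sketch of how this kind of policy can be applied when rendering links (the trusted-domain list, function names, and URLs below are hypothetical, not Digg’s actual implementation):

```python
# Hypothetical sketch of Digg-style nofollow tagging (not Digg's actual code).
# Links whose domain is not on a trusted allowlist get rel="nofollow" added,
# telling search engines not to count the link toward the target's ranking.

import re

TRUSTED_DOMAINS = {"example.org", "news.example.com"}  # placeholder allowlist

def domain_of(url):
    """Extract the host portion of an http(s) URL."""
    match = re.match(r"https?://([^/]+)", url)
    return match.group(1) if match else ""

def render_link(url, text):
    """Return an anchor tag, marking untrusted domains with rel="nofollow"."""
    if domain_of(url) in TRUSTED_DOMAINS:
        return f'<a href="{url}">{text}</a>'
    return f'<a href="{url}" rel="nofollow">{text}</a>'

print(render_link("https://example.org/story", "Trusted story"))
print(render_link("https://spam.example.net/buy", "Spammy link"))
```

The interesting design question is the one Digg left unanswered: how the trust decision itself is made. Here it is a static allowlist; in practice it would be a popularity threshold or reputation score.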

Digg’s VP of Engineering, John Quinn, commented on the change today in a blog post informing users:

We’ve made a few changes to the way Digg links to external sites that may impact some folks in the SEO [search engine optimization] community. These changes reduce the incentive to post spammy content (or link spam) to Digg, while still flowing ‘search engine juice’ freely to quality content. We’ve added rel=”nofollow” [an HTML attribute] to any external link that we’re not sure we can vouch for. This includes all external links from comments, user profiles and story pages below a certain threshold of popularity.

This work was done … in an effort to look out for the interests of content providers and the Digg community.

Digg did not disclose how it determines which sites it mistrusts, and that’s probably for the best, since disclosure would give spammers insight they could use to circumvent the blocks.

It is great to see companies that proactively protect consumers. Hats off to Digg.

For more than a year, Canada’s Privacy Commissioner, Jennifer Stoddart, investigated Facebook’s privacy policies and tools. The investigation found that Facebook gave “confusing or incomplete” privacy information to subscribers and gave developers “virtually unrestricted access to Facebook users’ personal information.”

Under pressure to change, Facebook today announced plans to improve their service. “Our productive and constructive dialogue with the Commissioner’s office has given us an opportunity to improve our policies and practices in a way that will provide even greater transparency and control for Facebook users,” said Elliot Schrage, Vice-President of Global Communications and Public Policy at Facebook. “We believe that these changes are not only great for our users and address all of the Commissioners’ outstanding concerns, but they also set a new standard for the industry.”

Here are the specific changes Facebook will be making according to their Press Statement:

Updating the Privacy Policy to better describe a number of practices, including the reasons for the collection of date of birth, account memorialization for deceased users, the distinction between account deactivation and deletion, and how its advertising programs work.

Encouraging users to review their privacy settings to make sure the defaults and selections reflect the user’s preferences.

Increasing the understanding and control a user has over the information accessed by third-party applications. Specifically, Facebook will introduce a new permissions model that will require applications to specify the categories of information they wish to access and obtain express consent from the user before any data is shared. In addition, the user will also have to specifically approve any access to their friends’ information, which would still be subject to the friend’s privacy and application settings.
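A minimal sketch of what the new permissions model implies: an application declares the data categories it wants, and receives only what the user has expressly approved. (All names here are hypothetical illustrations, not Facebook’s actual API.)

```python
# Hypothetical sketch of a declare-and-consent permissions model
# (illustrative only, not Facebook's actual API): an application must
# declare the data categories it wants, and gets nothing the user has
# not expressly approved.

class Application:
    def __init__(self, name, requested_categories):
        self.name = name
        self.requested = set(requested_categories)

def grant_access(app, user_approved, user_profile):
    """Release only categories that the app requested AND the user approved."""
    allowed = app.requested & set(user_approved)
    return {cat: user_profile[cat] for cat in allowed if cat in user_profile}

profile = {"name": "Alice", "email": "alice@example.com", "photos": ["p1.jpg"]}
app = Application("QuizApp", ["email", "photos", "friends"])

# The user approves only "email"; photos and friends are never shared.
shared = grant_access(app, ["email"], profile)
print(shared)  # {'email': 'alice@example.com'}
```

The key contrast with the old model is the default: previously an application could read broadly unless restricted; here the default is empty, and each category requires an explicit grant.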

Facebook announced, “work on the planned changes will begin immediately. However, some changes will take some time before they are visible. For example, updates to the Privacy Policy will require a notice and comment period for users. In addition, the changes to how users share information with third-party applications will require significant time and resources, both for the updating and testing of the new Facebook API, and for third-party application developers to reprogram and test their applications. Facebook anticipates this entire process will take approximately 12 months.”

Thank goodness. These changes are a long time in coming, and every Facebook user will benefit from the work now being undertaken. This is a significant step towards recognizing users’ right to privacy, choice, and transparency.

Until the changes are in place (up to a year from now), I recommend that you do not use third-party applications, and that you carefully review the safety and privacy settings you currently have in place.

A stronger focus on creating a trained workforce to thwart high-tech threats, more frequent national cyber-reviews, a workforce plan to address skill deficiencies, and an analysis of barriers to recruiting cybersecurity professionals are among the changes that Senate Commerce Chairman John (Jay) Rockefeller and Sen. Olympia Snowe, R-Maine, introduced to the cybersecurity legislation over the August recess.

Though the revisions have not yet been approved, they incorporate excellent feedback on this important legislation. As a nation, we simply do not have enough qualified cybersecurity experts within law enforcement, government bodies, and companies to effectively combat the mounting threats against our infrastructure, and this legislation is an excellent step towards closing this shortfall.

Also encouraging is that, even in these difficult economic times, the original bill’s National Science Foundation scholarship program is preserved, and that significant funding is set aside for the National Institute of Standards and Technology to conduct competitions to woo students into cybersecurity careers.

Another alteration to the bill is the curtailment of what was a highly contentious provision, which had the potential to give the White House the authority to effectively turn off the Internet during a cyber crisis. The redrafted proposal directs the president to work with the industry during cyber emergencies on a national response as well as the timely restoration of affected networks.

The significant and escalating threats to our economy, infrastructure, and safety demand a strong response and a shift in course, which this legislation, if appropriately crafted, will begin to provide.

There has been a longstanding legal battle over what companies should be required to do in order to monitor and block harmful content from minors. The debate has recently flared up again, and it is worth understanding the issues at stake.

The controversy

On one side of the content filter debate is the Justice Department. It is seeking to reinvigorate the Child Online Protection Act (COPA) that was first created in 1998 to protect minors from commercially distributed pornographic content on the Internet. COPA requires commercial Web sites to secure proof of identity and age before displaying content that could be harmful to minors.

On the other side of the debate are the ACLU and a broad array of Internet content providers. They argue that COPA is flawed and that content filters allow parents adequate opportunity to protect their children. They also assert that about half of the sites that promote sexually explicit content are international in origin where the law would have no bearing anyway.

Introduced into the current court hearing was a new study by Professor Philip B. Stark, a statistics professor at the University of California, Berkeley. Stark’s research on the effects of content filtering software found that one filter—AOL’s Mature Teen—blocked up to 91 percent of sexually explicit Web sites. The study showed that less restrictive filters blocked “at least 40 percent” of explicit content. (The report did not mention how many desirable sites were blocked in the process.)

Citing Stark’s research, ACLU attorney Chris Hansen claimed that because “filters are more than 90 percent effective,” “it’s up to the parents how to use it, whereas COPA requires a one-solution-fits-all [approach].”

If only one percent of Web sites are pornographic and filters are more than 90 percent effective, what’s the issue? If blocking “harmful” content is as easy as installing a filter tool, why is it that 82 percent of users feel the ease of stumbling across sexually explicit material is a problem? (Consumer Reports WebWatch, 2005) There are serious flaws in the arguments on both sides of this debate.

Flaws in COPA

The flaws in the COPA proposal include the following:

At eight years old, COPA is based on an antiquated view of the Internet. It fails to account for newer revenue models (ad-funded and the like) and for newer functionality for sharing and distributing content, both of which further reduce the effectiveness of the filtering that COPA depends on. It takes a very simplistic view of how ‘bad content’ can be discovered, and it doesn’t account for material generated by users or for RSS, P2P sharing, and other innovations that have developed since 1998.

The regulation would apply only to U.S. companies, which means that all Web sites hosted internationally (more than half of all porn sites) would not be bound by the law.

COPA only addresses commercially distributed content, but there is a great deal of “free” content that falls into the category of “harmful to minors.”

Flaws in arguments of those opposed to COPA

The arguments from the ACLU and others contain flaws as well, including:

Chris Hansen’s claim that “filters are more than 90 percent effective” is overstated and misrepresents Professor Stark’s research, which found that one filter, at its most restrictive setting, blocked 91 percent of sexually explicit content. Hansen did not mention the rate of over-blocking at that filter setting. (Note: over-blocking means a filter falsely blocks a legitimate site, like a *** cancer site, because it contains the word ***.) If a content filter over-blocks legitimate content too frequently, the filter is so frustrating to use that consumers give up and turn it off.

The Stark study indicated that less restrictive filter settings “blocked at least 40 percent of sexually explicit sites,” a number that is more realistic in terms of filter accuracy without incurring significant over-blocking. That means, however, that less restrictive filter settings fail to block about 60 percent of content deemed harmful to minors. This may be a show-stopper for many parents, given that the average age of first exposure to unwanted sexually explicit material is eleven (research by Top Ten Reviews) and 25 percent of youth have unsolicited exposure to sexually explicit content (research from Online Victimization: A Report on the Nation’s Youth).
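To see why a block rate means little without the over-blocking rate alongside it, consider a toy calculation. The counts below are invented for illustration, not drawn from the Stark study:

```python
# Illustrative numbers only (not from the Stark study): a quick way to see
# why a filter's "percent blocked" figure is meaningless without the
# over-blocking rate reported alongside it.

def filter_stats(explicit_blocked, explicit_total, legit_blocked, legit_total):
    """Return (block rate on explicit sites, over-block rate on legitimate sites)."""
    block_rate = explicit_blocked / explicit_total
    overblock_rate = legit_blocked / legit_total
    return block_rate, overblock_rate

# A strict setting: blocks 91 of 100 explicit sites, but also 25 of 100 legitimate ones.
strict = filter_stats(91, 100, 25, 100)
# A lenient setting: blocks only 40 of 100 explicit sites, but just 2 of 100 legitimate ones.
lenient = filter_stats(40, 100, 2, 100)

print(f"strict:  blocks {strict[0]:.0%} of explicit sites, {strict[1]:.0%} of legitimate sites")
print(f"lenient: blocks {lenient[0]:.0%} of explicit sites, {lenient[1]:.0%} of legitimate sites")
```

The two numbers move together: tightening a filter raises both rates, which is exactly the trade-off the “90 percent effective” claim glosses over.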

The oft-cited statistic that only one percent of Web sites are pornographic is also misleading. First, the one percent figure refers to the number of Web sites, not the frequency with which pornographic content is presented to minors.

Secondly, this statistic is the result of a single study. Other research (from Top Ten Reviews, for example) suggests that pornographic Web sites represent 12 percent of the total.

The ACLU’s position assumes that parents will take charge: that they will know where to find filters, how to download and install them, and will proactively watch out for their children’s online safety, all while achieving over 90 percent accuracy in blocking sexually explicit images. It ignores the fact that the children potentially at greatest risk are those whose parents aren’t taking appropriate steps to protect them in any facet of their lives. COPA’s intention was to default to safer settings for the protection of minors; the ACLU and like-minded companies want to assume an unfiltered default and require proactive steps to be taken for the protection of minors.

Follow the money

Companies are in the business of making money and minimizing costs. Building strong filters that allow consumers to set their own content experience or the content experience of their children is complex and expensive.

To provide highly effective content filters, a company would need to screen all content—text, images, video, and audio.

Also, it isn’t a build-the-filter-once-and-you’re-set proposition. Businesses and individuals who want to circumvent the filters are constantly working on ways to do so (just as they do with spam, phishing, spyware, and virus safeguards.)

Keep in mind that each of these filters (and updates) has to be planned, built, tested, and translated into many languages. They must account for cultural sensitivities, respect differing state and national laws, and empower consumers with enough flexibility to set their own standards.

Filters have to work on a dizzying array of networks, operating systems, Internet browsers, and devices including PCs, Internet-enabled cell phones, gaming devices like Xbox, and so on. Each type of device and operating system has unique development and testing requirements.

It isn’t enough to simply filter content that can be browsed. To really provide consumers the content filtering choice and protections they should have, content filters need to be applied to content in blogs (like MySpace, Friendster, and Facebook), video hosting services (like YouTube, Google Video, and Windows Live Soapbox), as well as content served up by the services themselves.

There is also the reality that no matter how good a filter is, it won’t catch everything. Companies struggle with the concern that trying to filter and failing will open them up to greater legal exposure than if they do nothing.

These are huge challenges—not insurmountable, but certainly not appealing for businesses that face the costs and aren’t hearing a huge outcry from consumers demanding change.

What you can do

In spite of the complexity, empowering consumers to set content filters to match their values and protecting minors are goals worth shooting for. But it’s naïve to imagine the Internet industry taking on this challenge and expense without clear regulatory requirements, strong consumer demand, and some safeguards that protect them from penalties for any gaps. I can’t think of a single industry that has managed to successfully regulate itself and put consumer interest and safety first.

For regulators and law enforcement: Focus your energy on these three areas:

In its current form, COPA won’t be successful for the reasons cited above. Rather than fighting to enforce COPA as it is written today, revise and modernize it so that it provides the intended benefits without compromising free speech and privacy.

Provide companies protection from legal exposure if harmful material slips through when they have demonstrated diligence in providing strong filters.

Work across state and national borders to standardize regulatory requirements to minimize the breadth of legal variables companies will face when building filters.

For consumers: Let companies and elected officials know that you demand that your safety and values be protected and respected; if you don’t let companies know your expectations it will surely take longer to achieve them. (To fuel your demands, read my blog, Your Internet Safety Bill of Rights.)

For Internet companies:

Increase your investments in researching and building robust filters that provide consumers the safety and flexibility they need.

Make safety a top priority in building consumer trust and loyalty.

Reach out across the industry to establish standards and best practices.