SOPA: Online Piracy Bill Shelved Until 'Consensus' Is Found


Whether or not websites should black out in protest against the Stop Online Piracy Act (SOPA), the internet community can finally relax: the highly controversial SOPA bill has been shelved by the U.S. Congress, putting off action on it indefinitely, reports The Hill.

The announcement came just hours after Judiciary Chairman Lamar Smith (R-Texas), SOPA's sponsor, made a major concession to the bill's critics by agreeing to drop a controversial provision that would have required Internet service providers to block infringing websites.

House Oversight Chairman Darrell Issa (R-Calif.) said early Saturday morning that Majority Leader Eric Cantor (R-Va.) promised him the House will not vote on the controversial Stop Online Piracy Act (SOPA) unless there is consensus on the bill.

"While I remain concerned about Senate action on the Protect IP Act, I am confident that flawed legislation will not be taken up by this House," Issa said in a statement. "Majority Leader Cantor has assured me that we will continue to work to address outstanding concerns and work to build consensus prior to any anti-piracy legislation coming before the House for a vote."

Issa said that even without the site-blocking provision, the bill is "fundamentally flawed."

"Right now, the focus of protecting the Internet needs to be on the Senate where Majority Leader Reid has announced his intention to try to move similar legislation in less than two weeks," he said.

For the past few months, SOPA had stirred outrage across the net, with companies such as Google, Twitter and Facebook strongly opposing the bill, and people boycotting companies that supported it -- Reddit led a boycott initiative that cut into the number of domain names GoDaddy was selling.

A number of websites are (or were) planning to "go black" this week while the U.S. Congress discusses the Stop Online Piracy Act (SOPA) and the Protect IP Act (PIPA). Because SOPA and PIPA threaten the existence of sites that link to copyright-infringing content (like Twitter, Wikipedia, Facebook and every other site on the Internet), the bills -- which are currently stalled in Congress -- have sparked a massive online backlash.

Anti-SOPA CloudFlare launched the "CloudFlare Stop Censorship" app -- which essentially solves investor Fred Wilson's problem of not knowing exactly how to black out. Anyone who uses CloudFlare can install the app with one click from the CloudFlare App Marketplace, and those who don't use CloudFlare can grab the code from GitHub.

Google's Pierre Far also shared several tips earlier in his Google+ post titled "Website outages and blackouts the right way":

Webmasters should return a 503 HTTP status code for all URLs participating in the blackout (parts of the site or the whole site). This tells crawlers two things: (a) the blacked-out pages are not the "real" content of the site and won't be indexed by search-engine crawlers, and (b) the blackout won't cause duplicate-content issues, even when all of the pages are blacked out.
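Far's post doesn't include code, but the 503 advice above can be sketched as a minimal WSGI handler (the function name and page text here are illustrative, not from the post). Every blacked-out URL gets a 503 plus a Retry-After hint so crawlers treat the outage as temporary:

```python
# Minimal blackout sketch, assuming a WSGI deployment.
# Names and markup are hypothetical; adapt to your own stack.

def blackout_app(environ, start_response):
    """Serve every blacked-out URL with a 503 Service Unavailable
    and a Retry-After hint, so crawlers don't index the protest page
    and don't treat the outage as permanent."""
    body = b"<h1>This site is blacked out in protest of SOPA/PIPA</h1>"
    start_response("503 Service Unavailable", [
        ("Content-Type", "text/html; charset=utf-8"),
        ("Retry-After", "86400"),  # suggest retrying after ~1 day
        ("Content-Length", str(len(body))),
    ])
    return [body]
```

Because every blacked-out URL returns the same 503 rather than real HTML, search engines see a temporary error instead of thousands of identical pages, which is what avoids the duplicate-content problem Far mentions.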

As Googlebot is currently configured, it will halt all crawling of the site if the site's robots.txt file returns a 503 status code for robots.txt. This crawling block will continue until Googlebot sees an acceptable status code for robots.txt fetches (currently 200 or 404).
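One practical consequence of the robots.txt behavior above is that the blackout rule should not apply to robots.txt itself. A hypothetical nginx sketch (domain and paths are placeholders, not from the post) might carve out that one file:

```nginx
# Hypothetical sketch: black out every page with a 503,
# but keep serving robots.txt with a 200 so Googlebot
# does not halt crawling of the whole site.
server {
    listen 80;
    server_name example.com;      # placeholder domain

    location = /robots.txt {
        root /var/www/html;       # unchanged robots.txt, served with 200
    }

    location / {
        add_header Retry-After 86400 always;  # hint: temporary outage
        return 503;
    }
}
```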

"Keep it simple and don't change too many things, especially changes that take different times to take effect. Don't change the DNS settings. As mentioned above, don't change the robots.txt file contents. Also, don't alter the crawl rate setting in WMT. Keeping as many settings constant as possible before, during, and after the blackout will minimize the chances of something odd happening," advises Far.

Far also advises: "don't block Googlebot's crawling with a 'Disallow: /' in robots.txt, as it will cause crawling issues for much longer than the few days expected for the crawl rate recovery."

Finally, monitor the crawl errors section in Webmaster Tools for a couple of weeks after the blackout to ensure there aren't any unexpected lingering issues.


About The Author

Deepak Gupta is an IT & Web Consultant. He is the founder and CEO of diTii.com & DIT Technologies, where he provides technology consultancy and the design and development of desktop, web and mobile applications using various tools and software.