Our knowledge of DNA continues to expand and, by extension, so does our ability to manipulate it. Recently, researchers have created a viable organism with a chromosome incorporating pairs of synthetic bases. [1] Others have engineered an organism containing only the genes necessary for life. [2] In addition to advancing our understanding of the life sciences, which may indirectly enhance human welfare, working with genes also has direct applications to human health: for example, genetic testing allows us to more effectively diagnose and treat diseases. [3] Genetic engineering has been extremely useful in cancer research. [4] Outside of the field of human biology, genetically modified crops may be controversial, but they offer significant agricultural and economic benefits. [5]

Yet for all the benefits we stand to gain from our understanding of genetic information, it raises a host of issues which baffle both our legal system and our ethical code. One such issue, which implicates both the law and ethics, is: what does it mean to “own” a gene, and who should do so? This may seem like an intellectual abstraction, but the answer is of great practical importance to medicine and research.

Much legal scholarship considering the issue, including in this publication and on this blog, deals primarily with IP laws. [6] Following the Supreme Court’s decision in Diamond v. Chakrabarty, [7] which has been called the biotech industry’s Magna Carta, one may patent living organisms one has created. This is not restricted to micro-organisms (although patents on humans are prohibited by statute), [8] which has led to suits over, for example, patented mice used for research purposes. [9] The decision remains controversial: many believe that living beings and their genes simply should not be subject to ownership, and they consider patents a form of ownership. [10] (Arguably, patents are not true ownership but simply a right to prevent certain actions.) Others feared that such patents could hinder medical and scientific practice, with negative impacts on human health and well-being. [11] In 2013, the Court alleviated many of these concerns in Association for Molecular Pathology v. Myriad Genetics. [12] The Court held that one may not patent natural genes but that synthetic genes, including cDNA, remain patent eligible. This removed potential obstacles to genetic testing and to medical treatment based on genetic analysis. Patents may still pose barriers to research in areas that rely on cDNA or other artificial DNA, such as protein synthesis or genetic engineering. Such patents may also encourage research in those areas, however, by creating the prospect of a significant financial reward.

Patents aside, the concept of ownership is one way to address the difficult question of balancing individual privacy against encouraging potentially lifesaving medical developments. Although scientific inquiry depends on widespread access to data, many seem to believe that one has property rights over one’s own cells and the genes contained within, including the right to control use of data derived from those cells. [13] In a widely cited decision, Moore v. Regents of the University of California, [14] the Supreme Court of California held that existing law did not grant such rights. [15] The court declined to extend the law to create one, reasoning that granting such a right would greatly hinder research, that human biological materials should be regulated by statute rather than by common law causes of action, and that the doctrine of informed consent protected genetic donors from unauthorized use of their genes. [16] Other courts have since reached similar conclusions. [17] Perhaps in response to these decisions, a number of states have passed legislation declaring genetic information to be personal property. [18]

The issue becomes even more difficult writ large, when ownership rests with a culture rather than with one person, particularly when that culture attaches spiritual value to genetic information. Such information is often shared, or used beyond the purpose of the original study, without the donors’ consent. [19] The practical implications of this were recently demonstrated by the San, a widely studied people living in southern Africa, who issued a code of ethics for those intending to study their genes. [20] Among other provisions, the code requires researchers to consult the San on conclusions and on whether to publish, and it forbids re-use of data without permission. Thus, although such a code respects the dignity and values of other cultures, it constrains to some extent the independent analysis that underlies scientific practice. Similar approaches have been taken by indigenous peoples in Australia and Canada, [21] and have been proposed for Native Americans in the United States. [22]

Although discussions of genetic ownership tend to focus on medicine and research, there are other contexts in which genetic information could be valuable; for example, some insurance companies are thought to have used it in evaluating the risk of potential insureds. [23] Other difficulties are likely to arise as genetic technology continues to advance; for example, one scholar has asked whether operative cloning could undermine traditional animal breeding, and whether an ownership model of animals’ DNA could preserve professions that rely on the value of those animals’ genetic characteristics. [24]

To regard genetic information as personal property may resolve many of the worries associated with uninhibited genetic research; for better or for worse, this approach seems to be gaining ascendancy among the general public and (to a lesser extent) the scientific community. New difficulties are likely to arise, however, as the full extent of a ‘property right’ in one’s DNA becomes apparent. Popular attitudes towards DNA may harden or change over time, as the public becomes more familiar with genetics. We are far from resolving the debates, legal and philosophical, over the “ownership” of genes. Nevertheless, I suspect that the era of primarily self-regulated genetic research is drawing to a close.

A New Era for Privacy & Data Protection

On October 27, 2016, just days before a presidential election, the Federal Communications Commission (FCC) passed new broadband consumer privacy rules. The new rules were passed with a 3-2 vote—straight on party lines, with all the Democrats voting for the rules and the two Republicans voting against. Now, however, the Republican commissioners have a majority and the new privacy rules could very well be overturned. The new rules require internet service providers to obtain permission from consumers before sharing their data with third parties. Of particular significance is the sharing of web browsing data with third party advertisers, who use the information to customize brand experience to particular consumers.

For over 80 years, the FCC has exercised the power granted by the Communications Act of 1934 to regulate telecommunications and broadcasting.[1] From regulating monopoly telephone services to protecting net neutrality, the FCC has confronted some of the most complicated and quickly-changing legal issues. Today, personal information data breaches are among the most divisive and important issues that consumers, providers, and regulators face—and the new privacy rules attempt to address these issues by placing some of the control over how and when personal information is used back into the hands of consumers. Data breaches are all too common. In fact, there is “widespread evidence of data breaches and vulnerabilities related to consumer information.”[2] Further, there is an increased risk of data breaches when companies share sensitive or confidential information with third party vendors.[3] The new FCC privacy provisions will affect how and when consumer data can be gathered by internet service providers before that sensitive data is shared with third parties.

Section 222 of Title II of the Communications Act provides that every telecommunications carrier “has a duty to protect the confidentiality of proprietary information of, and relating to, other telecommunication carriers, equipment manufacturers, and customers.”[4] Further, all telecommunications carriers are limited in their ability to share such information with third parties.[5] Prior to the new privacy provisions, internet service providers were not restricted by this statute from sharing information with third party marketing groups. Instead, broadband internet service providers were classified as “information services.” The new privacy provisions reclassify internet service providers as telecommunication services—imposing the statutory duty of Section 222.[6]

Now, opt-in consent is required before broadband consumer information may be shared with third parties. This opt-in requirement means that internet service providers must obtain consumer permission before using or sharing categorically sensitive information, including precise geo-location, children’s information, health information, financial information, Social Security numbers, web browsing history, app usage history, and the content of communications.[7] This significant change to Section 222 for broadband internet service providers creates a higher obligation to protect consumer privacy when sharing information with third parties, which could limit the number of personal information breaches. In fact, “strong security protections are crucial to protecting consumers’ data from breaches and other vulnerabilities that undermine consumer trust.”[8]

The FCC outlines that the new privacy rules will strengthen the protection of customer information by requiring internet service providers to implement industry best practices and data breach notification requirements.[9] By allowing customers to opt in to the sharing of personal information, consumers themselves can limit their own personal risk of a data breach. While these rules could change the number and type of data breaches, consumers may not feel their effect for some time: the data breach notification requirements will not become effective until six months after publication of the Order in the Federal Register, which took place in January. Now, with a new head of the FCC and a Republican majority, the new requirements may never reach implementation—and the future of consumer privacy on the internet remains a complicated issue with no clear answer.

Digital Afterlife and How to Tweet Post Mortem

Carrie Fisher passed away on December 27, 2016, at the Ronald Reagan UCLA Medical Center in Los Angeles, after suffering a massive heart attack. Fisher, a famous actress best known for her role as Princess Leia in the Star Wars franchise, maintained a number of quite active social media accounts, primarily a Twitter account with 1.25 million followers and a Facebook account with 523,845 Likes. Since her passing, no activity has been registered on these accounts as of the time of this posting.

The issue of handling one’s digital assets following death has become more prominent, as technology and social media play a more central role in our lives. Digital assets, such as social media accounts, websites and emails, now have an economic value much like tangible assets. One can only imagine the potential of Carrie Fisher’s Twitter account, with its 1.25 million followers, which now sits idle.

Legislative History

The Uniform Fiduciary Access to Digital Assets Act (UFADAA), approved by the Uniform Law Commission on July 16, 2014, attempted to tackle this very issue. The act aims to extend a fiduciary’s existing control over the decedent’s tangible assets to include his or her digital assets. The UFADAA allows fiduciaries to access electronic records of the deceased, the same way current probate, trust and banking laws enable fiduciaries to access tangible bank accounts and records, subject to a will, trust or other record. Although some raised concerns regarding the privacy implications of the act, arguing it effectively voids user privacy choices following a consumer’s death, the UFADAA was nonetheless submitted for the consideration of state legislatures. The act was revised in 2015 to address the privacy concerns raised by critics and received the endorsement of tech giants such as Facebook and Google, which store much of the data covered by the act. Currently, 23 states have enacted some version of the Revised UFADAA. California, in September 2016, became the most recent state to join the fold. An additional nine states have introduced a bill to adopt the act as state law.

The Goals of the Act

Other federal and state laws regarding probate, property or agency did not account for digital property, since social media, blogs, emails and other forms of digital assets became prevalent only after their enactment. Had the UFADAA not been enacted as state law in so many states, terms of service and privacy policies would have governed the access of surviving family members to social media and email accounts. Since these terms usually expire with the passing of the user and are not transferable, one’s digital assets would have remained out of reach for one’s successors. Hence, successors would not have been able to gain control over valuable digital assets such as frequent flyer miles, to manage the deceased’s contact list to notify friends of his or her death and of service arrangements, or to gain access to cherished memories, such as family photos stored in the cloud.

In the prefatory note of its 2014 version, the UFADAA makes it clear that it simply intends to remove barriers to a fiduciary’s access to electronic records while leaving other laws, such as probate, trust, banking, investment securities and agency law, unaffected. It is aimed at updating state fiduciary law for the Internet age. Under the UFADAA, a fiduciary appointed to manage the property of another person, such as an executor, trustee, personal representative or agent under a power of attorney, will bear the legal authority to manage and distribute both the decedent’s tangible assets and his or her digital assets, all in accordance with the decedent’s estate plan. Without such authority, any access to digital assets may be deemed unauthorized under federal law (the Computer Fraud and Abuse Act, 18 U.S.C. § 1030, and the Stored Communications Act section of the Electronic Communications Privacy Act, 18 U.S.C. § 2701).

Note that, due to the aforementioned privacy concerns, the fiduciary still requires the decedent’s prior affirmative consent, in a will, trust, power of attorney or other record, to access and disclose private electronic communications and personal photos. The act strives to respect the privacy of the original owner of the digital assets and to fulfill his or her intent with respect to such assets.

What Kinds of Digital Assets Are There?

Section 2(10) of the Revised UFADAA defines “digital asset” broadly to mean an electronic record in which an individual has a right or interest. This includes, without limitation, social media accounts, blogs, emails, computer files, information stored on digital devices, photographs and documents uploaded to the web, digital music, websites and domain names, frequent flyer miles, virtual currency, and digital entitlements associated with online games and services.

Online Tools for Directions regarding Disclosure of Digital Assets

The Revised UFADAA clarified that general assent to the terms of service cannot be interpreted as consent for disclosure or non-disclosure of information to fiduciaries and that any provision limiting or granting fiduciary access is void (based on the assumption that no one really reads the terms of service). However, section 4 of the act gives users the option to grant affirmative consent through an online tool, i.e., an electronic service provided by the online service provider that allows the user, in an agreement distinct from the terms of service, to provide directions for disclosure or non-disclosure of digital assets to a third person. The section further provides that if the online tool allows the user to modify or delete a direction at all times, such directions will supersede any contrary direction by the user in a will, trust, power of attorney or other record. This is a powerful tool for users to prevent the unwanted disclosure of the content of private electronic communications.

This provision is known to be extremely important to Silicon Valley tech giants such as Facebook, Yahoo and Google, which hold immense amounts of user data. The recent passage of AB 691 (Revised Uniform Fiduciary Access to Digital Assets Act) by California’s state legislature, which included a similar provision in section 873(a) (allowing online tools to express a decedent’s intent and guidance with respect to disclosure over directions made by will or other record), enjoyed strong support from these companies. This is because the provision offers strong liability protection, in addition to protecting user privacy.

Currently, Facebook offers an online tool that enables users to dictate what will happen to their accounts following death and to designate a “legacy contact” to look after their accounts. Google offers a similar solution through its “Inactive Account Manager” tool. Other services, such as LinkedIn, Twitter, Yahoo and Instagram, only allow family members to contact the service following the deceased person’s death and ask for access to or deletion of the account. They do not permit users to submit instructions in advance of death with respect to their wishes regarding access; therefore, they risk exposure to privacy violations (granted that the deceased is not available to assert such a claim).

What’s Next?

Access to and distribution of digital assets post-mortem have become an important consideration in estate planning. This issue has also emerged in other fields of law. A rising trend in prenuptial agreements is to address not only tangible property but also digital assets that possess potential economic value. People are not just planning for an untimely demise but also for an untimely divorce.

The Revised UFADAA did not try to break new legal ground but simply extended the laws governing fiduciaries to fit our digital age. The number of states that have adopted it will soon pass the halfway mark. In those states, users will enjoy control over their digital assets in the same manner as they have over their tangible property. With social media accounts having millions of followers and thousands of photos and videos that generate massive amounts of revenue, it seems only fitting that people should be able to consider the post mortem disposition of their online presence.

The FCC’s Latest Privacy Regulations: A New Stance on Private-Sector Protections?

Editor’s Note: This post was written by guest contributor Ido Sivan Sevilla, a Ph.D. Candidate in Public Policy & Information Security at the Hebrew University in Jerusalem. Mr. Sevilla earned his Master’s degree in Public Policy Analysis as a Fulbright Scholar at the University of Minnesota – Twin Cities, and served as a Legislative Fellow for Congressman Ami Bera of California’s 7th Congressional District. Mr. Sevilla’s research focuses on cyber security in national defense and the public sector.

The Federal Communications Commission’s (FCC) recently published regulations for Internet Service Providers (ISPs) are significantly different from previous federal privacy regulations. To some extent, these new regulations follow a trend of increased privacy protections in the post-Snowden era.[1] Nonetheless, in many respects, these privacy protections are qualitatively different from other privacy regulations that have emerged in the last few years. They are aimed at key private stakeholders; impose unified data breach notification and data security requirements to advance both privacy and cyber-security; and advance consumer privacy at the expense of corporate revenue. Are we on the brink of a new regulatory model for private-sector privacy protections in the United States?

ISPs are everyone’s gateway to the Internet. They sit at a critical junction between our personal devices and the websites and services we choose to explore. Potentially, they can learn about our browsing history, the nature of our search queries, the efforts we make to hide ourselves online, and the applications we use to connect with the world. As private companies, they hold metadata based on our online behavior that even the most sophisticated encryption solutions cannot hide[2] and translate this valuable data into revenue.[3] Data analysts can easily deduce our dreams, fears, desires, and personality by applying big-data analytics to this sensitive information.[4] That is why this information is worth a fortune. It is the fuel that powers both our information economy and our surveillance society.[5] Governments and corporations are equally interested in using this data to accurately profile individuals,[6] and both view ISPs as an easily accessible gold mine of information that can be used to serve their interests.

The U.S. government recognized the importance of ISPs a decade ago. In 2006 the FCC required ISPs to design ‘surveillance-friendly’ infrastructures that would allow law enforcement agencies to easily wiretap their desired network traffic given an authorization from a court.[7] Ten years later, the FCC has finally moved on from ensuring that ISPs permit government surveillance to requiring ISPs to protect the privacy of their customers.

The new FCC rules are innovative for three main reasons. First, these regulations are no longer based on the content of the information at stake. Thus far, cyber-security and privacy regulations that protect personal information have only applied to the medical, financial, and federal government sectors. By targeting ISPs broadly, the government has signaled that the previous paradigm, which was reluctant to impose any kind of restrictions on private players in the Internet economy, is starting to change.[8] Regulating privately owned companies that handle all types of personal information is something the United States has never seen before. FCC regulators were probably influenced by their colleagues in the European Union, who embraced an even broader approach and tackled ‘Digital Service Providers’ (DSPs) across the financial, health, transport, and digital infrastructure sectors through the recently enacted Network and Information Security (NIS) Directive.[9]

Second, the new rules require up-to-date data security practices and impose federal data breach notification requirements on all ISPs. The issue of breach notification, despite its importance to the cyber-security of a firm and the privacy of its customers,[10] has struggled to gain Congressional support for almost a decade.[11] The U.S. currently has forty-seven breach notification laws at the state level, but no national data breach law. This new FCC requirement brings us closer to a unified notification standard that increases the privacy of customers and the cyber-security of organizations at the same time.[12]

Third, the new rules break the standard Internet business model fueled by the processing and selling of personal information in exchange for ‘free’ services.[13] ISPs that benefit significantly from accessing and using personal information must now become more transparent and obtain user consent prior to processing their information. This elevates consumer privacy over ISPs’ business interests, which may require them to rethink their business model altogether. This might even pave the way for ISPs to offer incentives to consumers for sharing their personal data. In 1996, Kenneth Laudon, a professor of information systems at NYU, published a seminal paper in which he embraced a market-based approach to privacy and suggested that privacy may be protected through market mechanisms.[14] Laudon suggested establishing ‘banks of personal information’ that would control all our data and collect benefits for us each time we agreed to let an entity use our personal information. The FCC’s new regulations, twenty years later, might be the first step toward embracing this innovative model.

Unsurprisingly, 91% of Americans strongly believe that they do not have control over the way their personal information is collected.[15] Do these rules increase our digital privacy and security? As always, the devil is in the details. The compliance and enforcement of these regulations will set the tone. However, we can already see how these rules apply broadly to information holders across sectors, take us one step closer to a unified data breach notification standard, and prioritize individual privacy over corporate revenue. These new regulations are a significant step towards tackling privacy threats from the private sector, even if they do not address ISP cooperation in government surveillance.

[1] Since the Snowden revelations, we have witnessed several court rulings that have strengthened privacy (see, e.g., FTC v. Wyndham Worldwide Corp., 799 F.3d 236 (3d Cir. 2015), Microsoft Corp. v. United States, 829 F.3d 197 (2d Cir. 2016), Klayman v. Obama, 957 F. Supp. 2d 1 (D.D.C. 2013)) and an increasing amount of pro-privacy legislation and proposals in Congress in rates that are comparable to the 1970s and early 1980s in the U.S. federal arena.

[6] Although they differ on their reasoning, governments are interested in increased control over the citizens to prevent threats while private companies want to predict the interests of potential customers and sell products.

[12] When facing costly notification requirements, companies are incentivized to invest in cyber-security.

[13] Shoshana Zuboff has called this phenomenon ‘surveillance capitalism’ – the accumulation of data by the private market. See Shoshana Zuboff, Big Other: Surveillance Capitalism and the Prospects of an Information Civilization, J. of Info. Tech. 75 (2015), https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2594754

Posting without Hosting: Implications of Russia’s Recent Removal of LinkedIn

If you had hoped to check out your career prospects on LinkedIn from Russia this past week, you might have been puzzled by the site’s failure to load. LinkedIn has been blocked following a recent decision by the appellate court for Moscow’s Tagansky District holding that LinkedIn violates a Russian data protection law, Federal Law No. 242-FZ (“No. 242”), on two counts: not storing data about Russians on servers located in Russian territory; and processing information about individuals who are not registered on the LinkedIn website and who have not signed the company’s user agreement. Following the court’s decision, Roskomnadzor, Russia’s telecommunications and media authority, formally announced its intent to enforce the court’s ruling by blocking access to the website.

Roskomnadzor’s decision to block LinkedIn is considered the first application of No. 242 since it officially went into effect in September 2015. The law represents the latest step taken by Russia to insulate user data from foreign interference.[1] Such efforts seem counterintuitive in an age when cross-border Internet traffic continues to grow exponentially. The McKinsey Global Institute estimates that “global online traffic across borders grew 18-fold between 2005 and 2012,” with forecasts of an additional eightfold increase by 2025. SpaceX recently applied to the Federal Communications Commission (“FCC”) for the right to launch a global Internet service, powered by satellites placed in Earth’s orbit. The company’s effort follows similar initiatives undertaken by Boeing, Samsung, and Facebook.

However, Russia is not the only nation that has taken proactive steps toward curtailing the transmission of certain information beyond its borders. To date, countries including Australia, China, India, and South Korea, and organizations like the European Union, have enacted similar data localization laws. Most of these efforts derive from Edward Snowden’s disclosure of the wide-scale surveillance undertaken by the United States National Security Agency (“NSA”).

In the wake of the scandal, a number of countries enacted “protective” measures, citing the possibility of foreign surveillance as a reason for preventing data from crossing their borders. Although these laws are designed to enhance the privacy and security of personal information against unwarranted intruders, they usually end up having the opposite result.

As Anupam Chander and Uyen Le note in their article on data nationalism, there are two primary reasons why these measures weaken the privacy and security of a nation’s personal data.[2] First, any requirement that calls for localized data servers reduces a country’s ability to distribute information across multiple servers in different locations.[3] The resulting consolidation of user data into a single server creates a “jackpot” of information, making it easier for criminals to access large amounts of data at once.[4] Second, nations may increase the likelihood of a substantial data breach by designating data security responsibilities exclusively to local providers.[5] This concern has been raised by a number of information technology associations from Europe, Japan, and the United States, who argue “security is a function of how a product is made, used, and maintained, not by whom or where it is made.”[6] This line of reasoning was also advanced in an Australian court of law, wherein Microsoft asserted, “[the Australian government’s] focus on storing electronic health records within Australia’s borders ‘could have a detrimental effect’ on security.” In spite of Microsoft’s argument, the Australian court ultimately decided to uphold the legislation.

As such, Russia’s decision to prevent LinkedIn from operating in its country serves as a signal of the current tension underlying the data nationalism movement. On the one hand, Russia claims that it is acting in the best interest of its citizens by preventing large multi-national corporations from improperly storing and using the personal data of its citizens. However, one could contend that Russia’s decision to localize the data undermines its protection efforts. Moreover, a number of Internet experts believe that data localization ultimately serves to suppress free speech because the concentration of data actually facilitates new and more efficient forms of surveillance that allow for greater government interference.

Governments have the responsibility to improve national security in the face of unwarranted foreign intrusions. What counts as an appropriate protective measure ultimately rests on a sliding scale that the general public has to monitor carefully. There are a number of corrective tools readily available to governments, including contracts requiring companies to observe strict privacy protocols, audits of foreign suppliers, reviews of foreign suppliers’ local laws for their privacy protections, and reputational sanctions.[7] However, any measure that serves to further fragment and isolate a country from the Internet should be evaluated with a critical eye.

[1] Prior to the enactment of No. 242, Russia had Federal Law No. 97-FZ, which requires individuals and legal entities that are information organizers on the Internet to store all data for at least six months in Russian territory.

The IPTV Transition: How will regulators and customers react to the impending changes?

Internet Protocol Television (IPTV) is a relatively new technology that promises to revolutionize the television marketplace. Services like AT&T U-verse and Verizon FIOS promise to deliver television and Internet services to customers with increased access to Video-on-Demand (VOD) and increased Internet bandwidth. Unlike the recent digital TV transition, which was initiated by the FCC and subject to several administrative orders and long-term delays,[i] the transition to IPTV will be triggered by market factors. Most notably, customers’ preferences for VOD and the limitations of traditional cable will lead many to consider IPTV.

IPTV is, simply put, the process of “streaming” traditional TV channels and more modern VOD services over a single Internet connection instead of a co-axial cable connection.[ii] The main advantage of IPTV is the cost savings that come from sending customers only the channels and programs they currently want.[iii] Traditional cable TV broadcasts all channels to each customer at all times, which uses up much of the bandwidth available over the technically inferior co-axial delivery infrastructure.[iv] IPTV delivers only the content customers currently want to their TV sets, which saves bandwidth by not broadcasting hundreds of channels to millions of customers all at the same time. Additionally, with VOD and other “time shifting” services becoming more popular, IPTV offers customers the ability to dedicate more bandwidth to those specialty services.[v] The last advantage of IPTV lies with the companies that currently offer or will soon offer IPTV services – better Internet access. AT&T U-verse (which uses IPTV) and Verizon FIOS (which uses a hybrid system) deliver higher speeds and more bandwidth over fiber-optic connections than anything available through traditional copper wiring. In fact, companies have probably maxed out the bandwidth available from cable wiring and will need to transition to fiber to meet customer demand for more bandwidth.[vi] These companies are able to pair their better Internet services with IPTV, which, among other things, saves bandwidth (allowing more to be dedicated to general Internet services) and provides a better platform for VOD services.
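
To make the delivery model concrete: live IPTV channels are typically carried over IP multicast, so a set-top box “tunes” a channel by joining a multicast group and stops consuming bandwidth the moment it leaves. Below is a minimal Python sketch of that join/leave mechanic; the group address and port are hypothetical placeholders, since real deployments assign them per channel in the provider’s middleware.

```python
import socket
import struct

# Hypothetical multicast address and port for one IPTV channel; real
# deployments assign these per channel in the provider's middleware.
CHANNEL_GROUP = "239.1.1.1"
CHANNEL_PORT = 5004

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", CHANNEL_PORT))

# "Tuning" the channel: joining the group tells upstream routers (via IGMP)
# to start forwarding this stream toward us; leaving the group stops it.
mreq = struct.pack("4sl", socket.inet_aton(CHANNEL_GROUP), socket.INADDR_ANY)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

packet, _ = sock.recvfrom(2048)  # one datagram of the channel's transport stream
print(f"received {len(packet)} bytes of channel data")

# Changing channels: drop this group and join the next one.
sock.setsockopt(socket.IPPROTO_IP, socket.IP_DROP_MEMBERSHIP, mreq)
```

The economics follow from the mechanic: upstream routers forward a stream only while at least one subscriber has joined it, which is why IPTV spends last-mile bandwidth only on content someone is actually watching.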

Net Neutrality and Privacy

IPTV is not going away anytime soon, and it will continue to grow as traditional cable becomes more and more archaic. There are, however, several legal implications of the expansion of IPTV. Firstly, Net Neutrality – recently codified in the FCC’s 2015 Open Internet Order[vii] – bans “throttling,” which is defined as “impairing or degrading lawful Internet traffic on the basis of content, application, service, or use of non-harmful device.”[viii] The problem is that IPTV’s network structure essentially creates a fast lane for VOD content, because VOD streams “must use dedicated bandwidth per subscriber.”[ix] Depending on a court’s interpretation of “throttling,” IPTV’s VOD services could violate Net Neutrality – giving rise to damage awards for private plaintiffs under sections 206 and 207 of the Communications Act.[x]

Secondly, there are serious privacy implications to the proliferation of IPTV services. Because only some programming is sent to TV sets, IPTV gives providers the ability to use targeted advertising in ways simply not possible with traditional cable television services.[xi] Instead of advertising on a channel-by-channel basis, IPTV would allow neighborhood-by-neighborhood advertising. Additionally, tracking customers’ viewing habits is much easier – and much more similar to the Internet tracking that firms like Google use to provide targeted advertising services – than it was on traditional broadcast cable TV.

Regulators are currently unprepared for the challenges that IPTV will bring. The Net Neutrality Order – although it is unclear whether it will stand under the next Administration – was not written to deal effectively with IPTV. VOD services are extremely popular, and any administrative order that gives inferior competitors the ability to sue over VOD services is an order that will have to change, either through Congressional action or an FCC rulemaking procedure. Moreover, Internet privacy advocates and regulators (at both the FTC and the FCC) will have to start treating TV services like the Internet and be willing to nullify the adhesion contracts that will certainly follow IPTV services and their intrusive data collection.

[i] Carriage of the Transmissions of Digital Television Broadcast Stations, CS Docket No. 98-120, Notice of Proposed Rulemaking, 13 FCC Rcd 15092, 15093, paras. 1-2 (1998) (laying out the first plan for digital transition); Carriage of Digital Television Broadcast Signals: Amendment to Part 76 of the Commission’s Rules, CS Docket No. 98-120, Declaratory Order, 23 FCC Rcd 14254, para. 1 (2008) (stating February 2007 as the final date of implementation for digital transition); Amendment of Parts 73 and 74 of the Commission’s Rules to Establish Rules for Digital Low Power Television and Television Translator Stations, MB Docket 03-185, Third Report and Order, 30 FCC Rcd 14927, paras. 1-2 (2015) (dealing with final issues relating to digital transition).

[ii] It is important to note that IPTV does not technically “stream” TV services the same way Internet TV (like YouTube or Hulu) does. For a better explanation see IPTV and Internet Video: Expanding the Reach of Television Broadcasting, Simpson and Greenfield, National Association of Broadcasters (2007).

On November 4th, Google released Google Home, a smart speaker and entertainment hub and a competitor to Amazon’s Echo. Users can ask Google Home questions typically asked of a search engine (e.g., how long to boil an egg) or request actions typically performed on a smartphone (e.g., setting a timer or calendar reminder).

The more a user engages with one of these smart hubs, the more user data the device and its servers accumulate. Because of algorithms incorporating sophisticated machine learning and artificial intelligence, a smart-hub user who divulges more data likely receives more personalized, accurate, and relevant answers. In this tradeoff of data for personalization, users increasingly divulge significant amounts of information, including sensitive and identifying information.

What are the expected privacy concerns?

In short, the same concerns that exist for mobile devices are likely to exist for these devices (e.g., the threats of illegal wiretapping, hacking, and identity theft).

Who gets legal access to user data?

The first category of people who get access to user data is the users themselves. These devices respond to any voice that says their “wake word.” For Google Home, the wake word is “OK Google.” For Echo, it’s “Alexa.” Although some folks are concerned that the devices are “always listening,” and certainly the devices have such capability (as do our smartphones, which also have microphones and Internet connections), at this point these smart hubs only begin recording data a fraction of a second before processing the voiced wake word. Users can review—as well as choose to delete—their recorded search history, but are warned that doing so could “degrade [their] experience.”
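
A minimal sketch of that pre-roll behavior, assuming a stubbed-out wake-word detector (the real on-device models are proprietary): audio older than a fraction of a second is continuously discarded, and nothing is retained or transmitted until the wake word fires.

```python
import collections

CHUNK_SECONDS = 0.1      # assume audio arrives in 100 ms chunks
PRE_ROLL_CHUNKS = 5      # keep roughly half a second of audio before the wake word

def detected_wake_word(chunk) -> bool:
    # Placeholder: a real detector runs a small on-device neural model
    # against each chunk ("OK Google" / "Alexa").
    return False

def listen(microphone_chunks):
    # Rolling pre-roll buffer: anything older than ~0.5 s is overwritten
    # and forgotten, so audio before the wake word is never retained.
    pre_roll = collections.deque(maxlen=PRE_ROLL_CHUNKS)
    for chunk in microphone_chunks:
        if detected_wake_word(chunk):
            # Recording effectively "starts" here: the short pre-roll plus
            # the chunks that follow are what get sent for processing.
            return list(pre_roll) + [chunk]  # plus subsequent query audio
        pre_roll.append(chunk)               # otherwise, overwrite and forget
```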

The Service Providers and their third parties are the second category that enjoys access to user data. Certainly, tech giants benefit from monetizing user data to the extent they believe users will tolerate it. For example, Amazon will likely encourage Echo users to buy and sell goods in the Amazon marketplace just by chatting with their anthropomorphic “friend,” Alexa. Google will likely utilize Google Home as a platform to serve personalized ads or as a research tool to better personalize them.

Of course, these companies are wary to overstep users’ boundaries of comfort and trust. For example, Google promises in its Privacy Policy that “When showing you tailored ads, we may associate an identifier from cookies or similar technologies with topics such as ‘Cooking and Recipes’ or ‘Air Travel,’ but not with sensitive categories” (i.e., “personal information relating to confidential medical facts, racial or ethnic origins, political or religious beliefs or sexuality”).

These companies carefully safeguard user data from outsiders. After Edward Snowden’s disclosures in 2013, Apple, Google, and other tech giants rapidly advanced their encryption technology to protect from leaks and hacks.

These companies don’t want the perception that they are peddling sensitive data. For example, because Samsung sends voice data received by its Smart TV to a third party for speech-to-text conversion, the company cautioned users to be wary of what they say in front of the TV. The warning raised a media firestorm of indignation and paranoia in February 2015, prompting Samsung to clarify that it does “not retain voice data or sell it to third parties.”

And because user trust is essential to their continued success, it’s unlikely these tech giants will peddle sensitive, disaggregated data any time soon. After all, third parties may not take the same precautions or spare the same expense to protect and encrypt sensitive user information. So Google vouches in its current privacy policy to share personally identifiable information only if the user consents, or if the sharing is for internal processing or legitimate legal reasons. But it says it may aggregate and share non-personally identifiable information with third parties.
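
To illustrate the distinction the policy draws, here is a small sketch; it is not Google’s actual pipeline, just an assumed toy example of how collapsing user-level records into population-level counts removes the per-user linkage that makes data personally identifiable.

```python
from collections import Counter

# Hypothetical user-level records: these identify individuals and, under the
# quoted policy, would require consent (or a legal basis) to share.
user_events = [
    {"user_id": "u1", "topic": "Cooking and Recipes"},
    {"user_id": "u2", "topic": "Air Travel"},
    {"user_id": "u3", "topic": "Cooking and Recipes"},
]

def aggregate(events):
    """Collapse identifying records into topic-level counts.

    The per-user linkage is dropped entirely; only population-level
    statistics survive, which is the sense in which aggregated data
    is 'non-personally identifiable.'
    """
    return Counter(event["topic"] for event in events)

print(aggregate(user_events))
# Counter({'Cooking and Recipes': 2, 'Air Travel': 1})
```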

The third and final category is the government. In 2014, FBI director James Comey urged citizens to consider the possibility of companies coding a “backdoor” into their encrypted products, thus facilitating legal government wiretapping and seizure of stored data. In a joint statement released in 2015, security experts declared their opposition to enabling government access to encrypted communication.

Tech leaders also argue that preventing backdoor access will not cause our government to “go dark” (i.e., fail to obtain necessary surveillance information), a concern expressed by Comey. Apparently, in 1993, the NSA invented the Clipper chip, an encryption telecommunications device with a built-in backdoor. Then, too, tech leaders resisted. Nonetheless, law enforcement officers have increasingly obtained clearance from the legislature and courts to wiretap and obtain information from telecommunications. Not to mention, with a court-approved warrant or subpoena, law enforcement may still be able to obtain phone data through other approaches (e.g., from the actual service provider, or by compelling decryption from the user).

So?

We are well on our way to welcoming artificial intelligence into our homes. If the tech giants have anything to say about it, user data in their clouds and money in their pockets are in our future, but 1984 is not.

Ninth Circuit Panel Throttles FTC Enforcement

The Federal Trade Commission (FTC) recently filed a petition to appeal a Ninth Circuit decision that exempts telecom giant AT&T from enforcement action by the agency. The litigation dates back to October 2014, when divisions of the FTC’s Bureau of Consumer Protection brought an action against AT&T for its undisclosed speed throttling of “unlimited” data plan subscribers. AT&T landed a major blow in August 2016 when a Ninth Circuit panel determined that the FTC Act exempts common carriers from regulation, thereby stripping the FTC of enforcement authority over AT&T. The FTC argues that the decision creates an “enforcement gap” for companies classified as common carriers, leaving consumers vulnerable to shady practices with little recourse. The agency also expressed concern over the court’s decision with regard to the FCC’s reclassification of broadband providers as common carriers, which would seem to exempt all Internet Service Providers from the FTC’s consumer protection authority.

First, a quick primer on the underlying common carrier reclassification. Back in February of 2015, the FCC voted to adopt rules that would promote net neutrality. The problem was that the FCC lacked the authority to prevent throttling and paid prioritization (aka net neutrality violations) for all Internet services, because the FCC had previously classified Internet services as an “information service” instead of a “Telecommunications Service.” In order to regulate Internet services to prevent Net Neutrality violations, the FCC reclassified broadband providers – both fixed and mobile – as common carriers of “telecommunications service,” granting the FCC broad authority under Title II of the Communications Act of 1934. This decision was largely heralded as a win for proponents of an open Internet, but the latest chapter in the feud between the FTC and AT&T exposes one potential issue created by the controversial decision.

Section 5 of the FTC Act (15 U.S. Code § 45) expressly prohibits unfair business practices and empowers the FTC to protect consumer interests. Using this authority, the FTC has gone after companies for fraudulent billing schemes, false advertising, and countless other abusive practices. AT&T sells a number of tiered mobile broadband data plans, mostly distinguished by limits on data usage and throughput, or speed. At the time of the suit, AT&T no longer offered “unlimited” data plans to new customers, but millions of existing plans were grandfathered into the new structure. Those plans were still purported to be unlimited, but while the data usage had no cap, AT&T surreptitiously throttled subscriber speed after a certain threshold had been reached. The FTC alleged that AT&T throttled speeds by up to 90 percent, affecting over 3.5 million subscribers without proper disclosure. In its initial suit, the agency sought a permanent injunction and financial damages in the form of refunds for affected customers.

A panel of three judges on the Ninth Circuit looked to the agency’s charter and agreed with AT&T’s argument that the FTC Act expressly exempts common carriers from regulation. Under the act, the FTC may “prevent persons, partnerships, or corporations, except . . . common carriers subject to the Acts to regulate commerce . . . from using . . . unfair or deceptive acts or practices in or affecting commerce.” 15 U.S.C. § 45(a)(2). This exemption exists to prevent companies from being subject to double regulation. However, when the litigation began, AT&T was classified as a common carrier of phone service, not broadband service. The FTC argued that the exemption applied to common carrier activity, rather than common carrier status, meaning it should be able to regulate the mobile broadband service because AT&T was only a common carrier as to its phone service. Using the language of the FTC Act and its legislative history, the court concluded that the exception is status based, not activity based, thereby stripping the FTC of power over AT&T.

In its appeal, the FTC argues that sticking with a status-based interpretation of the common carrier exception leaves a huge loophole for circumventing enforcement. All companies labeled as common carriers in any respect will now be exempt, and the agency fears a potential arms race for common carrier acquisitions. The FTC cites AOL and Yahoo as winners of the decision: they are not common carriers, but through their acquisition by Verizon they might now be able to skirt FTC regulation. The FTC is also the main agency charged with protecting consumer data privacy, but the decision could potentially bar the agency from preventing deceptive privacy practices by companies like Google that own common carrier subsidiaries. Though the FCC will still have authority due to reclassification, the FTC notes that the FCC lacks the ability to redress consumer losses.

Under the current precedent, the FTC loses jurisdiction over any ISP, which is especially problematic considering the FCC’s limited role as a consumer protection agency. If the appeal is granted, the FTC will try its case before an 11-judge panel. The FTC’s primary arguments revolve around statutory interpretation and conflicting precedents in the Ninth Circuit, but it also offers strong policy arguments for consumer protection.

If You’re Going to Delete All My Facebook Posts, Then You Might As Well Do It Right

Judicial benchslap stories are juicy legal fodder, and this story was no different. Recently, the legal community eagerly gossiped about a federal judge who lashed out at a well-known New York law firm. The offense? Judge Nicholas Garaufis of the Eastern District of New York was infuriated that the firm had sent a mere junior associate instead of a partner to a hearing on two cases that “implicate[] international terrorism and the murder of innocent people in Israel and other places.” While the judge has since apologized for his remarks and the salacious part of the story is largely over, the parties continue to litigate.[1]

The underlying claim in the pair of lawsuits is that Facebook facilitates terrorism by providing a platform for militant groups to incite attacks. This brings up the question of what role social media networks or even online service providers in general should play in policing users for potentially criminal or violence-inducing conduct. In these cases, Facebook was accused of not doing enough, while in other instances, a company can do too much.

Accordingly, what kinds of responsibilities do social media companies have?

From a purely legal standpoint, the answer is probably none. Section 230 of the Communications Decency Act of 1996 says that “[n]o provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” In other words, social media platforms are not liable for the content that their users publish. For the most part, this is a good thing. It allows innovation and promotes free speech. If websites featuring user-generated content were liable for their users’ posts, many would likely self-censor to protect themselves from potential lawsuits. In addition, the task of censoring everything would be nearly impossible given the amount of content uploaded online. For example, more than 100 hours of video were uploaded to YouTube every minute in 2013.

Of course, this does not mean that social media companies give users a free rein. Promoting terrorism or committing crimes are (more likely than not) against companies’ terms of use, and so offending posts are taken down and users removed. This past year, Twitter suspended hundreds of thousands of accounts for posting terrorist content. This demonstrates that, while there may be no legal obligation to monitor users’ content, companies have nonetheless implemented policies and practices that mirror what they feel are their moral obligations.

However, even though companies have proactively undertaken some policing responsibility, the question of how closely these companies should be working with law enforcement remains open. In other words, they have accepted moral duties, so how should they carry them out well?

Programs like YouTube’s flagging system require that either employees or users themselves monitor posted content. The government intervenes only when content is reported to it. This one-step-removed approach helps safeguard the right to privacy. It seems a bit paradoxical to say that these practices protect user privacy even when someone is still monitoring content, but the Fourth Amendment only protects against unlawful searches by the government, not searches by private entities to whom you have freely given your information or by people whom you allow to view your information.

Still, it is problematic to allow private companies to become the ones that dictate what is and is not acceptable online behavior. Leaving companies the choice of what gets reported leads to uneven governance across the Internet and chips away at the government’s ability to enforce its own laws (especially considering the challenges of Going Dark). Companies may also be overlooking coded language that seems innocuous, allowing criminal activity to continue.

Law enforcement and private companies clearly cannot act completely independently of one another; nor can they work too closely together. It is quite a difficult balance to strike.

Perhaps the best solution is actually one where neither monitors. Instead, something watches for them. Using artificial intelligence, companies could in the future operate programs that look at all users’ activity and then decide whether there is reason for law enforcement to take a closer look at a particular user. The criteria these programs search for could be set jointly by law enforcement and social media companies, but it is important that the companies be the ones to control the programs in order to avoid Fourth Amendment concerns. Otherwise, potential litigants would have a strong argument that the government is conducting unlawful searches and seizures—social media companies would effectively be acting as agents of the government when administering these programs. Interestingly enough, this also renders moot the current struggle to decide exactly how much policing companies should be doing on behalf of law enforcement. Social media platforms would still escape legal liability as they do now, but objectivity in the A.I.-aided process would allow them to approach enforcement in a more consistent manner.
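
A minimal sketch of such a company-run screening program, using a toy text classifier: the training examples and threshold below are invented placeholders, and a production system would use far more sophisticated models with human review before anything reaches law enforcement.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Toy labeled examples (1 = flag for review, 0 = benign). Real criteria
# would be set jointly by the company and law enforcement.
train_posts = [
    "great recipe, thanks for sharing",
    "meet for coffee tomorrow?",
    "instructions for building the device are attached",
    "send funds to support the operation",
]
labels = [0, 0, 1, 1]

vectorizer = TfidfVectorizer()
model = LogisticRegression().fit(vectorizer.fit_transform(train_posts), labels)

REVIEW_THRESHOLD = 0.8  # placeholder; tuning it trades off over- vs. under-flagging

def flag_for_review(new_posts):
    """Return only the posts whose score crosses the threshold.

    Crucially, the company runs the model: nothing below the threshold
    ever leaves the platform, which keeps the government one step removed.
    """
    scores = model.predict_proba(vectorizer.transform(new_posts))[:, 1]
    return [(post, score) for post, score in zip(new_posts, scores)
            if score >= REVIEW_THRESHOLD]
```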

Police Body Cameras and Public Records Requests: Another Privacy Frontier

Police departments around the country have been rolling out body-worn camera (“BWC”) programs among other efforts to address accountability and transparency concerns in police conduct. Police departments, legislatures, and community groups alike believe that BWC programs will provide benefits in many forms, including lowering the incidence of police violence, reducing civilian complaints against officers, and streamlining internal police investigations. Public opinion polls overwhelmingly support their use, and news outlets have been eager to report early successes.

These programs are not without their challenges, however, as implementing BWC programs has sparked a multitude of legal and policy debates. Chief among these is what happens to the video footage after it has been recorded: who gets to see it and whether it will be redacted are just some of the issues with which many departments are now grappling.

The question of whether (and in what form) the public will have access to BWC footage implicates the central goals of transparency and accountability for which these programs were designed. Denying public requests for BWC footage raises obvious transparency concerns, but the privacy of victims and other individuals appearing in the footage must also be respected. As state legislatures and law enforcement agencies outline policies on this point, they should keep in mind the ultimate purpose of BWC programs. A poorly crafted public access policy could leave BWCs producing less accountability and transparency than intended.

The Problem of Public Access

Whether private citizens can access BWC footage via Freedom of Information Act (FOIA) records requests is a question the states and police departments must answer as BWC programs are rolled out. Requests for public records are governed by each state’s public disclosure or FOIA laws, but most of these laws were written well before BWCs (and their attendant privacy implications) existed. Should footage of a medical emergency, or inside a private residence, or containing evidence in a current investigation, or of a sexual abuse victim be publicly available to any private citizen who files a records request?

Each change to BWC policy affecting “when” and “where” an officer is required to activate his or her BWC inevitably requires a corresponding adjustment to the public disclosure policy. For example, if police were prohibited from activating BWCs inside a private residence, the privacy concerns arising from public disclosure of BWC footage would be substantially altered. Many states have responded to this problem by proposing or passing legislation that would exempt or limit at least some BWC footage from public disclosure coverage.

A Patchwork of Policies

At least 12 states have passed legislation that restricts public access to BWC footage. Some laws now restrict any public disclosure of footage unless it is used as evidence in a criminal investigation.[1] Others have exempted any BWC footage that was recorded on private property.[2] Some have simply stated that disclosure of footage will proceed under existing public records/FOIA law.[3] In some cases, police departments are developing policies that might not even comport with a state statute.[4] Many more will decide on similar issues in the next year, including both California and Washington.

In 2014 in Seattle, a freedom of information advocate requested all of the BWC footage recorded in a pilot BWC program. Logistically unable to handle the request (redacting identities to comply with department policy and processing the video is a slow and expensive task), the Seattle Police Department hosted a “hackathon” to develop new technology better able to quickly redact identities and release footage to the public. The police department ultimately posted the redacted videos to a public YouTube channel. Many believe that cost-effective redaction of videos prior to their public disclosure is the best way to allay privacy concerns, including those addressed by the state legislation discussed above.

Is Redaction the Answer?

Redaction of individuals’ identifying information in BWC footage appears to be a win-win solution that can appease freedom of information advocates while limiting the concerns of civil liberties and privacy advocates. In its BWC policy recommendations, the American Civil Liberties Union recommends redaction “when feasible.” It suggests that redacting identifying information and obtaining the consent of those in the video is the best way to approach public release of BWC footage.

In response, police departments, especially those with large pilot programs, have argued that redaction is rarely feasible because complying with records requests while redacting personal information is overly costly and burdensome. Various stories revealing the high cost of obtaining footage have resulted from this technological reality, and critics of public disclosure of BWC footage no doubt count this among reasons to exempt the footage from FOIA requests.

Luckily, the impediments that video and redaction processing create are a bottleneck and not a roadblock. The increase in BWC programs has dramatically increased the demand for redaction software. Accordingly, substantial developments in redaction technology, including automated technology that can quickly recognize and blur faces and other identifying information, are rapidly coming to market. Improved redaction software will reduce the technological, time, and cost burdens that FOIA requests for BWC footage can create. As protecting privacy during the release of footage to the public becomes more efficient, all interested parties’ concerns can be addressed: police officers can ensure that non-essential footage is redacted;[5] victims, informants, and other individuals can protect their identities; and the public can put faith in the transparency and accountability that the taxpayer-funded BWC program has brought to their local police force.
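
For a sense of how such automated redaction works, here is a short sketch that blurs faces frame by frame using OpenCV’s stock face detector. The input filename is a hypothetical placeholder, and production tools use far more robust detectors plus frame-to-frame tracking so that faces are not missed between detections.

```python
import cv2

# Stock Haar-cascade face detector that ships with OpenCV; production
# redaction tools use more robust detectors plus frame-to-frame tracking.
face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def redact_frame(frame):
    """Blur every detected face in a single video frame."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in face_detector.detectMultiScale(gray, 1.1, 5):
        face = frame[y:y + h, x:x + w]
        frame[y:y + h, x:x + w] = cv2.GaussianBlur(face, (51, 51), 0)
    return frame

video = cv2.VideoCapture("bwc_footage.mp4")   # hypothetical input file
writer = None
while True:
    ok, frame = video.read()
    if not ok:
        break
    frame = redact_frame(frame)
    if writer is None:
        height, width = frame.shape[:2]
        writer = cv2.VideoWriter("redacted.mp4",
                                 cv2.VideoWriter_fourcc(*"mp4v"),
                                 30.0, (width, height))
    writer.write(frame)

video.release()
if writer is not None:
    writer.release()
```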

A Way Forward

As BWC programs continue to roll out across the country, with more cameras and more footage being recorded, the problem of FOIA requests overtaxing police departments’ resources will only worsen. But public access to BWC footage is a critical component to the very success of the programs: if footage that the public knows exists can easily be exempted from public view at police discretion, the fundamental benefits of trust, objectivity, transparency, and accountability are threatened. The specific issue of whether the public has a right to access any recorded footage involves significant privacy concerns, but, as the ACLU recognizes, these concerns can be addressed by redacting personal and identifying information. As state legislatures and departments around the country draft legislation and BWC policy, they should take note that technological advancements in the coming years will give departments the ability to efficiently address privacy concerns while complying with FOIA requests that maximize transparency under the new BWC programs.