This report from the Berkman Center's Berklett Cybersecurity Project offers a new perspective on the "going dark" debate, drawn from the discussion, debate, and analyses of an unprecedentedly diverse group of security and policy experts from academia, civil society, and the U.S. intelligence community.
The Berklett group took up questions of surveillance and encryption at a moment when some companies are encrypting their services by default, making their customers' messages accessible only to the customers themselves. The report outlines how market forces and commercial interests, together with the increasing prevalence of networked sensors in machines and appliances, point to a future with more opportunities for surveillance, not fewer.

Jonathan L. Zittrain, The Future of the Internet -- And How to Stop It (Yale Univ. Press & Penguin UK 2008).

Categories:

Technology & Law

Sub-Categories:

Communications Law, Cyberlaw, Networked Society

Type: Book

Abstract

This extraordinary book explains the engine that has catapulted the Internet from backwater to ubiquity—and reveals that it is sputtering precisely because of its runaway success. With the unwitting help of its users, the generative Internet is on a path to a lockdown, ending its cycle of innovation—and facilitating unsettling new kinds of control.
iPods, iPhones, Xboxes, and TiVos represent the first wave of Internet-centered products that can’t be easily modified by anyone except their vendors or selected partners. These “tethered appliances” have already been used in remarkable but little-known ways: car GPS systems have been reconfigured at the demand of law enforcement to eavesdrop on the occupants at all times, and digital video recorders have been ordered to self-destruct thanks to a lawsuit against the manufacturer thousands of miles away. New Web 2.0 platforms like Google mash-ups and Facebook are rightly touted—but their applications can be similarly monitored and eliminated from a central source. As tethered appliances and applications eclipse the PC, the very nature of the Internet—its “generativity,” or innovative character—is at risk.
The Internet’s current trajectory is one of lost opportunity. Its salvation, Zittrain argues, lies in the hands of its millions of users. Drawing on generative technologies like Wikipedia that have so far survived their own successes, this book shows how to develop new technologies and social structures that allow users to work creatively and collaboratively, participate in solutions, and become true “netizens.”

Actuarial risk assessments might be unduly perceived as a neutral way to counteract implicit bias and increase the fairness of decisions made at almost every juncture of the criminal justice system, from pretrial release to sentencing, parole and probation. In recent times these assessments have come under increased scrutiny, as critics claim that the statistical techniques underlying them might reproduce existing patterns of discrimination and historical biases that are reflected in the data. Much of this debate is centered around competing notions of fairness and predictive accuracy, resting on the contested use of variables that act as "proxies" for characteristics legally protected against discrimination, such as race and gender. We argue that a core ethical debate surrounding the use of regression in risk assessments is not simply one of bias or accuracy. Rather, it's one of purpose. If machine learning is operationalized merely in the service of predicting individual future crime, then it becomes difficult to break cycles of criminalization that are driven by the iatrogenic effects of the criminal justice system itself. We posit that machine learning should not be used for prediction, but rather to surface covariates that are fed into a causal model for understanding the social, structural and psychological drivers of crime. We propose an alternative application of machine learning and causal inference away from predicting risk scores to risk mitigation.

The architecture and offerings of the Internet developed without much steering by governments, much less operations by militaries. That made talk of “cyberwar” exaggerated, except in very limited instances. Today that is no longer true: States and their militaries see the value not only of controlling networks for surveillance or to deny access to adversaries, but also of subtle propaganda campaigns launched through a small number of wildly popular worldwide platforms such as Facebook and Twitter. This form of hybrid conflict – launched by states without state insignia, on privately built and publicly used services – offers a genuine challenge to those who steward the network and the private companies whose platforms are targeted. While interventions by one state may be tempered by defense by another state, there remain novel problems to solve when what users see and learn online is framed as organic and user-generated but in fact is not.

June 2014 saw a media uproar about Facebook's emotional contagion study, published in the Proceedings of the National Academy of Sciences. In conjunction with researchers at Cornell, Facebook designed an experiment that altered the Facebook News Feed to explore whether emotions can spread through Facebook. These feeds, the primary activity and content list on Facebook, are populated according to a proprietary algorithm. In the experiment, the algorithms for a random subset of users were manipulated to display either proportionately more negative or proportionately more positive emotional content; a control group saw content according to the current algorithm.
This study met vocal opposition not solely for manipulating the moods of Facebook users, but also because users neither volunteered nor opted in to such research, and were not informed of their participation in the study. This study is a motivating example of the moral, legal, and technical questions raised when algorithms permeate society.
This case explains the parameters of the experiment, the reaction in the media, and the legal issues introduced (including Federal Trade Commission standards for commercial practices and the US Department of Health and Human Services Policy for the Protection of Human Research Subjects, informally known as the "Common Rule"). To encourage examination of the issues presented by algorithms in a number of different scenarios, the case uses six hypothetical situations that prompt participants to ponder the use of algorithms in different settings, including print media, charity, and business, among others. These hypothetical scenarios present varying aspects of the expanding role algorithms play, and are designed to elicit more meaningful discussion and to push participants to address the complicated issues surrounding algorithms through nuanced arguments. After briefly analyzing and debating all six scenarios, participants delve deeper into one hypothetical, using their position on a hypothetical issue to inform their stance on the Facebook Emotional Contagion study. The exercise concludes with a class-wide debate of the ethics surrounding the Facebook Emotional Contagion study.

This publication is the second annual report of the Internet Monitor project at the Berkman Center for Internet & Society at Harvard University. As with the inaugural report, this year’s edition is a collaborative effort of the extended Berkman community. Internet Monitor 2014: Reflections on the Digital World includes nearly three dozen contributions from friends and colleagues around the world that highlight and discuss some of the most compelling events and trends in the digitally networked environment over the past year.
The result, intended for a general interest audience, brings together reflection and analysis on a broad range of issues and regions — from an examination of Europe’s “right to be forgotten” to a review of the current state of mobile security to an exploration of a new wave of movements attempting to counter hate speech online — and offers it up for debate and discussion. Our goal remains not to provide a definitive assessment of the “state of the Internet” but rather to provide a rich compendium of commentary on the year’s developments with respect to the online space.
Last year’s report examined the dynamics of Internet controls and online activity through the actions of government, corporations, and civil society. We focus this year on the interplay between technological platforms and policy; growing tensions between protecting personal privacy and using big data for social good; the implications of digital communications tools for public discourse and collective action; and current debates around the future of Internet governance.
The report reflects the diversity of ideas and input the Internet Monitor project seeks to invite. Some of the contributions are descriptive; others prescriptive. Some contain purely factual observations; others offer personal opinion. In addition to those in traditional essay format, contributions this year include a speculative fiction story exploring what our increasingly data-driven world might bring, a selection of “visual thinking” illustrations that accompany a number of essays, a “Year in Review” timeline that highlights many of the year’s most fascinating Internet-related news stories (an interactive version of which is available at netmonitor.org), and a slightly tongue-in-cheek “By the Numbers” section that offers a look at the year’s important digital statistics. We believe that each contribution offers insights, and hope they provoke further reflection, conversation, and debate in both offline and online settings around the globe.

In these edited remarks originally given at ROFLCon in May 2012, Jonathan Zittrain muses on the nature of memes and their relationships to their creators as well as to broader culture and to politics. The distributed environment of the internet allows memes to morph and become distanced from their original intentions. As meme culture becomes more and more assimilated into popular culture, subcultures like those of Reddit or 4chan have begun to re-conceptualize their own role from just meme propagators to cultural producers. Memes can gain commercial appeal, much to the chagrin of their creators. More strangely, memes can gain political traction and affiliation, like American conservative commentator Bill O’Reilly’s ‘You can’t explain that’ or Anonymous’ ‘Low Orbit Ion Cannon’. Can meme culture survive becoming not just the property of geeks and nerds, but part of the commercial and political world?

In this article the author discusses aspects of smart technology that might have disarmed the insurgent group in Iraq, the Islamic State (ISIS), without bombs or bullets. Topics include remotely disabling weaponry with a kill switch similar to the one on the iPhone, which has reduced iPhone thefts; examples of fail-safe mechanisms that could be built using basic signature-and-authentication technologies; and the implications of a failed kill-switch strategy.

It has become increasingly common for a reader to follow a URL cited in a court opinion or a law review article, only to be met with an error message because the resource has been moved from its original online address. This form of reference rot, commonly referred to as ‘linkrot’, has arisen from the disconnect between the transience of online materials and the permanence of legal citation, and will only become more prevalent as scholarly materials move online. The present paper, written by Jonathan Zittrain, Kendra Albert and Lawrence Lessig, explores the pervasiveness of linkrot in academic and legal citations, finding that more than 70% of the URLs within the Harvard Law Review and other journals, and 50% of the URLs within United States Supreme Court opinions, do not link to the originally cited information. In light of these results, a solution is proposed for authors and editors of new scholarship that involves libraries undertaking the distributed, long-term preservation of link contents.
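To make the measurement concrete, here is a minimal, hypothetical sketch of how a sample of citation URLs might be classified as rotted. The decision rules (a failing status code, or a silent redirect to a different host, which often signals a generic landing page rather than the cited resource) are illustrative assumptions, not the methodology of the paper.

```python
# Hypothetical linkrot classifier; rules are illustrative assumptions.
from urllib.parse import urlparse

def is_rotted(original_url: str, status_code: int, final_url: str) -> bool:
    """Flag a citation link as rotted if the request failed outright,
    or if it was redirected to a different host than the one cited."""
    if status_code >= 400:
        return True
    orig_host = urlparse(original_url).netloc.lower()
    final_host = urlparse(final_url).netloc.lower()
    return orig_host != final_host

def rot_rate(results) -> float:
    """Fraction of sampled citations whose links appear rotted.
    `results` is a list of (original_url, status_code, final_url)."""
    if not results:
        return 0.0
    return sum(is_rotted(*r) for r in results) / len(results)

# Hypothetical sample of fetch results for three cited URLs.
sample = [
    ("http://example.com/brief.pdf", 404, "http://example.com/brief.pdf"),
    ("http://example.com/opinion", 200, "http://example.com/opinion"),
    ("http://agency.gov/report", 200, "http://parkingpage.net/"),
]
print(rot_rate(sample))  # 2 of 3 links look rotted
```

A real study would also need to detect "content drift," where a URL still resolves but no longer serves the originally cited information; that cannot be inferred from status codes alone.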

What is the Web? What makes it work? And is it dying? This paper is drawn from a talk delivered by Prof. Zittrain to the Royal Society Discussion Meeting 'Web science: a new frontier' in September 2010. It covers key questions about the way the Web works, and how an understanding of its past can help those theorizing about the future. The original Web allowed users to display and send information from their individual computers, and organized the resources of the Internet with uniform resource locators. In the 20 years since then, the Web has evolved, and new challenges have emerged. These challenges require a return to the spirit of the early Web, exploiting the power of the Web's users and its distributed nature to overcome the commercial and geopolitical forces at play. The future of the Web rests in projects that preserve its spirit, and in the Web science that helps make them possible.

This briefing document was developed as part of a March 30, 2012 workshop entitled “Public Networks for Public Safety: A Workshop on the Present and Future of Mesh Networking,” hosted by the Berkman Center for Internet & Society at Harvard University. The event provided a starting point for conversation about whether mesh networks could be adopted within consumer technologies to enhance public safety communications and empower and connect the public while simultaneously improving public safety. Participants in this initial convening included members of government agencies, academia, the telecommunications industry, and civil society organizations; their helpful inputs were integral to the final version of this document.
Building on the dialogue at this gathering, this briefing document seeks to: sketch a broad overview of mobile ad hoc networks (MANETs) and mesh technologies; identify critical technical issues and questions regarding the communications effectiveness of those technologies; explain how public safety communications relate to mesh and offer a synopsis of current regulations affecting those communications; describe a set of basic use cases that emerged from the conference; map out stakeholders at the technical, regulatory, legal, and social levels, and associated interests, points of connection, and potential challenges; catalog select examples and, where possible, highlight potential next steps and areas for short term action; and, summarize key takeaways from the conference, with an emphasis on shared principles or best practices that might inform participants’ diverse efforts to improve communications affordances for the public and the public safety community.
The paper also synthesizes several strains of workshop discussion that probed big picture framing concerns that could inform the present and future of mesh. Specifically, it puts forth two related but distinct models for mesh: mesh in a technical sense and mesh as a metaphor or social layer construct, with a particular emphasis on the need for further conceptual development with regard to “social mesh.” The final section emphasizes key take-aways from the event, highlighting core principles and best practices that might both provide a theoretical underpinning for the future conceptual development of mesh networking technologies and social mesh models, respectively, and inform the real-world development of communications systems that involve either definition of mesh.
The Berkman Center thanks all of the workshop attendees both for their participation during the event and for comments offered during the development of this briefing document. Berkman Center Project Coordinator Alicia Solow-Niederman worked closely with Professor Jonathan Zittrain to plan and execute this event as well as to produce this briefing document. Berkman Center Research Assistants Andrew Crocker and Kevin Tsai provided exceptional research and contributions to this briefing document, and June Casey contributed indispensable support with background research.

In August 2010, selected faculty and researchers at the Berkman Center for Internet & Society at Harvard University launched an independent, exploratory study analyzing ICANN’s decision-making processes and communications with its stakeholders. The study focused on developing a framework and recommendations for understanding and improving ICANN’s accountability and transparency.
The study was undertaken as part of ICANN’s first Accountability and Transparency Review. On November 4, 2010, the Berkman team’s independent report was publicly posted alongside ICANN’s Accountability and Transparency Review Team's Draft Proposed Recommendations for Public Comment.
The Executive Summary below outlines key Findings and Recommendations for Improvement. In addition to this Final Report, associated research materials, resources, and other supplementary inputs gathered in the course of the Berkman team’s work are also available.
1. Problem Statement:
In recent years, ICANN has taken important actions — ranging from significant policy changes to formal reviews — to improve its accountability, transparency, and the quality of its decision making. Despite considerable efforts and acknowledged improvements, ICANN continues to struggle with making decisions that the global Internet community can support.
2. Independent Review of Transparency and Accountability at ICANN:
As part of a larger independent review process, faculty and researchers from the Berkman Center for Internet & Society have taken on the challenge of researching ICANN’s current efforts to improve accountability via mechanisms of transparency, public participation and corporate governance, and of analyzing key problems and issues across these areas.
3. Findings and Assessment:
In-depth research into the three focus areas of this report reveals a highly complex picture with many interacting variables that make fact-finding challenging and also render simple solutions impossible. With this complexity in mind, and referring to the main text of the report for a more granular analysis, the findings and assessments of this report can be condensed as follows.
ICANN’s performance regarding transparency currently falls short of its potential across all areas reviewed and shows deficits along a number of dimensions, calling for clearly defined improvements at the level of policy, information design, and decision making.
ICANN has made significant progress in improving its public participation mechanisms and gets high marks regarding its overall trajectory in this regard. Remaining concerns about the practical impact of public participation on Board decisions are best addressed by increasing visibility and traceability of individual inputs, in order to clarify how these inputs ultimately factor into ICANN decision-making processes.
ICANN’s greatest challenge ahead, despite significant recent efforts, remains corporate and Board governance. Proposed measures identified in this report aim to increase efficiency, transparency and accountability within the current context and in the absence of standard accountability mechanisms.
4. Recommendations:
There is no straightforward way to address the various challenges ICANN faces. The approach underlying this report’s recommendations takes an evolutionary rather than revolutionary perspective. This approach is aimed at continually improving ICANN’s accountability step by step, based on lessons learned, through a series of measured interventions, reinforced by monitoring and subsequent re-evaluation.
For each of the three focal areas covered in this report and for each of the key issues addressed, this report suggests ways in which the status quo can be improved. Some of these recommendations can be implemented quickly, others require policy changes, and still others call for more in-depth research, consultation and deliberation among the involved stakeholders.
This report’s recommendations vary in kind and orientation. They encourage the adoption of best practices where available and experimentation with approaches and tools where feasible. Several of the recommendations are aimed at improving information processing, creation, distribution, and responsiveness at different levels of the organization.

This essay makes the case that Internet-based platforms can be evaluated along two dimensions: the first between generative and sterile, indicating openness to further contribution and development from outsiders, and the second between hierarchy and polyarchy, indicating the ease with which those affected by the platform can escape its umbrella.
The quadrants formed by these two dimensions can help us to understand patterns in the development of new technologies and platforms, and to brainstorm the widest array of solutions to problems arising under them, particularly problems involving enforcement of regulations against bad actors. Using cybersecurity as a principal example, the essay explains why current national security approaches to Internet vulnerabilities are unduly narrow, and how focusing attention on the "fourth quadrant" can broaden the range of options.

Popular imagination holds that the turf of a state’s foreign embassy is a little patch of its homeland. Enter the American Embassy in Beijing and you are in the United States. Indeed, in many contexts – such as resistance to search and seizure by a host country’s authorities – there is an inviolability to diplomatic outposts. These arrangements have been central to diplomacy for decades so that diplomats can perform their work without fear of harassment and coercion.
Complementing a state’s oasis on foreign territory is the ability to get there and back unharried. Diplomats are routinely granted immunity from detention as they travel, and la valise diplomatique – the diplomatic pouch – is a packet that cannot be seized, or in most cases even inspected, as it moves about. Each pouch is a link between a country and its outposts dispersed in alien territory around the world.
Citizens and their digital packets deserve much the same treatment as they traverse the global Internet. Just as states expect to conduct their official business on foreign soil without interference, so citizens should be able to lead digitally mediated – and increasingly distributed – lives without fear that their links to their online selves can be arbitrarily abridged or surveilled by their Internet Service Providers or any other party. Just as the sanctity of the embassy and la valise diplomatique is vital to the practice of international diplomacy, the ability of our personal bits to travel about the net unhindered is central to the lives we increasingly live online.
This frame differs from the usual criteria for debating the merits of net neutrality. It does not focus on what makes for more efficient provision of broadband services to end users. It is unaffected by what sorts of bundling of services by a local ISP might intrigue the ISP’s subscribers. It does not examine the costs and benefits of faraway content providers being asked to bargain for access to that local ISP’s customers. Instead, it recognizes that Internet users establish outposts far and wide, and that a new status quo of distributed selfhood is quickly taking hold.

Ubiquitous computing means network connectivity everywhere, linking devices and systems as small as a drawing pin and as large as a worldwide product distribution chain. What could happen when people are so readily networked? This paper explores issues arising from two possible emerging models of ubiquitous human computing: fungible networked brainpower and collective personal vital sign monitoring.

We assess the impact of spam that touts stocks upon the trading activity of those stocks and sketch how profitable such spamming might be for spammers and how harmful it is to those who heed advice in stock-touting e-mails. We find convincing evidence that stock prices are being manipulated through spam. We suggest that the effectiveness of spammed stock touting calls into question prevailing models of securities regulation that rely principally on the proper labeling of information and disclosure of conflicts of interest as means of protecting consumers, and we propose several regulatory and industry interventions.
Based on a large sample of touted stocks listed on the Pink Sheets quotation system and a large sample of spam emails touting stocks, we find that stocks experience a significantly positive return on days prior to heavy touting via spam. Volume of trading responds positively and significantly to heavy touting. For a stock that is touted at some point during our sample period, the probability of it being the most actively traded stock in our sample jumps from 4% on a day when there is no touting activity to 70% on a day when there is touting activity. Returns in the days following touting are significantly negative. The evidence accords with a hypothesis that spammers "buy low and spam high," purchasing penny stocks with comparatively low liquidity, then touting them - perhaps immediately after an independently occurring upward tick in price, or after having caused the uptick themselves by engaging in preparatory purchasing - in order to increase or maintain trading activity and price enough to unload their positions at a profit. We find that prolific spamming greatly affects the trading volume of a targeted stock, drumming up buyers to prevent the spammer's initial selling from depressing the stock's price. Subsequent selling by the spammer (or others) while this buying pressure subsides results in negative returns following touting. Before brokerage fees, the average investor who buys a stock on the day it is most heavily touted and sells it 2 days after the touting ends will lose close to 5.5%. For those touted stocks with above-average levels of touting, a spammer who buys on the day before unleashing touts and sells on the day his or her touting is the heaviest, on average, will earn 4.29% before transaction costs. The underlying data and interactive charts showing price and volume changes are also made available.
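The return figures above are simple holding-period returns before transaction costs. A brief sketch of the arithmetic, using hypothetical prices chosen only to reproduce the quoted percentages:

```python
# Illustrative arithmetic for the returns quoted above; the prices
# below are hypothetical, chosen only to show how the percentages work.
def pct_return(buy_price: float, sell_price: float) -> float:
    """Simple holding-period return in percent, before brokerage fees."""
    return (sell_price - buy_price) / buy_price * 100

# Buying on the heaviest touting day and selling two days after touting ends:
investor = pct_return(buy_price=1.00, sell_price=0.945)   # roughly -5.5%
# Buying the day before touting begins and selling at peak touting:
spammer = pct_return(buy_price=1.00, sell_price=1.0429)   # roughly +4.29%
print(round(investor, 2), round(spammer, 2))
```

The asymmetry is the point of the "buy low and spam high" hypothesis: the spammer's gain is realized at the expense of later buyers who absorb the post-touting decline.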

This essay responds to Orin S. Kerr, Searches and Seizures in a Digital World, 119 Harv. L. Rev. 531 (2005), http://ssrn.com/abstract=697541.
Professor Kerr has published a thorough and careful article on the application of the Fourth Amendment to searches of computers in private hands - a treatment that has previously escaped the attentions of legal academia. Such a treatment is perhaps so overdue that it has been overtaken by two phenomena: first, the emergence of an overriding concern within the United States about terrorism; and second, changes in the way people engage in and store their most private digital communications and artifacts.
The first phenomenon has foregrounded a challenge by the President to the very notion that certain kinds of searches and seizures may be proscribed or regulated by Congress or the judiciary. The second phenomenon, grounded in the mass public availability of always-on Internet broadband, is leading to the routine entrustment of most private data to the custody of third parties - something orthogonal to a doctrinal framework in which the custodian of matter searched, rather than the person who is the real target of interest of a search, is typically the only one capable of meaningfully asserting Fourth Amendment rights to prevent a search or the use of its fruits.
Together, these phenomena make the application of the Fourth Amendment to the standard searches of home computers - searches that, to be sure, are still conducted regularly by national and local law enforcement - an interesting exercise that is yet overshadowed by greatly increased government hunger for private information of all sorts, both individual and aggregate, and by rapid developments in networked technology that will be used to satisfy that hunger. Perhaps most important, these factors transform Professor Kerr's view that a search occurs for Fourth Amendment purposes only when its results are exposed to human eyes: such a notion goes from unremarkably unobjectionable - police are permitted to mirror entirely a suspect's hard drive and then are constitutionally limited as they perform searches on the copy - to dangerous to any notion of limited government powers. Professor Kerr appreciates this as a troublesome result - indeed, downright creepy - but does not dwell upon it beyond suggesting that the copying of data might be viewed as a seizure if not a search, at least so long as it involves some physical touching or temporary commandeering of the machine. This view should be amplified: If remote vacuum cleaner approaches are used to record and store potentially all Internet and telephone communications for later searching, with no Fourth Amendment barrier to the initial information-gathering activity in the field, the government will be in a position to perform comprehensive secret surveillance of the public without any structurally enforceable barrier, because it will no longer have to demand information in individual cases from third parties or intrude upon the physical premises or possessions of a search target in order to gather information of interest. The acts of intruding upon a suspect's demesnes or compelling cooperation from a third party are natural triggers for judicial process or public objection. 
If the government has all necessary information for a search already in its possession, then we rely only upon its self-restraint in choosing the scope and depth of otherwise unmonitorable searching. This is precisely the self-restraint that the Fourth Amendment eschews for intrusive government searches by requiring outside monitoring by disinterested magistrates - or individually exigent circumstances in which such monitoring can be bypassed.
Taken together, the current areas of expansion of surveillance appear permanent rather than exigent, and sweeping rather than focused, causing the justifications behind special needs exceptions to swamp the baseline protections established for criminal investigations. This expansion stands to remove the structural safeguards designed to forestall the abuse of power by a government that knows our secrets.

China's Internet filtering regime is the most sophisticated effort of its kind in the world. Compared to similar efforts in other states, China's filtering regime is pervasive, sophisticated, and effective. It comprises multiple levels of legal regulation and technical control. It involves numerous state agencies and thousands of public and private personnel. It censors content transmitted through multiple methods, including Web pages, Web logs, on-line discussion forums, university bulletin board systems, and e-mail messages. Our testing found efforts to prevent access to a wide range of sensitive materials, from pornography to religious material to political dissent. We sought to determine the degree to which China filters sites on topics that the Chinese government finds sensitive, and found that the state does so extensively. Chinese citizens seeking access to Web sites containing content related to Taiwanese and Tibetan independence, Falun Gong, the Dalai Lama, the Tiananmen Square incident, opposition political parties, or a variety of anti-Communist movements will frequently find themselves blocked. Contrary to anecdote, we found that most major American media sites, such as CNN, MSNBC, and ABC, are generally available in China (though the BBC remains blocked). Moreover, most sites we tested in our global list's human rights and anonymizer categories are accessible as well. While it is difficult to describe this widespread filtering with precision, our research documents a system that imposes strong controls on its citizens' ability to view and to publish Internet content. This report was produced by the OpenNet Initiative, a partnership among the Advanced Network Research Group, Cambridge Security Programme at Cambridge University, the Citizen Lab at the Munk Centre for International Studies, University of Toronto, and the Berkman Center for Internet & Society at Harvard Law School.

The production of most mass-market software can be grouped roughly according to free and proprietary development models. These models differ greatly from one another, and their associated licenses tend to insist that new software inherit the characteristics of older software from which it may be derived. Thus the success of one model or another can become self-perpetuating, as older free software is incorporated into later free software and proprietary software is embedded within successive proprietary versions. The competition between the two models is fierce, and the battle between them is no longer simply confined to the market. Claims of improper use of proprietary code within the free GNU/Linux operating system have resulted in multi-billion dollar litigation. This article explains the ways in which free and proprietary software are at odds, and offers a framework by which to assess their value - a prerequisite to determining the extent to which the legal system should take more than a passing, mechanical interest in the doctrinal claims now being pressed against GNU/Linux specifically and free software generally.

We collected data on the methods, scope, and depth of selective barriers to Internet usage through networks in China. Tests conducted from May through November 2002 indicated at least four distinct and independently operable Internet filtering methods - Web server IP address blocking, DNS server IP address blocking, keyword filtering, and DNS redirection - with a quantifiable leap in filtering sophistication beginning in September 2002.
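The four methods above leave distinguishable fingerprints in probe results. As a rough illustration (not the testing toolchain the study actually used), the following sketch classifies which method is consistent with a set of hypothetical observations; the `ProbeResult` fields and the decision order are assumptions for exposition:

```python
# Hypothetical sketch: infer which of the four filtering methods a set of
# probe observations is consistent with. Field names and the decision
# order are illustrative assumptions, not the study's actual methodology.

from dataclasses import dataclass
from typing import Optional

@dataclass
class ProbeResult:
    resolved_ip: Optional[str]   # IP returned by an in-country DNS lookup; None if the lookup failed
    expected_ip: str             # IP the hostname resolves to outside the filtered network
    tcp_connect_ok: bool         # could we open a TCP connection to expected_ip:80?
    plain_request_ok: bool       # did a request without sensitive keywords succeed?
    keyword_request_ok: bool     # did an otherwise-identical request containing a sensitive keyword succeed?

def classify_filtering(p: ProbeResult) -> str:
    if p.resolved_ip is None:
        # The resolver itself is unreachable: consistent with blocking the DNS server's IP.
        return "DNS server IP blocking"
    if p.resolved_ip != p.expected_ip:
        # The resolver answers, but with a bogus address.
        return "DNS redirection"
    if not p.tcp_connect_ok:
        # Correct address, but packets to it are dropped.
        return "Web server IP blocking"
    if p.plain_request_ok and not p.keyword_request_ok:
        # The connection works until a sensitive term appears in the request.
        return "keyword filtering"
    return "no filtering observed"
```

Because the methods are independently operable, a real probe may trip more than one; a sketch like this reports only the first match in its checking order.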

Jonathan L. Zittrain, The Role of Scientific and Technical Data and Information in the Public Domain: New Legal Approaches in the Private Sector, in The Role of Scientific and Technical Data and Information in the Public Domain: Proceedings of a Symposium 169 (Nat'l Res. Council, Nat'l Acad. Sci. 2003).

Categories:

Technology & Law

,

Property Law

Sub-Categories:

Intellectual Property - Copyright

,

Information Commons

,

Science & Technology

Links:

Type: Book

Abstract

The first session focused on the role, value, and limits of scientific and technical data and information in the public domain. This was followed in the second session by an overview of the pressures on the public domain.

In the spring of 1998, the U.S. government told the Internet: Govern yourself. This unfocused order - a blandishment, really, expressed as an awkward "statement of policy" by the Department of Commerce, carrying no direct force of law - came about because the management of obscure but critical centralized Internet functions was at a political crossroads.
This essay reviews Milton Mueller's book Ruling the Root, and the ways in which it accounts for what happened both before and after that crossroads.

Jonathan L. Zittrain, ICANN: Between the Public and the Private, in The Best in E-Commerce Law (Warren Agin ed., 2001).

Categories:

Technology & Law

Sub-Categories:

Networked Society

,

Cyberlaw

Type: Book

Jonathan L. Zittrain, What the Publisher Can Teach the Patient: Intellectual Property and Privacy in an Era of Trusted Privication, 52 Stan. L. Rev. 1201 (2000).

Current tax law makes it difficult to enforce sales taxes on most Internet commerce, which has generated considerable policy debate. In this paper we analyze the costs and benefits of enforcing such taxes, including revenue losses, competition with retail, externalities, distribution, and compliance costs. The results suggest that the costs of not enforcing taxes are somewhat modest and will remain so for several years. At the same time, compliance costs and the benefits of nurturing the Internet diminish over time. When tax costs and benefits take this form, a moratorium provides a natural compromise.

Microsoft has brilliantly exploited its current control of the personal computer operating system (OS) market to grant itself advantages toward controlling tomorrow's operating system market as well. This is made possible by the control Microsoft has asserted over user "defaults," a power Microsoft possesses thanks to a combination of (1) Windows' high market share, (2) the "network effects" that make switching to an alternative so difficult for any given consumer or computer manufacturer, and (3) software copyright, which largely prevents competitors from generating software that defeats network effects. The author suggests a much-reduced term of copyright for computer software--from 95 years to around five years--as a means of preventing antitrust problems before they arise.