Tag: Law

The Internet has changed how people get information, purchase goods, and interact with one another. The United Nations has labeled Internet access a human right,[1] and Hillary Clinton has identified Internet freedom as a core value in line with freedom of expression.[2] Governments have struggled with questions about how to regulate the Internet. Lately, the regulatory debate has centered on privacy and security on the web. The two debates are more inextricably intertwined than they may appear at first glance. Can there be complete privacy on the Internet while maintaining enough cyber awareness to ward off potential threats?

II. Background

In a recent New York Times article, Howard E. Shrobe, a computer science professor at the Massachusetts Institute of Technology, is quoted as saying, “[t]he software we run [on the internet], the programming language we use, and the architecture of the chips we use haven’t changed much in over 30 years….[e]verything [on the internet] was built with performance, not security, in mind.”[3]

Since Edward Snowden released troves of information shedding light on the National Security Agency's (NSA) data collection methods, privacy on the Internet has been a much-discussed topic. In the United States, concerns center on the government monitoring its own citizens' data.

Prior to Edward Snowden’s disclosures, the Obama administration had already begun examining policy solutions to use data gathered from government entities to protect U.S. critical infrastructure for national security purposes.[4]

A. Snowden Sparks a Debate on Privacy

In 2013, a former contractor for the NSA, Edward Snowden, released thousands of documents to the media, giving the public a look into the secretive practices of the NSA.[5] Snowden’s leaks showed the breadth and depth of NSA data collecting practices on both foreign nationals and U.S. citizens located domestically. Snowden cited civil liberties as his primary motive for disclosing classified information.[6] If Snowden wanted to spark a public debate on the merits of government data collection practices, he was certainly successful.

Following Snowden’s leaks, James R. Clapper, Director of National Intelligence, apologized for previously lying to Congress: when asked whether the NSA collected any type of data on millions of Americans, Clapper had replied, “no, sir.”[7] U.S. District Court Judge Richard Leon held that the agency’s controversial program appears to violate the Constitution’s Fourth Amendment, which protects Americans against unreasonable searches and seizures.[8] The program collects records of the time and phone numbers involved in every phone call made in the U.S. and allows that database to be queried for connections to suspected terrorists. “I cannot imagine a more ‘indiscriminate’ and ‘arbitrary invasion’ than this systematic and high-tech collection and retention of personal data on virtually every single citizen for purposes of querying it and analyzing it without judicial approval,” wrote Leon, a George W. Bush appointee, in the ruling.[9] The Supreme Court denied a writ of certiorari to hear the case.[10]

A White House-appointed review panel recommended that the government cease storing call data on hundreds of millions of Americans.[11] President Obama acknowledged the dialogue surrounding NSA data collection and civil liberties arose at least in part due to Snowden’s disclosures.[12]

Snowden’s disclosures also raised the issue of privacy on the Internet abroad. Brazilian President Dilma Rousseff championed legislation in her home country that has been touted as an internet bill of rights which limits the metadata that can be collected on Brazilians and promotes access to the Web.[13]

Whether the effects of Snowden’s disclosures were positive or negative may be a matter of opinion. What cannot be denied, however, is the rise in awareness of the scant privacy available on the Internet. While Snowden’s actions prompted a backlash denouncing the NSA’s overreach, compounded when the NSA falsely attributed averted terrorist attacks to the data it collected, more considerations and factors weigh into the merits of monitoring web traffic.

B. Critical Infrastructure Concerns

In a 2013 report to Congress, the Department of Defense accused China of accessing and collecting data on U.S. diplomatic, economic, and defense industries.[14] U.S. accusations were corroborated by a report from Mandiant, a cyber-security firm, which came to similar conclusions.[15] The accusations from Mandiant and the Defense Department demonstrated the vulnerability of U.S. national security interests to cyber-attacks.

Attempts to pass legislation addressing the cyber security concerns of private industries critical to national interests have stalled, especially after Snowden’s disclosures.[16] As a result, President Obama signed an Executive Order in February 2013 directing the Department of Homeland Security to create a national framework that reflects the increasing role of cyber security in securing physical assets.[17] “Much of our critical infrastructure – our financial systems, power grids, pipelines, health care systems – run on networks connected to the internet, so this is a matter of public safety and of public health,” President Obama stated in January 2015 while introducing renewed efforts to pass cyber security reform.[18]

C. Sony

In November 2014, Sony Pictures Entertainment suffered a massive cyber-attack that exposed terabytes of information, including personally identifiable information (PII) of Sony employees, emails, and unreleased movies.[19] On November 24, 2014, Sony became aware of the breach when an ominous red skull, with a warning that Sony’s secrets were about to be released, appeared on Sony computers. It is unclear when Sony’s systems became compromised.[20] A group calling itself “Guardians of Peace” took credit for the attack. On December 19, 2014, the U.S. Federal Bureau of Investigation (FBI) concluded that North Korea was behind the attack on Sony.[21]

On December 16, 2014, Guardians of Peace, the group claiming responsibility for the hack, posted terrorist threats online directed at movie theaters if they played Sony’s motion picture “The Interview.”[22] The movie is a comedy, which includes a scene depicting the North Korean dictator Kim Jong Un being killed. In June 2014, North Korea wrote to the Secretary General of the U.N. stating that the distribution of the movie should be regarded as an act of war.[23]

It should be noted, however, that Norse, a private cyber security firm, also investigated the Sony hack and found no evidence of North Korea being responsible.[24]

Regardless of who is ultimately responsible, the cost of Sony’s hack is estimated to be upwards of $300 million.[25]

III. Analysis

“You have zero privacy anyway. Get over it,” the co-founder and chief executive of Sun Microsystems, Scott McNealy, said in response to growing concerns over consumer privacy in 1999.[26] As abrasive as he was, McNealy’s inelegant comment seems eerily prescient sixteen years after the fact. Every website a user visits is logged, and every post and online purchase leaves a trace of a user’s online presence.[27] Every email sent via Google’s ubiquitous Gmail service is scanned for data for potential advertisers.[28] With a $395 billion company built on a principle of data mining and advertising, what chance does online privacy really stand?

Edward Snowden confirmed a notion that existed long before 2013: “big brother” is watching. As early as 2004, when Facebook was a small website for college students, there was an implicit understanding of the importance of protecting one’s online image. There is no doubt that some information posted on the Internet should be private, particularly credit card numbers used for online purchases. There is also clearly some information that is not private at all, such as public tweets, which are now being collected by the Library of Congress.[29] Legal scholars will need to develop theories about the information that falls between these two examples, determining which online information should be openly accessible and attributable and which should require a warrant to be admissible against a citizen.

Do the ends of protecting critical infrastructure from potentially massive disruptions, or of preventing terrorist attacks through metadata collection, justify NSA practices? This question must be considered when weighing the merits of online data privacy.

Despite the difficulties, online anonymity may be a winning bargain for privacy advocates and policymakers alike. Protecting the U.S. economy and national security are goals too important to justify completely ceasing metadata collection, but with clear guidelines in place, anonymity can be maintained until there is an established need to identify a person of interest.

As Dr. Shrobe stated, the Internet was built with performance, not security, in mind, so when the need to identify potential persons of interest arises, there should be clear guidelines in place to authorize removing the veil of anonymity.[30] The guidelines should serve as the basis for a preemptive warrant to protect against violations of citizens’ Due Process rights. As the White House-appointed panel recommended, the government should cease storing call data on hundreds of millions of Americans, or at least cease storing that data indefinitely.[31]

Sony is a private example of larger security concerns that come with an open Internet. The costs Sony has incurred and the publicity of the attack may serve to raise awareness around cyber security. A federal policy solution to protect industries not critical to national security interests may be a bridge too far, but private companies should begin to factor in cyber security as a cost of doing business in the Internet age, or risk being the next victim of a $300 million cyber-attack.

IV. Conclusion

The Internet has performed exceedingly well in connecting the world and delivering information quickly. If the Internet was built with performance in mind, as Dr. Shrobe stated, it may be time to consider what the Internet should evolve into. The Internet as a security-less means of accessing data may prove to be an economically costly proposition that is potentially detrimental to national security. Private companies can hire cyber security firms to manage their networks and protect against potential intrusions, but the threat of cyber-attacks will never be completely eliminated. For the Internet to meet the challenges of the intricately connected world it helped to create, it must evolve into a safer medium through which businesses and governments operate. Until then, we can remember McNealy’s words every time we log onto the Internet and “get over” our lack of privacy. At least we can cross our fingers for anonymity on the web.

*J.D. Candidate, University of Illinois College of Law, expected 2017. B.A. Political Science, University of Illinois at Chicago, 2011. I would like to thank the entire team at the Journal of Law Technology and Policy for their help on this piece.

[12] Office of Press Secretary, Remarks by the President on the Review of Signals Intelligence, White House (Jan. 17, 2014, 11:15 AM), http://www.whitehouse.gov/the-press-office/2014/01/17/remarks-president-review-signals-intelligence.

[14] Office of the Secretary of Defense, Military and Security Developments Involving the People’s Republic of China 2013, Defense 36 (2013), http://www.defense.gov/pubs/2013_china_report_final.pdf.

[15] David Sanger, David Barboza, Nicole Perlroth, Chinese Army Unit Is Seen as Tied to Hacking Against U.S., N.Y. Times (Feb. 18, 2013), http://www.nytimes.com/2013/02/19/technology/chinas-army-is-seen-as-tied-to-hacking-against-us.html.

In recent decades, network-driven data analysis has been a source of major developments and insights in neuroscience,[1] sociology,[2] and information science,[3] to name just a few academic fields; these tools have also been used to develop precise product marketing initiatives, more appropriate recommendations on sites such as Pandora[4] and Amazon,[5] and efficient search algorithms such as Google’s PageRank.[6] Curiously, legal research is typically not especially network-based, despite the fact that network tools such as PageRank were inspired by tools of legal analysis (especially Shepard’s Citations, available through Lexis).[7] It is a truism among legal scholars that statutes, enforcement, precedent, and interpretation are all deeply interconnected.[8] The significant role of stare decisis in contemporary legal practice makes it all the more puzzling that legal scholarship tends to be conducted in a linear or modular form. The aim of this article is to encourage a more network-theoretic approach to the identification and interpretation of legal precedent, one that more appropriately fits the non-modular, network structure of law.

I begin by briefly reviewing the basic concepts and tools of network analysis. Following this introduction, I highlight an important shortcoming in the most common tools for legal scholarship, and some concrete steps that could be taken to improve the methods used by lawyers and legal scholars to represent and interpret legal precedent. In particular, I argue that services such as WestlawNext and Lexis Advance could be improved if users were given more resources for going beyond simple Boolean searches. If properly implemented into the user interfaces of these services, network representations of legal precedent could make the process of searching and drawing from legal precedent more efficient, both in terms of the time taken to conduct searches and the accuracy of the results. I conclude by noting some directions for future research.

II. Background

Networks have two components: objects and relations.[9] The objects are called nodes (or vertices), and the relations between those objects are called edges.[10] In the network represented in Figure 1, the nodes are the numbered entities (1–10) and the edges are the lines connecting those entities. Not all edges are equal. If, for example, we represented a friendship network, it would be useful to distinguish between close friends and acquaintances. To track the strength of friendship ties, we could give distinct edge weights to each (e.g., two for close friends and one for acquaintances). In Figure 1, edge weight is represented by the color of the edge, with black edges representing strong ties and gray edges weak ties. If our representation of the network were sensitive to edge weight, node 9 would be spatially closer to 8 than to 10.

Figure 1. An example network with ten nodes and seventeen edges.

The creation of network representations usually involves attraction and repulsion between nodes.[11] Edges between nodes act as attracting forces, with the edge weight determining the strength of the attraction.[12] In order to preserve spatial distance between nodes, this attraction is countered by a general repulsive force between all nodes. To avoid unlimited repulsion between disconnected nodes, a gravitational force pulls all nodes to the center.

The most significant properties of nodes, for present purposes, are their relational properties. Degree, a basic relational property, is equal to the number of the node’s edges.[13] In Figure 1, node 2 has a degree of four because it is related to four other nodes. Degree is a limited measure because it only considers nodes in relation to their nearest neighbors and is insensitive to the significance of the connection.[14] In a trade network, for example, it would be important to know not just which countries trade with which, but also the quantity of goods traded. To track this information, we should consider weighted degree, which assigns distinct values to each edge based on the significance of that relation, but this information is still highly limited. In analyzing a criminal or terrorist network, for example, we can learn something from the fact that A communicated with B, but we learn far more about A if we also know that B worked with C, D, and E, where these are high level figures in the illicit organization.
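The degree measures described above can be sketched in a few lines of code. The following is a minimal illustration using a small invented network (not the article's Figure 1, whose full edge list is not reproduced here); edges are stored as two-node sets mapped to their weights.

```python
# Hypothetical toy network: each undirected edge is a frozenset of two
# nodes, mapped to an edge weight (2 = strong tie, 1 = weak tie).
edges = {
    frozenset({"A", "B"}): 2,
    frozenset({"A", "C"}): 1,
    frozenset({"B", "C"}): 1,
    frozenset({"C", "D"}): 2,
}

def degree(node):
    """Degree: the number of edges incident to the node."""
    return sum(1 for e in edges if node in e)

def weighted_degree(node):
    """Weighted degree: the sum of the weights of the node's edges."""
    return sum(w for e, w in edges.items() if node in e)

print(degree("C"))           # 3 (C touches three edges)
print(weighted_degree("C"))  # 4 (1 + 1 + 2)
```

As the trade-network example suggests, the two measures can rank nodes differently: a node with few but heavily weighted edges may have a higher weighted degree than a node with many weak ties.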

To track such indirect connections, we also need a measure of network centrality. Various centrality algorithms are used for different purposes, but they share an important common feature: sensitivity to a node’s position in the network as a whole.[15] Here I mention just three. The first, betweenness centrality, is a measure of how often a node occurs on the shortest path between two other nodes.[16] Nodes with higher betweenness centrality are more likely to play an essential bridge role in connecting two otherwise separate groups of nodes. In Figure 1, node 8 has the highest betweenness centrality because 9 and 10 are only related to other nodes through 8. In a network of U.S. senators, with edges defined by voting records, centrist senators would have the highest betweenness centrality because they alone bridge the divide between the Republican and Democratic voting blocs. Eigenvector centrality is a measure of the importance of a node in the network as measured by its connectedness to other nodes with high Eigenvector centrality.[17] This metric is similar to the third measure of centrality, Google’s PageRank metric for determining the relevance of websites in a search, which in turn was inspired by Shepardizing.[18] The PageRank for website W is determined by considering the number of other websites with links to W, with greater weight given to linking websites that are themselves frequently linked.[19] In Figure 1, node 7 has the highest Eigenvector centrality and PageRank because it has several connections with nodes that themselves have several connections. In a citation-based network, Eigenvector centrality is a measure of the relative centrality of an author to the discussion in their area of specialty.
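The seemingly circular definition of Eigenvector centrality is typically resolved by iteration: start with equal scores and repeatedly set each node's score to the sum of its neighbors' scores until the values stabilize. A minimal sketch of this "power iteration" on a small invented undirected graph (not the article's Figure 1):

```python
# Hypothetical adjacency lists for a five-node undirected graph.
adj = {
    1: [2, 3, 4],
    2: [1, 3],
    3: [1, 2],
    4: [1, 5],
    5: [4],
}

def eigenvector_centrality(adj, iterations=100):
    # Start with equal scores; on each pass, a node's new score is the
    # sum of its neighbors' scores, normalized so values stay bounded.
    scores = {n: 1.0 for n in adj}
    for _ in range(iterations):
        new = {n: sum(scores[m] for m in adj[n]) for n in adj}
        norm = max(new.values())
        scores = {n: v / norm for n, v in new.items()}
    return scores

scores = eigenvector_centrality(adj)
# Node 1 is connected to the best-connected nodes, so it converges
# to the highest score; peripheral node 5 converges to the lowest.
```

Mirroring the article's point, node 1 outranks node 4 even though a purely local count (degree) would rate nodes 2, 3, and 4 identically: centrality rewards being connected to well-connected neighbors.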

For present purposes, we can think of individual court opinions as nodes in the network. The most significant edges in the network are citations to previous court opinions, but one could also conceptualize the legal precedent framework with edges indicating similarity of content, geographical regions, or time periods. Whatever data are chosen as the basic structure of the network, legal scholars could, as I argue below, benefit from a network-theoretic reconceptualization of the legal terrain.

III. Analysis

Online research tools such as WestlawNext[20] and Lexis Advance[21] already have limited network-based approaches, but these services could be substantially improved by extending the user’s ability to visualize and digest the interconnected network of cases constituting current legal precedent. In this section I present several ways that these services could be enhanced. Each of the suggested changes would be relatively easy to implement and could significantly improve scholars’ and lawyers’ ability to identify the most relevant precedents. These suggestions apply to Westlaw, LexisNexis, and other similar services, but I will focus on the current user interface of WestlawNext and leave it to the reader to see how the suggestions would apply to other services.

For WestlawNext, generalized inquiry usually begins with the user entering a citation, party names, keywords, or other information into a Boolean search algorithm.[22] While this process is fairly straightforward and efficient, it has a notable shortcoming. If, for example, your aim is to find cases involving pre-verbal infants causing harm, a search for “baby” will return just those cases where “baby” appears as a keyword or within the text; but, of course, cases mentioning “infant,” “toddler,” “small child,” or “newborn” could also prove relevant. Thus, these search engines could be improved by implementing semantic network databases so that semantically nearby terms are given some weight.
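The suggested improvement can be sketched in miniature. The synonym map and case snippets below are invented for illustration and bear no relation to Westlaw's actual data or interface; the point is only that expanding a query with semantically nearby terms surfaces cases a literal keyword match would miss.

```python
# Invented semantic-neighbor map and case texts (illustration only).
SYNONYMS = {
    "baby": {"infant", "toddler", "newborn", "small child"},
}

cases = {
    "Case A": "plaintiff alleges the baby caused the damage",
    "Case B": "the infant knocked over the display",
    "Case C": "an adult driver was negligent",
}

def expanded_search(term, cases):
    # Match the queried term itself plus any semantically nearby terms.
    terms = {term} | SYNONYMS.get(term, set())
    return sorted(name for name, text in cases.items()
                  if any(t in text for t in terms))

print(expanded_search("baby", cases))  # ['Case A', 'Case B']
```

A literal search for "baby" would return only Case A; the expanded search also retrieves Case B, which mentions an "infant." A production system would weight the expanded terms rather than treat them as exact matches, but the principle is the same.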

Once the user has found a relevant case, WestlawNext provides excellent network-based information in the form of KeyCite.[23] This tool allows users to immediately see a summary evaluation of how the case fits into the network of legal precedent, whether the case has been superseded, affirmed, distinguished, or received other treatments, and the significance of each related case. This information is analogous to knowing node degree, types of edges, nearest neighbors, and edge weight, but is limited in the same way as these measures of node significance. A major shortcoming of the initial search results is that users are given a list of cases, C1–Cn, each related to the queried case, C0, where C1–Cn are each provided with specific information linking it to C0, but without any further information putting these cases in a broader legal context or showing how they might directly relate to one another. This is partially remedied by the diagrammatic representation of the case history on WestlawNext, wherein users see a small set of prior cases that have been granted rehearing, had their judgment reversed, etc., but there is a great missed opportunity at this stage of the search. Along with learning how the case directly relates to prior cases, it would be valuable to have network-based representations of a greater diversity of relations and a ranking system more sensitive to a case’s position within the network of legal precedents. Researchers could benefit from visual representations of several clusters of cases relevant to their specific topics, where the edges would indicate important relations between these cases beyond explicit undermining or supporting relations. For example, one could selectively add or remove edges indicating similarity in semantic content, relevant statutes, or topics.
This would allow scholars to freely navigate the metaphorical legal space in a literal spatial layout that intuitively maps onto the conceptual distances between the various cases. When starting the research process, this would provide users with an easily digestible, unified picture of the topic highlighting the most important judgments to consider in more detail, and, for users already familiar with the legal landscape, this service would help them identify the most important gaps in their knowledge. For most of the relevant criteria, both Westlaw and LexisNexis already possess the data, so these services could be improved simply by adding functionality to the user interface.

It would also be beneficial to use network-based measures of centrality as an indicator of the significance of cases, rather than raw citation counts or a vague sense of importance inherited from peers and educators. If one wished to know the most significant landmark cases on a specific issue, one could do far worse than seeking experts’ opinions, but a quantitative measure may be a more reliable indicator of significance than even the intuitive judgments of experts. WestlawNext provides citation numbers, but raw citation counts can be highly misleading as a method for ranking the significance of cases: the data are not sensitive to the relative importance of the court decisions citing the case in question, and some cases have received more citations simply by virtue of having been decided earlier. By analogy, in an academic citation network, being cited by the top scholar in the field is more important than being cited by ten small players. In the same way, court decisions cited in landmark cases are more significant than those cited by several less significant cases.

To gain a more accurate representation of the most significant cases, it would be better to have a system that mirrors academic rankings like H-index[24] or Google’s algorithms for ranking websites. This could be implemented by WestlawNext and similar services by providing users with a significance score for case C that is simultaneously sensitive to all of these factors: (1) the number of cases citing and cited by C, (2) the significance of the cited cases to C and the significance of C in the court decisions citing it, and (3) the relative importance of the citing and cited cases. This sort of method has been tested by James Fowler et al., who found inward relevance (one of many measures of network centrality) was a strong predictor of future citations.[25] Given the relative success of this and similar models for accurately identifying and predicting case significance, online archives such as Westlaw could improve the relevance of their search results by using network centrality for sorting and filtering results, and they could provide more meaningful information to users by including cases’ centrality scores in the listed search results.
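A PageRank-style computation over a citation network captures factors (1)–(3) simultaneously. The sketch below runs the standard PageRank recurrence on a tiny invented citation graph; the case names and citation links are hypothetical, and this is an illustration of the general technique rather than Fowler et al.'s specific inward-relevance measure.

```python
# Invented citation network: each case maps to the cases it cites.
cites = {
    "C1": ["C4"],
    "C2": ["C4"],
    "C3": ["C4", "C5"],
    "C4": ["C5"],
    "C5": [],
}

def significance(cites, damping=0.85, iterations=100):
    # Standard PageRank: each case splits its current score among the
    # cases it cites; a case that cites nothing (a "dangling" case)
    # spreads its score evenly across the whole network.
    n = len(cites)
    score = {c: 1.0 / n for c in cites}
    for _ in range(iterations):
        new = {c: (1 - damping) / n for c in cites}
        for c, cited in cites.items():
            if cited:
                share = damping * score[c] / len(cited)
                for d in cited:
                    new[d] += share
            else:
                for d in new:
                    new[d] += damping * score[c] / n
        score = new
    return score

score = significance(cites)
# C4 has more raw in-citations (three) than C5 (two), yet C5 ranks
# higher, because C5 is cited by the highly significant C4.
```

This is exactly the behavior the paragraph above calls for: a case cited by a landmark decision outranks a case with more citations from minor decisions.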

IV. Recommendation

The advice offered above is specifically aimed at improving the efficiency of searches for legal precedent, but these tools could be used in a greater variety of contexts. I conclude by briefly suggesting a few further possibilities. First, closely related to the discussion above, the method of collecting and analyzing case precedent from a network perspective could be used by legal scholars to develop highly accurate pictures of the history and future of law. For example, Fowler et al. observed that, in the cases they reviewed, the Commerce Clause was the most significant legal issue in 1955, whereas First Amendment issues had become dominant in more recent years.[26]

By identifying and tracking the trends in law over the years, researchers could develop fine-grained, data-driven overviews of the history of the law while also developing accurate models for predicting future trends. Second, scholars could use network analysis to test for possible sources of bias in judicial decisions over the years by creating and analyzing social networks showing social or communication links between judges and lawyers that correlate, in a problematic way, with judges’ rulings. Finally, similar methods could be used to compare the structures of scientific and legal citation networks to see if the legal community’s structure is relevantly similar to the structure of the sciences.

*Ph.D., Philosophy, University of Illinois. Special thanks go to Laura Peet and Alexis Dyschkant for invaluable discussions regarding the nature and practice of law. I also wish to thank Jonathan Waskan and Jana Diesner for providing the empirical and theoretical tools needed to approach this topic.

[17] See id. at 417. This may seem paradoxical, as Eigenvector centrality for any given node can only be determined in reference to the Eigenvector centrality of other nodes. The apparent circularity is resolved by calculating the metric over several iterations of the algorithm until the scores converge.

[23] Lexis Advance offers a similar service with Shepard’s, and its Map option mirrors WestlawNext’s case mapping function described later in the paragraph.

[24] Publish or Perish, Harzing, http://www.harzing.com/pop.htm (last visited Feb. 4, 2015). The h-index is a measure of an academic’s productivity and impact. A scholar is given a score of h when she has h papers with at least h citations each, and her remaining papers each have no more than h citations.
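The h-index definition in note [24] amounts to a short computation: sort a scholar's per-paper citation counts in descending order and count how far down the list each paper still has at least as many citations as its rank. A minimal sketch, with invented citation counts:

```python
def h_index(citations):
    # h is the largest value such that the scholar has h papers
    # with at least h citations each.
    counts = sorted(citations, reverse=True)
    h = 0
    while h < len(counts) and counts[h] >= h + 1:
        h += 1
    return h

print(h_index([10, 8, 5, 4, 3]))  # 4: four papers have >= 4 citations
print(h_index([25, 8, 5, 3, 3]))  # 3: only three papers have >= 3
```

Note how the second scholar's single heavily cited paper does not raise the score, which is why the h-index resists being skewed by one outlier, much as network centrality resists raw citation counts.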

[25] James Fowler et al., Network Analysis and the Law: Measuring the Legal Importance of Precedents at the U.S. Supreme Court, 15 Pol. Analysis 324–46 (2007).

When Indianapolis Colts’ future Hall of Fame quarterback Peyton Manning announced that he would not start the 2011 NFL season, his streak of 227 consecutive starts ended and fantasy team owners panicked. In 2011, there were approximately $650 million of fantasy football prizes on the line.[1] It is estimated that Manning’s absence shifted $65 million away from people who would have won their fantasy football leagues had he not been injured.

There are currently over 32 million fantasy sports players in the United States and Canada, and the industry generates more than $3 billion in revenue.[2] Fantasy sports have become as much of an American pastime as the games upon which they are based. But this popularity may be disguising the fact that fantasy sports are just another form of illegal gambling. In most states, the legality of a betting game depends upon the amount of skill versus chance required to play the game. In general, the more skill that is involved, the more likely the game is legal.

The problem is that as fantasy sports evolve to meet the needs of an ever-expanding fan base, many leagues have added features that allow less-knowledgeable players to participate. By lowering the amount of skill needed to play, the outcome is more chance-based. If this trend continues and current gambling law prevails, fantasy sports could become so dependent on chance that they will become illegal.

II. Gambling Law

A. Federal Law

The purpose of federal gambling law is to “aid the states in controlling gambling,”[3] and specifically to assist the states in the “enforcement of their [gambling] laws.”[4] Federal gambling laws do not attempt to create uniformity among the states; rather, they exist simply to supplement each state’s own laws.

B. State Law

Gambling regulation is mostly a function of state law and can vary considerably. The dictionary defines gambling as “play[ing] a game for money or property” or “bet[ting] on an uncertain outcome.”[5] However, most states allow activities that seem to fall into this category, such as state lotteries. In fact, in most states, an activity is legal unless a plaintiff makes an affirmative showing that a particular activity involves three elements: consideration, reward, and chance.[6]

1. Consideration

Consideration is often loosely defined as something given in exchange for something else. In the context of gambling, most courts construe the term narrowly, holding that consideration exists only when a “participant provided money or a valuable item of property in exchange for the chance of greater winnings.”[7] Some courts, however, adopt a broader definition, finding that consideration exists whenever any legal detriment is given in exchange for the chance to win a prize.[8]

2. Reward

In gambling, the reward is the prize that one receives after winning a game of chance. To meet this requirement, courts have held only that the reward must be tangible.[9]

3. Chance

Chance is the most controversial element. To constitute a game of chance, courts have held that the outcome of the game must depend upon factors that are out of a player’s control, as opposed to a player’s “judgment, practice, skill, or adroitness.”[10] To make this determination, courts have applied three tests: (1) the “dominant factor test,” (2) the “any chance test,” and (3) the “gambler’s instinct test.”

Most states use the dominant factor test.[11] In Johnson v. Collins Entertainment, the South Carolina court explained that a game is chance-based when “the dominant factor in a participant’s success . . . is beyond his control . . . even though the participant exercises some degree of skill.”[12] The threshold of the dominant factor test is the point at which either skill or chance affects the outcome by more than 50%.

Some states use the any chance test. In these states, an activity is a game of chance if it incorporates any element of chance, regardless of whether the game also incorporates skill.[13] Because almost every game involves some chance, most games will not survive scrutiny in these states.

Finally, a few states use the gambler’s instinct test. This test defines a game of chance as one that appeals to the “gambling spirit,” without regard to whether skill or chance dictates the outcome.[14] Because of the highly subjective nature of this test, a court’s decision can vary considerably.

III. The Legality of Fantasy Sports Under the Majority View

Most states adopt a narrow definition of consideration and use the dominant factor test. In these states, the structure and features of a particular fantasy game are of utmost importance. Legal fantasy games generally fall into three categories: (1) leagues that do not charge an entry fee; (2) leagues that do not award prizes; and (3) leagues that are predominately skill-based. The first two categories are relatively straightforward. Leagues that are free are legal because there is no consideration. Alternatively, leagues that do not award prizes are legal because there is no reward.

The third category is more complex. In this category, fantasy games are legal if the outcome is more than 50% based on skill. Fantasy leagues are generally considered skill-based if they allocate players through a traditional auction and span at least one entire season.[15] This is because fantasy players have the opportunity to offset chance occurrences, such as player injuries or adverse weather conditions, with efficient team management, lineup changes, and trade negotiation. It is this category of fantasy games that is most at risk as the popularity of fantasy sports increases.

IV. The Future of Fantasy Sports: How New Features Affect the Dominant Factor Test

A. Auto-Draft

Auto-draft is a feature used during a fantasy draft that ensures a fantasy team owner automatically drafts the highest-rated player available. Automatic drafting algorithms are designed to create competitive leagues. Beginners typically use auto-draft because they lack the knowledge needed to fill out their rosters. Some argue that auto-draft is unfair because there is no guarantee that the owner using auto-draft would have actually selected the highest-ranked player.[16]

B. Point Projections

Point projections are similar to a cheat sheet in that they predict how many points a player will earn during a game. To set a lineup, an owner starts the players on his team with the highest number of projected points. Point projections place a passive team owner in the same position as an owner who has done extensive research on his players’ current matchups, injury reports, or other conditions affecting a player’s potential performance.

C. Short Season Leagues

Fantasy games that stretch over longer time spans allow an owner’s managerial skills regarding drafting a team, setting lineups, and making trades to counteract the effects of chance. Fantasy leagues that span multiple seasons allow owners to employ strategies that may take several years. These leagues require a considerable level of commitment, knowledge, and skill. Conversely, some leagues last only a day or a week. In these games, the outcome is more chance-based because it is closely tied to a single, real-world event.

D. The Effect

The intent of auto-draft, point projections, and short season games is to increase fantasy sports participation. These features accomplish this task by reducing the need to spend time analyzing statistics and setting lineups. But because the legality of a fantasy sports game is based on the level of skill required to play the game, these features, while increasing participation, are simultaneously pushing a multi-billion dollar industry to the brink of extinction. To avoid this outcome, current gambling laws should not be used to regulate fantasy sports.

V. How Fantasy Sports Differ From Other Gambling Games

Fantasy sports cannot be regulated effectively under current gambling law because they are different from other casino-type games. Fantasy sports are different because strategy can be used to overcome the chance elements involved in the game. To understand the impact of strategy in fantasy sports games, it is important to understand the differences between strategy and skill.

Skill is “the ability to use one’s knowledge effectively.”[17] Skill can be obtained through study, repetition, drill, or practice. Often, exercising skill becomes an automatic response that occurs independently of any cognitive process. Strategy, on the other hand, is a deliberate, planned, and conscious activity. Strategy involves the application of skill, but it also implies an understanding of the interaction between underlying concepts.

Managing a fantasy sports team takes skill and strategy. Drafting players, for example, is partly skill-based because players’ statistics can be learned through study. Drafting players is also strategy-based because an owner must prioritize his selections by anticipating other owners’ choices. Trade negotiation, however, is primarily strategy-based. Skill-based trades would involve analyzing statistics to make mutually beneficial trades. But most trades are not mutually beneficial. Instead, trades typically involve psychological warfare, feeding off other owners’ impulsive natures, or exploiting other teams’ weaknesses. In fact, many fantasy experts insist that trade negotiation is an art.

By utilizing strategy, owners can prevent chance from determining the outcome of the game. All fantasy sports involve chance due to adverse weather conditions and possible player injuries. However, a skillful owner circumvents these elements by drafting backup players, checking game day weather and injury reports, and adjusting his lineup as necessary. Conversely, a poker player cannot eliminate the chance that he will be dealt an unfavorable hand, a craps player cannot anticipate the roll of the dice, and a roulette player cannot predict the number on which the ball will fall. Therefore, the ability to use strategy to “beat chance” distinguishes fantasy sports from illegal gambling games.

VI. Resolving Fantasy Sports’ Differences Under the Law

A. Ambiguity

Not only are fantasy sports different from other gambling activities, they are also different from each other. Because of this, attempts to regulate fantasy sports under existing gambling laws have produced ambiguous guidelines. For example, in Humphrey v. Viacom, a New Jersey court held that the fantasy sports game at issue was legal because it would be “patently absurd” to conclude that the combination of an entry fee and a prize constituted gambling.[18] The court reasoned that such a holding would mean that spelling bees, beauty contests, and golf tournaments would also be considered gambling. Although this holding appears to give fantasy sports a “clean bill of health,” it has been severely limited to its facts.[19] Therefore, fantasy sports games continue to be arbitrarily analyzed depending on the rules of each particular game. A better approach is for states to pass specific fantasy sports legislation.

B. Fantasy Sports Specific Law

Montana is currently the only state with specific statutory authorization for fantasy sports.[20] While the Montana Code is a good starting point, new legislation should expand the law by first defining a fantasy sports game and then requiring the game to meet a two-part test. First, as in Montana’s Code, a fantasy sports game could be defined as an activity in which “a limited number of persons . . . pay an entry fee for membership in the league” and create “a fictitious team composed of athletes from a given professional sport.” If the game meets the basic definition, its legal status could be determined based on (1) whether the game involves strategy; and (2) whether strategic decisions lessen the effect of chance on the game.

Part one of the test would require a court to consider whether the game involves strategy. This inquiry looks only at whether participants’ strategic decisions ultimately affect the outcome of the game. Part two asks whether a participant can use strategy to effectively “beat chance.” This requires a court to identify chance elements, such as player injuries or adverse weather conditions, and ask whether a strategic player could reduce the effect of those elements.

When the two-part test is met, the game should be deemed legal. Alternatively, if chance elements, such as those dependent on random number generators, dice throws, or card shuffles, cannot be controlled, the game should be illegal. The new law would recognize fantasy sports as games of strategy. By distinguishing them in this way, the law protects the legal status of true fantasy sports games.

VII. Conclusion

Fantasy sports have emerged as an American pastime as participation skyrocketed over recent years. New features, designed to further increase participation, arguably lower the amount of skill involved in the game, thereby threatening the legality of fantasy sports. To fix this problem, state legislatures must pass fantasy sports specific laws. By doing so, states can protect the multi-billion dollar industry.

[11] Anthony N. Cabot et al., Alex Rodriguez, a Monkey, and the Game of Scrabble: The Hazard of Using Illogic to Define the Legality of Games of Mixed Skill and Chance, 57 Drake L. Rev. 383, 390 (2009).