
Last July, the new EU-US Privacy Shield framework became effective. The Privacy Shield replaced the International Safe Harbor Privacy Principles (commonly known as “Safe Harbor”), which had been in place since 2000. Under the EU Data Protection Directive, companies can only transfer data outside of the EU to a country deemed to have an “adequate” level of data protection, and the US (which takes a sectoral approach to data privacy and has no comprehensive national data privacy law) is not one of those countries. Given the importance of EU-US data transfers in the global economy, the Safe Harbor principles were developed as an entity-level, rather than country-level, adequacy mechanism, allowing a US company to achieve adequacy (in the eyes of the EU) so that EU-US data transfers with that company could take place. Safe Harbor served as an alternative to two other entity-level adequacy mechanisms: standard contractual clauses (SCCs, also known as model contract clauses), which must be put in place separately with each EU company transferring data to a US entity, making them difficult to scale; and binding corporate rules (BCRs), which require Board of Directors approval and significant time and resources, and have only been implemented by very large multinational companies. (There is also an individual-level adequacy mechanism: direct consent.)

Everything changed in October 2015, when the European Court of Justice (ECJ) released its decision in a case against Facebook brought by Austrian citizen Max Schrems. The ECJ held that the Safe Harbor framework did not provide adequate privacy protections to EU individuals, and was therefore invalid. Among other reasons for invalidation, the ECJ found that broad US government powers to access data (including data of EU citizens) held by private US companies directly conflicted with the EU’s declaration of data protection as a fundamental human right. Given the importance of the Safe Harbor program in facilitating EU-US data transfers, its invalidation had a far-reaching impact. While the EU agreed to wait a few months before bringing any actions against companies in the Safe Harbor program that did not move to an alternative entity-level adequacy mechanism, US companies faced a difficult choice: switch to an alternative and more difficult/costly approach, such as standard contractual clauses, or wait and see whether the EU and US could quickly agree on a Safe Harbor replacement before the EU’s enforcement deadline.

Fortunately, the European Commission and the US government quickly accelerated existing talks on resolving the shortcomings of the Safe Harbor principles, leading to the announcement of the Privacy Shield program in February 2016. The European Commission quickly issued a draft adequacy decision for the Privacy Shield program, and despite some misgivings about the program from certain groups, the European Union gave its final approval on July 12, 2016. The Privacy Shield program is made up of 7 core principles and 15 supplemental principles. Like Safe Harbor before it, it is a self-certification program, and a number of principles are common to both Safe Harbor and Privacy Shield. The Privacy Shield program seeks to address a number of the perceived shortcomings of the Safe Harbor principles, including protection for onward transfer of information by US companies to third parties such as their service providers, multiple ways for individuals to make a complaint about a Privacy Shield-certified company, stronger enforcement mechanisms, and an annual review mechanism. Its intent is to be a replacement entity-level mechanism which addresses the concerns around Safe Harbor cited by the ECJ in the Schrems decision, complies with EU laws, and respects EU citizens’ fundamental rights to privacy and data protection.

Challenges and Headwinds

Since the Privacy Shield program went live in July, over a thousand companies (1,234 as of December 10, 2016, according to the Privacy Shield List) have self-certified under the program. However, the Privacy Shield program, and EU-US data transfers in general, continue to face challenges and headwinds.

Legal challenges – déjà vu all over again? After the Privacy Shield program was announced in February 2016, some groups and individuals expressed concerns about the program. When Privacy Shield was approved in July 2016, Max Schrems went on record stating his belief that the Privacy Shield framework was fundamentally flawed and could not survive a legal challenge. Now that the first legal challenges against Privacy Shield have been filed, we will find out how prescient Mr. Schrems’ comments were. In September, the digital rights advocacy group Digital Rights Ireland filed an action in the EU courts arguing that the EU’s finding of adequacy for the Privacy Shield should be annulled on the basis that the Privacy Shield program’s privacy safeguards are not adequate. In November, a similar challenge was brought by La Quadrature du Net, a French privacy advocacy group. These challenges could leave the Privacy Shield program very short-lived. Additionally, the ECJ is considering another challenge against Facebook referred to it by the Irish Data Protection Commissioner, this time to standard contractual clauses. The challengers in that case argue that the same concerns behind the ECJ’s Safe Harbor decision should apply to standard contractual clauses. The forthcoming decision in this challenge has the potential to create a precedent that could bring down the Privacy Shield program as well.

Other public and private actions may erode Privacy Shield’s validity. On December 1, 2016, a change to Rule 41 of the Federal Rules of Criminal Procedure became effective. The change was intended to give investigators more power to obtain warrants against cyber criminals using botnets or otherwise masking their identity, such as through secure web browsers or virtual private networks. Under the amended rule, law enforcement seeking to use remote access to search media and obtain electronically stored information can obtain a warrant from a magistrate judge located in a district where “activities related to a crime may have occurred” if the actual location of the media or information has been “concealed through technological means.” Since the rule is not limited on its face to servers in the US, without further clarification of its scope it could be used by law enforcement to have a US magistrate judge issue a warrant to search and seize information from servers located in the EU. This global reach would likely be found in direct conflict with the concepts of privacy and data protection as fundamental human rights under the EU’s Charter of Fundamental Rights. Additionally, in early October, reports surfaced that Yahoo! had secretly scanned the email accounts of all of its users at the request of US government officials, which, if true, would likely be inconsistent with the terms of the Privacy Shield agreement. Opponents of Privacy Shield could use actions such as these as ammunition in their efforts to invalidate the program. In fact, there have already been calls for the European Commission and the EU’s Article 29 Working Party to investigate the Yahoo! scanning allegations, and according to a European Commission spokesperson statement on November 11, 2016, the EC has “contacted the U.S. authorities to ask for a number of clarifications.”

Can any EU-US framework be adequate? The legal challenges and public/private actions cited above all lead to one fundamental question that many parties involved in the Privacy Shield program have been hesitant to ask – is there a fundamental, irreconcilable conflict between (1) the United States’ approach to privacy and (2) the EU’s commitment to privacy and data protection as fundamental human rights? If so, the US’s sectoral approach to data privacy legislation, and the powers of US law enforcement to obtain information from private companies and servers, may mean that no entity-level mechanism to facilitate EU-US data transfers is “adequate” in the eyes of the EU – meaning that EU-US data transfers are approaching a dead end. While the US government has imposed restrictions on its surveillance activities in the post-Snowden world, it remains very unclear whether anything short of concrete legislation protecting the rights of EU citizens (which would run counter to US counter-terrorism activities), or a modification of the EU’s principles, would be sufficient. I suspect there may be a difference between the views of those in the EU seeking a pragmatic approach (those who believe that the importance of EU-US data transfers, including their economic and geopolitical benefits, necessitates some compromise) and those seeking an absolute approach (those who believe that the EU’s position that data protection is a fundamental human right must trump all other interests). The forthcoming decisions in the challenges to standard contractual clauses and the Privacy Shield program will likely help shed light on whether this fundamental conflict is fatal to any entity-level mechanism.

Compliance, not certification, with the Privacy Shield principles is what matters. A number of US companies have chosen to tout their Privacy Shield self-certification via blog posting or press release. While a press release on Privacy Shield certification can be useful to demonstrate a company’s global presence and commitment to data privacy, remember that it’s a self-certification process (although some companies are using third-party services such as TRUSTe to help them achieve compliance). A company’s certification of compliance with the Privacy Shield principles is less important than the processes and procedures it has put in place to manage its compliance. If you need to determine whether a company is self-certified under the Privacy Shield program, you can search the database of certified companies at http://www.privacyshield.gov/list, and check the company’s website privacy policy, which should contain disclosures affirming and relating to its commitment to the Privacy Shield principles. If you’re a company certified under the Privacy Shield, be prepared to answer questions from EU companies on how you comply with the Privacy Shield principles – you may be asked.

So, what does all this mean? At the moment, Privacy Shield may be a bit rickety, but unless your company can effectively use standard contractual clauses or binding corporate rules, short of direct consent it’s the only game in town for US companies which need to receive data from their EU clients, customers and business partners. Even SCCs may be a short-lived solution, meaning many companies may not want to invest the time, effort and expense required to adopt that entity-level approach. Given the current state of Privacy Shield and EU-US data transfers in general, US companies may want to consider the wisdom of the “herd on the African savanna” approach to compliance – the safest place to be in a herd on the African savanna is in the center. It’s almost always the ones on the outside which get picked off, not the ones in the center. Unless there is a compelling business reason to be on the outside of the herd (a desire to be viewed as a market leader, a willingness to risk doing nothing until clearer direction is available, etc.), the safest place from a compliance perspective is to stick with the pack. While that approach is not for everyone, many companies may feel that being in the center of the herd of US companies dealing with EU-US data transfers is the safest approach while the fate of the Privacy Shield, and EU-US data transfers in general, plays out.

The best place to stop a snowball from rolling the wrong way is the top of the hill.

When it comes to managing risk in business, there are two fundamental principles:

You can’t disarm all of the land mines. A risk is like a land mine – it will detonate sooner or later once the right factors occur. Part of risk management is having enough information to know (or make an educated guess about) which risk “land mines” are more likely to go off than others, so you can stack rank and disarm the land mines in the right order. That way, hopefully you’ll disarm each one in time, and if one does go off before you can disarm it, it will cause minimal damage.

You don’t have to stop every factor from occurring; you have to stop at least one factor from occurring. If a risk “land mine” detonates, a number of things all went wrong at the same time. Think of it as the lock on Pandora’s Box – for the lock to open (the land mine going off), the pins in the cylinder (the environmental factors) must align perfectly with the key (the catalyst). As long as one of the pins is misaligned, the lock won’t open. If you don’t have the resources or ability to ensure all pins are misaligned, try to ensure at least one pin is misaligned so the land mine can’t go off. (If more than one is misaligned, that’s even better.)
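As a rough sketch (the factor names below are illustrative, not from any real incident), the “lock and pins” idea can be expressed in code: the risk materializes only when every contributing factor is present, so blocking any single factor is enough to prevent it.

```python
def risk_detonates(factors):
    """The risk 'land mine' goes off only if ALL contributing factors
    are present -- i.e., every pin aligns with the key."""
    return all(factors.values())

# All pins aligned: the land mine detonates.
exposed = {
    "unpatched_server": True,
    "stolen_credentials": True,
    "no_monitoring": True,
}

# Misaligning a single pin -- here, adding monitoring -- stops it.
mitigated = dict(exposed, no_monitoring=False)
```

Used this way, the model also shows why fixing more than one factor is even better: it leaves a margin if one control fails.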

To manage a risk, a business must first mitigate and shift the risk to reduce the chance of the land mine detonating to the greatest extent possible, and then accept or reject the residual risk to the business. (For more on this, please see my earlier LinkedIn article on Revisiting Risk Management.)

When it comes to your relationships with your key vendors, suppliers and other partners/providers, risk management principles should be applied to existing partners/providers, prospective partners/providers, and “inherited” partners/providers (e.g., through acquisition) alike. There are a number of ways to mitigate and shift risk in these relationships:

Mitigating the Risks

Do due diligence on your partners and providers. Perform research to see if the partner/provider has had security or privacy problems in the past. If they are public, look at the risk factors in their securities filings. Look at the partner/provider’s privacy policy to see if they make any claims they likely cannot live up to, or are overly broad in what they can do with your company’s data. Watch out for unrealistic marketing statements regarding privacy, security or their ability to perform the obligations you are contracting for. Use RFPs to gather information on prospective partners/providers up front (and keep it in case you need to refer to it later on if something they told you in an RFP proves not to be true).

Don’t automatically disqualify companies that have had past problems. If an RFP reveals that a partner/provider has had a past issue, focus on what steps they have taken to remediate the issue and protect against a recurrence. The result may be that they have a more robust security and risk management program than their peers.

Ask them what they do. Consider adding privacy and security questions to your RFP to gather information on current practices and past problems/remediation efforts (and to make them put it in writing). Watch out for answers that are too generic or just point you to their privacy policy.

Set online alerts, such as Google Alerts, to stay up-to-date on the news relating to your prospective or current partner/provider during the course of your negotiations and relationship, and escalate any alerts appropriately. If the partner/provider is public, set an alert for any spikes (up or down) in stock price.

Plan for the inevitable. Inevitably, your business relationship will end at some point. It could end when you’re ready for and expecting it, but you can’t count on that. If your partner/provider is mission-critical, develop an “expected” and “unexpected” transition plan and confirm that the partner/provider can locate and provide you the data you need to execute on that plan. For example, ensure you have all information and data you may need if the partner/provider ceases operations (for example, routinely download reports and data sets from their portal, or set up an automated feed). Alternatively, consider ways to ensure that if a partner/provider creates and stores mission-critical information (e.g., order or personal information, critical reports or data, etc.), it’s mirrored securely to a location in your control on a regular basis so that if there’s a problem, you have a secure and current data set to work from. This may be required or important under your company’s business continuity plan, and your contractual commitments to your clients.

Know your alternatives. Keep abreast of alternative partners/providers, do initial vetting from a security perspective, and maintain relationships with them. If a problem occurs, the company may have to switch partners/providers quickly. If you have taken the time to cultivate a “rainy day” relationship, that partner/provider may be happy to go out of their way to help you onboard quickly should a problem with your existing partner/provider occur (in the hopes that your company may reward their help with a long-term relationship).

Know what you have to do to avoid a problem. Once negotiated, contracts often go in the drawer, and the parties just “go about their business.” Make sure you know what your and your partner/provider’s contractual obligations are, and follow them. If they have “outs” under the contract, ensure you know what you need to do in order to ensure they cannot exercise them. If terms of use or an Acceptable Use Policy (AUP) or other partner/provider policies apply, make sure the right groups at your company are familiar with your obligations, and ensure they are being checked regularly in case they are updated or changed. If possible, minimize the number of “outs” during the negotiation. For existing or inherited partners/providers, consider preparing a list of the provisions you want to try to remove from their agreements so you can try to address them when the opportunity arises in the future (e.g., in connection with a renewal negotiation).

Put contractual provisions in place. Sales and Procurement should partner with IT and Legal to ensure that the right risk mitigation provisions are included in partner/provider agreements on an as-needed basis. Consider adding a standard privacy and security addendum to your agreements, whether on their paper or yours. Common provisions to consider include a security safeguards requirement; an obligation to protect your network credentials in their possession; an obligation to provide security awareness training (including anti-phishing) to their employees (consider asking for the right to test their employees with manufactured phishing emails, or getting an obligation that they will do so); a requirement that partners/providers maintain industry standard certifications and attestations such as ISO 27001 certification, PCI certification, SOC 2 Type 2 reports, etc.; an obligation to encrypt sensitive personal information in their possession; obligations to carry insurance covering certain types of risks (ensure your company is named as an additional insured, and try to obtain a waiver of the right of subrogation); rights to perform penetration testing (or an obligation for them to do so); an obligation to comply with all applicable laws, rules and regulations; an obligation to complete an information security questionnaire and participate in an audit; language addressing what happens in the event of a security breach; and termination rights in the event the partner is not living up to their obligations. Not all of these provisions make sense for every partner/provider. Another approach to consider is to add appropriate provisions to a supplier/vendor code of conduct incorporated by reference into your partner/provider agreements (ensure conflicts are resolved in favor of the code of conduct).

Shifting the Risks

Use contractual indemnities. An indemnity is a contractual risk-shifting term through which one party agrees to bear the costs and expenses arising from, resulting from or related to certain claims or losses suffered by another party. Consider whether to include in your partner/provider agreement an indemnity obligation for breaches of representations/warranties/covenants, breach of material obligations, breach of confidentiality/security, etc. Consider whether to ask for a first party indemnity (essentially insurance, much harder to get) vs. a third party indemnity (insulation from third party lawsuits). Remember that an indemnity is only as good as the company standing behind it. Also, pay close attention to the limitation of liability and disclaimer of warranties/damages clauses in the agreement to ensure they are broad enough for your company.

Request a Parental Guaranty. If the contracting party isn’t fully capitalized, or is the subsidiary of a larger “deep pocketed” organization, consider requesting a performance and payment/indemnification guaranty to ensure you can pursue the parent if the subsidiary you are contracting with fails to comply with its contractual obligations.

Acquire insurance. Finally, consider whether your existing or other available insurance coverage would protect you against certain risks arising from your partner/provider relationships. Review the biggest risks faced by your company (including risks impacting your partner/provider agreements) on a regular basis to determine if changes to your insurance coverage profile are warranted; your coverage should evolve as your business evolves. Understand what exclusions apply to your insurance, and consider asking your broker to walk you through your coverage on an annual basis.

You’ve likely heard that Augmented Reality (AR) is the next technology that will transform our lives. You may not realize that AR has been here for years. You’ve seen it on NFL broadcasts when the first down line and down/yardage appear on the screen under players’ feet. You’ve seen it in the Haunted Mansion ride in Disneyland when ghosts seem to appear in the mirror riding with you in your cart. You’ve seen it in cars and fighter jets when speed and other data are superimposed onto the windshield through a heads-up display. You’re seeing it in the explosion of Pokémon Go around the world. AR will affect all sectors, much as the World Wide Web did in the mid-1990s. Any new technology such as AR brings with it questions of how it fits under the umbrella of existing legal and privacy laws, where it pushes the boundaries and requires adjustments to the size and shape of the legal and regulatory umbrella, and where it leads to a fundamental shift in certain areas of law. This article will define augmented reality and the augmented world, and analyze its impact on the legal and privacy landscape.

What is “augmented reality” and the “augmented world?”

One of the hallmarks of an emerging technology is that it is not easily defined. Similar to the “Internet of Things,” AR means different things to different people, can exist as a group of related technologies instead of a single technology, and is still developing. However, there are certain common elements among existing AR technologies from which a basic definition can be distilled.

I would define “augmented reality” as “a process, technology, or device that presents a user with real-world information, commonly but not limited to audiovisual imagery, augmented with additional contextual data elements layered on top of the real-world information, by (1) collecting real-world audiovisual imagery, properties, and other data; (2) processing the real-world data via remote servers to identify elements, such as real-world objects, to augment with supplemental contextual data; and (3) presenting in real time supplemental contextual data overlaid on the real-world data.” The real world as augmented through various AR systems and platforms can be referred to as the “augmented world.” AR and the augmented world differ from “virtual reality” (VR) systems and platforms, such as the Oculus Rift and Google Cardboard, in that VR replaces the user’s view of the real world with a wholly digitally-created virtual world, whereas AR augments the user’s view of the real world with additional digital data.
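As an illustrative sketch only (every name and data value below is hypothetical), the three-step process in this definition – collect, process, present – can be expressed as a minimal pipeline:

```python
# Hypothetical sketch of the collect -> process -> present loop in the
# AR definition above. Real systems would do this per video frame with
# computer vision models, often on remote servers.

# Step 2's lookup source: contextual data keyed by recognized object type.
CONTEXT_DB = {
    "painting": "Artist bio and audio commentary",
    "shelf_item": "Reviews, product data, comparative prices",
}

def collect():
    """Step 1: capture real-world imagery/properties (stubbed here)."""
    return {"objects": ["painting", "coffee_cup"]}

def identify(frame):
    """Step 2: process the frame to find elements worth augmenting."""
    return [obj for obj in frame["objects"] if obj in CONTEXT_DB]

def present(frame):
    """Step 3: overlay supplemental contextual data on the real-world view."""
    return {obj: CONTEXT_DB[obj] for obj in identify(frame)}
```

The sketch also makes the AR/VR distinction concrete: the real-world frame is kept and annotated, not replaced by a wholly digital scene.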

“Passive” AR (what I call “first-generation AR”) is a fixed system — you receive augmented information but do not do so interactively, such as going through the Haunted Mansion ride or watching your television set. The next generation of AR is “active,” meaning that AR will be delivered in a changing environment, and the augmented world will be viewed through a device you carry or wear. Google Glass and the forthcoming Microsoft HoloLens are examples of “active AR” systems with dedicated hardware; when worn, the world is augmented with digital data superimposed on the real-time view of the world. However, AR has also found ways to use existing hardware — your smartphone. HP’s Aurasma platform is an early example of an active AR system that uses your smartphone’s camera and screen to superimpose digital content on the real world. What AR needed to go fully mainstream was a killer app that made AR appeal to the masses, and it now has one — Pokémon Go. Within days of its launch in early July, TechCrunch reported that Pokémon Go had an average daily user base of over 20 million users. Some declared it the biggest “stealth health” app of all time, as it was getting users out and walking.

Active AR has the capacity to change how people interact with the world, and with each other. It is an immersive and engaging user experience. It has the capacity to change the worlds of shopping, education and training, law enforcement, maintenance, healthcare, and gaming, among others. Consider an AR system that shows reviews, product data, and comparative prices while looking at a shelf display; identifies an object or person approaching you and makes it glow, flash, or otherwise stand out to give you more time to avoid a collision; gives you information on an artist, or the ability to hear or see commentary, while looking at a painting or sculpture; identifies to a police officer in real time whether a weapon brandished by a suspect is real or fake; or shows you in real time how to repair a household item (or how to administer emergency aid) through images placed on that item or on a stricken individual. For some, the augmented world will be life-altering, such as a headset as assistive technology which reads road signs aloud to a blind person or announces that a vehicle is coming (and how far away it is) when the user looks in the vehicle’s direction. For others, the ability to collect, process and augment real-world data in real time could be viewed as a further invasion of privacy, or worse, technology that could be used for illegal or immoral purposes.

As with any new technology, there will be challenges from a legal and regulatory perspective. A well-known example of this is the Internet when the World Wide Web became mainstream in the mid-1990s. In some cases, existing laws were interpreted to apply to the online world, such as the application of libel and slander to online statements, the application of intellectual property laws to file sharing over peer-to-peer networks, and the application of contract law to online terms of use. In others, new laws such as the Digital Millennium Copyright Act were enacted to address shortcomings of the existing legal and regulatory landscape with respect to the online world. In some instances, the new technology led to a fundamental shift in a particular area of law, such as how privacy works in an online world and how to address online identity theft and breaches of personal information. AR’s collection of data, and presentation of augmented data in real time, creates similar challenges that will need to be addressed. Here are some of the legal and privacy challenges raised by AR.

Rethinking a “reasonable expectation of privacy.” A core privacy principle under US law is that persons have a reasonable expectation of privacy, i.e., a person can be held liable for unreasonably intruding on another’s interest in keeping his/her personal affairs private. However, what is a “reasonable expectation of privacy” in a GoPro world? CCTV/surveillance cameras, wearable cameras, and smart devices already collect more information about people than ever before. AR technology will continue this trend. As more and more information is collected, what keeping “personal affairs private” looks like will continue to evolve. If you know someone is wearing an AR device, and still do or say something you intend to keep private, do you still have a reasonable expectation of privacy?

Existing Privacy Principles. Principles of notice, choice, and “privacy by design” apply to AR systems. Providers of AR systems must apply the same privacy principles to AR as they do to the collection of information through any other method. Users should be given notice of what information will be collected through the AR system, how long it will be kept, and how it will be used. Providers should collect only information needed for the business purpose, store and dispose of it securely, and keep it only as long as needed.

AR systems add an additional level of complexity — they are collecting information not just about the user, but also about third parties. Unlike a cellphone camera, where the act of collecting information from third parties is initiated by the user, an AR system may collect information about third parties as part of its fundamental design. Privacy options for third parties should be an important consideration in, and element of, any AR system. For example, an AR system provider could ensure users have the ability to toggle the blocking of third party personal data from being collected or augmented, so personal information is only augmented when the user wants it to be. AR system providers may also consider an indicator on the outside of the device, such as an LED, to let third parties know that the AR system is actively collecting information.
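The two safeguards suggested above – a user toggle that blocks third-party personal data, and an outward-facing recording indicator – could be sketched as follows (a hypothetical illustration; no real AR SDK is assumed):

```python
# Hypothetical sketch of third-party privacy controls for an AR device:
# a privacy-protective default that strips bystander data, plus an LED
# indicator so bystanders know when collection is active.

class ARPrivacySettings:
    def __init__(self):
        self.augment_third_parties = False  # blocked by default; user may opt in
        self.recording_led_on = False

    def start_collection(self):
        # Light the outward-facing LED whenever collection is active,
        # so third parties are on notice.
        self.recording_led_on = True

    def filter_detections(self, detections):
        """Drop third-party personal data unless the user has opted in."""
        if self.augment_third_parties:
            return list(detections)
        return [d for d in detections if d.get("type") != "person"]
```

A privacy-by-design default (off until toggled on) keeps bystander data out of the augmented view unless the user affirmatively chooses otherwise.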

Additionally, AR may create interesting issues from a free speech and recording of communications perspective. Some, but not all, court rulings have held that the freedom of speech guaranteed by the First Amendment extends to making recordings of matters of public interest. An AR system that is always collecting data will push the boundaries of this doctrine. Even if something is not in the public interest, many states require the consent of both parties to record a conversation between them. An AR system which persistently collects data, including conversations, may run afoul of these laws.

Children’s Privacy. It is worth a special note that AR creates an especially difficult challenge for children’s privacy, especially children under 13. The Children’s Online Privacy Protection Act (“COPPA”) requires operators of online services, including mobile apps, to obtain verifiable parental consent before collecting any personal information from children under 13. “Personal information” includes photos, videos, and audio of a child’s image or voice. As AR systems collect and process data in real time, the passive collection of a child’s image or voice (versus collection of children’s personal information provided to a company through an interface such as a web browser) is problematic under COPPA. AR operators will need to determine how to ensure they are not collecting personal information from children under 13. I expect the FTC will amend the COPPA FAQ to clarify its position on the intersection of AR and children’s privacy.

Intellectual Property. Aside from the inevitable patent wars that will occur over the early inventors of AR technologies, and patent holders who believe their patent claims cover certain aspects of AR technologies, AR will create some potentially interesting issues under intellectual property law. For example, an AR system that records (and stores) everything it sees will invariably capture some things that are protected by copyright or other IP laws. Will “fair use” be expanded in the augmented world, e.g., where an album cover is displayed to a user when a song from that album is heard? Further, adding content to a copyrighted work in the augmented world may constitute a prohibited derivative work. From a trademark perspective, augmenting a common-law or registered trademark with additional data, or using a competitor’s name or logo to trigger an ad about your product overlaid on the competitor’s name or logo, could create issues under existing trademark law.

Discrimination. AR systems make it easy to supplement real-world information by providing additional detail on a person, place or thing in real time. This supplemental data could intentionally or inadvertently be used to make real-time discriminatory decisions, e.g., using facial or name recognition to provide supplemental data about a person’s arrest history, status in a protected class, or other restricted information which is used in a hiring or rental decision. An AR system that may be used in a situation where data must be excluded from the decision-making process must include the ability to automatically exclude groups of data from the user’s augmented world.

Digital Marketing and Advertising. The world of online digital marketing and advertising will expand to include digital marketing and advertising in the augmented world. Imagine a world where anything — and I mean anything — can be turned into a billboard or advertisement in real time. Contextual ads in the augmented world can be superimposed anytime a user sees a keyword. For example, if you see a house, an ad for a brand of paint might appear because the paint manufacturer has bought contextual augmented ads triggered whenever a user sees a house through the augmented world.

Existing laws will need to be applied to digital marketing and advertising in the augmented world. For example, when a marketing disclaimer appears in the online world, the user’s attention is on the ad. Will the disclaimer have the same effect in an augmented environment, or will it need to be presented in a way that calls attention to it? Could this have the unintended consequence of shifting the user’s attention away from something they are doing, such as walking, thereby increasing the risk of harm? There are also some interesting theoretical advertising applications of AR in a negative context. For example, “negative advertising” could be used to blur product or brand names and/or to make others more prominent in the augmented world.

The Right of Publicity. The right of publicity — a person’s right to control the commercial use of his or her name, image, and likeness — is also likely to be challenged by digital marketing in the augmented world. Instead of actively using a person’s likeness to promote a product or service, a product or service could appear as augmented data next to a person’s name or likeness, improperly (and perhaps inadvertently) implying an endorsement or association. State laws governing the right of publicity will need to be reinterpreted when applied to the augmented world.

Negligence and Torts. AR has the capacity to further exacerbate the problem of “distracted everything,” i.e., paying more attention to your AR device than your surroundings, as some users of Pokémon Go have discovered. Since AR augments the real world in real time, the additional information may distract a user, or, if the augmented data is erroneous, may lead a user to harm him/herself or others. Many have heard the stories of a person dutifully following their GPS navigation system into a lake. Imagine an AR system identifying a mushroom as safe to eat when in fact it is highly poisonous. Just as distracted driving and distracted texting can be used as evidence of negligence, a distracted AR user can find him/herself facing a negligence claim for causing third-party harm. Similarly, many tort claims that can arise through actions in the real world or online world, such as libel and slander, can occur in the augmented world. Additionally, if an AR system augments the real world in a way that makes someone think they are in danger, inflicts emotional distress, or causes something to become dangerous, the AR user, or the system provider, could be legally responsible.

Contract liability. We will undoubtedly see providers of AR systems and platforms sued for damages suffered by their users. AR providers have shifted, and will continue to shift, liability to the user through contract terms. For example, Niantic, the company behind Pokémon Go, states in their Terms of Use that you must “be aware of your surroundings and play safely. You agree that your use of the App and play of the game is at your own risk, and it is your responsibility to maintain such health, liability, hazard, personal injury, medical, life, and other insurance policies as you deem reasonably necessary for any injuries that you may incur while using the Services.” AR providers’ success at shifting liability will likely turn primarily on tried-and-tested principles such as whether an enforceable contract exists.

None of the above challenges is likely to prove insurmountable, and they are not expected to slow the significant growth of AR. What will be interesting to watch is how lawmakers choose to respond to AR, and how early hiccups are seized on by politicians and reported in the press. Consider automobile autopilot technology. The recent crash of a Tesla in Autopilot mode is providing bad press for Tesla, and fodder for those who believe the technology is dangerous and must be curtailed. Every new technology brings both benefits and potential risks. If the benefits outweigh the risks on the whole, the public interest is not served when the legal, regulatory and privacy pendulum swings too far in response. Creating a legal, regulatory and privacy landscape that fosters the growth of AR, while appropriately addressing the risks AR creates and exacerbates, is critical.

To many, “personally identifiable information” (also “PII” or “personal information”) means information that can be used to identify an individual, such as a person’s name, address, email address, social security number/driver’s license number, etc. However, in the US, there is no uniform definition of personal information. This is because the US takes a “sectoral” approach to data privacy. In the US, data privacy is governed by laws, rules and regulations specific to market sectors such as banking, healthcare, payment processing, and the like, as well as state laws such as breach notification statutes. Companies, such as Google, often include their own definition of personal information in their privacy policies. Even though there is no uniform definition, however, it’s clear that more and more information is falling under the PII/personal information umbrella.

One category of data with potentially significant implications for US businesses, if classified as PII, is Internet Protocol (IP) and Media Access Control (MAC) addresses.

An IP address is a unique numerical or hexadecimal identifier used by computing devices such as computers, smartphones and tablets to identify themselves on a local network or the Internet, and to communicate with other devices. IP addresses can be dynamic (a temporary IP address is assigned each time a device connects to a network), or static (a permanent IP address is assigned to a network device which does not change if it disconnects and reconnects). There are two types of IP addresses – the original IPv4 (e.g., “210.43.92.4”), and the newer IPv6 (e.g., “2001:0db8:85a3:0000:0000:8a2e:0370:7334”).
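The two address formats described above can be seen with Python’s standard-library `ipaddress` module, which parses both versions and distinguishes them automatically (a minimal sketch using the example addresses from the text):

```python
import ipaddress

# Parse the two example addresses from the text; ip_address() returns
# an IPv4Address or IPv6Address object depending on the format.
v4 = ipaddress.ip_address("210.43.92.4")
v6 = ipaddress.ip_address("2001:0db8:85a3:0000:0000:8a2e:0370:7334")

print(v4.version)     # 4
print(v6.version)     # 6
print(v6.compressed)  # the short canonical IPv6 form
```

Note that the long IPv6 form and its compressed form (with runs of zeros collapsed to “::”) refer to the same address.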

A MAC address is a unique identifier used to identify a networkable device, such as a computer/phone/tablet/smartwatch, as well as other connected devices such as smart home technologies, printers, TVs, game consoles, etc. A MAC address is a 12-character hexadecimal (base 16) identifier, e.g., “30:0C:AA:2D:FB:22”. The first half of the address identifies the device manufacturer, and the second half is a unique identifier for a specific device. If a device needs to talk to other devices, it likely has a MAC address.
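The manufacturer/device split described above is easy to see by splitting the example MAC address into its two halves (a simple illustration, not production parsing code):

```python
# Split the example MAC address from the text into its two halves:
# the first three bytes (the OUI) identify the manufacturer, and the
# last three identify the specific device.
mac = "30:0C:AA:2D:FB:22"
octets = mac.split(":")
oui = ":".join(octets[:3])        # manufacturer prefix
device_id = ":".join(octets[3:])  # device-specific suffix
print(oui)        # 30:0C:AA
print(device_id)  # 2D:FB:22
```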

Why do devices need both? There are incredibly technical reasons for this, but at a very high level, MAC addresses are used to identify devices on a local wired or wireless network (e.g., your home network) to transmit data packets between devices on that local network, and IP addresses are used to identify devices on the worldwide Internet to transmit data packets between devices connected directly to the Internet. Your router has an IP address assigned by your ISP, as well as a MAC address which identifies it to other devices on the local network. Your router assigns a local IP address (e.g., 192.168.1.2-192.168.1.50) to connected devices by MAC address. Network traffic comes to your router via IP address, and the router determines what MAC device on the network to which to route the traffic.

Think of a letter mailed to your attention at your corporate office address of 1234 Anyplace Street, Suite 1500, Anytown, US 12345. The mailing address will tell the mail carrier what address to deliver it to, but the carrier won’t deliver it right to you personally. Suppose you are in Cube 324. Your mail room will look up your cube number, and deliver the letter to you. The letter is like an online data packet, the mailing address is like an IP address, the cube number is like a MAC address, and the mail room is like a router — the router takes the inbound packet delivered by IP address and uses the local device’s MAC address to route the packet to the right device on the network.
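The mail-room analogy can be sketched as a toy lookup table: the router keeps a mapping of local IP addresses to MAC addresses (like the mail room’s list of cube numbers) and forwards each inbound packet to the device with the matching MAC address. Real routers maintain this mapping via ARP; the table and the second MAC address below are purely hypothetical.

```python
# Toy "mail room" table: local IP address -> MAC address of the
# device on the local network (illustrative values only).
arp_table = {
    "192.168.1.2": "30:0C:AA:2D:FB:22",
    "192.168.1.3": "A4:5E:60:11:09:C7",
}

def route_packet(dest_ip: str) -> str:
    """Return the MAC address of the local device a packet is for."""
    mac = arp_table.get(dest_ip)
    if mac is None:
        raise KeyError(f"no local device at {dest_ip}")
    return mac

print(route_packet("192.168.1.2"))  # 30:0C:AA:2D:FB:22
```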

Canada’s approach. In Canada, the Personal Information Protection and Electronic Documents Act (PIPEDA) defines “personal information” as “information about an identifiable individual.” The Office of the Privacy Commissioner of Canada (OPCC) has released an interpretation making clear that this definition must be given a “broad and expansive interpretation,” and that it includes information that “relates to or concerns” a data subject. With respect to IP addresses, according to the OPCC, an IP address is personal information if it can be associated with an identifiable individual. (Note that in Canada, business contact information is not considered personal information, which implies that an IP or MAC address of a work computing device associated with an employee’s work contact information is not personal information.)

The European approach. In Europe, the current Data Protection Directive and the proposed Data Protection Regulation both define personal data as “any information relating to an identified or identifiable natural person.” Individual EU member states differ on whether an IP address should be considered personal data. The European Court of Justice (ECJ) has held that IP addresses are protected personal information “because they allow … users to be precisely identified,” and is considering whether to adopt an even stronger position that dynamic IP addresses collected by a website operator are personal information even though the Internet service provider, and not the website operator, has the data needed to identify the data subject. The same rules should apply to MAC addresses. The new Data Protection Regulation, which will override member state implementations of the Directive, states in its findings that “[n]atural persons may be associated with online identifiers provided by their devices, applications, tools and protocols, such as internet protocol addresses, cookie identifiers or other identifiers such as radio frequency identification tags. This may leave traces which, in particular when combined with unique identifiers and other information received by the servers, may be used to create profiles of the natural persons and identify them.”

In the US, the sectoral and state-by-state approach to data privacy does not paint a clear picture as to whether an IP address or MAC address should be considered personal information.

Specific laws. The one US statute that clearly states that IP and MAC addresses are personal information is the Children’s Online Privacy Protection Act (COPPA). In 2013, the FTC revised the COPPA Rule, which defines “personal information” as “individually identifiable information about an individual collected online,” to specifically include IP addresses, MAC addresses, and other unique device identifiers. The Health Insurance Portability and Accountability Act (HIPAA) includes device identifiers (such as MAC addresses) and IP addresses as “identifiers” that must be removed in order to de-identify protected health information. State security breach notification laws define personal information, but those laws do not include IP addresses, MAC addresses, or other device identifiers as PII.

The FTC’s view. In April, Jessica Rich, the Director of the FTC’s Bureau of Consumer Protection, wrote on the FTC’s business blog about cross-device tracking. In her remarks, she restated the FTC’s long-held position that data is personally identifiable, “and thus warranting privacy protections, when it can be reasonably linked to a particular person, computer, or device. In many cases, persistent identifiers such as device identifiers, MAC addresses, static IP addresses, or cookies meet this test.” She then specifically cited the FTC’s 2013 amendments to the COPPA Rule as an example of this in practice. Director Rich’s comments signal that the FTC views IP and MAC addresses, and other unique device identifiers, in a similar manner as the Office of the Privacy Commissioner of Canada — if it can be associated with an identifiable individual, it should be considered personal information.

Google’s View. It is also worth looking at Google’s definition from its privacy policy, given Google’s prominence as a collector and user of consumer personal information. Google defines personal information to include both information that personally identifies a person, as well as “other data which can be reasonably linked to such information by Google, such as information we associate with your Google account.” This is essentially the FTC’s view, with a reasonableness standard.

Given all this, what should US businesses do?

Consider using a term to define IP addresses, MAC addresses, and other user device identifiers which identify a thing, not a person, but can be linked to an individual depending on what information is collected or obtained about that individual. I call this information linkable information. If linkable information is, or reasonably can be, associated or linked with an identifiable individual in your records, it becomes personal information.

Think of your driver’s license and your license plate as things. Your driver’s license has your name, photo, and other information, so it identifies you. Therefore, a copy of your license would be personal information. On the other hand, your license plate by itself identifies a thing (your vehicle), and therefore by itself is linkable information, but not personal information. However, if your license plate is contained in a list of names and associated license plates maintained by a company, the license plate is associated with you, and therefore the company should handle it as personal information. Similarly, your phone number identifies a thing (your phone, not you, as you can let anyone use your phone) and therefore is linkable information; if your number is linked with an identifiable individual (e.g., the number is associated with a recording of an individual’s voice on a phone call), the phone number becomes personal information.

An IP address in a server log, by itself, is linkable information not linked or associated with an individual, and therefore not personal information. However, an IP address collected as part of an electronic signature record, where it is stored with a person’s name and a time/date stamp of acceptance, would be personal information.
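The distinction between the server-log and e-signature examples can be sketched as a simple classification rule: an identifier is merely linkable until your records associate it with an identifiable individual. This is an illustration of the “linkable information” framing above, with hypothetical record fields, not a legal test:

```python
# Illustrative sketch: a record containing a device identifier is
# treated as personal information only when it also contains fields
# that identify an individual (field names are hypothetical).
def classify(record: dict) -> str:
    identifying_fields = {"name", "email", "ssn"}
    if identifying_fields & record.keys():
        return "personal information"
    return "linkable information"

server_log_entry = {"ip": "210.43.92.4",
                    "timestamp": "2016-12-01T10:00:00Z"}
esignature_record = {"ip": "210.43.92.4", "name": "Jane Doe",
                     "timestamp": "2016-12-01T10:00:00Z"}

print(classify(server_log_entry))   # linkable information
print(classify(esignature_record))  # personal information
```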

If your company’s privacy policy defines personal information to include device identifiers such as IP addresses and MAC addresses, or defines when device identifiers would be considered personal information, ensure you are doing what your privacy policy says you will do. Failing to comply with a stated privacy policy can give rise to an FTC investigation and/or complaint under §5 of the FTC Act, as well as state AG investigations/actions and private litigation.

If your company collects information from European consumers, given the extra-territorial reach of the upcoming Regulation, carefully watch how IP and MAC addresses fall within the EU’s definition of personal data, and determine whether you need to comply with Europe’s approach.

If you collect IP address information from a child under 13 through a website or app governed by COPPA, by law it’s personal information.

Talk to your IT group about whether you collect any device information, such as IP or MAC addresses, that could be linkable information, and analyze whether that data is linked or associated with personal information in your systems.

Almost every business has an online presence of some form. Many have a website which serves as anything from an online company brochure to a fully-featured online store or customer/vendor/user portal. Some have apps available through Google Play Store, the Apple App Store, or other app stores. A number of companies spend significant sums on their websites and apps to design robust features and content delivered through a compelling user experience. But if there’s one place website and app operators miss the mark, it’s ensuring the right legal disclosures are in place, and that the ones that are in place are saying the right things.

When most people think of a website or app disclosure, they think of a privacy policy and terms of use. These are definitely important. However, there are a number of other disclosures required or recommended under federal and state law that companies should consider to manage risk and avoid potentially distracting and costly litigation. At the same time, saying too much in disclosures such as your privacy policy can expose your company to unnecessary risk.

There are four core rules that should apply to all website disclosures:

Write them in plain English.

Avoid undefined technical jargon and marketing bluster.

Make them easy to understand and use.

Make them 100% accurate and truthful.

Consider having your company’s User Experience group review your disclosures and policies to ensure they are as easy to read and navigate as possible. Consider using design elements such as progressive reduction and progressive disclosure (you can see my earlier blog post on this topic by clicking here.) The goal is to ensure consumers easily understand your disclosures. If you ever have an issue with a term or provision in your disclosures, being able to argue that the content and design were optimized for easy reading and navigation can pay dividends.

Here are some website and app disclosures to consider:

The Privacy Policy. States such as California have laws requiring companies to have an online privacy policy. Since almost every website is accessed by users in California, it’s safe to say you are legally required by state law to have a privacy policy. Companies in certain sectors, such as healthcare (HIPAA) and financial services (Gramm-Leach-Bliley), have specific requirements for their privacy policies. A privacy policy is also required by law in some states on an information-category basis, such as Connecticut’s requirement that anyone collecting Social Security Numbers have a publicly displayed privacy policy with certain required disclosures. Certain laws also mandate that you cover certain topics in your privacy policy (e.g., California’s requirement to disclose how you handle “do-not-track” headers, and California’s requirement to provide information on how minors who are your registered website users can request that you remove their personal information). Don’t forget that your privacy policy needs to apply to, and be displayed on, your company’s apps as well.

A company’s privacy policy obligations can be summarized simply: say what you do, and do what you say. “Say what you do” means ensure your privacy policy fully describes how you collect, use, and share information (both personally identifiable information, such as your name and address, and non-personally identifiable information such as behavioral data) collected from or about your customers. “Do what you say” means ensuring your day-to-day business activities with respect to information collected from consumers fall within the boundaries of what you say you do in your privacy policy. Two important rules to follow are: (1) if you want to change how you collect, use or share information from consumers, make sure your privacy policy allows it first, and give prior notice to website users that your privacy policy is changing; and (2) if you want to change how you use information you’ve already collected from consumers, you’ll need permission from the consumers first. Always include an effective date on your privacy policy (again, a state law requirement).

Look for a more detailed post on privacy policies coming soon.

Terms of Use/Terms of Service. Your terms of use (sometimes also referred to as “terms of service”) should describe the rights and obligations applicable to both your company’s website/app/online service users and to your company itself with respect to the operation and/or use of an online website, app, and/or online service. It should cover topics such as ownership of the website and company-provided content on it (including your copyrights, trademarks and licensed trademarks), and associated restrictions (e.g., no screen scraping website content); disclaimers of third party content, such as third party ad networks on your site, and language to prevent use of your company’s trademarks or other content to create the appearance of sponsorship by or affiliation with a third party; whether or not you collect information from children under 13 (if you do, ensure you are complying with the Children’s Online Privacy Protection Act or “COPPA”); an obligation to report lost or stolen passwords and change passwords regularly; what you can do with user-generated content uploaded or shared to the website (e.g., a broad right and license to use it), and related terms (e.g., it’s provided royalty-free and with no license costs, that it doesn’t infringe anyone else’s rights, etc.); a feedback provision if users may provide feedback or comments; links to third party content; and important legal terms such as jurisdiction, choice of law, indemnification, and the like. Many website operators include an acceptable use policy as part of their Terms of Use/Terms of Service; some have a separate policy on their website.

DMCA Notice. If your website collects, displays, or otherwise uses or shares user-generated content, consider a copyright notice (also called a “DMCA notice”). The Digital Millennium Copyright Act creates a “safe harbor” from copyright infringement for website operators who honor takedown requests and display on their website information for their designated “copyright agent” to which takedown requests can be sent. There’s more to the statute than that, so if you need a DMCA notice please review one of the multitude of articles out there on crafting a proper DMCA notice. Don’t forget that you need to register your designated copyright agent with the US Copyright Office by filing a “Designation of Copyright Agent” form.

California “Shine the Light” Notice. In 2005, California enacted the “Shine the Light” law as part of its Consumer Records Act. The law requires businesses to provide disclosures to California consumers of the types of customer information they shared with third parties for the third parties’ direct marketing purposes during the immediately preceding calendar year. If your business shares collected personal information with third parties for the third parties’ direct marketing purposes and does business in California, with a few exceptions this law applies to you. Businesses are required to let customers know how to submit requests for this information. While there are a few options, the simplest for most businesses is to include a link on the company’s homepage titled “Your California Privacy Rights” or “Your Privacy Rights” leading to a page describing customers’ rights under the “Shine the Light” law and the email/physical address to which requests should be sent. There has been an uptick in class action litigation recently against companies which do not have a “Shine the Light” disclosure on their website.

Terms of Sale. If you sell products through your website, consider using a Terms of Sale to govern the sales transaction. Terms of Sale typically include provisions such as placing an order; when it is accepted by the company; delivery and fulfillment terms; the return/cancellation policy; information on prices (e.g., subject to change without notice, not required to honor incorrect pricing); license rights to software; etc.

Warranties. One policy you may want to consider adding to your website is product warranties. Last year Congress passed, and President Obama signed, the E-Warranty Act of 2015. This law amended the 1975 Magnuson-Moss Warranty Act to allow companies to put their warranties online instead of including them on or in product packaging. The product documentation or packaging would need to include a link to the online warranty, instead of the warranty terms themselves. Companies that sell products that come with warranties should consider reviewing and taking advantage of the E-Warranty Act.

Supply Chains Notice. In 2010, California enacted the Transparency in Supply Chains Act. The law requires large retailers doing business in California (those with over $100 million in annual revenue that identify themselves as retail sellers or manufacturers on their California tax returns) to post disclosures on their websites on their “efforts to eradicate slavery and human trafficking from their [direct] supply chain for tangible goods offered for sale” in five specific areas: verification, audits, certification, internal accountability, and training. It requires the disclosures be accessible through the company’s homepage via a “conspicuous and easily understood” link.

Be careful your disclosures aren’t saying too much. While having the right disclosures for your websites and apps is important, avoid saying too much. Remember, when it comes to disclosures, what you say can hurt you. Website disclosures are not the place for marketing puffery. If you make a statement such as “100% guaranteed,” “we encrypt all data,” or “we use best-in-the-industry [whatever],” and it turns out to be false or inaccurate, you can expect state AGs and the FTC (and class action counsel) may come knocking. Generally, one of the roles of the Federal Trade Commission is to ensure that companies are not engaging in unfair or deceptive trade practices. This extends to ensuring that companies are making accurate and truthful disclosures on their websites. Some states, such as Pennsylvania, have expressly included false and misleading privacy policy statements as a deceptive or fraudulent business practice.

At the extreme end of this, consider what has been happening in New Jersey. Class action counsel have been using an extremely broad interpretation of NJ’s largely-ignored-until-recently Truth in Consumer Contract, Warranty and Notice Act to go after companies operating business-to-consumer (B2C) websites. The law prohibits sellers from providing notices, terms, or contracts with provisions that violate “any clearly established legal right of a consumer or responsibility of a seller” under federal or state law (whether or not the consumer is happy with the purchase). Class action counsel are bringing suit under this statute stating that just displaying a website notice with a general limitation of liability, broad disclaimers of warranty, statements that certain terms such as warranty disclaimers may not apply to particular consumers without specifying whether NJ consumers are affected, or other limitations on a consumer’s rights is a violation of the statute. Most of these cases are settling before trial, but like other nuisance lawsuits they can end up costing your business considerable time and lost productivity if you end up facing one.

Most companies place their website disclosures at the bottom of the page in a footer. Do not bury them or make them hard to find. Your policies should be accessible through no more than 2-3 clicks via a logical navigation path. While putting your disclosures in the footer makes sense and is very common, consumers may argue that they simply never saw the disclosures because they never scrolled down to the bottom. Consider also making website disclosures “contextual,” i.e., place policy and disclosure links in close proximity to the related usage. For example, on pages where you are actively collecting information, consider putting a link to the privacy policy right next to the “submit” button, or before a consumer places an order on your e-commerce website, add language verifying they have read and agree to your terms of sale and privacy policy. Consider providing a welcome message, with notice of your privacy policy and terms of use, to consumers visiting your website as a disappearing pop-up, e.g., one that appears for 3-4 seconds at the top of the webpage then fades out, similar to “cookie disclosures” on many EU-based websites.

Finally, consider working with IT to create simple shortcuts for your most common policies (e.g., “privacy.company.com” or “www.company.com/privacy” for your privacy policy) so you have a short and simple URL you can use where you need to direct consumers to your online disclosures.

The Internet is a vast repository of knowledge and information. Fortunately, there are a number of search engines, websites and tools (including Google, Bing, Yahoo, Ask.com, Wikipedia, and GitHub) to help navigate the waters. When you are looking for something personally or professionally — a picture, audio clip, or video clip for a presentation, a font for a poster, a great article on a topic you want to share with your friends, samples of others’ software code to get past a development issue, song lyrics, etc. — in many cases what you are looking for is just a few clicks and/or searches away. But once you’ve found it online, can you use it? To answer that question, let’s start by debunking a couple of myths.

Myth #1 – If it’s on the Internet, it’s free for anyone to use. Many people think “public domain” and “free to use” are synonymous with “on the Internet.” They’re not. The Internet is an incredible tool for communicating and sharing information. However, the Internet does not negate or trump intellectual property ownership rights. Just because someone posted content online does not automatically mean it’s actually free for anyone to copy and use for any purpose. In almost all cases, posting something online does not cancel any intellectual property rights held by the owner of the content, including copyright and rights of publicity. The Recording Industry Association of America’s war on consumer music file sharing through peer-to-peer file sharing networks (remember the original Napster?) is a great reminder that people who copy and reuse the copyrighted works of others may be held liable for doing so.

Now that we’ve debunked the myths, let’s talk about how and when you can use online content. Online content can be grouped into 3 categories:

Copyrighted or Otherwise Protected Content. The first category is content that is clearly covered by copyright or other intellectual property protections like rights of publicity. This category includes things like content with a copyright notice on it; content used under a license (“used with permission”); content on a site with terms of use or another disclaimer restricting the ability to copy or reuse content without permission; pictures of celebrities; famous cartoon characters; and so on. If you want to use copyrighted or otherwise protected content, you need the permission of the copyright owner, i.e., a license to use it.

“Fair Use” exception. There is one important exception — the “fair use” exception — that provides a limited right to use copyrighted content for purposes such as commentary, criticism, scholarly research, news reporting, public classroom education, parody, or other “transformative” purposes (new meaning, added value, or a different manner of expression) without the copyright owner’s permission. The exception recognizes that in some cases, the benefit to society of allowing use of a work outweighs the copyright holder’s rights to control use of the work. If it’s fair use, it’s not copyright infringement. However, there’s no definitive rule as to what is and is not fair use. Instead, courts look at four factors to determine fair use – (1) the purpose and character of the use (i.e., is it transformative, is it commercial or non-commercial, etc.), (2) the nature of the copyrighted work, (3) the amount and substantiality of the copyrighted work that is used (i.e., is it more than a “de minimis” portion of the work), and (4) the effect of the use on the potential market for, and value of, the copyrighted work.

Open Source and “Public Licensed” Content. The second category of content is “open source” and “public licensed” content. “Open source” refers primarily to software — it’s computer software distributed under a license whose source code is available for modification or enhancement by anyone, as long as the requirements of the license are followed. For more on open-source software, please see my earlier post on the topic. You can use open source software you find online as long as you comply with the terms of the open source license.

“Public licensed” content is content distributed under a similar license. It grants anyone certain rights to use the content in a way that would normally be prohibited under copyright law, as long as the use is within the boundaries of the license. These public licenses preserve the owner’s copyright in the content, but cede certain rights to anyone who wants to use the content. The most common public licenses are the six Creative Commons licenses. Creative Commons licenses give anyone the right to use content for noncommercial purposes with attribution to the content owner and, depending on the type of license, may additionally grant the right to use the content commercially, create derivative works from the content, and/or share the content with others under the same license (“share-alike”). You can use public licensed content as long as you comply with the terms of the public license.

Everything Else – “Murky Content”. The third category is everything else — anything not clearly subject to copyright or other IP protection, and not clearly open source or public licensed (let’s call it “murky content”). This is the content that causes the most trouble, because Internet users may have no way to know whether something they find online that looks like it’s free to use is actually subject to copyright or other intellectual property protection, or is governed by a public or open source license. In this case, you need to make a judgment/risk call whether to use murky content. Generally, using murky content for commercial purposes carries the most risk, and using it non-commercially carries the least. Most people don’t create and freely share content for the fun of it — they derive value from it. If murky content looks like something someone would want to monetize, it’s likely protected content requiring some form of license to use. There’s no sure way to gauge the risk of using murky content — the only way you’ll ever know for sure is if you get in trouble for using it, and by then it’s too late.

Some argue there is an “implied license” to use online content which protects users of online content. They argue that if a content owner posts content on their website or makes it available through Google, promotes links to the content through methods such as social media buttons, and does not restrict the ability to copy the content (e.g., no disabling of the ability to save or “screen scrape” content), the content owner is implying that it’s OK to reuse it. The biggest issue with the “implied license” argument is that, like fair use, there’s no easy way to know if it applies or not — you have to make a judgment call. It’s also important to note that the implied license argument assumes the content was posted by the content owner. Courts may be very hesitant to embrace this concept, as it would mean significantly watering down copyright protection for online content.

Don’t forget that photos may raise not just copyright issues, but rights of publicity as well. If you use someone’s picture found online to promote your product or service, not only could you face a copyright suit from the copyright owner of the photo, but the subject of the photo could have a claim against you for using their name or likeness in a commercial manner without their consent. It’s also worth noting that using images of a cartoon character or corporate logo grabbed from the Internet could create trademark infringement or trademark dilution issues.

Finally, as you navigate the world of online content, keep these simple rules in mind:

Check the applicable terms and policies before using web-based content. If you find content you want to reuse on a website or online service, check the Terms of Use, Applicable Use Policy, or similar terms or policy for ownership, license, or usage rights language that could give you the right, or restrict your right, to use the content.

Use license filters when searching images. If you’re searching for images on Bing Images or Google Images, you can filter your search by license type (e.g., in Google Images, if you click “Search Tools” you can search by “Usage Rights” such as “Labeled for reuse with modification,” “Labeled for reuse,” “Labeled for noncommercial reuse with modification,” and “Labeled for noncommercial use”). Keep in mind that license information may be wrong, but you’ll have an argument that you relied on the license type filter.

If it looks professionally done, it probably is. If you find content online that looks like it was made by a pro, it probably was. If there’s no license associated with professional-looking content, there’s a reasonable chance that someone else redistributed it without the ownership or copyright attribution. Also, if you can’t easily save content (e.g., the “save as” in the right-click context menu or the “copy” function for the browser is disabled on that website), it’s likely because the content owner does not want people capturing or “screen scraping” their content.

Just because it’s protected doesn’t mean you can’t use it. Finally, don’t forget that many copyright owners are willing to grant a license to use their work if you ask them. Some may just want the exposure and ask for an attribution; some may want a license fee. A little internet sleuthing can often uncover the owner’s email address or other contact information. If you like the content and are willing to obtain a license to use it, make sure the license you receive is broad enough for the way you intend to use the content or work, both today and in the foreseeable future.

Managing the review and negotiation of contracts involves regular stack ranking of projects. With many agreements to review and other job responsibilities for both in-house counsel and business counterparts alike, the value or strategic importance of the agreement often determines the amount of attention it receives. Given this, attorneys and their business counterparts generally do not have time for a “deep dive” into every nook and cranny of an agreement under negotiation. They focus their available resources on the big-ticket items — obligations of the parties, termination rights, ownership, confidentiality, indemnification/limitation of liability, and the like — and may only have time for a cursory review (at best) of other contract terms that appear in most agreements, called the “legal boilerplate.”

If you have a little extra time to spend on an agreement, here are six clauses that are worth a closer review. Why these? If worded improperly, each of these clauses can have a significant adverse impact on your company in the event of an issue or dispute involving that clause.

(1) the Notices clause. Failure to provide timely notice can cause major issues. So can failing to receive a notice that was properly served. If mail can take some time to be routed internally, consider avoiding certified or first-class mail as a method of service. Personal delivery and nationally or internationally recognized express courier service (FedEx, UPS, DHL, etc.) with signature required on delivery are always good choices. Notice by confirmed fax or by email to a role address (e.g., “legal@abc.com”) are also options to consider, either as a primary method of notice or as a required courtesy copy of the official notice. Use a role and not a named person in the ATTN: line — if the named person leaves, routing of the notice may be delayed. Consider requiring that a copy of every notice be sent to your legal counsel. Consider whether to make notice effective on delivery, versus effective a fixed number of days after sending (whether or not actually received). It is also worth considering making notice effective on a refused delivery attempt — the other side should not be able to refuse a package to avoid being served with notice. Ensure delivery is established by the delivery receipt or supporting records.

(2) the Dispute Resolution clause. Ensure the agreement’s dispute resolution mechanism (litigation vs. arbitration), and any dispute escalation language, are right for your company given the potential claims and damages that could come into play if you have a dispute. Make sure you’re OK with the state whose law governs the agreement (and ensure it applies without regard to or application of its conflicts-of-laws provisions). If neither home state’s law is acceptable, consider a “neutral” jurisdiction with well-developed common law governing contracts (e.g., New York). Ensure you’re OK with the venue — consider whether it is non-exclusive (claims can be brought there) or exclusive (claims can only be brought there), and whether a “defendant’s home court” clause might be appropriate (a proceeding must be brought in the defendant’s venue). Finally, ensure the parties’ rights to seek injunctive relief — an order to stop doing something, such as a temporary restraining order or injunction, or an order to compel someone to do something — are not too easy or too hard to obtain. In some cases, whether a party needs to prove actual damages or post a bond in order to obtain an injunction can play a critical role.

(3) the Order of Precedence clause. If your agreement has multiple components (e.g., a master services agreement, separate Terms and Conditions, incorporated policies from a web site, service exhibits or addenda, statements of work, project specifications, change orders, etc.), which piece controls over another can become critically important if there is a conflict between the two (e.g., liability is capped in Terms and Conditions, but unlimited in a Statement of Work). Ensure the order of precedence works for you. Consider whether to allow an override of the order of precedence if expressly and mutually agreed to in an otherwise non-controlling contract component. Don’t forget about purchase orders — they often have standard terms which can conflict with or override the contract terms unless they are specifically excluded. If you are negotiating a SaaS agreement, consider how acceptable use policies, terms of use, and other online policies may relate to the agreement. Watch out for other agreements/terms incorporated by reference, or on the other hand, consider incorporating your standard terms and having them control in the event of conflicting terms.

(4) the Assignment/Change of Control clause. If consent to assignment or a change of control is required, the clause can create significant headaches and delays during an M&A closing process or during a corporate reorganization. A client or vendor with “veto power” could leverage that power to get out of the contract, or to obtain concessions/renegotiated terms. Consider whether to include appropriate exclusions from consent in the event of a reorganization or change of control, but keep a notice requirement. Consider whether a parental guaranty is an appropriate trade-off for waiving consent. Also consider whether consent is needed in a transaction where the party continues to do business in the same manner it did before (e.g., change of control of a parent company only).

(5) the Subcontractor clause. Ensure you have approval rights over subcontractors where necessary and appropriate, especially if they are performing material obligations under the agreement or will have access to customer data or your systems. A service provider may not be willing or able to give an approval right over a subcontractor providing services across multiple clients, but may be OK with approval of a subcontractor providing services exclusively or substantially for your company. Include the ability to do due diligence on the subcontractor; remember that subcontractors can be an attack route for hackers seeking to compromise a company’s network. Ensure the party is fully liable for all acts and omissions of its subcontractors. Consider pushing security obligations through to the subcontractor. Require subcontractors to provide phishing training. Consider limitations on which obligations of the other party can be subcontracted.

(6) the Non-Solicitation clause. Consider limiting a non-solicitation clause to those employees key to each party’s performance under the agreement, and other named personnel such as executive sponsors or corporate officers. Most often, neither party can live up to a clause that covers every employee at the company. Ensure there are appropriate exclusions for responses to job postings, recruiter introductions, and contact initiated by the covered party. Consider whether the clause prevents soliciting an employee as well as hiring them, and whether you want to restrict one or both.