An Internet Protocol (IP) address is a numerical label assigned by an Internet Service Provider (ISP), such as Time Warner or Verizon, to each device that accesses the internet.

Static IP addresses are permanent IP addresses, usually assigned to organizations with large networks.

Most individuals, however, are assigned “dynamic” IP addresses, which the ISP may change when it needs to, but which in practice do not change that often. Individual subscribers may keep the same dynamic IP address for long periods of time, such as eight to twelve months. A subscriber’s dynamic IP address typically also changes when they travel, move to a different home or city, replace their router, or access the internet from a different network.

Dynamic IP addresses, like static IP addresses, do not by themselves enable a link to be established between the IP address and a given computer or user. Only the ISP holds the additional subscriber information required to establish that link.

Why is this a privacy issue?

Many websites collect and store the static and dynamic IP addresses of the computers that visit their sites, together with the time and date of the visit, and use this information for marketing or other purposes, such as fraud and security monitoring.
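What such a record looks like in practice: the sketch below pulls the visitor’s IP address and the time of access out of one web-server access-log line. The log line itself is a made-up example in the common Apache log format; the field layout is standard, everything else is illustrative.

```python
import re

# A made-up access-log line in the widely used Common Log Format:
# client IP, identity, user, [timestamp], request, status code, bytes sent.
log_line = ('203.0.113.42 - - [19/Oct/2016:14:32:01 +0000] '
            '"GET /index.html HTTP/1.1" 200 5120')

# Extract the two fields relevant here: who visited, and when.
match = re.match(r'(?P<ip>\S+) \S+ \S+ \[(?P<when>[^\]]+)\]', log_line)
ip_address = match.group("ip")     # '203.0.113.42'
visit_time = match.group("when")   # '19/Oct/2016:14:32:01 +0000'

print(ip_address, visit_time)
```

Together, those two fields are exactly the data set discussed in the rest of this article: an IP address plus the date and time it was seen.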

If dynamic IP addresses are Personal Data, then all applicable laws and regulations regarding the collection and processing of personal data apply to the collection and processing of dynamic IP addresses as well.

Unique Identifiers and Combined Identifiers

Most global privacy/data protection laws and regulations define personal data not only as data that uniquely identifies a person, such as a person’s name, but also as data that, while it may not uniquely identify an individual on its own, may render an individual identifiable when combined with other data. A simple example is a phone number (landline or mobile). A phone number, on its own, does not identify an individual. However, with the use of reverse lookup tools, a phone number can be used to identify an individual, by associating a name and address with that phone number.

In other words, most information collected by an organization about an individual is likely to be considered personal information if it can be attributed to an identified or “identifiable” individual, whether by one unique identifier or by a combination of two or more data elements.
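The “combination” idea can be sketched in a few lines: on its own, the phone number below is just a string of digits, but joined against a reverse-lookup table it yields a name and address. All data here is fictitious, standing in for the commercial reverse-lookup tools mentioned above.

```python
# Entirely fictional reverse-lookup table, standing in for a
# commercial reverse phone lookup service.
reverse_lookup = {
    "+1-555-0142": {"name": "Jane Example", "address": "12 Sample St."},
}

phone_number = "+1-555-0142"   # on its own, not identifying

# Combined with the lookup table, the same number becomes identifying.
record = reverse_lookup.get(phone_number)
print(record["name"], record["address"])
```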

The special case of dynamic IP addresses

If a user of the website concerned has revealed his/her identity during the website consultation period, e.g. by creating a profile, then the operator of that website is able to identify the website user by linking his/her name to his/her computer’s IP address. In that case, even a dynamic IP address will most probably qualify as personal data.

What if the website user has not revealed his/her identity when visiting a website?

Is access to the IP address alone enough to identify that user?

An IP address, whether static or dynamic, can be traced back to an individual when combined with the internet subscriber information held by the ISP.

In case of a static IP address, the subscriber information remains the same, regardless of the date when the access to the website by that subscriber occurred.

In the case of dynamic IP addresses, one needs to know the date of access to the website in addition to the IP address of the subscriber, since dynamic IP addresses of internet connected devices tend to change over time.

The fact that additional information on the date of access is needed makes users behind dynamic IP addresses somewhat harder to identify than those behind static IP addresses.

However, most websites that collect IP addresses also collect the time and date of access, so from a privacy perspective the distinction between static and dynamic IP addresses is not all that significant. All one needs to identify the user behind a dynamic IP address, beyond the ISP’s subscriber data, is the relevant time frame of access.
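This identification step can be pictured as a simple join between two data sets, sketched below with entirely fictitious data: the website’s log (IP address plus time of access) and the ISP’s records of which subscriber held which dynamic IP address during which period.

```python
from datetime import datetime

# Hypothetical website log entry: a dynamic IP address plus the date and
# time of access, exactly what most sites already record.
site_log = {"ip": "198.51.100.7",
            "accessed": datetime(2016, 10, 19, 14, 30)}

# Hypothetical ISP assignment records: which subscriber held which
# dynamic IP address during which period. Only the ISP holds this data.
isp_assignments = [
    {"ip": "198.51.100.7", "subscriber": "J. Doe",
     "start": datetime(2016, 10, 1), "end": datetime(2016, 11, 1)},
    {"ip": "198.51.100.7", "subscriber": "A. Smith",
     "start": datetime(2016, 11, 1), "end": datetime(2016, 12, 1)},
]

def identify(log, assignments):
    """Resolve a dynamic IP to a subscriber: the IP alone is ambiguous
    (two subscribers held it), but IP + access time matches exactly one
    assignment window."""
    for rec in assignments:
        if rec["ip"] == log["ip"] and rec["start"] <= log["accessed"] < rec["end"]:
            return rec["subscriber"]
    return None

print(identify(site_log, isp_assignments))  # prints: J. Doe
```

Note that without the access time, the same IP address matches two different subscribers; with it, the match is unique.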

From a privacy/data protection compliance perspective, the answer depends first of all on which privacy/data protection laws and regulations your organization is subject to. Usually, there has to be some connection between a country’s privacy/data protection laws and the organization in question.

Here are some questions that may help determine these connections:

In which countries does your organization have a seat of business? These might be the countries whose privacy/data protection laws apply to your organization.

In which countries do the individuals your organization markets to reside? Some countries make their laws applicable to businesses that target their residents or citizens (or both), even if the business in question has no physical presence in that country.

For example, if your organization has a seat of business in one or more of the 28 EU member states, and processes personal data in the context of that business, the EU Data Protection Directive 95/46 (the Directive) and the Member State’s national data protection laws based upon it will apply to your organization.

Under the expanded territorial applicability of the General Data Protection Regulation (GDPR), which will replace the Directive and all the Member States’ national privacy/data protection laws as of May 2018, your organization will also be subject to the GDPR if it markets to or monitors data subjects in EU Member States, even if the organization in question has no physical presence in an EU Member State.

Are dynamic IP addresses “personal data” under the EU Data Protection Directive? Under the GDPR?

Until recently, there was no legal clarity or certainty as to whether dynamic IP addresses, collected and processed by websites or third parties, were personal data under the Directive.

However, the recent ruling of the Court of Justice of the European Union (CJEU) of October 19, 2016 in Patrick Breyer v. Bundesrepublik Deutschland removed all doubt. The CJEU ruled that a dynamic IP address of a website user is personal data with respect to the website operator, if that operator has the legal means to identify the user in question with the help of additional information about that user held by the user’s ISP. For example, in the case of a criminal investigation, most countries allow law enforcement (with or without a court order) to approach the ISP for more detailed information about who an IP address was assigned to at the time of access.

If for example, a website is the victim of a cyber attack, the website operator is usually able to contact the competent authorities, so that the latter can take the necessary steps to obtain the relevant IP address subscriber information from the ISP and to bring criminal proceedings.

That mere possibility for a data subject to be identified through his/her device’s IP address renders the IP address personal data. Controllers must ask themselves the following question: is it reasonably likely that they or a third party could identify an individual through the IP addresses they collect and process, even if recourse to data held by a third party (here, the ISP) is required in order to obtain identification? If the answer is yes, the IP address is personal data and must be handled accordingly.

Since the GDPR has retained the same basic, broad definition of “personal data” as the definition of the Directive, it is reasonable to predict that the Breyer decision of the CJEU will apply to the GDPR and that dynamic IP addresses will be considered personal data under the GDPR.

What are the practical implications of dynamic IP addresses being considered personal information?

Once an organization has established that it is subject to the EU Directive and/or, in a short while, to the GDPR, it must ensure that all requirements applicable to the collection and handling of personal information of data subjects are applied to the collection and handling of dynamic IP addresses of these data subjects. Some examples include notice and consent requirements, use limitations of the collected dynamic IP addresses, the provision of adequate information security to this data set, the restriction of retention periods and the restriction of cross-border transfers of and cross-border access to dynamic IP addresses.

This past month saw another batch of large data breaches, with “Heartbleed” considered by some the largest data security breach in the history of the internet; a flurry of legislative efforts by the States to regulate the use of drones, student privacy and government surveillance; a landmark victory for the FTC’s authority to regulate commercial data security practices; important privacy legislation in Australia, Brazil, and Canada; the EU Art 29 WP was busy as a bee publishing Opinions on EU Data Protection, and the European Parliament voted in favor of the proposed General Data Protection Regulation, leaving the next step up to the Council of Ministers.

Big Data

• FTC to Examine Effects of Big Data on Low Income and Underserved Consumers at September Workshop

• In a Joint Statement at the EU-US Summit on 26 March 2014 EU and U.S. officials announced a commitment to strengthening the Safe Harbor framework by this coming summer

Facebook

• Facebook admits users are confused about Privacy, will show more on-screen explanations, in an effort to practice “surprise minimization” or “minimize the surprise to the consumer”.

FERPA/Student Privacy

• Kentucky enacts law Protecting Student Data In the Cloud

• Louisiana House Passes Student Privacy Bill

• Florida Senate Passes Student Privacy Bill, which would prohibit schools from collecting political and religious beliefs and biometric information from students

• Kansas House Passes Student Privacy Bill, which would restrict access to student records and prohibit the state from collecting information relating to students’ and their families’ personal beliefs or practices on issues such as sex, family life, morality and religion.

• The Colorado House Education Committee unanimously passed a bill that would put restrictions on the sharing of education data.

• Idaho: New law limits DNA collection by law enforcement: only upon criminal conviction or by court order

• Utah: New law makes any electronic data obtained by law enforcement without a warrant, including location data, inadmissible in a criminal proceeding.

• Indiana: Anti-Surveillance Bill signed into law- requires police to obtain search warrants before using drones, using cellphones to track individuals or demanding passwords for electronic devices among other restrictions

• The U.S. Supreme Court heard oral arguments in Riley v. California and United States v. Wurie, two cases involving the warrantless search of an individual’s cell phone incident to arrest, and will decide an important Fourth Amendment question: can the police search the entire contents of an individual’s cell phone incident to any lawful arrest? To be followed.

• Yahoo webcam images from millions of users intercepted by GCHQ; 1.8m users targeted by UK agency in six-month period alone. Material included large quantity of sexually explicit images

• Introducing the ACLU’s NSA Documents Database. These documents stand as primary source evidence of our government’s interpretation of its authority to engage in sweeping surveillance activities at home and abroad, and how it carries out that surveillance.

• NSA Said to Exploit Heartbleed Bug for Intelligence for Years

• U.S. v Lavabit judgment: Fourth Circuit affirms district ruling, holding Lavabit in contempt. Lavabit tried giving the feds its SSL key in 11 pages of 4-point type; the feds complained that it was illegible.

Surveillance

• FBI Plans to Have 52 Million Photos in its NGI (next generation identification) Face Recognition Database by Next Year

The FCC issued two rulings regarding exemptions to the “express consent” requirement under TCPA (The TCPA and associated FCC rules require parties to obtain “prior express consent” before transmitting autodialed or prerecorded informational calls or text messages to a wireless telephone number).

• The FCC exempted package delivery notifications from the “prior express consent” requirement when the called party is not charged for them by the wireless carrier. For example, under the exemption, FedEx or UPS will not need prior express consent of package recipients for automated shipment notification messages sent to their mobile telephone numbers.

• In the context of “text-based social networks” such as GroupMe, “prior express consent” to receive automated text messages can be obtained through an intermediary (in this case, the text message group creator), where the messages are administrative in nature and concern the use and cancellation of the service.

Data Brokers/FCRA

Data Security

The White House released the National Institute of Standards and Technology’s (NIST) Final Cybersecurity Framework: a set of industry best practices and standards to help owners and operators of critical infrastructure develop better cybersecurity programs.

EU Data Protection

Facebook must comply with German data protection law, the Higher Court of Berlin rules. The court found that Facebook’s data processing is handled by the US parent company, not Facebook Ireland. Had the court found that the user data was processed by Facebook Ireland rather than by Facebook US, Irish data protection law would have applied: under the EU Directive, the law of the EU Member State where the company has an establishment applies, provided the processing is carried out in the context of the activities of that establishment (Directive 95/46/EC, Art. 4(1)(a)). In the absence of this condition (as was the case here, since the court decided that no processing occurred in Ireland and the processing instead happened through data centers in the US), the second rule of applicable law applies: the law of the Member State on whose residents’ computers or other devices the data controller (here, Facebook) sets cookies (Directive 95/46/EC, Art. 4(1)(c)), in this case Germany.

HIPAA

Q: Is a mental healthcare provider allowed to share psychotherapy notes with anyone?

A: NO, not even with another healthcare provider for treatment purposes, unless the patient gives consent. As for sharing the notes with the patient, HIPAA leaves that to the discretion of the mental healthcare provider.

Q: What if patient threatens to blow up a school?

A: In that case, disclosure is permitted, as this is an imminent safety threat. Depending on the applicable state law, there may even be a “duty to warn”.

Remember that in a State with stricter laws, the stricter State law prevails.

by Monique Altheim on February 22, 2014

A study commissioned in Australia by the National Association for Information Destruction (NAID), published on Feb. 19, has found significant amounts of sensitive personal information left on recycled computers. The researchers purchased 52 computers at random on sites such as eBay and hired a reputable forensic investigator to find out whether any personal information was left on the drives. Of the 52 devices, 15 still contained highly confidential personal information, including health and financial information, as well as personal photos and videos. Those devices had been “recycled” by individuals, law firms and government agencies, and the forensic evidence showed that all the files in question had been subjected to attempted deletion.

Clearly, many still believe that pressing the “delete” button will permanently delete a file, and/or have never heard of forensic retrieval of digital data. Whether or not one operates in a jurisdiction that mandates secure disposal of personal data, improper removal of personal data from computers, smartphones or tablets is certainly bad practice. It is bad practice not only when recycling a device, as in this study, but also when disposing of one. Even when simply deleting personal files that have reached the end of their lifecycle, one needs to ensure they are permanently and irretrievably erased. Otherwise, these files may easily come back to life through a simple forensic examination of the computer in question, as happened with the famous incriminating documents in the Enron case. Those incriminating files, the needles in the haystack, had all been “deleted” by Enron employees and were later retrieved by forensic experts during the investigation of the Enron scandal.
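The gap between “deleted” and “gone” can be narrowed by overwriting a file’s bytes before unlinking it. The sketch below is a minimal illustration in Python, not a certified disposal method: on SSDs, on journaling or copy-on-write filesystems, and on backed-up systems, older copies of the data may still survive elsewhere.

```python
import os

def secure_delete(path, passes=1):
    """Overwrite a file's contents in place before unlinking it, so the
    bytes are not left recoverable in the freed disk blocks.
    Caveat: on SSDs and journaling/copy-on-write filesystems, stale
    copies of the data may survive elsewhere on the medium."""
    length = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(os.urandom(length))  # replace contents with random bytes
            f.flush()
            os.fsync(f.fileno())         # push the overwrite to disk
    os.remove(path)
```

By contrast, a plain `os.remove` only unlinks the directory entry; the file’s blocks keep their old contents until the filesystem happens to reuse them, which is exactly what forensic tools exploit.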

My panel consisted of, from left to right, Oscar Puccinelli, an attorney and professor of Constitutional Law at the National University of Rosario in Argentina, Jeimy Cano, CIS at Ecopetrol and professor at the Universidad de Los Andes in Bogota, Colombia, Gustavo Betarte, CTO at Tilsor and researcher and professor at the Engineering School of the Universidad de la Republica in Montevideo, Uruguay, Yoram Hacohen, at the time head of the Israeli Law, Information and Technology Authority (ILITA), and William C. Barker, associate director and chief cyber security advisor at the National Institute of Standards and Technology (NIST).

FTC Commissioner Julie Brill recently held her first Twitter chat on the topic of privacy and the FTC.

TWITTER LINGO FOR BEGINNERS:

Those who are regular twiteratti can skip the following paragraph, but for those still not familiar with Twitter lingo, I have included a short introduction to Twitter shorthand:

@JulieBrillFTC: This is Julie Brill’s twitter handle, or twitter user name. Tweeters need to create a twitter handle in order to tweet.

RT: Retweet. When @JulieBrillFTC tweets “RT @soandso”, she re-tweets @soandso’s tweet; in other words, she repeats that person’s tweet.

MT: Modified Tweet. When @JulieBrillFTC tweets “MT @soandso”, she retweets @soandso’s tweet, but with a slight modification, usually in order to remain within the 140-character limit.

In the Twitter chat, @JulieBrillFTC RT’d or MT’d participants’ questions (Q). She preceded her answers with an A.

#: Hashtag. A hashtag on Twitter is the pound sign, followed by an acronym or word, used to group all tweets related to a particular topic. If you click on a hashtag link, you will see all tweets posted with that hashtag. In @JulieBrillFTC’s Twitter chat, the chosen hashtag was #FTCpriv.

Tweets have a limit of 140 characters. A lot more can be crammed into a tweet by the use of a link to an article, something which @JulieBrillFTC avails herself of in her answers to tweeters’ questions. There are even several ways of shortening the links, to leave more characters free for use in the tweet.

I reposted @JulieBrillFTC’s Twitter chat in a user-friendly way. Tweets that were not directly relevant to the Q&A were omitted. Tweets by those who posted the questions were omitted as well, to avoid unnecessary duplication of the questions, since @JulieBrillFTC re-tweeted them anyway. Since Twitter operates as a live feed, later tweets appear before earlier tweets. For someone not used to Twitter, it might therefore be disconcerting to read the answers before the questions, so I reversed the order of the tweets and posted the earlier ones before the later ones.

The FTC, as well as many other U.S. regulatory and enforcement agencies, has always stayed away from imposing specific technologies for ensuring data security, since technology changes at the speed of light and the type of technology to be applied is always contextual, depending on the type of data handled and the type of company handling it. “Reasonable and appropriate practices” it is. And @JulieBrillFTC managed to squeeze in the FTC’s opinion on the need for FEDERAL legislation on data security and data breach notification, since the U.S. doesn’t have one yet. (Most of the states have data security and data breach notification laws, but they all differ from each other and create an impossible patchwork of laws.) All this in 140 characters. Hats off! On the other hand, in order to make any sense of those <140 characters, one does need some background knowledge of the topic.

The future of U.S.-EU Safe Harbor is on every privacy professional’s mind these days. Here, with a tweet, @JulieBrillFTC has indicated that Safe Harbor is the subject of negotiations between the US Government and the EU Commission in order to tweak it into a viable solution. The end of Safe Harbor? Not.

Another good policy exchange was the following one, assuming one knows that IoT stands for “Internet of Things”:

Q11 RT @ajamietalbot Among all the data issues facing FTC, which do you think are the most pressing and deserve FTC focus? #FTCpriv

Well, yes, having access to one’s data and having the ability to correct wrong information is a very good start, but it is far from sufficient to ensure the integrity of the algorithms that are used to make important decisions about an individual. For example, how do we ensure that the algorithm itself is not based on some illegal discriminatory premises? Clearly, Twitter is not an adequate channel to discuss such deep and granular issues.

Safe Harbor protects U.S. consumers? Really? And I thought that it only protected personal data originating from the EU. Who knew? Maybe the lightning speed at which one must react on Twitter can be faulted for such seemingly erroneous statements. I have no doubt that @JulieBrillFTC did not make a mistake in her area of expertise, but short tweets are conducive to ambiguous meanings and maybe incorrect interpretations.

CONCLUSION

A Twitter chat is the democratic communication tool par excellence. Every Jo/Jean Shmo with a twitter handle can instantly communicate with an authority figure, regardless of where in the world he/she resides, as long as he/she has an internet connection.

The format works well for simple, concrete questions that require simple and concrete answers.

As soon as the question requires a more granular response, Twitter fails to deliver. It is simply impossible to convey nuance, cover grey areas and explain complex matters with a 140 character tweet. Inserting a link to an article that deals with the issue at hand is a good way of introducing more nuance and information in a tweet or Twitter chat.

Under Section 5 of the Federal Trade Commission Act (FTCA), the FTC must protect consumers from “deceptive and unfair” commercial practices in the economic sectors under its jurisdiction. One such deceptive or unfair practice is the lack of data security to protect a wide variety of sensitive consumer data, such as Social Security numbers and health data. Over the years since its first settlement in 2002, the FTC has developed certain principles.

The FTC’s standard for appropriate data security is “reasonableness”, a flexible standard that varies according to, among other factors, the sensitivity of the data and the size and complexity of the business. In other words, the security requirements of a large financial institution will be greater than those of a small grocery store.

Despite the fact that the FTC allows for such elasticity in the application of appropriate security standards, it proposes five basic data security practices that should be followed by every business:

Data Mapping: Know what data the company has, where it is and who has access to it. This knowledge will help expose possible vulnerabilities.

Data Minimization: A company should only collect and retain data that it really needs for its legitimate business purposes (e.g., there is no need to retain the PINs of payment cards after the payment has been made).

Under the Telephone Consumer Protection Act (TCPA), in order for marketers to call or text a telephone subscriber via autodialer or prerecorded messages (robocalls), the subscriber must have given the robocaller “prior express consent” to do so.

What constitutes “express consent” under the TCPA?

The TCPA does not define “express consent.” Congress delegated to the FCC the authority to make rules and regulations to implement the TCPA.

The FCC has defined “express consent” as follows:

“any telephone subscriber who releases his or her telephone number has, in effect, given prior express consent to be called by the entity to which the number was released”

and “persons who knowingly release their phone numbers have in effect given their invitation or permission to be called at the number which they have given, absent instructions to the contrary.”

Plaintiff, a blood donor, gave his cell phone number on a new donor information sheet to the defendant, a blood bank. He subsequently received a few automated telemarketing text messages from the defendant in 2012, suggesting he give more blood, which he found quite offensive. Plaintiff claimed he had not given the defendant express consent to “robocall” him, as required under the TCPA; he had only shared his cell phone number as a contact number for the blood bank to reach him. The US District Court for the Middle District of Florida ruled that providing his cell phone number on the new donor information sheet constituted express consent for the defendant to robocall him at that number through marketers, and granted the defendant’s motion to dismiss the case. The Court followed the FCC’s definition of “express consent” (see above).

The Court decided that when the blood donor shared his cell phone number with the blood bank, he thereby gave “express consent” to the blood bank to share his sensitive health data with marketers and to have those marketers “robocall” him.

Most courts have followed this interpretation of “express consent” under the TCPA, while other courts have argued that if consent is not manifested by explicit and direct words, it is not express consent but merely “implied consent”.

On February 15, 2012, the FCC adopted additional protections for consumers concerning unwanted robocalls. One of the changes concerned the “consent” issue.

Effective October 16, 2013, in order for marketers to call or text a telephone subscriber via autodialer or prerecorded messages (robocalls), the subscriber must have given the robocaller “prior UNAMBIGUOUS written express consent” to do so.

Gone is the “implied-express consent” as previously defined by the FCC.

Unambiguous consent means that the consumer must receive a “clear and conspicuous disclosure” that he will receive future calls that deliver autodialed and/or pre-recorded telemarketing messages on behalf of a specific marketer.

In other words, the consent form to be signed by the consumer should look something like this:

“I hereby consent to receive autodialed and/or pre-recorded telemarketing calls and/or texts from or on behalf of [marketer] at the telephone number provided above.”

Under this new definition of consent, our blood donor might have won his case.

Or, if the blood bank had given him a clear and informed choice, he might very well have agreed to share his cell phone number with marketers in order to be notified of future blood donor opportunities. He would have made an informed choice and the overburdened justice system might have had fewer time-wasting and costly class-action law suits to deal with.

This new consent requirement very closely resembles the requirement of “unambiguous consent” of the data subject that forms one of the most important legal grounds for the processing of personal data by data controllers under the EU Data Protection Directive (Article 7(a), Directive 95/46/EC).

Article 2(h) of Directive 95/46/EC defines consent as “any freely given specific and informed indication of his wishes by which the data subject signifies his agreement to personal data relating to him being processed.”

The validity of consent as a mechanism to regulate privacy in our era of big data, predictive algorithms and the internet of things has been a subject of debate for a while now and, more recently, the cause of a heated polemic in privacy circles (see: “I Never Said That”—A Response to Cavoukian et al. by Viktor Mayer-Schönberger).

The latest FCC implementation of the TCPA is one example of how the concept of consent is still alive and well. Whether consent by the consumer is meaningful often depends on whether the term “consent” is defined in a meaningful way or not.

EU Parliament draft report on the NSA, in which the LIBE Committee declares that the fight against terrorism is but a “fig leaf” for political and economic espionage, and in which the same committee essentially recommends that the Member States end or suspend all data flows to the U.S. and other such countries until they repent and mend their ways.

First Amendment

Virginia Court Scales Back Right to Online Anonymity: A Virginia company filed a defamation lawsuit against seven anonymous Yelp users who wrote critical reviews about it. After filing the suit, the company subpoenaed Yelp for information that would identify the seven reviewers. A Virginia statute permits a subpoena for an anonymous Internet user’s identity when the communications at issue “are or may be tortious or illegal.”

