The issue of the application of privacy/data protection laws to political parties in Canada is not new – Colin Bennett and Robin Bayley wrote a report on this issue for the Office of the Privacy Commissioner of Canada in 2012. It gained new momentum in the wake of the Cambridge Analytica scandal when it was brought home to the public in a fairly dramatic way the extent to which personal information might be used not just to profile and target individuals, but to sway their opinions in order to influence the outcome of elections.

In the fallout from Cambridge Analytica there have been a couple of recent developments in Canada around the application of privacy laws to political parties. First, the federal government included some remarkably tepid provisions in Bill C-76 on Elections Act reform. These provisions, which I critique here, require parties to adopt and post a privacy policy, but otherwise contain no normative requirements. In other words, they do not hold political parties to any particular rules or norms regarding their collection, use or disclosure of personal information. There is also no provision for independent oversight. The only complaint that can be made – to the Commissioner of Elections – is about the failure to adopt and post a privacy policy. The federal government has expressed surprise at the negative reaction these proposed amendments have received and has indicated a willingness to do something more, but that something has not yet materialized. Meanwhile, it is being reported that the Bill, even as it stands, is not likely to clear the Senate before the summer recess, putting in doubt the ability of any amendments to be in place and implemented in time for the next election.

Meanwhile, on June 6, 2018, the Quebec government introduced Bill no 188 into the National Assembly. If passed, this Bill would give the Quebec Director General of Elections the duty to examine and evaluate the provincial political parties’ practices regarding the collection, use and disclosure of personal information. The Director General must also assess their information security practices. If the Bill is passed into law, he will be required to report his findings to the National Assembly no later than October 1, 2019. The Director General will make any recommendations in this report that he feels are appropriate in the circumstances. The Bill also modifies laws applicable to municipal and school board elections so that the Director General can be directed by the National Assembly to conduct a similar assessment and report back. While this Bill would not make any changes to current practices in the short term, it is clearly aimed at gathering data with a view to informing any future legislative reform that might be deemed necessary.

In the wake of the Cambridge Analytica scandal, Canada’s federal government has come under increased criticism for the fact that Canadian political parties are not subject to existing privacy legislation. This criticism is not new. For example, Prof. Colin Bennett and Robin Bayley wrote a report on the issue for the Office of the Privacy Commissioner of Canada in 2012.

By way of preamble to this critique of the legislative half-measures introduced by the government, it is important to note that Canada already has both a public sector Privacy Act and a private sector Personal Information Protection and Electronic Documents Act (PIPEDA). Each of these statutes sets out rules for collection, use and disclosure of personal information and each provides for an oversight regime and a complaints process. Both statutes have been the subject of substantial critique for not going far enough to address privacy concerns, particularly in the age of big data. In February 2018, the House of Commons Standing Committee on Access to Information, Privacy and Ethics issued a report on PIPEDA, and recommended some significant amendments to adapt the statute to protecting privacy in a big data environment. Thus, the context in which the provisions regarding political parties’ privacy obligations are introduced is one in which a) we already have privacy laws that set data protection standards; b) these laws are generally considered to be in need of significant amendment to better address privacy; and c) the Cambridge Analytica scandal has revealed just how complex, problematic and damaging the misuse of personal information in the context of elections can be.

Once this context is understood, the privacy ‘obligations’ that the government proposes to place on political parties in the proposed amendments can be seen for what they are: an almost contemptuous and entirely cosmetic quick fix designed to deflect attention from the very serious privacy issues raised by the use of personal information by political parties.

First, the basic requirement placed on political parties will be to have a privacy policy. The policy will also have to be published on the party’s internet site. That’s pretty much it. Are you feeling better about your privacy yet?

To be fair, the Bill also specifies what the policy must contain:

(k) the party’s policy for the protection of personal information [will include]:

(i) a statement indicating the types of personal information that the party collects and how it collects that information,

(ii) a statement indicating how the party protects personal information under its control,

(iii) a statement indicating how the party uses personal information under its control and under what circumstances that personal information may be sold to any person or entity,

(iv) a statement indicating the training concerning the collection and use of personal information to be given to any employee of the party who could have access to personal information under the party’s control,

(v) a statement indicating the party’s practices concerning

(A) the collection and use of personal information created from online activity, and

(B) its use of cookies, and

(vi) the name and contact information of a person to whom concerns regarding the party’s policy for the protection of personal information can be addressed; and

(l) the address of the page — accessible to the public — on the party’s Internet site where its policy for the protection of personal information is published under subsection (4).

It is particularly noteworthy that unlike PIPEDA (or any other data protection law, for that matter), there is no requirement to obtain consent to any collection, use or disclosure of personal information. A party’s policy simply has to tell you what information it collects and how. Political parties are also not subject to any of the other limitations found in PIPEDA. There is no requirement that the purposes for collection, use or disclosure meet a reasonableness standard; there is no requirement to limit collection only to what is necessary to achieve any stated purposes; there is nothing on data retention limits; and there is no right of access or correction. And, while there is a requirement to identify a contact person to whom any concerns or complaints may be addressed, there is no oversight of a party’s compliance with their policy. (Note that it would be impossible to oversee compliance with any actual norms, since none are imposed). There is also no external complaints mechanism available. If a party fails to comply with requirements to have a policy, post it, and provide notice of any changes, it can be deregistered. That’s about it.

This is clearly not good enough. It is not what Canadians need or deserve. It does not even come close to meeting the standards set in PIPEDA, which is itself badly in need of an overhaul. The data resources and data analytics tools available to political parties have created a context in which data protection has become important not just to personal privacy values but to important public values as well, such as the integrity and fairness of elections. Not only are these proposed amendments insufficient to meet the privacy needs of Canadians, they are shockingly cynical in their attempt to derail the calls for serious action on this issue.

What is the proper balance between privacy and the open courts principle when it comes to providing access to the decisions of administrative tribunals? This is the issue addressed by Justice Ed Morgan in a recent Ontario Superior Court decision. The case arose after the Toronto Star brought an application to have parts of Ontario’s Freedom of Information and Protection of Privacy Act (FIPPA) declared unconstitutional. To understand this application, some background may be helpful.

Courts in Canada operate under the “open courts principle”. This principle has been described as “one of the hallmarks of a democratic society” and it is linked to the right of freedom of expression guaranteed by s. 2(b) of the Canadian Charter of Rights and Freedoms. The freedom of expression is implicated because in order for the press and the public to be able to debate and discuss what takes place in open court, they must have access to the proceedings and to the records of proceedings. As Justice Morgan notes in his decision, the open courts principle applies not just to courts, but also to administrative tribunals, since the legitimacy of the proceedings before such tribunals requires similar transparency.

Administrative bodies are established by legislation to carry out a number of different functions. This can include the adjudication of matters related to the subject matter of their enabling legislation. As the administrative arm of government has expanded, so too has the number and variety of administrative tribunals at both the federal and provincial levels. Examples of tribunals established under provincial legislation include landlord-tenant boards, human rights tribunals, securities commissions, environmental review tribunals, workers’ compensation tribunals, labour relations boards, and criminal injury compensation boards – to name just a few. These administrative bodies are often charged with the adjudication of disputes over matters that are of fundamental importance to individuals, impacting their livelihood, their housing, their human rights, and their compensation and disability claims.

Because administrative tribunals are established by provincial legislation, they are public bodies, and as such, are subject to provincial (or, as the case may be, federal) legislation governing access to information and the protection of personal information in the hands of the public sector. The applicability of Ontario’s Freedom of Information and Protection of Privacy Act is at the heart of this case. The Toronto Star brought its application with respect to the 14 administrative tribunals found in the list of institutions to which FIPPA applies in a Schedule to that Act. It complained that because FIPPA applied to these tribunals, the public presumptively had to file access to information requests under that statute in order to access the adjudicative records of the tribunals. It is important to note that the challenge to the legislation was limited a) to administrative tribunals, and b) to their adjudicative records (as opposed to other records that might relate to their operations). Thus the focus was really on the presumptive right of the public, according to the open courts principle, to have access to the proceedings and adjudicative records of tribunals.

Justice Morgan noted that the process under FIPPA requires an applicant to make a formal request for particular records and to pay a fee. The head of the institution then considers the request and has 30 days in which to advise the applicant whether access will be granted. The institution may also notify the applicant that a longer period of time is required to respond to the request. It must give notice to anyone who might be affected by the request and must give that person time in which to make representations. The institution might refuse to disclose records or it might disclose records with redactions; a dissatisfied applicant has a right of appeal to the Information and Privacy Commissioner.

In addition to the time required for this process to unfold, FIPPA also sets out a number of grounds on which access can be denied. Section 42(1) provides that “An institution shall not disclose personal information in its custody or under its control”. While there are some exceptions to this general rule, none of them relates to adjudicative bodies specifically. Justice Morgan noted that the statute provides a broad definition of personal information. While the default rule is non-disclosure, the statute gives the head of an institution some discretion to disclose records containing personal information. Thus, for example, the head of an institution may disclose personal information if to do so “does not constitute an unjustified invasion of personal privacy” (s. 21(1)(f)). The statute sets out certain circumstances in which an unjustified invasion of personal privacy is presumed to occur (s. 21(3)), and these chiefly relate to the sensitivity of the personal information at issue. The list includes many things which might be part of adjudication before an administrative tribunal, including employment or educational history, an individual’s finances, income, or assets, an individual’s eligibility for social service or welfare benefits, the level of such benefits, and so on. The Toronto Star led evidence that “the personal information exemption is so widely invoked that it has become the rule rather than an exemption to the rule.” (at para 27). Justice Morgan agreed, characterizing non-disclosure as having become the default rule.

FIPPA contains a “public interest override” in s. 23, which allows the head of an institution to release records notwithstanding the applicability of an exception to the rule of disclosure, where “a compelling public interest in the disclosure of the record clearly outweighs the purpose of the exemption.” However, Justice Morgan noted that the interpretation of this provision has been so narrow that the asserted public interest must be found to be more important than the broad objective of protecting personal information. In the case of adjudicative records, the Information and Privacy Commissioner’s approach has been to require the requester to demonstrate “that there is a public interest in the Adjudicative Record not simply to inform the public about the particular case, but for the larger societal purpose of aiding the public in making political choices” (at para 31). According to Justice Morgan, “this would eliminate all but the largest and most politically prominent of cases from media access to Adjudicative Records and the details contained therein” (at para 32).

The practice of the 14 adjudicative bodies at issue in this case showed a wide variance in the ways in which they addressed issues of access. Justice Morgan noted that 8 of the 14 bodies did not require a FIPPA application to be made; requests for access to and copies of records could be directed by applicants to the tribunal itself. According to Justice Morgan, this is not a problem. He stated: “their ability to fashion their own mechanism for public access to Adjudicative Records, and to make their own fine-tuned determinations of the correct balance between openness and privacy, fall within the power of those adjudicative institutions to control their own processes” (at para 48). The focus of the court’s decision is therefore on the other 6 adjudicative bodies that require those seeking access to adjudicative records to follow the process set out in the legislation. The Star emphasized the importance of timeliness when it came to reporting on decisions of adjudicative bodies. It led evidence about instances where obtaining access to records from some tribunals took many weeks or months, and that when disclosure occurred, the documents were often heavily redacted.

Justice Morgan noted that the Supreme Court of Canada has already found that s. 2(b) protects “guaranteed access to the courts to gather information” (at para 53, citing Canadian Broadcasting Corp. v. New Brunswick (A.G.)), and that this right includes access to “exhibits entered into evidence, photocopies of all such records, and the ability to disseminate those records by means of broadcast or other publication” (at para 53). He found that FIPPA breaches s. 2(b) because it essentially creates a presumption of non-disclosure of personal information “and imposes an onus on the requesting party to justify the disclosure of the record” (at para 56). He also found that the delay created by the FIPPA system “burdens freedom of the press and amounts in the first instance to an infringement” of s. 2(b) of the Charter (at para 70). However, it is important to note that under the Charter framework, the state can still justify a presumptive breach of a Charter right by showing under s. 1 of the Charter that it is a reasonable limit, demonstrably justified in a free and democratic society.

In this case, Justice Morgan found that the ‘reverse onus’ placed on the party requesting access to an adjudicative record to show why the record should be released could not be justified under s. 1 of the Charter. He noted that in contexts outside of FIPPA – for example, where courts consider whether to impose a publication ban on a hearing – the presumption is openness, and the party seeking to limit disclosure or dissemination of information must show how a limitation would serve the public interest. He stated that the case law makes it clear “that it is the openness of the system, and not the privacy or other concerns of law enforcement, regulators, or innocent parties, that takes primacy in this balance” (at para 90). Put another way, he states that “The open court principle is the fundamental one and the personal information and privacy concerns are secondary to it” (at para 94).

On the delays created by the FIPPA system, Justice Morgan noted that “Untimely disclosure that loses the audience is akin to no disclosure at all” (at para 95). However, he was receptive to submissions made by the Ontario Judicial Council (OJC) which had “admonished the court to be cognizant of the complex task of fashioning a disclosure system for a very diverse body of administrative institutions” (at para 102). The OJC warned the court of the potential for “unintended consequences” if it were to completely remove tribunals from the FIPPA regime. The concern here was not so much for privacy; rather it was for the great diversity of administrative tribunals, many of which are under-resourced and under-staffed, and who might find themselves “overwhelmed in a suddenly FIPPA-free procedural environment” (at para 103). Justice Morgan also noted that while the Toronto Star was frustrated with the bureaucracy involved in making FIPPA applications, “bureaucracy in and of itself is not a Charter violation. It’s just annoying.” (at para 104) He noted that the timelines set out in FIPPA were designed to make the law operate fairly, and that “Where the evidence in the record shows that there have been inordinate delays, the source of the problems may lie more with the particular administrators or decision-makers who extend the FIPPA timelines than with the statutory system itself” (at para 105). He expressed hope that by removing the ‘reverse onus’ approach, any issues of delay might be substantially reduced.

As a result, Justice Morgan found the “presumption of non-disclosure for producing Adjudicative Records containing ‘personal information’ as defined in s. 2(1)” to violate the Charter. Given the complexity of finding a solution to this problem, he gave the legislature one year in which to amend FIPPA. He made it clear that tribunals are not required to follow the FIPPA request process in providing access to their Adjudicative Records, but that it does not breach the Charter for them to do so.

This is an interesting decision that addresses what is clearly a problematic approach to providing access to decisions of administrative tribunals. What the case does not address are the very real issues of privacy that are raised by the broad publication of administrative tribunal decisions. Much ink has already been spilled on the problems with the publication of personal information in court and tribunal decisions. Indeed the Globe24h.com case considered by both the Office of the Privacy Commissioner of Canada and the Federal Court reveals some of the consequences for individual privacy when such decisions are published in online repositories. Of course, nothing in Justice Morgan’s decision requires online publication, but openness must be presumed to include such risks. In crafting a new legislative solution for adjudicative records, the Ontario government would be well advised to look at some of the materials produced regarding different strategies to protect privacy in open tribunal decisions and might consider more formal guidance for tribunals in this regard.

**********************

Interested in the issues raised by this case? Here is a sampling of some other decisions that also touch on the open courts principle in the context of administrative tribunals:

Earlier this year, uOttawa’s Florian Martin-Bariteau and Véronique Newman released a study titled Whistleblowing in Canada. The study was funded by SSHRC as part of its Knowledge Synthesis program. The goal of this program is to provide an incisive overview of a particular area, to synthesize key research, and to identify knowledge gaps. The report they have produced does just that. Given the timeliness of the topic (after all, the Cambridge Analytica scandal was disclosed by a whistleblower) and the relative paucity of legal research in the area, this report is particularly important.

The first part of the report provides an inventory of existing whistleblower frameworks across public and private sectors in Canada, including those linked to administrative agencies. This on its own makes a significant contribution. The authors refer to the existing legislative and policy framework as a “patchwork”. They note that the public sector framework is characterized by fairly stringent criteria that must be met to justify disclosures to authorities. At the same time, there are near universal restrictions against disclosure to the broader public. The authors note that whistleblower protection in the private sector is relatively thin, with a few exceptions in areas such as labour relations, health and environmental standards.

The second part of the report identifies policy issues and knowledge gaps. Observing that Canada lags behind its international partners in providing whistleblower protection, the authors are critical of narrow statutory definitions of whistleblowing, legal uncertainties faced by whistleblowers, and an insufficient framework for the private sector. The authors are also critical of the general lack of protection for public disclosures, noting that “internal mechanisms in government agencies are often unclear or inefficient and may fail to ensure the anonymity of the whistleblower” (at p. 5). Indeed, the authors are also critical of how existing regimes make anonymity difficult or impossible. The authors call for more research on the subject of whistleblowing, and highlight a number of important research gaps.

Among other things, the authors call for research to help draw the line between leaks, hacks and whistleblowing. This too is important given the different ways in which corporate or government wrongdoing has been exposed in recent years. There is no doubt that the issues raised in this study are important, and it is a terrific resource for those interested in the topic.

This post is the second in a series that looks at the recommendations contained in the report on the Personal Information Protection and Electronic Documents Act (PIPEDA) issued by the House of Commons Standing Committee on Access to Information, Privacy and Ethics (ETHI). My first post considered ETHI’s recommendation to retain consent at the heart of PIPEDA with some enhancements. At the same time, ETHI recommended some new exceptions to consent. This post looks at one of these – the exception relating to publicly available information.

Although individual consent is at the heart of the PIPEDA model – and ETHI would keep it there – the growing number of exceptions to consent in PIPEDA is reason for concern. In fact, the last round of amendments to PIPEDA in the 2015 Digital Privacy Act saw the addition of ten new exceptions to consent. While some of these were relatively uncontroversial (e.g. making it clear that consent was not needed to communicate with the next of kin of an injured, ill or deceased person), others were much more substantial in nature. In its 2018 report, ETHI has made several recommendations that continue this trend – creating new contexts in which individual consent will no longer be required for the collection, use or disclosure of personal information. In this post, I focus on one of these – the recommendation that the exception to consent for the use of “publicly available information” be dramatically expanded to include content shared by individuals on social media. In light of the recent Facebook/Cambridge Analytica scandal, this recommended change deserves some serious resistance.

PIPEDA already contains a carefully limited exception to consent to the collection, use or disclosure of personal information where it is “publicly available” as defined in the Regulations Specifying Publicly Available Information. These regulations identify five narrowly construed categories of publicly available information. The first is telephone directory information (but only where the subscriber has the option to opt out of being included in the directory). The second is name and contact information that is included in a professional business directory listing that is available to the public; nevertheless, such information can only be collected, used or disclosed without consent where it relates “directly to the purpose for which the information appears in the registry” (i.e. contacting the individual for business purposes). There is a similar exception for information in a public registry established by law (for example, a land titles registry); this information can similarly only be collected, used or disclosed for purposes related to those for which it appears in the record or document. Thus, consent is not required to collect land registry information for the purposes of concluding a real estate transaction. However, it is not permitted to extract personal information from such a registry, without consent, to use for marketing. A fourth category of publicly available personal information is information appearing in court or tribunal records or documents. This respects the open courts principle, but the exception is limited to collection, use or disclosure that relates directly to the purpose for which the information appears in the record or document. This means that online repositories of court and tribunal decisions cannot be mined for personal information; however, personal information can be used without consent to further the open courts principle (for example, by a reporter gathering information to use in a newspaper story).

This brings us to the fifth category of publicly available information – the one ETHI would explode to include vast quantities of personal information. Currently, this category reads:

e) personal information that appears in a publication, including a magazine, book or newspaper, in printed or electronic form, that is available to the public, where the individual has provided the information.

ETHI’s recommendation is to make this “technologically neutral” by having it include content shared by individuals over social media. According to ETHI, a “number of witnesses considered this provision to be ‘obsolete’” (at p. 27). Perhaps not surprisingly, these witnesses represented organizations and associations whose members would love to have unrestricted access to the contents of Canadians’ social media feeds and pages. The Privacy Commissioner was less impressed with the arguments for change. He stated: “we caution against the common misconception that simply because personal information happens to be generally accessible online, there is no privacy interest attached to it.” (at p. 28) The Commissioner recommended careful study with a view to balancing “fundamental individual and societal rights.” This cautious approach seems to have been ignored. The scope of ETHI’s proposed change is particularly disturbing given the very carefully constrained exceptions that currently exist for publicly available information. A review of the Regulations should tell any reader that this was always intended to be a very narrow exception with tightly drawn boundaries; it was never meant to create a free-for-all open season on the personal information of Canadians.

The Cambridge Analytica scandal reveals the harms that can flow from unrestrained access to the sensitive and wide-ranging types and volumes of personal information that are found on social media sites. Yet even as that scandal unfolds, it is important to note that everyone (including Facebook) seems to agree that user consent was both required and abused. What ETHI recommends is an exception that would obviate the need for consent to the collection, use and disclosure of the personal information of Canadians shared on social media platforms. This could not be more unwelcome and inappropriate.

Counsel for the Canadian Life and Health Insurance Association, in addressing ETHI, indicated that the current exception “no longer reflects reality or the expectations of the individuals it is intended to protect.” (at p. 27) A number of industry representatives also spoke of the need to make the exception “technologically neutral”, a line that ETHI clearly bought when it repeated this catch phrase in its recommendation. The facile rhetoric of technological neutrality should always be approached with enormous caution. The ‘old tech’ of books and magazines involved: a) relatively little exposure of personal information; b) carefully mediated exposure (through editorial review, fact-checking, ethical policies, etc.); and c) time and space limitations that tended to focus publication on the public interest. Social media is something completely different. It is a means of peer-to-peer communication and interaction which is entirely different in character and purpose from a magazine or newspaper. To treat it as the digital equivalent is not technological neutrality, it is technological nonsensicality.

It is important to remember that while the exception to consent for publicly available information exists in PIPEDA, the definition of its parameters is found in a regulation. Amendments to legislation require a long and public process; however, changes to regulations can happen much more quickly and with less room for public input. This recommendation by ETHI is therefore doubly disturbing – it could have a dramatic impact on the privacy rights of Canadians, and could do so more quickly and quietly than through the regular legislative process. The Privacy Commissioner was entirely correct in stating that there should be no change to these regulations without careful consideration and a balancing of interests, and perhaps no change at all.

The recent scandal regarding the harvesting and use of the personal information of millions of Facebook users in order to direct content towards them aimed at influencing their voting behaviour raises some interesting questions about the robustness of our data protection frameworks. In this case, a UK-based professor collected personal information via an app, ostensibly for non-commercial research purposes. In doing so, he was bound by terms of service with Facebook. The data collection was in the form of an online quiz. Participants were paid to answer a series of questions, and in this sense they consented to and were compensated for the collection of this personal information. However, their consent was to the use of this information only for non-commercial academic research. In addition, the app was able to harvest personal information from the Facebook friends of the study participants – something which took place without the knowledge or consent of those individuals. The professor later sold his app and his data to Cambridge Analytica, which used it to target individuals with propaganda aimed at influencing their vote in the 2016 US Presidential Election.

A first issue raised by this case is a tip-of-the-iceberg issue. Social media platforms – not just Facebook – collect significant amounts of very rich data about users. They have a number of strategies for commercializing these treasure troves of data, including providing access to the platform to app developers or providing APIs on a commercial basis that give access to streams of user data. Users typically consent to some secondary uses of their personal information under the platform’s terms of service (TOS). Social media platform companies also have TOS that set the terms and conditions under which developers or API users can obtain access to the platform and/or its data. What the Cambridge Analytica case reveals is what may (or may not) happen when a developer breaches these TOS.

Because developer TOS are a contract between the platform and the developer, a major problem is the lack of transparency and the grey areas around enforcement. I have written about this elsewhere in the context of another ugly case involving social media platform data – the Geofeedia scandal (see my short blog post here, full article here). In that case, a company under contract with Twitter and other platforms misused the data it contracted for by transforming it into data analytics for police services that allowed police to target protesters against police killings of African American men. This was a breach of contractual terms between Twitter and the developer. It came to public awareness only because of the work of a third party (in that case, the ACLU of California). In the case of Cambridge Analytica, the story also only came to light because of a whistleblower (albeit one who had been involved with the company’s activities). In either instance it is important to ask whether, absent third party disclosure, the situation would ever have come to light. Given that social media companies provide, on a commercial basis, access to vast amounts of personal information, it is important to ask what, if any, proactive measures they take to ensure that developers comply with their TOS. Does enforcement only take place when there is a public relations disaster? If so, what other unauthorized exploitations of personal information are occurring without our knowledge or awareness? And should platform companies that are sources of huge amounts of personal information be held to a higher standard of responsibility when it comes to their commercial dealing with this personal information?

Different countries have different data protection laws, so in this instance I will focus on Canadian law, to the extent that it applies. Indeed, the federal Privacy Commissioner has announced that he is looking into Facebook’s conduct in this case. Under the Personal Information Protection and Electronic Documents Act (PIPEDA), a company is responsible for the personal information it collects. If it shares those data with another company, it is responsible for ensuring proper limitations and safeguards are in place so that any use or disclosure is consistent with the originating company’s privacy policy. This is known as the accountability principle. Clearly, in this case, if the data of Canadians was involved, Facebook would have some responsibility under PIPEDA. What is less clear is how far this responsibility extends. Clause 4.1.3 of Schedule I to PIPEDA reads: “An organization is responsible for personal information in its possession or custody, including information that has been transferred to a third party for processing. The organization shall use contractual or other means to provide a comparable level of protection while the information is being processed by a third party.” [My emphasis]. One question, therefore, is whether it is enough for Facebook to simply have in place a contract that requires its developers to respect privacy laws, or whether Facebook’s responsibility goes further. Note that in this case Facebook appears to have directed Cambridge Analytica to destroy all improperly collected data. And it appears to have cut Cambridge Analytica off from further access to its data. Do these steps satisfy Facebook’s obligations under PIPEDA? It is not at all clear that PIPEDA places any responsibilities on organizations to actively supervise or monitor companies with which they have shared data under contract.
It is fair to ask, therefore, whether, in cases where social media platforms share huge volumes of personal data with developers, the data-sharing framework in PIPEDA is sufficient to protect the privacy interests of the public.

Another interesting question arising from the scandal is whether what took place amounts to a data breach. Facebook has claimed that it was not a data breach – from their perspective, this is a case of a developer that broke its contract with Facebook. It is easy to see why Facebook would want to characterize the incident in this way. Data breaches can bring down a whole other level of enforcement, and can also give rise to liability in class action lawsuits for failure to properly protect the information. In Canada, new data breach notification provisions (which have still not come into effect under PIPEDA) would impose notification requirements on an organization that experienced a breach. It is interesting to note, though, that the data breach notification requirements are triggered where there is a “real risk of significant harm to an individual” [my emphasis]. Given what has taken place in the Cambridge Analytica scandal, it is worth asking whether the drafters of this provision should have included a real risk of significant harm to the broader public. In this case, the personal information was used to subvert democratic processes, something that is a public rather than an individual harm.

The point about public harm is an important one. In both the Geofeedia and the Cambridge Analytica scandals, the exploitation of personal information was on such a scale and for such purposes that although individual privacy may have been compromised, the greater harms were to the public good. Our data protection model is based upon consent, and places the individual and his or her choices at its core. Increasingly, however, protecting privacy serves goals that go well beyond the interests of any one individual. Not only is the consent model broken in an era of ubiquitous and continuous collection of data, it is inadequate to address the harms that come from improper exploitation of personal information in our big data environment.

This blog post is the first in a series that looks at the ETHI Report and its recommendations. It addresses the issue of consent.

The enactment of PIPEDA in 2001 introduced a consent-based model for the protection of personal information in the hands of the private sector in Canada. The model has at its core a series of fair information principles that are meant to guide businesses in shaping their collection, use and disclosure of personal information. Consent is a core principle; other principles support consent by ensuring that individuals have adequate and timely notice of the collection of personal information and are informed of the purposes of collection.

Unfortunately, the principle of consent has been drastically undermined by advances in technology and by a dramatic increase in the commercial value of personal information. In many cases, personal information is now actual currency and not just the by-product of transactions, changing the very fundamentals of the consent paradigm. In the digital environment, the collection of personal information is also carried out continually. Not only is personal information collected with every digital interaction, it is collected even while people are not specifically interacting with organizations. For example, mobile phones and their myriad apps collect and transmit personal information even while not in use. Increasingly networked and interconnected appliances, entertainment systems, digital assistants and even children’s toys collect and communicate steady streams of data to businesses and their affiliates.

These developments have made individual consent somewhat of a joke. There are simply too many collection points and too many privacy policies for consumers to read. Most of these policies are incomprehensible to ordinary individuals; many are entirely too vague when it comes to information use and sharing; and individuals can easily lose sight of consents given months or years previously to apps or devices that are largely forgotten but that nevertheless continue to harvest personal information in the background. Managing consent in this environment is beyond the reach of most. To add insult to injury, the resignation felt by consumers without meaningful options for consent is often interpreted as a lack of interest in privacy. As new uses (and new markets) for personal information continue to evolve, it is clear that the old model of consent is no longer adequate to serve the important privacy interests of individuals.

The ETHI Report acknowledges the challenges faced by the consent model; it heard from many witnesses who identified problems with consent and many who proposed different models or solutions. Ultimately, however, ETHI concludes that “rather than overhauling the consent model, it would be best to make minor adjustments and let the stakeholders – the Office of the Privacy Commissioner (OPC), businesses, government, etc. – adapt their practices in order to maintain and enhance meaningful consent.” (at p. 20)

The fact that the list of stakeholders does not include the public – those whose personal information and privacy are at stake – is telling. It signals ambivalence about the importance of privacy within the PIPEDA framework. In spite of being an interest hailed by the Supreme Court of Canada as quasi-constitutional in nature, privacy is still not approached by Parliament as a human right. The prevailing legislative view seems to be that PIPEDA is meant to facilitate the exchange of personal information with the private sector; privacy is protected to the extent that it is necessary to support public confidence in such exchanges. The current notion of consent places a significant burden on individuals to manage their own privacy and, by extension, places any blame for oversharing on poor choices. It is a cynically neo-liberal model of regulation in which the individual ultimately must assume responsibility for their actions notwithstanding the fact that the deck has been so completely and utterly stacked against them.

The OPC recently issued a report on consent which also recommended the retention of consent as a core principle, but recognized the need to take concrete steps to maintain its integrity. The OPC recommendations included using technological tools, developing more accessible privacy policies, adjusting the level of consent required to the risk of harm, creating no-go zones for the use of personal information, and enhancing privacy protection for children. ETHI’s rather soft recommendations on consent may be premised on an understanding that much of this work will go ahead without legislative change.

Among the minor adjustments to consent recommended by ETHI is that PIPEDA be amended to make opt-in consent the default for any use of personal information for secondary purposes. This means that while there might be opt-out consent for the basic services for which a consumer is contracting (in other words, if you provide your name and address for the delivery of an item, it can be assumed you are consenting to the use of the information for that purpose), consumers must agree to the collection, use or disclosure of their personal information for secondary or collateral purposes. ETHI’s recommendation also indicates that opt-in consent might eventually become the norm in all circumstances. Such a change may have some benefits. Opt-out consent is invidious. Think of social media platform default settings that enable a high level of personal information sharing, leaving consumers to find and adjust these settings if they want greater protection for their privacy. An opt-in consent requirement might be particularly helpful in addressing such problems. Nevertheless, it will not be much use in the context of long, complex (and largely unread) privacy policies. Many such policies ask consumers to consent to a broad range of uses and disclosures of personal information, including secondary purposes described in the broadest of terms. A shift to opt-in consent will not help if agreeing to a standard set of unread terms amounts to opting in.

ETHI also considered whether and how individuals should be able to revoke their consent to the collection, use or disclosure of their personal information. The issues are complex. ETHI gave the example of social media, where information shared by an individual might be further disseminated by many others, making it challenging to give effect to a revocation of consent. ETHI recommends that the government “study the issue of revocation of consent in order to clarify the form of revocation required and its legal and practical implications”.

ETHI also recommended that the government consider specific rules around consent for minors, as well as the collection, use and disclosure of their personal information. Kids use a wide range of technologies, but may be particularly vulnerable because of a limited awareness of their rights and recourses, as well as of the long-term impacts of personal information improvidently shared in their youth. The issues are complex and worthy of further study. It is important to note, however, that requiring parental consent is not an adequate solution if the basic framework for consent is not addressed. Parents themselves may struggle to understand the technologies and their implications and may be already overwhelmed by multiple long and complex privacy policies. The second part of the ETHI recommendation which speaks to specific rules around the collection, use and disclosure of the personal information of minors may be more helpful in addressing some of the challenges in this area. Just as we have banned some forms of advertising directed at children, we might also choose to ban some kinds of collection or uses of children’s personal information.

In terms of enhancing consent, these recommendations are thin on detail and do not provide a great deal of direction. They seem to be informed by a belief that a variety of initiatives to enhance consent through improved privacy policies (including technologically enhanced policies) may suffice. They are also influenced by concerns expressed by business about the importance of maintaining the ‘flexibility’ of the current regime. While there is much that is interesting elsewhere within the ETHI report, the discussion of consent feels incomplete and disappointing. Minor adjustments will not make a major difference.

Up next: One of the features of PIPEDA that has proven particularly challenging when it comes to consent is the ever-growing list of exceptions to the consent requirement. In my next post I will consider ETHI’s recommendations that would add to that list, and that also address ‘alternatives’ to consent.

The Office of the Privacy Commissioner of Canada has released its Draft Position on Online Reputation. It’s an important issue and one that is of great concern to many Canadians. In the Report, the OPC makes recommendations for legislative change and proposes other measures (education, for example) to better protect online reputation. However, the report has also generated considerable controversy for the position it has taken on how the Personal Information Protection and Electronic Documents Act currently applies in this context. In this post I will focus on the Commissioner’s expressed view that PIPEDA applies to search engine activities in a way that would allow Canadians to request the de-indexing of personal information from search engines, with the potential to complain to the Commissioner if these demands are not met.

PIPEDA applies to the collection, use and disclosure of personal information in the course of commercial activity. The Commissioner reasons, in this report, that search engines are engaged in commercial activity, even if search functions are free to consumers. An example is the placement of ads in search results. According to the Commissioner, because search engines can provide search results that contain (or lead to) personal information, these search engines are collecting, using and disclosing personal information in the course of commercial activity.

With all due respect, this view seems inconsistent with current case law. In 2010, the Federal Court in State Farm Mutual Automobile Insurance Co. v. Canada (Privacy Commissioner) ruled that an insurance company that collected personal information on behalf of an individual it was representing in a law suit was not collecting that information in the course of commercial activity. This was notwithstanding the fact that the insurance company was a commercial business. The Court was of the view that, at essence, the information was being collected on behalf of a private person (the defendant) so that he could defend a legal action (a private and non-commercial matter to which PIPEDA did not apply). Quite tellingly, at para 106, the court stated: “if the primary activity or conduct at hand, in this case the collection of evidence on a plaintiff by an individual defendant in order to mount a defence to a civil tort action, is not a commercial activity contemplated by PIPEDA, then that activity or conduct remains exempt from PIPEDA even if third parties are retained by an individual to carry out that activity or conduct on his or her behalf.”

The same reasoning applies to search engines. Yes, Google makes a lot of money, some of which comes from its search engine functions. However, the search engines are there for anyone to use, and the relevant activities, for the purposes of the application of PIPEDA, are those of the users. If a private individual carries out a Google search for his or her own purposes, that activity does not amount to the collection of personal information in the course of commercial activity. If a company does so for its commercial purposes, then that company – and not Google – will have to answer under PIPEDA for the collection, use or disclosure of that personal information. The view that Google is on the hook for all searches is not tenable. It is also problematic for the reasons set out by my colleague Michael Geist in his recent post.

I also note with some concern the way in which the “journalistic purposes” exception is treated in the Commissioner’s report. This exception is one of several designed to balance privacy with freedom of expression interests. In this context, the argument is that a search engine facilitates access to information, and is a tool used by anyone carrying out online research. This is true, and for the reasons set out above, PIPEDA does not apply unless that research is carried out in the course of commercial activities to which the statute would apply. Nevertheless, in discussing the exception, the Commissioner states:

Some have argued that search engines are nevertheless exempt from PIPEDA because they serve a journalistic or literary function. However, search engines do not distinguish between journalistic/literary material. They return content in search results regardless of whether it is journalistic or literary in nature. We are therefore not convinced that search engines are acting for “journalistic” or “literary” purposes, or at least not exclusively for such purposes as required by paragraph 4(2)(c).

What troubles me here is the statement that “search engines do not distinguish between journalistic and literary material”. They don’t need to. The nature of what is sought is not the issue. The issue is the purpose. If an individual uses Google in the course of non-commercial activity, PIPEDA does not apply. If a journalist uses Google for journalistic purposes, PIPEDA does not apply. The nature of the content that is searched is immaterial. The quote goes on to talk about whether search engines act for journalistic or literary purposes – that too is not the point. Search engines are tools. They are used by actors. It is the purposes of those actors that are material, and it is to those actors that PIPEDA will apply – if they are collecting, using or disclosing personal information in the course of commercial activity.

Canada’s Federal Court of Appeal has handed down a decision that addresses important issues regarding control over commercially valuable data. The decision results from an appeal of an earlier ruling of the Competition Tribunal regarding the ability of the Toronto Real Estate Board (TREB) to limit the uses to which its compilation of current and historical property listings in the Greater Toronto Area (GTA) can be put.

Through its operations, the TREB compiles a vast database of real estate listings. Information is added to the database on an ongoing basis by real estate brokers who contribute data each time a property is listed with them. Real estate agents who are members of TREB in turn receive access to a subset of this data via an electronic feed. They are permitted to make this data available through their individual websites. However, the TREB does not permit all of its data to be shared through this feed; some data is available only through other means such as in-person consultation, or communications of snippets of data via email or fax.

The dispute arose after the Competition Commissioner applied to the Competition Tribunal for a ruling as to whether the limits imposed by the TREB on the data available through the electronic feed inhibited the ability of “virtual office websites” (VOWs) to compete with more conventional real estate brokerages. The tribunal ruled that they did, and the matter was appealed to the Federal Court of Appeal. Although the primary focus of the Court’s decision was on the competition issues, it also addressed questions of privacy and copyright law.

The Federal Court of Appeal found that the TREB’s practices of restricting available data – including information on the selling price of homes – had anticompetitive effects that limited the range of broker services that were available in the GTA, limited innovation, and had an adverse impact on entry into and expansion of relevant markets. This aspect of the decision highlights how controlling key data in a sector of the economy can amount to anti-competitive behavior. Data are often valuable commercial assets; too much exclusivity over data may, however, pose problems. Understanding the limits of control over data is therefore an important and challenging issue for businesses and regulators alike.

The TREB had argued that one of the reasons why it could not provide certain data through its digital feed was because these data were personal information and it had obligations under the Personal Information Protection and Electronic Documents Act to not disclose this information without appropriate consent. The TREB relied on a finding of the Office of the Privacy Commissioner of Canada that the selling price of a home (among those data held back by TREB) was personal information because it could lead to inferences about the individual who sold the house (e.g.: their negotiating skills, the pressure on them to sell, etc.). The Court noted that the TREB already shared the information it collected with its members. Information that was not made available through the digital feed was still available through more conventional methods. In fact, the Court noted that the information was very widely shared. It ruled that the consent provided by individuals to this sharing of information would apply to the sharing of the same information through a digital feed. It stated: “PIPEDA only requires new consent where information is used for a new purpose, not where it is distributed via new methods. The introduction of VOWs is not a new purpose – the purpose remains to provide residential real estate services [. . .].” (at para 165) The Court’s decision was influenced by the fact that the consent form was very broadly worded. Through it, TREB obtained consent to the use and dissemination of the data “during the term of the listing and thereafter.” This conclusion is interesting, as many have argued that the privacy impacts are different depending on how information is shared or disseminated. In other words, it could have a significant impact on privacy if information that is originally shared only on request, is later published on the Internet. Consent to disclosure of the information using one medium might not translate into consent to a much broader disclosure.
However, the Court’s decision should be read in the context of both the very broad terms of the consent form and the very significant level of disclosure that was already taking place. The court’s statement that “PIPEDA only requires new consent where information is used for a new purpose, not where it is distributed via new methods” should not be taken to mean that a new method of distribution can never amount to a new purpose that goes beyond the original consent.

The Federal Court of Appeal also took note of the Supreme Court of Canada’s recent decision in Royal Bank of Canada v. Trang. In the course of deciding whether to find implied consent to a disclosure of personal information, the Supreme Court of Canada had ruled that while the balance owing on a mortgage was personal information, it was less sensitive than other financial information because the original amount of the mortgage, the rate of interest and the due date for the mortgage were all publicly available information from which an estimate of the amount owing could be derived. The Federal Court of Appeal found that the selling price of a home was similarly capable of being derived from other publicly available data sources and was thus not particularly sensitive personal information.

In addition to finding that there would be no breach of PIPEDA, the Federal Court of Appeal seemed to accept the Tribunal’s view that the TREB was using PIPEDA in an attempt to avoid wider sharing of its data, not because of concerns for privacy, but in order to maintain its control over the data. It found that TREB’s conduct was “consistent with the conclusion that it considered the consents were sufficiently specific to be compliant with PIPEDA in the electronic distribution of the disputed data on a VOW, and that it drew no distinction between the means of distribution.” (at para 171)

Finally, the Competition Tribunal had ruled that the TREB did not have copyright in its compilation of data because the compilation lacked sufficient originality in the selection or arrangement of the underlying data. Copyright in a compilation depends upon this originality in selection or arrangement because facts themselves are in the public domain. The Federal Court of Appeal declined to decide the copyright issue since the finding that the VOW policy was anti-competitive meant that copyright could not be relied upon as a defence. Nevertheless, it addressed the copyright question in obiter (meaning that its comments are merely opinion and not binding precedent).

The Federal Court of Appeal noted that the issue of whether there is copyright in a compilation of facts is a “highly contextual and factual determination” (at para 186). The Court of Appeal took note of the Tribunal’s findings that “TREB’s specific compilation of data from real estate listings amounts to a mechanical exercise” (at para 194), and agreed that the threshold for originality was not met. The Federal Court of Appeal dismissed the relevance of TREB’s arguments about the ways in which its database was used, noting that “how a “work” is used casts little light on the question of originality.” (at para 195) The Court also found no relevance to the claims made in TREB’s contracts to copyright in its database. Claiming copyright is one thing, establishing it in law is quite another.

Note that leave to appeal this decision to the Supreme Court of Canada was denied on August 23, 2018.

An Ontario small claims court judge has found in favour of a plaintiff who argued that her privacy rights were violated when a two-second video clip of her jogging on a public path was used by the defendant media company in a sales video for a real-estate development client. The plaintiff testified that she had been jogging so as to lose the weight that she had gained after having children. She became aware of the video when a friend drew her attention to it on YouTube, and the image “caused her discomfort and anxiety” (para 5). Judge Leclaire noted that the “image of herself in the video is clearly not the image she wished portrayed publicly”.

At the time of the filming, the defendant’s practice was to seek consent to appear in its videos from people who were filmed in private spaces, but not to do so where people were in public places. The defendant’s managing associate testified that if people in public places “see the camera and continue moving, consent is implied.” (at para 9) The judge noted that it was not established how it could be known whether individuals saw the camera. The plaintiff testified that she had seen the camera, and had attempted to shield her face from view; she believed that this demonstrated that she did not wish to be filmed.

Although the defendant indicated that the goal was to capture the landscape and not the people, the judge found that “people are present and central to the location and the picture.” (at para 10) The judge found that the photographer deliberately sought to include an image of someone engaging in the activity of jogging alongside the river. Although the defendant argued that it would not be practical to seek consent from the hundreds of people who might be captured in a video of a public space, the judge noted that in the last two years, the defendant company had “tightened up” its approach to seeking consent, and now approached people in public areas prior to filming to seek their consent to appear in any resulting video.

The plaintiff argued that there had been a breach of the tort of intrusion upon seclusion, which was first recognized in Ontario by the Ontario Court of Appeal in Jones v. Tsige in 2012. Judge Leclaire stated that the elements of the tort require 1) that the defendant’s actions are intentional or reckless; 2) that there is no lawful justification for the invasion of the plaintiff’s private affairs or concerns; and 3) that the invasion is one that a reasonable person would consider to be “highly offensive causing distress, humiliation or anguish.” (Jones at para 71) Judge Leclaire found that these elements of the tort were made out on the facts before him. The defendant’s conduct in filming the video was clearly intentional. He also found that a reasonable person “would regard the privacy invasion as highly offensive”, noting that “the plaintiff testified as to the distress, humiliation or anguish that it caused her.” (at para 16)

Judge Leclaire clearly felt that the defendant had crossed a line in exploiting the plaintiff’s image for its own commercial purposes. Nevertheless, there are several problems with his application of the tort of intrusion upon seclusion. Not only does he meld the objective “reasonable person” test with a subjective test of the plaintiff’s own feelings about what happened, his decision that capturing the image of a person jogging on a public pathway is an intrusion upon seclusion is in marked contrast to the statement of the Ontario Court of Appeal in Jones v. Tsige, that the tort is relatively narrow in scope:

A claim for intrusion upon seclusion will arise only for deliberate and significant invasions of personal privacy. Claims from individuals who are sensitive or unusually concerned about their privacy are excluded: it is only intrusions into matters such as one's financial or health records, sexual practises and orientation, employment, diary or private correspondence that, viewed objectively on the reasonable person standard, can be described as highly offensive. (at para 72)

Judge Leclaire provides relatively little discussion about how to address the capture of images of individuals carrying out activities in public spaces. Some have suggested that there is simply no privacy in public space, while others have called for a more contextual inquiry. Such an inquiry was absent in this case. Instead, Judge Leclaire relied upon Aubry v. Vice-Versa, a decision of the Supreme Court of Canada, even though that decision was squarely based on provisions of Quebec law which have no real equivalent in common law Canada. The right to one’s image is specifically protected by art. 36 of the Quebec Civil Code, which provides that it is an invasion of privacy to use a person’s “name, image, likeness or voice for a purpose other than the legitimate information of the public”. There is no comparable provision in Ontario law, although the use of one’s name, image or likeness in an advertisement might amount to the tort of misappropriation of personality. In fact, with almost no discussion, Judge Leclaire also found that this tort was made out on the facts and awarded $100 for the use of the plaintiff’s image without permission. It is worth noting that the tort of misappropriation of personality has typically required that a person have acquired some sort of marketable value in their personality in order for there to be a misappropriation of that value.

Judge Leclaire awarded $4000 in damages for the breach of privacy, which seems to be an exorbitant amount given the range of damages normally awarded in privacy cases in common law Canada. In this case, the plaintiff was featured in a 2-second clip in a 2-minute video that was taken down within a week of being posted. While there might be some basis to argue that other damage awards have been too low, this one seems surprisingly high.

It is also worth noting that the facts of this case might constitute a breach of the Personal Information Protection and Electronic Documents Act (PIPEDA) which governs the collection, use or disclosure of personal information in the course of commercial activity. PIPEDA also provides recourse in damages, although the road to the Federal Court is a longer one, and that court has been parsimonious in its awards of damages. Nevertheless, given that Judge Leclaire’s preoccupation seems to be with the unconsented-to use of the plaintiff’s image for commercial purposes, PIPEDA seems like a better fit than the tort of intrusion upon seclusion.

Ultimately, this is a surprising decision and seems out of line with a growing body of case law on the tort of intrusion upon seclusion. As a small claims court decision, it will carry little precedential value. The case is therefore perhaps best understood as one involving a person who was jogging at the wrong place at the wrong time, but who sued in the right court at the right time. Nevertheless, it should serve as a warning to those who make commercial use of footage filmed in public spaces, as it reflects a perspective that not all activities in public spaces are ‘public’ in the fullest sense of the word. It highlights as well the increasingly chaotic privacy legal landscape in Canada.