What legal and ethical risks do you think would be most surprising to the average psychologist in clinical practice?

When I talk with other mental health providers, I am frequently impressed by the consideration and care they take to protect patients. Our hope in writing this article was to explain some concerns and provide mitigating steps for practitioners using technology. The article was not intended to alarm, but rather to inform and encourage mental health providers to take action to protect patients. Perhaps what might be most surprising is that, despite these risks, we believe technology has an important role to play in clinical practice. More specifically, concepts such as "HIPAA compliance" (a term that does not signify a government-certified or government-approved technology), cross-border data storage (e.g., WISC-V data being stored in Canada), and unintended recipients of messages (e.g., concern for intimate partner violence) may be novel for providers.

What are 3 things psychologists need to do right now to protect themselves and their patients’ information?

This is a great question, as distilling these ideas down to applied concepts was an essential part of this article. First, providers should read privacy policies, terms of service, and any other conditions they are agreeing to that affect patients. Reading these documents is arduous and time-intensive, but I know of no certified body that legally reviews them on behalf of practicing psychologists. Thus, the responsibility is in our hands. Second, informed consent documents and processes should be modified to account for any technology that might be utilized. Providers should explain to clients, in both writing and speech, the advantages, disadvantages, and other considerations of technology use (e.g., "If we text message between sessions, you will be able to contact me should an emergency arise, but I may only respond during 8-5 business hours. Additionally, how can we protect your device from being accessed by unintended recipients?"). Third, begin consulting lawyers, legal experts, and/or tech-focused psychologists about practices involving technology, and check in with others before adopting new technologies or methods of use. I would love to see a greater sense of collaboration and openness regarding the sharing of tools and technologies on professional listservs, forums, and other message boards.

Can we design electronic systems to encourage youth to be more engaged in their health care?

Absolutely! This aspect of care is one of the most exciting and revolutionary at this time. For instance, Apple recently added patient medical records to their "Health" app on iOS. Essentially, patients can log in to their electronic medical records and see what reports and results have been generated. This kind of direct access to information strengthens the connection between providers and patients, fosters autonomy and investment in health, and helps coordinate future meetings. Placing tools and information where patients are – in their hands – is one of the most exciting frontiers for care.

Has insurance billing fully caught up with these technological options in providing psychotherapy?

I should preface this response by saying this is outside my area of research, practice, and competence. As I understand billing, text messages and emails are not services reportable to insurance. That is a fundamental problem for providers trying to meet clients where they are. If you receive a text message expressing suicidal ideation from a patient, will you receive any compensation for your responses? I do not know how insurance accounts for these between-session interactions. However, I am more optimistic about televideo/telemental psychotherapy, as many insurance providers seem to have recognized the analogous nature of remote versus in-person clinical practice.

What is most important for psychologists collecting research or clinical data to keep in mind about how to responsibly manage electronic data?

As any good psychologist would say, "It depends." Research is generally tied to IRBs at institutions of higher learning, and flexibility in practices should be directed by those working at the college/university. Individual researchers may be restricted in initiating their own privacy and security practices, which might actually be a net positive for participant data. Within private practice, the degrees of freedom open up: I could write notes by hand or place digital notes on a computer without Internet access. All of these efforts might be appropriate mitigating steps depending on the scenario (e.g., a politician who has been targeted regarding their personal information). The most appropriate steps – in any situation – include documenting efforts to protect privacy, reading third-party policies, consulting with other providers, and having thorough decision-making models for using technology.

Facebook and Cambridge Analytica have been front and center in the news lately for how they (mis)handled research data. Given your expertise with electronic data usage, what lessons have you taken from this situation?

Fundamentally, if a provider is not paying for a service associated with their clinical practice or research, they are likely giving away patient/participant data. When a service is "free," companies must make money indirectly, and in this Information Age, data is the currency of choice. When you pay for a service, you are likely choosing a different business model for yourself and your patients. For instance, Google Drive has a free, personal edition. Store your patient data in the free version, and you are likely sharing more information about your caseload than allowed or intended (i.e., Google reads the names and information to better advertise to you and learn from what you write). Pay a monthly fee for G Suite (minimum $5 per month), and you can sign a HIPAA-compliant Business Associate Agreement (BAA) with Google. This upfront payment changes data-handling procedures and prevents the company from harvesting the data in your account (i.e., email, calendar, and drive). Social media is no different. We use their "free" services and quickly click through their privacy policies and terms and conditions, but these companies must make money to pay for, and profit from, it all. Facebook is a publicly traded company with shareholders and a corporate structure, and there is no way to pay for greater privacy.

Further, what advice would you give psychologists who maintain a professional social media identity across different platforms (e.g., Facebook, Twitter, LinkedIn)?

Accessibility is key for collaboration and initial contact between researchers, psychologists, and patients. Social media is one arm for marketing and coordination, but it should not be the mechanism for patient contact. Facebook is wonderful at what it does – the same goes for Twitter and LinkedIn. But none of these platforms' stated missions or purposes includes anything about clinical practice or research. I cannot blame a practitioner for trying to market and attract new business via social media. However, I would highly recommend alternative methods (e.g., a website, a Google Maps address/AdWords, and developing a local network within the community for referrals). My advice is informed by a history of awful leaks at the hands of social media companies. These companies retain data for unknown periods of time – potentially forever – with no consumer-facing way to delete files. When we think about protecting patient privacy and confidentiality, a forever-retained data policy should scare us.

Author Bio

Samuel Lustgarten is a doctoral candidate at the University of Iowa’s Counseling Psychology Ph.D. program. His research aims to better understand ethical, legal, and training ramifications of using technology in practice. Samuel is currently completing his doctoral internship at the University of Wisconsin’s University Health Service. His work has been published in American Psychologist, Professional Psychology: Research and Practice, and Clinical Psychology: Science and Practice.