An often overlooked but essential component of asset-protection planning is a careful assessment of a client’s financial status. Although timing is generally considered the most critical element of asset protection, due consideration must be given to both the value of assets exchanged and the effect of plan implementation on one’s balance sheet.

A fraudulent transfer claim is often the most effective weapon to disrupt an asset protection plan and reach assets transferred beyond a creditor’s grasp. Creditors can prove a fraudulent transfer by showing actual or constructive intent to hinder collection under the Uniform Fraudulent Transfer Act.1 Actual intent involves a debtor’s state of mind and a subjective analysis of whether the debtor intended to avoid a claim.2 In the context of a lawsuit, proving actual intent is unpredictable because it generally involves trying to prove a defendant’s thoughts and intentions. By contrast, “constructive” intent generally involves an objective, two-part test. First, a debtor must have transferred assets in exchange for less than “reasonably equivalent value.”3 Additionally, the asset transfer must generally have left the debtor “insolvent” or undercapitalized to carry on business.4 Constructive intent, thus, generally involves an objective analysis of financial values. This objective analysis is often the most expeditious method of attacking a transfer as fraudulent.

A professional analysis of “reasonably equivalent value” and “solvency,” in connection with a protective transfer, can bolster and clarify asset-protection planning. The same financial analysis is integral to proving a constructive fraudulent transfer.

Reasonably Equivalent Value

Determining whether the debtor received reasonably equivalent value for assets transferred essentially involves a value comparison of “what went out” versus “what was received.”5 Reasonably equivalent value does not necessarily mean equal value.6 It is often dependent on the circumstances.7 Timing could be a consideration. Property that must be quickly sold is likely to fetch far less than if listed, advertised, and marketed under regular market conditions.

Another consideration is the perspective from which value is determined. Creditors will argue that assets received by a debtor must be excluded from the solvency calculation to the extent exempt from creditor claims. A debtor may, for example, argue that receipt of protected limited liability company (LLC) equity constitutes reasonably equivalent value received in exchange for the transfer of exposed cash or other personal assets into the LLC; or that receipt of an interest in a protected financial account (such as an IRA) is an equivalent exchange for cash divested to the account. Although the law is unsettled, courts have suggested that the determination of whether the debtor received reasonably equivalent value should be made from the standpoint of the creditor.8 In a case arising in Florida, the 11th Circuit determined that transfers were not made for “reasonably equivalent value” when they drained assets that would otherwise have been available to creditors.9

Based on this interpretation, an ownership interest received by the debtor but not available to a creditor would not satisfy the act’s definition of equivalent value. Similarly, receipt of legally exempt or protected assets (in exchange for divested exposed assets) would not count as “reasonably equivalent value,” because the assets received would have no value from a creditor’s perspective. As an example, the conversion of cash from a personal checking account to a legally exempt IRA would not constitute an exchange for reasonably equivalent value in the fraudulent transfer context because the IRA is beyond the creditor’s reach. The contribution of cash into an LLC in exchange for an LLC membership interest is also not likely to be deemed reasonably equivalent value unless the interest is subject to foreclosure by a creditor.

Insolvency

The second element of a constructive fraudulent transfer requires proof that a transfer rendered the debtor insolvent. A debtor is considered insolvent if the sum of his or her debts is greater than the fair value of his or her assets.10 This is generally referred to as “balance sheet” insolvency.11 The meaning of “fair value” in this context is not defined, but has been described as the sale of assets in a reasonable time at regular market value.12 Some parties may argue that assets should be valued according to generally accepted accounting principles (GAAP). Courts have ruled that GAAP may be potentially relevant, but is not controlling.13 The law requires that judges, not accountants or the board that promulgates GAAP, be the final arbiters of solvency.14 There are several legal considerations that veer from GAAP standards. For example, exempt assets are generally excluded from the calculation. Liabilities may be treated differently than in GAAP financial reporting. Valuations involving contingent assets and liabilities and disputed claims at the time of a transfer also warrant special consideration.
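To make the balance-sheet comparison concrete, the short Python sketch below nets a debtor’s non-exempt assets against total debts in the manner described above. The dollar figures, asset categories, and exemption labels are hypothetical and chosen only for illustration; in practice the valuations themselves are the hard part and call for professional appraisal.

    # Hypothetical balance-sheet insolvency check (illustration only).
    # Exempt assets (for example, an IRA or homestead, depending on state law)
    # are excluded from the "fair value of assets" side of the test.

    assets = {
        "checking account": (150_000, False),   # (fair value, exempt?)
        "rollover IRA":     (400_000, True),
        "rental property":  (650_000, False),
    }
    liabilities = {
        "mortgage on rental property": 500_000,
        "unsecured business debt":     350_000,
    }

    countable_assets = sum(value for value, exempt in assets.values() if not exempt)
    total_debts = sum(liabilities.values())

    print("Countable assets:", countable_assets)                     # 800,000
    print("Total debts:", total_debts)                               # 850,000
    print("Balance-sheet insolvent?", total_debts > countable_assets)  # True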

Value of Contingent Claims

While the dollar value of contingent claims may be reasonably certain, the liability may never arise. Guarantees represent a classic example of contingent claims. Liability occurs only if the primary obligor fails to perform. In the solvency analysis (to determine if a transfer is constructively fraudulent), contingent claim values depend on the likelihood and timing of liability. A loan guarantee may be considered a greater liability if a default occurs (or is more likely to occur) tomorrow rather than a year from now. Hindsight is generally not a factor in valuing contingent claims.

Under a “probability discount rule,” the “fair value” of a contingent liability “should be discounted according to the possibility of its ever becoming real.”15 In one case, a court gave the following example, where Company A, valued at $1.7 million, guaranteed a $28 million loan to Company B:

“Suppose that on the date the obligations were assumed there was a 1 percent chance that [Company A] would ever be called on to yield up its assets to creditors of [Company B]. Then the true measure of the liability created by these obligations on the date they were assumed would not be $28 million; it would be a paltry $17,000. For at worst [Company A] would have to yield up all of its assets (net of other liabilities), that is, $1.7 million, and the probability of this outcome is by assumption only 1 percent....”16

According to the court in Matter of Xonics Photochemical, Inc., 841 F.2d 198 (7th Cir. 1988), the proper value of the $28 million contingent claim was, arguably, not $28 million, but only $17,000. Alternatively, under the same assumptions, it might be reasonable to assign a $280,000 value to the contingent claim, based on a 1 percent chance of loan default. Either way, as this analysis suggests, valuation of a contingent claim is not an exact science.
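The two figures discussed above follow directly from the probability-discount arithmetic. The short Python illustration below uses the Xonics numbers; the choice between capping the expected loss at the guarantor’s net assets (the court’s approach) and discounting the full face amount of the guarantee is the judgment call described above.

    # Probability-discounted value of a contingent guarantee, using the Xonics figures.
    guarantee_amount = 28_000_000        # face amount of the guaranteed loan
    guarantor_net_assets = 1_700_000     # the most the guarantor could ever lose
    probability_of_default = 0.01        # assumed 1 percent chance the guarantee is called

    # Court's approach: worst case is losing all net assets, discounted by probability.
    value_capped_at_net_assets = probability_of_default * guarantor_net_assets   # 17,000

    # Alternative noted above: discount the full face amount of the guarantee instead.
    value_of_full_guarantee = probability_of_default * guarantee_amount          # 280,000

    print(value_capped_at_net_assets, value_of_full_guarantee)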

Effect of Collateral

Another issue affecting valuation of contingent claims is whether such claims are fully or partially secured (and the stability of the value of such security). The real estate crash of the mid-2000s made clear the issue of stability in the value of collateral. When mortgage loans went into default, many first-position lenders with an initial 70 to 80 percent loan-to-value (LTV) ratio were left with partially unsecured loans after real estate values fell 50 percent or more. Many second- and third-position mortgagees were left with claims that were, for all practical purposes, entirely unsecured. In a strong market, real estate encumbered by first- and second-mortgage loans may have a neutral or positive effect on a balance sheet. The same loans may result in hefty liabilities, far in excess of the value of the real estate to which they attach, in a depressed market.
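As a rough illustration of how a decline in collateral value converts a fully secured loan into a partly unsecured one, consider the hypothetical calculation below (the loan balance, original value, and percentage decline are assumed solely for illustration):

    # Hypothetical deficiency on a mortgage loan after a market decline.
    loan_balance = 800_000                    # 80 percent LTV at origination
    original_value = 1_000_000
    declined_value = original_value * 0.5     # 50 percent drop in market value

    secured_portion = min(loan_balance, declined_value)     # 500,000
    unsecured_deficiency = loan_balance - secured_portion   # 300,000

    # In a depressed market, the unsecured portion becomes a real liability for
    # guarantors and junior lienholders, even though the loan looked fully
    # secured when it was made.
    print(unsecured_deficiency)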

This argument was raised successfully in BB&T v. Hamilton Greens, LLC, 2016 WL 3365270 (Bankr. S.D. Fla. 2016). Three months after a $3 million construction loan default, the bank sued the developer and the loan guarantors. Six months after the lawsuit was filed, one of the guarantors created an offshore trust into which he transferred most of his assets (nearly $1.7 million). After obtaining a $4.9 million judgment, the bank sought to invalidate the transfers to the trust as fraudulent transfers.

The defendant testified to several circumstances supporting his position that he had no reasonable expectation that he would owe any money to the bank arising from the development loan. The property securing the debt was (at the time of the loan) valued at $15 million, his co-guarantors had a combined net worth of over $100 million, and his co-guarantors had indemnified him against the claim and discharged his other contingent liabilities. Relying on these and other factors bearing on the debtor’s intent, and considering the minimal likelihood of liability at the time the assets were transferred, the court ruled that the transfers to the trust were not fraudulent transfers.17 The ruling supports the premise that a debt guaranty constitutes a personal liability (in the solvency calculation) only to the extent that the debt is unsecured at the time of the alleged fraudulent transfer.

Value of Disputed Claims

Claims may be disputed as to liability, value, or both. Malpractice, personal injury, product liability, and environmental claims generally fall into this category. Disputed claims differ from contingent claims in that the latter are based on the occurrence of future events that may not occur.18

In In re Babcock & Wilcox Co., 274 B.R. 230, 262 (Bankr. E.D. La. 2002), a court was required to value future asbestos liabilities and noted that various methodologies would yield a wide range in results. Multiple variables, including exposure estimates, exposure intensity levels, causation, latency periods, product identification, etc., made the endeavor quite problematic. One of the more difficult tasks for the court was determining whether hindsight was appropriate since, after the fact, it was known that claims had apparently ripened at a much higher rate than predicted (and, thus, forced the debtor into bankruptcy). Ultimately, the court determined that hindsight was inappropriate and the court was left only to determine whether Babcock’s financial predictions were reasonable under the circumstances existing at the time they were made.19

Some courts have, however, considered subsequent events (such as claim rates, default rates, or the value at which an asset or liability is ultimately negotiated) in assessing whether a party’s valuation estimates were reasonable when made.20 Thus, in calculating insolvency, a court could consider future circumstances (unknown to the debtor at the time assets are transferred) as relevant for determining whether to accept a party’s own valuation estimates for disputed claims. One prudent approach (to diminish judicial hindsight) is to make clear the debtor’s reliance (as a condition to transfer) on a professional assessment of value.

Value of Assets

The value of assets in the fraudulent transfer context may be different than under accounting or regulatory definitions.21

Values determined by GAAP, SEC rules, and IRS revenue procedures may be unreliable because tax and accounting values on a balance sheet often have little to do with fair value in legal terms. The book value of retail inventory may be subject to an upward adjustment in anticipation of selling at retail prices. Assets such as preferred distributions, debt-equity conversion rights, and voting control may have substantially different book values than fair values. Assets may also be valued individually or packaged in groups based on business considerations. Also, traditional GAAP rules for treating debt as equity, and vice versa, may not be relevant in determining whether an instrument is truly debt or equity in a fraudulent transfer analysis.22

In EBC I, Inc. v. America Online, Inc. (In re EBC I, Inc.), 380 B.R. 348, 358 (Bankr. D. Del. 2008), both parties in a fraudulent conveyance action cited IRS Revenue Procedure 77-12 to support their arguments valuing the inventory of a retail business (for solvency purposes). One question was whether to use book value for inventory or to make an upward adjustment, because the inventory was to be sold at retail prices. The court was not persuaded to follow the revenue procedure, finding “[r]evenue [p]rocedures, like [GAAP], to be unhelpful because the tax and accounting implications of how assets are listed on a company’s balance sheet often have little to do with what a willing buyer and willing seller would agree is the fair market value of those assets.”23

The fair value of a debtor’s business assets may depend upon whether a company is a “going concern or on its deathbed.”24 The assets of a going concern are more likely to be sold in a prudent manner with reasonable time constraints and, thus, equate to a deliberate and calculated fair-market value.25 Liquidation value, however, may be more appropriate for valuing assets of a desperate debtor on its financial “deathbed.”26

Other Considerations

Additional issues to consider are the impact and cost of insurance on liabilities, rights against third-party indemnities and guarantors, and potential claims against third parties.

Ultimately, in deciding whether a debtor is solvent, a court should ask: What would a willing buyer pay for the debtor’s entire package of assets and liabilities (excluding creditor-protected assets and exempt assets)? To this question, the court in In re TOUSA, Inc., 422 B.R. 783 (Bankr. S.D. Fla. 2009), answered that if the price is positive, the debtor may be solvent; if the price is negative, the debtor may be insolvent.27 Arriving at the result may involve as much art as science and will certainly require the expertise of a seasoned appraiser.

Conclusion

Assessment of financial status is essential to avoid fraudulent transfer claims. Receipt of reasonably equivalent value and maintaining client solvency are as critical as the timing of transfers. Determination of those issues may be highly subjective. The rulings discussed above have set the stage for debtors to defend personal planning by relying on determinations of value and solvency. Careful consideration of these issues could mean the difference between preserving or destroying the efficacy of an asset protection plan.

4 See Fla. Stat. §726.105(1)(b)(1) and (2); UFTA §4(a)(i) and (ii); see also §726.106, which provides similar modes of proving a fraudulent transfer (as to present, but not future, creditors).

5 In re Vilsack, 356 B.R. 546, 553 (Bankr. S.D. Fla. 2006). All states have adopted some form of the UFTA (or its successor, the Uniform Voidable Transactions Act) and courts routinely look to other states and to analogous provisions under the Federal Bankruptcy Code to interpret provisions of the act. See ASARCO LLC v. Americas Mining Corp., 396 B.R. 278, n.49, citing Creditor’s Comm. of Jumer’s Castle Lodge, Inc. v. Jumer, 472 F.3d 943, 947 (7th Cir. 2007); In re W.R. Grace & Co., 281 B.R. 852, 857 (Bankr. D. Del. 2002) (stating the court could seek guidance from cases interpreting similarly worded statutes, like the Bankruptcy Code); see also In re Tower Envtl., Inc., 260 B.R. 213, 222 (Bankr. M.D. Fla. 1998) (noting that Florida UFTA statutes are similar in form and substance to their bankruptcy analogs and stating that it is, thus, “appropriate to analyze the similar provisions of the state statutes and [the Bankruptcy Code] contemporaneously”). As such, we look to multiple states and bankruptcy courts in attempting to predict how a particular court might interpret provisions under the Florida act.

It has been said that the term “eavesdropper” evolved from those who stood under the eaves of a house to surreptitiously listen to the goings-on inside. In this age of digital advancement, we now invite eavesdroppers into our homes and offices in the form of artificially intelligent digital assistants. While devices and services like Google Home, Apple’s Siri, and the Amazon Echo offer great convenience and enjoyment, there are privacy trade-offs, and some are less obvious than others.

Data generated from user interactions with artificially intelligent digital assistants is typically captured and sent to the service provider’s cloud for storage and processing. This data, which we voluntarily provide, can then be analyzed and used by the service provider in machine learning to develop and strengthen artificial intelligence systems. Data is vital for this. Machines need data to learn—the more the better—and digital assistants have the power to capture vast amounts of it. Few consider, however, what happens to the data we provide or, for that matter, even the type of data we provide.

Digital assistants obviously capture voice data from the user which can be converted to text. But less obvious is the information captured about the user. Information about you is much richer than mere text. Do you engage your digital assistant at the same time every morning? Do you speak with an accent? What type of mood were you in when you asked for that Van Morrison song? Do you regularly turn down your “smart” thermostat and dim your lights at the same time each evening, except on Saturdays? What type of ambient background noise is typically present? How many people live in your home? Are there any children?

In addition to text-based information, digital assistants might capture your voice tone, inflection, volume, and behavioral patterns. Data about user interactions has the potential to be incredibly valuable. Big data has become a catch-phrase for the industry of companies collecting, analyzing, and processing vast quantities of data. Some companies, like Soul Machines and Air New Zealand, are already working on creating machines that can detect human emotion and communicate empathically. While this may improve customer service experiences, it may also be used to influence shopping and travel habits, persuade viewing and entertainment preferences, and perhaps even predict—or manipulate—elections.

Here in the United States, we enjoy a right to be free from government intrusion into our private lives, but that right exists only when we have a reasonable expectation of privacy. What privacy expectations are reasonable when we share so much about ourselves with a digital assistant? European law is wrestling with some of these issues, but outside of the health care and financial arenas, and companies targeting children, U.S. legal doctrine is not currently well-equipped to deal with the treatment of big data and the companies who collect and use it. For now, courts will need to deal with issues on a case-by-case basis.

To determine your rights in the information you provide to your digital assistant, you’ll need to consult with the service provider’s terms of service and privacy policy. You may find, however, that your privacy expectations are not supported by a service provider’s actual terms. Both Google and Amazon disclose that the content of your requests may be shared with third parties. Other companies likely have similar policies. To what extent, if any, can users reasonably hold any expectation of privacy from government monitoring or private sharing?

Beyond the content of your requests, what about other information about you? Companies tend to be less clear about what happens with the non-text based data they capture, store, and share about their users. Besides your written interactions, what other information does your service provider share? Usage habits? Calendar details? Shopping lists? These are some of the questions to ponder when using your digital assistant.

We live in the information age, the age of big data. This data can be used to enrich our lives, but it also has the potential to provide vastly more information about users than what users expressly intend. By now, many are aware of Amazon’s fight with law enforcement over the disclosure of Echo transcripts. How many also know that police relied on data from a smart water meter showing abnormally high water usage as evidence in their investigation? As companies continue to collect data about users, and as predictive models for this data are fine-tuned, courts will need to redefine the parameters of reasonableness when it comes to the expectation of privacy.

For now, users need to be aware of the privacy paradox offered by artificially intelligent digital assistants. As the saying goes, if you don’t know what the product is, then you are the product. The question is ripe with respect to digital assistants: Are we the customer, or are we the product? Or, is it a combination of both?

Devices like Amazon Echo, Google Home and the forthcoming Apple HomePod are bringing artificial intelligence to the masses. They offer the potential to increase our efficiency by managing our calendar, contacts and to-do lists. With a simple verbal command, they can bring us customized news briefings and stock market reports and even brighten us up with music and jokes. I am a fan, but if you decide to invite one of these devices into your daily routine, you need to understand the privacy implications.

Imagine you are on the lookout for the perfect stock. Each morning, you awake at 4 a.m., listen to a customized “flash briefing” from your Amazon Echo, and then begin your research (The Amazon Echo is a voice-controlled digital assistant that exhibits a weak form of artificial intelligence and responds to the “wake word,” “Alexa.”). While reviewing search alert results, one company catches your attention: Blue Star Airlines (BLUSTARR). A legal analyst suggests that BLUSTARR, a small-cap regional player, is about to get a favorable ruling which will end a government investigation that has long hindered the company’s prospects.

You dig deeper, learning everything you can about BLUSTARR. Perhaps you have another digital assistant at the office—one created specifically for financial advisors (we’ll call it the “Gecko Terminal” or “GT”; to date, there is no such device, but with the fast pace of technology, more specialized devices are sure to hit the market). Every day, for a week, you ask your GT for more specifics about BLUSTARR: price updates, financial analysis, company news and announcements, etc.

Armed with your information, you are ready to make a move when you notice something odd. Your routine “flash briefing” now includes updates about BLUSTARR. Your colleagues are beginning to talk more about the stock and both of your digital assistants, Echo at home and GT at the office, provide BLUSTARR updates without being asked. Why, now, is there so much talk about this relatively obscure company? To answer this question, you need to consider what happens to the data created from interactions with a voice-controlled digital assistant (“VDA”).

First, understand that VDAs capture not only voice data, but much more. They can capture data from other connected devices, such as calendar entries, location data and web search history. Other information about your interactions, such as voice tone, inflection, volume and behavioral patterns can also be captured. And this data generates its own set of data called metadata (generally, data about data). This may include the time of recording, size or length of an audio file, and the identity and geolocation of the person requesting information.
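As a rough sketch of the kinds of information a single interaction might generate, the hypothetical Python record below pairs a transcript with the surrounding metadata described above. Every field name and value is invented for illustration; actual providers do not publish their internal schemas in this form.

    # Hypothetical record for one interaction with a voice-controlled digital assistant.
    # All fields below are invented for illustration.
    interaction = {
        "transcript": "ask GT for a price update on BLUSTARR",
        "audio": {
            "duration_seconds": 2.4,
            "file_size_bytes": 38_400,
            "inferred_tone": "focused",     # prosody features such as tone and volume
        },
        "metadata": {
            "timestamp": "2017-06-12T06:02:14Z",
            "device_id": "gt-office-01",
            "account_id": "user-84721",
            "geolocation": "Orlando, FL",
        },
        "context": {
            "ambient_noise": "low",
            "linked_calendar_event": "8:00 a.m. client call",
            "recent_requests": ["BLUSTARR news", "BLUSTARR financials"],
        },
    }
    print(interaction["metadata"]["timestamp"])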

This data helps make your VDA smarter. Emerging patterns may help personalize your user experience. If you regularly interact with your VDA at certain times of the morning and evening, your VDA may learn to turn on the lights, start your coffee and begin your flash briefing at the same time each day. In our example above, the frequency with which you suddenly start asking for updates on BLUSTARR might suggest that there is something special about the company or its stock. Your VDA (through the service provider) may take note of this, learn your preference, and adjust flash briefings accordingly. These devices operate on principles of machine learning that are designed to interpret your data and tailor responses to your interests.

There are obvious tradeoffs. To get the most utility from your digital assistant, you must give up some level of privacy. But what type of data is your service provider capturing? And, who has access to this data? Is your data available to third parties?

Consider that artificial intelligence and sentiment analysis are already being deployed for investment research. Is it possible that your stock research data might be shared with third parties, sold to data brokers, or end up being used by your competitors? What are your rights to limit the use and dissemination of such data?

In our example above, your research could be shared so that, if patterns develop, other VDAs and their service providers take note and prioritize news and information for their users accordingly. This could explain, in our hypothetical, why BLUSTARR seems to be mentioned more frequently on your colleagues’ (and competitors’) VDAs.

You may be surprised to discover that, outside of certain regulated areas (Notable exceptions include medical records covered by the Health Insurance Portability and Accountability Act of 1996 [HIPAA] Privacy Rule, financial records covered by the Gramm-Leach-Bliley Act [GLBA], and data pertaining to children under age 13, covered by the Children’s Online Privacy Protection Act [COPPA].), U.S. legal doctrine is not well-equipped to deal with the storage, use, and sharing of data collected by VDAs. If you use one of these devices, take time to review your service provider’s terms of service and privacy policy. These constitute the contract between you and your service provider, which governs your rights and obligations when using their devices. Read the “Q & A” sections and consider whether the explanations are consistent with your expectations. If not, your expectations may be unreasonable.

You may be surprised to discover that some of your data is shared. Some service providers remain silent or vague as to how they capture, store, and use your data, as well as with whom they share it. As for my own expectations, absent a stated policy to the contrary, I assume a service provider has little interest in keeping my information private. Big data is big business.

Americans place a high value on privacy, dating back to the foundation of our country and the Fourth Amendment right to be secure in our “persons, houses, papers, and effects.” Interestingly, the word “privacy” is not found in the Fourth Amendment. Over time and through legal battles, however, courts have come to recognize a fundamental “zone of privacy” contained within the “penumbra” of rights protected by the Constitution.[1] Now, with advances in artificially intelligent devices and machine learning, individuals willingly sacrifice that hard-fought privacy in return for the many conveniences offered by “smart” digital assistants.

Digital assistants can be wonderfully helpful. They offer the potential to make us more efficient by managing our calendar, contacts and to-do lists. They can bring us the news and weather with a verbal command and can also brighten us up with music and jokes. I have an Amazon Echo at home and considered adding a similar device to my office. But if you consider doing so, be sure to understand that this convenience comes with substantial privacy implications.[2] A digital assistant remains faithfully at your beck and call because it is always “listening.” Apple Inc.’s Siri, Microsoft Corp.’s Cortana, and other digital assistants employ passive listening technology that keeps these devices in constant standby (listening) mode. Google Inc. explains that its Home device listens to snippets of conversations to detect a "hotword.”[3] Amazon.com Inc.’s Echo begins streaming to the cloud “a fraction of a second of audio before the wake word” is spoken.[4] When the wake word is detected, our assistants swing into action, ready to capture and process our inputs to provide the best results.

Service providers typically disclose that they may track and record user interactions to improve the user experience. This allows our devices to become smarter. How? By learning more about us — our preferences, habits and routines. With some devices, you may be able to review and delete your interaction history, but that comes with tradeoffs. Apple cautions that if you delete user data associated with Siri, the “learning process will start all over again.”[5] Deleting your interaction history in Google will limit the personalized features of your Google Assistant. Amazon similarly explains that deleting your voice recordings “may degrade your Alexa experience.”

Digital assistants capture more than just voice data. Microsoft explains that Cortana is most helpful when “you let her use data from your device.” This may include web search history, calendar and contact entries, and location data. It may also include demographic data, such as your age, gender and ZIP code. You may be able to limit access to personal data (for instance, by not signing in to your Microsoft account), but like deleting your verbal interactions, you will lose the benefits of personalization and your experience will be more generic.

Optimum use of a digital assistant requires that you share personal data and thus give up some level of privacy. Beyond enhancing the user experience, however, users may question how else service providers might process, analyze and share our data. What are your rights to restrict the use and dissemination of collected data? Can private parties or the federal government obtain your data through a subpoena, search warrant or court order? Could a government agency simply purchase our data from service providers or data brokers? To challenge a search under the Fourth Amendment, you must have a reasonable expectation of privacy. Is such expectation reasonable in the presence of an always-listening digital assistant? While these devices are generally designed only to record information once a designated "wake" word is spoken, few consider the practical reality that to detect the wake word, the device must always be "listening" for it.

What if a device is unintentionally activated? Accidental activations, through similar sounding words or simple software glitches, create risks of unintended recordings. My daughter has a friend named “Alexia.” Can you imagine the confusion when she visits our home, where our Amazon Echo responds to the wake word “Alexa”? Even worse, researchers have already figured out how to surreptitiously trigger voice-controlled devices by transmitting high-frequency sounds inaudible to the human ear.[6] What type of data might a smart device capture without our express intent to provide it?

There are risks present in the data that you voluntarily intend to share. Many service providers disclose within their terms of service or privacy policy that user data may be shared with third parties. While the stated purpose is usually to improve the user experience, it is often less clear whether such data may be used or shared for other purposes. In fairness to service providers, data may be anonymized, creating an element of “practical obscurity,” but with today’s computing power, how difficult will it be to piece together several anonymous data points to recreate the source?[7] Data broker Acxiom Corp., for one example, is said to have an average of about 1,500 data points per person in its database.[8]

While digital assistants obviously capture voice data from users, which can be converted to text, less obvious is what additional data might be captured about users. In addition to text-based information, digital assistants might capture your voice tone, inflection, volume and behavioral patterns. Information about you is much richer than mere text. Information such as what time you typically interact with your digital assistant, the type of information or music you request, and how you might adjust your smart lights, appliances, and thermostat can tell a lot about your habits and routines. Whether you speak with a male or female voice, old or young, or with an accent, can tell a lot about you. Other information, such as ambient background noise and whether children are present might offer further glimpses into your home or office. Some companies, like Soul Machines and Air New Zealand, are already working on creating machines that can detect human emotion.[9]

Data about users’ interactions has the potential to be incredibly valuable. Beyond personalizing the experience, such data may also be used to influence shopping and travel habits, persuade viewing and entertainment preferences, and perhaps even predict — or manipulate — elections and public sentiment. Despite the power inherent in this data, the law remains unclear in how such data can be collected, used and shared.

Data generated from user interactions with digital assistants is typically captured and sent to the service provider or a partner for storage and processing. This data can then be analyzed and used to develop and strengthen artificial intelligence through machine learning techniques. Data is vital for this process. Machines need data to learn — the more data points the better — and digital assistants have the power to capture vast amounts of divergent data. Few consider, however, the richness of the various types of data we voluntarily (impliedly or expressly) provide.

What privacy expectations are reasonable when we share so much about ourselves with a digital assistant? European law is wrestling with some of these issues,[10] but outside of the health care and financial arenas,[11] and companies targeting children,[12] U.S. legal doctrine is not currently well-equipped to deal with the treatment of big data and the companies who collect and use it.

One view is that data should be regulated like a commodity. Another view suggests companies should self-regulate through adoption of “responsible use” policies. Responsible use would suggest that companies use data in accordance with the stated purpose for which it is collected. It may also suggest different standards of secrecy and protection based on various types of data collected by digital assistants. For example, shopping lists, “flash news briefings,” and other interactions that involve third parties might be subject to a different privacy standard than daily interactions with a personal fitness monitor, thermostat, or synchronized calendar. Companies collecting data should consider policies designed to provide the appropriate level of privacy protection when processing, analyzing and sharing user data. For companies in the business of collecting user data, this also means that business practices should be consistent with written terms of service and privacy policies.

What about government rights? Under the third-party doctrine, Fourth Amendment privacy protections are lost when otherwise private information is freely shared.[13] In its current form, application of the third-party doctrine suggests that any communication you have with your personal digital assistant may be subject to search and compelled disclosure because it is freely shared with the service provider who may, in turn, share it with third parties. This would clearly seem to be the case with verbal interactions occurring after wake word activation, but what about recordings that may have been unintentional or, worse, surreptitious? Even with intentional interactions, can it really be said that users intend to share so much information about themselves? What if a service provider’s privacy policy fails to mention sharing or, better, states that user information will not be shared?

For attorneys and other professionals subject to client confidentiality rules, there are additional questions that arise as to how the presence of an artificially intelligent, always-listening assistant may impact privilege concerns. Privileges, like the attorney-client privilege, generally protect the confidentiality of communications between certain professional advisers and their clients. Although held as sacrosanct by courts, the privilege of confidentiality can be lost when the substance of communications is shared with a third party. Courts routinely find that information in the hands of third parties is not protected by any privilege. Does the presence of a digital assistant — one that may listen and share information — put that privilege at risk?

To what degree might a court carve an exception to privacy or privilege protections for information recorded through digital assistants? Would it make a difference if a recording was intentional or inadvertent? Might the protection depend upon whether the information consisted of content (i.e., voice recordings) or, instead, other information about a certain interaction, such as the date, time, and length of a meeting as well as any “subject” as shown on a synced calendar?

The technology landscape is moving fast and current legal doctrine is often ill-equipped to deal with new issues. We live in the information age — in the age of big data. This data can be used to enrich our lives but it also has the potential to provide vastly more information about users than what users expressly intend. For now, users need to be aware of the privacy paradox offered by artificially intelligent digital assistants. Such devices have the potential to be extremely helpful, but understand that their helpfulness is a product of the sometimes very personal data we, as users, provide. The more data you provide, the better your digital assistant will perform. And, the more data you provide, in its various forms, the more your service provider (and potentially its partners) will know about you.

Eric Boughman is an attorney at Forster Boughman & Lefkowitz in Maitland-Orlando, Florida.

The opinions expressed are those of the author(s) and do not necessarily reflect the views of the firm, its clients, or Portfolio Media Inc., or any of its or their respective affiliates. This article is for general information purposes and is not intended to be and should not be taken as legal advice.

[1] See Griswold v. Connecticut, 381 U.S. 479, 484 (1965).

[10] See EU General Data Protection Regulation, as approved April 14, 2016 (and scheduled to take effect on May 25, 2018), available at http://www.eugdpr.org/.

[11] See generally, Health Insurance Portability and Accountability Act of 1996 (HIPAA) Privacy Rule, 45 CFR Part 160 and Subparts A and E of Part 164, as to medical records; and the Gramm-Leach-Bliley Act (GLBA), 15 U.S.C. §§ 6801-27, as to financial institutions.

If you have an Amazon Echo, try this: Say, "Alexa, tell me a joke," but do it very quickly so that you finish the request before Alexa "wakes up" (indicated on the Echo by the blue light). Did you notice that Alexa dutifully complied, seemingly catching the request before she (it?) was awake? There is a simple explanation for this: Alexa (like other artificially intelligent digital assistants) is always listening. Indeed, Alexa starts recording "a fraction of a second” before the wake word. Google Home listens to snippets of conversations to detect the "hotword."

After becoming more familiar with Alexa at home, I considered adding an Echo or similar device to my law office. I imagined the added convenience of having my own artificially intelligent digital assistant in the office. She could make notes and calendar entries, add items to my checklist, tell me who I'm meeting for lunch and where, and perhaps add time entries and quickly retrieve obscure facts, all with a simple verbal command. But since smart devices like Alexa are always listening, the added convenience comes with a tradeoff – one with substantial privacy implications. How comfortable would you be knowing that transcripts of your verbal interactions are kept by many digital assistants' service providers?

What are your rights to restrict the use and dissemination of collected voice data? Can private parties or the federal government obtain this data through a subpoena, search warrant, or court order (or without)? To challenge a search under the Fourth Amendment, you must have a reasonable expectation of privacy. Is such expectation reasonable in the presence of a digital assistant? While these devices are generally designed only to record information once a designated wake word is spoken, few consider the practical reality that to detect the wake word, the device must always be listening for it.

What if a device is accidentally activated? In a recent client meeting, someone answered in agreement to a question, beginning with, “sure, he can do that …” On a nearby iPhone, Siri heard her name and began actively listening. Even scarier: A friend recently explained that he loves his new Samsung Galaxy phone but is annoyed that Bixby (Samsung’s AI assistant) is often triggered unintentionally and seems to have a mind of his own. Accidental activations, often through similar sounding words or simple software glitches, create risks of unintended recordings.

Additional risks are present in the data you intend to share. Your privacy expectations may be undercut by a service provider's terms of service or privacy policy for a given device. For instance, as disclosed by Alexa’s terms of use, if you access third-party services and apps through Alexa, Amazon (naturally) shares the content of your requests with those third parties. Amazon further discloses that data you provide may be stored on foreign servers. As such, U.S. Fourth Amendment protections may not apply.

Amazon handles the information received from Alexa in accordance with its privacy policy. Your interactions with Alexa, including voice recordings, are stored in the cloud. You can review and delete them, but Amazon explains that deleting them may degrade your Alexa experience. Google similarly explains that deleting your interaction history will limit the personalized features of your Google assistant. Artificially intelligent devices need data from users – the more the better – to learn and adapt. The privacy paradox is that users must, therefore, agree to sacrifice some degree of privacy to enrich the user experience.

Companies like Amazon and Apple have made headlines vigorously defending their customers' privacy. But what about the third parties to whom they subcontract services? Apple is notoriously stingy about sharing information, but both Google and Amazon acknowledge sharing information with third-party providers, generally to "improve the customer experience." Will these third parties – some perhaps overseas – defend privacy as vigorously if challenged?

Under the third-party doctrine, Fourth Amendment privacy protections are lost when otherwise private information is freely shared. In its current form, application of the third-party doctrine suggests that any communication you have with your personal digital assistant may be subject to search and compelled disclosure because it is freely shared with the service provider. This would clearly seem to be the case with verbal interactions occurring after wake-word activation, but what about recordings that may have been unintentional, or worse, surreptitious?

For attorneys, there are additional questions that arise as to how the presence of an artificially intelligent, always listening assistant may impact attorney-client privilege. The privilege, which protects the communication between an attorney and their client as confidential, is generally held as sacrosanct by courts, but it can be lost when the substance of those communications is shared with a third party. Moreover, courts routinely find that information in the hands of third parties is not protected by attorney-client privilege. Does the presence of a digital assistant put that privilege at risk?

To what degree might a court carve an exception to privacy or privilege protections for information recorded through digital assistants? The technology landscape is moving fast and current legal doctrine is often ill-equipped to deal with new issues. For now, if I decide to add a "smart" digital assistant to my office, I'll be sure to unplug or deactivate it during any meetings that I wish to remain confidential.

The information provided here is not legal advice and does not purport to be a substitute for advice of counsel on any specific matter. For legal advice, you should consult with an attorney concerning your specific situation.

Article Written for: American Bar Association’s Business Law Today
Eric Boughman, Sara Beth A.R. Kohut, David Sella-Villa, Michael V. Silvestro

The decision to use voice-controlled digital assistants, like Amazon’s Alexa, Apple’s Siri, Microsoft’s Cortana, and the Google Assistant, may present a Faustian bargain. While these technologies offer great potential for improving quality of life, they also expose users to privacy risks by perpetually listening for voice data and transmitting it to third parties.

Adding a voice-controlled digital assistant to any space presents a series of intriguing questions that touch upon fundamental privacy, liability, and constitutional issues. For example, should one expect privacy in the communications he engages in around a voice-controlled digital assistant? The answer to this question lies at the heart of how Fourth Amendment protections might extend to users of these devices and the data collected about those users.

Audio-recording capabilities also create the potential to amass vast amounts of data about specific users. The influx of this data can fundamentally change both the strength and the nature of the predictive models that companies use to inform their interactions with consumers. Do users have rights in the data they generate or in the individual profile created by predictive models based on that user’s data?

On another front, could a voice-controlled device enjoy its own legal protections? A recent case questioned whether Amazon may have First Amendment rights through Alexa. Whether a digital assistant’s speech is protected may be a novel concept, but as voice-controlled digital assistants become more “intelligent,” the constitutional implications become more far-reaching.

Further, digital assistants are only one type of voice-controlled device available today. As voice-controlled devices become more ubiquitous, another question is whether purveyors of voice-controlled devices should bear a heightened responsibility towards device users. Several security incidents related to these devices have caused legislators and regulators to consider this issue, but there remains no consensus regulatory approach. How will emerging Internet-of-Things frameworks ultimately apply to voice-controlled devices?

Voice-Activated Digital Assistants and the Fourth Amendment

Voice-activated digital assistants can create a record of one’s personal doings, habits, whereabouts, and interactions. Indeed, features incorporating this data are a selling point for many such programs. Plus, this technology can be available to a user virtually anywhere, either via a stand-alone device or through apps on a smartphone, tablet, or computer. Because a digital assistant may be in perpetual or “always-on” listening mode (absent exercise of the “mute” or “hard off” feature), it can capture voice or other data that the user of the device may not intend to disclose to the provider of the device’s services. To that end, users of the technology may give little thought to the fact their communications with digital assistants can create a record that law enforcement (or others) potentially may access by means of a warrant, subpoena, or court order.

A recent murder investigation in Arkansas highlights Fourth Amendment concerns raised by use of voice-controlled digital assistants. While investigating a death at a private residence, law enforcement seized an Amazon Echo device and subsequently issued a search warrant to Amazon seeking data associated with the device, including audio recordings, transcribed records, and other text records related to communications during the 48-hour period around the time of death. See State of Arkansas v. Bates, Case No. CR-2016-370-2 (Circuit Court of Benton County, Ark. 2016).

Should one expect privacy in the communications he engages in around a voice-activated digital assistant? The Arkansas homeowner’s lawyer seemed to think so: “‘You have an expectation of privacy in your home, and I have a big problem that law enforcement can use the technology that advances our quality of life against us.’” Tom Dotan and Reed Albergotti, “Amazon Echo and the Hot Tub Murder,” The Information (Dec. 27, 2016) (hereinafter “Dotan”).

To challenge a search under the Fourth Amendment, one must have an expectation of privacy that society recognizes as reasonable. With few exceptions, one has an expectation of privacy in one’s own home, Guest v. Leis, 255 F.3d 325, 333 (6th Cir. 2001), but broadly, there is no reasonable expectation of privacy in information disclosed to a third party. Any argument that a digital-assistant user has a reasonable expectation of privacy in information disclosed through the device may be undercut by the service provider’s privacy policy. Typical privacy policies provide that the user’s personal information may be disclosed to third parties who assist the service provider in providing services requested by the user, and to third parties as required to comply with subpoenas, warrants, or court orders.

The Bates case suggests that data collected by digital assistants would bear no special treatment under the Fourth Amendment. The police seized the Echo device from the murder scene and searched its contents. Unlike a smartphone that would require a warrant to search its contents, see Riley v. California, 134 S. Ct. 2473, 2491 (2014), the Echo likely had little information saved to the device itself. Instead, as an Internet-connected device, it would have transmitted information to the cloud, where it would be processed and stored. Thus, the Arkansas law enforcement obtained a search warrant to access that information from Amazon.

Under existing law, it is likely a court would hold that users of voice-activated technology should expect no greater degree of privacy than search engine users. One who utilizes a search engine and knowingly sends his search inquiries or commands across the Internet to the search company’s servers should expect that the information will be processed, and disclosed as necessary, to provide the requested services.

Perhaps there is a discernible difference in that voice data, to the extent a service provider records and stores it as such, may contain elements that would not be included in a text transmission. For example, voice data could reveal features of the speaker’s identity (such as a regional accent), state of mind (such as excitement or sadness), or unique physical characteristics (such as hoarseness after yelling or during an illness), that would not be present in text.

Or perhaps it is significant that some information transmitted might enjoy a reasonable expectation of privacy but for the presence of the device. Although digital assistants usually have visual or audible indicators when “listening,” it is not inconceivable that a digital assistant could be compromised and remotely controlled in a manner contrary to those indicators.

Further, the device could be accidentally engaged, particularly when the “wake word” includes or sounds like another common name or word. This could trigger clandestine or unintentional recording of background noises or conversations when the device has not been otherwise intentionally engaged. See Dotan (“[T]he [Echo’s seven] microphones can often be triggered inadvertently. And those errant recordings, like ambient sounds or partial conversations, are sent to Amazon’s servers just like any other. A look through the user history in an Alexa app often reveals a trove of conversation snippets that the device picked up and is stored remotely; people have to delete those audio clips manually.”).

The technology of voice-activated digital assistants continues to advance, as evidenced by the recent introduction of voice-controlled products that include video capabilities and can sync with other “smart” technology. Increasing use of digital assistants beyond personal use will raise more privacy questions. As these devices enter the workplace, what protections should businesses adopt to protect confidential information potentially exposed by the technology? What implications does the technology have for the future of discovery in civil lawsuits? If employers utilize digital assistants, what policies should they adopt to address employee privacy concerns? And what are the implications under other laws governing electronic communications and surveillance?

First Amendment Rights for Digital Personal Assistants?

The Arkansas v. Bates case also implicates First Amendment issues. Amazon filed a motion to quash the search warrant, arguing that the First Amendment affords protections for both users’ requests and Alexa’s responses to the extent such communications involve requests for “expressive content.” The concept is not new or unique. For example, during the impeachment investigation of former President Bill Clinton, independent counsel Kenneth Starr sought records of Monica Lewinsky’s book purchases from a local bookstore. See In re Grand Jury Subpoena to Kramerbooks & Afterwords Inc., 26 Media L. Rep. 1599 (D.D.C. 1998).

Following a motion to quash filed by the bookstore, the court agreed the First Amendment was implicated by the nature of expressive materials, including book titles, sought by the warrant. Ms. Lewinsky’s First Amendment rights were affected, as were those of the bookseller, which the court acknowledged was engaged in “constitutionally protected expressive activities.” Content that may indicate an expression of views protected by free speech doctrine may be protected from discovery due to the nature of the content. Government investigation of one’s consumption and reading habits is likely to have a chilling effect on First Amendment rights. See U.S. v. Rumely, 345 U.S. 41, 57-58 (1953) (Douglas, J., concurring); see also Video Privacy Protection Act of 1988, 18 U.S.C. § 2710 (2002) (protecting consumer records concerning videos and similar audio-visual material).

Amazon relied on the Lewinsky case, among others, contending that discovery of expressive content implicating free speech laws must be subject to a heightened standard of court scrutiny. This heightened standard requires a discovering party (such as law enforcement) to show that the state has a “compelling need” for the information sought (including that it is not available from other sources) and a “sufficient nexus” between the information sought and the subject of the investigation.

The first objection raised by Amazon did not involve Alexa’s “right to free speech,” but instead concerned the nature of the “expressive content” sought by the Echo user and Amazon’s search results in response to the user’s requests. The murder investigation in question, coupled with the limited scope of the request to a 48-hour window, may present a compelling need and sufficient nexus that withstands judicial scrutiny.

However, Amazon raised a second argument that Alexa’s responses constitute an extension of Amazon’s own speech protected under the First Amendment. Again, the argument is supported by legal precedent.

In Zhang v. Baidu.com Inc., 10 F. Supp. 3d 433 (S.D.N.Y. 2014), a federal district court held that the results returned by the Chinese search engine Baidu were protected speech. The court considered search results an extension of Baidu’s editorial control, similar to that of a newspaper editor, and found that Baidu had a constitutionally protected right to display, or to consciously not display, content. The court also analogized to a guidebook writer’s judgment about which attractions to feature or a political website aggregator’s decision about which stories to link to and how prominently to feature them.

One unique issue that arises in the context of increasingly “intelligent” computer searches is the extent to which results are not specifically chosen by humans, but instead returned according to computer algorithms. In Baidu, the court was persuaded by the fact that the algorithms are written by humans and thus “inherently incorporate the search engine company engineers’ judgments about what materials” to return for the best results. By its nature, such content-based editorializing is subject to full First Amendment protection because a speaker is entitled to autonomy to choose the content of his message. In other words, to the extent a search engine might be considered a “mere conduit” of speech, First Amendment protection might be less (potentially subject to intermediate scrutiny), but when the search results are selected or excluded because of the content, the search engine, as the speaker, enjoys the greatest protection.

Search results arising from computer algorithms that power search engines and digital assistants may currently be considered an extension of the respective companies’ own speech (through the engineers they employ). Current digital assistants are examples of “weak artificial intelligence.” Thornier legal questions will arise as the artificial intelligence in digital assistants gets smarter. The highest extreme of so-called “strong” artificial intelligence might operate autonomously and be capable of learning (and responding) without direct human input. The First Amendment rights of such systems will no doubt be debated as the technology matures.

Voice Data and Predictive Models

Digital assistants have the potential to gather massive amounts of data about users. Current voice-data analytics tools can capture not only the text of human speech, but also the digital fingerprint of a speaker’s tone, intensity, and intent. Many predictive models rely extensively on lagging indicators of consumption, such as purchases already made. Voice data might provide companies with leading indicators, such as information about the user’s state of mind and triggering events that may lead to the interactions a company desires.

Incorporating voice data into current predictive models has the potential to make them vastly more accurate and specific. A digital assistant might record and transmit the message “Pat is going to the hospital for the last time.” Based only on the text of the message, an algorithm might predict that a tragic event is about to take place. But with a recording, analysis of the voice’s pitch, intensity, amplitude, and tone could indicate that the speaker is actually elated. Adding such data into the predictive model might result in the user beginning to see ads for romantic tropical vacations instead of books about coping with grief.
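To make the idea concrete, the following is a minimal, purely illustrative sketch of how acoustic cues might shift a text-only prediction. The feature names, thresholds, and categories are hypothetical stand-ins, not any vendor’s actual model.

```python
# Illustrative sketch only: a toy example of how acoustic features might
# change a text-only prediction. All thresholds and labels are hypothetical.

def text_only_prediction(transcript: str) -> str:
    """Naive keyword rule standing in for a text-based classifier."""
    if "hospital" in transcript.lower() and "last time" in transcript.lower():
        return "grief-related content"
    return "general content"

def adjusted_prediction(transcript: str, pitch_hz: float, intensity_db: float) -> str:
    """Combine the text signal with (hypothetical) acoustic cues.

    High pitch and intensity stand in for an 'excited/happy' vocal signature;
    real systems would use trained acoustic models, not fixed thresholds.
    """
    base = text_only_prediction(transcript)
    sounds_happy = pitch_hz > 220.0 and intensity_db > 65.0
    if base == "grief-related content" and sounds_happy:
        return "celebration-related content"  # e.g., an expected birth, not a loss
    return base

if __name__ == "__main__":
    msg = "Pat is going to the hospital for the last time."
    print(text_only_prediction(msg))                                 # grief-related content
    print(adjusted_prediction(msg, pitch_hz=260, intensity_db=72))   # celebration-related content
```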

User interactions with digital assistants will also give rise to new predictive models. Before going to sleep, a user might ask a digital assistant to play relaxing music, lower the temperature of the home, and turn off certain lights. With a new predictive model, when the user asks the digital assistant to play relaxing music at night, the digital assistant might recognize the user’s “going to sleep sequence,” and proceed to lower the temperature of the home and turn off lights automatically.
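A routine of this kind could, in concept, be reduced to a simple trigger rule. The sketch below is hypothetical: the trigger phrase, time threshold, and inferred actions are illustrative only.

```python
# Illustrative sketch only: a toy "going to sleep" routine recognizer.
# The trigger, hour threshold, and device commands are hypothetical.

GOING_TO_SLEEP_ROUTINE = {
    "trigger": "play relaxing music",
    "inferred_actions": ["lower thermostat to 68F", "turn off living room lights"],
}

def handle_request(request: str, hour: int) -> list[str]:
    """Return the full list of actions the assistant would take."""
    actions = [request]
    # If the trigger is spoken at night, infer the rest of the learned routine.
    if request == GOING_TO_SLEEP_ROUTINE["trigger"] and hour >= 21:
        actions.extend(GOING_TO_SLEEP_ROUTINE["inferred_actions"])
    return actions

print(handle_request("play relaxing music", hour=22))
```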

In addition to the richness of data in a single voice recording, predictive models based on voice interactions with digital assistants are potentially more robust because digital assistants are always “listening.” This “listening” largely takes the form of recording the voice interactions between the user and the digital assistant. Terms of service of the most popular digital assistants typically do not indicate the precise moment when recording starts. Some voice-controlled products have been marketed with an increased focus on privacy concerns. Apple’s forthcoming HomePod speaker, for instance, is said to be designed so that no voice data is transmitted from the device until the “wake word” is spoken.
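The wake-word approach can be pictured as a gate between on-device audio and the cloud. The sketch below is a simplified assumption about how such a gate might work; the wake word, buffering, and transmission behavior are hypothetical and do not describe any particular product.

```python
# Illustrative sketch only: a toy wake-word gate. The wake word, buffer size,
# and transmission logic are hypothetical simplifications.

from collections import deque
from typing import Optional

WAKE_WORD = "computer"

class WakeWordGate:
    def __init__(self, buffer_chunks: int = 2):
        # Audio is held briefly on the device; nothing leaves until the wake word.
        self.local_buffer = deque(maxlen=buffer_chunks)
        self.transmitting = False

    def hear(self, audio_chunk: str) -> Optional[str]:
        """Return a chunk to transmit to the cloud, or None to keep it local."""
        self.local_buffer.append(audio_chunk)
        if WAKE_WORD in audio_chunk.lower():
            self.transmitting = True
        return audio_chunk if self.transmitting else None

gate = WakeWordGate()
for chunk in ["private chatter", "computer, play music", "relaxing playlist please"]:
    print(gate.hear(chunk))  # None, then the wake-word chunk and everything after it
```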

A digital assistant may begin recording and analyzing voice data even when it is not specifically “turned on” by the user. This makes the potential data set about the user much larger, which results in a more robust predictive model. If the digital assistant is always “listening,” its owner’s statement, “I’m going to take a nap,” could trigger the “going to sleep sequence” described above. If voice recordings are used in conjunction with current predictive models, a user’s statement, “we’re expecting a child,” could be used as a very powerful leading indicator of specific future purchases.

Legal analysis in this growing field should distinguish voice-data recordings (and data derived from those recordings) from the text of the recordings. The current legal framework applicable to voice recordings captured by digital assistants and their use in predictive models is very limited. California has enacted a statute governing certain uses of voice recordings collected from connected televisions. See Cal. Bus. & Prof. Code § 22948.20. However, the states generally have not regulated the use of voice recordings from digital assistants, and have permitted use of voice data in various predictive models with relatively little restriction.

Each digital assistant is governed by terms of service and privacy policies that its parent company promulgates (and changes from time to time). Users should therefore understand that, by accepting those terms, they consent to the capture of voice recordings by the digital assistant. The terms of service for some digital assistants specifically note that voice recordings may be used to improve the digital assistant itself and may be shared with third parties. Thus, voice data is likely to be used in predictive models.

Call centers have been using real-time voice-data analytics systems. Interestingly, as part of these technology packages, certain voice-data analytics systems can detect and scrub personally identifiable information from voice recordings. Digital assistants may use similar technologies to avoid recording and storing regulated content (e.g., health information, financial information) and thereby avoid becoming subject to privacy regulations. Scrubbing regulated content may, in turn, leave those recordings free for use in various predictive models.
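As a rough illustration of the scrubbing concept, the snippet below redacts apparent identifiers from transcript text. It is a minimal sketch; the patterns are simplified stand-ins, and commercial systems detect far more categories with far more sophisticated techniques.

```python
# Illustrative sketch only: a minimal regex-based scrubber for transcript text.
# The patterns are simplified stand-ins for SSNs and payment card numbers.

import re

PII_PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def scrub_transcript(text: str) -> str:
    """Replace apparent PII with a redaction token before storage."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[REDACTED {label}]", text)
    return text

print(scrub_transcript("My card is 4111 1111 1111 1111 and my SSN is 123-45-6789."))
```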

Even if digital assistants only record interactions between the user and the device, the richness of voice data means that predictive models may become finely tuned to each individual user. Every interaction with a digital assistant may help build a unique user profile based on predictive modeling.

As discussed in this article, certain elements of a user’s interaction with the digital assistant may include “expressive content,” and both the user and the digital assistant may have constitutional protections. If a digital assistant develops a rich user profile based on both “expressive content” and data from other sources, how much of that profile still enjoys constitutional protection? As individuals sacrifice privacy for the convenience offered by digital assistants, will their profiles become more akin to private journals? As the technologies develop, what rights can individuals be said to have ceded to the discretionary use of the service provider and third parties?

Voice Data and the Internet of Things

Digital assistants are not the only voice-controlled devices available to consumers. What about voice-controlled devices that may seem innocuous, or might not even be used by the actual purchaser, like an Internet-connected children’s toy? Unsurprisingly, there have already been a few well-publicized data security incidents involving voice data from these types of products. Although the products may be relatively niche at present, the issues raised are not and underscore broader risks associated with the use and collection of consumer-voice data.

One security incident involved a line of Internet-connected stuffed-animal toys. The toys had the ability to record and send voice messages between parents (or other adults) and children through a phone-based app. Voice data from both parents and children was collected and stored on a hosted service. Unfortunately for users, the voice-recording database was publicly accessible and not password protected. Over two million voice recordings were exposed. Worse still, third parties gained unauthorized access to the voice data and leveraged it for ransom demands. Over 800,000 user account records were compromised.

Another recent incident involved a doll offering interactive “conversations” with users. Voice data was transferred to a third-party data processor, which reserved the right to share the data with additional third parties. When the toy was paired with an accompanying smartphone app, voice data could be accessed even without physical access to the toy. Security researchers also discovered ways to use an unsecured Bluetooth device embedded in the toy to listen to, and speak with, the user through the doll.

Concerns over this doll and other similar products have triggered responses from European governmental agencies. For example, in December 2016, the Norwegian Consumer Council published a white paper analyzing the end-user terms and technical security features of several voice-controlled toys. Forbrukerrådet, #Toyfail: An analysis of consumer and privacy issues in three internet-connected toys (Dec. 2016). Complaints have also been filed with privacy watchdog agencies in several European Union member states, including France, the Netherlands, Belgium, and Ireland. Some complaints allege that voice data is collected and processed by third parties in non-EU countries, such as the United States, that are not subject to EU privacy and data-use regulations. Those third parties include voice-data processors that also perform voice-matching services for law-enforcement agencies.

More recently, German regulators announced that the sale or ownership of one such toy was illegal under German privacy laws after the toy was classified as a “hidden espionage device.” Although German regulators are not pursuing penalties against owners, they have instructed parents to physically destroy the toy’s recording capabilities. This unusual step may ultimately signal increased regulation of voice-controlled consumer products under German law.

Complaints regarding similar products have also been filed in the United States with the Federal Trade Commission and other bodies. Privacy groups have questioned whether these devices comply with the consent requirements of the Children’s Online Privacy Protection Act (COPPA) and its associated rules and regulations. COPPA applies to operators of online sites and services involved in collecting personal information from children under 13 years of age and provides additional protections that may be applicable to voice-controlled toys.

Aside from COPPA, given the lack of comprehensive legislation or regulation at the federal level, there remains a patchwork of state and federal laws that may regulate voice-controlled products. One bill that covers voice data (as part of a broad class of personal information) has passed the Illinois State Senate and is now pending in the Illinois State House. The Right to Know Act, HB 2774, would require operators of websites and online services that collect personally identifiable information to: (i) notify customers of certain information regarding the operators’ sharing of personal information, including the types of personal information that may be shared and all categories of third parties to whom such information may be disclosed; (ii) upon disclosure to a third party, notify consumers of the categories of personal information that have been shared and the names of all third parties that received the information; and (iii) provide an email address or toll-free phone number for consumers to use to access that information. Importantly, the current draft of the Illinois Right to Know Act also creates a private right of action against operators who violate the act. Whether this bill or similar laws will be enacted remains an open question.

Conclusion

Data collected by voice-controlled digital assistants and other connected devices presents a variety of unresolved legal issues. As voice-controlled features continue to develop, so too will litigation, regulation, and legislation that attempt to balance the rights of users, service providers, and perhaps even the underlying technology itself. The issues presented in this article are deeply interrelated. As individual legal questions are settled, related issues in this emerging field may be resolved quickly, even as new ones emerge.

About the Authors:

Eric Boughman is a partner with the Orlando, Florida, corporate law boutique, Forster Boughman & Lefkowitz, where he focuses on legal issues affecting businesses and entrepreneurs.

Sara Beth A.R. Kohut is Counsel at Young Conaway Stargatt & Taylor, LLP, in Wilmington, Delaware, where her practice involves mass tort bankruptcy cases and settlement trusts, as well as privacy and data security matters.

David Sella-Villa is the Assistant General Counsel of the South Carolina Department of Administration assigned to technology, privacy, and information security issues.

Michael V. Silvestro is a Principal in the Chicago office of Skarzynski Black LLC, where his practice focuses on insurance coverage and litigation, including cyber risks.

The views and opinions expressed in this article are those of the authors in their respective individual capacities, and do not reflect the opinions, policies, or positions of any of their respective employers or affiliated organizations or agencies.

Article Written for: Forbes.com

Despite some misconceptions, using bitcoin or other cryptocurrencies for asset protection in connection with offshore planning may be an effective strategy. A crucial facet to using foreign trusts to protect wealth is ensuring that the trustee and trust assets remain outside of any jurisdiction where the grantor might be sued. Some U.S. states may find the concept of self-settled trusts anathema to public policy and thus choose to ignore the trust and treat the grantor/beneficiary as the de facto owner of trust assets.

Offshore limited liability companies are similarly exposed. Consider, for instance, a Florida resident who forms a Nevis, single-member LLC to shelter assets in a Florida bank account. If that individual is sued, Nevis law limits the judgment creditor’s remedy to a lien on LLC proceeds (called a “charging order”). The charging order would not entitle the creditor to take control of the LLC in Nevis. If the creditor were to sue in Florida, however, a Florida court might ignore Nevis law (as it would apply to ownership of the LLC) and permit the creditor to foreclose on the LLC interest. As the new owner, the creditor would have direct access to the LLC bank account and any other assets owned by the LLC. Farfetched? This is essentially what happened in Wells Fargo v. Barber.

As highlighted by the Barber case, effective implementation of offshore asset protection requires that assets be transferred to a safe location outside the reach of U.S. courts. Assets must also remain outside the direct control of members and beneficiaries. Courts have mechanisms, via contempt orders, to impose sanctions, fines and jail time to compel a debtor to disclose and turn over assets unless it is truly impossible for the debtor to comply. (While there are colorful examples of jailed debtors under contempt, bona fide impossibility is a valid defense.)

Digital Asset Protection

Digital assets, like cryptocurrencies, offer a way to keep assets safely away from potentially hostile U.S. courts because they exist entirely on a decentralized digital ledger known as the blockchain.

Cryptocurrency transactions are executed on the blockchain via a two-key system. The keys include an address (public) key and a secret (private) key. Think of the address as a transparent envelope. Anyone can see inside the envelope, but only the secret key can open it to access the contents. Keys are simply a sequence of numbers and letters. The secret key remains under the owner’s control and provides the ability to transfer the asset. “Transfer” is somewhat of a misnomer, as assets don’t actually move. Rather, the blockchain ledger is updated to reflect the transfer of ownership. The right to spend digital currency is granted to the holder of the secret key corresponding to the address posted in the blockchain. Additionally, an address can be created so that spending (transferring) requires a combination of multiple secret keys held by multiple owners.
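The relationship between the two keys can be sketched very roughly as follows. This is a deliberate oversimplification: real cryptocurrencies derive addresses from elliptic-curve public keys and authorize spending with digital signatures, not the bare hash comparison shown here.

```python
# Illustrative sketch only: a drastically simplified picture of the two-key idea.
# Hashing conveys the one-way link between a secret key and its public address;
# actual systems use elliptic-curve cryptography (e.g., secp256k1) and signatures.

import hashlib
import secrets

def new_key_pair() -> tuple[str, str]:
    """Generate a (secret_key, address) pair; the address reveals nothing about the key."""
    secret_key = secrets.token_hex(32)
    address = hashlib.sha256(secret_key.encode()).hexdigest()
    return secret_key, address

def can_spend(presented_secret: str, address: str) -> bool:
    """Only the holder of the matching secret key can 'open the envelope'."""
    return hashlib.sha256(presented_secret.encode()).hexdigest() == address

secret, addr = new_key_pair()
print(can_spend(secret, addr))   # True: the key holder controls the asset
print(can_spend("guess", addr))  # False: the public address alone confers no control
```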

A critical point is that anyone with the appropriate secret key(s) can execute the transfer of the asset. Keys are often kept on a computer or mobile device, but they can also be stored on detached storage devices (such as a USB drive), a sheet of paper in a safe (referred to as “cold storage”), or even memorized (although relying on so-called “brain wallets” may not be advisable). In the simplest terms, using cryptocurrency in asset protection may involve nothing more than the transfer of a private key to an offshore trustee (or manager).

Properly selected offshore trustees are unlikely to become subject to the jurisdiction of a court where a defendant may be sued. Absent jurisdictional authority, a court is powerless to compel the trustee to turn over assets. An added benefit is that blockchains are decentralized. This means they are not subject to any central authority (such as a bank or other financial institution) that might be legally compelled to provide a court with access or control over assets in its possession. Without the complete private key, no court or legal authority can manipulate ownership of a blockchain asset.

Worries about a rogue trustee or manager can be allayed by requiring multiple keys, such that two (or more) parties are required for access. Those parties could be co-trustees, a trustee and trust protector, co-managers, or a manager and member of a board of directors. A trustee acting alone would have no ability to unilaterally access that portion of trust assets requiring multiple keys. Cryptocurrency assets could also be divided, such that a small portion of currency remains under a single trustee’s control whereas a larger portion is restricted and remains in cold storage subject to joint-key access.
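The multiple-key arrangement amounts to an “m-of-n” rule: no single key holder can act alone. The sketch below is a toy model of that rule under assumed roles; actual multisignature addresses enforce the requirement on-chain with cryptographic signatures rather than a simple set comparison.

```python
# Illustrative sketch only: a toy 2-of-3 authorization check modeling joint key control.
# The roles and threshold are hypothetical.

AUTHORIZED_KEY_HOLDERS = {"trustee", "trust_protector", "co_manager"}
REQUIRED_SIGNERS = 2

def transfer_permitted(signers: set[str]) -> bool:
    """Allow a transfer only if enough distinct authorized holders sign."""
    valid = signers & AUTHORIZED_KEY_HOLDERS
    return len(valid) >= REQUIRED_SIGNERS

print(transfer_permitted({"trustee"}))                     # False: one key is not enough
print(transfer_permitted({"trustee", "trust_protector"}))  # True: joint action required
```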

Because cryptocurrency transactions are semi-transparent and are time-stamped on the blockchain, ready proof is available to legitimize the timing and propriety of protective transfers. Transfers made well before assertion of a claim are far less likely to be challenged by creditors and courts. Such proof and transparency may be particularly useful when facing the threat of a contempt order, as mentioned above.

The Future of Digital Currency

Additionally, unique potential exists for digital currencies as programmable money. It is not beyond the realm of possibility that digital currencies may one day be programmed to respond to pre-defined duress situations and to execute certain functions in the form of so-called “smart contracts.” For instance, assets under control of a trustee who reports being summoned to appear in a hostile court might be programmed to immediately transfer to a successor trustee. Similarly, a trustee who fails to account to beneficiaries at pre-designated times might be effectively removed in a similar fashion by self-executing currency. As the technology develops and becomes more refined, smart contracts and the concept of programmable money will inevitably be integrated into asset protection planning.
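To illustrate the concept, the following sketch expresses the “duress response” idea as plain Python pseudo-logic rather than an actual smart-contract language. The events, roles, and automatic succession rule are hypothetical and purely speculative.

```python
# Illustrative sketch only: hypothetical pseudo-logic for programmable trust funds
# that shift control upon pre-defined duress or default events.

class ProgrammableTrustFunds:
    def __init__(self, trustee: str, successor: str):
        self.controller = trustee
        self.successor = successor

    def report_event(self, event: str) -> None:
        """Pre-defined duress or default events shift control automatically."""
        if event in {"summoned_to_hostile_court", "missed_accounting_deadline"}:
            self.controller = self.successor

funds = ProgrammableTrustFunds(trustee="offshore_trustee", successor="successor_trustee")
funds.report_event("summoned_to_hostile_court")
print(funds.controller)  # successor_trustee: control passes without court involvement
```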

The possibilities are striking, and as these financial technologies evolve, so will the use cases. This is just the tip of the asset-protection iceberg. Even now, cryptocurrencies, with their unique digital properties, transparency, and decentralization, offer exciting, leading-edge opportunities as effective and legitimate modern asset-protection tools.

The information provided here is not legal advice and does not purport to be a substitute for advice of counsel on any specific matter. For legal advice, you should consult with an attorney concerning your specific situation.

Article Written for: Forbes.com

Is it any surprise that our new president, Donald Trump, may have strategically manipulated the tax code to avoid paying federal income tax? Mr. Trump calls this “smart,” and many in the same boat would agree. Similarly, sophisticated clients and advisors implement legal tactics to prudently preserve and protect wealth.

One strategy growing in popularity is the “self-settled” trust for asset protection. Under traditional trust law, a grantor conveys assets to a trustee for the benefit of someone else, such as his children. The gift “divides” ownership between so-called legal title and equitable title. The trustee legally oversees the assets (pursuant to a trust agreement) for the benefit of the beneficiaries (who have no control over trust assets). Once the assets are in trust, they are generally protected from future creditors of the grantor, the trustee (with legal title), and the beneficiaries (with equitable title).

This splitting of legal and equitable title traditionally shields trust assets from creditors of (1) the trustee, who has no legal right to use or distribute trust assets other than for the benefit of the beneficiaries; or (2) beneficiaries, who have no legal ability to demand or direct distributions or convey title to trust assets. Traditional trusts typically have a “spendthrift” provision which protects trust assets by restricting the beneficiary from assigning future income or trust assets to creditors (thereby prohibiting a creditor of a beneficiary from attaching trust income or assets). The traditional trust, for instance, allows parents and others to make protected gifts to children.

Self-settled trusts are distinct in that they are funded by a grantor who retains the benefit of trust assets. Only legal title is conveyed to a third-party trustee to put trust assets (theoretically) outside the reach of creditors. Several offshore jurisdictions and 16 U.S. states have enacted statutes permitting such an arrangement. In these jurisdictions, the grantor may "have his cake (protection) and eat it (the assets) too." The grantor/beneficiary enjoys the fruits of the trust assets but has no legal right to transfer title or direct proceeds to creditors. Self-settled trusts are therefore often referred to as “asset protection trusts.”

Domestic asset protection trusts, or DAPTs, may have their place in certain asset protection plans, but they remain largely untested by the courts. Indeed, several non-DAPT states consider such trusts anathema to public policy. An often overlooked factor in predicting the effectiveness of an asset protection strategy is which state’s law may ultimately be called upon to test it.

Consider the case of a resident of Washington (which offers no DAPT) who forms a DAPT for his own benefit in Alaska (which does have a statute permitting DAPTs). He names himself as a beneficiary and his son as co-trustee (along with an Alaska trust company). The grantor/beneficiary then proceeds to transfer a significant portion of his assets, including title to financial accounts, automobiles, and real estate interests into the trust.

How effective is this arrangement when creditors come knocking in Washington? The debtor will rightly claim that he technically does not own any of the trust assets, as legal title now resides in the trustee. This is similar to what actually occurred in the case of In re Huber, where the settlor of the Alaska trust lost everything due to his failure to consider local law and policy.

In Huber, a bankruptcy court in the State of Washington relied on Washington law to essentially disregard a DAPT formed in Alaska. The court looked to several factors suggesting the trust was a sham. The court cited the fact that the beneficiary did not reside in Alaska, one of the trustees was not in Alaska, and perhaps most importantly (as discussed below), trust assets were not located in Alaska. The court also noted Washington’s “strong public policy” against self-settled asset protection trusts. The bankruptcy trustee was therefore entitled to disregard the trust and seize trust assets (as personal assets of the grantor/debtor).

Huber offers insight into the factors to consider in using DAPTs for asset protection planning.

Practically, we learned that keeping trust assets and a trustee in a non-DAPT state exposed the trust assets to the local court where those assets were located. Had the assets and the trustee resided beyond the reach of the court, only the grantor/beneficiary would have remained within the court’s jurisdiction. Without legal title, the beneficiary would have had no ability to turn over trust assets to creditors. Thus, we can surmise that leaving the assets in a DAPT state (with the trustee), pursuant to a properly drafted trust agreement (limiting beneficiary control over trust assets), would likely have substantially restricted court intervention benefitting creditors.

By focusing merely on technical legalities and failing to recognize the practical ability of a court to reach the trustee and trust assets, the Huber DAPT was never designed to hold up in stormy weather and was thus doomed to fail. Practical considerations, such as the law and policy of the state where a DAPT might ultimately be tested, should always be respected.

If you or your advisors are considering DAPTs for wealth preservation, carefully consider the details that might ultimately be examined if you (or your lawyer) are compelled to defend the structure in court. It is best to engage experienced counsel and consider the factors that might influence the plan's efficacy. These may include not only the state of formation, but also the state in which you live and work (and are thus most likely to be sued), the identity and location of the trustee, whether there are multiple beneficiaries, and the types of assets you wish to protect. For instance, cash and other liquid assets can be relocated to more protective jurisdictions, but real estate is always at risk of being subjected to local law. Often overlooked is the fact that courts that enter judgments generally prefer to see those judgments enforced. The prudent planner should therefore assume that any DAPT will one day be subjected to intense judicial scrutiny and plan accordingly.

The information provided here is not legal advice and does not purport to be a substitute for advice of counsel on any specific matter. For legal advice, you should consult with an attorney concerning your specific situation.