There's a new criminal tactic involving hacking an e-mail account of a company that handles high-value transactions and diverting payments. Here it is in real estate:

The scam generally works like this: Hackers find an opening into a title company's or realty agent's email account, track upcoming home purchases scheduled for settlements -- the pricier the better -- then assume the identity of the title agency person handling the transaction.

Days or sometimes weeks before the settlement, the scammer poses as the title or escrow agent whose email accounts they've hijacked and instructs the home buyer to wire the funds needed to close -- often hundreds of thousands of dollars, sometimes far more -- to the criminals' own bank accounts, not the title or escrow company's legitimate accounts. The criminals then withdraw the money and vanish.

The fraud is relatively simple. Criminals hack into an art dealer's email account and monitor incoming and outgoing correspondence. When the gallery sends a PDF invoice to a client via email following a sale, the conversation is hijacked. Posing as the gallery, hackers send a duplicate, fraudulent invoice from the same gallery email address, with an accompanying message instructing the client to disregard the first invoice and instead wire payment to the account listed in the fraudulent document.

Once money has been transferred to the criminals' account, the hackers move the money to avoid detection and then disappear. The same technique is used to intercept payments made by galleries to their artists and others. Because the hackers gain access to the gallery's email contacts, the scam can spread quickly, with fraudulent emails appearing to come from known sources.

I'm sure it's happening in other industries as well, probably even with business-to-business commerce.

@Dan H: It would, but this expertise is much too expensive/exotic for these kinds of businesses. Besides that, the clients also have little to no knowledge about it. A download center would probably be a better solution, IF it is secured properly.

I don't understand how the criminals can just vanish. Aren't there electronic records, the equivalent of a paper trail, of where the money was transferred to? Then shouldn't it be possible to eventually track down the last bank account where the money landed? And how do they cash that much money out? I wish there was more detail on that end of the transaction. I understand it might be trivial to open bank accounts in some shady countries, but I don't see how the criminals can get their hands on that much cash without being identified. Isn't that activity something an organization like Interpol would be involved in?

It depends:
If the mail account was hacked because the owner entered credentials on a phishing site, then yes, email signing will save you. The signing certificate is on the PC and there is no reason to send it to a website; the private key is called private for a reason, and there is always a warning saying "don't send it to anyone".

But if the mail account was hacked because someone opened a virus and the virus controls the PC (and so steals the mail credentials), a digital signature will not save you, because the attacker can steal the certificate too.

If you are using Qubes OS or a properly configured YubiKey (not the default config), you will be safe, because Qubes OS stores the cert in a different VM, and a YubiKey can't be cloned (so if someone steals it from you, you will notice).

About the YubiKey config: I say "not default" because, in the default config, if someone controls your PC they can keylog your PIN and use the YubiKey whenever it is plugged into the PC. This means that if you leave it inserted (or even insert it briefly at the moment you need it), the bad guys can do whatever they want: sign and decrypt without you noticing.
If you instead set "require touch" and lock the configuration, they can't use it while it is plugged in (they are remote attackers).
They can only "steal" your touch, so that you think you are signing A but are actually signing B. But this can be noticed, because after the operation A will not be signed.
The same applies to Qubes OS (split-gpg).

To cite Joanna Rutkowska: while protecting the user's private key is an important task, we should not forget that ultimately it is the user data that are to be protected.

It's actually very easy to disappear. Criminals pay a small amount of money to some unemployed or marginalized person to use their bank account, or pay them to open a new one. Those poor people withdraw the money and hand it over to the criminals.

... is suing the title agency ... for negligence ...
this is good; it is their fault if they are hacked. They will say "complex state-sponsored malware" while it was probably "reused the same password 90,214,782,390 times and it was 'passw0rd'".

...telephones and confirms banking instructions with clients over the phone...
come on! It is 2017 and you are not able to have a secure conversation over the internet? Maybe we need fewer ads about backdooring encryption and more about how to use it (properly).

1 Regularly change all passwords for email, software and wifi
NO, people will never do this; it is boring, and it will make people use dumb passwords so they can remember them. Better advice: "use a unique password for each service (and use a password manager, since you will never remember all of them)".
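As a rough sketch of that advice, here is what per-service random passwords look like using only Python's standard-library `secrets` module (the service names below are just illustrative placeholders):

```python
import secrets
import string

def generate_password(length: int = 20) -> str:
    """Generate a random password using a cryptographically secure RNG."""
    alphabet = string.ascii_letters + string.digits + "!@#$%^&*-_"
    return "".join(secrets.choice(alphabet) for _ in range(length))

# One unique password per service -- never reused across sites.
# In practice a password manager does this for you and stores the result.
vault = {service: generate_password()
         for service in ("email", "bank", "title-agency-portal")}

for service, password in vault.items():
    print(f"{service}: {password}")
```

The point is that the human never memorizes these; the manager does, so password reuse (the actual weak point in these scams) disappears.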

2 Ensure all anti-virus software is up to date.
not only the antivirus updated, but also the OS and all the apps. You will never be safe if you use Windows XP.

3 Only send invoices by email if they have been encrypted (password-protected).
mmm, again no: the scammer will just send an encrypted zip containing the scam file, saying "the file has been protected with a password for additional safety". As Dan H said in a comment above, they need to digitally sign it; encrypting it is only optional.
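To illustrate the difference between "password-protected" and "authenticated", here is a minimal sketch using an HMAC over a shared secret as a stand-in for a real digital signature (real deployments would use public-key schemes such as S/MIME or PGP; the secret and invoice text here are invented):

```python
import hashlib
import hmac

# Hypothetical secret agreed out of band (in person, by phone) -- never by email.
SECRET = b"agreed-out-of-band-not-by-email"

def sign_invoice(invoice: bytes) -> str:
    """Return a hex tag that only someone holding SECRET can produce."""
    return hmac.new(SECRET, invoice, hashlib.sha256).hexdigest()

def verify_invoice(invoice: bytes, tag: str) -> bool:
    """Constant-time check that the invoice was not swapped in transit."""
    return hmac.compare_digest(sign_invoice(invoice), tag)

original = b"Pay GBP 12,000 to sort 12-34-56 account 12345678"
tag = sign_invoice(original)

# The attacker's duplicate invoice with swapped bank details fails the check.
forged = b"Pay GBP 12,000 to sort 99-99-99 account 99999999"
print(verify_invoice(original, tag))  # True
print(verify_invoice(forged, tag))    # False
```

A password-protected zip only hides the contents; it proves nothing about who produced them. A signature (or here, a MAC) is what catches the swapped invoice.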

4 After sending or receiving an invoice by email, call and/or send a text or WhatsApp message to the recipient to double-check the sort code and account number.
this is good; it is like 2FA. Checking over a second, different communication channel will help.
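That second-channel check can even be mechanized: normalize the bank details from the emailed invoice and the details read back over the phone, and only pay if they match. A sketch (the sort codes and account numbers below are made up):

```python
import re

def normalize(details: str) -> str:
    """Keep only digits so formatting differences (dashes, spaces) don't matter."""
    return re.sub(r"\D", "", details)

def confirmed(emailed: str, read_back_by_phone: str) -> bool:
    """True only when both channels agree on the destination account."""
    return normalize(emailed) == normalize(read_back_by_phone)

# Same account written two different ways: match.
print(confirmed("12-34-56 12345678", "123456 12345678"))   # True
# Attacker swapped the account in the emailed invoice: mismatch, don't pay.
print(confirmed("99-99-99 99999999", "123456 12345678"))   # False
```

The security comes from the attacker having to compromise both channels at once, which is exactly why the scam in the article works when email is the only channel.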

5 Urge all staff to be extremely vigilant when opening emails and do not download any attachments or click on web links from an untrusted source. Always confirm legitimacy over the telephone with the sender if in doubt.
THIS MAKES NO SENSE! Emails are designed to be opened. People have no way to check whether one is legit, since the sender can be spoofed, and the check will probably be "have you sent me a mail? yes? ok, so I can open it! it is safe!".
Checking whether a mail is trustworthy is quite difficult. It is far easier to tell people to open all links and attachments in a Qubes OS disposable VM (so that nothing gets compromised). This can also be automated, so people don't need to remember it.

Our church has been targeted (unsuccessfully) multiple times. It seems the criminals wait until the main finance person leaves for vacation (they monitor her out of office reply) and then construct a panicked “emergency” email/need for a transfer of funds from the lead pastor or similar (which has always failed due to our controls and procedures, and always comes from a spoofed account, not a hacked one).

But even though it has failed every time, it always causes a flurry of "is this legit?" among the accounting staff, which is when I (the IT manager) get pulled in. Crazy world. We no longer allow key personnel to enable external out-of-office replies.

Some of these smell like tricky insiders just blaming cybercriminals... “Somebody sent out another email saying: ‘Ignore my previous invoice. I sent you old bank details; please use this invoice instead.’” top level cyber criminal deception!!!111

I mean, sure, some people are legitimate victims, but some are moving to scam insurance money too, which is why we need banking solutions that can be verified and add confidence for insurers.

@Dan H: As has been said by others, depending on where the breach has occurred, the signature can also be compromised. But even if it hasn't been, in a long exchange of emails, how likely is it that people are going to check that the signature is correct on every single message?

Fundamentally for security to be good it must be easy and automatic, and email is far too clumsy and old to be made workably secure. Even experts struggle to use 'secure' encrypted signed email and even then the emails are frequently all encrypted with the same key that changes infrequently if ever, so if it's ever lost/broken all the messages ever sent become readable. [assuming they were intercepted in the meantime, and/or still available on the mail server]

This means that email is fundamentally and unfixably insecure, there is effectively no such thing as secure email and it should not be used for any communication purpose where security/privacy is required.

More secure communication systems [such as Signal, Whats App, etc] that use end-to-end encryption with keys changed every time you communicate should be used for communications that require privacy and security.... which is pretty much all communication... We should get rid of email entirely. Which will of course require some redesign of systems that assume people have email addresses for signing up to things, and use something else [or a selection of somethings else] instead. Which will have the added benefit of not having peoples insecure unencrypted emails as the key to all their other accounts for everything else they use on the internet.

The problem, for people like me, is that Signal, and Whats app et al, were designed for smartphones. I do not own a smartphone, and never will, and thus cannot access these secure systems as people do not seem to want to create such similar systems that work for desktop/laptop pc's without requiring at least a smartphone account to set it up as Signal does. And I am not prepared to pay to own a personal tracking and recording device specifically so I can have a secure private communication program so I don't have to be tracked or recorded. This assumption that everyone owns, or will own, a smartphone is deeply misguided.

There is nothing new under the sun.
Yet another variety of a good old mail fraud that unfortunately still works in some countries. However, the history of this one goes back thousands of years. You compromise the "supply chain" and "fish" the actual mail out of a victim's postbox or pigeonhole (or from the ancient courier's bursa while he was sleeping), replace the invoice, and then put it back where it belongs, hoping that the victim won't spot the difference.
The digitalised modern version of the same fraud: compromise the victim's Outlook client, set up a "forward" rule, and then passively monitor the correspondence until an invoice arrives. Then simply intercept and replace the invoice with a forged one.

I have a friend who runs a company renting out commercial property for a living. Several years ago he told me that they have a strict rule that the account for sending rent to must be confirmed by a telephone call from an individual that is known to the staff member taking the call. Email is not an option for precisely this reason. (Signed email is not an option because he, for one, couldn't cope.)

The fundamental issue is that banks choose what if any protection they wish to provide to their account holders, but those same banks are not responsible if those account holders are defrauded. This results in banks implementing a minimal level of security protection including checks and balances, since there is no incentive for them to do more.

So why not implement regulation/protection rules that shift more liability to them in a reasonable way?

When fraud is established within one week of a transaction, why not force the bank which received the wired/transferred funds to return them? If a criminal transferred the fraudulent funds onward, that bank can in turn go after the next bank, and so on. This could be established as required policy for banks participating in a funds transfer network such as SWIFT or a domestic financial network.

A one-week delay could be imposed on account holders for any purchases, withdrawals, or transfers of funds to banks outside this network (from which they cannot get fraudulent funds returned), for any funds transferred into the account.

From what I've seen, the most common tactic of keeping this hidden from view is setting an email rule to send all correspondence between the compromised email account and the 'accountant' email address to Junk and 'Mark As Read'. Just an FYI.
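A compromised mailbox can be audited for exactly that hiding pattern. Here is a sketch over a hypothetical exported rule list (the field names and addresses are invented for illustration; real APIs such as Exchange's inbox-rule interface differ):

```python
# Hypothetical export of a mailbox's rules -- field names are made up.
# Attackers often give hiding rules inconspicuous names like "." or "" .
rules = [
    {"name": "Newsletter cleanup", "move_to": "Archive", "mark_read": False},
    {"name": ".", "move_to": "Junk", "mark_read": True,
     "condition_from": "accounts@victim-company.example"},
]

def suspicious(rule: dict) -> bool:
    """Flag rules that hide mail: moved to Junk/Deleted AND marked as read."""
    hides = rule.get("move_to") in ("Junk", "Deleted Items")
    return hides and rule.get("mark_read", False)

for rule in rules:
    if suspicious(rule):
        print(f"Suspicious rule: {rule['name']!r} -> {rule['move_to']}")
```

Periodically reviewing inbox and forwarding rules is one of the cheapest detection measures for this class of compromise, since the attacker's rule has to persist for the whole monitoring period.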

If they're fully penetrating the endpoint target's networks no keymatching or fancy binaries will save you, they're going to get those and use them from the target network impersonating them. The only thing changed would be the depositing account #. Everything would be legitimate except that. The bank would have no specific reason to call back and reverify, they'd be doing that on every single transaction if that were the case.

Then it's a matter of time until the bank discovers their stuff is missing and hopefully alerts the depositing banks before mules can drain it all. In this case they got most of it blocked and returned in a couple days.

Man, a lot of people talk a lot of talk about Signal, but I'd really appreciate something with similar security properties that worked more like email insofar as you can send files, store messages in a portable client database.. such as on a flash drive, and run the application from a PC instead of requiring smartphone hardware to be in the mix. :(

... I'd really appreciate something with similar security properties that worked more ...

Such applications in use ARE IN NO WAY SECURE nor can they be.

The problem is that as long as the communication end point is beyond the security end point the attackers just go around the security and get the plaintext.

The current state of commercial consumer OSs, and the devices they run on, is such that if the device is connected to any kind of communications network then it's game over security-wise. No ifs, no buts and no maybes.

As has been pointed out in the past, it does not matter how high you make a fence if people can just walk around it with no real effort.

Signal, WhatsApp, etc, etc are just the equivalent of a single fifty foot post that would not even stop a one legged blind man getting around... Oh and because of the way the apps work they also come with a free extra of a huge red flag at the top of the pole advertising you have something to hide to all who care to look...

@Anders still I don't get it. Even if criminals hired people to withdraw money for a small fee, it would take months to move even tens of thousands of dollars without attracting attention at the banks.

I still do not understand how the first bank that moves the money can't track down where that money went on each hop, or why the last bank in the chain can't block the account and return the money. Something is not on the up and up with banks if they can't keep track of money. And criminals should not be able to cash out tens of thousands of dollars without attracting attention. No one just walks up to a bank and takes $100k out in cash. So if companies are losing $800k, there has to be someone crooked in the bank to move that money into cash. And once that bank is identified, other banks should blacklist them. Why this isn't happening is beyond reason.

I had to transfer money to an African company for a safari I was taking after a business trip. The checks and balances that my bank went through to ensure everything was legit and secure were incredible (including name, address, account numbers, phone numbers, etc). They even knew what the fees were at the receiving bank so I could make sure I covered their receiving fees too. So I still say something is wrong when money/criminals just disappear without a way to wind back the transactions.

"In some cases, the banks didn't realise a breach had taken place and a significant amount of money was stolen well after the attack was completed. In a few cases, the malicious activity was reported to the banks by third-party firms responsible for processing the bank's debit and credit card transactions."

If they don't realize they're being taken right away, that's the #1 determinant.
The only solution is professional, paid eyeballs (and bots) monitoring everything 24/7.
For smaller banks a capable response team is repeatedly the weak link.

A slightly older version of this scam: break into a company's PBX (a.k.a. IVR system, auto-attendant) and add a menu option such as "Vice President" that reaches your (burner) cell phone.

The scammer then orders equipment from local vendors to be delivered to specified locations. When the vendor calls to confirm, he reaches the scammer, who confirms the order.

In other words: calling a company to verify identity may, in fact, fail as a method of verification. And I find it easy to imagine a scam that is carried out from start to finish with forged business cards of company X, emails from company X, and phone calls to and from company X.

@Clive Robinson: You say that "Such applications** in use ARE IN NO WAY SECURE nor can they be." however I respectfully suggest that you have missed the point.

Perfect security is impossible, thus demands that something have perfect security or be dismissed as "in no way secure" are thus demanding the impossible.
In the same way, since it's impossible to be absolutely certain about anything, some claim it's therefore impossible to know anything, which is absurd. Knowledge is simply uncertain and comes with a probability attached to it.
Similarly security is not perfect and comes in degrees; things are more or less secure, and also more or less useable. [Frequently but not always as a result of how secure they are. Email as a case in point becomes less and less useable the more secure you try to make it, rapidly becoming practically unusable long before it becomes usefully secure.]

Breaching security has a cost, and is done to get some benefit.
If the benefit is greater than the cost an attacker stands to gain by breaching my security.
Thus I do not need perfect security, what I need is [in general] security that makes the cost of access higher than the expected benefit.

Email does not achieve this [clearly] and thus can be labelled as deeply insecure, the cost of breaching the security is so low that it's almost free and mass surveillance of everyone is possible and indeed done multiple times over.

Programs such as Signal and Whats App et al do [in general] achieve this, because the cost of breaching the security is higher than the expected benefit for most attackers for most people and they prevent mass surveillance of the messages being sent. [With varying degrees of privacy]

It is still possible to break into the various devices [PC, Mac, Smartphone, etc] that they might run on and get at the data but there are steps that can be taken to make that hard enough that generally someone has to really want to attack you and get at your data to apply enough resources to get in. [The previous story on this blog being a case in point]

As for using the apps signalling that you have something to hide... There are over 1 billion Whats App users... There is safety in numbers. The more people use these secure apps for their communications, and the more normal and widespread they become, the more resources it would take to break into everyone's devices to bypass their security and read their messages. At a billion-plus users, Whats App long ago surpassed the numbers where that was practical for anyone.
Using these services isn't flagging you up as having something to hide; it's simply becoming the way we communicate.
I would just like that functionality to be available on devices other than smartphones. For those of us who will never ever use them.

[**Apparently referring to 'Signal and Whats App etc etc'...]

@me: I'm generally agreeing with you but... How many people do you think could actually use a "Qubes OS disposable VM"..? I volunteer to help people learn basic computer skills and there are still plenty of people out there who are still struggling with the basics of how to use a mouse/keyboard, how to access emails/the internet, and what a web browser is... etc. And your suggestion is that to avoid fraud while they are going through the stress and hassle of trying to find a house they should set up a VM to make sure their email is more secure? If those are the lengths you need to go to to fix email, email is so broken it needs to be dropped. And replaced with a system that regular people can actually use. One I can explain to someone who is 70-90 years old and has never used a computer/smartphone/tablet before but is now required to use one to access certain government or other services.

@David
I understand that Qubes might seem difficult, but I don't think it is. No more than using a normal PC (though the install is difficult for normal people).

I installed Thunderbird for my dad yesterday and spent the whole evening explaining to him how to use it:
- click on an email to read it
- this is the button to write a new email, this one is to delete an old one
- the left column is sent/received/deleted
Nothing more...

But he is not very skilled with PCs, and I still have to teach him how to minimize programs, how to reopen them after minimizing, how to copy and paste... these are not simple tasks for him.

But suppose I install Qubes for him, ready to use, with the Windows he already "knows", and set Thunderbird to automatically open links in a disposable VM, and the same for attachments.
Even if I don't explain what I have done, he will probably not notice any difference, but he is much more secure.

Full use of Qubes is probably not possible for him (normal Windows is quite difficult for him), but I think the average user, given an installed Qubes OS ready to use, will not find it difficult.

Yet I understand that there are simpler solutions for email (like double-checking by phone).
The problem is that I can spend a week teaching someone how to secure email and their PC, how to check for a phishing page, HTTPS, ... improving their security by 1% (with them understanding only half of what I say and forgetting the other half).
Or I can tell them "get Qubes" and explain for 5 minutes that "you can have as many different PCs as you want; when your friend gets a virus you don't get it too, because you have a different PC, obviously."
And they are *much* more secure for every task, not only email.

However, as I understand the topic here, the underlying problem isn’t the “far too clumsy and old” email, it is bad (online) account authentication.
Think of “Email is ideal, because we know it is vulnerable”.

Perfect security is impossible, thus demands that something have perfect security or be dismissed as "in no way secure" are thus demanding the impossible.

No, I've not missed the point; it's the designers of the security apps, and the users they mislead, that have missed the point.

I'm not talking about "perfect security". I'm not even talking about "reasonable security". I'm talking about "snake oil security", because that is what those apps are, "snake oil", when used in the recommended if not enforced way.

Further, whilst "perfect security" may be an impossibility, "full insecurity" is trivially easy to achieve with a little legislation applied at the weakest point in the system.

As @Bruce and many others, myself included, have repeatedly pointed out year after year, it matters not one jot how strong the strongest link in the security chain is; it's how weak the weakest link is that matters where security is concerned.

Go back and look at the CarrierIQ debacle a few years ago to see just how easy security is to bypass when the Comms End Point reaches further than the Security End Point.

But worse using such apps is not just "Painting a target on your back", it's using anti-aircraft flood lights to make sure it can not be missed from space and just to be sure having a three hundred member pipe band playing so even the blind are aware of it for miles around.

As I've repeatedly said, the sort of attackers you need to worry about care not one jot how secure the path is between the secure apps, or how secure the apps are; they really, really do not care. Why should they, when it's way, way easier to get at the plaintext in the user interface, or to ask for the "business records" that contain the plaintext or KeyMat? Failing that, they simply do an "end run attack" around the security, which these days is ridiculously simple with commercial consumer-grade OSs.

Which brings us to your comment of,

Breaching security has a cost, and is done to get some benefit. If the benefit is greater than the cost an attacker stands to gain by breaching my security.

The commercial consumer OS developers have done it for the sorts of attacker you need to worry about to protect themselves. They take your plaintext and put it up in their cloud or ET Phone Home with it to their mothership via "Telemetry" or "Test Harnesses" to provide "Help desk support" etc etc.

So the cost is less than a big fat zero to the attackers you need to worry about as the US legislature has put in place legislation that makes these development companies hand over the data free gratis and for nothing, because it buys the companies "immunity" from the legislation.

Thus as you say,

I respectfully suggest that you have missed the point.

Of how the US IC, LEA, and legislature work entirely.

It's really past time XKCD redid the $5 wrench cartoon. This time showing Bill Gates or Mark Zuckerberg tripping over the IC doorstep / threshold and thus throwing armfuls of user secrets at the IC guys till they are snowed under...

@Clive Robinson: "The commercial consumer OS developers have done it for the sorts of attacker you need to worry about to protect themselves. They take your plaintext and put it up in their cloud or ET Phone Home with it to their mothership via "Telemetry" or "Test Harnesses" to provide "Help desk support" etc etc"

Wow. Ok, well I dislike Windows' unturnoffable telemetry as much as the next guy... which is why I turned it off: the services are disabled and the relevant servers are blocked by firewalls [which is pretty trivial to do].
But let's say that like most people I haven't done that...

It's certainly possible/probable that, as part of the telemetry data, some of my private documents/messages could get uploaded to Microsoft/[OS provider of choice], which is why the telemetry thing is such a big issue.
But what you are suggesting is that the security of signal et al is worthless because ALL of my messages/documents are available for free to anyone because they are uploaded by telemetry... Which is b*****t.
Telemetry doesn't upload anywhere close to that much data [and in my case it's uploading none, which as I say was easy enough to achieve]. People would have noticed and screamed from the rooftops if telemetry was uploading the entire freaking contents of your hard drive to Microsoft [or whoever].
So the threat of telemetry is that it might upload revealing scraps of personal data, which again is a big deal, but it is not an all-encompassing threat that would make encrypting and/or obfuscating the data in transit worthless.

"As @Bruce and many others myself included, have repeatedly pointed out year after year it matters not one jot how strong the strongest link in the security chain is, it's how weak the weakest link is that matters where security is concerned"

And I notice that @Bruce recommends people use Signal...

Yes, it matters how weak the weakest link in the security chain is.

However for our purposes, email is like sending messages by postcard, just about any person who cares to can intercept and read/copy/alter/duplicate/imitate your messages along the way.
It's more secure to send messages by fax [in a fictional world without wire tapping], even if you keep all the printed-out documents in unlocked drawers and never lock your house, because your threat exposure is still less than the people communicating via email, especially when there are a billion-plus of you doing it. You can't use the argument that all those billion-plus people are suspicious criminals with stuff to hide; nobody will buy that. With those kinds of numbers you are just people.

Making secure apps more widely available and easier to use so more people use them for regular communication means the communications that need to be secure get hidden in everyday fluff that is encrypted to the same standard. That way the 'bad guys' don't know what to target.

Speaking of which...

"As I've repeatedly said, the sort of attackers you need to worry about care not one jot how secure the path is between the secure apps, or how secure the apps are; they really, really do not care."

The people who were taken in by this scam apparently had to worry about the kind of attackers who apparently do care about how secure the path is between the two people communicating.
If these people had been communicating by Signal or Whats App then the attackers would have had a vastly harder time intercepting the communications undetected or sending the fake messages undetected. Hacking an email account is easy, hacking into Signal or Whats App is really hard.
This story would likely not have happened in a world where people communicated via secure apps instead of email. [criminals will of course find other ways of targeting people there will be displacement but that’s not an excuse for not fixing weak security]

So while the secure apps might not mitigate against every threat, and they might not mitigate against the threats you’re paranoid about, they do mitigate against some threats! Threats that matter in real ways to real people.

As someone who has met and dealt with people who have lost tens of thousands of pounds to 'Microsoft scammers' these kind of threats are very real and worth mitigating against. Even if they are not as sexy as your paranoid overarching government conspiracy new world order.

And yes, the fact that the business model of the internet is surveillance and that governments and security agencies have got addicted to that is a big deal. But it is not the only deal.
“Go back and look at the CarrierIQ debacle a few years ago to see just how easy security is to bypass when the Comms End Point reaches further than the Security End Point.”

I did go and have a look at this, and it makes my point perfectly. It may indeed be a choice whether or not you upload your encryption key to the cloud... but to argue that an unencrypted computer is no less secure than one encrypted by that system is to fail to understand maths.
Yes, there are ways attackers could potentially get the key, but it is an extra step they have to take to get at your system, an extra bit of difficulty [however small you think it is], and that makes it harder and thus more secure. The number of people who can attack that system is smaller, and thus that system is more secure.
Most people are [rightly or wrongly] more concerned about criminals stealing their data/identity/money than they are about the NSA spying on them.
And the measures we are talking about definitely help protect against criminals, even if you think they are less help against the NSA.
And sure, there are probably some elite crackers who could go through these protections like they weren't there... but the overwhelming majority of criminals that most people are likely to be 'attacked' by are not those elites. They are the 'Microsoft Scammers' of the world, the script kiddies, the scum of the Earth, with little skill and fewer morals, who are after a fast buck, and a little security really is all it takes to put them off and make the difference.
So you telling people that the protections these pieces of software offer are worthless because they don’t fully protect against elite hackers or the NSA ... the attackers people need to ‘worry about’ is bullshit... Because most people most of the time do not actually need to worry about the NSA*.
They DO however need to worry about cybercrime.

But what you are suggesting is that the security of Signal et al. is worthless because ALL of my messages/documents are available for free to anyone, because they are uploaded by telemetry... Which is b*****t.

You are quite deliberately misquoting me, which is a no-no. The points I made were,

1, The attackers you need to worry about (SigInt and LEO) can easily bypass these security apps and get access to the plaintext at the User Interface.

2, That the developers and suppliers of the main consumer commodity OS's are, by various forms of legislation, made complicit in enabling the SigInt and LEO agencies to gain access to the plaintext.

3, I indicated just three of the many ways in which access to the plaintext can be achieved. Further, we know that these methods have without doubt already been used to access the plaintext.

Further, you are being somewhat deceitful yourself with,

However for our purposes, email is like sending messages by postcard, just about any person who cares to can intercept and read/copy/alter/duplicate/imitate your messages along the way.

What you are talking about is "unarmoured plaintext" email; while most use it that way, there is no requirement to do so.
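To make the "armoured" point concrete: the body can be encrypted and integrity-protected before it ever touches any mail software, so intermediaries only ever see ciphertext. The sketch below is a stdlib stand-in for what PGP/GnuPG does properly (a one-time pad plus an HMAC, with keys assumed to be shared out of band), purely to illustrate the principle, not something to deploy.

```python
# Illustrative only: "armouring" an email body before it reaches the
# mail system. A one-time pad + HMAC stand in for real tools like GnuPG.
import base64
import hashlib
import hmac
import secrets

def armour(plaintext: bytes):
    """Return (pad, mac_key, armoured_body); keys shared out of band."""
    pad = secrets.token_bytes(len(plaintext))          # one-time pad
    mac_key = secrets.token_bytes(32)
    ct = bytes(p ^ k for p, k in zip(plaintext, pad))  # XOR encryption
    tag = hmac.new(mac_key, ct, hashlib.sha256).digest()
    # Base64 so the body survives any mail transport intact
    return pad, mac_key, base64.b64encode(tag + ct).decode()

def unarmour(pad: bytes, mac_key: bytes, body: str) -> bytes:
    raw = base64.b64decode(body)
    tag, ct = raw[:32], raw[32:]
    expected = hmac.new(mac_key, ct, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("message was altered in transit")
    return bytes(c ^ k for c, k in zip(ct, pad))

pad, mk, body = armour(b"wire the funds to account X")
assert unarmour(pad, mk, body) == b"wire the funds to account X"
```

An intermediary who alters the body trips the HMAC check on receipt, which is exactly the alter/imitate risk the "postcard" analogy describes.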

So you are also "Trying to have your cake and eat other people's".

It's only later you say,

However for our purposes, email is like sending messages by postcard,

Not just without explanation, but also by falsifying the argument with "for our purposes" when it should be "for my purposes". It's your argument, not other people's, and as I've noted it's not true as an argument, just an observation on common usage.

Which brings us to your point of,

Making secure apps more widely available and easier to use so more people use them for regular communication means the communications that need to be secure get hidden in everyday fluff that is encrypted to the same standard. That way the 'bad guys' don't know what to target.

Is based on a number of false premises. Firstly, the use of a secure app on a device where the attackers can easily end-run around it to the plaintext IS IN NO WAY A SECURE SYSTEM. Worse, it gives people a false sense of security, which is actually a lot worse than using insecure communications, where the parties will exercise caution.

As for "That way the 'bad guys' don't know what to target.", you appear to be ignoring two very salient points,

I did go and have a look at [CarrierIQ] and this makes my point perfectly.

Either you have not read anything of relevance or you do not understand what you have read. It has nothing whatsoever to do with "a choice whether or not you upload your encryption key to the cloud".

The CarrierIQ software was a "test harness" that was, amongst many other security-avoiding methods, the equivalent of a key logger and screen scraper that then sent its log files back across the Internet to CarrierIQ's servers.

Whilst what was sent back was configurable, it could be done by virtually unlimited numbers of people. All the NSA had to do was sit there and passively copy the data as it went through an upstream node of CarrierIQ's servers... It had nothing to do with encryption keys being put on the cloud or not, just end-running the plaintext at the user interface.

So it does not make your point in any way that a reasonable person would understand.

As for people losing money to Internet scammers, it happens, even with ordinary snail mail. The only way you can protect people against these sorts of scams is by education, not technology. Because for every technology you apply, the scammers will just move to another method of communication, as we see with other similar non-email attacks. Trying to secure every type of technology really would be an impossibility...

So try actually living in the real world, where non-cyber scams and cons are just one of a great many everyday threats people face.

Oh, and if you want to make email more secure, can I suggest you have a go at those same developer entities that make the commercial commodity OS's and applications, because it's their failings, in the name of gimmicks for increased "usability", that are more responsible for email scams working than just about anything else technological.

If you want to know how to make security apps actually secure, you first have to move the security end point beyond the communications end point available to the attackers. There are known ways to do this, and have been since before the age of personal computers; all you have to do is a little reading. The fact that neither the current hardware nor the apps support this means that the apps will remain insecure, no ifs, no buts and no maybes. That's what the laws of physics and mathematics show without any doubt whatsoever.
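The "security end point beyond the communications end point" idea can be sketched in a few lines: model the air-gapped device that holds the key and plaintext as one function, and the networked device as another that only ever receives ciphertext. Then any keylogger, screen scraper, or telemetry end-run on the networked device captures nothing usable. This is a toy illustration under those assumptions (XOR one-time pad, key carried between offline devices by hand), not a real protocol.

```python
# Illustrative only: the plaintext and key never exist on the online device.
import secrets

def offline_encrypt(data: bytes, key: bytes) -> bytes:
    # XOR one-time pad; XOR is its own inverse, so this also decrypts.
    # Runs only on the air-gapped device that holds the key.
    return bytes(a ^ b for a, b in zip(data, key))

def online_device_send(ciphertext: bytes) -> bytes:
    # Models the networked device: an end-run attack here (keylogger,
    # screen scraper, telemetry) sees only ciphertext.
    return ciphertext  # stands in for "transmit over the network"

msg = b"meet at noon"
key = secrets.token_bytes(len(msg))  # generated and kept offline
wire = online_device_send(offline_encrypt(msg, key))
assert offline_encrypt(wire, key) == msg  # decrypted on the receiver's offline device
```

The design point is that the attacker's reach (the online device) ends before the security boundary (the offline device) begins, which is what no current app-on-a-phone arrangement provides.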

The fact you are trying to argue otherwise renders you on a par with a certain Australian politician... Worse, we know that some of the app developers have done the required reading yet keep quiet about it, which should make you question why, not argue with someone who has repeatedly pointed out why and how to solve the issues practically, since before the apps existed...

I guess it's your agenda that makes you behave this way, so perhaps you had better put your cards on the table...

Unfortunately security is binary: Something is secure - or not.
There is no scale, no unit, no device to measure it.
And we have at least the feeling (if not the knowledge) that we'll never come close to the secure state, not only in IT.
That said, we better accept that in reality there is no security.

Now you argue that Signal is better / more secure than email.
That’s indisputably true, but only for a small part of the communication.
Like a strong railing, but only for a small part of the cliff.
That’s very dangerous for ordinary people and would thus be illegal (at least in the EU).

My point is: The biggest danger is to feel secure [1].

Re “purpose”: What you can’t send by email you shouldn’t send by Signal, or otherwise.
But most have to learn that the hard way.

[1] Also, the TLAs are not secure; what they have today will be lost tomorrow.
The only advantage they have is impunity.

@Clive Robinson
I will provide a longer [better] response when I have the time to sit down and formulate it.
But I just wanted to quickly respond because I think there has been some miscommunication [because when does that ever happen on the internet] of intent and meaning ...

First you say...
"Either you have not read anything of relevance or you do not understand what you have read. It has nothing whatsoever to do with "a choice whether or not you upload your encryption key to the cloud".

The CarrierIQ software was a "test harness" that was, amongst many other security-avoiding methods, the equivalent of a key logger and screen scraper that then sent its log files back across the Internet to CarrierIQ's servers."

You provided no link and just said to go back and look at the CarrierIQ thing, so I used the on-site search, found what looked to me like the right thing, and responded to that. Apparently that wasn't what you were talking about. It was a story you had commented on, and both your comments and that story fitted exactly with what I was already talking about, so it made sense to me that it was the right story.
Apparently you were referencing some other incident. However, I am not going to make the same mistake again by trying to guess which articles you are referencing [I apologise for doing so before], and I will not respond unless and until you provide a direct link to the article/story/post you meant [or, if links are banned, some other means of directing me accurately to the right page; I am not playing guessing games].

Secondly you say...
"You are quite deliberately misquoting me, which is a no-no. The points I made were,"
No. I am doing nothing of the sort. I don’t do that, I don’t see the point.
Any QUOTES were in quotation marks and were verbatim, spelling mistakes and all.
Anything else was my summary/characterisation of what I thought you were talking about.
I fully accept that I might be wrong, and if that is the case I apologise for mischaracterising your position. However it was not, and will never be, done intentionally or maliciously.
I took a certain meaning from your words and honestly responded to that.

Bearing that in mind, I would finish by saying that my quick once-over of your post does seem to confirm the same impression of your position conveyed by your previous posts, so I will try to quickly state my position more clearly.

There is a diverse array of possible threats online, from various cybercriminals, stalkers and abusive exes, to con artists of all sorts, and yes, government security agencies... Different people have different levels of exposure to these different threats, and thus need different security solutions; there is no one-size-fits-all.
However... Most people, most of the time [in the West], are not actually [currently] under threat of oppression via surveillance by our governments. [The caveats are there to indicate that I do realise that some people are, that this could change, and that this capability should not exist, is dangerous, and should be fought against.] The threats they actually face, that actually threaten to materially impact them day to day, are NOT from SigInt agencies.
What does this mean?
I don’t really give a damn [for the purposes of this argument] that there is no practical way of securing your system [without living in a Faraday cage with the power off] against the NSA [or insert boggart of choice here], because they are not actually a threat to most people in their day-to-day lives.
I care whether a system works against regular cybercriminals and/or con artists, the kind I have personally seen ruin real flesh-and-blood people’s lives. And the good news is that these measures DO work on those threats. So it is NOT ‘Snake Oil’ to say to someone who wants security against these threats ‘here are some tools that will help with that’ and ‘these communication apps are more secure than email’.
If your response is to say... ‘But the NSA...’, then it is indeed you who has missed the point.

Alex P has it right. These fraudsters leave a huge trail. They are easy to track. If the banks and law enforcement don't do their job, that is negligence on their part. I never hear about that kind of scam being successful in Europe.