The deepest root of all the misunderstandings that constitute today's InfoSec discourse is that normal people («security experts» included) do not understand what software is, and how fundamentally it differs from the physical world we live in.

The entire realm of software is purely artificial.

Not only programs and functions, not only bugs and security holes, but also all the notions and intentions, all phenomena in the realm of software, even those perceived as «natural», are created by man.

There are no natural laws that a program must follow and obey. While your computer does follow all the laws of physics, your programs do not at all. This very distinction makes a computer useful for us. The purpose and the only purpose of your computer's existence is to create a virtual TABULA RASA world, the world devoid of any laws, the world completely disconnected from the physical reality, the world that you are supposed to populate with laws of your own creation.

In other words, a computer can produce any output from any input — this is the definition and the characteristic property of a computer. This is what they always forget, and I stress ALWAYS.

REMEMBER THAT! If you want to improve your «safety», «cyber security», whatever. Every time you form an expectation of a program of someone else's creation, remember that! Every time you are disappointed («I gave this stupid machine a perfect input!»), remember what a computer is: a machine that produces any output from any input, no restrictions at all. If you remember it well, first, you will stop acting surprised when you wander into a trap; second, you will become a more challenging prey; third, you will stop believing InfoSec sales stories.

The trouble with all serious social troubles is that they do not allow for the prolix, bloated discussion that normal people value so much. Muslims want us dead. Hitlary committed high treason. Douchebank is a fraud. Credit cards are not secure. 2+2==4 — there is no room for a discussion!!! Here are some prooflinks, case closed, the public is bored and ignores the issue in question.

On the other hand, the lack of evidence, the absence of a solid research method, the absurdity of the subject — these open the gates for creativity and rhetoric and demagoguery and entertainment of all sorts. One may write volumes on Bigfoot, UFOs, ghosts, gods, multiculturalism, oppression, patriarchy, microaggression. And I assure you those volumes will sell magnificently — people love talking much more than thinking.

I said it earlier this century, "state-sponsored malware/spyware developers ARE de facto blackhats".

There is no «legitimate» third party to receive zero-days. Either you give priority to your software vendor (and contribute to the defensive side) or you do not, and contribute to the bad guys. Yes, bad.

Not that I blame vulnerability researchers for being immoral. I am a free market advocate: if a software vendor is not willing to pay a competitive price for vulnerability information, it certainly deserves the consequences. I just hate hypocrites who fail to admit the obvious fact that they are no different from blackhats — because the «we sell to government and law enforcement only» clause makes no real difference.

But, wait!

They ARE different.

The ideal black market for zero day exploits is free and open for anyone, including software vendors searching for the exploits in their software. You, as a seller, do not want to sell your exploit to the vendor of the vulnerable software, because you are interested in the exploit's longevity. But on the black market there is no way for you to know if a buyer works for the vendor (directly or indirectly).

In contrast, the real market (thoroughly regulated by the government) completely rigs the game to the detriment of the software vendors. First, a software vendor is explicitly banned from participation (by that very «we sell only to law enforcement» clause): no legitimate purchases for a vendor, tough luck. Second, the market is open to trusted brokers who make huge profits from the fact that they have government approvals (see the HBGary leak to find out how hard some people try to get a foot in the door, with quite limited success).

There is a simple remedy to many information security woes about smartphones.

And it is simple. And extremely unpopular. Vendors and operators definitely won't like it.

Here it is: turn the smartphone into a computer. No, not like now. For real.

A computer does not run «firmware» bundled by the «vendor» and «certified for use». It runs an operating system, supplementary components such as libraries and device drivers, and applications, both system and user.

And there are updates. When there is a bug, an update is issued, not by the computer vendor, but by the vendor of the OS or of the affected software. Meanwhile, the «firmware» that the FCC et al. should care about is the tiny thing running inside the baseband module, which you, the user, probably never think about at all.

I've seen people arguing that it would break things: due to device fragmentation, people will get broken updates, brick their phones, and overload repair centers. Come on. Have you never seen a bundled OTA firmware update do exactly that? It is actually safer if the update is granular and does not touch things it does not need to.

But you will never again see an unfixed remote code execution bug stay there for years, or even forever, just because your phone vendor has decided it is no longer necessary to support this model.

I want my smartphone to be a real computer. With an OS, applications, and no unremovable bloatware burned in by the vendor or (worse) the MNO. Do you?

UPDATE: and surely initiatives like this will get the middle finger they deserve, with no questions raised. You may run anything you want on your COMPUTER.

(This is a copy of my old LinkedIn blog post; I saved it here because LinkedIn sucks big time as a blogging platform.)

The full story is here: pastebin.com/raw/0SNSvyjJ
and it is worth reading for all InfoSec professionals and amateurs: a perfect, outstanding example of an «old school» hack described step by step.

It also provides a classic example of another issue that is often overlooked, or rather intentionally ignored: starting from a certain (rather small) organization size and complexity, a sophisticated attacker WILL compromise your Active Directory. There is no «if» in this sentence: it is inevitable. I've seen many pen tests and many advanced attacks by malicious actors — ALL, I mean it, ALL of them ended like that.

That leads us to an obvious yet controversial conclusion: certain valuable resources are better kept OFF the domain. This means cutting away a whole branch of the attack graph: no SSO, no access from domain-joined admin workstations, no access recovery via domain-based email, no backups on AD-enabled storage, whatever. Which raises some completely different issues, but that's it.

Seriously. They are «not responsible»! Who is, then? Those guys are paid enormous amounts of money for being MANAGERS. A manager is a person who is responsible — for solving problems he or she might not truly comprehend as well, and that's OK. I do not expect them to really know a thing or two about IT or security. An executive should understand business risks; that's enough. If there is a business risk that an executive does not understand and is not willing to, he or she should consider getting another job; perhaps McDonald's could offer them an intern position?

Those people say they are utterly incompetent — and they say it in public and get away with it. And everyone thinks that is OK.

I've seen a lot of companies where it is not — and not necessarily big corporations with huge IT staff. There is simply no reason to keep anything of significant value on a workstation (and quite a few reasons to keep it on a file share), and living without it is not a huge complication.

I'd be more worried about the fact that if you've got ransomware (or any malware at all), it means you have been compromised. And you are just lucky that the attacker was not sophisticated enough to take any other advantage of the situation (in a way that would be even more harmful to you): maintaining covert access for an indefinite amount of time and silently ruining your business in a way you wouldn't even be able to identify before it's too late.

So it is not about desktop backups, or antivirus, or advanced anti-APT self-guided silver bullets. It is about you.

It all started as a Facebook discussion. A colleague of mine witnessed an impressive talk at a conference: a representative of a penetration testing company claimed he could hack any company in one hour. He was challenged to do it, and here is the solution:

With a simple search of social networks and the company's website, he profiled the target company and obtained the contact of a salesperson. Then he crafted a simple trojan executable (not really tailored at this point, just a generic one), encrypted the archive, and sent it to that person; then he called by phone, pretending to have an urgent business proposal, and mentioned the email he had just sent.

The salesperson replied: «I cannot open the documents, my antivirus does not allow me to». «Strange, which one?» «(some name)» «OK, I will send you a new archive, it should work». And it did (this time it was a better-crafted trojan).

Yes, simple as that.

Could it be thwarted with proper training?

Yes. And no.

You may expect some vigilance from a person who understands the risks.
But what are the risks, and could training help to understand them?

From the salesperson's perspective, chances are it is just a technical issue. He estimates the probability of this as, say, 90% (we may discuss his reasoning later).

“If I manage to close the deal, circumventing the procedures that do not allow me to open these documents, I get, say, a $30K bonus. If I do not, I get nothing.
There is a 10% chance that a malicious hacker is trying to steal data from the company. If the hacker succeeds and I get the blame, I will be fired and lose, say, $50K in total consequences.”

Given that our salesman has decent experience and has learned some basic probability theory, it is totally acceptable for him to ignore the danger; this is a reasonably profitable strategy that incurs no extra cost. Add some internal competition among salespeople, and you can easily see that he will play this lottery again and again.
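The salesman's lottery is easy to check with the hypothetical numbers above; a quick expected-value sketch:

```python
# Salesperson's view, using the made-up figures from the text:
# 90% chance it is a real deal worth a $30K bonus,
# 10% chance it is an attack that eventually costs him $50K.
p_deal, bonus = 0.9, 30_000
p_attack, personal_loss = 0.1, 50_000

ev_open = p_deal * bonus - p_attack * personal_loss  # expected value of opening
ev_ignore = 0.0  # refusing to open the archive earns him nothing

print(ev_open)  # 22000.0: on average, opening the attachment pays
```

With a positive expected value and zero upside for refusing, ignoring the danger is the individually rational move, exactly as argued above.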

Let's talk about someone a bit higher in a corporate food chain, or even at the top of it — CEO, CFO, VP of sales, etc.

The perspective changes drastically. If the contract is secured, the company gets $1M. If a large-scale network breach occurs, sensitive data get leaked, or something similar happens, the company loses $15M. And that person's bonus is affected accordingly.

The balance is all different now (even if we assume probabilities to be the same).
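Keeping the same assumed 90/10 odds and plugging in the company-level stakes makes the flipped balance explicit:

```python
# Same assumed 90/10 odds; only the stakes change.
p_deal, p_attack = 0.9, 0.1

# Salesperson: +$30K bonus vs -$50K personal consequences.
ev_salesperson = p_deal * 30_000 - p_attack * 50_000      # +22,000

# Company: +$1M contract vs -$15M breach.
ev_company = p_deal * 1_000_000 - p_attack * 15_000_000   # -600,000

print(ev_salesperson > 0, ev_company < 0)  # True True
```

The very same action is profitable in expectation for the employee and ruinous for the employer: the incentives, not the training, are misaligned.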

Who is our CISO (or whoever is in charge of the data security) working for? The answer is obvious.

But there are caveats, as usual.

The first caveat is that if, say, our worst-case loss is estimated to be low and the associated damage benign, then the do-nothing strategy of risk acceptance (as bad as it sounds) is a business-justified course of action.
If you dislike this choice, you may try to spend some resources on decreasing the probability and the impact, but don't expect the business side to be very cooperative. It is still a lot of money, but not enough to let you interfere with any revenue-generating processes.

And the second caveat is more serious: all our risk estimations are produced by the business risk management process, which is an enigma to us, a black box. Either it works, or we blindly assume it works because it is «someone else's problem».

If business risk management is ad hoc, does not exist in your organisation, or is non-functional, it gets substituted with “information security risk management”, whose most prominent «information sources» are «FBI/CSI reports», «SEC-mandated leak disclosures», and «industry analysis reports» — the highest-grade nonsense, zero relevance guaranteed.

It is better than nothing to base our guesses on, but a blatant attempt to sell our qualitative estimation as quantitative data is a pure hoax.

However, chances are there is no risk management at all in your company, not even a dysfunctional one.

I think most people in the industry know that, but most are afraid to tell the truth aloud.

If you do not know your business environment, the probability estimation is pointless.
If you do not know the real business impact of a breach, your loss estimation is baseless.

Multiply these to get nonsense squared.
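«Nonsense squared» is literal: risk is typically computed as probability times impact, so the errors of the two guesses multiply. A sketch with made-up numbers:

```python
# Risk = probability * impact, so estimation errors multiply.
true_p, true_impact = 0.02, 2_000_000     # made-up "ground truth"
guess_p, guess_impact = 0.2, 20_000_000   # each guess off by a factor of 10

true_risk = true_p * true_impact          # 40,000
guessed_risk = guess_p * guess_impact     # 4,000,000

print(guessed_risk / true_risk)  # 100.0: two 10x guesses make a 100x error
```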

But you need to “justify” your security choices anyway. Does scaremongering sound like a decent plan now?

Before you indulge in an experiment investigating the effects of some quality of a subject, you had best make sure beforehand that the quality in question actually belongs to your subject.

We colloquially say «a red pencil» as if it were not a question whether a pencil can be red. Indeed, it can. In this particular case our «intuition» coincides with physical reality. We can devise an experiment demonstrating that any colour can be a quality of a pencil. We can clearly define «red» as a specific feature of the light spectrum, and we can unambiguously link such a spectrum to each pencil. We can see (experimentally) that some pencils share this quality, while some do not. Even if the dividing line between these sets is fuzzy, we now have a CHARACTERISTIC PROPERTY of a «red pencil»: all red pencils share this property, and all non-red ones lack it. Facing a pencil, we can (experimentally) determine whether it is red (and to what extent).

It is perfectly legitimate for anyone to call a pencil «red» or otherwise tag a pencil with a colour, because of the physics, not because the language allows it. Language is equally suitable for describing reality and nonsense alike. We can still call a pencil «aggressive», but it makes no physical sense: aggressiveness cannot be observed in pencils. There are many qualities applicable to pencils and many that are not. Some qualities are plainly inapplicable to some objects — this fact is so basic that it is often forgotten.
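The argument translates directly into code: a characteristic property is an indicator function defined for every object in the domain. For «red», such a function exists (the wavelength band below is an illustrative assumption, not an official definition); for «aggressive», no measurement of a pencil yields an answer at all.

```python
def is_red(peak_wavelength_nm: float) -> bool:
    """Characteristic property of a «red pencil»: the reflected-light
    spectrum peaks in roughly the 620-750 nm band. The exact cut-offs
    are assumed for illustration; the point is that the test is defined
    for EVERY pencil and refers only to the pencil itself."""
    return 620.0 <= peak_wavelength_nm <= 750.0

def is_aggressive(pencil: object) -> bool:
    """No measurement performed on a pencil can decide this question,
    so no characteristic function exists."""
    raise NotImplementedError("aggressiveness is not observable in pencils")

print(is_red(680.0), is_red(470.0))  # True False
```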

Now, I give you two grains of wheat, one is «GMO» and another isn't.
Can you conceive an experiment that tells me which is which?

Maybe it is time to take one step back and determine whether «GMO» is a quality of an organism at all? Is there any CHARACTERISTIC PROPERTY of a «GM organism», something that all «GM» subjects share, while none of the rest have? Please, define this property for me. ...or simply ask yourself (every time you are looking for that magical label on the food package): what is this characteristic property I am looking for?

Now that you have yelled all your suggestions at me, think carefully about which of them is actually a property of the organism. Not a single one. All you have come up with are qualities of a production process, or a design process, or something even earlier. None of those can be observed in a grain of wheat.

Observing a car, can you tell, for example, the difference between a car that was sketched with an HB pencil and one sketched with a 2B pencil during development? In the case of a car, you would not claim that all qualities of the design phase are inherited by the product. You would consider me foolish for even suggesting the possibility: it is too obvious to you that a car and a car production process are two wildly different objects. OK, then. What makes you claim that the «GM» property of an organism's design process is also a quality of the resulting organism? Hopefully you are not going to claim that organisms and their production processes are the same object.

However, you may legitimately conjecture that this particular property somehow translates from the design process to the organism. This is why I gave you these two grains of wheat. Take them and prove your conjecture. Show me the CHARACTERISTIC PROPERTY of «GMO».

I know you are wondering what all this nonsense has to do with passwords.
Well, this is all about information entropy, which you happily assign to your passwords without even a glimpse of doubt: IS IT REALLY A QUALITY OF A PASSWORD??? CAN I CREATE A CHARACTERISTIC RELATION THAT MAPS PASSWORDS TO REAL NUMBERS AND IS A FUNCTION???
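A small sketch of why no such function exists: Shannon entropy is a property of the generating process (the distribution), not of any particular string it emits. The two hypothetical processes below can both output the very same password, yet they carry different entropy, so «the entropy of this password» is not well defined.

```python
import math

# Process A: pick 8 lowercase letters uniformly at random.
# Process B: pick 2 words uniformly from a 2048-word dictionary.
# Both processes can emit the identical string "applepie".
entropy_a = 8 * math.log2(26)    # ~37.6 bits
entropy_b = 2 * math.log2(2048)  # exactly 22.0 bits

# Same string, different entropy: the value depends on the process,
# so no function of the password alone can compute it.
print(round(entropy_a, 1), entropy_b)  # 37.6 22.0
```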

It has become a dangerous fad to talk about biometrics as a replacement for traditional authentication methods. It is often claimed that passwords are losing the battle to biometrics… Despite the futility of the claim, I don't even need to dismiss it. There is one simple physical fact that renders this entire «battle» completely impossible.