A discussion of the increasing pointlessness of using floppy disks leads me to wonder if they might have a security advantage over more advanced storage media. With flash drives, sectors can be switched in and out of the address space within the device such that many of them are not openly available to read, so all attempts to read or overwrite the whole drive can fail. Of much greater concern though is the possibility that this could be used as a way to transfer viruses between machines and to leak hacked data back out. If you want to keep one or more computers off the Internet so that they can never be hacked from the outside, you will still likely want to transfer some data between your closed network and machines that are Internet connected, but that's risky.

It wouldn't be at all hard for a government to hack your open machines (Internet-connected ones) and make them spy on you, but they also want to know what you're doing on your closed network (or single isolated machines), because whatever you're doing there may be of value to them. If you only need to transfer data inwards from an open machine to a closed one, you might think it's safe to do this by buying a new flash drive each time and never using it again on the open machine after the first data transfer, but is that really safe? There may be a back door in the SMM code (system management mode) which can read and run code from hidden sectors on flash drives, so even if your closed machine is running your own operating system, it may be able to run code from the virus in the background invisibly within SMM. It is also possible for a virus to open up unexpected routes of communication between the closed machine and the open one, such as sending out ultrasound or generating patterns in the radio interference that computer chips put out, while the open machine would be listening out for those communications and would send all the data out over the Internet.

By keeping a closed machine in a Faraday cage and a soundproof room, you could prevent any data leaking out in that way, but your other problem comes if you need to send data out from the closed machine to the open one. How can you do it when you know that both machines may have been hacked? Any flash drive or hard drive, even if it's new, could have data written to it on the closed machine without you knowing, and that data would be picked up by the open machine - bit by bit your secret data would leak out to the spies, and studying the data on the disc wouldn't reveal anything being smuggled across if it's all held on hidden sectors that you can't access. Any kind of wire between the machines could potentially be hijacked by a sophisticated virus in the SMM code and could be used for rapid transfers without you noticing, so it occurs to me that floppy discs could still have a useful role here. You could also use optical discs (perhaps DVD-RAM is fast enough to be practical even for repeated small data transfers), but the big advantage of floppy disc drives is that you can hear the seeks easily and tell that it's only writing the tracks that it should be. You could also add an extra machine in the middle and copy onto a second disc only the sectors that you want to transfer, and check that all the remaining sectors on the last used track and the next track beyond it are empty. This step would also eliminate the risk of a virus trying to pass hidden data between tracks, though it would only help if you're sure that that machine isn't infected with a military-grade virus too. You should still be able to transfer data from the open to the closed machine on new flash drives without any risk, but all transfers in the opposite direction would be made on floppy discs (or optical ones if there's a lot of data to transfer and you're sure you're able to check all the content each time).
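The middle-machine check described above can be sketched in code. This is a hypothetical illustration, not a real tool: `copy_and_verify` and its parameters are invented names, and it assumes the disc has already been read into memory as a flat byte array of 512-byte sectors, 18 per track.

```python
# Hypothetical sketch of the "middle machine" copy step: keep only the
# first n_sectors of a raw floppy image, and verify that everything
# beyond them (through the rest of the last used track and one track
# further) is blank, so nothing can piggyback on the transfer.
SECTOR_SIZE = 512
SECTORS_PER_TRACK = 18  # standard 1.44 MB layout assumed

def copy_and_verify(raw: bytes, n_sectors: int) -> bytes:
    """Return the wanted sectors; raise if trailing data is not empty."""
    wanted = raw[:n_sectors * SECTOR_SIZE]
    # Check the remainder of the last used track plus the next track.
    last_track = (n_sectors - 1) // SECTORS_PER_TRACK
    check_end = (last_track + 2) * SECTORS_PER_TRACK * SECTOR_SIZE
    tail = raw[n_sectors * SECTOR_SIZE:check_end]
    if any(b != 0 for b in tail):
        raise ValueError("unexpected data beyond the transferred sectors")
    return wanted
```

Only the verified sectors would then be written to the second disc; anything lurking past them fails the check loudly instead of slipping through.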

With some government security agencies reintroducing the mechanical typewriter in order to keep secrets secret, it doesn't seem so unlikely that the floppy disc could continue to have a vital role for a long time to come. Can anyone think of any problems with this though, and are there easier ways that are just as secure (and as easy to check that they're secure)?

Pretty much any hardware with flashable firmware (which is pretty much any modern hardware) can be hacked, and any storage device can have data "hidden" on it as long as the firmware of the drive reading the device can be modified. Even a floppy disk can have extra data "hidden" on tracks 81 and 82, which most systems won't even know are there, and data could be stored between the sectors or in extra sectors after the end of the track (floppy disks traditionally have 18 sectors per track, although this can be increased to 22 sectors in practice), though this may require modifying the firmware and/or operating system of the target machine.
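As a back-of-envelope check on those figures, here's the arithmetic (a minimal sketch assuming 512-byte sectors and a double-sided disk; 80 tracks at 18 sectors is the standard 1.44 MB layout, and 82 tracks at 22 sectors is the extended figure from the paragraph above):

```python
# Capacity arithmetic for hidden floppy space (assumptions: 512-byte
# sectors, 2 sides; track/sector counts taken from the post above).
SECTOR = 512
standard = 80 * 2 * 18 * SECTOR   # the familiar "1.44 MB" format
extended = 82 * 2 * 22 * SECTOR   # 82 tracks, 22 sectors per track
hidden = extended - standard      # space a normal system never sees
print(standard, extended, hidden)  # prints: 1474560 1847296 372736
```

That's roughly 364 KiB of space invisible to a system that only knows the standard format, which is plenty for a payload or a slow data leak.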

The only truly secure way to protect against malware is to store any executable code in true read-only memory (not flash memory) and ensure that nobody can replace the ROM chips or even the whole machine (note that hardcoded checksum verifications are *not* secure as the chip containing the checksum verification code could be replaced too, or the entire system's circuit board replaced with a malicious one). In short, true protection against malware requires highly inconvenient hardware design and strong physical security should physical access be a concern.

_________________When you start writing an OS you do the minimum possible to get the x86 processor in a usable state, then you try to get as far away from it as possible.

And you also have to consider that there may be backdoors built into the microcode of your CPU, GPU and other microcontrollers by foreign governments, with collaboration from the manufacturers of said chips. We even had a lecture in my university IT security and ethics class about built-in kill switches, and that was in the early '90s.

My second rule of security is that "100% secure" is always unnecessary. The goal is only to ensure that it's harder for someone to bypass the security than it is for them to use the next easiest alternative. For example, if you're trying to protect a banana that costs $1.00 from thieves, then you only need to make sure it'd cost a thief more than $1.00 to steal your banana, because a thief's next easiest alternative is to just buy their own banana.

My third rule of security is that the security itself must be cheaper than the next easiest alternative. For example, if you're trying to protect a banana worth $1.00, you do not spend $123456 on an extremely good-quality safe.

For defending some data from a government; the first question you need to ask is "what is the government's next easiest alternative to acquire that data". If the answer is that the government could just send some thugs to your house to threaten your life and/or torture you until you give them the data; then you have a rough guide for how much security your computer actually needs (e.g. "no more than about $0.50 - the cost of a bullet if ammo is ordered in bulk").

Cheers,

Brendan

_________________For all things; perfection is, and will always remain, impossible to achieve in practice. However; by striving for perfection we create things that are as perfect as practically possible. Let the pursuit of perfection be our guide.

Another rule to remember is that the most effective tools for circumventing security are the user's gullibility, greed, blindness to people they consider beneath them (such as janitors and IT technicians), acceptance of people who appear to be doing necessary jobs that you don't understand or care about (such as janitors and IT technicians), and willingness to go along with authority figures. Primary props for this include fake ID badges, authoritative (or panicked) voices over the phone, the Clipboard of Authority, and janitor's jumpsuits.

The second-most effective set of tools for circumventing security are intimidation, extortion, pain, and fear, which are more likely to be successful but are also more likely to get noticed in a way that leads to rapid response and stern prosecution. In that case, the main props are things like lengths of rubber hose, a pair of pliers (applied to the fingernails), guns, strategically placed video cameras, kidnapped or arrested family members, threats of prosecution for criminal activities (whether or not the person making the threat can back it up), and prostitutes who are willing to roll over on their johns.

Most crackers and spies will go with the first option if at all possible; they call it Social Engineering, and it is the basis of phishing and many other types of scam. It can be terrifyingly effective, even when done just as a type of prank or obscene phone call, as cases like that of Louise Ogborn make horribly clear.

This doesn't mean that they won't resort to the second if the stakes are high enough.

In other words, if someone wants to get your information that badly, chances are they can, one way or another.

However, odds are they don't, and more to the point, most such persons aren't going to be targeting anyone in particular. When they do, that person is hosed, period, end of subject, especially if the ones looking for the information are backed by legal authority and jurisdiction.

Or, as Peter Welch once said on this subject: "4chan might destroy your life and business because they decided they didn't like you for an afternoon, and we don't even worry about 4chan because another nuke doesn't make that much difference in a nuclear winter."

Oh, and Brendan? You need to read that essay. Right now. Even if you read it already, as it is clear that the message didn't get through before.

_________________Rev. First Speaker Schol-R-LEA;2 LCF ELF JAM POEE KoR KCO PPWMTF | μή εἶναι βασιλικήν ἀτραπόν ἐπί γεωμετρίαν ("there is no royal road to geometry") | Lisp programmers tend to seem very odd to outsiders, just like anyone else who has had a religious experience they can't quite explain to others.

Even a floppy disk can have extra data "hidden" on tracks 81 and 82, which most systems won't even know are there, and data could be stored between the sectors or in extra sectors after the end of the track (floppy disks traditionally have 18 sectors per track, although this can be increased to 22 sectors in practice), though this may require modifying the firmware and/or operating system of the target machine.

You can count the seeks though, and hear longer gaps if the head's jumping a long way, so a virus trying to hide stuff on tracks 80, 81, 82, or 83 would give itself away instantly. Even writing between the tracks would be noticed, as the head has to be moved one way, then back again to get it slightly off line. Reading the hidden data back is harder still, though, and going through a copying step on a middle machine (preferably a very old one) which is only ever used for that purpose would make it very hard for any hidden data to make it through. The best bet would be to try to fit extra sectors in by reformatting tracks and hope there's no additional copying step, but that's easy to find if you look for it, so again it's a giveaway, and no one wants their military-grade virus to be discovered and stolen so easily. The Iranians eventually found the one the Americans used against them, but only after it had revealed itself by doing an astronomical amount of damage.

______________________________________________________________

AMenard wrote:

And you also have to consider that there may be backdoors built into the microcode of your CPU, GPU and other microcontrollers by foreign governments, with collaboration from the manufacturers of said chips. We even had a lecture in my university IT security and ethics class about built-in kill switches, and that was in the early '90s.

Yes, you have to assume that all the machines you're using have backdoors built into them, and if you're moving data from an open machine to a closed machine on single-use flash drives you have to assume that you are triggering that backdoor into action, but it may be possible to stop it doing any harm beyond destroying your data - it may still be possible to stop it leaking industrial secrets to the outside. (That could still be a disaster though unless you have some way to check that your backups aren't being corrupted as they're written which doesn't itself open up the possibility of corrupting them while checking them.)
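One hedged way to catch corruption at backup time is to hash the data before writing and compare against a read-back (a sketch only; `write_and_verify` is a made-up helper). Note the limitation, which is exactly the worry above: a backdoored drive could serve the clean data back while storing something else, so a check like this only catches accidental corruption or crude tampering, and crucially it never opens a second write path while checking.

```python
import hashlib

def write_and_verify(path: str, data: bytes) -> None:
    """Write a backup, then re-read and compare hashes to detect
    corruption during the write. The verification step only reads,
    so it cannot itself alter the backup."""
    digest = hashlib.sha256(data).hexdigest()
    with open(path, "wb") as f:
        f.write(data)
    with open(path, "rb") as f:
        readback = f.read()
    if hashlib.sha256(readback).hexdigest() != digest:
        raise IOError("backup corrupted during write")
```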

______________________________________________________________

Brendan wrote:

For defending some data from a government; the first question you need to ask is "what is the government's next easiest alternative to acquire that data". If the answer is that the government could just send some thugs to your house to threaten your life and/or torture you until you give them the data; then you have a rough guide for how much security your computer actually needs (e.g. "no more than about $0.50 - the cost of a bullet if ammo is ordered in bulk").

The trick then is to have a government protect you, although you can't rely on them to get the security right. America leaks its military secrets like a sieve and China simply hoovers everything up. They don't isolate their data adequately, and many/most of the chips they're using have been manufactured in China, so they ought to assume there's a spy in every machine and make sure it never gets any opportunity to send a report home.

______________________________________________________________

Schol-R-LEA wrote:

Another rule to remember is that the most effective tools for circumventing security are the user's gullibility, greed, blindness to people they consider beneath them (such as janitors and IT technicians), acceptance of people who appear to be doing necessary jobs that you don't understand or care about (such as janitors and IT technicians), and willingness to go along with authority figures. Primary props for this include fake ID badges, authoritative (or panicked) voices over the phone, the Clipboard of Authority, and janitor's jumpsuits.

Which means you either need to manage without such staff and do all the maintenance work yourself, or replace them all with robots (and have some way of making sure that they can't be recruited by the enemy). I can see why the Americans can't keep their secrets secret - it's almost guaranteed that all secrets will leak out no matter how well you try to defend them. You can probably do it well enough on a very small scale, but as soon as you try to expand it you will inevitably bring in people who can't be trusted, and people can be bought easily.

I wonder how many servers one person could reasonably keep going in a closed base without needing any additional help? New parts could be passed in, and old parts would need to be destroyed before being passed out.

As mentioned earlier, your biggest threat would be spies who are intentionally backdooring things at the source. Heck, a lot of exploits against this kind of extremely secure system require somebody to be there in person to work.

Although if SMM is your biggest worry, then huuuuh, don't use x86. Don't assume you have to use stock PC hardware.

DavidCooper wrote:

You can count the seeks though and hear longer gaps if the head's jumping a long way, so a virus trying to hide stuff on tracks 80, 81, 82, or 83 would give itself away instantly. Even writing between the tracks would be noticed as the head has to be moved one way, then back again to get it slightly off line.

You're unlikely to notice it in practice, especially since each file access will produce a completely different pattern of noises depending on where the head is and the current state of the filesystem cache, not to mention that the files will likely have been accessed in different ways each time. You're not going to tell short of figuring out a way to get it detected by the operating system somehow (although I'd assume the drive would lie to you about this too in such a case).

Although if SMM is your biggest worry, then huuuuh, don't use x86. Don't assume you have to use stock PC hardware.

You can't assume there aren't backdoors on any chip. It's safer to assume that there are always spies built in and to make sure they can't get any information out.

Quote:

You're unlikely to notice it in practice, especially since each file access will produce a completely different pattern of noises depending on where the head is and the current state of the filesystem cache, not to mention that the files will likely have been accessed in different ways each time. You're not going to tell short of figuring out a way to get it detected by the operating system somehow (although I'd assume the drive would lie to you about this too in such a case).

The idea would be to use them only for exporting data from a closed machine to an open one, so there would be no jumping around writing files and no directory - it would just write as many consecutive sectors as the data takes up, and then it would stop. On a third machine in between the other two, those sectors would be copied and written to another disc as an extra security step, and that is the only task that this middle machine would ever be used for. While this means that only small amounts of data can be exported in a given length of time, huge amounts of data can still be imported from the open system to the closed one, so there's no problem there other than the cost of having to use a new drive each time and the risk of a virus erasing or corrupting data (which would require a sophisticated virus, as it would need to exploit a hardware/SMM backdoor if it is ever to be run).
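The no-filesystem export format described here could be as simple as a length header plus payload, zero-padded out to whole sectors, written starting at sector 0 (a sketch with invented names; the 4-byte little-endian length header is an arbitrary choice, not anything standard):

```python
# Hypothetical layout for the directory-less export disc: a length
# header, the payload, then zero padding to a whole number of sectors.
SECTOR = 512

def serialize_export(data: bytes) -> bytes:
    """Lay out export data as consecutive sectors from sector 0."""
    blob = len(data).to_bytes(4, "little") + data
    pad = (-len(blob)) % SECTOR  # zero-fill to a sector boundary
    return blob + bytes(pad)

def parse_export(raw: bytes) -> bytes:
    """Recover the payload on the receiving side."""
    n = int.from_bytes(raw[:4], "little")
    return raw[4:4 + n]
```

Because everything after the payload is guaranteed zero, the middle machine's check reduces to "is every byte past the declared length blank?", which is trivial to audit.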

If larger amounts of data ever need to be exported from the closed machine to the open one, DVDs might be viable for the transfers, again with a middle machine copying only the required sectors. But maybe I'm missing an easier way if there's some other device that can ensure data is only sent in one direction and that no extra data can be sneaked into that stream. Screens and cameras could be used, but a sophisticated virus could potentially be written to send an extra visual signal out which won't be noticed by the software that's supposed to be reading the screen. If that software is running on the open machine, a virus there could uncover its functionality in full. However, a middle machine could again be used, this time to take input from a camera watching a screen attached to the closed machine, and it could send the data on in a similar way to the open machine, but using completely different software and a different display method, with the result that any hidden signal on the first screen would always be filtered out by this step. That might be a better approach.
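The filtering step can be sketched as re-encoding through a canonical form: if the exported data is plain text, the middle machine can collapse whitespace and strip everything but printable ASCII, destroying any covert signal hidden in formatting before passing the data on (a hypothetical sketch; `sanitize` is an invented name, and real data would need a stricter, format-aware parser):

```python
def sanitize(text: str) -> str:
    """Re-encode text into a minimal canonical form so that covert
    signals carried in formatting (odd whitespace, control characters,
    unusual line endings) cannot survive the middle machine."""
    out = []
    for line in text.splitlines():
        cleaned = " ".join(line.split())  # collapse whitespace runs
        # Keep printable ASCII only; drop control and non-ASCII bytes.
        cleaned = "".join(ch for ch in cleaned if 0x20 <= ord(ch) < 0x7F)
        out.append(cleaned)
    return "\n".join(out) + "\n"
```

The design point is that the output depends only on the visible content, never on how it was encoded, so a hidden signal riding on the encoding is always filtered out.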

(There is, of course, a possibility that a sophisticated virus could work out how the software on the closed machine works and inject its own data into the data stream while making the software think it's meant to be there, but that could be made so difficult that the virus would need to become too advanced and bulky to be able to hide itself, and it would also be too valuable to risk handing such advanced code to an enemy by planting it on their machines as it would likely be worth more than any of the data it's trying to steal.)

If someone keeps getting burglarized because they keep forgetting to lock their front door when they go out, would you recommend that they put a deadbolt on their window? That's what you are doing when you are worrying about the microcode or the flash memory: trying to fix something that is a relatively minor risk, and relying on the people who would be exploiting it not to brute-force past your fix, while ignoring the bigger risk of someone trying to catch you with your pants down by feeding you a plausible lie and getting you to do their dirty work for them.

I am not saying you shouldn't be concerned about those things, or that there is any real solution to the 800 lb gorilla in the room, I'm just putting the matter into perspective. Worrying about the smaller risks just because you don't know how to stop the bigger ones is basically a form of premature optimization. Chances are YAGNI, because the would-be infiltrators have ways to get the same thing done with less effort, less risk of getting caught, and at a lower cost. Brendan was dead right about that part.

And before anyone says that they would never fall for something like that: no one can be at the top of their game all the time. Everyone gets played some of the time, no matter how clever they are (and often it is their own cleverness that trips them up). Policies and procedures, when well-designed and appropriate to the situation, can minimize the risks and improve the odds of catching such mistakes, but they cannot work 100% of the time.

_________________Rev. First Speaker Schol-R-LEA;2 LCF ELF JAM POEE KoR KCO PPWMTF | μή εἶναι βασιλικήν ἀτραπόν ἐπί γεωμετρίαν ("there is no royal road to geometry") | Lisp programmers tend to seem very odd to outsiders, just like anyone else who has had a religious experience they can't quite explain to others.

If someone keeps getting burglarized because they keep forgetting to lock their front door when they go out, would you recommend that they put a deadbolt on their window?

If the most likely way industrial secrets can leak out is by piggybacking on the data that's meant to get out, you make damned sure there's no data piggybacking on it. How is that bolting a window to stop something escaping through the front door?

Quote:

That's what you are doing when you are worrying about the microcode or the flash memory: trying to fix something that is a relatively minor risk, and relying on the people who would be exploiting it not to brute-force past your fix, while ignoring the bigger risk of someone trying to catch you with your pants down by feeding you a plausible lie and getting you to do their dirty work for them.

If a government is defending a bunker containing the servers, they can handle most of the security, making sure that only a few trusted people can ever get inside. It's possible to have hidden, remotely-operated guns take out anyone who tries to enter without permission, and none of the security team would know who to coordinate with to try to get around that hazard (or how many groups are controlling how many of those guns). Each of the people who are allowed to enter is a risk, but a risk that has to be taken, and with some projects it may be necessary for them to live in the bunker full time for several years without leaving it at all during that time. The only other risks are hackers working for other governments, and mistakes in exporting data.

Quote:

I am not saying you shouldn't be concerned about those things, or that there is any real solution to the 800 lb gorilla in the room, I'm just putting the matter into perspective. Worrying about the smaller risks just because you don't know how to stop the bigger ones is basically a form of premature optimization. Chances are YAGNI, because the would-be infiltrators have ways to get the same thing done with less effort, less risk of getting caught, and at a lower cost. Brendan was dead right about that part.

No, the risk I'm worrying about here is the biggest one left after working out how to block the other approaches.

Quote:

And before anyone says that they would never fall for something like that: no one can be at the top of their game all the time. Everyone gets played some of the time, no matter how clever they are (and often it is their own cleverness that trips them up). Policies and procedures, when well-designed and appropriate to the situation, can minimize the risks and improve the odds of catching such mistakes, but they cannot work 100% of the time.

If you're guarding nuclear weapons, you go to great lengths to keep the enemy out of the control loop. It's the same with this - perfect security is impossible, but you need to get as close to perfect security as is reasonable for the given risks and you don't just ignore a route to being hacked on the basis that the head of GCHQ (if that's who you've teamed up with) might be working for a foreign power.

The main reason not to use USB is that devices can inject drivers into your OS when you plug them in. A USB "flash drive" could also be a hub combining a flash drive AND a remote-controlled input device, or a simulated network interface sending some of your network traffic to a shady neighbourhood.
