
wiredmikey writes with a story from Security Week that describes a security silver lining to the inevitable errors that arise in NAND flash chips. By seeking out (or intentionally causing) defects in a given part of the chip, a unique profile can be created for any device using NAND flash which the author says may be obscured, but not reproduced: "[W]e recognize devices (or rather: their flash memory) by their defects. Very much like humans recognize faces: by their defects (or deviations from the 'norm'): a bigger nose, a bit too bushy eyebrows, bigger cheeks. The nice twist is that if an attacker manages to read your device identity, he cannot inscribe it into his own device. Yes, he can create errors — like we did. But he cannot control where in the block they occur as this relies solely on microscopic manufacturing defects in the silicon."
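The idea reduces to two raw passes over a block and a hash of the stuck-bit positions. A toy Python sketch (the block model, sizes, and defect positions are all invented for illustration; real raw NAND access would go through something like an MTD driver, not anything this simple):

```python
import hashlib

class SimulatedBlock:
    """Toy model of a NAND block: 'stuck' maps cell position -> forced value."""
    def __init__(self, size, stuck):
        self.size, self.stuck = size, dict(stuck)
        self.cells = [1] * size
    def write(self, value):
        # A real write would program every cell; defective cells keep their value.
        self.cells = [self.stuck.get(i, value) for i in range(self.size)]
    def read(self):
        return self.cells

def fingerprint(block):
    """Write all-0s and note bits stuck at 1; write all-1s and note bits stuck
    at 0; hash the sorted defect positions into a stable device ID."""
    block.write(0)
    stuck_at_1 = {i for i, b in enumerate(block.read()) if b == 1}
    block.write(1)
    stuck_at_0 = {i for i, b in enumerate(block.read()) if b == 0}
    ident = ",".join(map(str, sorted(stuck_at_0 | stuck_at_1))).encode()
    return hashlib.sha256(ident).hexdigest()

dev = SimulatedBlock(4096, {17: 1, 901: 0, 2044: 1})
print(fingerprint(dev))
```

Two chips with different defect maps hash to different IDs, and the same chip hashes the same way every time; that repeatability is the whole "biometric".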

What they are saying is that this hardware has a unique "biometric" and that can be used to definitively identify true chips/boards from fake. Hmmm...

First thought that popped up is that this isn't new: floppy disks were "copy protected" using defects punched into the original disk. That didn't work out very well, so why would this?

Second thought was that biometrics have strengths and weaknesses and are not unspoofable. Why would this be any different?

Several other things come to mind:

1. If a h/w encoded w/o (write-once) serial number is not good enough, why is this better? Is it because it is cheaper? i.e., the flash mem is already there so additional gates/circuitry is not required?

2. What happens if the h/w tech changes? i.e., flash mem is no longer cheap and ubiquitous because the whole h/w base has moved to a new technology? In other words, it binds h/w verification, which we want to be a reliable long-term solution, to h/w technology which is highly volatile. Probably not a good idea.

3. There is an assumption that these defects are random. I know from experience that many things we assume to be random are actually patterned and predictable. For example: I have observed DRAM chips that power on with repeatable bit patterns that sometimes vary with production run. Highly consistent, quality-controlled production runs tend to remove entropy from the product. Faults and errors occur, but within well-defined constraints. So... disk drives used to fail within a fairly broad standard deviation from the MTBF, but now, in a storage centre with hundreds of drives, I get multiple drive failures almost all at once. The standard deviation is much narrower because the manufacturing process is so well controlled. I used to replace drives when they failed, confident that the spares and RAID set redundancy would be sufficient to cover the rebuild time. Now I replace drives before the point in time when I expect failures to start, because I can get multiple drive failures within the disk rebuild time. The failures are random but correlated. Go figure. Fortunately, tech change often happens before pre-emptive replacement is required.

If we base a h/w verification scheme on the randomness of some aspect of a manufactured product then the scheme is bound to the manufacturing process. If you change your process then you change the verification confidence and security. Not good to make these things dependent.

I think that if there is a need to provide h/w verification then the scheme should be controllable and independent of h/w technology and processes. It should also be able to encode other information with it (er... it should be extensible). Code a write-once number onto the chip that works like PGP or a cert. Forget about biometrics.

A zillion years ago, I wrote an Int 13h intercept that could emulate floppy errors on an IBM XT. It seemed to be the obvious thing to do.

Likewise, the solution here will be to emulate the HW errors. The more that depends on the HW "fingerprint", the more motive there will be to create such an intercept for flash chips. Once created, it won't be very expensive.

That, of course, assumes that you can't just hack the routine that scans for the errors and cause it to report whatever you want it to.

1. A write-once serial number doesn't prevent an attacker from obtaining a brand-new, unimprinted device and cloning the original device's serial number. Which cells in a block of flash memory fail early is entirely a physical-process issue, and falls in the general category of intrinsic physical unclonable functions [wikipedia.org]. The entire point of a PUF is that it can't be duplicated; by definition, this means it can't be backed up, customized, or otherwise controlled, only observed.

If you have a working Treacherous Computing setup that you believe isn't breached, what would you want the technique in the article for? With working TC, you have all of that and more. Without TC, it can be worked around with a simple kernel patch.

When you test for specific hardware behavior as a means of authentication, it's always a good idea to include speed measurements & checks in your code. That way, it's harder for the emulator to fake stuff. As this is common practice, an attack against this scheme would need to take care of these tests, as well.


Give me an eval board of a microcontroller with a USB device interface and a week of time, and you will get all of that and more. As a matter of fact, I have an AVR32UC3A-based board here that I built myself. I can plug it into a PC and it will do whatever the code in the MCU tells it to do.

Please note that I did not really believe in the defect thing in the first place. I was stating that timing would be an issue. There may be weird command paths that produce unique delays and unless you tested all possible combinations of commands so you can emulate everything, you can't be sure there isn't something left.


That would indeed be a somewhat worthy challenge, though the hacker would just run the tool for a day or two and collect all feasible code paths and the delays associated with them. That alone wouldn't be a big deal to spoof.

However, in practice a dependency on those "unique delays" is impossible.

What they mean is that the flash problems can't be replicated. They can trivially be spoofed.

This is like a copy-protected CD, except in this case it's like the CD is built into a USB-CDROM, so it's even easier to fake.

Yeah, it sure would be a hassle trying to get another USB-CDROM exactly like that...but, um, if you're faking hardware, you just get some hardware that, duh, says exactly the right things instead of replicating the hardware.

It's getting almost funny how someone states that something is "unbreakable" or "uncopiable" (remember quantum encryption stories?) and then a few months later, someone finds a workaround, or some previously unthought of method of breaking the security.

That said, though, relying on random microscopic flaws for unique identity is very clever and would be *extremely difficult* (not impossible) to copy.

Not saying I can do it, but I'm sure someone...somewhere...will figure out a way.

...you mean I can't create a simple device that works as a flash drive, but every time the OS requests a bad block, it responds with an entirely fake response that just so happens to match the identity of the spoofed drive? Say, by using any low-cost prototyping board to spoof a USB interface? Or SATA interface?

Yeah. I don't know that much about security, but I think having a board on the device to digitally sign data from the drive would be better (public-key cryptography type thing). At least that way you couldn't simply copy the device's signature using a program on the machine it's plugged into. If you design it right, you'd have to have physical access to the internals of the device to copy its private key.
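The commenter's suggestion, sketched with Python's stdlib. The stdlib has no public-key primitive, so HMAC stands in here for a real signature scheme such as Ed25519; the class name and key handling are invented for illustration:

```python
import hashlib
import hmac
import os

class SigningToken:
    """Toy device whose secret key never leaves it; the host only ever sees
    challenge/response pairs. (HMAC is a symmetric stand-in for a real
    public-key signature scheme such as Ed25519.)"""
    def __init__(self, key=None):
        self._key = key or os.urandom(32)   # held inside the device's silicon
    def respond(self, challenge: bytes) -> bytes:
        return hmac.new(self._key, challenge, hashlib.sha256).digest()

def verify(expected_key, challenge, response):
    """What the verifier does with its copy of the key (with public-key
    crypto it would hold only the public half)."""
    good = hmac.new(expected_key, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(good, response)

token = SigningToken()
nonce = os.urandom(16)   # fresh nonce per attempt, so replaying old answers fails
assert verify(token._key, nonce, token.respond(nonce))  # demo peeks at the key
```

The host-side malware can still ask the token to sign things, but it cannot extract the key by merely observing traffic, which is exactly the property the bad-block scheme lacks.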

This isn't so much about DRM as verifying the source of information. Similar technologies are involved, but it's not the same concept. DRM is about obscuring information to all but authorised users, while signing information is about making sure that an authorised source has written a message (or a driver for example), and anyone is free to read it.

I wasn't talking about public/private keys for DRM, I was talking about verifying sources of information (information which anyone is free to read). I also implied it would probably be possible to copy the private key if you had physical access to the device. With the Wii or an iPhone or whatever you wouldn't even need to have access to the private key to sign software, you would just need some way of making the device think that all sources are authorised.

And the most absurd part is that just about everyone in any technical community can tell them why the idea is useless and dangerous. I mean, there are few things the internet does better than highlight your stupidity; they should learn to use that wonderful virtue.

Can someone send them a simple email explaining how to first post their new ideas in a tiny forum so children can tell them why it won't work, before talking to the news?

you mean I can't create a simple device [...] by using any low-cost prototyping board to spoof a USB interface? Or SATA interface?

Markus Jakobsson wrote in the article:

No need for error-correcting codes; in fact, we will read and write "raw", which is possible since all of this will be done on OS level.

He's talking about using raw NAND flash without a (hardware) controller, which is more than likely soldered to the motherboard. All USB flash drives have a controller performing error correction, as do all CompactFlash, SD, and Memory Stick memory cards. The only popular consumer flash storage devices that don't have a built-in controller are SmartMedia and xD-Picture cards; the controller for these is inside the camera or the USB card reader.

Since this relies on flaws in the silicon, it is surely really easy to accidentally change the error profile of a device so it is no longer recognised.

But who is going to solder in such a TSOP flash chip? In addition, reading the "key" from a device changes its error profile because setting it back to all 1's is an erase, and an erase causes wear. It's like a vinyl record, deteriorating with each play, but that might be exactly what publishers of works of entertainment want.

Obviously reading wouldn't destroy the key, because if it did, then the legitimate reader wouldn't be able to read the key either. Of course, even if reading doesn't put enough wear on it to really matter, your point does bring up the fact that chips wear. This is a lock that is eventually going to fail just due to age.

I can do it for you. Or you can buy an analog of a ZIF socket. None of that is rocket science - there are billions of TS[S]OP chips installed (and safely removed) on this planet. To give you an example, you can use a low temperature alloy that melts in hot water. This way you can install and remove TSOP chips until you die from old age, and the ICs will still work fine. They do that with 1000-ball BGAs at trade shows.

Either replace the controller with one that reports to the OS what you want it to report, or replace the flash with a circuit that ACTS like flash ram but returns the error profile you want to the controller.

What if one more bit goes bad during normal usage? The identity is gone, and anything tied to it stops working. "Very much like humans recognize faces: by their defects"... if your son had plastic surgery without your knowledge, would you fail to recognize him?


Especially if that plastic surgery was done unintentionally just by looking at him one time too many.

I get the impression that they would apply a specific "identity creation" process to the NAND chip, and that would bring out the inherent flaws in the chip fab process. You can apply the same process to other silicon, but you won't get the same result.

You may well be able to emulate it using some awesome hardware, but how is that going to help if this is using your mobile phone internal memory for purchase authentication? You going to carry around your FPGA emulation rig to spoof payment authorisation?

System requirements for the technology that you mention include soldering, and a lot of end users don't have the coordination to solder a TSOP [wikipedia.org] or especially a BGA [wikipedia.org] correctly.


When you say a lot of end users don't, you're implicitly admitting that not all end users don't. So by your own logic there are users who have the coordination to solder correctly. Remind me how this maintains the "unspoofable" part of this "technology" that you seem to be struggling so hard to justify.

So by your own logic there are users who have the coordination to solder correctly.

There are users who can solder, but the number of them is commercially insignificant, just as the number of people with a gaming PC connected to a TV is commercially insignificant. Providing an unlocking service is already a crime (17 USC 1201 and foreign counterparts).

but how is that going to help if this is using your mobile phone internal memory for purchase authentication?

There are already systems in place for that: SIM cards and electronic serial numbers. Neither of those require purposefully breaking read-write memory in a way that provides no benefit over simple ROM, and both are just as "unspoofable" as this is. Not to mention that SIMs/ESNs have a much reduced chance of randomly changing the identity.

Neither of which will help if you have removed the SIM card in your unlocked smartphone to use it as a PDA.

With the example of iPhone/iPod Touch, they still have a unique hardware ID (basically an ESN) when no SIM is present... And if you never connect the device to a network, the whole reason for having this kind of ID is moot anyway.

I hear your point. The flip argument might be that security is never an absolute, but rather a question of the time it takes to break it (safes are in fact rated by the hours it takes to breach them). One can emulate; however, the emulation is often not as time-effective as the real thing, so I wonder if a reader could not detect the time difference of the emulation?

Someone could just create an emulator/interpreter to sit between the chip and the PCB. It reads the input, responds correctly for the emulated chip, and reports a good or bad cell according to what the emulated chip should contain.

Precisely. And compromising an OS is in fact an expected norm for any self-respecting computer user. It goes together with having the right to, e.g., change system files. So the second question is valid: why bother? For those with "trusted computing" systems, however, it's a whole other mixed bag.

"If we run a secure boot or a reliable software-based attestation scheme before we ID a device, we know that there is no active malware that may modify the report that results from reading the machine identity. So we know that the reading actually comes from the intended block, and that it was done correctly."

However, if this secure boot scheme is compromised, you can force it to read from a virtualized memory block that contains a forged pattern.

This whole thing is dumb. If I had a system which already couldn't be tampered, I wouldn't need this NAND thing. And for the NAND, I can read out all the info about the NAND that could possibly be used as a key and then replicate it in a hardware-based emulator that I attach to the board in place of the NAND, leaving the rest of the system in place so it can answer any difficult security questions that are asked.

From what I know of flash, the 'bad bits' aren't repeatedly bad. The bad-sector-swap-out-routine in most flash drives and USB sticks will actually swap out a sector after a single read that can't be ECC-corrected, but that doesn't mean all the bits in the sector can't be written correctly ever again.

For example, in this [ieee.org] article (IEEExplore, so paywalled for you, sorry) a generic NAND flash chip has been tested for bit-error-rates. In the 5K write cycles after an average bit has failed, it only failed to be written correctly 4 times more. That would mean that a single erase-rewrite cycle would write the complete sector without any bit errors 99% of the time: to find 'most' of the bad bits, the sector would have to be rewritten 1000s of times every time the software would want to check the fingerprint.
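Plugging the quoted figures in (only the 4-failures-in-5,000-writes rate comes from the comment above; the count of marginal bits per sector is an assumed, illustrative number):

```python
# After a bit first fails, it fails only ~4 more times in the next 5,000
# writes, per the cited measurement.
p_fail = 4 / 5000        # per-write failure chance of an already-"bad" bit
bad_bits = 12            # assumed number of marginal bits in the sector

# Chance that a single erase-rewrite cycle shows zero errors:
p_clean_rewrite = (1 - p_fail) ** bad_bits
print(f"one rewrite shows zero errors with probability ~{p_clean_rewrite:.1%}")

# To see any single bad bit misbehave even once, you expect on the order of
# 1/p_fail rewrites:
print(f"expected rewrites to catch one given bad bit: {1 / p_fail:.0f}")
```

With those assumptions a rewrite comes back clean roughly 99% of the time, and catching a given bad bit takes over a thousand rewrites, which is the commenter's point about needing thousands of fingerprint-check writes.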

Not only would that take a fair amount of time, it would also introduce new failed bits. That would mean the ID of the flash chip can only be checked so many times before the complete sector goes bad.

So let me get this straight... They "create" an ID by writing and rewriting a bunch of bits until they start failing, then mark the whole block bad. To "read" the identity, they set all bits to 0 and see which ones are stuck at 1 and then set all bits to 1 to see which are stuck at 0. The "bad block" ID area has already been written to thousands of times intentionally. What's going to guarantee that by "reading" the bad block ID (with 2 assignments each time), we won't unintentionally be making the final write to an extra bit or two?
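The worry can be simulated: if every read is really two full writes, and each write has some small chance of sticking a fresh cell, the identity silently drifts. A toy model (the wear probability, seed, and sizes are arbitrary assumptions):

```python
import random

def read_identity(stuck, size, rng, p_new_defect=0.001):
    """Reading the ID takes two full writes (all-0s, then all-1s), so each
    read wears the block a little: with an assumed per-write probability,
    a fresh cell becomes stuck and joins the 'identity'."""
    for _ in range(2):                 # the two writes behind one read
        if rng.random() < p_new_defect:
            stuck.add(rng.randrange(size))
    return frozenset(stuck)            # snapshot of the ID as read this time

rng = random.Random(7)
stuck = {17, 901, 2044}
ids = {read_identity(stuck, 4096, rng) for _ in range(5000)}
print(f"distinct identities seen over 5000 reads: {len(ids)}")
```

Every snapshot contains the original defects, but the set only grows, so the "unchangeable" fingerprint is a moving target under its own read mechanism.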

That can be fixed by using some kind of error-recovery code. But I still don't see the utility of this. It's just a ROM with random content for every device. If all you want is random content on my machine that I send to you multiple times, it can be stored in normal undamaged flash and generated in a multitude of ways.
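The error-recovery idea is essentially fuzzy matching on defect positions. A minimal sketch (a real system would use a fuzzy extractor or error-correcting code rather than a raw set comparison, and the tolerance is an assumed policy value):

```python
def same_device(reading_a, reading_b, tolerance=2):
    """Error-tolerant comparison: accept two defect-position readings as the
    same chip if they disagree in at most `tolerance` positions."""
    return len(set(reading_a) ^ set(reading_b)) <= tolerance

enrolled = {17, 901, 2044}
assert same_device(enrolled, {17, 901, 2044, 3001})  # one new wear defect: same chip
assert not same_device(enrolled, {5, 42, 800})       # a different chip entirely
```

The tolerance that absorbs wear is also slack an attacker's imperfect clone gets for free, which is the usual biometric trade-off.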

If all you want is data I can't change, on my general purpose machine, sorry, that's not gonna happen - I can just swap the whole chip (or even the whole machine).

If all you want is data I can't change, on my general purpose machine, sorry, that's not gonna happen

Then the major motion picture studios can choose not to sell or rent their works to you if you choose to use only a general-purpose machine. Some major video game developers have made similar decisions.

For one thing, intentionally remaining ignorant of part of the popular culture would make me look like the Area Man Constantly Mentioning He Doesn't Own a Television [theonion.com]. For another, even if I don't buy copies of movies, enough other people do that MPAA studios' business model remains sustainable. Once enough things are blocked from general-purpose computers, the economies of scale in making and selling home computers become weaker, and home computers eventually get discontinued in favor of home-priced appliances.

If the major motion picture studios want to sell or rent their works to someone to get money for it, they better sell what the market wants, not whatever kind of spyware fantasies they themselves entertain.

The market has shown its willingness to go along with the spyware fantasies of major publishers of works of entertainment. Look at the popularity of DVD players, BD players, video game consoles, and other video-playing appliances, and the dearth of home theater PCs.

All that needs to be done is find out how to emulate the key.

And watch a copyright owner successfully sue to block distribution of the key. Do you remember what happened to Lik Sang?


One minute you're talking about the market supporting DRM, the next about government force doing so. Which is it?

What exactly did you mean by "government force doing so"? The United States government didn't force Nintendo, Microsoft, and Sony to add lockouts to all their products, but they did anyway. The market supports DRM, and the electorate supports its government backing by continuing to elect the Republican Party and the Democratic Party to the United States Congress.

Not only that, but the law of entropy demands that new errors will happen just when you need to use it online to pay your mortgage. Now try replacing it. If the premise is that you can't intentionally create an error to match a known pattern, then how does one replace a "failed" identity? You simply do it like this: http://www.flylogic.net/blog/?p=10 [flylogic.net] Actually, forcing an error is easy; it's getting rid of an unwanted error that is hard, and you can't prevent new errors.

Spoofing means to make a parody of or mis-represent. Spoofing does not imply that you're duplicating the original device it means that you make others think it's the original device. You don't need to re-create the hardware errors to do this, just intercept the calls which are looking for this hardware ID, and then spoof it.

This may be an unduplicatable ID, but it is a far cry from unspoofable.

Bad blocks are inherent in NAND flash. SLC NAND Flash devices are more reliable (have fewer errors) but costly. MLC NAND Flash devices are less reliable (have more inherent errors) but are affordable and easily available. NAND Flash devices are known to progressively degrade [cyclicdesign.com] until the number of bad blocks is too high to reliably store data. Errors inherent from manufacturing increase with usage (both read and write). Most flash storage devices will ultimately become too error-prone to store data. The industry might want to justify inherent errors (and gradually increasing errors) by calling them a fingerprint. They are still searching [intel.com] for techniques to make NAND Flash more reliable.

The article fails to provide a mathematical basis to prove that two NAND flashes cannot have the same bad blocks at manufacturing or at some point of usage, thereby obscuring identity. NAND flash controllers are designed to check and resolve errors using known algorithms. Most controllers allow hardware to hide errors while allowing OS device drivers to read the NAND flash medium. The Operating System and the NAND Flash Controller are at least two points where any such fingerprint can be compromised. The Filesystem adds another layer of abstraction. The number of "real" bad blocks and remaps is usually stored on the NAND Flash. Altering the Bad Block Table is not difficult.

Hard disks interestingly have similar failure rates and complex issues like data remanence which have been studied. I wonder why no one proposed a signature scheme using errors on hard drive platters to identify them. Computer forensics for hard drives has a longer track record of study. Marketing FUD can be ignored.

There are tons of problems with this, not the least of which lies in the fact that if you have a secure boot and trusted environment, you don't really need a NAND chip to read an identity from; you can make do with a file that the user cannot remove or alter, i.e. a system file. That's what trusted would mean here. This however presents another problem: the number of users willing to use such a "trusted" system is inversely proportional to how many of these users grok computers.

Easy to spoof by implementing a flash memory emulation in a microcontroller: a chip that behaves like a flash chip but in fact provides an extra abstraction layer and simulates faulty areas. Just like an HDD controller remaps faulty sectors to a free area at the end of the disk, so that from the PC's viewpoint the disk is fault-free and continuous, building a similar device for flash (one which, on top of remapping bad sectors, simulates bad ones where none are present and makes them look precisely as expected) seems quite easy.
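The point is easy to make concrete: an emulator that merely replays a captured defect map is observably identical to the chip that physically owns those defects. A toy sketch (class names and the defect map are invented for illustration):

```python
class RealFlash:
    """Chip whose physical defects force certain cells to fixed values."""
    def __init__(self, stuck):
        self.stuck = dict(stuck)        # position -> stuck value
    def write_then_read(self, value, size):
        return [self.stuck.get(i, value) for i in range(size)]

class EmulatedFlash:
    """Microcontroller on the bus with no physical defects at all: it just
    replays a defect map captured once from the victim chip."""
    def __init__(self, captured_map):
        self.stuck = dict(captured_map)
    write_then_read = RealFlash.write_then_read   # byte-identical behavior

victim = RealFlash({17: 1, 901: 0})
clone = EmulatedFlash(victim.stuck)     # attacker reads the map once...
for probe in (0, 1):                    # ...and then passes every defect probe
    assert victim.write_then_read(probe, 1024) == clone.write_then_read(probe, 1024)
```

Nothing in the defect-probing protocol can tell the two apart, because the protocol only ever observes responses, never the silicon itself.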

SafeDisc (and older DRM schemes) detected bad sectors on disks, which are hard to duplicate. On the other hand, they're very easy to emulate. This technique sounds very similar, and the fact that they haven't addressed the emulation issue makes me VERY skeptical.

That was my immediate thought as well. Not only were those systems easy to emulate, they also had the problem that damage would make the disk (or disc) unusable by the application long before it was actually unreadable by screwing up the pattern. As with most content protection, it didn't work and screwed legitimate consumers while not harming pirates at all, yet for some reason the idea keeps coming back with every new data medium.

I'm an embedded designer, and I recently created a system which has a raw NAND flash memory chip installed on it. We've manufactured a few hundred of these already; the majority of NAND chips come from the factory with half a dozen bad blocks marked, but I've personally seen a few NAND chips which have *no* bad blocks.

And devices which do have bad blocks have the blocks marked as bad by programming them, so you can mark any good block as bad if you want. So there's nothing stopping me from buying a few trays of NAND, reading the bad blocks and picking out the few error-free ones, and cloning the NAND chip from one of these supposedly "unclonable because of its bad blocks" devices referred to in the original post - copying bad blocks and all.
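Since "bad" is just a programmed marker byte, cloning the map onto a defect-free chip is a plain copy. A sketch (the 0xFF-good/0x00-bad marker convention is typical of raw NAND but assumed here):

```python
GOOD, BAD = 0xFF, 0x00   # assumed factory bad-block marker convention

def factory_markers(total_blocks, defective):
    """Marker byte per block, as shipped from the fab."""
    return [BAD if b in defective else GOOD for b in range(total_blocks)]

def clone_markers(victim_markers):
    """On a defect-free chip, just program the victim's markers verbatim:
    marking a good block 'bad' is an ordinary program operation."""
    return list(victim_markers)

victim = factory_markers(1024, {3, 17, 90})
clone = clone_markers(victim)
assert clone == victim   # indistinguishable by a bad-block scan
```

This is exactly the "buy a few trays, pick the error-free chips" attack: the scheme assumes the marks reflect physics, but the marks are just data.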

But you don't even have to do that.

Even devices which *do* have bad blocks may not have hard failures in those blocks, where a bit is completely unable to be programmed or erased. And if you successfully erase a bad block, you've just marked that block as good again. So with enough program/erase cycles, you may be able to successfully make a bad block good again and hold the data you want. If not, move onto the next chip from your tray of NAND and try again.

And you might not even have to get that 100% right, provided you don't have more than 1 bit of error per sector between the original device and the clone. The ECC will correct that bit error, and the now-cloned device (assuming it uses a proper NAND filesystem) should just encounter the bad sector, move the block and mark the previously-bad block as bad again. At this point, you may only need to buy a few NAND chips instead of a few trays in order to clone any given NAND chip.

Now as a protection against this last idea, the device could fsck its NAND on boot and set a maximum # of new bad blocks as a tripwire for cloning protection. But if you know what that threshold is, just throw more NAND chips at the problem until you successfully program one below that threshold.
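Such a tripwire might look like the following (the threshold is an assumed policy knob, and as noted above, an attacker who knows it just buys more chips):

```python
def passes_tripwire(enrolled_bad, current_bad, max_new_bad=4):
    """fsck-on-boot check: tolerate a few new wear failures, but flag a clone
    if too many blocks went bad since enrollment, or if an enrolled 'bad'
    block mysteriously turned good again."""
    new_bad = set(current_bad) - set(enrolled_bad)
    healed = set(enrolled_bad) - set(current_bad)
    return len(new_bad) <= max_new_bad and not healed

assert passes_tripwire({3, 17}, {3, 17, 90})        # one new wear failure: fine
assert not passes_tripwire({3, 17}, {3})            # a defect 'healed': suspicious
assert not passes_tripwire(set(), set(range(10)))   # too many new bad blocks at once
```

Note the inherent tension: the looser `max_new_bad` is, the longer legitimate devices survive wear, and the more room a cloner has to land under the threshold.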

Schemes like this are (as others have observed) pretty common, and don't address the real problem: what we "desperately need" is a trustworthy way of knowing that an automated system is acting in accord with its owner's intent. Alas, that does not seem to be on the horizon.

I mean, suppose my computer has "secure boot" and "unspoofable identity" and "remote attestation". That's great, if my goal is to prove to the secure server at the other end of the connection that I am running various specific (albeit a priori bug-infested versions) of Windows, drivers, browsers, JVMs, etc. But that's a silly goal, because my adversary is just going to take advantage of it, by running malware on my system that looks like it's acting on my behalf (after all, it has ready access to my unspoofable identity) but is actually transferring the contents of my bank account to the Netherlands Antilles without my knowledge.

Not to mention the general uselessness of "remote attestation" that a TPM provides: it may be able to attest to your configuration (modulo flaws in your gigabytes of software that enable attestation to be subverted or bypassed), but how on earth is the remote end going to make a meaningful decision based on the identity of hundreds or thousands of components that are attested to? Sure, it can reject known bad (flawed) components, but it's preposterous to imagine that you can know what all the bad components might be. Remote attestation is a plausible way of validating that a machine's configuration is the same as it was when it left a corporate IT department, but for making decisions about arbitrary machines in the hands of arbitrary consumers, it's useless.

And as for this specific scheme, come on: it might be a way to identify a flash device reliably if you have the hardware in hand, but, as described, it's done in software. That's right, software, which can be made to emulate any particular configuration of bit errors it desires, without there necessarily even being a physical flash device in the picture. Yes, for limited-resource embedded systems, and environments where access timing can be inferred with high accuracy, there are tricks one can use to make such attacks difficult, but for general-purpose PCs connecting over unreliable high-latency networks? Nope... not without mountains of false alarms.

Make no mistake: trustworthy computing is a hard problem. Unique IDs are fun to research, but not closely related to the solution.

This sounds like an early 80s copy-protection scheme that depended on the bad-sector map of the installed hard drive to identify it. It was reliable because only a low-level format would change the pattern, and very few people ever did a low-level format to their drives. The scheme failed when production improved and most drives could be manufactured error-free.

OK, you take your flash, write it until it breaks, and use the resulting cells to determine an identity. How do you read it? Write it, then read back which cells aren't written. Uh, wait, if you read it by writing it, won't you cause more failures?

Furthermore, flash often doesn't fail so cleanly. Some cells will simply not write to 0. Other times, they will become leaky and read as 0, but then flip back to 1 at some later time.

With regard to the actual flash ID technique, I thought any decent flash device (e.g. all SD cards) would have a wear-leveling controller.

Markus Jakobsson wrote in the article:

No need for error-correcting codes; in fact, we will read and write "raw", which is possible since all of this will be done on OS level.

It appears he refers to raw NAND flash chips without a dedicated hardware controller in front, soldered directly to the motherboard. These chips don't behave like an SD or CF or SATA or USB device; they're more like xD-Picture cards.

It means that I control the hardware (motherboard) which does the identity checking - and for any practical purpose of identification, it means that I can force it to claim that "yes, I have a NAND chip with xxxx pattern" to anyone who wants to identify my hardware - say, a software program installer or a networked device. It's just like the processor ID numbers, which were embedded in the silicon and 'unchangeable'.

It's just like spoofing a hardware dongle: since the consumer OS isn't "trusted", almost anything can be intercepted.