I'm going to encrypt my password multiple times in DES and then spray paint it on the side of this bridge and post a photo of the bridge on my homepage. They will never break my security system because we use encryption. So I can feel safe putting my password on the side of that bridge.

To increase security in a password just encrypt it multiple times in a loop like this one:

Look, now no one will be able to break my encryption because DES is applied "level" number of times. This is super efficient, because when I test this it runs blindingly fast even if I set level to something ridiculously large like five billion. It's amazingly efficient.

I'm using level 2048 encryption now. It doesn't slow anything down. In fact, everything runs just as fast as if I never encrypted anything. I know it's secure because it's encrypted. So I can put all my biggest secrets on the sides of bridges in bright red spray paint.

Ahh so that should help to explain why games are so fast...oh wait, maybe more likely to explain why games are filled with bugs.

I do understand that game developers want their product to reach the shelf as quickly as possible, but seriously. Who here would buy a table with only three legs, or a car with no way to fill it up with gas (or whatever fuel you are using)?

That's more or less what game developers are sending to their customers, so no wonder people are annoyed at them. It has happened more than once that people I know buy a game and can't get it running, but by downloading a pirated version they have no trouble whatsoever.

What I find amazing is that the comment would seem to suggest totally different behaviour from the actual code (even if you correct the for condition).

The comment would suggest the memory getting zeroed for a few seconds, while the code just runs through the array once (if we fix the for loop). Also, the comment would seem to suggest that if you don't overwrite a memory cell at least X times, its previous contents can still be recovered, which is not true: for hard disks, maybe; for RAM, not at all.

It has happened more than once that people I know buy a game and can't get it running, but by downloading a pirated version they have no trouble whatsoever.

That normally indicates that the game's publisher has added copy protection code after the game code has been delivered to them by the game's developers. And the publisher's programmers have screwed it up.

Actually, that line of thinking isn't completely missing the point. The encryption mentioned in the beginning might scare off one or two potential hackers, and every little helps. Enough of these measures might make even dedicated and experienced software molesters reckon it's not worth the effort.

You should rightly be concerned with everything you store in memory, because it is or can be made accessible to other processes. Even in plain old Windows your data may get swapped to disk and stay there for anyone to access.

That being said, the code is still stupid (and doesn't work), the comments are incorrect, and overwriting memory several times makes no difference. The bit about keeping memory clear for a few seconds is likely a hugely stupid misunderstanding of how the CPU data cache works. "Some technology" is particularly funny for all the wrong reasons. XOR gives you obfuscation, not encryption. And if IP addresses are sensitive information in the first place, your entire approach is flawed. Still, at least they're trying. :)

The note that data written to disk should be overwritten several times is also correct, though 7 or 32 seems excessive. One random bit pattern should be enough, two if they're paranoid. And they should be more concerned with preventing the disk caches from turning those 32 writes into a single write delayed to whenever it's most convenient to update the file in question.

My name is Martin Gomel, and I am the lead developer for "Some Technology". If you are interested in purchasing "Some Technology" for elite hacking into game server code, please send a certified check for $1000 to:

A pure client/server approach to gaming is doomed to fail in most games, because latency on the Internet is huge. So most games go for a hybrid approach, where some logic runs on the client and some on the server.
But if the "wrong" logic ends up client-side and some cracker goes looking for it, it may fall into the cracker's hands, and that can result in a hard-to-detect cheat exploit.

I doubt you need to obfuscate information server-side, so this smells like client-side code.

The real WTF is putting ">". But maybe it's an anonymization artifact. I don't know if the idea of overwriting more than once is lame or not, because I have not actively tried to look at memory in... heh... memory or the swap area. Maybe the way memory works, you really need to write more than once to cause a flush() or something like it OS-wide, so the stuff is really cleared.

Why don't you go read the Gutmann paper on secure deletion and recovery from magnetic memories before you accidentally make yourself look like someone shooting his mouth off about something he is fundamentally ignorant of? Most serious, professional, high-security cryptography software zeroes out its temporary memory after use.

The REAL WTF is why anyone would think a hacker who was too stupid to use a packet sniffer would be smart enough to extract an ip address from a compiled binary. What are they going to do next? Hack the server with their l33t lack of networking skills? I doubt it.

I have a theory about this one. Some oaf manager demands these "security" features, the programmer tries to persuade him that it'd be pointless but fails. Programmer sneakily implements it with incorrect for loops to keep the manager happy without tainting the project.

Although the implementation above didn't work, there are lots of good reasons for zeroing memory after a password has been in it. Some (e.g. the swap file on disk) have already been mentioned, but one that is often missed is that the location of a variable is quite likely to be on the stack, which persists until overwritten by something else.

My name is Martin Gomel, and I am the lead developer for "Some Technology". If you are interested in purchasing "Some Technology" for elite hacking into game server code, please send a certified check for $1000 to:

Why don't you go read the Gutmann paper on secure deletion and recovery from magnetic memories before you accidentally make yourself look like someone shooting his mouth off about something he is fundamentally ignorant of? Most serious, professional, high-security cryptography software zeroes out its temporary memory after use.

OK, I believe you. But what does that have to do with anything Gomel said?


Why don't you go read the Gutmann paper on secure deletion and recovery from magnetic memories before you accidentally make yourself look like someone shooting his mouth off about something he is fundamentally ignorant of? Most serious, professional, high-security cryptography software zeroes out its temporary memory after use.

The paper that says you can't do it? People cite it to support the idea they have to do "35" passes over a disk, but it actually says it's completely impossible to recover anything after one. It doesn't have anything to do with memory wiping.

It's O(1) because there is no n anywhere. In order for it to be linear time, the amount of time required would have to scale linearly with some factor.

So if instead of always incrementing four elements, it was something like:

for (int i = 0; i < n; i++) {
    foo[i] += bar;
}

Then it would be O(n).

Think of it this way: if you can easily unroll the loop, it's O(1). After all:

foo[0] += bar;
foo[1] += bar;
foo[2] += bar;
foo[3] += bar;

Is clearly O(1). (Well, unless someone's been playing with operator overloading.)

The loops, even if they worked, would be O(1) because they always operate in constant time.

O(1) doesn't (necessarily) mean fast: it means it always takes the same amount of time to run.

I see what you mean: O(n) is only applicable to containers, where you do a for loop up to container.size()/length()/your favourite language's method()/property...

But no ...

The O notation is not applied to for loops but to algorithms. Meaning that their (when correctly implemented) algorithm has O(N) complexity, because it IS dependent on the size of the RAM space you want to delete...
Your example for loop has O(N) complexity because it implements an addition algorithm on an array of data...

If your assertion were right, then I could theoretically sort a dictionary in O(1) by unrolling the n*log2(n) loops, or am I wrong here?

The Gutmann paper is the most ridiculously overquoted "security" document in the history of electronics. It's led to nearly everyone thinking that data on a modern hard drive which has been overwritten can be recovered. Yet, somehow, mysteriously, there are no actual factual accounts to be found anywhere of this process being used successfully.
Yes, there are lots of data recovery places that will recover data from damaged drives. That's easy, more or less, because the data is still there except where it was burned/scraped/etc. off of the physical media. Can anyone cite even one documented instance of an overwritten drive being successfully read, other than the decrepit MFM drives that Gutmann discusses? No? You can only cite tinfoil-hat rumours about the CIA and electron microscopes? That's what I thought.

Maybe crackers will check for memcpy calls and zoom in there.
Normal programs may have only a few memcpy calls in the code, like 8 or 12 per MB of source. It will be easier to crack if the guy uses memcpy, because the cracker will look at the small memcpy calls before the call runs. If the cracker filters by size it may be even easier, like filtering for a buffer just big enough to store an IP.

First, it is possible to recover values from memory that has lost power provided those values had stayed at the given memory locations for a long time, think hours. That said, it is rare that any real values, especially transient variables from an application, stay put long enough to be recoverable.

Concerning hard drives, just writing zeros doesn't do it, because a zero value has a certain magnetic strength that can be compensated for in the recovery process. The only real way to make sure the data is not recoverable is to physically destroy the drive. However, you can make it near impossible by filling the drive with random data over multiple passes; 32 is the NSA-accepted number.

If you don't believe me, just call a data recovery company. Tell them you formatted the drive and see if they can recover the data. They will tell you yes, for a fee. About the only time they can't recover all of the data is if a drive head crashed into the platters; but they can usually get some of it depending on the physical damage.

Obviously this programmer has never heard of memcpy or bzero (which can be easily implemented using memcpy).

I assume you mean memset, not memcpy.

And for preventing memory from being swapped, there's mlock. Though considering this is a game, they're probably running on windows, so they're stuck with VirtualLock, which is only available on win2k+.

VirtualLock does not do what you think it does: see e.g.

"VirtualLock only locks your memory into the working set"
http://blogs.msdn.com/oldnewthing/archive/2007/11/06/5924058.aspx

See also the Dekker/Newcomer NT device driver book, which has a little anecdote about this.

To lock pages in memory, you have to write a kernel mode device driver and use the MmProbeAndLockPages API.

It IS possible to see previous states of the data on a disk. It's very hard though.

However, as far as I know HDD bits aren't in a FIFO queue. So, if you're looking at vague traces of data on a magnetic platter, you can't tell if that bit that looks like it might have been a '1' was a '1' at the same TIME as the bit that's next to it looks like it might have been a '0'. Any CRC will have been trashed as well (probably more so than the rest of the data), so there's no way of checking that what you think you've recovered is what the data originally was.

IOW, if a byte on a disk was first
01010101 then
01001001 then
00010101 then
00000000

At the end, with your ultrasensitive detector you will probably see traces of '1's in these positions
01011101

The 'decay' of the magnetic coding would have to be so precisely uniform to be able to tell what a combination of bits were at a particular moment in time that it's just infeasible except on 24 or CSI..

Security conscious organisations like the NSA might well overwrite lots of times (they'd probably actually degauss, shred then burn) - not because they know data could be retrieved if they didn't, but just because it's not a big effort to do so, and who knows what might be possible in the future. TBH, they'd be daft NOT to overwrite X times given the simplicity of the task.

I don't know what language the example is written in, and my comments apply less to C, C++, etc. than to some others, but...

Most programmers have too much faith that the computer is doing what they tell it to do, when in fact it is only guaranteed to do something with the same result. To begin with, most people today are running in a virtual machine which, as noted before, can be swapped around at the will of the OS, leaving its image in RAM and on disk. Second, within the runtime environment, many languages keep track of all their variables in a symbol table, and it's quite possible that successive assignments of a value to a given variable name will result in a change of symbol-table address to a new area in memory, leaving the old value for a garbage collector to clean up later. Third, an optimizing compiler might move the assignments of a constant out of the loop altogether.