
That is what they get for mandating the code be in ANSI C. How about allowing reference implementations in SPARK, Ada, or something else using design-by-contract? After all, isn't something as critical as an international standard for a hash function the type of software d-b-c was meant for?

If we imagine that the hash function came only as a mathematical definition, how would you test that your new implementation in LangOfTheWeek is correct?

Well, you have two options. One, you can prove that your program behaves, in every important way, the same as the definition. This is long, tedious work, and most programmers don't even have the necessary skills for it. Two, you can make a reference implementation in some other language and compare the outputs.

Now, given, say, 100 programmers each working on their own implementation, we should ideally end up with exactly 1 resulting behaviour, which would mean everybody implemented the algorithm 100% correctly. In practice, the number of distinct behaviours will be somewhere between 1 and 100, depending on the skills of the programmers and the care they've taken in implementing the function.

Now, what's the result here? (No pun intended.) It's likely to be chaos.

If you've read the works of E.W. Dijkstra (start with Cruelty [utexas.edu]), you'd understand that a programming language isn't much more than a formal system for expressing mathematical definitions. Perhaps Haskell or another purely functional language might fit your intuitive understanding of a "mathematical definition" better than a procedural language like C, C++, P*, or Java.

In a word, no. A reference implementation is supposed to be a working version of the code, not just a mathematical description. With a working version, it's possible to do things like test its real world performance or cut and paste directly into a program that needs to use the function. That's obviously only possible if you have a version that works on real-world processors.

Consider Skein as an example. One of the things that Bruce Schneier described as a major goal of its design is that it uses functions that are highly optimized in real-world processors. That means that it's possible to make a version that's both very fast and straightforward to program, an important criterion for low-powered embedded applications. You won't discover that kind of detail until you implement it.

Mathematically, anything is feasible. But if you add a real-world constraint, such as requiring an implementation, that greatly narrows the field.

Furthermore, one of the judging factors is the speed and portability of the algorithm across a wide variety of commonly used platforms; it doesn't make sense to come up with a super-cool hash function that only works well on, say, x86.

The short of it is that people make mistakes from time to time, and it is true that perfection is an important factor.

An implementation in a programming language is a way to define a function mathematically.

That is what a programming language is - a way to precisely define an algorithm.

But C is a low-level language, and therefore maybe a bad choice for function definitions. In my experience, implementing an algorithm in a high-level functional language like Haskell will often result in a beautiful, readable mathematical function definition.

Yes, it would be nice to have a way to compile beautiful mathematical functions to machine code. Sadly you're dependent on those people that are writing the grammar and reference implementation of the compiler. What if they need one of those pesky algorithms for that?

Look up bootstrapping. Most modern compilers can compile themselves using a stage or two of simpler compilers, built with either an existing compiler, an assembler, or maybe some hand written machine code. All the important parts of the compiler (parser, optimizer, etc.) can be written in the same language that the compiler can compile.

Bruce Schneier pointed out [schneier.com] that one can bootstrap a compiler using a different implementation of the language as a (probabilistic) measure against defects introduced by trusting trust. Build it on systems with different compilers, bit-compare the binaries generated on each system, and if they match, you can be reasonably sure that there is no such defect. But unlike C, which has implementations from GNU, Borland, Watcom, M$, Green Hills, and numerous other vendors, a lot of the managed languages lack multiple independent implementations.

Unless you manually prove all the basic results of set theory for yourself (as well as philosophically agreeing that ZF is a good choice of set theory) and then build mathematics from it and derive formal languages from that, you probably shouldn't trust any code on any computer. Even then you can't trust the hardware, or even the wetware in your head.

I was just explaining how to bootstrap a compiler, not the finer points of epistemology.

C is a bad choice for mathematical function definitions, but it's a fantastic choice for integrating into virtually any stage of a software project. It can be used in an OS kernel, a standard portable crypto library (e.g. OpenSSL), embedded firmware, what have you. All of this with NO more library dependencies than the bare minimum memory management, and most crypto/hash functions don't need even that because their state fits in a fixed-size structure. So you can have the mythical 100% standalone C code that fits in anywhere.

But that just raises the question of how to define a hash function mathematically. The lambda calculus? Gödel numbers? Things like cryptographic hash functions don't tend to be nice algebraic thingies like f(x)=x*x+7, especially since they're usually iterative and deliberately messy; the pretty functions are likely to be less secure.

What did they get? You realize this is just an ad for Fortify, right? Out of 42 projects, they found 5 with memory management issues using their tool. Maybe instead of switching to SPARK, the 5 teams that fucked up could ask the 37 that didn't for some tips on how to write correct C.

This just emphasizes what we already knew about C: even the most careful, security-conscious developer messes up memory management.

I know nothing of the sort. How about asking some developers who have a history of getting both the security and the memory management correct which intellectual challenge they lose the most sleep over?

The OpenBSD team has a history of strength in both areas. I suspect most of these developers would laugh out loud at the intellectual challenge of the memory management required here.

"We're going to give you more shit to think about by making you use C. if you can't deal with all the stupid shit C throws at you, you suck."

Which is a shit argument. Just use a better language that gives people less to worry about, and develop from there. Having to debug the shit out of a program for obscure memory management issues shouldn't be a test of your competence. You should be able to focus on the task at hand, nothing else.

Because most of the systems out there use C for the performance-sensitive bits (and when asm optimization is done, people generally use a C implementation as a reference, since C and asm are similar in many ways).

When they start doing Linux and Windows and other popular systems primarily in Ada you can start going WTF over people posting ANSI C code. Until Java, Ruby and Python aren't dependent on C/C++ implementations for their functionality we'll just have to suffer with C.

I hate to have to repeat it for the thousandth time, but Java's so-called virtualization comes crashing right down if you have even a single threading bug. Let me explain how it works.

Java gets compiled to machine code at runtime. Unlike machine code made from C code, the machine code really does have some nice protections from address and type confusion, with a generally acceptable performance penalty.

However, it does NOT have ANY protections from threaded race conditions, so if you make any mistake in this area, those guarantees go out the window.

Call me when your real-world JVM isn't written in C. Until then you're just shifting the burden of resource management to someone else, and you could do that just as easily with C libraries as you could with Java libraries.

"Just Works"? How much Java have you actually used? For the space of SEVERAL MONTHS, the official "production-quality" Sun JVM had 64-bit JIT bugs that made it crash very often on very popular projects like Eclipse. They and their users had to wait for months for the JVM to be fixed upstream, and for those fixes to trickle down into their managed environments, which often takes another few months of testing. Don't talk to me about "Just Works", Java is software just like any other, and far from the highest

That is what they get for mandating the code be in ANSI C. How about allowing reference implementations in SPARK, Ada, or something else using design-by-contract? After all, isn't something as critical as an international standard for a hash function the type of software d-b-c was meant for?

C is the lowest common denominator. Once you have a library in C that's compiled to native code, you can link just about any other language to that library: Perl, Python, Ruby, Java, Lisp, etc., all have foreign function interfaces to C.

C also can compile to just about any processor out there: x86, SPARC, POWER, ARM, etc. Over the decades it's become very portable and optimized.

C is also known by a large subset of the programmers out there, so if people want to re-write the algorithm in another language, they have a familiar reference to work from.

After all, isn't something as critical as an international standard for a hash function the type of software d-b-c was meant for?

No.

The key desirable characteristics of a good secure hash function are (in this order):

Security (collision resistance, uniformity)

Performance

Ease of implementation

If the NIST AES contest is any example, what we'll find is that nearly all of the functions will be secure. That's not surprising since all of the creators are serious cryptographers well-versed in the known methods of attacking hash functions, so unless a new attack comes out during the contest, all of the functions will be resistant to all known attack methods.

With security out of the way, the most important criterion that will be used to choose the winner is performance. This means that every team putting forth a candidate will want to make it as fast as possible, but it still has to run on a few different kinds of machines. The closer you get to the machine, the faster you can make your code -- and this kind of code consists of lots of tight loops where every cycle counts. But assembler isn't an option because it needs to run on different machines.

With security out of the way, the most important criterion that will be used to choose the winner is performance.

Probably no one will ever see this self-reply, but I just noticed that the Skein site makes a good argument that the paring down of the candidate list should (will?) happen in the other order.

The idea is that since it's a huge amount of work to cryptanalyze a hash function, and since it's easy to measure performance (in time and space), the thing to do is to first toss out all of the slow and/or memory-hungry candidates. Obviously, if all of the fast and tight candidates were found to have security flaws, the slower ones would be back on the table.

I think that since only 5 of the 42 projects garnered your attention, a better quote to include in the summary would have been:

We were impressed with the overall quality of the code, but we did find significant issues in a few projects, including buffer overflows in two of the projects.

If the other 37 projects didn't have any significant flaws on the first round of this contest, then that doesn't say to me "well, obviously no one can do memory management properly"; it says that people make mistakes.

People do make mistakes. Even geniuses, when they're trying really hard to be careful. Personally, I see recognizing that as a validation for code review (including the automated code review that I do).

I want the winning entry for this competition to be flawless to the extent that's feasible. Right now, my job includes finding uses of SHA-1 in cryptographic key generation and telling people to replace them with something better. I don't want to be pulling out SHA-3 in a couple of years.

I think that was the article I submitted, expressly with the intent that it would catch the attention of people like yourself who could contribute to the auditing. This improves the quality of the submissions, perhaps identifies flaws in algorithms, and in general leads to a better contest between better competing implementations.

Personally, I'm gloating a little because the functions I considered to have such cool names (eg: Blue Midnight Wish) all came through clean and are also listed on the SHA-3 Zoo as unbroken so far.

This just emphasizes what we already knew about C: even the most careful, security-conscious developer messes up memory management.

This doesn't follow from TFA. The blog points out two instances of buffer overflows. The first one you could argue they messed up "memory management" because they used the wrong bounds for their array in several places... but they don't sound very "careful" or "security conscious" since checking to make sure you understand the bounds of the array you're using is pretty basic.

But that's not what bothered me. The second example is a typo where TFA says someone entered a "3" instead of a "2". In what dimension is mis-typing something "messing up memory management"? That just doesn't follow.

In what dimension is mis-typing something "messing up memory management"? That just doesn't follow.

I haven't evaluated the code in question, because math scares me, but if someone makes a fencepost error (or just a typo, though fenceposting is a common cause of off-by-one errors) it's entirely possible to be mucking with memory that is a byte or a page off from the memory you think you're working on. So that's an example of how mis-typing something could cause an error in memory management (if in one function you have it right, and in another you have it wrong, you can't even get it right by getting lucky).

One reply deep in comment 26951319 [slashdot.org] I demonstrate that typing the "3" instead of the "2" improperly accesses memory that may or may not be allocated. This type of out-of-bounds access is mismanaging memory.

The summary is kind of a troll, since most of the submissions actually managed to get through without ANY buffer overflows.

Buffer overflows are not hard to avoid; they are just something that must be tested for. If you don't test, you are going to make a mistake, but they are easy to find with a careful test plan or an automated tool. Apparently those authors who had buffer overflows in their code didn't really check for them.

C is just a tool, like any other, and it has tradeoffs. The fact that you are going to have to check for buffer overflows is just something you have to add to the final estimate of how long your project will take. But C gives you other advantages that make up for it. Best tool for the job, etc.

Buffer overflows are not hard to avoid; they are just something that must be tested for.

No, they're a huge pain in the ass 99% of the time. What's worse is that pointers work even when they absolutely shouldn't. I recently worked with some code that, instead of making a copy of the data, just kept a void pointer to it. Since that code was passed a temporary variable that was gone right after the call, it should have errored out on any sane system; but it "worked": when called later, the pointer to the long-gone temporary would just read the value out of unallocated memory, and since the temporary hadn't been overwritten yet, the right data came back! The only reason it came up was that when called twice it would show the last data twice, since both calls ended up pointing at the same unallocated memory location. It's so WTF that you can't believe it. Personally I prefer to work with some sort of system (Java, C#, C++/Qt, anything) that gives you a heads-up that you're out of bounds or otherwise pointing at nothing.

Which is why tools like Valgrind or NuMega BoundsChecker exist: they provide much more granular information about how memory is being used and abused. The problem you just described would flag up instantly as a use of previously freed data, along with a few source locations relevant to where it was allocated and freed.

I don't think the performance estimate will change much. You only have to check the input, and the complexity is within the rounds of the underlying block cipher/sponge function or whatever is used to get the bits distributed as they should be.

I blame the bugs on tight time schedules and inexperienced programmers/insufficient review. Basically, cryptographers are mathematicians at heart. There is a rather large likelihood that C implementations are not their common playing field.

I suspect the problem is related to the poor coding practices used in academia. I see college professors who write code that barely compiles in GCC without a bunch of warnings about anachronistic syntax. Some of the C code uses constructs that are unrecognizable to someone who learned the language within the past 10 years, and is completely type-unsafe.

I can't tell much from the code on the link, but I do see #define used for constants which is no longer appropriate (yet is EXTREMELY common to see). C99 had the const keyword in it, probably even before that.

I suspect the problem is related to the poor coding practices used in academia. I see college professors who write code that barely compiles in GCC without a bunch of warnings about anachronistic syntax.

You know you'll learn a lot in a class when, after being told that at the very least his C++ code is using deprecated includes, the professor tells you to just use '-Wno-deprecated'. I've basically come to the conclusion that I am just paying the school for a piece of paper, and that I will learn little outside my personal study.

That depends, what is the class for? If it's a class teaching how to use C++, then you have a point.

If it's just about any other CS class, however, probably the language you are using doesn't matter so much, but rather what you are using the language to do.

I'm guessing that in this instance, the fact that the professor is using some wacky set of C constructs is not nearly as important as what is actually being taught, e.g. an algorithm. That is, ignore the deprecated stuff because that's not what is important.

With a #define, the preprocessor picks up the constant, while const defers it to the compiler. Of course, he said they were equally good; just a matter of style.

We never did learn why we had to put

using namespace std;

near the top of the program other than because we were supposed to use the standard namespace. I never bothered to ask or find out, because by that time I had realized I'm an atrocious programmer and wouldn't be doing that for a living.

I can't tell much from the code on the link, but I do see #define used for constants which is no longer appropriate (yet is EXTREMELY common to see). C99 had the const keyword in it, probably even before that.

C got const much earlier; it was there in 1989. And at least in the past, a static const int FOO was less useful than #define FOO: it wasn't "constant enough" to define the size of an array. But yes, you see macros too often.

I've had to work on an app where the main developer didn't know / didn't care about void *, and used char * everywhere instead. In fact he used char* even when the type was unique, and type cast at every call, and at the beginning of the called function.

When I called him on it, he said that I was doing philosophy and that he had real work to do.

I can't tell much from the code on the link, but I do see #define used for constants which is no longer appropriate (yet is EXTREMELY common to see). C99 had the const keyword in it, probably even before that.

I don't know where to start here.

const has been in C since C89.

const int SIZE = 4;
int data[SIZE];

is fine in C++ and does the same as using a #define for SIZE.

It doesn't work in C prior to C99.

In C99 it defines a VLA; i.e., it behaves the same as if you had written int SIZE=4; without the const, as far as the array is concerned.

Thanks, wow... so, doesn't that mean they broke functionality that worked in previous revisions of C?

?

No functionality broken. You can't use a const int to define an array size in C prior to C99 at all. (That's where this entire subthread started: someone was commenting about the use of #define rather than const int, which is a reasonable criticism of C++ code but not of C code.)

It is an(other) area where C and C++ will forever behave differently (I assume).

No. lint is basic, so basic that there is no good reason to not always use it. (bo.c) is equivalent to the only flaw described in the published summary. Dynamic analyzers (required to catch your example) are just stuck in the shadow of the halting problem.

It's splint these days (at least on Linux), and using splint on any nontrivial large code base will bury you in tons of mostly irrelevant warnings. If you dare to attempt cleaning up the mess, you'll find that you have to annotate your code. And then you have the problem that splint will only spit out correct warnings if your annotations are correct, so you have just doubled the potential sources of error (now it's annotations+code instead of just code).

I tried to write a minimal program that would give no warnings, I couldn't do it. If my main didn't have a return statement, it complained about that. If it had one that was not reachable, it complained about unreachable code. If it had one that COULD be reached, it complained that return could be called from main.

In other news, the first SHA-3 conference will be held in Belgium this week. NIST hopes to be able to reduce the number of contestants in the SHA-3 contest to a more manageable level by the end of it; for more info read on here [securityandthe.net].

MD6 by Rivest and Skein by Schneier et al. seem to be getting a lot of attention, but another celebrity cryptographer, Dan J. Bernstein, also has a hash in this race, called "CubeHash."

DJB continued his tradition of offering cash rewards for people to find security problems with his code, giving out (so far) monthly prizes of 100 Euros to the most interesting cryptanalysis of CubeHash.

So far, the primary criticism of CubeHash is that it's slow, running some 10 to 20 times slower than many of the others in the competition. Dan brushes off this criticism by stating on his site [cr.yp.to]: "for most applications of hash functions, speed simply doesn't matter."

To be honest, when compared to efforts like MD6 and Skein, with their mathematical proofs of security, their VHDL and other in-hardware reference implementations, and their amazing optimizations in both speed and efficiency (Skein can process half a gigabyte of data per second on modern hardware and consumes only 100 bytes of state), entries like CubeHash seem to have that longshot underdog appeal, like a New Zealand soccer World Cup team.

So far, the primary criticism of CubeHash is that it's slow, running some 10 to 20 times slower than many of the others in the competition. Dan brushes off this criticism by stating on his site: "for most applications of hash functions, speed simply doesn't matter."

That dude must have missed out on the small thing called "P2P", since most P2P systems rely on making tons of hash checks per block, per file, and so on. And yes, they do need the properties of a good hash, not just a checksum, so the data can't be poisoned. Any kind of backend processing lots of signed messages? Usually you hash the message to check it against the digest, then check the signature of the digest. What about mobile devices with low battery and CPU capacity? I'm not buying it.

At the very least, using a C-like language with safety, like Cyclone [thelanguage.org], would be a reasonable performance/safety tradeoff for a lot of users compared to the current tradeoffs (which leave quite a bit to be desired [cormander.com]). I'm guessing the main stumbling blocks would be reimplementation overhead (Linux already exists in C, has a lot of code, and is a fairly quickly moving target) and lack of interest on the part of kernel hackers (who have little interest in using non-C languages), rather than performance of the resulting code.

If you're still writing unmanaged code, you get what you deserve. It's 2009, not 1989.

Try running managed code in the 4 MB RAM of a widely deployed handheld computer. Now try making that managed code time-competitive and space-competitive with an equivalent program in C++ compiled to a native binary.

My phone has a Java runtime. It works, and it's in fact a very sensible choice for the application (where security and binary portability matters more than performance). Even today, many embedded devices are powerful enough to run bytecode-interpreted languages, and this will only become more true in the future.

it's in fact a very sensible choice for the application (where security and binary portability matters more than performance)

Except that binary portability doesn't matter, and while security is an absolute requirement, performance must be as high as possible.

Many applications hash huge volumes of data. SHA-256 can hash around 60 MBps on a ~2 GHz core, and that's too slow for many applications. WAY too slow. I have an application where I'd like to be able to hash over 20 MBps on an XScale processor. The rest of the system can easily sustain this data rate, but the hash is the bottleneck. The hash should not be the bottleneck.