
ancientribe passes along this excerpt from DarkReading.com: "Life's too short to defend broken code. That's the reason renowned researcher Dan Kaminsky says he came up with a brand-new way to prevent pervasive SQL injection, cross-site scripting, and other injection-type flaws in software — a framework that lets developers continue to write code the way they always have, but with a tool that helps prevent them from inadvertently leaving these flaws in their apps. The tool, which he released today for input from the development and security community, basically takes the security responsibility off the shoulders of developers. Putting the onus on them hasn't worked well thus far, he says. Kaminsky's new tool is part of his new startup, Recursive Ventures."

As soon as I hit "deliverable" in the first paragraph, warning bells went off. When "productize" appeared as a verb in the second paragraph, I closed the browser window. Sorry, but my experience tells me that the article is simply not worth reading.

The developer culture around SQL is a big part of the problem: the majority of tutorials, cookbook methods, forum support groups, "expert" examples, etc. reinforce doing SQL the insecure way. That may not be current best practice, but you can't rewrite the decades of bad advice still out there, being indexed, referred to, taught in introductory classes by uninterested tutors, and used by people who think infosec is analogous to physical security.

I agree with most of what you said. However, people who are just learning have no business writing business-critical code for high-risk environments, much less without strong supervision.
Also, writing checks for every case imaginable bloats your code, and then there are all the cases you didn't imagine but a clever hacker does. The solution is to write checks for everything valid and have a standard procedure for everything invalid.
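A minimal sketch of that "accept everything valid, reject everything else" idea (whitelist validation) in Python; the field names and rules here are invented for illustration, not taken from any particular codebase:

```python
import re

# Whitelist rules: each field names exactly what IS valid.
# Anything that does not match is rejected -- no blacklist of "bad" inputs.
RULES = {
    "username": re.compile(r"[A-Za-z0-9_]{3,20}"),
    "age": re.compile(r"[0-9]{1,3}"),
}

def validate(field, value):
    """Return the value if it matches the whitelist rule, else raise."""
    rule = RULES.get(field)
    if rule is None or not rule.fullmatch(value):
        # Standard procedure for everything invalid: reject loudly.
        raise ValueError(f"invalid {field!r}: {value!r}")
    return value

validate("username", "alice_42")      # accepted
# validate("username", "alice'; --")  # rejected: quote and space
#                                     # are simply not on the whitelist
```

The point is that the invalid path is a single, boring rejection procedure, so the code doesn't grow with every new attack someone dreams up.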

Bad developers aren't shamed out of the industry, they either carry on churning out bad code or become managers. The people who leave the industry are the decent programmers who tire of working with incompetent assholes.

Considering there are entire, extremely complex systems built purely on stored procedures (which, from the client's point of view, are basically little more than parameterized queries), 99.9% of the time if you cannot parameterize a query, you're doing it wrong.
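For anyone who hasn't seen the difference: with Python's sqlite3 driver (table and data invented for illustration), the parameterized form keeps user input out of the SQL text entirely:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

name = "alice' OR '1'='1"  # hostile input

# Vulnerable: input spliced into the SQL string itself --
# rows = conn.execute("SELECT role FROM users WHERE name = '" + name + "'")

# Parameterized: the driver binds the value separately, so the quote
# characters are just data, never SQL syntax.
rows = conn.execute("SELECT role FROM users WHERE name = ?", (name,)).fetchall()
print(rows)  # [] -- the injection attempt matches nothing
```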

There's nothing stopping you from building a dynamic SQL string with parameters, getting the advantages without the drawbacks if you do it right (like using Hibernate/NHibernate or equivalent) :)
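One hedged sketch of that "dynamic but still parameterized" approach, again using Python's sqlite3 rather than Hibernate, with an invented schema: the SQL string is assembled only from fixed fragments, while every user-supplied value goes into the parameter list.

```python
import sqlite3

def find_users(conn, name=None, role=None):
    """Assemble optional WHERE clauses dynamically, but only from
    hard-coded SQL fragments; user values are bound as parameters."""
    sql = "SELECT name FROM users"
    clauses, params = [], []
    if name is not None:
        clauses.append("name = ?")
        params.append(name)
    if role is not None:
        clauses.append("role = ?")
        params.append(role)
    if clauses:
        sql += " WHERE " + " AND ".join(clauses)
    return conn.execute(sql, params).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)",
                 [("alice", "admin"), ("bob", "user")])
print(find_users(conn, role="admin"))  # [('alice',)]
```

The query shape varies at runtime, but no user input ever becomes SQL syntax, which is the property that matters.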

Having seen this sort of thing firsthand, bad programmers get away with being bad programmers because they have managers who are non-technical and whose bullshit detectors are defective or non-functional.

Part of it is just a corporate culture thing. Some companies encourage honesty and owning up to your mistakes so you can learn from them. Other companies have you living in fear of making even the tiniest mistake, so you'll find any excuse you can to make a given problem someone else's fault. Guess which type of company ends up inadvertently protecting the lousy programmers.

You seem to wander off-point a lot, but the basic gist is that everyone should know how a computer works. Hell, *I* don't even know how a computer works, not really. I can reel off books on the technology, structure, electronics, bus interfaces, caching, logic, programming and the like, and still not understand why a missing semicolon caused quite so much trouble. Or how they layer silicon on the chips. Or why probing a certain I/O port hangs the computer.

And the way to counter that is NOT to expect the average Joe on the street to understand deep-level programming and computing. That's pointless: they will never get it, and what they do get will never be accurate (see the recent article on Knuth's algorithms only working as advertised on a theoretical machine).

It's the same in *ALL* sciences (and anyone that doesn't classify computer science and mathematical sciences as "science" doesn't even begin to understand science), and we can't teach everyone everything. There hasn't been a single person in the world who knew "all of known science" since the ancient Greeks and there hasn't been anyone who knows everything about their own particular area for centuries, most probably.

We already are completely reliant on computers and robots; if you don't think so, you're crazy. The problem is that we *can't* rely on the programmers and system engineers who put them together.

My computer is currently executing billions of logical operations, perfectly and flawlessly, every single second. It's timing itself to balance those instructions across two major silicon chips (and dozens of minor ones) that were a mainframe designer's dream only 10-15 years ago, without fault, on the order of picoseconds, while those chips shut themselves down, speed themselves up and consume mere watts of electricity. It's integrating with millions of disparate electronic systems, detecting quantum-level errors in itself and correcting them. If there were a problem, I would know about it almost instantaneously (with certain checks on RAM / filesystem use). This computer, and every one I work with, has been doing that 24/7 for several years without failure, even through blackouts, brownouts and power faults.

Hell, it's a near-perfect operating device, like the one that controls the airbag in my car, the ABS, my bank accounts, every control system on a modern aeroplane, the satellites that give me television and radio, the Internet, and so on. They all operate virtually flawlessly, across BILLIONS of such devices, all day, every day. In engineering terms that's phenomenal. They do *exactly* as they are told, perfectly, for years on end. Hardware faults are so rare that they cause widespread panic in IT departments when they happen.

Trouble is, some pillock put Linux or Windows or MacOS or VxWorks on them, or confused feet and metres, or thought two-digit years would always be enough. The fault with computers almost ALWAYS lies with the programmer, not the devices. Most of those problems are so damn subtle you could spend years analysing them and still not work out what happened.

Hell, we've had computer chips "designed" by genetic algorithms which perform a specified task better, quicker and cheaper than any chip we've ever designed for the purpose, and although we know "how" they do it, we still don't understand exactly how they work or how to use that knowledge to our advantage. (The anecdote I remember is a chip evolved to distinguish two different frequencies of electrical input; the design the GA produced was smaller and lower-powered than any human design of the time.) We can understand the hardware; that's faultless, overall. But the software *always* lets us down, and no amount of intense study and education can stop that. Hell, it's almost impossible to write more than a few thousand lines of C (which could execute in less than a few hundred CPU cycles even on the slowest of embedded processors) without a bug creeping in somewhere.

Sensible safety is never bloated; it's sleek, functional and manageable. Built-in safety for every imaginable risk is bloat, and a risk in itself, because your imagination becomes the limit of your protection; it's also a management nightmare, because people keep thinking up new ways things can go wrong while the number of right things stays the same. Data validation is one of the most basic things you can do, but doing it the blacklist way is a slippery slope.
Oh, and just for a little mind-bending: imagine a car seat with safeguards for every imaginable thing that could go wrong with just the driver in the car, from spilling hot coffee to having a heart attack, and compare what you imagined to what you get in an average car. The bloat point should become obvious.

So essentially Kaminsky's vision comes down to: "Programmers won't fix their code to prevent SQL injection errors. So my code will prevent SQL injections as long as developers fix their code to use my product"?

They keep getting rid of the smart people through pay cuts or salary freezes. The smart people jump ship, and the people who write functional but terrible code are kept. If you want someone who knows what they're doing, you have to pay, and that increases costs.

Yup. The good programmers also get sick of shouldering the load--fixing the crappy code written by their incompetent coworkers.

I've known too many good developers who got penalized because they spent all their time cleaning up other people's messes, missing their own deadlines, because they cared about having a quality product. At review time, they'd get chastised, get no raises or bonuses, and eventually they'd split. I can't say I blame them, either.

It's pretty interesting that a guy with a resume like yours (tour guide, a bit of web art here and there) feels qualified to imply that Dan Kaminsky, a respected security expert, doesn't have a clue and is a "moron".