Posted
by
kdawson
on Tuesday June 15, 2010 @04:23AM
from the won't-have-xss-to-kick-around-anymore dept.

ancientribe passes along this excerpt from DarkReading.com: "Life's too short to defend broken code. That's the reason renowned researcher Dan Kaminsky says he came up with a brand-new way to prevent pervasive SQL injection, cross-site scripting, and other injection-type flaws in software — a framework that lets developers continue to write code the way they always have, but with a tool that helps prevent them from inadvertently leaving these flaws in their apps. The tool, which he released today for input from the development and security community, basically takes the security responsibility off the shoulders of developers. Putting the onus on them hasn't worked well thus far, he says. Kaminsky's new tool is part of his new startup, Recursion Ventures."

As soon as I hit "deliverable" in the first paragraph, warning bells went off. When "productize" appeared as a verb in the second paragraph, I closed the browser window. Sorry, but my experience tells me that the article is simply not worth reading.

The article wasn't all that informative, and yes, red flags went up as soon as I read "deliverable" & "productize". Basically his framework requires "developers to use different prefixes that describe variables of the strings, without requiring any major changes to their coding style". That's the gist of the important info in the article.

I'm not making any statements or claims against Kaminsky, Recursion Ventures or Interpolique. I'm just agreeing that the article wasn't all that great.

So essentially Kaminsky's vision comes down to: "Programmers won't fix their code to prevent SQL injection errors. So my code will prevent SQL injections as long as developers fix their code to use my product"?

It requires developers to use different prefixes that describe variables of the strings, without requiring any major changes to their coding style, he says. And the resulting code is automatically formatted in such a way that can't be easily abused by the bad guys.

"Our system makes it very clear what is data and what is code without asking the developer to jump through hoops to make that expression" as with existing secure coding options for string-injection prevention, Kaminsky says. The tool establishes a boundary between data and code and then translates it for the destination coding language -- be it SQL or JavaScript, for example, he says.

Which means he enforces a convention on developers that aims to improve code security. Sounds smart.

Interesting... a naming notation to describe the contents of variables. Hungarians like Kaminsky sure are smart!

Hungarian notation duplicates information the programmer and the system already know into the syntax. And it's not comparable, since there is no framework that checks whether the programmer actually used the right prefix for the variable's type.

Here, the notation adds information the programmer knows to the system. One could also think of it as declaring or annotating a variable as allowed to contain code. That this information is useful to the system for security is well established (W^X and the like).

What you're describing is Hungarian notation, the way it was originally intended. Unfortunately, the original paper describing Hungarian notation used the word "type" to define this sort of metadata, and it was misinterpreted as meaning "type" as in data type, leading to a sort of bastardised Hungarian that isn't much good for anything. What's semi-new, and interesting, here is the compiler automatically enforcing the meanings of the prefixes. (Splint, a static checker for C, has done that sort of thing for years.)

Hungarian notation is best used not to store the actual datatype of the variable, but additional information beyond the raw datatype. So "strTitle" is worthless in a statically typed language, and at best questionable in a dynamically typed language.

But "uTitle" to indicate an unsafe (unescaped) title and "sTitle" to indicate a title that is safe (has been escaped) is a coding convention that makes plenty of sense. It is then very possible to automatically scan code to find violations of safety, such as any assignment of an unsafe value to a safe variable.

Hungarian notation in this case would add semantic value that's not already captured by the type checker, but maybe we should just modify the type system [blogspot.com] to cover the safety of strings.

As another objection, isn't forcing the programmer to use Hungarian notation for safety a more complicated option than just using parameter binding correctly in pretty much any language/framework that uses varargs for this purpose (Java/JDBC is the notable exception)? Don't object-relational mapping systems in many frameworks handle this already?

The whole point is that it wasn't a grammatical error, it was intentional marketing-speak. People that use such language often don't have a clue what they are talking about. If you don't want to sound like a moron, don't talk like one.

I've got a patent-pending solution. Step 1, you buy a copy of the OED, the largest one you can find. Step 2, you buy a tall ladder, preferably 10' or higher. Step 3, you place said ladder in front of the subject. Step 4, you climb it and drop the OED straight on his head. If the spelling doesn't stick, chances are you've got quite a mess on your hands, and ought to pick up some cleaning supplies.

It's pretty interesting that a guy with a resume like yours (tour guide, a bit of web art here and there) feels qualified to imply that Dan Kaminsky, a respected security expert, doesn't have a clue and is a "moron".

What on Earth makes you think a random commenter on Slashdot would have "met Dan" before? Is meeting the author a prerequisite to comment now? I just said marketroid speak turns me off and based on my previous experiences has a very high potential for being bullshit. Or did you just want to show off how cool you are in front of everyone..."oh yes Dan and I have met and we're on a first-name basis! Look at me and respect me! Remember the utterly forgettable handle I use on this website and quake when I

Actually, that's how the Java version works -- you take strings, and subclass them into safe versions and unsafe versions. Then you combine, either through a vararg shell, or through sequential dot notation.

I'm not a big fan of either; I really think interpolation is the right way for a programmer to express intent, and the compiler should be smart enough to extract it.
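The safe/unsafe subclassing idea described above can be sketched in Python. The class and function names here are hypothetical illustrations, not Kaminsky's actual Java API: strings start out "unsafe" and must be explicitly escaped before they may be combined into a query.

```python
# Sketch of the safe/unsafe string idea (hypothetical names; the real
# Java API may differ). Unsafe strings must be escaped before use.

class UnsafeString(str):
    """User-supplied data; must be escaped before use in SQL."""
    def escaped(self):
        # Minimal SQL string-literal escaping: double any single quotes.
        return SafeString(self.replace("'", "''"))

class SafeString(str):
    """Data that has already been escaped."""

def build_query(template, *parts):
    # Refuse to combine anything that is not explicitly marked safe.
    for p in parts:
        if not isinstance(p, SafeString):
            raise TypeError("unsafe value in query: %r" % (p,))
    return template % parts

name = UnsafeString("O'Brien'; DROP TABLE users; --")
query = build_query("SELECT * FROM users WHERE name = '%s'", name.escaped())
```

The type system, rather than the programmer's discipline, now enforces the escaping step: passing the raw `UnsafeString` raises a `TypeError`.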

By using a transaction around command groups, you ensure that any error will cause the entire transaction to be rolled back. By disabling the comment character, you make it impossible to comment out the rest of the command and not cause an error.

The developer culture around SQL, where the majority of tutorials, cookbook methods, forum support groups, "expert" examples, etc. reinforce doing SQL the insecure way. It may not be current practice, but you can't rewrite the decades of bad advice still out there and being indexed, referred to, taught in introductory classes by uninterested tutors, and used by people who think infosec is analogous to physical security.

There is truth in what you say. "Developer culture" has grown in many bad and improper ways. It's sad and unfortunate. Of course, every time I say so, I lose karma points or whatever. But you have to admit that developer culture varies largely with the platform for which they are developing. Are there excellent Windows coders? Oh yeah, I'm sure of it. Are there bad Windows coders? The question doesn't need to be asked. What is the rate and proportion of said developers? It's a guess, but I favor a higher proportion of bad coders on Windows. Do other platforms foster bad/lazy coding?

Well, as put, yes. Tutorials and methods and the like tend to get the message across as simply and directly as possible. Inserting error-checking and validation code might confuse matters. But people who are learning may not realize the need for such code until it is too late.

I can't even think of writing code without checks for every condition imaginable, simply because when I started coding, I was learning among peers whose favorite thing to do was poke holes in your code in some way or another. I guess that's known today as "peer review" but it was more like peer pressure review when I was in school. The last thing I wanted was to have embarrassing code, or code that might be ridiculed. And I think that's what's TRULY missing in today's development environments -- shame and ridicule.

Windows and Mac are both quite "closed source", and peer review, if any ever occurs, happens internally. Linux is open source and peer review happens all the time.

I agree with you on most of what you said. However, people who are just learning have no business writing business critical code for high risk environments, much less without strong supervision.
Also, writing checks for every case imaginable bloats your code, and then there are all the cases you didn't imagine but a clever hacker does. The solution is to write checks for everything valid and have a standard procedure for everything invalid.

However, people who are just learning have no business writing business critical code for high risk environments, much less without strong supervision.

This is very true, but bad habits that are learned early are hard to break. When these 'noobies' start to work on more critical stuff, and they have deadlines, and a boss who isn't as 'understanding' as you or I, they will cut corners because it is already in their repertoire. Teach them properly, right from the start, and everyone is in much better shape: noobies, teachers, employers, the dev community, site owners & operators... and the general public.

Also, writing checks for every case imaginable bloats your code, and then there are all the cases you didn't imagine but a clever hacker does. The solution is to write checks for everything valid and have a standard procedure for everything invalid.

I think the bloat needn't be as bad as you imagine. For example, if you are screening for illegal/disallowed characters in your input string, you could write a series of if/thens to test for each one, or you could define a string comprised of disallowed characters and write a loop to test for the presence of any of those characters in the input string. You can be as clever or slick as you like so long as it is accurate and complete. But calling it bloat is something of a misnomer. Are seatbelts on a car bloat?
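The loop-over-a-blacklist approach described above can be sketched in a few lines. The character set here is purely illustrative, and (as the next reply points out) blacklisting is inherently incomplete:

```python
# A minimal sketch of the blacklist check: one string of disallowed
# characters and a single membership test, instead of a chain of if/thens.

DISALLOWED = "'\";<>-"

def contains_disallowed(text):
    # True if any character of the input appears in the blacklist.
    return any(ch in DISALLOWED for ch in text)
```

Usage: `contains_disallowed("Robert'); DROP TABLE Students;--")` flags the input, while plain alphanumeric strings pass.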

Sensible safety is never bloaty; it's sleek, functional and manageable. Built-in safety for every imaginable risk is bloat, and a risk in itself, because your imagination is the limit of your protection, and a management nightmare because people keep thinking up new ways things can go wrong while the number of right things stays the same. Data validation is one of the most basic things you can do. But doing it the blacklist way is a slippery slope.
Oh, and just for a little mind-bending, imagine a car seat t

I would probably use regex for this. If the language didn't support regex, I would assume that this was a much smaller scale app or utility and then review my options based on that, but for any good sized project, regex would do the job better than any sort of native character comparisons and I can't imagine there is any serious development language that doesn't have a regex library or regex support.
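The same screen, done with a regular expression instead of a hand-rolled loop. The character class is an illustrative assumption, not a complete blacklist:

```python
import re

# Regex version of the character screen; the class below is only an
# example set of disallowed characters, not an exhaustive one.
DISALLOWED_RE = re.compile(r"['\";<>-]")

def is_clean(text):
    # True when no disallowed character appears anywhere in the input.
    return DISALLOWED_RE.search(text) is None
```

The regex form scales better than chained comparisons: extending the policy means editing one pattern, not many branches.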

Having been in the situation of having no idea what I was doing while writing business-critical code, I'd like to explain how this happens. My boss comes to my team and yells "DO MORE WITH LESS!!!" They decide my department is now in charge of things some other department used to do, but they fired them all. When we can't keep up, management comes to us, declares that we obviously must be surfing Slashdot all day instead of working, and institutes a metrics system. It's supposed to track every piece of work we do, assign how long it "should" take to complete (a totally invented number), and then track it. At the end of the day we get a stat that says we were "70% productive" because we completed 70% of what they thought we should. What the system really does is make it take twice as long to do all the work we already had too much of. We start working through our breaks and lunches trying to make our numbers.

Finally one day I realize the similarity between many of our tasks. A lot of them could be made easier if there were a web page that collected the info together, and then maybe some scripting that added things to some databases. I have limited coding skills, so I go out and find example code and manipulate it until it "sort of" works for what I need. Finally I can make my stats. After a few weeks my manager comes to me: "It's impossible for you to meet these stats, how are you doing it?!" I explain what I have written and suggest that our code department write something similar that actually follows standards and what-not. But no, they apparently are not taking on any new projects at this time because they are busy writing a database for tracking their projects (totally serious, that's really why they denied our request). My boss decides that what I've written is too important for them not to use.
I explain MANY TIMES that I am not a programmer, have no schooling, I just found bits of code on the net and modified them extensively, and on top of that what I've written goes down in flames on a REGULAR basis. We're talking database corruption, crashing the entire workstation, etc. They understand that but are going ahead with it. They say they will get a spare programmer to help me work the bugs out when one's available. That was 3 years ago; my code has grown into a monstrosity beyond imagination. It controls much of everything we do, but every few hours I'm called to fix it. One of the databases corrupts so often that I have it back itself up every 15 minutes, but we still constantly lose data. Meanwhile my boss has had me add feature after feature and completely eliminated any time I had been given to maintain the code, making things more complex and dooming the entire system to an even earlier death. When they finally got a coder to look at it, it was such a mess that they quoted a total rewrite with a price tag 4x my annual salary. The fact that the entire thing hasn't collapsed in on itself is shocking to me; meanwhile my department is now so dependent on the mess that when it collapses I don't think we could function at all.

I can't even think of writing code without checks for every condition imaginable, simply because when I started coding, I was learning among peers whose favorite thing to do was poke holes in your code in some way or another. I guess that's known today as "peer review" but it was more like peer pressure review when I was in school. The last thing I wanted was to have embarrassing code, or code that might be ridiculed. And I think that's what's TRULY missing in today's development environments -- shame and ridicule.

At DEC we had a formal process known as “code review”. A bunch of us got copies of some code to review. We then all met together and went over the code line-by-line describing the flaws that we found. There was also statistics gathering and reporting, but the greatest value of the process was to the coder, who got feedback on his code. I had thought it would be hard to avoid getting upset at what were perceived as personal attacks (“that's my baby you're criticizing”) but, at least in the code reviews that I was involved in, that never happened. The whole thing was handled very professionally.

So you were working at DEC. I have been waiting to ask this question of someone from DEC for a long, long time. The default behavior on DEC for any violation seems to be to crash the executable, without warning, without stack trace, nothing. You have to laboriously insert debug/print statements to find the location of the crash. It was a nightmare of a platform to work with. We were basically using DEC as our hardware bounds checker: if it runs on DEC, you don't have to run a bounds checker, Purify, etc. But it was very painful to develop on that platform.

The behavior depends on the platform. My experience was with the PDP-10 and VAX systems. The PDP-10-based machines generally were very good for debugging, since the early software was written in assembler. There was no stack trace, since stack handling was an application convention, but there was a good debugger.

On the VAX systems the assembler-level debugger was not as good, probably because we were developing in a high-level (for the time) language by then. Exception handling was much better, though.

At this point I'd settle for people not writing SQL in all caps with the vowels removed. It's like the 'convention' is to make SQL as unreadable as possible. The 70's are over folks, toss your caps-lock key and buy a vowel. SQL can be readable.

Those of us who are refugees from the 70's know that we use capitals for SQL keywords and lower case/mixed case for table names, column names and values. So
SELECT colname FROM Tablename WHERE colname IN ('value1', 'value2')
is easy to read. Those of us who can do t

It's easy enough to straighten out, though. At my current job, committing non-parameterized SQL strings into production is a firing offense and everyone is told that from the beginning. It's right up there with "don't stab the boss" and "don't smoke crack at your desk".

I laugh here, but it really is that serious. There's not a single legitimate reason for ever using anything other than parameterized queries. They're easier to write ("How many quotes do I need to put here?"), easier to maintain (because you don't ever have to mix SQL and code), always as fast as constructed queries and usually faster, and generally superior in every single way.

Parameterized SQL, or prepared statements, completely prevent SQL injection attacks. They might also speed things up in some circumstances. Why not simply use them exclusively?
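A minimal demonstration of the point, using Python's built-in sqlite3 module for illustration; the placeholder keeps the hostile string out of the SQL text entirely:

```python
import sqlite3

# Parameterized statements: the ? placeholder binds data separately
# from the SQL text, so quote characters in the input are inert.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users (name) VALUES (?)", ("alice",))

hostile = "alice' OR '1'='1"
rows = conn.execute("SELECT name FROM users WHERE name = ?", (hostile,)).fetchall()
# The hostile string is compared as data and matches nothing.

rows_ok = conn.execute("SELECT name FROM users WHERE name = ?", ("alice",)).fetchall()
# The legitimate value matches normally.
```

The same classic payload that breaks a concatenated query is simply an odd-looking name here.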

I question the need for SQL. Can't we have a simple OO query system? We don't need to write strings of TCL to interact with GUI components.

I've never seen a complex GUI that could come back to the same state it was in when you pulled the plug on it. Nor have I ever seen a complex GUI that could run for years without having to be restarted. To my knowledge, every major operating system has a function to force kill a hung GUI.

SQL (or any system that tries to be as expressive as the relational model) isn't so important for queries as it is for declarative integrity.

If you don't have declarative integrity, you can't centralize your integrity checks.

It would be useful to the whole of Slashdot if you could support your assertion with some examples or a reference.

NOTE: As far as I understand, an "injection attack" is able to completely change the behaviour of a SQL statement, something like transforming a SELECT into a DELETE or ALTER. We are not talking here about SQL errors like:

DELETE from ana_tbl WHERE ana_name LIKE (?)

The above statement can easily be abused if someone passes % as the "name", but it is NOT an injection attack, it is just a plain bug.
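Even with a bound parameter, LIKE treats % and _ as wildcards, so that bug is real. One way to close it is to escape the wildcards and declare an ESCAPE character; this is a sketch using sqlite3 for illustration:

```python
import sqlite3

# LIKE wildcard escaping: a bound '%' would otherwise match every row.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE ana_tbl (ana_name TEXT)")
conn.executemany("INSERT INTO ana_tbl VALUES (?)", [("alpha",), ("beta",)])

def like_escape(s):
    # Escape backslash first, then the LIKE metacharacters % and _.
    return s.replace("\\", "\\\\").replace("%", r"\%").replace("_", r"\_")

pattern = like_escape("%")   # a user trying to match everything
rows = conn.execute(
    "SELECT ana_name FROM ana_tbl WHERE ana_name LIKE ? ESCAPE '\\'",
    (pattern,)).fetchall()
# The escaped '%' is now a literal character and matches no row.
```

So parameter binding stops injection, and wildcard escaping stops this separate (non-injection) abuse.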

Since this was the first post asking how to inject SQL when parameters are used:
The cases where parameters do not protect are when the person making the statement wants to be able to pass table names into the SELECT, in the FROM portion, and you cannot parameterize table names.
Yes, they are missing the point.
The other example I have been given is that they want to generate a complete WHERE string, pass that through as a parameter, and expect that the parameter will provide protection.

You can't supply table names in the ones I'm using, in fact the whole point of a parameterized (fuck me I can't spell that) query is to have the query in a fixed state rather than creating them dynamically.

Regarding your other point, when you use parameterized queries you have a guarantee from the driver to escape anything that's put into the position/binding, I don't care how random your string is, there is simply no way you will inject anything into my database.

Actually it's better. Parameters are never seen by the SQL interpreter because they are applied to already-interpreted queries -- they are not escaped because they don't go through the mechanism that escapes and un-escapes anything. Obviously, if someone is stupid enough to take data from a database and concatenate it with something else to construct an interpreted query statement, he will face exactly the same problem as if he had taken that data directly from the user -- parametrized queries are only useful if ALL queries use them.

Considering there are entire extremely complex systems built purely on stored procedures (which, from a client point of view, are basically just a little more than parameterized queries), 99.9% of the time if you cannot parameterize a query, you're doing it wrong.

There's nothing stopping you from building a dynamic SQL string with parameters, and getting the advantages without the drawbacks if you do it right (like using Hibernate/NHibernate or equivalent) :)

99.9% of the time if you cannot parameterize a query, you're doing it wrong.

In that case, all the 0.1% of the queries appeared to fall on me. Try using operator IN with parameters, and then see why it doesn't always work [pineight.com].

There's nothing stopping you from building a dynamic SQL string with parameters

That doesn't work if the parameter interface in your database's client API expects there to be a constant number of parameters in each statement. For example, how does one pass a variable number of parameters to mysqli_stmt_bind_param() in PHP?

call_user_func_array(). You just have to make sure the parameter array contains references to variables, since mysqli_bind_param() (mysqli_stmt_bind_param is deprecated, use mysqli_bind_param instead) expects the parameters to be passed by reference.

call_user_func_array() You just have to make sure the parameter array contains references to variables

Then the code has to maintain three things in parallel: the string of ?s in the SQL, the list of argument types, and an array containing references to elements in another array. Does this method, which requires reaching into more obscure corners of PHP (references and call_user_func_array()), provide noticeably more safety than a dedicated function for escaping lists?

(mysqli_stmt_bind_param is deprecated, use mysqli_bind_param instead)

I don't use either; I use the bind_param method of a prepared statement object.

There are many types of queries (typically those that go beyond CRUD operations or are a little meta) that cannot be parameterized.

Please provide an example. Because I don't believe you.

Few SQL client libraries support passing a 1-column table as a parameter, yet that's exactly what you need to do when using SQL's operator IN [pineight.com]. So instead, I made a function that escapes such 1-column tables using the quoting rules of SQL, and I tested it thoroughly.
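Another workaround, where the client library allows a variable number of bindings, is to generate one placeholder per list element so IN still uses real parameters rather than hand-escaped text. A sketch with sqlite3:

```python
import sqlite3

# Dynamic placeholder generation for IN: the SQL shape varies with the
# list length, but every value still travels as a bound parameter.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (name TEXT)")
conn.executemany("INSERT INTO t VALUES (?)", [("a",), ("b",), ("c",)])

values = ["a", "c", "nope'--"]                    # last item is hostile
placeholders = ",".join("?" for _ in values)      # "?,?,?"
sql = "SELECT name FROM t WHERE name IN (%s) ORDER BY name" % placeholders
rows = conn.execute(sql, values).fetchall()
# Only the genuine matches come back; the hostile item is inert data.
```

Only the placeholder count is interpolated into the SQL, never the values themselves, so the injection surface stays closed.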

Judging from the summary, this tool is useless for good developers. I know I trust myself with security, with regard to things like SQL injection attacks, more than I'd trust some automated system to eliminate them.

But it sounds like it's designed as a crutch for incompetent developers: those who don't use parameters or prepared statements because they don't know they exist, don't understand them, can't be bothered to learn about them, or have some irrational reason for not using them.

What the product does is go through your code and change in-line SQL to code like this:
select * from table where fname=b64d("VEhJUyBJUyBUSEUgU1RPUlkgQUxMIEFCT1VUIEhPVyBNWSBMSUZFIEdPVCBUVVJORUQgVVBTSURFIERPV04=")
Then when it is executed in the database you get injection-safe code.
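The trick above works because base64 text cannot contain quotes or semicolons, so nothing inside the literal can break out of the b64d() call. A rough simulation, where `b64d` is a Python stand-in for the database-side decoding function, not Interpolique's actual implementation:

```python
import base64

# Simulating the b64d() idea: the literal travels through the SQL as
# base64, whose alphabet contains no quote or statement-separator
# characters, and is decoded only at the database end.
def b64(s):
    return base64.b64encode(s.encode()).decode()

def b64d(s):
    return base64.b64decode(s).decode()

hostile = "x'; DROP TABLE users; --"
token = b64(hostile)
# The encoded token has nothing an injector could use...
# ...yet the data arrives at the database intact after decoding.
```

The data/code boundary is enforced by the encoding itself rather than by escaping rules.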

More excuses for hiring developers who can produce bad code by the metric ton without understanding what it actually is they are doing. And they don't even need to understand. Sigh... It would be better to ban your developers from raw SQL, move them onto an abstraction layer that puts the queries together behind the scenes, and hire a competent developer to build that layer.

Having seen this sort of thing firsthand, bad programmers get away with being bad programmers because they have managers who are non-technical and whose bullshit detectors are defective or non-functional.

Part of it is just a corporate culture thing. Some companies encourage honesty and owning up to your mistakes so you can learn from them. Other companies have you living in fear of making even the tiniest mistake, so you'll find any excuse you can to make a given problem someone else's fault. Guess which type of company ends up inadvertently protecting the lousy programmers.

They keep getting rid of the smart people by pay cuts or salary freezes. The smart people jump ship, and the people who write functional but terrible code get kept. If you want someone who knows what they are doing, you have to pay, and that increases costs.

Yup. The good programmers also get sick of shouldering the load--fixing the crappy code written by their incompetent coworkers.

I've known too many good developers who got penalized because they spent all their time cleaning up other people's messes, missing their own deadlines, because they cared about having a quality product. At review time, they'd get chastised, get no raises or bonuses, and eventually they'd split. I can't say I blame them, either.

As much as I like good code, people like that are not doing their job. Clean up code as much as you have time for while doing your own job, but when you start missing deadlines you visibly cost the company money. Best course of action would be to note down dirty code for after release so that your manager can give you time to write a patch for important issues.

It doesn't say anything about how this actually works and how it differs from existing solutions. And, hey, most developers aware of SQL injection / XSS etc already protect their apps. Rails has got both, PHP frameworks have, Java had it since like for ever (2001?). What's the point of this article?

The point of this article is that major vendors/websites continue to place vulnerable code facing the web. And this is despite the fact that "most developers [are] aware of SQL injection / XSS" and many frameworks attempt to prevent your errors for you.

Because of this, Kaminsky will get rich off his startup if the program secures (in an automated fashion) what everyone else has tried and failed to secure.

Seems to me that this is just perl's taint mode, implemented in a less elegant fashion (one that relies on variable name prefixes, ugh).

From perldoc perlsec:
You may not use data derived from outside your program to affect
something else outside your program--at least, not by accident. All
command line arguments, environment variables, locale information (see
perllocale), results of certain system calls ("readdir()",
"readlink()" [snip - "and other stuff" ] and all file input are marked as "tainted".
Tainted data may not be used directly or indirectly in any command that
invokes a sub-shell, nor in any command that modifies files,
directories, or processes, with the following exceptions:
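A toy Python analogue of the taint idea quoted above (class and function names are made up for illustration): external input is wrapped and refuses to enter a query until it has been explicitly validated.

```python
# Toy taint tracking, loosely modeled on Perl's taint mode. External
# input starts Tainted; only a successful validation releases the value.

class Tainted:
    def __init__(self, value):
        self.value = value
    def untaint(self, validator):
        # Release the raw value only if the caller's check passes.
        if not validator(self.value):
            raise ValueError("validation failed: %r" % (self.value,))
        return self.value

def run_query(template, *args):
    # Refuse to build a query from anything still tainted.
    for a in args:
        if isinstance(a, Tainted):
            raise TypeError("tainted data in query")
    return template % args

user_input = Tainted("12345")
clean = user_input.untaint(str.isdigit)   # passes: all digits
result = run_query("SELECT * FROM t WHERE id = %s", clean)
```

Perl enforces this in the interpreter; the sketch only mimics the shape of the rule.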

You seem to wander off-point a lot, but the basic gist is that everyone should know how a computer works. Hell, *I* don't even know how a computer works, not really... I can spool off books on the technology, structure, electronics, bus interfaces, caching, logic, programming and the like, and still not understand why a missing semi-colon caused quite so much trouble. Or how they layer silicon on the chips. Or why probing a certain I/O port hangs the computer.

And the way to counter that is NOT to expect the average joe on the streets to understand deep-level programming and computing. That's pointless, because they will never get it, and what they do get will never be accurate (read the recent article on Knuth's algorithms only working as advertised on a theoretical machine).

It's the same in *ALL* sciences (and anyone that doesn't classify computer science and mathematical sciences as "science" doesn't even begin to understand science), and we can't teach everyone everything. There hasn't been a single person in the world who knew "all of known science" since the ancient Greeks and there hasn't been anyone who knows everything about their own particular area for centuries, most probably.

We already are completely reliant on computers or robots. If you don't think that, then you're crazy. The problem is that we *can't* rely on the programmers and system engineers that put them together. My computer is currently executing billions of logical operations perfectly and flawlessly every single second. It's timing itself to balance these instructions across two major silicon chips (and dozens of minor chips) that were the mainframe-designer's dream of only 10-15 years ago, without fault, on the order of picoseconds - while those chips are shutting themselves down, speeding themselves up and consuming mere watts of electricity. It's integrating with millions of disparate electronic systems and detecting quantum-level errors in itself and correcting them. If there's a problem, I would know about it almost instantaneously (with certain checks on RAM / filesystem use). This computer, and all the ones I work with, has been doing that for several years 24/7 without failure... even through blackouts, brownouts and power-faults. Hell, it's a perfect operating device, like the one that controls my airbag in my car, the ABS, my bank accounts, every control system on a modern aeroplane, the satellite that gives me television / radio, the Internet, etc. They are all operating virtually flawlessly even across BILLIONS of such devices every day, all day. In terms of engineering that's phenomenal. They do *exactly* as they are told, perfectly, for years on end. Hardware faults are so rare as to be a cause for widespread panic in the IT departments when they happen.

Trouble is, some pillock put Linux or Windows or MacOS or VxWorks on them, or confused feet and metres, or thought 2-digit years would always be enough. The fault with computers almost ALWAYS lies with the programmer, not the devices. Most of those problems are so damn subtle you could spend years analysing them and still not work out what happened. Hell, we've had computer chips "designed" by genetic algorithms which perform a specified task better, quicker and cheaper than any chip we've ever designed to do it - and although we know "how" it does it, we still don't understand exactly how it works or how to use that knowledge to our advantage (the anecdote I remember is one about a chip that could distinguish two different frequencies of electrical input - someone threw a GA at the problem and the chip design that resulted was smaller and lower-powered than any human design at the time to perform that task). We can understand the hardware, that's faultless (overall), but the software *always* lets us down, and no amount of intense study and education can stop that. Hell, it's almost impossible to write more than a few thousand lines of C (which could execute in less than a few hundred CPU cycles even on the slowest of embedded processors) without introducing a subtle bug.

Nothing fixes bad code. Nothing can. Now there are things you can do to prevent writing bad code, like scream when your code goes and screws up stuff. You can automate the things you might do wrong, use a garbage collector, use prepared statements, use a filter to check for input. And it's hard work, but that's why you get paid. Now management can help you too (my boss gives me w

We use PostgreSQL. We expose libpq, on a non-default port, directly to the internet through pgbouncer. What we did:

*) Modify pgbouncer to only accept extended-protocol (parameterized) queries
*) Auto-generate the list of allowed queries used by the app to store in a whitelist
*) Block everything except auth functions if unauthenticated, and anything outside the whitelist otherwise

We have had zero problems. Curious what you think.
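A sketch of what the whitelist step might look like, as a hypothetical stand-in for the pgbouncer modification described above (not their actual code): fingerprint every query shape the app is allowed to send and reject the rest. This works because parameterized query text is constant; only the bound values vary.

```python
import hashlib

# Hypothetical query whitelist: hash each allowed query shape once,
# then admit only queries whose text matches a known fingerprint.
ALLOWED = {
    hashlib.sha256(q.encode()).hexdigest()
    for q in [
        "SELECT name FROM users WHERE id = $1",
        "INSERT INTO logs (msg) VALUES ($1)",
    ]
}

def permitted(query):
    # Any textual deviation (including injected fragments) changes the
    # hash and is rejected before it reaches the database.
    return hashlib.sha256(query.encode()).hexdigest() in ALLOWED
```

The performance concern raised in the reply below is plausible but small in practice: one hash and one set lookup per query.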

Sounds like a great idea to me. Possible performance downside on a really busy system, of course.

In general there are a handful of queries that don't take parameters, so they would need to be whitelisted somehow, perhaps by table name. There are all sorts of other fun policies you could enforce, of course.

I don't see the point of this 'solution' - it's going to make things worse. The world is going to build a whole bunch of 'better' idiots that will get around the 'magic tool' in ways that will make your head spin. It's solving the wrong problem.

I also don't understand the infatuation with stored procedures. Programming in the database vendor's language is horrible compared to most client-side languages. You can concatenate all sorts of dynamic queries client side without opening yourselves to injection attacks.

Apparently all you have to do is include "Kaminsky" in the summary to get a Slashdot article to the front page. This post has zero real content and TFA uses the word "productize", for science's sake. Looks like Kaminsky has become the nearest thing to a rock star that the security industry has.. which is too bad, because he's sort of a douche bag in real life.

Jesus, what is with the grammar nazis on this?! Let it go already. True IT geeks write and comment code, they don't write fucking novels.