
alxtoth writes "Snowflake is a new BSD-licensed tool that parses SQL SELECT statements and generates a diagram. It shows parts of the underlying SQL directly in the diagram, for example: x=30, GROUP BY (year), SUM (sales), HAVING MIN (age) > 18. The primary reason for the tool was to avoid Cartesian joins and loops in hand-written SQL with many joined tables. The database will execute such a statement, if it is syntactically correct, resulting in runaway queries that can bring the database down. If you sit close to the DBAs, you can hear them screaming... "

No single query will ever bring a (real) RDBMS down. Even on a terabyte of data or more, doing a crazy multi-hundred-table cross join, you're not going to bring it down.

A real ACID-compliant database, no. MySQL, maybe.

Now, it could seriously slow down a production server, but... you're not pushing untested SQL on a production server now, are you? Right? Riiiiiiiiiiight?

Unfortunately sometimes you do need to run new queries against production servers. Of course, with a real database like MSSQL or Oracle, you can see how a query will execute, what path the optimizer will follow, and what the cost of the query will be.
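Plan inspection of this kind is easy to demo even without a big commercial engine. A minimal sketch using SQLite as a stand-in (not the MSSQL/Oracle tooling the comment refers to; the schema is invented): EXPLAIN QUERY PLAN asks the planner what it would do without actually running the query.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL);
    CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
""")

# Ask the planner how it would run the query, without executing it.
plan = conn.execute("""
    EXPLAIN QUERY PLAN
    SELECT c.name, SUM(o.total)
    FROM orders o JOIN customers c ON c.id = o.customer_id
    GROUP BY c.name
""").fetchall()

# Each row's last column is a human-readable step (SCAN, SEARCH, ...).
for row in plan:
    print(row[-1])
```

On the big engines the equivalent is EXPLAIN PLAN (Oracle) or the estimated execution plan (MSSQL), which also show cost estimates.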

Is it possible to bring Oracle down? I would think so; it would just take a lot (note: assuming normal hardware, not a large high-powered cluster). Is it possible to take MySQL down? Easily. It can be surprisingly easy to lock the server completely. Even when you select from one set of tables (A) and insert into another set (B, possibly in a different schema/DB), it is possible to have things locked. It's very easy. We haven't seen a crashing bug in MySQL in a while (fun fact: a query that formatted dates with the date-format function could reliably crash MySQL 4.0 or 4.1, I don't remember which).

Does EXPLAIN help? Not reliably. On Oracle it may help. In Postgres it seems to help. I have no experience with MSSQL. In MySQL you have to watch out: while it can be useful, it is very limited.

Its row counts can be horribly misleading. It can list 1.2 million rows when in fact it takes a fraction of a second to get the data, because it's all in an index in memory.

Worse: it will run the query for you. Under some circumstances (using a subquery can do it, using more than one level of subquery is almost guaranteed to do it) it will just run the inner query and then use that to produce results. This means that describe/explain can lock the database and take hours to return (if you had a query that was bad enough and didn't kill the describe/explain). It's all the fun of running the real query, without the results actually presented to you.

Note: we're using 5.0 (since 5.1 isn't production-ready yet). Some of this may be fixed.

Is it possible to bring Oracle down? I would think so, it would just take a lot (note: assuming normal hardware, not a large high-power cluster).

Oh yes, Oracle can be brought to a grinding halt (even on substantial hardware) by a big nasty query. It may not be crashed, but it's nonresponsive. Especially annoying when there is no need for the Cartesian product; Oracle's pessimizer just chose to do one when something else was MUCH more appropriate. Alas, this tool would not catch that situation (but EXPLAIN PLAN does).

You can use Database Resource Management to control maximum execution time for queries in Oracle; it's designed to protect against exactly such scenarios.

However, the "pessimizer", as you call it, is not a magic wand - it's just a set of decent heuristics for selecting the optimal path for data retrieval. It's pretty complex and has taken decades to reach this level. It's damn good if you understand it and configure your instance correctly. Where it doesn't work, you have always had other options.

The question is: is there a semantically equivalent query that DOESN'T overload the system?

Yes, in the particular cases I ran into; I was able to reformulate the queries to avoid the problem. They were SELECT DISTINCT on a big hairy set of tables; by making that query into a subquery with SELECT ALL, and then doing a SELECT DISTINCT on the subquery, the problem was resolved.
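The rewrite described above (plain SELECT ALL inside, DISTINCT on the outside) is easy to check for equivalence on a toy example. A sketch in Python with SQLite (the commenter's case was Oracle; the table and column names here are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE sales (region TEXT, product TEXT);
    INSERT INTO sales VALUES ('east','a'), ('east','a'), ('west','b'), ('west','b');
""")

# Original form: DISTINCT applied directly to the filtered set.
direct = conn.execute(
    "SELECT DISTINCT region FROM sales WHERE product IN ('a','b')"
).fetchall()

# Rewritten form: run the plain (ALL) query as a subquery, then
# deduplicate the intermediate result in an outer SELECT DISTINCT.
rewritten = conn.execute("""
    SELECT DISTINCT region FROM (
        SELECT ALL region FROM sales WHERE product IN ('a','b')
    )
""").fetchall()

print(sorted(direct) == sorted(rewritten))
```

The two forms are semantically equivalent; whether the rewrite helps is purely a question of which plan the optimizer picks for each shape.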

The statement about there being 'no need for a Cartesian product' reminds me of a guy who said inner joins should never be used because they can bring a system down - because he once wrote a poor query with one, and it did just that.

Just because a person doesn't know when to use something properly, or uses it improperly from time to time, doesn't mean it has no use.

I've used Cartesian joins before. Not very often, but I do recall using them in the past for very specific requirements. If memory serves me, it was

Ha! I remember I had MySQL doing a full-text search over about a million documents in a web app. If someone searched for something generic like "problem", it would take like 10 minutes to return. Then if someone else did a search in the middle of that, the database would cease functioning for that table, and all new queries that hit that table would simply sit there for very large values of x.

I had to write a system so that before I ran the full-text search query, I looked at the currently running queries, and if any of them had been running longer than 15 seconds, I killed them before running the new query. It actually worked well enough, and gave me time to move the system to Lucene. ;)
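The watchdog logic is simple enough to sketch. This is a hedged, offline version: the rows below are stand-ins for what MySQL's SHOW FULL PROCESSLIST returns, and a real implementation would fetch them over a connection and issue KILL <id> for each offender; only the selection logic is shown here.

```python
MAX_SECONDS = 15

def queries_to_kill(processlist, max_seconds=MAX_SECONDS):
    """Return the connection ids of queries running longer than the cap."""
    return [
        pid
        for pid, command, seconds, query in processlist
        if command == "Query" and seconds > max_seconds
    ]

# Simulated snapshot: one stuck full-text search, one healthy query,
# one idle connection (which must be left alone).
snapshot = [
    (101, "Query", 240, "SELECT ... MATCH(body) AGAINST ('problem')"),
    (102, "Query", 2,   "SELECT id FROM articles WHERE id = 7"),
    (103, "Sleep", 900, ""),
]

print(queries_to_kill(snapshot))  # [101]
```

Killing other users' queries this way is blunt, of course; the in-flight query gets a "server shutting down"-style error, which is why the error handling described below matters.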

SQL Server is pretty robust... you can spike the processor to 100% with stupid queries, but they usually finish unless there is a problem with the index. SQL Server is notorious for corrupting its indexes (or maybe it's just our customers?) and then having the weirdest problems on those tables.

PostgreSQL I've never tested with huge queries, so I'm not sure about it. Oracle... we use it for some backend stuff at work, and frankly I think a monkey with a pen and paper could be faster. But I don't control that system, so maybe it's just set up badly, or the web app I do have access to just gives it *really* bad queries that take 10 minutes to come back (if at all - stupid 25-connection limit).

Oracle... we use it for some backend stuff at work, and frankly I think a monkey with a pen and paper could be faster.

Someone's done something wrong (probably stupidly wrong), as I've used Oracle on a system running stupidly complex queries against a table containing ~60 million rows, and while some queries can take a while, most are surprisingly fast (a second or two at most). Yes, it's been designed and optimised to be fast - but that's what I meant about someone at your place having done something wrong...

I'm far from a SQL guru, but isn't that sort of situation the place to store user search results (vBulletin-esque) and query for the stored search before hitting the live data?

No, because articles were updated constantly (a few hundred a day), so just displaying previously cached results would have been bad for our purposes (and there were usually only a few searches a minute, so it wasn't scaled to the point where caching was absolutely necessary). Accuracy was more important than speed in our situation, and the small size of the user base (about 50 people) using the web app allowed us to search the live data effectively.

I looked into the query timeout thing... as far as I could tell MySQL does not have such a capability.

I actually did do the nice error message with the kills, though - it returns an error saying the server is shutting down, so if that happened I just had the code tell the user that their query took too long and to search for something more specific.

So, you don't put an untested query on a production server. Great. What happens when someone changes data in such a way that your query now explodes?:D

In the last case I had to deal with, one boneheaded programmer had his code set to send him an email if it couldn't find a good match in the DB. Someone changed the data, and with the amount of traffic, his code, spread across our web-serving farm, had injected almost a million messages into the email queues. Programmers are awesome.

Not from where I'm standing. The query itself will be slow if it affects a lot of rows (if I use exactly the query you're giving, and X is indexed, it will be near instant, no matter how many rows I have =P), but the server will purr along just fine with other queries (assuming it's configured to use snapshot mode, else it could lock out other queries... but who doesn't use snapshot mode since it's been available in SQL Server anyway? Especially after people have been bitching about it from the day it was av

Hmm... but what if X is a clustered index (or some other index type which physically orders the data on the physical storage)? If you're updating the clustered index, you could wildly, inadvertently crush your server with disk I/O, as data has to get reordered on the drive...

Not the biggest fan of clustered indexes in SQL Server/Sybase... Oh well, I suppose in one of the next few releases of SQL Server, MS will figure out how Oracle does some more of their low-level black magic, and they will q

And Oracle tracks the clustering factor of every index. If you really want data to be clustered around an index in Oracle, you create the table with the clause ORGANIZATION INDEX, and the table is physically created as a B-tree index structure.

Hm-m-m-m. I do this on a daily basis with one of our instances. We use SQL Server as a data-preparation engine for a predictive modeling operation. This typically involves mass updates of the type you're specifying. I'm working on a set of data right now, with table sizes up to 45M rows. Some of the longer updates will run for 3 to 5 minutes, but that's hardly locking up the box. If you construct the query poorly, it will run for a couple of hours, but I don't call that bringing the database down, it'

AHAHAHAHAHAHAHAHAHA. Since I do run terabyte-sized databases, I'll contradict you - poor queries _can_ tank a server, even with small tables, if the query is poor enough. While it technically may be running, if nobody else can access it, then for practical purposes the server is down. And never underestimate the ability of one user with enough knowledge to be dangerous, to spread that selfsame query across as many people as possible.

Maybe my poor-query-writing skills are bad :) Because I've seriously -tried- before... cross joins on 100+ tables, all of which contained several dozen gigs of data, totaling multiple terabytes... the scheduling was good enough to give the query very low priority, leaving the server OK.

If you use (in SQL Server at least) the default settings, that will basically render your database useless... but if you use the newer locking strategies from 2005 (which had been available in Oracle for ages), the tables

Since I do run terabyte-sized databases, I'll contradict you - poor queries _can_ tank a server, even with small tables, if the query is poor enough. While it technically may be running, if nobody else can access it, then for practical purposes the server is down.

This sounds like the server is doing a potentially unbounded amount of I/O or processing with a lock held. Otherwise the other queries should still run, just slightly slower due to the increased load on the server. A query, no matter how poor, shouldn't b

Err, there are ways of configuring a decent RDBMS to have this kind of behaviour, albeit typically on a per-user basis - just like you can limit the number of parallel queries a particular user can execute.

SQL code is usually developed on some small server or the DBA's own workstation. The dev database is representative of the prod version only in structure, not in size. So these types of errors sometimes go unnoticed until the code is migrated to the prod environment. The effect of such errors varies depending on server architecture. The most sensitive are HA cluster environments, where the clustering engine overreacts and starts failing things over, exacerbating the problem.

This type of error would go unnoticed? Err? If the query returns -the wrong data- (as a cross join by mistake would), and it goes unnoticed, something worse than your database crashing on you is waiting.

The point is, buggy SQL code running on a powerful production server with a huge database may manifest itself in a completely different way, as opposed to when running on a slow dev box in a minimal db environment. With growing volumes of data and shrinking IT budgets sometimes even the largest companies find themselves with inadequate testing and staging hardware.

This is why there are "reference" systems - typically the same size and spec (or within the same ballpark) that contain the same data as the live system (or again, some representative sample of it) which are used for staging / integration / pre-release testing.

Technically, Staging is identical to the production environment, with the same amount of data. Testing has less powerful hardware, but often just as much data to deal with.

BTW, with our more-or-less 100 GB database, we run tests on WAY slower machines than the actual DB server. We do not have staging or testing, just one environment where we mess around. Development is pretty much the same, but with smaller DBs...

It's not that you bring it "down", you just slow it down to the point of a noticeable performance hit to everyone else. I've seen queries written by people just smart enough to be very dangerous, where the query consumes 100% of resources for an hour (when if it was written correctly, the query would take 2 seconds). That's as good as "down" in some cases.

Well, a modern RDBMS is practically an operating system. This means that the way you bring it down doesn't involve the kinds of things a tool like this tells you. You probably need to do something procedural, involving a mix of tasks the RDBMS can't handle efficiently.

Of course, it is always possible to tune things in a disastrous way. Oracle is an RDBMS that is highly tunable, which means that a lot of people make really bad choices, for example tuning things in a way that requires greater memory witho

Now, it could seriously slow down a production server, but... you're not pushing untested SQL on a production server now, are you? Right? Riiiiiiiiiiight?

Not as part of a system, no, but if I just need answers, I run untested queries on production servers all the time.

1. Just a massive slowdown - log in (SQL Server has a Dedicated Administrator Connection; I don't think I've had problems connecting to a problem Oracle DB as long as I can get on the OS, partly because sessions are processes) and just kill the process. The DB should clean everything up (as long as it's not a toy DB; I'm looking at you, MySQL).

2. A crash - then you have to go through a whole number of steps to bring it up and then

A link to an alpha project on SourceForge that was created three days ago and doesn't even have its own website? One that apparently outputs LaTeX tables instead of something readable without having to compile it first, like HTML, SVG, or even indented text? I know it's silly to expect every story to be about a cure for cancer, but come on...

Word to the wise: if you're going to actually start advertising a project, please make sure you have some binaries built for some common relevant platforms, and make sure you have some decent information online even if it's just an ugly page with screenshots or examples of what it does.

In this case, we're talking about some scripts written in Python. At least let people know this on the front page, and list the project dependencies! i.e., Graphviz, or whatever.

This way, your potential users won't immediately discard it due to a lack of compelling information, and your potential (future) developers can see how far you've got and maybe get inspiration to chip in and help!

That said, this sounds like it should be a great tool for beginner or intermediate SQL users, and I look forward to throwing a few of our mammoth 12-table-join queries at it for much fun.

Most PostgreSQL users don't seem to use the existing, and superior, tools like EXPLAIN, EXPLAIN ANALYZE, PgAdmin-III's graphical explain, etc. I'm sure the same is true for users of many other databases.

It's not like these tools are particularly difficult to use or understand. No training is required, though being willing to think and read a little documentation helps if you want to get the most out of them. Understanding at least vaguely how databases execute queries is handy for any database user anyway.

Ditto. I downloaded it to take a look and see how good it was at parsing T-SQL, since we have a few saved T-SQL queries with WHILE loops in them. I gave up after seeing it's... nothing. Just a Python script. It requires Graphviz, Python, and Pyparsing (even though it comes with pyparsing!? WTF!), and even more damning is that you can't use it for ad-hoc queries; the query has to be saved into a file first.

Someone slap a GUI on this that lets you paste in a query, and bundle all the requirements along with t

You can do what I've done and seen done a number of times, and write a hunk of middleware that parses SQL statements for runaways and sends back a warning to the user. That, and not using medium- and low-duty databases like MSSQL and MySQL, can go a very long way toward keeping users happy.

That, and not using medium- and low-duty databases like MSSQL and MySQL, can go a very long way toward keeping users happy.

Honestly, to describe MSSQL as "medium and low duty" is pretty rich. You'd best believe I'm happy to bash MS as much as the next guy, but SQL Server is a high-performing, highly maintainable, high-availability database and doesn't deserve to be mentioned in the same sentence as MySQL.

Hell, MSSQL might actually be the only truly good product MS makes -- in fact, it probably is. It's not a toy, and people who assume it is, just because it comes from MS (I'm not saying this is what you're doing, but people DO do

I am not a DBA, but one of the more useful differences is that you can perform block inserts against each of the RAC nodes independently. That means that you can perform your big table loading scripts in parallel instead of running them against one machine and mirroring it.

Oh, and if you want to enforce query timeouts, that is supported in the user profile via CPU_PER_CALL (non-conforming queries are terminated and resources released)

I must admit I don't know that much about RAC -- but it doesn't really

It doesn't name WHICH RDBMS, and then you throw SQL at it? So what? For DB2 we have a thing called "Visual Explain" which NOT ONLY does this, but is free, provided by IBM, and also shows you other things, like which index is being used for each step, etc.

Phlebas the Phoenician, a fortnight dead,
Forgot the cry of gulls, and the deep sea swell
And the profit and loss. A current under sea
Picked his bones in whispers. As he rose and fell
He passed the stages of his age and youth
Entering the whirlpool.

posted by the admin of the project? the spam tag is accurate...
"yes, this is an open source clearing house; no, we will not all rapidly sign up to your cute little project."
though, i would be willing to bet this is a Masters Thesis project and alxtoth is hoping to get some fast-tracking going on...

all your apps should only be able to access the DB as unprivileged users with resource limits to prevent crashing, and they should only be able to run stored functions which someone qualified at SQL creates for the application guys.

this way the programmers are prevented from infecting the database with their crapness

For things like reports, your developers have to write complex SQL. You can argue that it shouldn't be a developer - that it should be a "development DBA" or whatever - but essentially whoever writes the SQL IS the developer for reports. Even experienced DBAs can leave out a join in a complex (10-or-more-table) query, and it often isn't found if it's only run against a development and/or QA database with limited data and no real load. Cartesian products should be found if anyone actually reads th

You can tout that as "the right way", but there's still no reason this has to be a technical-design issue rather than a process-design issue -- and while my background is as an OSS groupie, I've been the OSS groupie at enough proprietary shops (i.e., the party responsible for dealing with upstream on projects used as underlying infrastructure for actually running the proprietary software we built) that I can say with a fair bit of confidence that the approach you're espousing just isn't all that popular in Th

Sure, things that actually use the database for production shouldn't be trying to do dirty things with it, but developers, whether dedicated "DBAs" or the poor shop with only one tech guy of any kind, need to be able to "play" with the database to be able to tweak it and... well... do anything of meaning other than retrieve data. Sometimes this can be dangerous, but this is why they are testing on a development server... right?

Generally what happens on my project is that the team (headed by an analyst) decides on the best design for the task, then subtasks are delegated to developers based on their level of skill with PL/SQL and/or Java.
Business logic (for the most part) is done on the server-side with PL/SQL packages, while the application itself is a Java fat client running on a Citrix cluster.
Before you make statements about keeping business logic separate from the database, this situation works well for this application, as

And these types of edicts from on high tend to really bite you in the behind over time. You wind up with hundreds upon hundreds of stored procedures, and nobody knows which ones are even in use any longer. One project will wind up requesting a change that affects another project, and it basically excludes any O/R mapping tools. It's just one huge mess.

Your best bet is to insist that your developers are just a little clued in. How hard is it to say, "As long as your queries always have an indexed field

I don't see what this has over EXPLAIN [postgresql.org] and an appropriate graphical display tool like PgAdmin-III [pgadmin.org]. There are large numbers of tools that display graphical query plans [postgresonline.com] - and unlike this simple SQL parser, they know how the database will actually execute the query once the query optimiser [wikipedia.org] is done with it.

Furthermore, a simple SQL parser has no idea about what indexes [wikipedia.org] are present, available working memory for sorts and joins, etc. It can't know how the DB will really execute the query, without which it's hard to tell what performance issues may or may not arise.

See comment 24461217 [slashdot.org] for a more detailed explanation of why this whole idea makes very little sense.

> Any tool that only looks at the SELECT statement, without knowing about the indices or what the optimizer is doing, is nearly useless.

This is wrong. The indices or optimizer have very little to do with the SELECT; by the time the SELECT clause is processed, most database engines are already done with the FROM, JOIN, WHERE, GROUP BY and HAVING clauses. At this point there is little to gain from adding/dropping indices in the query plan, unless the platform supports included fields. As such, SELECT is like

> I think you might've missed the point.
> The term SELECT statement generally refers to the whole statement, including FROM, WHERE, HAVING, etc. clauses.

I did not miss the point:

>> For long queries with complex joins (like recursion), a diagram tool for SELECT can be very helpful

I simply disagree that the product is useless without knowing about the indices. Before one starts reviewing query plans and figuring out which index is important, one must make sure the query makes sense. Logical be

Yes, it could be useful for examining the output of the query in non-performance terms. For complex queries I can easily see how that could be useful.
That may, in fact, be the whole idea behind the tool - to help reduce or eliminate execution of grossly incorrect queries that don't do what the user wants.
Tools like EXPLAIN aren't as useful for that, either, as the query looks quite different after the query optimiser is done with it. Additionally EXPLAIN output usually drops detail about specific fields

> The problem is that SQL lets you put statements inside of statements. Putting those in the wrong places can be devastating for performance. The rule is pretty easy: you can put SELECT statements in the SELECT clause or WHERE clause areas, but be prepared for it to take a while to finish. Put them in the FROM clause where they belong.

The SQL optimizer will actually do that for you. It can replace a correlated subquery with a JOIN if that appears to be more optimal.
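The equivalence the optimizer exploits can be seen directly: a correlated subquery and its GROUP BY join rewrite return the same rows. A small sketch with SQLite (schema invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE dept (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE emp  (id INTEGER PRIMARY KEY, dept_id INTEGER);
    INSERT INTO dept VALUES (1, 'eng'), (2, 'ops');
    INSERT INTO emp  VALUES (1, 1), (2, 1), (3, 2);
""")

# Correlated form: the subquery conceptually re-runs per dept row.
correlated = conn.execute("""
    SELECT d.name, (SELECT COUNT(*) FROM emp e WHERE e.dept_id = d.id)
    FROM dept d ORDER BY d.id
""").fetchall()

# Join form: one grouped pass - the shape an optimizer may rewrite to.
joined = conn.execute("""
    SELECT d.name, COUNT(e.id)
    FROM dept d LEFT JOIN emp e ON e.dept_id = d.id
    GROUP BY d.id, d.name ORDER BY d.id
""").fetchall()

print(correlated == joined, correlated)
```

Whether a given engine actually performs this rewrite depends on its optimizer; when it doesn't, writing the join form by hand is the usual workaround.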

Another comment here revealed part of why someone might think a tool like this was useful:

In MySQL, EXPLAIN apparently works more like PostgreSQL's EXPLAIN ANALYZE (and related features in other RDBMSs). MySQL's EXPLAIN actually executes the query rather than just running it through the query planner. The documentation [mysql.com] even warns that data modification is possible with EXPLAIN in some circumstances.

If your database gives you no way to ask the query planner what it will do without actually executing the quer

So SQL Server has had a graphical execution-plan view forever, and it's better than this lameness. But of course it's not free, and we all know that free software is better, even when it sucks. Seriously, compare this to the real tools included with a serious RDBMS, and I have to question why this was even posted. It's almost farcical.

No single query will ever bring a (real) RDBMS down. Even on a terabyte of data or more, doing a crazy multi-hundred-table cross join, you're not going to bring it down.

You've obviously not tried anything simple on MS-SQL, like expanding a varchar(4) column to nvarchar(10) on a table with a few million rows. MS-SQL spins its wheels filling up the transaction log until it overflows, then rolls it all back again. A 4 GB log file, filled by a 250 MB table (and no indexes, because they were already dropped)?

In the end we had to drop all the FK refs, SELECT * INTO another table, drop the original table, then SELECT * (with conversions) into a new table with the original's name, and reset all the FKs. *shakes head*
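That drop-and-rebuild dance is a general pattern. Here is a minimal sketch of the same idea in SQLite (SQLite's typing is loose, so the varchar widening is notional, and this toy version has no FKs to drop and recreate):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE codes (id INTEGER PRIMARY KEY, code TEXT);  -- imagine varchar(4)
    INSERT INTO codes VALUES (1, 'ABCD'), (2, 'WXYZ');
""")

# Rebuild instead of ALTER: copy into a new table with the wider
# definition, drop the original, rename the copy into place.
conn.executescript("""
    CREATE TABLE codes_new (id INTEGER PRIMARY KEY, code TEXT);  -- "nvarchar(10)"
    INSERT INTO codes_new SELECT id, code FROM codes;
    DROP TABLE codes;
    ALTER TABLE codes_new RENAME TO codes;
""")

print(conn.execute("SELECT * FROM codes ORDER BY id").fetchall())
```

The win over an in-place ALTER, on engines where that matters, is that the copy is a bulk operation instead of millions of logged row updates.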

...and that is why you should switch to DB2. What the heck are you doing with MS-SQL Server? Don't you know it's for developers and kids? Trying to use SQL Server in production is like trying to cut your toenails with a straight razor: you may end up cutting your toenails eventually, but you are likely to bleed to death before that.

myspace.com has been supplanted by Facebook. And DB2 is the granddaddy that is trusted by ALL banks. Give me a bank which does not store its data on DB2, and I will concede this. And banks are the most thorough corporate IT customers.

Facebook being more popular than MySpace has nothing to do with the database back-end. If you need more big customers for SQL Server 2005, they are easy to find: Barnes & Noble, the HMV online music store, NASDAQ (over 5,000 transactions/sec).

So basically, your statement that SQL Server is a toy database might have attracted a few claps 6 or 7 years ago on Slashdot, but the reality is that SQL Server is a robust product finding its way into many markets. As one c

How about HSBC, State Street & Barclays? They all live on IBM. I lived with many large banks for over a decade, especially HSBC. IBM is something they swear by - not just because it's well known, but because it is so good and thorough. For mid-level banks, your figures are correct: Oracle and then SQL Server. I worked for a mid-level bank in CT which migrated from mainframe to a services-based architecture and uses Oracle as a back-end. Plus, security of data and privacy lawsuits terrify large banks more. I had to go throug

MS SQL Server is good and relatively cheap. The type of problem the grandparent mentioned exists in ALL DBMSes from ALL vendors. But there's nothing that DB2 can do that MS SQL can't, and MS SQL has great data-flow tools that come along with it to make actual use of the data.

Executing a SQL statement can require the RDBMS to perform nested loops over parts of the query.

This can be an issue if the DBMS is forced to do something like perform a sequential scan of one table for each record matched in another table. That gets expensive *fast*.
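This effect is visible even in a small engine. A sketch with SQLite: with automatic indexing disabled, the inner table of the join is fully re-scanned for every outer row (SCAN); adding an index turns each inner visit into a lookup (SEARCH):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA automatic_index = OFF")  # force the naive nested-loop plan
conn.executescript("""
    CREATE TABLE a (x INTEGER);
    CREATE TABLE b (x INTEGER);
""")

def inner_detail(sql):
    # The last plan row describes how the inner relation is visited.
    return conn.execute("EXPLAIN QUERY PLAN " + sql).fetchall()[-1][-1]

join = "SELECT * FROM a JOIN b ON a.x = b.x"

before = inner_detail(join)                 # full scan of b per row of a
conn.execute("CREATE INDEX b_x ON b(x)")
after = inner_detail(join)                  # index lookup per row of a

print(before)
print(after)
```

The exact wording of the plan rows varies between SQLite versions, but the SCAN-to-SEARCH change is the point: the per-outer-row cost drops from O(rows of b) to roughly O(log rows of b).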

There are many other possible performance issues, of course.

However, I don't see how SQL parsing can tell you much about the performance characteristics of the query. The database's query optimiser makes choices about how to execute the query, and is free to change its mind depending on configuration parameters, available resources, system load, disk bandwidth, present indexes, statistics gathered about data in the table, etc. PostgreSQL's planner for example does make heavy use of table statistics, so query plans may change depending on the quantity and distribution of data in a table.

Any decent database can already tell you how it will execute a query (and usually give you a performance readout from an actual execution of the query). There are plenty of GUI tools for displaying the resulting query plan output graphically. PgAdmin-III can do it, for example.

A simple SQL parser can have no idea about what indexes are configured, the distribution of the data, how much working memory the database has available for sorts and joins, etc. The database knows these things - and can already tell you how it will, or did, execute a query - so why not let it do its job?

I use diagrams as a tuning tool, but only to look for paths that don't make sense, alternate paths through tables, or "dead-ends"... but these are things that a computer can't really tell you, because they require an understanding of the data.

But you're right: the explain plan is the single most useful tool for tuning a query. If you understand how the engine is going to execute the query, you know what areas you can affect. And tuning is manipulating those effects in a way that makes the query faster.

Oh, and considering the default join in virtually any SQL database is a nested-loop join, I'd say all databases loop by default. And a statement as innocuous as:
select * from a, b, c;
can absolutely crater CPU and I/O performance. If each table has 1,000 rows and there's not enough memory, that's 1,001,001 table scans (1 + 1,000 + 1,000,000: one pass over the next inner table for every row produced so far). Hope your disk is fast.
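The multiplicative blow-up is the real problem, and it is easy to verify at small scale. A sketch with SQLite using three 10-row tables (three 1,000-row tables would already yield a billion-row result):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
for t in ("a", "b", "c"):
    conn.execute(f"CREATE TABLE {t} (n INTEGER)")
    conn.executemany(f"INSERT INTO {t} VALUES (?)", [(i,) for i in range(10)])

# Comma-joining without any join predicate is a cross join: the result
# cardinality is the product of the table sizes, 10 * 10 * 10 here.
count = conn.execute("SELECT COUNT(*) FROM a, b, c").fetchone()[0]
print(count)  # 1000
```

This is exactly the failure mode the submitted tool is meant to catch: a missing join condition whose result set grows as the product of the inputs.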

I tested this on SQL Server 2005 Express and either I am doing something wrong or your statement is false.

Besides, for legacy applications that were coded like shit, there's already a bunch of tools that will scan them (either the application, black-box style, though mostly for the web, or the code, white-box style) to find SQL injection vulnerabilities.

Still sad that even to this day, if you go to your favorite programming language XYZ forum, half of the newbies build queries with concatenated strings, because a large number of tutorials on the net do it that way...
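For anyone following along, the difference between concatenation and placeholders is a two-line demo. A sketch with Python's sqlite3 (table and values invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users (name TEXT, secret TEXT);
    INSERT INTO users VALUES ('alice', 'a1'), ('bob', 'b2');
""")

attack = "' OR '1'='1"

# String concatenation: the attacker's quote closes the literal and the
# injected OR makes the predicate true for every row.
leaked = conn.execute(
    "SELECT secret FROM users WHERE name = '" + attack + "'"
).fetchall()

# Placeholder: the driver passes the value out of band, so the same
# input is just an odd-looking name that matches nothing.
safe = conn.execute(
    "SELECT secret FROM users WHERE name = ?", (attack,)
).fetchall()

print(len(leaked), len(safe))  # 2 0
```

Every mainstream driver has an equivalent placeholder mechanism; the tutorials that concatenate are teaching the vulnerability directly.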