Posted by Unknown Lamer on Monday September 10, 2012 @07:07PM
from the rev-your-engines dept.

The PostgreSQL project announced the release of PostgreSQL 9.2 today. The headliner: "With the addition of linear scalability to 64 cores, index-only scans and reductions in CPU power consumption, PostgreSQL 9.2 has significantly improved scalability and developer flexibility for the most demanding workloads. ... Up to 350,000 read queries per second (more than 4X faster) ... Index-only scans for data warehousing queries (2–20X faster) ... Up to 14,000 data writes per second (5X faster)" Additionally, there's now a JSON type (including the ability to retrieve row results in JSON directly from the database) à la the XML type (although lacking a broad set of utility functions). Minor, but probably a welcome relief to those who need them: 9.2 adds range types. For the gory details, see the what's new page, or the full release notes.

Generally there is very little in the way of logical data manipulation capabilities in which Oracle exceeds PostgreSQL (usually the opposite, actually). The main advantage Oracle has is at the extreme high end of scalability and replication, and that benefit is offset by massive complexity in setup and configuration. Even there, PostgreSQL is closing fast these days, with built-in streaming replication, table partitioning, and all sorts of high-end goodies.

I do all sorts of PostgreSQL consulting, and you would be surprised at the number of large companies and government organizations considering migration from Oracle to PostgreSQL.

And if you *really* need PostgreSQL to go into high gear, just pay for the commercial Postgres Plus Advanced Server from EnterpriseDB and you will get a few heavy-duty add-ons, including an Oracle compatibility layer.

Also, IMHO one of the really cool things about PostgreSQL is the number of very geeky tools it puts at your disposal, such as a rich library of datatypes and additional features, along with the ability to create your own user-defined datatypes.

and you would be surprised at the number of large companies and government organizations considering migration from Oracle to PostgreSQL.

Not really.

I've had no experience with the database end of things, but I've been on the receiving end of some other Oracle "products" at two places I've been. Once you've been Oracled, there is a strong incentive never to go anywhere near them again, no matter how they look on paper.

When it comes to utter disdain and hatred for their customers, Oracle makes Sony look like rank amateurs.

As far as Oracle are concerned, the customer is a fool whose sole purpose is to be screwed over for as much cash as possible.

If you read the bug you linked to, you'll see that it was a regression that wasn't caught by the original test; i.e., it probably wasn't a regression after all, simply that the first fix wasn't a fix for all possible cases. And it's not like there have never been any bugs or regression bugs in other DBs...

Exactly why does it matter that MySQL got ACID first with the inclusion of InnoDB? It would only matter if you pretend that MySQL version 1.0 is the only version you can run or something...
And yeah, comparisons on the Internet are of course always true :-), well, it might be that PostgreSQL performs better etc. It just happens that for my use case it doesn't; we provide stock market data to SQL servers among other things, and the only customers we have who never experience performance or stability problems are the ones running MySQL.

That is left to our end customers to do; we simply supply the application that feeds a (any) SQL server with our data feed, as well as the data feed itself; which SQL server to use and how to tune it is up to them. However, we use MySQL internally, not tuned at all (we are not DB savvy), and it performs very well; we have never had any problems with it, and it receives our full data flow. Our customers with MSSQL servers, however, experience lots of problems, and they receive only a subset of the data flow.

Now, I have not tested 9.2, but when we developed the application, and also during QA, we ran it against all our supported SQL servers, and MySQL always had better insert/update performance (we are not SELECT-intensive) than either MSSQL or PostgreSQL (which may be too old a version to be a reliable data point, since we use the version bundled with CentOS). But we have not performed any tuning whatsoever, of course, since the purpose is to QA the application, not the database.

So what you are saying is that PostgreSQL _must_ be better because you like it... Or could it be that you had not successfully tuned your MySQL installation when you tested it, etc. etc. This can go on forever. You are completely free to think that PostgreSQL is the best thing out there; it just doesn't mean that MySQL is garbage or second-rate, as many people claim - many people who have never even used MySQL, or only with MyISAM tables.

Performance comparisons are very hard to do, since every task out there has different requirements. Combine that with the infinite ways one can configure either DB and the underlying OS, and you will always fail in the eyes of someone. It's also quite common for people to choose either MySQL or PostgreSQL and stick with it, so the experience needed to properly set the other one up for the test at hand is often lacking.

Interesting that you find PostgreSQL easier to set up; I have the exact opposite experience.

Actually, I never did; I only wrote that for my use case I didn't find PostgreSQL to perform better than MySQL. And no, I'm not unwilling to substantiate: I told you how I came to my conclusion, and since it wasn't done in order to benchmark, I never collected any stats. I'm completely uninterested in which DB is fastest; I simply replied with my anecdotal experience, what I see when we perform QA on the various databases, and the experience I have with customers' different setups.

Yes, seriously. We use MySQL extensively with terabyte-sized databases taking tens of thousands of writes per second, and we have never experienced any performance or stability problems whatsoever. Our customers who use MSSQL, however, experience lots of trouble.

Very nice of the PostgreSQL developer there to run MySQL in non-strict mode and then play confused when it does exactly what he tells it to do... Perhaps it's unknown to many people, but with MySQL you can configure the database engine for different levels of SQL correctness; the default is usually very lenient, since that is what the typical PHP user would expect (and people used to the older MyISAM storage engine). But for those who want strict SQL correctness like PostgreSQL's, it is possible to configure it.
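For reference, the knob in question is sql_mode; a quick sketch (mode names per stock MySQL, with TRADITIONAL being the usual shorthand for the strict bundle):

```sql
-- Make MySQL reject invalid data instead of silently coercing it:
SET GLOBAL sql_mode = 'TRADITIONAL';

-- Or pick the strict pieces individually, per session:
SET SESSION sql_mode = 'STRICT_ALL_TABLES,NO_ZERO_DATE,ONLY_FULL_GROUP_BY';
```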

There is a very good reason we OS vendors do not ship with SysV default limits high enough to run a serious PostgreSQL database. Very little software uses SysV in any serious way other than PostgreSQL, and there is a fixed overhead to increasing those limits: you end up wasting RAM for all the users who do not need the limits to be that high. That said, you are late to the party here; vendors have finally decided that the fixed overheads are low enough relative to modern RAM sizes that the defaults can be raised quite high. DragonFly BSD has shipped with greatly increased limits for a year or so, and I believe FreeBSD has as well.
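For anyone tuning this by hand, the knobs look roughly like this (values illustrative, not recommendations; exact spellings differ per OS):

```
# /etc/sysctl.conf fragment for a dedicated PostgreSQL box
# FreeBSD / DragonFly spelling:
kern.ipc.shmmax=4294967296     # largest single SysV segment, bytes
kern.ipc.shmall=1048576        # total SysV shm, in pages
# Linux spelling:
kernel.shmmax=4294967296
kernel.shmall=1048576
```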

There is a serious problem with this patch on BSD kernels. All of the BSD sysv implementations have a shm_use_phys optimization which forces the kernel to wire up memory pages used to back SysV segments. This increases performance by not requiring the allocation of pv entries for these pages and also reduces memory pressure. Most serious users of PostgreSQL on BSD platforms use this well-documented optimization. After switching to 9.3, large and well optimized Pg installations that previously ran well in memory will be forced into swap because of the pv entry overhead.

I don't see your comment on the blog (maybe it has to be approved?), but the same issue was raised here [nabble.com] during review of the patch. The concern was mostly blown off (most PG developers use Linux instead of BSD, that might well be part of it), but if you had some numbers to back up your post, the -hackers list would definitely be interested. Ideally, you could give numbers and a repeatable benchmark showing a deterioration of 9.3-post-patch vs. 9.3-pre-patch on a BSD. If that's too much work, just the numbers from a dumb C program reading/writing shared memory with mmap() vs. SysV would be a good discussion basis.

Each client connected to the DB has its own child process - the shared memory is a buffer that is shared across postgresql child PIDs with the same parent. That's why the proposed patch would work using an anonymous shared memory segment - because the memory is only passed to children of the same process.

Well...arguably. This is the exact same argument as Apache vs Nginx, where Apache spawns a child process per client, whereas Nginx has a limited number of worker processes that handle a queue of requests as they become free. Nginx definitely has an advantage in terms of RAM when servicing thousands of (truly) simultaneous requests.

While Postgresql does use the Apache model, there is middleware available (google 'pgpool' for an example) that amongst other things will queue requests so they can be serviced by a limited number of children. Of course this only matters if there are an awful lot of simultaneous queries (without the corresponding amount of server RAM).

However, your claim about threads per CPU is oversimplified, and especially wrong for a DB server, where processes will most likely be IO bound. With 1 core, for example, there is nothing wrong with having 5 processes parsing and planning a query for a few microseconds while the 6th is monopolising IO, actually retrieving query results. Or the reverse: having 1 CPU-bound process occasionally interrupted to service 5 IO-bound processes, which would negligibly impact the CPU-bound query while hugely improving latency on the IO-bound queries.

Ideally, any single service/application should NEVER have more than n+1 threads, where n is the number of logical CPUs.

In the ideal world you'll never have more nonparallelizable tasks than you have CPUs.

However in the real world you often do. It is usually better for the application developers to focus on having their application solve the application related problems, and let the OS take care of the multitasking and other OS related problems.

A process per client also means that if a process crashes, it is less likely to affect other clients. And if there are memory leaks for whatever weird/stupid reason, when you close that process the leaked memory goes away with it.

FWIW, MSSQL defaults to 255 worker threads, which is likely to be more than the number of logical CPUs in most servers. If you're the OP AC, you can try reducing max worker threads to "n+1 logical CPUs" on a 1500-connection test DB server and see if the DB performs better.

I doubt it will. The thing is, a thread of execution is a useful concept for a programmer: you set up a thread to handle each task and let the OS worry about multiplexing efficiently across logical/physical/whatever CPUs. Same goes for processes.

I don't think this is true any more. Threads are lightweight... that's the whole point. They all share the same pmap (same hardware page table). Switching overhead is very low compared to switching between processes.

The primary benefit of the thread is to allow synchronous operations to be synchronous and not force the programmer to use async operations. Secondarily, people often don't realize that async operations can actually be MORE COSTLY, because it generally means that some other thread, typically a kernel thread, is involved. Async operations do not reduce thread switches, they actually can increase thread switches, particularly when the data in question is already present in system caches and wouldn't block the I/O operation anyway.

There is no real need to match the number of threads to the number of cpus when the threads are used to support a synchronous programming abstraction. There's no benefit from doing so. For scalability purposes you don't want to create millions of threads (of course), but several hundred or even a thousand just isn't that big a deal.

In DragonFly (and in most modern unix's) the overhead of a thread is sizeof(struct lwp) = 576 bytes of kernel space, +16K kernel stack, +16K user stack. Everything else is shared. So a thousand threads has maybe ~40MB or so of overhead on a machine that is likely to have 16GB of ram or more. There is absolutely no reason to try to reduce the thread count to the number of cpu cores.
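Spot-checking the arithmetic with the figures quoted above:

```python
# Per-thread kernel overhead, as quoted for DragonFly
lwp = 576                  # sizeof(struct lwp), bytes
kernel_stack = 16 * 1024   # bytes
user_stack = 16 * 1024     # bytes

threads = 1000
total_mib = threads * (lwp + kernel_stack + user_stack) / 2**20
print(round(total_mib, 1))  # ~32 MiB, i.e. tens of MB on a 16GB box
```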

There are two reasons for using locked memory for a database cache. The biggest and most important is that the database will be accessing the memory while holding locks, and the last thing you want is a thread stalling on a VM fault, paging something in from swap. This is also why a database wants to manage its own cache and NOT mmap() files shared: because it is difficult, even with mincore(), to work out whether the memory accesses will stall or not. You just don't want to be holding locks during these sorts of stalls; it messes up performance across the board on an SMP system.

Anonymous memory mmap()'s can be mlock()'d, but as I already said, on BSD systems you have the pv_entry overhead which matters a hell of a lot when 60+ forked database server processes are all trying to map a huge amount of shared memory.

Having a huge cache IS important. It's the primary mechanism by which a database, including postgres, is able to perform well. Not just to fit the hot dataset but also to manage what might stall and what might not stall.

In terms of being I/O bound, which was another comment someone made here... that is only true in some cases. You will not necessarily be I/O bound even if your hot data exceeds available main memory, if you happen to have an SSD (or several) between memory and the hard drive array. Command overhead to an SSD clocks in at around 18uS (versus 4-8mS for a random disk access). SSD caching layers change the equation completely. So now instead of being I/O bound at your RAM limit, you have to go all the way past your SSD storage limit before you truly become I/O bound. A small server example of this would be a machine w/16G of RAM and a 256G SSD. Whereas without the SSD you become I/O bound once your hot set exceeds 16G, with the SSD you have to exceed 256G before you truly become I/O bound. SSDs can essentially be thought of as another layer of cache.

Most of the shared memory is usually reserved for shared buffers, i.e. cached blocks of data files - this is something like a filesystem cache (and yes, some data may be cached twice) with the additional infrastructure for shared access to these blocks (especially for write), and so on. But there's more that needs to be shared - various locks / semaphores etc. info on connections, cluster-wide caches (not directly files) etc.

Here's the problem in a nutshell... any memory mapping that is NOT a sysv shm mapping with the use_phys sysctl set to 1 requires a struct pv_entry for each pte.

Postgres servers FORK. They are NOT threaded. Each fork attaches (or mmap in the case of this patch) the same shared memory region, but because the processes fork instead of thread each one has a separate pmap for the MMU.

If you have 60 postgres forked server processes each mapping, say, a 6GB shared memory segment, and each trying to fault in the entire segment, the kernel needs a pv_entry for every pte in every process's page table.

Yes, and that is precisely what happens. But it means that we had to size-down the shared-memory segment in order to take into account that the machine had 7GB less memory available with that many servers running.
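That 7GB figure is plausible on a back-of-the-envelope check, assuming roughly 80 bytes per pv_entry (the real size is kernel-dependent):

```python
page = 4096              # 4 KiB pages
segment = 6 * 2**30      # 6 GiB shared segment
processes = 60           # forked postgres backends

ptes_per_process = segment // page          # 1,572,864 ptes each
pv_entries = ptes_per_process * processes   # ~94 million pv_entries
pv_entry_size = 80                          # bytes; assumed, varies by kernel

overhead_gib = pv_entries * pv_entry_size / 2**30
print(round(overhead_gib, 1))  # ~7 GiB
```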

There is a secondary problem here... not as bad, but still bad, and that is the fact that each one of those servers has to separately fault in the entire 6GB. That's a lot of faults. There would be 1/60th as many faults if the servers were threaded. This is a secondary problem because it only hurts while the processes warm up, not in steady state.

I think everyone has glossed over the single most important feature in PostgreSQL that they have refined in this release, IMHO: range data types. Let's say you have a meeting-schedule DB application. Currently, if you want to restrict a room between two times (start and stop) so that no one else can have the room during that time, you are going to have to write that logic in your application.

Postgres's range data types allow you to create unique checks on ranges of time. This can, in two lines of code, do every single logic check needed to ensure no two people schedule the same room at the same time.
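A sketch of those two lines in practice (table and column names are made up; the btree_gist extension is needed so the plain = on the room column can share a GiST constraint with &&):

```sql
CREATE EXTENSION IF NOT EXISTS btree_gist;

CREATE TABLE room_booking (
    room   int,
    during tsrange,
    EXCLUDE USING gist (room WITH =, during WITH &&)
);

-- First booking succeeds:
INSERT INTO room_booking VALUES (1, '[2012-09-10 10:00, 2012-09-10 11:00)');
-- Second one errors out: same room, overlapping range.
INSERT INTO room_booking VALUES (1, '[2012-09-10 10:30, 2012-09-10 11:30)');
```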

How this is not showing up on anyone's radar is beyond me; or maybe we all just use Outlook or Google Calendar now. However, range types are not limited to time: they apply to anything that requires uniqueness along a linear dimension, as opposed to just checking whether any other record matches the one you are trying to insert.

It seems that you're misunderstanding the definition of a range datatype in this context. The data type of the column in the scheduling example would be defined as a timestamp range, and the constraint on the column would be that no timestamp range value can overlap with any other timestamp range value in the table (or any rows that share a key value, such as user ID). There is no need to alter the column definition to accommodate changes to scheduling data.

Strictly speaking, at minimum it would require two time fields and two boolean fields, with each boolean field specifying whether or not the interval is inclusive of the corresponding end point. It would also require a lot more than one simple constraint to get the desired behavior provided by the new datatype, and the whole mess would need to be repeated for every single interval with a simple exclusivity constraint. The new range datatype also makes it relatively simple to, e.g., specify a non-zero overlap.

Not sure if you're a troll, but, in case you're being serious and just suffered a reading comprehension failure or skipped over the ggp post because you're browsing at >0, I suggest that you re-read the thread.

Oh, it's simple enough to do with two separate fields and a check constraint. That's how you'd do it in other DB engines, in fact.

Ensuring there are no overlaps is an entirely different story, however: queries against those two fields cannot make any reasonable use of an index. The ranged type, by contrast, allows you to query the data using a nearest neighbour search and a GiST index.

Think of a GiST index as indexing the smallest boxes that enclose your shapes of interest. When queried, the DB scans for boxes that overlap your box of interest, and discards rows that don't match the data's actual shape.

Optimization of a constraint involving date ranges is a bit more difficult than you might think, and having it as one unified type makes queries a lot cleaner and indexes a lot more efficient (if done as GiST indexes, anyway).

Old: WHERE (a.starttime BETWEEN b.starttime AND b.endtime OR b.starttime BETWEEN a.starttime AND a.endtime)
New: WHERE a.timerange && b.timerange

The speedup when you're doing things like trying to find overlaps between two lists of tens of thousands of ranges each is phenomenal.
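The fast path there is a GiST index on the range column (names illustrative):

```sql
CREATE INDEX booking_during_idx ON booking USING gist (during);

-- Overlap join between two large sets of ranges:
SELECT a.id, b.id
FROM booking a
JOIN booking b ON a.during && b.during AND a.id < b.id;
```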

Optimization of a constraint involving date ranges is a bit more difficult than you might think, and having it as one unified type makes queries a lot cleaner and indexes a lot more efficient (if done as GiST indexes, anyway).

Old: WHERE (a.starttime BETWEEN b.starttime AND b.endtime OR b.starttime BETWEEN a.starttime AND a.endtime)
New: WHERE a.timerange && b.timerange

Also, strictly speaking, you can't do the first one as a constraint at all (you can do it as a query condition, or enforce constraint-like behavior with a trigger).

Before 9.2, I did this (for timestamp ranges only) using Jeff Davis's Temporal Extensions for PostgreSQL, which I've submitted a few patches to.

Which really is the direct predecessor of 9.2's Range Type support (from the same developer, too.)

Heh, I should have guessed.

It was a few years ago, but we actually tossed around some ideas on a standard format for applying the range concepts to types besides timestamps. One of the issues then was that only a small handful of built-in types have a notion of infinity/-infinity.

It looks like the 9.2 implementation uses its own definition for -infinity/+infinity unrelated to the type in question, which thinking about it now might not have been the best decision (at least, without the ability to define that [,y] and [-infinity,y] are synonymous) since -infinity (as defined by range types) is less than all other values including -infinity (as defined by the contained type).

I think it is a good decision in that it provides a syntactic construct for ranges that are unbounded on either end.

TL;DR: Is there an advanced PostgreSQL for MySQL Users guide out there somewhere? Something more than basic command-line equivalents? And preferably from the last two major releases of the software?

Long version: I've been using MySQL personally and professionally for a number of years now. I have set up read-only slaves, reporting servers, multi-master replication, converted between database types, set up hot backups (regardless of database engine), recovered crashed databases, and I generally know most of the tricks. However, I'm not happy with the rumors I'm hearing about Oracle's handling of the software since their acquisition of MySQL's grandparent company, and I'm open to something else if it's more flexible, powerful, and/or efficient.

I've always heard glowing, wonderful things online about PostgreSQL, but I know no one who knows anything about it, let alone advanced tricks like replication, performance tuning, or showing all the live database connections and operations at the current time. So for any Postgres fans on Slashdot, is there such a thing as a guide to PostgreSQL for MySQL admins, especially with advanced topics like replication, tuning, monitoring, and profiling?
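For the "live connections and operations" part specifically, the usual answer is the statistics views; a couple of rough MySQL equivalents (column names as of 9.2; 9.1 and earlier used procpid and current_query):

```sql
-- MySQL: SHOW FULL PROCESSLIST
SELECT pid, usename, state, query
FROM pg_stat_activity;

-- MySQL: SHOW GLOBAL STATUS, very roughly
SELECT * FROM pg_stat_database
WHERE datname = current_database();
```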

PostgreSQL's built-in replication is new (it arrived in 9.1), so there may be little out there (yes, there was replication before, but via additional software, like Slony).

I'm in the weird position of having used PostgreSQL mainly --- for seven years, writing dozens of applications --- but never MySQL. I've also used --- out of necessity only --- Microsoft SQL, Oracle, and Ingres, and PostgreSQL is much better. Just from a programming point of view, the syntax is, in my mind, simpler yet more powerful --- more ANSI-SQL-compliant, too, I've heard.

Anyway, the point is, I've never used anything I like more. I adore PostgreSQL. It's so powerful. So many useful datatypes, functions, and bits of syntax. Not to mention its ACIDity.

To your question, though --- are there any good books to help a MySQLite move to PostgreSQL? Not that I've come across. But then again, I haven't found any good PostgreSQL books --- or even, for that matter, very well-written SQL books, period. They all are stupefyingly boring --- but I got what I could out of them.

Actually, PostgreSQL's documentation is not that bad. In particular, try sections I, II, V, VI, and III, in that order. Skip anything that bores you at first. You can always come back. Honestly, there can't be that much of a learning curve for you, coming from MySQL.

Problem with all books is, they get outdated too quickly. While a lot of the basic info is still true for the books above, the O'Reilly book is very much based on 8.4, which is pretty ancient already. Perhaps getting an ebook is less of a problem.

If you are looking for a good SQL programming book, the PL/SQL book from Oracle is the best book written in this area, IMHO. As for a MySQL-to-PostgreSQL book, there was no incentive for PostgreSQL power users to write one. Over time we mostly looked at MySQL as a toy database and at its users as, at best, misguided and, at worst, not caring about data integrity (a cardinal sin in my book). So writing such a book would be sort of like "Black Hat Hacking for Script Kiddies". Sure, it could be done, but who wants a bunch of those as readers?

Well, recommending a PL/SQL book as a source for learning SQL is a bit silly IMHO. Moreover, I find the books from Oracle rather bad - there are better sources to learn PL/SQL (e.g. the one from Feuerstein is a much better book).

And in fact there's a great book about administering PostgreSQL from Hannu Krosing - it's called "PostgreSQL 9 Admin Cookbook" [http://www.packtpub.com/postgresql-9-admin-cookbook/book]. It's a great set of recipes for common admin tasks, not exhaustive documentation (that's what the official docs are for).

I'm not sure if the PL/SQL book available to Oracle employees is the same as the one available externally; I assume so. The books by Oracle are generally not so good, I'd agree, but the PL/SQL one is a rare gem. It sounds like you read a bunch of Oracle books, but not this one, and you recommend what you did read on the subject, which is fine. But in this case... Anyway....

But the point was not a good Postgres book; there are some. The point was a comparison book taking you from MySQL to PostgreSQL.

Not sure which Oracle books you mean - I've read e.g. "PL/SQL Programming" (ISBN 978-0072230666) and "Expert Oracle PL/SQL" (ISBN 978-0072261943) and probably some more when preparing for OCP exams. And I'd definitely recommend ISBN 978-0596514464 instead of the first one. But yeah, it's a matter of opinion.

But you're right - there are no "PostgreSQL for MySQL people" guides. The problem is that almost no one is able to write one. The people who are switching from MySQL to PostgreSQL don't have the knowledge of both systems yet.

I have to admit, as a long-time MySQL user, it really messes with your head and makes you not do things in a way that works with MS SQL Server or PostgreSQL. Especially how MySQL does its lazy grouping.
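For the curious, "lazy grouping" refers to queries like the first one below (emp is a made-up table): default MySQL returns an arbitrary name per dept, while PostgreSQL and MS SQL Server reject the query outright.

```sql
-- Accepted by default MySQL, an error elsewhere:
SELECT dept, name, MAX(salary) FROM emp GROUP BY dept;

-- A portable rewrite:
SELECT e.dept, e.name, e.salary
FROM emp e
JOIN (SELECT dept, MAX(salary) AS salary
      FROM emp
      GROUP BY dept) m
  ON m.dept = e.dept AND m.salary = e.salary;
```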

I've only tried other databases for a short while and given up, because I know that I'd have to learn everything properly. If I were starting a brand-new project it might be great, but I wouldn't want to rewrite an existing database app with it.

Unfortunately, I haven't found a really good guide of the type you are looking for. I can give you my experiences, going from MySQL to PostgreSQL, back to MySQL to support it at a large company, and then back to PostgreSQL. Generally, these days there is really *nothing* I can find about MySQL that can't be done better in PostgreSQL. I mean it. For a while MySQL could boast of native replication, but Postgres now has that, and it is arguably much more robust than MySQL's solution (I had the misfortune to support MySQL replication for 2 years). Ditto with full-text indexing, and just about any other MySQL feature.

Main differences:

1. PostgreSQL is much more "correct" in how it handles data and has very little (essentially no) unpredictable or showstoppingly odd behavior of the sort you find in MySQL all the time. Your main problem in migrating an app to PostgreSQL will be all those corner cases that MySQL just "accepts" when it really shouldn't, such as entering '0000-00-00' into a date field, or allowing every month to have days 0-31. In other words, PostgreSQL forces you to be a lot more careful with your data. Annoying, perhaps, if you are developing a non-mission-critical system like a web CMS or some such, but absolutely a lifesaver if you deal with data where large numbers of dollars and cents (or lives) depend on correct handling.

MySQL has provided for a fair amount of cleanup for those who enable ANSI standard behavior, but it is still nowhere close to PostgreSQL's level of data integrity enforcement.

2. MySQL has different table types, each of which support different features. For example, you cannot have full-text indexing in InnoDB (transactional) tables. PostgreSQL has complete internal consistency in this regard.

3. MySQL has an almost entirely useless error log. PostgreSQL's can be ratcheted up to an excruciating level of detail, depending on what you want to troubleshoot. Ditto with error messages themselves.

4. MANY MANY more choices in datatypes and functions to manipulate them. Definitely a higher learning curve, but worth it for expressive capability.

5. Don't get me started on performance. Yes, if you have a few flat tables, MySQL will be faster. Once you start doing anything complicated, you are in for a world of pain. Did you know that MySQL re-compiles every stored procedure in a database on every new connection? PHP websites with per-page-load connections can really suffer.

6. Don't get the idea that PostgreSQL is more complex to work with. If you want simple, you can stick with the simple parts, but if you want to delve into complex database designs and methodologies, PostgreSQL pretty much opens up the world to you.

>Did you know that MySQL re-compiles every stored procedure in a database on every new connection?
Actually, it really doesn't; it will only recompile a stored procedure if the compiled version has left the cache, so as long as they fit into the cache you will see very little compiling going on.

What's wrong with third-party stuff? I mean, looking back, it was silly to expect this to work out for replication (third-party replication solutions, not included in the core), but for management tools this should not be a problem: there are already tools like repmgr, and more to come. The problem with in-core tools is that they hard-code a single way to do things, their release cycle is tightly bound to PostgreSQL itself, and they are a significant effort for the whole community.

Kludgy? You must be talking about MySQL's "solution". The one that is not really truly transaction-safe, nor dependable. I can't tell you how many times I've logged into a MySQL server in the morning only to find replication broken.

Native replication has been available for almost two years now. The fact that it uses the write-ahead log in conjunction with streaming is exactly the kind of solution you need if you want dependable, transaction-safe replication. I suggest that PostgreSQL took longer to achieve built-in replication because it would not ship anything less.

To me, the JSON support is very interesting. I don't know exactly how I'll use it, but it combines all that's great about PostgreSQL with some of what was interesting about CouchDB and other projects like it.

Mainly, one-to-many relationships may be easier. Usually, they are two separate select statements. For example, one to get the article, another to get the comments. Then you patch it all together in PHP, or whatever middle language you're using. With JSON support, that could be a single SELECT, crammed up in JSON, which you then uncram with a single json_decode function call in PHP, which would yield nice nested arrays.
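A rough sketch of that single-SELECT idea with 9.2's row_to_json and array_to_json (articles and comments are made-up tables):

```sql
SELECT row_to_json(t)
FROM (
    SELECT a.id, a.title,
           (SELECT array_to_json(array_agg(row_to_json(c)))
            FROM comments c
            WHERE c.article_id = a.id) AS comments
    FROM articles a
    WHERE a.id = 1
) t;
```

One json_decode() in PHP then gives you the article with its comments already nested.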

Mainly, one-to-many relationships may be easier. Usually, they are two separate select statements. For example, one to get the article, another to get the comments. Then you patch it all together in PHP, or whatever middle language you're using.

I'm not sure adding new SQL features is going to deal with the problem of people not using the features they already have. It's already quite possible in PostgreSQL to do a single select that gets the article data and an aggregate that contains all the comments. Features like that go unused all the time.

Until they fix the TX number issue (the infamous rollover), they are pretty much out of the running for DBs that have VERY high insert levels, since the vacuum process cannot hope to keep up with tables that have hundreds of millions of rows.

I am an Oracle professional, but I do keep track of Postgres and like it; the 32-bit TX ID is a bit of an Achilles heel.

If you have a single table with 17 trillion rows then you're doing it wrong. And inserts aren't really an issue with MVCC in PG - I'd focus more on updates.

Partitioning in PostgreSQL will let you split that up into separate physical items on disk. As others have said, you just need to let vacuum scan the table once every 2 billion transactions or so to keep things in check. Rows that aren't updated regularly will be given the special frozen XID and won't be subject to any wraparound issues.
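In 9.2 partitioning is done with table inheritance plus CHECK constraints, roughly like this (table and column names are made up):

```sql
-- Parent table holds no rows itself; children carry the data
CREATE TABLE events (
    id      bigserial,
    created date NOT NULL,
    payload text
);

-- One child per month, with a CHECK constraint the planner can use
-- to skip irrelevant partitions (constraint_exclusion = partition)
CREATE TABLE events_2012_09 (
    CHECK (created >= DATE '2012-09-01' AND created < DATE '2012-10-01')
) INHERITS (events);
```

Inserts are routed to the right child with a trigger or by the application, and vacuum then only has to deal with each month-sized child rather than one enormous table.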

Until they fix the TX number issue (the infamous rollover), they are pretty much out of the running for DBs that have very high insert levels, since the vacuum process cannot hope to keep up with tables that have hundreds of millions of rows.

Infamous to whom? A vacuum updates the frozen XID, which is a trivial operation and allows a subsequent XID to safely wrap around. And I'm struggling to think of any common use cases where the volume of inserts is so high that they can't afford a vacuum every two billion transactions.

A vacuum updates the frozen XID, which is a trivial operation and allows a subsequent XID to safely wrap around.

What if you have at least one outstanding transaction/connection? Can vacuum update the frozen XID then?

For example if you have a transaction that's open for a few weeks and happen to have 4 billion transactions during that time.

I believe Perl DBI/DBD in AUTOCOMMIT OFF mode starts a new transaction immediately after you commit or rollback. So if you have an application using that library that is idling for weeks, a transaction would presumably be open for the entire time, since it would be connected to the database the whole time.
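Both things are easy to keep an eye on from SQL, for what it's worth. Something like this shows how close each database is to wraparound and which sessions have been sitting in a transaction for a long time (the one-hour threshold is arbitrary):

```sql
-- How old is the oldest unfrozen XID in each database?
-- Autovacuum forces a freeze long before this approaches 2 billion.
SELECT datname, age(datfrozenxid) AS xid_age
  FROM pg_database
 ORDER BY xid_age DESC;

-- Sessions with a transaction open longer than an hour (9.2 column names)
SELECT pid, usename, state, xact_start
  FROM pg_stat_activity
 WHERE xact_start < now() - interval '1 hour';
```

A long-idle open transaction like the DBI case above will show up immediately in the second query, so it's a monitoring problem rather than a silent failure.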

I mean, PostgreSQL does have 32-bit transaction IDs and a well-designed process to prevent wraparound.

Oracle has 48-bit transaction IDs, a number of bugs that speed up transaction ID growth, a feature that "synchronizes" transaction IDs across the whole cluster (so the IDs grow according to the busiest of the instances), and a soft SCN limit.

It just works properly out of the box. No nasty surprises, no alarming omissions or deviations from expected database behaviour. It's just a fast, reliable database which also happens to be open source and free.

I'm sure most of this applies to MySQL these days, but historically it didn't, and I never saw the attraction of a DB which went through a succession of backends in order to obtain the behaviour PostgreSQL always supplied. It doesn't help that MySQL is Oracle-owned, with all the issues around licensing that entails.

Yeah, 7.x was no picnic. Especially if you came from a Windows background and needed to install it on a Windows server.

We didn't switch until 8.0 or 8.1, after we were able to install as a native Windows application and play with it. The pgsql database servers are actually Linux, but we were still feeling our way there as well.

Minor, but probably a welcome relief to those who need them, 9.1 adds range restricted types.

First, it's 9.2, not 9.1.

Second, (as shown in the link) these are range types, not range-restricted types. Range-restricted types (as known from, e.g., Ada) are something that (via domains with check constraints) PostgreSQL has supported for a very long time.

Range types, combined with 9.2's support for exclusion constraints, are a pretty major new feature that gives 9.2 great facility in dealing with (among other things) temporal data, enforcing common logical constraints on such data in the database as simple-to-express constraints rather than through triggers.
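The canonical illustration is room booking: a range column plus an exclusion constraint makes "no two reservations for the same room may overlap" a one-line declaration (table and values invented for the example):

```sql
-- btree_gist lets the GiST exclusion constraint also compare plain ints
CREATE EXTENSION btree_gist;

CREATE TABLE reservation (
    room   int,
    during tsrange,
    -- reject any row whose room is equal AND whose time range overlaps
    EXCLUDE USING gist (room WITH =, during WITH &&)
);

INSERT INTO reservation
VALUES (101, '[2012-09-10 14:00, 2012-09-10 15:00)');

-- A second insert for room 101 overlapping that hour now fails with a
-- conflict error instead of silently double-booking.
```

Doing the same thing with triggers is both wordier and racy under concurrency; the exclusion constraint handles the locking for you.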

Because we love to bash our keyboards into so much plastic scrap whenever we come across one of its many standards-defiant idiosyncrasies?

You mean, idiosyncrasies different from Oracle's idiosyncrasies, Microsoft's idiosyncrasies and IBM's idiosyncrasies?

By the way, care to be specific? Oh yeah, posting anon. Right.

I think the idiosyncrasy that keeps it from running on my Linux servers is probably sufficient. Although that extra level in the table-naming hierarchy has been known to cause me to destroy things.

Your comment reminds me of a pregnant woman who phoned her doctor saying she had contractions. Her doctor said to come in when they got to 3 minutes apart; when she phoned a while later saying they were now only a minute or so apart, she was told to come in immediately! You might find that PostgreSQL is already ahead of 'MemSQL'!

More seriously, unless you say in which features you think MemSQL is ahead of PostgreSQL, you sound very much like a troll.