Designed specifically for the cloud, PPCD features
automated scale-out from its cluster architecture, built-in failover
for high availability, private instances for security and
consistent performance, and complete administrative access.

EDB tools are designed for large-scale, mission-critical enterprise deployments, providing the very best in Postgres database management, monitoring, performance, replication, high availability, backup, scalability, security, and disaster recovery.

Yearly calendar with scheduled training sessions.

EnterpriseDB's on-demand, self-paced courses can be taken anytime and from anywhere, allowing you to strengthen your Postgres skills when it's most convenient for you. There are no travel expenses to increase costs and no time away from work. You can stop and resume whenever you want.

EnterpriseDB has certification tracks for both PostgreSQL and Postgres Plus Advanced Server. Holding a certification from EnterpriseDB affirms a database professional's Postgres skills, and employers trust certifications as industry acknowledgement of proficiency and the ability to perform effectively. Our certification program sets the global standard for Postgres professionals, and individuals certified under this program fill a growing and critical need for Postgres knowledge in enterprise environments.

Postgres Plus(R) Open Source Database Adoption

Say yes to "Not only SQL." Did you know that Postgres Plus Advanced Server can handle JSON documents and unstructured key-value data as easily and as fast as MongoDB? Advanced Server provides the freedom, flexibility, performance, and agility of handling unstructured and semi-structured data while preserving its long-term viability as enterprise information under ACID conditions.

Success Stories

Among our customers you'll find interesting use cases and, in particular, compelling ROI success stories. Learn how companies are reducing their database TCO creatively and with minimal disruption to their business.

EnterpriseDB Community Contributions

EnterpriseDB is deeply involved with and committed to the PostgreSQL community with the common goal of constantly improving and building upon the software as well as promoting and facilitating the adoption of PostgreSQL and related products worldwide.

EnterpriseDB is proud to sponsor and work with the best and brightest of the PostgreSQL and general database communities at large. It is with their expertise and deep knowledge of Postgres that we are able to make significant contributions to the community version of PostgreSQL as well as EnterpriseDB's Postgres Plus line of products. In addition, their knowledge also helps fine-tune our services offerings, including software subscriptions, training, technical support, and consulting services.

Why EnterpriseDB

EnterpriseDB is the leading worldwide provider of Postgres software and services that enable enterprises to reduce their reliance on costly proprietary solutions and slash their database spend by 80 percent or more.

With powerful performance and security enhancements for PostgreSQL, sophisticated management tools for global deployments and database compatibility, EnterpriseDB software supports both mission and non-mission critical enterprise applications. More than 2,500 enterprises, governments and other organizations worldwide use EnterpriseDB software, support, training and professional services to integrate open source software into their existing data infrastructures.

Based in Bedford, MA, EnterpriseDB is backed by strategic private investors.

Meet EnterpriseDB's executive team, composed of seasoned entrepreneurs and business leaders with diverse backgrounds. Their knowledge and expertise in the technology space enable them to effectively plan and execute the Company's long-term growth strategy, while keeping a keen eye on operational excellence.

The EnterpriseDB Board of Directors includes:

Ed Boyajian, President and Chief Executive Officer



EnterpriseDB sponsors a great number of trade shows and conferences all around the world, either dedicated to PostgreSQL or representing Postgres products. Check out our current listings and make plans to attend any of these events to learn about the latest trends and offerings in the open source database market in general and Postgres in particular.

EnterpriseDB is revolutionizing the enterprise database market with the power of open source software. Terrific opportunities are available to qualified candidates who are bright, industrious, and passionate about excellence.

As part of the EnterpriseDB team, you'll work in a fast-paced and dynamic environment to develop, support, market, and sell our award-winning enterprise-class database products and solutions. We offer competitive compensation packages that include stock options and health benefits, and we enjoy a challenging, collegial work environment that spans the globe.

EnterpriseDB Headquarters:

Partner Programs

EnterpriseDB works with an ecosystem of partners to build, optimize, and deliver complete PostgreSQL based open source database solutions to our customers. Through our global network of leading independent software vendors, resellers and systems integrators, EnterpriseDB offers customers a wealth of proven technology solutions that solve real-world business challenges. Find the partner program that best fits your organization's profile and contact us today.

This is the first PostgreSQL release
to run natively on Microsoft Windows® as
a server. It can run as a Windows service. This
release supports NT-based Windows releases like
Windows 2000 SP4, Windows XP, and
Windows 2003. Older releases like
Windows 95, Windows 98, and
Windows ME are not supported because these operating
systems do not have the infrastructure to support
PostgreSQL. A separate installer
project has been created to ease installation on
Windows — see http://www.postgresql.org/ftp/win32/.

Although tested throughout our release cycle, the Windows port
does not have the benefit of years of use in production
environments that PostgreSQL has on
Unix platforms. Therefore it should be treated with the same
level of caution as you would treat a new product.

Previous releases required the Unix emulation toolkit
Cygwin in order to run the server on Windows
operating systems. PostgreSQL has
supported native clients on Windows for many years.

Savepoints

Savepoints allow specific parts of a transaction to be aborted
without affecting the remainder of the transaction. Prior
releases had no such capability; there was no way to recover
from a statement failure within a transaction except by
aborting the whole transaction. This feature is valuable for
application writers who require error recovery within a
complex transaction.
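A minimal sketch of the pattern (the table and values are hypothetical):

```sql
BEGIN;
INSERT INTO orders VALUES (1, 'widget');
SAVEPOINT before_risky_step;
-- Suppose this statement fails, e.g. with a duplicate-key error:
INSERT INTO orders VALUES (1, 'widget again');
-- Undo only the work since the savepoint; the transaction stays alive:
ROLLBACK TO SAVEPOINT before_risky_step;
INSERT INTO orders VALUES (2, 'gadget');
COMMIT;  -- rows 1 and 2 are committed
```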

Point-In-Time Recovery

In previous releases there was no way to recover from disk
drive failure except to restore from a previous backup or use
a standby replication server. Point-in-time recovery allows
continuous backup of the server. You can recover either to
the point of failure or to some transaction in the past.

Tablespaces

Tablespaces allow administrators to select different file systems
for storage of individual tables, indexes, and databases.
This improves performance and control over disk space
usage. Prior releases used initlocation and
manual symlink management for such tasks.
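For example (the names and the directory are illustrative; the directory must already exist and be writable by the server):

```sql
CREATE TABLESPACE fastspace LOCATION '/mnt/fast_disk/pgdata';
CREATE TABLE hot_data (id int) TABLESPACE fastspace;
CREATE INDEX hot_data_idx ON hot_data (id) TABLESPACE fastspace;
```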

Improved Buffer Management, CHECKPOINT,
VACUUM

This release has a more intelligent buffer replacement strategy,
which will make better use of available shared buffers and
improve performance. The performance impact of vacuum and
checkpoints is also lessened.

Change Column Types

A column's data type can now be changed with ALTER
TABLE.
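A sketch, assuming a hypothetical orders table:

```sql
ALTER TABLE orders ALTER COLUMN qty TYPE bigint;
-- A USING expression handles conversions the system cannot do implicitly:
ALTER TABLE orders ALTER COLUMN flag TYPE boolean USING flag <> 0;
```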

New Perl Server-Side Language

A new version of the plperl server-side language now
supports a persistent shared storage area, triggers, returning records
and arrays of records, and SPI calls to access the database.

Comma-separated-value (CSV) support in COPY

COPY can now read and write
comma-separated-value files. It has the flexibility to
interpret nonstandard quoting and separation characters too.
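A sketch of the new options (table and file names are illustrative; server-side COPY paths are read and written by the server process):

```sql
COPY products TO '/tmp/products.csv' WITH CSV HEADER;
COPY products FROM '/tmp/products.csv' WITH CSV HEADER;
-- Nonstandard separators and quote characters can be spelled out:
COPY products FROM '/tmp/products.txt' WITH DELIMITER ';' CSV QUOTE '"';
```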

A dump/restore using pg_dump is
required for those wishing to migrate data from any previous
release.

Observe the following incompatibilities:

In READ COMMITTED serialization mode, volatile functions
now see the results of concurrent transactions committed up to the
beginning of each statement within the function, rather than up to the
beginning of the interactive command that called the function.

Functions declared STABLE or IMMUTABLE always
use the snapshot of the calling query, and therefore do not see the
effects of actions taken after the calling query starts, whether in
their own transaction or other transactions. Such a function must be
read-only, too, meaning that it cannot use any SQL commands other than
SELECT.

Nondeferred AFTER triggers are now fired immediately
after completion of the triggering query, rather than upon
finishing the current interactive command. This makes a
difference when the triggering query occurred within a function:
the trigger is invoked before the function proceeds to its next
operation.

Server configuration parameters virtual_host and
tcpip_socket have been replaced with a more general
parameter listen_addresses. Also, the server now listens on
localhost by default, which eliminates the need for the
-i postmaster switch in many scenarios.
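The new parameter lives in postgresql.conf; a sketch with illustrative values:

```
# postgresql.conf
listen_addresses = 'localhost'   # the new default; replaces tcpip_socket
                                 # and virtual_host
#listen_addresses = '*'          # listen on all interfaces instead
```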

Server configuration parameters SortMem and
VacuumMem have been renamed to work_mem
and maintenance_work_mem to better reflect their
use. The original names are still supported in
SET and SHOW.

Server configuration parameters log_pid,
log_timestamp, and log_source_port have been
replaced with a more general parameter log_line_prefix.

Server configuration parameter syslog has been
replaced with a more logical log_destination variable to
control the log output destination.

Server configuration parameter log_statement has been
changed so it can selectively log just database modification or
data definition statements. Server configuration parameter
log_duration now prints only when log_statement
prints the query.

Server configuration parameter max_expr_depth has
been replaced with max_stack_depth which measures the
physical stack size rather than the expression nesting depth. This
helps prevent session termination due to stack overflow caused by
recursive functions.

Casting an integer to BIT(N) selects the rightmost N bits of the
integer, not the leftmost N bits as before.
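The new behavior can be checked directly (44 is 101100 in binary):

```sql
SELECT 44::bit(3);   -- now yields '100', the rightmost three bits
```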

Updating an element or slice of a NULL array value now produces
a nonnull array result, namely an array containing
just the assigned-to positions.

Syntax checking of array input values has been tightened up
considerably. Junk that was previously allowed in odd places with
odd results now causes an error. Empty-string element values
must now be written as "", rather than writing nothing.
Also changed behavior with respect to whitespace surrounding
array elements: trailing whitespace is now ignored, for symmetry
with leading whitespace (which has always been ignored).
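A sketch of the tightened rules:

```sql
SELECT '{a,"",b}'::text[];   -- empty-string element must now be quoted
-- SELECT '{a,,b}'::text[];  -- writing nothing between delimiters
--                              now raises an error
```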

Overflow in integer arithmetic operations is now detected and
reported as an error.

The arithmetic operators associated with the single-byte
"char" data type have been removed.

The extract() function (also called
date_part) now returns the proper year for BC dates.
It previously returned one less than the correct year. The
function now also returns the proper values for millennium and
century.

CIDR values now must have their nonmasked bits be zero.
For example, we no longer allow
204.248.199.1/31 as a CIDR value. Such
values should never have been accepted by
PostgreSQL and will now be rejected.

EXECUTE now returns a completion tag that
matches the executed statement.

psql's \copy command now reads or
writes to the query's stdin/stdout, rather than
psql's stdin/stdout. The previous
behavior can be accessed via new
pstdin/pstdout parameters.

The server now uses its own time zone database, rather than the
one supplied by the operating system. This will provide consistent
behavior across all platforms. In most cases, there should be
little noticeable difference in time zone behavior, except that
the time zone names used by SET/SHOW TimeZone might be different from what your platform provides.

Configure's threading option no longer requires
users to run tests or edit configuration files; threading options
are now detected automatically.

Now that tablespaces have been implemented,
initlocation has been removed.

The API for user-defined GiST indexes has been changed. The
Union and PickSplit methods are now passed a pointer to a
special GistEntryVector structure,
rather than a bytea.

Some aspects of PostgreSQL's behavior
have been determined to be suboptimal. For the sake of backward
compatibility these have not been removed in 8.0, but they are
considered deprecated and will be removed in the next major
release.

The 8.1 release will remove the to_char() function
for intervals.

The server now warns of empty strings passed to
oid/float4/float8 data
types, but continues to interpret them as zeroes as before.
In the next major release, empty strings will be considered
invalid input for these data types.

By default, tables in PostgreSQL 8.0
and earlier are created with OIDs. In the next release,
this will not be the case: to create a table
that contains OIDs, the WITH OIDS clause must
be specified or the default_with_oids
configuration parameter must be set. Users are encouraged to
explicitly specify WITH OIDS if their tables
require OIDs for compatibility with future releases of
PostgreSQL.

Before this change, many queries would not use an index if the data
types did not match exactly. This improvement makes index usage more
intuitive and consistent.

New buffer replacement strategy that improves caching (Jan)

Prior releases used a least-recently-used (LRU) cache to keep
recently referenced pages in memory. The LRU algorithm
did not consider the number of times a specific cache entry was
accessed, so large table scans could force out useful cache pages.
The new cache algorithm uses four separate lists to track most
recently used and most frequently used cache pages and dynamically
optimize their replacement based on the work load. This should
lead to much more efficient use of the shared buffer cache.
Administrators who have tested shared buffer sizes in the past
should retest with this new cache replacement policy.

In previous releases, the checkpoint process, which runs every few
minutes, would write all dirty buffers to the operating system's
buffer cache then flush all dirty operating system buffers to
disk. This resulted in a periodic spike in disk usage that often
hurt performance. The new code uses a background writer to trickle
disk writes at a steady pace so checkpoints have far fewer dirty
pages to write to disk. Also, the new code does not issue a global
sync() call, but instead fsync()s just
the files written since the last checkpoint. This should improve
performance and minimize degradation during checkpoints.

Add ability to prolong vacuum to reduce performance impact (Jan)

On busy systems, VACUUM performs many I/O
requests which can hurt performance for other users. This
release allows you to slow down VACUUM to
reduce its impact on other users, though this increases the
total duration of VACUUM.
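The slowdown is controlled by the cost-based vacuum delay settings in postgresql.conf; a sketch with illustrative values:

```
vacuum_cost_delay = 10    # ms to sleep once the cost limit is reached
                          # (0, the default, disables the delay)
vacuum_cost_limit = 200   # accumulated I/O cost that triggers a sleep
```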

This improves the way indexes are scanned when many duplicate
values exist in the index.

Use dynamically-generated table size estimates while planning (Tom)

Formerly the planner estimated table sizes using the values seen
by the last VACUUM or ANALYZE,
both as to physical table size (number of pages) and number of rows.
Now, the current physical table size is obtained from the kernel,
and the number of rows is estimated by multiplying the table size
by the row density (rows per page) seen by the last
VACUUM or ANALYZE. This should
produce more reliable estimates in cases where the table size has
changed significantly since the last housekeeping command.

Improved index usage with OR clauses (Tom)

This allows the optimizer to use indexes in statements with many OR
clauses that would not have been indexed in the past. It can also use
multi-column indexes where the first column is specified and the second
column is part of an OR clause.

Improve matching of partial index clauses (Tom)

The server is now smarter about using partial indexes in queries
involving complex WHERE clauses.

Improve performance of the GEQO optimizer (Tom)

The GEQO optimizer is used to plan queries involving many tables (by
default, twelve or more). This release speeds up the way queries are
analyzed to decrease time spent in optimization.

Miscellaneous optimizer improvements

There is not room here to list all the minor improvements made, but
numerous special cases work better than in prior releases.

Improve lookup speed for C functions (Tom)

This release uses a hash table to lookup information for dynamically
loaded C functions. This improves their speed so they perform nearly as
quickly as functions that are built into the server executable.

Add type-specific ANALYZE statistics
capability (Mark Cave-Ayland)

This feature allows more flexibility in generating statistics
for nonstandard data types.

ANALYZE now collects statistics for
expression indexes (Tom)

Expression indexes (also called functional indexes) allow users to
index not just columns but the results of expressions and function
calls. With this release, the optimizer can gather and use statistics
about the contents of expression indexes. This will greatly improve
the quality of planning for queries in which an expression index is
relevant.

New two-stage sampling method for ANALYZE
(Manfred Koizar)

This gives better statistics when the density of valid rows is very
different in different regions of a table.

Speed up TRUNCATE (Tom)

This buys back some of the performance loss observed in 7.4, while still
keeping TRUNCATE transaction-safe.

Change server configuration parameter log_statement to take
values all, mod, ddl, or
none to select which queries are logged (Bruce)

This allows administrators to log only data definition changes or
only data modification statements.
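A sketch of the corresponding postgresql.conf entries (values illustrative):

```
log_statement = 'mod'   # one of: none | ddl | mod | all
log_duration = on       # printed only for statements that are logged
```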

Some logging-related configuration parameters could formerly be adjusted
by ordinary users, but only in the "more verbose" direction.
They are now treated more strictly: only superusers can set them.
However, a superuser can use ALTER USER to provide per-user
settings of these values for non-superusers. Also, it is now possible
for superusers to set values of superuser-only configuration parameters
via PGOPTIONS.

By default, configuration files are kept in the cluster's top directory.
With this addition, configuration files can be placed outside the
data directory, easing administration.

Plan prepared queries only when first executed so constants can be
used for statistics (Oliver Jowett)

Prepared statements plan queries once and execute them many
times. While prepared queries avoid the overhead of re-planning
on each use, the quality of the plan suffers from not knowing the exact
parameters to be used in the query. In this release, planning of
unnamed prepared statements is delayed until the first execution,
and the actual parameter values of that execution are used as
optimization hints. This allows use of out-of-line parameter passing
without incurring a performance penalty.

Allow DECLARE CURSOR to take parameters
(Oliver Jowett)

It is now useful to issue DECLARE CURSOR in a
Parse message with parameters. The parameter values
sent at Bind time will be substituted into the
execution of the cursor's query.

Fix hash joins and aggregates of inet and
cidr data types (Tom)

Release 7.4 handled hashing of mixed inet and
cidr values incorrectly. (This bug did not exist
in prior releases because they wouldn't try to hash either
data type.)

Make log_duration print only when log_statement
prints the query (Ed L.)

Dollar quoting

In previous releases, because single quotes had to be used to
quote a function's body, the use of single quotes inside the
function text required use of two single quotes or other error-prone
notations. With this release we add the ability to use "dollar
quoting" to quote a block of text. The ability to use different
quoting delimiters at different nesting levels greatly simplifies
the task of quoting correctly, especially in complex functions.
Dollar quoting can be used anywhere quoted text is needed.
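A sketch of the difference (the function names are hypothetical):

```sql
-- Old style: every quote inside the body had to be doubled.
CREATE FUNCTION greet_old() RETURNS text AS
    'SELECT ''It''''s a quote''' LANGUAGE sql;

-- With dollar quoting, the body reads naturally, and delimiters
-- can be nested by choosing different tags:
CREATE FUNCTION greet_new() RETURNS text AS $$
    SELECT $msg$It's a quote$msg$
$$ LANGUAGE sql;
```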

Make CASE val WHEN compval1 THEN ... evaluate val only once (Tom)

CASE no longer evaluates the tested expression multiple
times. This has benefits when the expression is complex or is
volatile.

Test HAVING before computing target list of an
aggregate query (Tom)

Fixes improper failure of cases such as SELECT SUM(win)/SUM(lose)
... GROUP BY ... HAVING SUM(lose) > 0. This should work but formerly
could fail with divide-by-zero.

This gives us a fairly bulletproof defense against crashing due to
runaway recursive functions. Instead of measuring the depth of expression
nesting, we now directly measure the size of the execution stack.

Allow arbitrary row expressions (Tom)

This release allows SQL expressions to contain arbitrary composite
types, that is, row values. It also allows functions to more easily
take rows as arguments and return row values.

Allow LIKE/ILIKE to be used as the operator
in row and subselect comparisons (Fabien Coelho)

Add COMMENT ON for casts, conversions, languages,
operator classes, and large objects (Christopher)

Add new server configuration parameter default_with_oids to
control whether tables are created with OIDs by default (Neil)

This allows administrators to control whether CREATE
TABLE commands create tables with or without OID
columns by default. (Note: the current factory default setting for
default_with_oids is TRUE, but the default
will become FALSE in future releases.)
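A sketch of both controls (table names are hypothetical):

```sql
SET default_with_oids = false;           -- opt out ahead of the change
CREATE TABLE modern (id int);            -- created without an OID column
CREATE TABLE legacy (id int) WITH OIDS;  -- explicit, future-proof
```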

Allow ALTER ... ADD COLUMN with defaults and
NOT NULL constraints; works per SQL spec (Rod)

It is now possible for ADD COLUMN to create a column
that is not initially filled with NULLs, but with a specified
default value.
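For instance, assuming a hypothetical orders table:

```sql
ALTER TABLE orders ADD COLUMN status text NOT NULL DEFAULT 'new';
-- Existing rows are filled with 'new' immediately, not NULL.
```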

Add ALTER COLUMN TYPE to change column's type (Rod)

It is now possible to alter a column's data type without dropping
and re-adding the column.

Allow multiple ALTER actions in a single ALTER
TABLE command (Rod)

This is particularly useful for ALTER commands that
rewrite the table (which include ALTER COLUMN TYPE and
ADD COLUMN with a default). By grouping
ALTER commands together, the table need be rewritten
only once.
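A sketch of grouping two rewriting actions into a single pass (table and columns are hypothetical):

```sql
ALTER TABLE orders
    ALTER COLUMN qty TYPE bigint,
    ADD COLUMN note text DEFAULT '';   -- one table rewrite instead of two
```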

Allow ALTER TABLE to add SERIAL
columns (Tom)

This falls out from the new capability of specifying defaults for new
columns.

In 7.3 and 7.4, a long-running B-tree index build could block concurrent
CHECKPOINTs from completing, thereby causing WAL bloat because the
WAL log could not be recycled.

Database-wide ANALYZE does not hold locks
across tables (Tom)

This reduces the potential for deadlocks against other backends
that want exclusive locks on tables. To get the benefit of this
change, do not execute database-wide ANALYZE
inside a transaction block (BEGIN block); it
must be able to commit and start a new transaction for each
table.

REINDEX does not exclusively lock the index's
parent table anymore

The index itself is still exclusively locked, but readers of the
table can continue if they are not using the particular index
being rebuilt.

Erase MD5 user passwords when a user is renamed (Bruce)

PostgreSQL uses the user name as salt
when encrypting passwords via MD5. When a user's name is changed,
the salt will no longer match the stored MD5 password, so the
stored password becomes useless. In this release a notice is
generated and the password is cleared. A new password must then
be assigned if the user is to be able to log in with a password.

New pg_ctl kill option for Windows (Andrew)

Windows does not have a kill command to send signals to
backends, so this capability was added to pg_ctl.

Information schema improvements

Add --pwfile option to
initdb so the initial password can be
set by GUI tools (Magnus)

Detect locale/encoding mismatch in
initdb (Peter)

Add register command to pg_ctl to
register Windows operating system service (Dave Page)

Composite values can be used in many places where only scalar values
worked before.

Reject nonrectangular array values as erroneous (Joe)

Formerly, array_in would silently build a
surprising result.

Overflow in integer arithmetic operations is now detected (Tom)

The arithmetic operators associated with the single-byte
"char" data type have been removed.

Formerly, the parser would select these operators in many situations
where an "unable to select an operator" error would be more
appropriate, such as null * null. If you actually want
to do arithmetic on a "char" column, you can cast it to
integer explicitly.

Syntax checking of array input values considerably tightened up (Joe)

Junk that was previously allowed in odd places with odd results
now causes an ERROR, for example, non-whitespace
after the closing right brace.

Empty-string array element values must now be written as
"", rather than writing nothing (Joe)

Formerly, both ways of writing an empty-string element value were
allowed, but now a quoted empty string is required. The case where
nothing at all appears will probably be considered to be a NULL
element value in some future release.

Array element trailing whitespace is now ignored (Joe)

Formerly leading whitespace was ignored, but trailing whitespace
between an element value and the delimiter or right brace was
significant. Now trailing whitespace is also ignored.

Emit array values with explicit array bounds when lower bound is not one
(Joe)

In READ COMMITTED serialization mode, volatile functions
now see the results of concurrent transactions committed up to the
beginning of each statement within the function, rather than up to the
beginning of the interactive command that called the function.

Functions declared STABLE or IMMUTABLE always
use the snapshot of the calling query, and therefore do not see the
effects of actions taken after the calling query starts, whether in
their own transaction or other transactions. Such a function must be
read-only, too, meaning that it cannot use any SQL commands other than
SELECT. There is a considerable performance gain from
declaring a function STABLE or IMMUTABLE
rather than VOLATILE.

Nondeferred AFTER triggers are now fired immediately
after completion of the triggering query, rather than upon
finishing the current interactive command. This makes a difference
when the triggering query occurred within a function: the trigger
is invoked before the function proceeds to its next operation. For
example, if a function inserts a new row into a table, any
nondeferred foreign key checks occur before proceeding with the
function.

Allow function parameters to be declared with names (Dennis Björklund)

This allows better documentation of functions. Whether the names
actually do anything depends on the specific function language
being used.

Allow PL/pgSQL parameter names to be referenced in the function (Dennis Björklund)

This basically creates an automatic alias for each named parameter.
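A sketch of both features together (the function is hypothetical):

```sql
CREATE FUNCTION add_tax(price numeric, rate numeric) RETURNS numeric AS $$
BEGIN
    -- Parameters are referenced by name instead of $1 and $2:
    RETURN price * (1 + rate);
END;
$$ LANGUAGE plpgsql;

SELECT add_tax(100, 0.07);   -- returns 107.00
```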

Do minimal syntax checking of PL/pgSQL functions at creation time (Tom)

This allows us to catch simple syntax errors sooner.

More support for composite types (row and record variables) in PL/pgSQL

For example, it now works to pass a rowtype variable to another function
as a single variable.

Parsing is now driven by presence of ".." rather than
data type of FOR variable. This makes no difference for
correct functions, but should result in more understandable error
messages when a mistake is made.

In PL/Tcl, SPI commands are now run in subtransactions. If an error
occurs, the subtransaction is cleaned up and the error is reported
as an ordinary Tcl error, which can be trapped with catch.
Formerly, it was not possible to catch such errors.

Accept ELSEIF in PL/pgSQL (Neil)

Previously PL/pgSQL only allowed ELSIF, but many people
are accustomed to spelling this keyword ELSEIF.

Use dependency information to improve the reliability of
pg_dump (Tom)

This should solve the longstanding problems with related objects
sometimes being dumped in the wrong order.

Have pg_dump output objects in alphabetical order if possible (Tom)

This should make it easier to identify changes between
dump files.

Allow pg_restore to ignore some SQL errors (Fabien Coelho)

This makes pg_restore's behavior similar to the
results of feeding a pg_dump output script to
psql. In most cases, ignoring errors and plowing
ahead is the most useful thing to do. Also added was a pg_restore
option to give the old behavior of exiting on an error.

pg_restore -l display now includes
objects' schema names

New begin/end markers in pg_dump text output (Bruce)

Add start/stop times for
pg_dump/pg_dumpall in verbose mode
(Bruce)

Allow most pg_dump options in
pg_dumpall (Christopher)

Have pg_dump use ALTER OWNER rather
than SET SESSION AUTHORIZATION by default
(Christopher)

This simplifies the task of building extensions outside the original
source tree.

Support relocatable installations (Bruce)

Directory paths for installed files (such as the
/share directory) are now computed relative to the
actual location of the executables, so that an installation tree
can be moved to another place without reconfiguring and
rebuilding.

Use --with-docdir to choose installation location of documentation; also
allow --infodir (Peter)

Add --without-docdir to prevent installation of documentation (Peter)

Upgrade to DocBook V4.2 SGML (Peter)

New PostgreSQL CVS tag (Marc)

This was done to make it easier for organizations to manage their
own copies of the PostgreSQL CVS repository. File version stamps from the master
repository will not get munged by checking into or out of a copied
repository.