Some developers make use of cscope, which can be found at http://cscope.sf.net/. Others use glimpse, which can be found at http://webglimpse.net/.


What development environment is required to develop code?

PostgreSQL is developed mostly in the C programming language. The source code is targeted at most of the popular Unix platforms and the Windows environment (Windows 2000, XP, and later).

Most developers run a Unix-like operating system and use an open source tool chain with GCC, GNU Make, GDB, Autoconf, and so on. If you have contributed to open source software before, you will probably be familiar with these tools. Developers using this tool chain on Windows make use of MinGW, though most development on Windows is currently done with the Microsoft Visual Studio 2005 (version 8) development environment and associated tools.

Developers who regularly rebuild the source often pass the --enable-depend flag to configure. The result is that if you modify a C header file, all files that depend on it are also rebuilt.

src/Makefile.custom can be used to set environment variables, like CUSTOM_COPT, that are used for every compile.
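For example, a src/Makefile.custom along these lines (the particular flags are just an illustration) would be picked up for every compile:

```make
# src/Makefile.custom -- site-local build settings, read if present
CUSTOM_COPT = -g -O0
```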

Development Tools and Help

How is the source code organized?

If you point your browser at How PostgreSQL Processes a Query, you will see a few paragraphs describing the data flow, the backend components in a flow chart, and a description of the shared memory area. You can click on any flowchart box to see a description. If you then click on the directory name, you will be taken to the source directory to browse the actual source code behind it. We also have README files in some source directories that describe the function of the module. The browser will display these when you enter the directory, too.

What tools are available for developers?

First, all the files in the src/tools directory are designed for developers.

RELEASE_CHANGES changes we have to make for each release
ccsym find standard defines made by your compiler
copyright fixes copyright notices

entab converts spaces to tabs, used by pgindent
find_static finds functions that could be made static
find_typedef finds typedefs in the source code
find_badmacros finds macros that use braces incorrectly
fsync a script to provide information about the cost of cache-syncing system calls
make_ctags make vi 'tags' file in each directory
make_diff make *.orig and diffs of source
make_etags make emacs 'etags' files
make_keywords make comparison of our keywords and SQL'92
make_mkid make mkid ID files
git_changelog used to generate a list of changes for each release
pginclude scripts for adding/removing include files
pgindent indents source files
pgtest a semi-automated build system
thread a thread testing script

In src/include/catalog:

unused_oids a script that finds unused OIDs for use in system catalogs
duplicate_oids finds duplicate OIDs in system catalog definitions

tools/backend was already described in the question-and-answer above.

Second, you really should have an editor that can handle tags, so you can tag a function call to see the function definition, and then tag inside that function to see an even lower-level function, and then back out twice to return to the original function. Most editors support this via tags or etags files.

tools/make_diff has tools to create patch diff files that can be applied to the distribution. This produces context diffs for easier readability.

pgindent is used to fix the source code style to conform to our standards, and is normally run at the end of each development cycle; see this question for more information on our style.

pginclude contains scripts used to add needed #include's to include files, and to remove unneeded #include's.

When adding built-in objects such as types or functions, you will need to assign OIDs to them. Our convention is that all hand-assigned OIDs are distinct values in the range 1-9999. (It would work mechanically for them to be unique within individual system catalogs, but for clarity we require them to be unique across the whole system.) There is a script called unused_oids in src/include/catalog that shows the currently unused OIDs. To assign a new OID, pick one that is free according to unused_oids, and for bonus points pick one that is near related existing objects. See also the duplicate_oids script, which will complain if you made a mistake.

What's the formatting style used in PostgreSQL source code?

Our standard format is BSD style, with each level of code indented one tab, where each tab is four spaces. You will need to set your editor or file viewer to display tabs as four spaces:

For vi (in .exrc or .vimrc):

set tabstop=4 shiftwidth=4 noexpandtab

For less or more, specify -x4 to get the correct indentation.

The tools/editors directory of the latest sources contains sample settings for the emacs, xemacs and vim editors that assist in keeping to PostgreSQL coding standards.

pgindent formats the code by passing flags to your operating system's indent utility. pgindent is run on all source files just before each beta test period. It auto-formats all source files to make them consistent. Comment blocks that need specific line breaks should be formatted as block comments, where the comment starts with /*------. These comments will not be reformatted in any way.
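For instance, a comment written in this style keeps its line breaks through pgindent (a sketch of the convention; the wording is invented, not taken from the sources):

```c
/*------
 * Phase 1: compute the sort keys.
 * Phase 2: merge the sorted runs.
 *------
 */
```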

What is configure all about?

The files configure and configure.in are part of the GNU autoconf package. Configure allows us to test for various capabilities of the OS, and to set variables that can then be tested in C programs and Makefiles. Autoconf is installed on the PostgreSQL main server. To add options to configure, edit configure.in, and then run autoconf to generate configure.

When configure is run by the user, it tests various OS capabilities, stores those in config.status and config.cache, and modifies a list of *.in files. For example, if there exists a Makefile.in, configure generates a Makefile that contains substitutions for all @var@ parameters found by configure.
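For instance, given a Makefile.in containing @var@ placeholders, configure produces a Makefile with the values it detected (the values below are illustrative):

```make
# Makefile.in (input to configure):
CC = @CC@
CFLAGS = @CFLAGS@

# Makefile (generated by configure):
CC = gcc
CFLAGS = -O2
```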

When you need to edit files, make sure you don't waste time modifying files generated by configure. Edit the *.in file, and re-run configure to recreate the needed file. If you run make distclean from the top-level source directory, all files derived by configure are removed, so you see only the files contained in the source distribution.

How do I add a new port?

There are a variety of places that need to be modified to add a new port. First, start in the src/template directory. Add an appropriate entry for your OS. Also, use src/config.guess to add your OS to src/template/.similar. You shouldn't match the OS version exactly. The configure test will look for an exact OS version number and, if that is not found, for a match without the version number. Edit src/configure.in to add your new OS. (See the configure item above.) You will need to run autoconf, or patch src/configure too.

Then, check src/include/port and add your new OS file, with appropriate values. Hopefully, there is already locking code in src/include/storage/s_lock.h for your CPU. There is also a src/makefiles directory for port-specific Makefile handling. There is a backend/port directory if you need special files for your OS.

There is always a temptation to use the newest operating system features as soon as they arrive. We resist that temptation.

First, we support 15+ operating systems, so any new feature has to be well established before we will consider it. Second, most new whizz-bang features don't provide dramatic improvements. Third, they usually have some downside, such as decreased reliability or additional code required. Therefore, we don't rush to use new features but rather wait for the feature to be established, then ask for testing to show that a measurable improvement is possible.

As an example, threads are not currently used instead of multiple processes for backends because:

Historically, threads were poorly supported and buggy.

An error in one backend can corrupt other backends if they're threads within a single process.

Speed improvements using threads are small compared to the remaining backend startup time.

The backend code would be more complex.

Terminating backend processes allows the OS to cleanly and quickly free all resources, protecting against memory and file descriptor leaks and making backend shutdown cheaper and faster.

Debugging threaded programs is much harder than debugging worker processes, and core dumps are much less useful.

Sharing of read-only executable mappings and the use of shared_buffers means processes, like threads, are very memory efficient.

Regular creation and destruction of processes helps protect against memory fragmentation, which can be hard to manage in long-running processes.

(Whether individual backend processes should use multiple threads to make use of multiple cores for single queries is a separate question not covered here).

So, we are not ignorant of new features. It is just that we are cautious about their adoption. The TODO list often contains links to discussions showing our reasoning in these areas.

Note that having access to a copy of the SQL standard is not necessary to become a useful contributor to PostgreSQL development. Interpreting the standard is difficult and needs years of experience. And most features in PostgreSQL are not specified in the standard anyway.

Development Process

What do I do after choosing an item to work on?

Send an email to pgsql-hackers with a proposal for what you want to do (assuming your contribution is not trivial). Working in isolation is not advisable because others might be working on the same TODO item, or you might have misunderstood the TODO item. In the email, discuss both the internal implementation method you plan to use, and any user-visible changes (new syntax, etc). For complex patches, it is important to get community feedback on your proposal before starting work. Failure to do so might mean your patch is rejected. If your work is being sponsored by a company, read this article for tips on being more effective.

How do I test my changes?

Basic system testing

The easiest way to test your code is to ensure that it builds against the latest version of the code and that it does not generate compiler warnings.

You are well advised to pass --enable-cassert to configure. This turns on assertions within the source, which often make bugs more visible because they cause data corruption or segmentation violations. This generally makes debugging much easier.

Then, perform run time testing via psql.

Runtime environment

To test your modified version of PostgreSQL, it's convenient to install it into a local directory (in your home directory, for instance) to avoid conflicting with a system-wide installation. Use the --prefix= option to configure to specify an installation location; --with-pgport to specify a non-standard default port is helpful as well. To run this instance, you will need to make sure that the correct binaries are used; depending on your operating system, environment variables like PATH and LD_LIBRARY_PATH (on most Linux/Unix-like systems) need to be set. Setting PGDATA will also be useful.

To avoid having to set this environment up manually, you may want to use Greg Smith's peg scripts, or the scripts that are used on the buildfarm.

Regression test suite

The next step is to test your changes against the existing regression test suite. To do this, issue "make check" in the root directory of the source tree. If any tests fail, investigate.

If you've deliberately changed existing behavior, this change might cause a regression test failure but not any actual regression. If so, you should also patch the regression test suite.

What about unit testing, static analysis, model checking...?

There have been a number of discussions about other testing frameworks and some developers are exploring these ideas.

Keep in mind that the Makefiles do not have the proper dependencies for include files. You have to do a make clean and then another make. If you are using GCC, you can use the --enable-depend option of configure to have the compiler compute the dependencies automatically.

I have developed a patch, what next?

You will need to submit the patch to pgsql-hackers@postgresql.org. To help ensure your patch is reviewed and committed in a timely fashion, please try to follow the guidelines at Submitting a Patch.

What happens to my patch once it is submitted?

It will be reviewed by other contributors to the project and will be either accepted or sent back for further work. The process is explained in more detail at Submitting a Patch.

How do I help with reviewing patches?

If you would like to contribute by reviewing a patch in the CommitFest queue, you are most welcome to do so. Please read the guide at Reviewing a Patch for more information.

Do I need to sign a copyright assignment?

No, contributors keep their copyright (as is the case in most European countries anyway). They simply consider themselves to be part of the Postgres Global Development Group. (It's not even possible to assign copyright to PGDG, as it's not a legal entity.) This is the same way that the Linux kernel and many other open source projects work.

May I add my own copyright notice where appropriate?

No, please don't. We like to keep the legal information short and crisp. Additionally, we've heard that it could pose problems for corporate users.

Doesn't the PostgreSQL license itself require keeping the copyright notice intact?

Yes, it does. And it is kept intact, because the PostgreSQL Global Development Group covers all copyright holders. Also note that US law doesn't require any copyright notice for copyright to be granted, just like most European laws.

Technical Questions

How do I efficiently access information in system catalogs from the backend code?

You first need to find the tuples (rows) you are interested in. There are two ways. First, SearchSysCache() and related functions allow you to query the system catalogs using predefined indexes on the catalogs. This is the preferred way to access system tables, because the first call to the cache loads the needed rows, and future requests can return the results without accessing the base table. A list of available caches is located in src/backend/utils/cache/syscache.c. src/backend/utils/cache/lsyscache.c contains many column-specific cache lookup functions.

The rows returned are cache-owned versions of the heap rows. Therefore, you must not modify or delete the tuple returned by SearchSysCache(). What you should do is release it with ReleaseSysCache() when you are done using it; this informs the cache that it can discard that tuple if necessary. If you neglect to call ReleaseSysCache(), then the cache entry will remain locked in the cache until end of transaction, which is tolerable during development but not considered acceptable for release-worthy code.
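The usual pattern, assembled from the functions named above, looks roughly like this. Treat it as backend pseudocode: it only compiles inside the server, and the exact signatures (e.g. SearchSysCache1) vary between PostgreSQL versions.

```
/* Backend-only pseudocode: look up a type by OID via the syscache. */
HeapTuple   tup = SearchSysCache1(TYPEOID, ObjectIdGetDatum(typid));

if (HeapTupleIsValid(tup))
{
    Form_pg_type typform = (Form_pg_type) GETSTRUCT(tup);

    /* ... read, but never modify, the cached tuple ... */
    ReleaseSysCache(tup);       /* always release when done */
}
```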

If you can't use the system cache, you will need to retrieve the data directly from the heap table, using the buffer cache that is shared by all backends. The backend automatically takes care of loading the rows into the buffer cache. To do this, open the table with heap_open(). You can then start a table scan with heap_beginscan(), then use heap_getnext() and continue as long as HeapTupleIsValid() returns true. Then do a heap_endscan(). Keys can be assigned to the scan. No indexes are used, so all rows are compared to the keys, and only the valid rows are returned.
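Put together, a sequential scan along the lines described above looks roughly like this backend pseudocode (signatures differ across versions; recent sources use table_open()/table_beginscan() instead):

```
/* Backend-only pseudocode: scan all visible rows of a catalog. */
Relation    rel = heap_open(RelationRelationId, AccessShareLock);
HeapScanDesc scan = heap_beginscan(rel, SnapshotNow, 0, NULL);
HeapTuple   tup;

while (HeapTupleIsValid(tup = heap_getnext(scan, ForwardScanDirection)))
{
    /* ... examine each row ... */
}
heap_endscan(scan);
heap_close(rel, AccessShareLock);
```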

You can also use heap_fetch() to fetch rows by block number/offset. While scans automatically lock/unlock rows from the buffer cache, with heap_fetch(), you must pass a Buffer pointer, and ReleaseBuffer() it when completed.

Once you have the row, you can get data that is common to all tuples, like t_self and t_oid, by merely accessing the HeapTuple structure entries. If you need a table-specific column, you should take the HeapTuple pointer, and use the GETSTRUCT() macro to access the table-specific start of the tuple. You then cast the pointer, for example as a Form_pg_proc pointer if you are accessing the pg_proc table, or Form_pg_type if you are accessing pg_type. You can then access fields of the tuple by using the structure pointer:

((Form_pg_class) GETSTRUCT(tuple))->relnatts

Note however that this only works for columns that are fixed-width and never null, and only when all earlier columns are likewise fixed-width and
never null. Otherwise the column's location is variable and you must use heap_getattr() or related functions to extract it from the tuple.

Also, avoid storing directly into struct fields as a means of changing live tuples. The best way is to use heap_modifytuple() and pass it your original tuple, plus the values you want changed. It returns a palloc'ed tuple, which you pass to heap_update(). You can delete tuples by passing the tuple's t_self to heap_delete(). You use t_self for heap_update() too. Remember, tuples can be either system cache copies, which might go away after you call ReleaseSysCache(), or read directly from disk buffers, which go away when you call heap_getnext(), heap_endscan(), or ReleaseBuffer() in the heap_fetch() case. Or it may be a palloc'ed tuple that you must pfree() when finished.

Why are table, column, type, function, view names sometimes referenced as Name or NameData, and sometimes as char *?

Table, column, type, function, and view names are stored in system tables in columns of type Name. Name is a fixed-length, null-terminated type of NAMEDATALEN bytes. (The default value for NAMEDATALEN is 64 bytes.)

Table, column, type, function, and view names that come into the backend via user queries are stored as variable-length, null-terminated character strings.

Many functions are called with both types of names, e.g. heap_open(). Because the Name type is null-terminated, it is safe to pass it to a function expecting a char *. Because there are many cases where on-disk names (Name) are compared to user-supplied names (char *), there are many cases where Name and char * are used interchangeably.

Why do we use Node and List to make data structures?

We do this because it provides a consistent, flexible way to pass data around inside the backend. Every node has a NodeTag which specifies what type of data is inside the Node. Lists are groups of Nodes chained together as a forward-linked list. The ordering of the list elements might or might not be significant, depending on the usage of the particular list.

You can print nodes easily inside gdb. First, to disable output truncation when you use the gdb print command:

(gdb) set print elements 0

Instead of printing values in gdb format, you can use the next two commands to print out List, Node, and structure contents in a verbose format that is easier to understand. Lists are unrolled into nodes, and nodes are printed in detail. The first prints in a short format, and the second in a long format:

(gdb) call print(any_pointer)
(gdb) call pprint(any_pointer)

The output appears in the server log file, or on your screen if you are running a backend directly without a postmaster.

I just added a field to a structure. What else should I do?

The structures passed around in the parser, rewriter, optimizer, and executor require quite a bit of support. Most structures have support routines in src/backend/nodes used to create, copy, read, and output those structures -- in particular, most node types need support in the files copyfuncs.c and equalfuncs.c, and some need support in outfuncs.c and possibly readfuncs.c. Make sure you add support for your new field to these files. Find any other places the structure might need code for your new field -- searching for references to existing fields of the struct is a good way to do that. mkid is helpful with this (see available tools).

Why do we use palloc() and pfree() to allocate memory?

palloc() and pfree() are used in place of malloc() and free() because we find it easier to automatically free all memory allocated when a query completes. This assures us that all memory that was allocated gets freed even if we have lost track of where we allocated it. There are special non-query contexts that memory can be allocated in; these affect when the allocated memory is freed by the backend.

What is ereport()?

ereport() is used to send messages to the front-end, and optionally terminate the current query being processed. See here for more details on how to use it.

What is CommandCounterIncrement()?

Normally, statements cannot see the rows they modify. This allows UPDATE foo SET x = x + 1 to work correctly.

However, there are cases where a transaction needs to see rows affected in previous parts of the transaction. This is accomplished using a Command Counter. Incrementing the counter allows transactions to be broken into pieces so each piece can see rows modified by previous pieces. CommandCounterIncrement() increments the Command Counter, creating a new part of the transaction.

I need to make some changes to query parsing. Can you succinctly explain the parser files?

The parser files live in the 'src/backend/parser' directory.

scan.l defines the lexer, i.e. the algorithm that splits a string (containing an SQL statement) into a stream of tokens. A token is usually a single word (i.e., doesn't contain spaces but is delimited by spaces), but can also be a whole single or double-quoted string for example. The lexer is basically defined in terms of regular expressions which describe the different token types.

gram.y defines the grammar (the syntactical structure) of SQL statements, using the tokens generated by the lexer as basic building blocks. The grammar is defined in BNF notation. BNF resembles regular expressions but works on the level of tokens, not characters. Also, patterns (called rules or productions in BNF) are named, and may be recursive, i.e. use themselves as sub-patterns.

I get a shift/reduce conflict that I don't know how to deal with

What debugging features are available?

First, if you are developing new C code you should ALWAYS work in a build configured with the --enable-cassert and --enable-debug options. Enabling asserts turns on many sanity checking options. Enabling debug symbols supports use of debuggers (such as gdb) to trace through misbehaving code.

The postgres server has a -d option that allows detailed information to be logged (elog or ereport DEBUGn printouts). The -d option takes a number that specifies the debug level. Be warned that high debug level values generate large log files.

If the postmaster is running, start psql in one window, then find the PID of the postgres process used by psql using SELECT pg_backend_pid(). Use a debugger to attach to the postgres PID. You can set breakpoints in the debugger and then issue queries from the psql session. If you are looking to find the location that is generating an error or log message, set a breakpoint at errfinish. If you are debugging something that happens during session startup, you can set PGOPTIONS="-W n", then start psql. This will cause startup to delay for n seconds so you can attach to the process with the debugger, set appropriate breakpoints, then continue through the startup sequence.

If the postmaster is not running, you can actually run the postgres backend from the command line, and type your SQL statement directly. This is almost always a bad way to do things, however, since the usage environment isn't nearly as friendly as psql (no command history for instance) and there's no chance to study concurrent behavior. You might have to use this method if you broke initdb, but otherwise it has nothing to recommend it.

You can also compile with profiling to see what functions are taking execution time --- configuring with --enable-profiling is the recommended way to set this up. (You usually shouldn't use --enable-cassert when studying performance issues, since the checks it enables are not always cheap.) Profile files from server processes will be deposited in the pgsql/data directory. Profile files from clients such as psql will be put in the client's current directory.