Comparing Oracle with EDB Postgres
The foundation of relational databases began with an experimental system whose core became the basis of the System R research, and that research remains at the heart of modern database technology. The need for concurrency control and scalability, delivered through the power of the SQL language, is met by the most popular databases of choice: MySQL, Oracle, SQL Server, and the attention-grabbing PostgreSQL. It's not often that you mention open-source and closed, proprietary databases in the same sentence, especially when you think about the giant, the champion of the database realm, Oracle. But the contender, PostgreSQL, has a lot in common with the champion: both are ACID compliant, with full transactional logging capabilities. When you add the EDB Postgres Advanced Server database, the gap between champion and contender closes even more. Looking at its quadrants, Gartner appears to recognize that this gap is closing in the relational database arena.
Comparing the two technologies, we look at Oracle Enterprise Edition, including tools, and the EDB Postgres Advanced Server, including tools. The comparison allows an Oracle DBA to make an easier transition to the PostgreSQL environment. At first glance you can immediately notice some overall similarities, especially when it comes to SQL capabilities and application development. It’s intriguing to think of being able to execute familiar SQL or PL/SQL syntax directly in a PostgreSQL database.
The terminology can often be confusing if you have been accustomed to Oracle for a number of years.
The perfect database may not really exist, and both databases have some unique options that can give a DBA nightmares:
Unlimited database size
Unlimited rows per table
Unlimited indexes per table
These unlimited options are real, and they show the flexibility and control you can have within PostgreSQL versus the restrictions within Oracle. Oracle's restrictions exist for a good cause, so while the unlimited options are possible with PostgreSQL, relying on them is definitely not advised.
Capacity is almost equal between the technologies when you consider the options for creating a stable database that you can maintain. The maximum number of columns per table is a range in PostgreSQL rather than a fixed limit, because it depends on the data types being used.
Tables and partitions are quite similar, and both are feature-rich. There have been some great improvements in recent releases of PostgreSQL with the addition of declarative partitioning and features allowing partitions to be created on multiple columns. Temporary tables are always useful, but the concept of global temporary tables does not exist in PostgreSQL.
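As a hedged sketch of declarative partitioning (available in PostgreSQL 10 and later), with a hypothetical table and a multi-column partition key:

    CREATE TABLE orders (
        order_id   bigint,
        region     text,
        order_date date
    ) PARTITION BY RANGE (order_date, order_id);   -- partition key spans two columns

    CREATE TABLE orders_2019 PARTITION OF orders
        FOR VALUES FROM ('2019-01-01', 0) TO ('2020-01-01', 0);

Rows inserted into orders are routed to the matching partition automatically, which is the behavior an Oracle DBA would expect from range partitioning.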
Data types can be the heart and soul of your database. The flexibility within a data model is key to supporting an application. It's well known that an Oracle database can lend its flexibility to support everything from OLTP to data warehouses. The same is true for PostgreSQL, which offers an extra layer of flexibility, allowing you to combine the likes of a NoSQL database with the flair of a relational database. The addition of the JSON and JSONB data types makes this possible.
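A minimal sketch of that relational-plus-document combination, using a hypothetical table:

    CREATE TABLE customer_profiles (
        id         bigserial PRIMARY KEY,
        name       text NOT NULL,
        attributes jsonb          -- schemaless, NoSQL-style document data
    );

    INSERT INTO customer_profiles (name, attributes)
    VALUES ('Acme', '{"tier": "gold", "tags": ["emea", "retail"]}');

    -- Query inside the document just like any other column
    SELECT name FROM customer_profiles WHERE attributes ->> 'tier' = 'gold';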
Spatial data in conjunction with BLOB capabilities can be a necessity in the world of storing maps and navigational system–related databases. PostGIS is an easily configurable extension built to handle spatial data.
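As a brief, hedged sketch (the table is hypothetical), enabling PostGIS and storing a point looks roughly like this:

    CREATE EXTENSION postgis;

    CREATE TABLE landmarks (
        name     text,
        location geometry(Point, 4326)   -- WGS 84 coordinates
    );

    INSERT INTO landmarks VALUES ('HQ', ST_SetSRID(ST_MakePoint(-122.27, 37.80), 4326));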
There is a common misconception that PostgreSQL can't handle BLOB or CLOB data. Community PostgreSQL has the bytea data type, which does handle binary data, and Advanced Server adds BLOB/CLOB data types that allow the data to exist in the same format as in Oracle.
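As a small, hedged illustration (table names are made up, and the second variant relies on the Advanced Server Oracle-compatible types described above):

    -- Community PostgreSQL: binary data in a bytea column
    CREATE TABLE attachments (doc_name text, doc_body bytea);

    -- EDB Postgres Advanced Server also accepts Oracle-style types
    CREATE TABLE attachments_epas (doc_name VARCHAR2(100), doc_body BLOB, doc_text CLOB);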
Indexing options are quite similar between the two technologies for standard database needs. When thinking of a relational database, you do not typically think about the ability to store documents or run full-text searches. This is one area where PostgreSQL has an advantage: it offers GiST and GIN indexes, letting you speed up full-text searches built on the same inverted-index technology as Elasticsearch. A GIN index can also end up smaller than a comparable B-tree index, making it a possible substitute for B-tree indexes in some cases.
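A hedged sketch of a GIN-backed full-text search, with hypothetical table and column names:

    CREATE TABLE articles (id bigserial, body text);

    -- Full-text search accelerated by a GIN (inverted) index on the tsvector expression
    CREATE INDEX articles_fts_idx ON articles
        USING GIN (to_tsvector('english', body));

    SELECT id FROM articles
    WHERE to_tsvector('english', body) @@ to_tsquery('english', 'postgres & oracle');

The @@ match operator can use the GIN index, which is what makes the search fast on large text sets.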
In the Oracle world, we are used to DDL committing implicitly; for example, creating a new table performs an internal commit, and rollback is not available in that scenario. Within PostgreSQL, DDL is transactional: a single transaction can include both DDL and DML, and everything in your script will roll back if there is a failure. Being able to rerun a clean process can often be beneficial.
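For example, a small sketch of transactional DDL, with a hypothetical table name:

    BEGIN;
    CREATE TABLE staging_accounts (id bigint, balance numeric);
    ALTER TABLE staging_accounts ADD COLUMN opened date;
    INSERT INTO staging_accounts VALUES (1, 100.00, '2019-01-01');
    -- Any failure before the end rolls back the DDL as well as the DML
    ROLLBACK;   -- after the rollback, the table does not exist at all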
If we make a guess, we might say that 50% of all SQL scripts written to support application code use SYSDATE or ROWNUM. Common SQL extensions and DBA favorites, such as the mysterious DUAL table, are also present.
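As a hedged example of that compatibility, Oracle-style statements like the following are intended to run unchanged in EDB Postgres Advanced Server (the employees table here is hypothetical):

    SELECT SYSDATE FROM DUAL;

    SELECT employee_name
    FROM employees
    WHERE ROWNUM <= 10;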
Comparing the two technologies often confuses techies who have spent their careers in one particular ecosystem. Aside from the terminology, some overall concepts can be confusing. In the simplest form, we know that PostgreSQL is considered open and Oracle is considered closed. Oracle conceptually runs in an isolated operating system environment of its own, whereas PostgreSQL adapts to and integrates into its surroundings.
For example, it has been said that Oracle is a resource hog, using whatever is available, mainly because it functions like an operating system. PostgreSQL, on the other hand, believes in letting the operating system carry the load rather than reinventing the wheel: "Why do the work when the O/S can do it for me?"
Conceptually, users, roles, and schemas carry different meanings in the two databases but serve the same purpose. In Oracle you have users and roles, whereas PostgreSQL has only roles; a role, however, can be given the ability to log into the database.
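A minimal sketch of that single-role model (the names are made up for the example):

    CREATE ROLE app_readonly NOLOGIN;                 -- behaves like an Oracle role
    CREATE ROLE report_user LOGIN PASSWORD 'secret';  -- behaves like an Oracle user
    GRANT app_readonly TO report_user;                -- membership works like a role grant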
If confusion settles in, Advanced Server can make things more relatable. A short navigation through the database brings things to life with the ability to use the ALL_ and USER_ views, along with some of the most common DBA views.
With these common views, the ability to use standard scripts to tune and monitor the database is available. Some key catalog views, such as pg_stat_statements, pg_stat_activity, and pg_locks, can produce standard session details.
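As a hedged illustration of that kind of standard script, the queries below use only the views named above; exact column names can vary by PostgreSQL version, and pg_stat_statements must be installed as an extension:

    -- Who is connected and what are they running?
    SELECT pid, usename, state, query
    FROM pg_stat_activity
    WHERE state <> 'idle';

    -- Which sessions are waiting on locks?
    SELECT locktype, relation::regclass, pid, granted
    FROM pg_locks
    WHERE NOT granted;

    -- Top statements by total time (requires the pg_stat_statements extension)
    SELECT query, calls, total_time
    FROM pg_stat_statements
    ORDER BY total_time DESC
    LIMIT 5;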
If scripting isn’t your tool of choice, there are GUI options such as pgAdmin, which ships with PostgreSQL and provides good options for monitoring a single node. For a more complete enterprise solution, EDB offers Postgres Enterprise Manager (PEM), whose functionality brings things closer to OEM's offerings. In many environments, the developers’ favorite is Toad for Oracle; having used Toad for many years, there is no fear: with the release of Toad Edge for Postgres, things are really shaping up.
To share or not to share, that is the question. RAC appears to have found a place within the infrastructure of every large corporation. Sometimes the true purpose of a good technology gets lost in the hype. The actual comparison is a matter of shared disk vs. shared nothing.
Understanding your use case is the key. The fundamental purpose of RAC is to provide a high availability cluster with load balancing. For PostgreSQL, if the use case arises for shared disk, the Red Hat Cluster Suite can be implemented. To complete the solution for HA with load balancing, you could make use of the streaming replication that’s native to PostgreSQL, with Pgpool for load balancing, and implement EDB Postgres Failover Manager (EFM) to give you full control over the HA environment.
With a proper HA solution in place, you tend to consider your disaster recovery needs and a possible means of keeping your data geographically dispersed. This can leave you in search of a proper multi-master replication option. Oracle has GoldenGate, which can assist in this area, while for PostgreSQL the EDB Replication Server tool gives you the power of active-active replication with change data capture and features to handle conflict resolution.
With both databases, the deployment options are almost endless. You truly can run the same Postgres everywhere.
Bare metal (Windows, RHEL, CentOS, Linux on Power, SLES, Debian)
Virtualized deployments (VMware)
Container deployments (OpenShift, Kubernetes)
Public/private cloud deployments (AWS, Azure, Alibaba, Google)
An open-source initiative can truly be achieved with a PostgreSQL solution. The price point can be affordable without causing procurement nightmares. Independent of virtualization, a per-core subscription model with no vendor lock-in can sound appealing without the fear of a daunting audit lingering in the shadows.
Sample performance stats show that it’s possible to achieve high TPS, billions of writes, and scaling of concurrent users with some flexibility in database size.
Global mobile ad network
Largest database is 14 TB
1.2 billion transactions a day, 55K transactions per second
400 concurrent users
Analyzes 240 TB of data per day
Online brokerage firm
1 billion writes a day
3,000 transactions per second
800 concurrent users
Global stock trade underwriter
Largest database is 8 TB
6 to 10 million transactions per day
Global consumer financial services provider
Example application database is 2 TB
200K SELECT statements per second
PostgreSQL and Oracle can now be mentioned in the same conversation, as they both share a solid place in the database ecosystem. The key to a successful migration will be to determine the proper use case, evaluate the application, and perform a thorough analysis. EDB has performed a multitude of successful migrations to date, building a deep knowledge base of the comparison and challenges. Comparing the technologies may excite you—or you’ll appreciate the quick installation and setup—but proper planning will result in a successful project.
*This article first appeared in the February 2019 issue of The NoCOUG Journal as a special feature, pages 16-18.

[Section A] Create new/separate user logins for PEM (instead of everyone using the default postgres login)
1) Under the PEM Server Directory branch, navigate to the group/login roles under the server with the pem database.
2) Under your database in the branch, create a new login role (e.g., dbatest).
3) Assign login privileges.
4) Assign membership of pem_user.

[Section B] Create a secure server connection to limit monitoring access
Right click Create new server connection on the Postgres Enterprise Manager Server branch. Specify the name of a 'team'. The team name provided will be used to control access to this server connection (e.g., UAT95, Prod, etc.).

[Section C] Create team roles to limit server access (this can be used to allow specific users to monitor only specific servers)
1) Under the Postgres Enterprise Manager Server branch, navigate to the group/login roles under the database and create a login/group role. Note: the permissions given distinguish whether you are creating a login or a group. A group must have no privileges, only an assigned membership; a login (a user login) has privileges. Create the team role used in Section B ('uat95' or the name you chose).
1b) For the team role, do not assign any privileges; only assign membership of pem_user, pem_admin, or pem_agent for specific access control within PEM.

[Section D] Create a new/separate secure login with restricted access to specific servers
1) Complete the new user login steps from Section A.
2) Assign an additional membership: select the team role name created in Section C (e.g., 'uat95'). This user will only be able to see the PEM server and the servers defined with the role uat95.

[Section E] PEM role definitions
PEM User: Read-only users; they may view dashboards and use tools like the Postgres Expert and Capacity Manager, but they will not be able to install agents or configure the server, directory, alerts, or probes.
PEM Admin: The same read permissions as members of the pem_user role, plus sufficient privileges to configure the server, directory, alerts, and probes.
PEM Agent: Explicitly granted the CREATE ROLE privilege. In addition to the permissions granted through membership in the pem_admin role, the CREATE ROLE privilege allows an administrator to create additional PEM users and to install and register new agents.
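If you prefer SQL over the GUI, here is a rough, hedged sketch of what Sections A, C, and D amount to when run against the PEM server database; the role names (dbatest, uat95) are just the examples above, and the exact grants may differ in your PEM version:

    -- Section A: a separate PEM login instead of sharing the postgres login
    CREATE ROLE dbatest LOGIN PASSWORD 'changeme';
    GRANT pem_user TO dbatest;

    -- Section C: a team role with no privileges of its own
    CREATE ROLE uat95 NOLOGIN;
    GRANT pem_user TO uat95;

    -- Section D: restrict dbatest to the servers tagged with team uat95
    GRANT uat95 TO dbatest;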

Objective
Create a super simple Docker container for running a default installation of EDB Postgres.

Prerequisites
Have Docker installed and working.

Components
Dockerfile - special file that contains the recipe for building a Docker image
Docker image - the result of building a Dockerfile; can stay in your local Docker registry or can be exported elsewhere as a tar file
Docker container - the result of running a Docker image, sort of like a running VM

Steps

mkdir -v ~/docker/epas
cd ~/docker/epas
vi Dockerfile

Here is what to put in your Dockerfile:

#
# This is my EPAS Dockerfile
#

#
# Docker images based on latest CentOS version
#
FROM centos:latest

#
# My email address as the maintainer of this Docker image
#
MAINTAINER <email address>

#
# Commands to install and setup EPAS, done in a single RUN command
# so as not to create unnecessary layers (Google this for more details
# about Docker image layers)
#
RUN rpm -Uvh http://yum.enterprisedb.com/edbrepos/edb-repo-10.0-2.noarch.rpm \
 && export YUM_USER=<yum user> \
 && export YUM_PASSWORD=<yum password> \
 && sed -i "s/<username>:<password>/$YUM_USER:$YUM_PASSWORD/g" /etc/yum.repos.d/edb.repo \
 && sed -i "\/ppas95/,/gpgcheck/ s/enabled=0/enabled=1/" /etc/yum.repos.d/edb.repo \
 && yum -y install ppas95-server

#
# Set the user to be enterprisedb at this point
#
USER enterprisedb

#
# Initialize the cluster datadir with a default enterprisedb password of "enterprisedb"
# and setup pg_hba.conf to allow connections with passwords by adding a catchall
# line at the end of the file
#
RUN echo "enterprisedb" > /tmp/password \
 && /usr/ppas-9.5/bin/initdb -D /var/lib/ppas/9.5/data --pwfile=/tmp/password \
 && echo "host all all 0.0.0.0/0 md5" >> /var/lib/ppas/9.5/data/pg_hba.conf

#
# Default command which will start Postgres in this container
#
CMD ["/usr/ppas-9.5/bin/edb-postgres", "-D", "/var/lib/ppas/9.5/data", "-h", "*"]

Now continuing with steps...

#
# Build the Dockerfile into a Docker image, don't forget the '.' specifying
# the directory of the Dockerfile as current directory. This will do
# a bunch of stuff, basically running all the steps in the Dockerfile
# to create the image.
#
docker build -t my-repo/epas:9.5 .

#
# Now let's start the EPAS container in an interactive terminal so we
# can see the STDOUT
#
docker run -it my-repo/epas:9.5

#
# Congrats! You started the EPAS container. Now let's kill it
# and start it in the background as a daemon with a name.
#
<ctrl>-C
docker run -d --name epas my-repo/epas:9.5
docker ps

#
# Nice, your EPAS container is running. Now, let's exec a command
# inside the container to connect to the database with psql.
#
docker exec -it epas /usr/ppas-9.5/bin/psql -d edb
\conninfo
exit

#
# That is pretty nice, but not so good if we can only connect to EPAS
# from inside the container. How do we export the port so that someone
# outside the container can connect to EPAS? Not hard at all, we just
# map the port inside the container to a port on the host. So, let's
# kill our current container and restart with a port mapping.
#
docker rm -f epas

#
# Start the container mapping port 5444 on the Docker host to port 5444
# inside the container.
#
docker run -d --name epas -p 5444:5444 my-repo/epas:9.5

#
# Now we can connect to the EPAS database from the host via psql or
# your database client of choice.
#
psql -p 5444 -d edb -U enterprisedb

Congratulations! You've created your first Docker container for EPAS. This is just the beginning though.
Docker allows specifying "volumes" so that your container can leverage shared storage. We can also make the container "smart" so that it starts as a master or starts as a replica of another database. More to come in future quickstarts.

Objective
Provide some tips and tricks to set up Docker on whatever platform you are using, and provide a couple of simple test cases to prove that you are ready to move on to more complex Docker examples.
Prerequisites
Have a Mac, Windows, or Linux machine preferably with a recent version of the operating system
Overview
What is Docker? It is a tool for packaging and running applications on any platform. Some like to think of Docker as a way to provide much lighter-weight virtualization options than most other virtualization products. Some like to think of Docker as a way to package applications as Docker images where that same image can run unchanged on Windows, Mac, Linux, or any other platform that supports Docker. At a very technical level, those are probably not correct. But at a logical level, I think they help to show the value well.
A full overview of Docker is beyond the scope of this guide. Please refer to this link to learn more: https://www.docker.com/what-docker#/VM
At a basic level, you are going to create an environment where you can run commands like "docker images" to list the images in your environment (sort of like a VMDK or AMI) and "docker ps" to list the containers (images that are running, sort of like VMs). Docker requires a Linux kernel. If the computer you are using for this is not Linux, don't panic: Docker has ways of installing super lightweight VMs on your Mac or Windows machine that are managed for you, so you don't even need to know that a lightweight Linux VM is actually running and serving as the host for your Docker environment.
Components to Install
There is one primary component, and a second that I recommend so that your Docker containers can export displays (i.e., run GUIs) back to your main computer display. Running a GUI in a Docker container is probably not the best use case for Docker, but I tend to do it more often than I expect.
Install Docker
https://www.docker.com/products/docker
Install an X Server (if you are not on Linux)
Mac Users: XQuartz: https://www.xquartz.org/
Edit /usr/X11/bin/startx to ensure that your X server will listen on TCP port 6000, so that containers can export their displays to it, by following the instructions below (future versions of XQuartz may already have this set correctly).
Look for this:
if [ x`defaults read $X11_PREFS_DOMAIN nolisten_tcp` = x1 ] ; then
    defaultserverargs="$defaultserverargs -nolisten tcp"
Make it this:
if [ x`defaults read $X11_PREFS_DOMAIN nolisten_tcp` = x1 ] ; then
    defaultserverargs="$defaultserverargs -nolisten tcp"
else
    defaultserverargs="$defaultserverargs -listen tcp"
fi
Windows Users: XMing: https://sourceforge.net/projects/xming/
Open Xming Xlaunch and set "-listen tcp" as an "Additional Parameter", see this for more details: https://blog.woopi.org/wordpress/?p=293
Steps to Check Your Setup
1) See if you can run Docker to list running Docker containers: "docker ps", should look like this:
#> docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
#>
2) See if you can list your Docker images (should not be any):
#> docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
#>
3) See if you can pull down the latest CentOS image:
#> docker pull centos
Using default tag: latest
latest: Pulling from library/centos
8d30e94188e7: Pull complete
Digest: sha256:2ae0d2c881c7123870114fb9cc7afabd1e31f9888dac8286884f6cf59373ed9b
Status: Downloaded newer image for centos:latest
#>
4) Now list your images
#> docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
centos latest 980e0e4c79ec 7 days ago 196.8 MB
#>
5) Now, let's run the CentOS image in a container that prints the CentOS version and then exits. This is effectively starting a CentOS container, cat'ing the CentOS version, then stopping and removing the container. A lot happens in the second or two that this runs.
#> docker run --rm -it centos:latest cat /etc/redhat-release
CentOS Linux release 7.2.1511 (Core)
#>
6) Now, let's get a little fancier. We are going to run a container and get inside of it by running bash as our command when starting the container. This is sort of like starting a VM and then SSH'ing into it - sort of. While we are in there, we will install xclock and try to launch it and have the display echo back to our main computer. Keep in mind that you must have XQuartz or Xming running, and you must know the IP address of your host machine.
#> xhost +
access control disabled, clients can connect from any host
#> docker run --rm -it centos /bin/bash
[root@8200ec758a6e /]# yum install -y xclock
Loaded plugins: fastestmirror, ovl
base | 3.6 kB 00:00:00
extras
[... skipping all the install logs ...]
[root@8200ec758a6e /]# export DISPLAY=<YOUR IP ADDRESS>:0
[root@8200ec758a6e /]# xclock
Warning: Cannot convert string "" to type XftFont
[root@8200ec758a6e /]#
If successful, the process above should have opened an xclock window on your main computer display.
Congratulations! You have just barely scratched the surface of starting to run containers with Docker. In the next quickstart, we will create our own custom image with Postgres...

If you are installing EDB on RHEL/CentOS machines, RPMs are the way to go. There are many, many reasons for this, a few of which are:
It's the standard way to install on RHEL derivatives
It uses consistent installation paths
It's easily integrated into Puppet, Chef, and other provisioning tools
Updating is as simple as "yum update" rather than finding and downloading individual installers
Some products (like BART and EFM) are only available via RPMs
RPMs leverage OS dependencies for things like OpenSSL, meaning that you don't have to wait for EDB to patch Postgres in order to get an OpenSSL patch. If you install with the one-click installer, we bundle OpenSSL and you'll need to get those patches from us, which is not the most efficient way.
And many other reasons...

But many of our customers (government, financial, and others) run their databases on machines that cannot reach out to the internet, so they can't get patches from http://yum.enterprisedb.com. So, what are they to do? Well, it turns out that there is an easy and elegant way to address this, and it involves these steps:
Clone the EDB YUM repositories (the example below will do this with reposync)
Copy that clone to your disconnected machines (via one-way transfer, sneaker net, whatever options you have)
Set up a local EDB YUM repository on that disconnected network in one of two ways (both of which will be demonstrated below):
File-based local repository (good for a small number of machines)
HTTP-based local repository (good for a larger number of machines)

Let's get started!

Steps (run as root):

#!/bin/bash
#
# This first set of steps that clone the EDB YUM repository has to be done on
# a machine that is connected to the internet. Can be some desktop you have
# or could be spinning up an AWS VM for a few minutes to clone the EDB YUM
# repository into a zip file that you will transfer to your disconnected network.
#

#
# Setup YUM repository for EDB
#
rpm -Uvh http://yum.enterprisedb.com/edbrepos/edb-repo-9.6-4.noarch.rpm

#
# Set YUM username/password in edb.repo
#
export YUM_USER= <yum user>
export YUM_PASSWORD= <yum password>
sed -i "s/<username>:<password>/$YUM_USER:$YUM_PASSWORD/g" /etc/yum.repos.d/edb.repo

#
# Since we will be pulling RHEL 6 and RHEL 7 RPMs to this machine,
# we need to override the yum directories so that they don't get
# confused. While we are at it, we'll just list the EDB YUM repositories
# in the main yum.conf file so that we don't have to deal with all the
# other YUM repos in /etc/yum.repos.d
#
echo "[main]" > ~/yum.conf
echo "cachedir=/tmp/yum_cache" >> ~/yum.conf
echo "persistdir=/tmp/yum_persist" >> ~/yum.conf
echo "reposdir=/dev/null" >> ~/yum.conf
echo " " >> ~/yum.conf
cat /etc/yum.repos.d/edb.repo >> ~/yum.conf

#
# Make sure that the YUM cache directories are empty (in case
# you run this over and over like I do). If these directories are
# not cleaned out, YUM will try to keep track of what it already
# synched. Normally that is good, but in this case, I always
# want YUM to go back and refresh from the actual repos.
#
rm -rf ~/edb_repos /tmp/yum_cache /tmp/yum_persist

#
# First, we reposync the EDB repo repository which is OS version agnostic because
# it just contains a single RPM with the edb.repo file and the GPG key.
#
reposync -c ~/yum.conf --norepopath -r edb-repos -p ~/edb_repos/edbrepos

#
# Next, let's enable all the EDB repos and hard code the version of RPMs
# we want to be the RHEL 6 x86_64 RPMs.
#
sed -i "s/enabled=0/enabled=1/g" ~/yum.conf
sed -i "s/\$releasever-\$basearch/6-x86_64/g" ~/yum.conf

#
# Now, make sure that the YUM caches are empty, and sync
# each of the repos to the folder that matches what the edb.repo
# defines as the baseurl for each repo.
#
rm -rf /tmp/yum_cache /tmp/yum_persist
reposync -c ~/yum.conf --norepopath -r ppas95 -p ~/edb_repos/9.5/redhat/rhel-6-x86_64
reposync -c ~/yum.conf --norepopath -r ppas94 -p ~/edb_repos/9.4/redhat/rhel-6-x86_64
reposync -c ~/yum.conf --norepopath -r ppas93 -p ~/edb_repos/9.3/redhat/rhel-6-x86_64
reposync -c ~/yum.conf --norepopath -r ppas92 -p ~/edb_repos/9.2/redhat/rhel-6-x86_64
reposync -c ~/yum.conf --norepopath -r ppas91 -p ~/edb_repos/9.1/redhat/rhel-6-x86_64
reposync -c ~/yum.conf --norepopath -r enterprisedb-tools -p ~/edb_repos/tools/redhat/rhel-6-x86_64
reposync -c ~/yum.conf --norepopath -r enterprisedb-tools-testing -p ~/edb_repos/tools-testing/redhat/rhel-6-x86_64
reposync -c ~/yum.conf --norepopath -r enterprisedb-dependencies -p ~/edb_repos/dependencies/redhat/rhel-6-x86_64
reposync -c ~/yum.conf --norepopath -r enterprisedb-xdb60 -p ~/edb_repos/xdb60/redhat/rhel-6-x86_64

#
# Now we are going to synch the RHEL 7 repos, which are not available
# for EPAS 9.1-9.3, so disable those three repos and change the release
# version in the baseurls to 7.
#
sed -i "\/ppas93/,/gpgcheck/ s/enabled=1/enabled=0/" ~/yum.conf
sed -i "\/ppas92/,/gpgcheck/ s/enabled=1/enabled=0/" ~/yum.conf
sed -i "\/ppas91/,/gpgcheck/ s/enabled=1/enabled=0/" ~/yum.conf
sed -i "s/6-x86_64/7-x86_64/g" ~/yum.conf

#
# Clean the YUM cache and synch the RHEL 7 repos to the directories
# that match the baseurls in the edb.repo
#
rm -rf /tmp/yum_cache /tmp/yum_persist
reposync -c ~/yum.conf --norepopath -r ppas95 -p ~/edb_repos/9.5/redhat/rhel-7-x86_64
reposync -c ~/yum.conf --norepopath -r ppas94 -p ~/edb_repos/9.4/redhat/rhel-7-x86_64
reposync -c ~/yum.conf --norepopath -r enterprisedb-tools -p ~/edb_repos/tools/redhat/rhel-7-x86_64
reposync -c ~/yum.conf --norepopath -r enterprisedb-tools-testing -p ~/edb_repos/tools-testing/redhat/rhel-7-x86_64
reposync -c ~/yum.conf --norepopath -r enterprisedb-dependencies -p ~/edb_repos/dependencies/redhat/rhel-7-x86_64
reposync -c ~/yum.conf --norepopath -r enterprisedb-xdb60 -p ~/edb_repos/xdb60/redhat/rhel-7-x86_64

#
# Install createrepo and run it against all the EDB repository directories
# that we just synched. This will create the "repodata" directory that makes
# these folders YUM repositories as opposed to just folders with RPMs.
#
yum install -y createrepo
for i in `ls -d ~/edb_repos/edbrepos ~/edb_repos/*/redhat/*`; do
    createrepo $i
done

#
# Now zip up all the EDB repositories
#
tar -czvf edb_repos.tar.gz edb_repos

#
# Here is where you would copy that zip of repositories to
# whatever disconnected system you want to have these repositories available
# on. You can burn it to disk or transfer it however you like. For simplicity
# of this quickstart, I'm going to keep going forward on this same VM, but in
# reality, the rest of these steps are done on the disconnected machine that you
# transferred this zip file to. To simulate that I'm on a "different" VM, I'm going
# to erase the edb-repo that we installed above.
#
yum erase -y edb-repo

#
# Now, we can unzip these repositories anywhere we like. I'm going to put them
# in an Apache folder because I'm going to show you how to access them via file
# as well as via http if desired and I don't want them in two different locations.
#
mkdir -p /var/www/html
tar -C /var/www/html -xvf ~/edb_repos.tar.gz

#
# Let's first say we just want to access these repositories via the local filesystem
# without any HTTP interface. No problem, install the EDB repo, edit the edb.repo
# to point at the local filesystem path, set the release version to 6 or 7 depending
# on your OS, enable the EPAS 9.5 and Tools repositories, and show that the EDB
# packages are available. Now you can install with YUM just like normal, and as
# long as you update the repositories in /var/www/html/edb_repos when
# patches are available, it will be as if you were connected to yum.enterprisedb.com.
#
yum install -y /var/www/html/edb_repos/edbrepos/edb-repo-9.6-4.noarch.rpm
sed -i "s,baseurl=http://<username>:<password>@yum.enterprisedb.com,baseurl=file:///var/www/html/edb_repos,g" /etc/yum.repos.d/edb.repo
sed -i "s/\$releasever/7/g" /etc/yum.repos.d/edb.repo
#sed -i "s/\$releasever/6/g" /etc/yum.repos.d/edb.repo
sed -i "\/ppas95/,/gpgcheck/ s/enabled=0/enabled=1/" /etc/yum.repos.d/edb.repo
sed -i "\/enterprisedb-tools/,/gpgcheck/ s/enabled=0/enabled=1/" /etc/yum.repos.d/edb.repo
yum list ppas95-server
yum list edb-bart

#
# Now if you have several machines on the disconnected network and don't want to
# copy these repositories to each machine, you can very easily setup an HTTP server
# to host these repositories to other machines. We will do that now by installing httpd,
# starting Apache, removing the EDB repo we installed earlier, installing the new EDB
# repo from the HTTP interface, and updating the baseurls in edb.repo to point to
# our new web enabled EDB YUM repositories. Then, show again that you have access
# to EPAS 9.5 and BART via these new web enabled repositories.
#
yum install -y httpd
chkconfig httpd on
service httpd start
chcon -R -t httpd_sys_content_t /var/www/html
yum erase -y edb-repo
rpm -Uvh http://localhost/edb_repos/edbrepos/edb-repo-9.6-4.noarch.rpm
sed -i "s,baseurl=http://<username>:<password>@yum.enterprisedb.com,baseurl=http://localhost/edb_repos,g" /etc/yum.repos.d/edb.repo
sed -i "s/\$releasever/7/g" /etc/yum.repos.d/edb.repo
#sed -i "s/\$releasever/6/g" /etc/yum.repos.d/edb.repo
sed -i "\/ppas95/,/gpgcheck/ s/enabled=0/enabled=1/" /etc/yum.repos.d/edb.repo
sed -i "\/enterprisedb-tools/,/gpgcheck/ s/enabled=0/enabled=1/" /etc/yum.repos.d/edb.repo
yum list ppas95-server
yum list edb-bart

#
# Congratulations! Now, you can take advantage of all the benefits of RPMs and YUM
# installations even if you do not have direct access to yum.enterprisedb.com. You will need
# at least one machine that is connected to the internet in order to download the repositories
# so that you can transfer them to your other network, of course.
#

Secure, Supported and Enterprise-Ready
Postgres Plus Cloud Database (PPCD) meets the most demanding needs of enterprise-class applications with a rich feature set, elastic scalability, and reliable high availability, all available at your fingertips with an easy-to-use dashboard. Drive your web business's financial transactions, catalog sales, or customer business intelligence. Quickly set up and take down standard development, test, and deployment environments. Support diverse database workloads including ACID-compliant relational information, and built-in NoSQL document and key-value datatypes — all in one database available in the cloud as well as on-premises.

Objective
Quickly (in less than five minutes) install and run EPAS 9.5 on RHEL 6/7 and demonstrate using SQL Protect with that Postgres cluster.
Prerequisites
Running instance of RHEL 6/7 with root access
Overview
SQL Protect is a tool that helps prevent SQL Injection attacks. SQL Protect can be run in three modes:
learn - keeps track of what tables users query to learn what is considered normal
passive - logs suspicious behavior to be looked into, but doesn't stop suspicious queries
active - logs and denies suspicious behavior
The general steps for using SQL Protect (which will be demonstrated below) are as follows:
Configure SQL Protect infrastructure
Add users/roles to be protected
Run in learn mode to learn what tables users normally query
Run in passive mode if you want to log suspicious behavior, but not stop it
Run in active mode if you want to deny suspicious behavior
Regularly view the status table to see what behavior has been reported
Clean/delete the history if deemed not to be an issue
Steps (as root):
#!/bin/bash
#
# Setup YUM repository for installing EPAS
#
rpm -Uvh http://yum.enterprisedb.com/edbrepos/edb-repo-9.6-4.noarch.rpm

#
# Set YUM username/password in edb.repo
#
export YUM_USER= <yum user>
export YUM_PASSWORD=<yum password>
sed -i "s/<username>:<password>/$YUM_USER:$YUM_PASSWORD/g" /etc/yum.repos.d/edb.repo

#
# Enable the EPAS 9.5 repo in edb.repo
#
sed -i "\/ppas95/,/gpgcheck/ s/enabled=0/enabled=1/" /etc/yum.repos.d/edb.repo

#
# Install EPAS 9.5
#
yum -y install ppas95-server

#
# Start the EPAS server and initialize a new Postgres
# data directory using the appropriate mechanism
# for the operating system
#
if [ `cat /etc/redhat-release | grep "release 7" | wc -l` = 1 ]
then
    /usr/lib/systemd/system/ppas-9.5.sh initdb
    systemctl start ppas-9.5.service
elif [ `cat /etc/redhat-release | grep "release 6" | wc -l` = 1 ]
then
    service ppas-9.5 initdb
    service ppas-9.5 start
else
    su - enterprisedb -c "/usr/ppas-9.5/bin/initdb -D /var/lib/ppas/9.5/data"
    su - enterprisedb -c "/usr/ppas-9.5/bin/pg_ctl -w -D /var/lib/ppas/9.5/data start"
fi

#
# Add the sqlprotect shared library, allow users to connect without
# a password (just to make the script below easier) and restart the database.
#
sed -i "s/shared_preload_libraries = '/shared_preload_libraries = '\$libdir\/sqlprotect,/" /var/lib/ppas/9.5/data/postgresql.conf
sed -i "s/peer/trust/g" /var/lib/ppas/9.5/data/pg_hba.conf
service ppas-9.5 restart

#
# Connect to the database as enterprisedb and start setting up the scenario
#
su - enterprisedb -c "psql -d edb"

--
-- Run the sqlprotect SQL to setup the infrastructure for SQL Protect
--
\i /usr/ppas-9.5/share/contrib/sqlprotect.sql

--
-- Create a user that will act as the normal user that
-- our application is connecting as.
--
CREATE USER appuser identified by appuser;

--
-- Turn on SQL Protect in learn mode
--
ALTER SYSTEM SET edb_sql_protect.enabled = on;
ALTER SYSTEM SET edb_sql_protect.level = learn;
SELECT pg_reload_conf();

--
-- Add appuser as a protected user, and show
-- that appuser is the only protected user.
--
SELECT sqlprotect.protect_role('appuser');
SELECT * FROM sqlprotect.edb_sql_protect;
SELECT * FROM sqlprotect.list_protected_users;

--
-- Connect as the application user.
--
\c edb appuser

--
-- Create two tables. T1 will simulate normal tables
-- that should be queried. T2 will simulate a table that
-- should not normally be queried by the app user.
-- You'll notice that in learn mode this is allowed, but in
-- active mode the user can't create new tables.
--
CREATE TABLE t1 (name TEXT);
CREATE TABLE t2 (name TEXT);

--
-- Teach SQL Protect what is normal. Generally this would
-- include much more than a couple of queries; it would include
-- something like running a unit test or integration test that
-- exercises the queries typically done by the system. Notice
-- that we will not be selecting from t2 in learn mode.
--
SELECT * FROM t1;

--
-- Now that the system has learned what is normal, let's
-- become enterprisedb and turn the system into passive
-- mode, meaning that it will notify us of suspicious
-- behavior, but not block it. When reloaded, switch back
-- to the application user.
--
\c edb enterprisedb
ALTER SYSTEM SET edb_sql_protect.level = passive;
SELECT pg_reload_conf();
\c edb appuser

--
-- The same queries work fine, but the last two
-- queries with suspicious where clauses and queries
-- against new tables raise warnings.
--
SELECT * FROM t1;
SELECT * FROM t1 WHERE 'x' = 'x';
DELETE FROM t1;
SELECT * FROM t2;

--
-- Now, when we are pretty sure that only bad behavior will
-- raise errors, we can turn on active mode.
--
\c edb enterprisedb
ALTER SYSTEM SET edb_sql_protect.level = active;
SELECT pg_reload_conf();
\c edb appuser

--
-- Now notice that the suspicious queries are blocked, not
-- just logged.
--
SELECT * FROM t1;
SELECT * FROM t1 WHERE 'x' = 'x';
DELETE FROM t1;
SELECT * FROM t2;

--
-- Now, let's switch back to enterprisedb and look at some of the
-- SQL Protect history that was logged, including how many suspicious
-- events occurred and what they were.
--
\c edb enterprisedb
SELECT * FROM sqlprotect.edb_sql_protect_stats;
SELECT * FROM sqlprotect.edb_sql_protect_queries;

--
-- To clear the history of suspicious behavior, just drop the statistics
-- and logged queries, and note that the stats and logged queries have been removed.
--
SELECT sqlprotect.drop_stats('appuser');
SELECT sqlprotect.drop_queries('appuser');
SELECT * FROM sqlprotect.edb_sql_protect_stats;
SELECT * FROM sqlprotect.edb_sql_protect_queries;

--
-- Congratulations! You have successfully enabled and configured SQL Protect to
-- decrease the risk of SQL Injection attacks!
--
exit
Tips
More detailed information is available in Section 4 of the EDB™ Postgres (Postgres Plus) Enterprise Edition Guide v9.5 available here:
http://www.enterprisedb.com/products-services-training/products/documentation/enterpriseedition
Please note the comments below from the documentation if you are using pg_dump and pg_restore for backup/recovery. The following does not apply if you are doing backup/restore with BART or pg_basebackup.
SQL/Protect uses two tables, edb_sql_protect and edb_sql_protect_rel, to store information on database objects such as databases, roles, and relations. References to these database objects in these tables are done using the objects’ OIDs, and not the objects’ text names. The OID is a numeric data type used by Postgres Plus Advanced Server to uniquely identify each database object.
When a database object is created, Postgres Plus Advanced Server assigns an OID to the object, which is then used whenever a reference is needed to the object in the database catalogs. If you create the same database object in two databases, such as a table with the same CREATE TABLE statement, each table is assigned a different OID in each database.
If you are using pg_dump/pg_restore to back up and restore the database, the restore re-creates the backed-up database objects, and the restored objects end up with different OIDs in the new database than they were assigned in the original database.
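As a small, hedged illustration of the OID behavior described above (t1 is the example table from the script, and sqlprotect.edb_sql_protect is the view queried earlier):

    -- The same CREATE TABLE statement gets a different OID in every database
    SELECT oid, relname FROM pg_class WHERE relname = 't1';

    -- SQL/Protect stores its references by those OIDs, not by object name
    SELECT * FROM sqlprotect.edb_sql_protect;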
Summary
This guide is intended to get you up and started with a simple SQL Protect example as quickly as possible. There is much more that can be done with the tool as documented in the Enterprise Edition Guide listed above.