In one of my previous posts I outlined how one can use Foreign Data Wrappers to store data in an underlying sharded MongoDB store while keeping Postgres' SQL interface. But this new feature of v9.5, foreign tables participating in table inheritance, is something even more exciting. You can have a sharded Postgres cluster where each shard lives on a different server and a central server (the primary shard) controls data insertion. So far the examples I have seen around primarily use postgres_fdw, but the possibilities are endless. In the 'not so distant future' we may have other FDWs capable of this too, e.g. one of my child tables could live on MongoDB and another in Postgres or on Hadoop.
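To make this concrete, here is a minimal sketch of such a setup. The server name, host, and table names are all hypothetical; the key piece is v9.5's ability to attach a foreign table to a parent with INHERITS:

```sql
-- On the central server: postgres_fdw plus a connection to one shard.
CREATE EXTENSION postgres_fdw;

CREATE SERVER shard1 FOREIGN DATA WRAPPER postgres_fdw
    OPTIONS (host 'shard1.example.com', dbname 'appdb', port '5432');

CREATE USER MAPPING FOR CURRENT_USER SERVER shard1
    OPTIONS (user 'app', password 'secret');

-- The parent table: the one logical table users will query.
CREATE TABLE measurements (
    id      bigint,
    created timestamptz,
    payload jsonb
);

-- A local child holding one slice of the data.
CREATE TABLE measurements_2015 (
    CHECK (created >= DATE '2015-01-01')
) INHERITS (measurements);

-- A foreign child: its rows physically live on the remote shard,
-- but it is part of the same inheritance hierarchy (new in 9.5).
CREATE FOREIGN TABLE measurements_2014 (
    id      bigint,
    created timestamptz,
    payload jsonb
) INHERITS (measurements)
  SERVER shard1
  OPTIONS (table_name 'measurements_2014');
```

A plain `SELECT * FROM measurements` now scans the local child and the remote one transparently.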

Now, don't Hadoop and MongoDB already have sharding capabilities? Why would I mix them up with PostgreSQL? The answer is 'simplicity'. I just want to keep things simple for my users. My users understand SQL pretty well and find it quite intuitive. So I can have my most recent data sitting in Postgres, and as it becomes cold I might start moving it to MongoDB and then to Hadoop. I can use the oldest set of data on Hadoop for complex analytics, such as churning out a model for predictive analysis, and continue to use that data for answering rare queries on historical data. Or I can place each kind of data on the platform best suited to its nature and then logically combine them under one entity (a table) in PostgreSQL (using FDW and child tables).
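Attaching a colder tier through a different FDW looks almost identical. Here is a sketch assuming mongo_fdw, a hypothetical Mongo host and collection, and a hypothetical parent table `measurements`; the exact column layout a given FDW expects may vary, so treat this as illustrative:

```sql
-- A MongoDB-backed child under the same parent, holding an older,
-- colder slice of the data.
CREATE EXTENSION mongo_fdw;

CREATE SERVER mongo_archive FOREIGN DATA WRAPPER mongo_fdw
    OPTIONS (address 'mongo.example.com', port '27017');

CREATE USER MAPPING FOR CURRENT_USER SERVER mongo_archive
    OPTIONS (username 'app', password 'secret');

CREATE FOREIGN TABLE measurements_2013 (
    id      bigint,
    created timestamptz,
    payload jsonb
) INHERITS (measurements)
  SERVER mongo_archive
  OPTIONS (database 'archive', collection 'measurements_2013');
```

The user-facing table never changes; only where each slice physically lives does.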

Whatever your use case, the big advantage is that users (who prefer SQL) will still see this data as one single logical table in a Postgres database. When they fire a query, the Postgres optimizer will figure out whether the rows need to be fetched from the local child table, the child table linked to MongoDB, or the child table linked to Hadoop. There are two other things being discussed and actively pursued in the Postgres world:
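With CHECK constraints on the children, as in a classic partitioned setup, the planner can also skip children that cannot possibly match. A sketch, again assuming a hypothetical `measurements` parent whose children are constrained by the `created` timestamp:

```sql
-- Let the planner prune children using their CHECK constraints.
SET constraint_exclusion = partition;

-- Only children whose constraints admit this date range are scanned;
-- the others are skipped without contacting their servers at all.
EXPLAIN
SELECT * FROM measurements
WHERE created >= DATE '2015-06-01';
```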

Pushdown of aggregates, predicates and joins, i.e. an aggregation query touching a MongoDB-backed child table would be pushed down and executed on the Mongo server itself.
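You can already see what postgres_fdw ships to a remote server today with EXPLAIN VERBOSE; in 9.5 the predicate is pushed down but the aggregate is not, which is exactly the gap this work targets (table name hypothetical):

```sql
-- The 'Remote SQL' line in the output shows the WHERE clause being
-- shipped to the shard, while count(*) is still computed locally;
-- aggregate pushdown would move the count itself to the remote server.
EXPLAIN (VERBOSE)
SELECT count(*)
FROM measurements_2014
WHERE created >= DATE '2014-06-01';
```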

Parallel scans, which means that if the data for my query is spread across different local or foreign child tables, they can be scanned in parallel.

These possibilities make it really interesting. Imagine being able to harness the capabilities of MongoDB and Hadoop in parallel and combine the resulting sets without doing any bit of programming at your end. This all sounds like fiction, but as you track the recent developments and discussions in the Postgres community it all looks possible in the near future. The Postgres community has been striving to stay mainstream and address recent trends and demands in the data world. The community is well positioned to address the needs of hybrid architectures, which are going to pick up very soon. Organizations will look to combine all the bits and pieces of their data, whether 'Big' or 'Large', 'hot' or 'warm', 'BI-data' or 'BA-data', 'Hadoop' or 'Mongo', 'structured' or 'unstructured'. Postgres, with its FDW, inheritance and parallel-scan capabilities, will be a key player not just as the data store but also as the logical data-hub.