
Thursday, March 06, 2014

Improving Performance with Zeos and PostgreSql

Last month I was asked whether one of our client-server applications could run from a remote site over the Internet. Sure, why not? Just open a port in your router, point the connection's IP to the external address, and done.

A couple of days later, the same customer called again, telling me that the application was working, but slow as a turtle, even on a 2Mb link. OK, I closed Hacker News and started thinking about how to debug this.

Logging PostgreSql queries

The first thing I did was get a backup of that customer's database (well, a very old one, because the current one is 0.5Tb) and restore it on my PostgreSql 9.2 instance. I did this because, as I wanted to log everything, I needed just one client connected; otherwise I'd get mixed queries from different PCs.
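The configuration snippet didn't survive in this copy of the post; a minimal postgresql.conf fragment that produces the setup described next (every statement logged to files under the data directory) looks like this:

```ini
# in postgresql.conf (on Debian: /etc/postgresql/9.2/main/postgresql.conf)
logging_collector = on    # write server messages to log files
log_statement = 'all'     # log every SQL statement the clients send
```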

To enable the logging, turn on the logging_collector in postgresql.conf and set log_statement to 'all', then restart the postgres service; on Debian-based systems just run "sudo service postgresql restart". Then start using the application and look in the pg_log directory (by default /var/lib/postgresql/9.2/main/pg_log); you'll see postgresql-YYYY-MM-DD_######.log files there.

Ok, once logging_collector was set up, I opened my application, went to one option that just lists data in a grid, double-clicked a row to open a dialog containing further data, then closed the application. I thought a couple of queries would be enough for this task, but I was way wrong.

Let me tell you something about this app: it's an old application started by a small team seven years ago. Initially the team was very design-patterns focused (remember the time when Java was everywhere? We used Delphi, but the Java way). After the initial years we kept adding features, the team changed over time, and the quality control disappeared (after the experience I'm writing about here, things will change).

Going back to the main topic: what I found in the log files was that the app was doing not two queries, but forty. Yes, 40! And that wasn't the only strange thing I found in the logs; there were queries we didn't write, and some warnings I had to get rid of (I'll explain this at the end of this article).

The process of improving this started with removing the unwanted queries, which looked like this:
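The log excerpt itself was lost from this copy; the unwanted statements were Zeos metadata lookups against PostgreSql's system catalogs, something along these lines (reconstructed for illustration, not copied from the log):

```sql
-- Typical driver metadata lookup: column definitions for a table
select a.attname, a.atttypid, a.attnotnull
  from pg_catalog.pg_attribute a
  join pg_catalog.pg_class c on c.oid = a.attrelid
 where c.relname = 'customers'
   and a.attnum > 0;
```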

I asked in the Zeos Forum and was told that TZConnection has a UseMetadata property set to True by default, so setting it to False fixed the issue. Now the query count was about 30, around 20% fewer.
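In code, assuming a TZConnection component named ZConnection1, the fix is a one-liner:

```pascal
// Stop Zeos from issuing catalog/metadata queries on its own
ZConnection1.UseMetadata := False;
ZConnection1.Connect;
```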

The next step was to look for some repeating patterns in the log file, and I found many queries similar to this:

select varvalue from config where varname='REFRESH_TIME';
select varvalue from config where varname='MAX_DISPLAY_ROWS';
...

As you can see, we store configuration values in the "config" table and query values on demand. This sounds OK, but is it a smart way of doing it? Do config params change in real time? Surely not; there must be a better way.

Luckily all the configuration was encapsulated in a class, with properties like TConfig.VarValue, so all the changes to be made were there, inside this class.

The solution to this problem was to maintain a cache of variables: we added an internal TCollection that is loaded with all the config when the application starts, plus a LastAccessTime property containing the last time a variable was accessed; if more than 10 minutes have passed, we reload the collection.
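A minimal sketch of that cache (class and member names are illustrative, not the real app's):

```pascal
uses DateUtils;

type
  TConfigItem = class(TCollectionItem)
  public
    VarName: string;
    VarValue: string;
  end;

  TConfigCache = class
  private
    FItems: TCollection;          // loaded at startup, refreshed on demand
    FLastLoadTime: TDateTime;
    procedure ReloadFromDatabase; // runs: select varname, varvalue from config
  public
    function GetValue(const AVarName: string): string;
  end;

function TConfigCache.GetValue(const AVarName: string): string;
var
  i: Integer;
begin
  // Re-read the config table only if the cache is older than 10 minutes
  if MinutesBetween(Now, FLastLoadTime) > 10 then
    ReloadFromDatabase;
  Result := '';
  for i := 0 to FItems.Count - 1 do
    if TConfigItem(FItems.Items[i]).VarName = AVarName then
      Exit(TConfigItem(FItems.Items[i]).VarValue);
end;
```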

This single change removed more than 15 queries; now there are only 15 queries left to improve.

Our application has a very neat feature: per-grid column configuration, which allows an administrator to configure which columns (and their titles, widths, etc.) must be displayed. This information was loaded every time a grid was refreshed: once when the window is opened, then every 30 seconds. Usually each screen has one main grid and two or three subgrids, each with this functionality, and all of them are refreshed at the same time. We removed the re-loading at refresh time, eliminating three queries (for this screen alone); now there are 12 queries left.

Some time ago, we implemented "pagination" in our grids; this way we limited the number of records retrieved from the database, and as some queries loaded more than 10k records, this was a must-have for us. We implemented it this way:

1º Having a query, for example "select * from customers", first we needed the total count:

select count(*) from (select * from customers) as foo;

This allowed us to get the count of any query, very neat at the time.

2º Execute the real query with a Limit and Offset:

select * from customers limit 100 offset 100;

This way, we used two queries to display "XXXX records of YYYYY". It was a big improvement over what we had initially, but it forced us to use two queries.

Since PostgreSql 8.4, a very neat feature called window functions is available, allowing tasks that were impossible with former versions; one of those tasks is getting the total record count even when a "limit" is imposed, for example:
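The example was lost from this copy of the post; with the count(*) over () window aggregate, every returned row carries the total count of the unlimited query, so the two queries above collapse into one (again using the example "customers" table):

```sql
-- One query returns both the page and the total record count
select *, count(*) over () as total_count
  from customers
 limit 100 offset 100;
```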

These were the warnings I mentioned at the beginning: they showed the database was escaping everything with a backslash, and since our blob fields were encoded, they were full of backslashes. Getting rid of this encoding meant significantly less data sent over the wire.

Again, asking in the Zeos Forum, I was told what to do: just add doPreferPrepared to the Options of the TZQuery objects and done! This only works if you use parametrized queries; luckily we use them in all of our queries.
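As a sketch, for a TZQuery named ZQuery1 (the query text and parameter below are illustrative):

```pascal
// Prefer server-side prepared statements for this query
ZQuery1.Options := ZQuery1.Options + [doPreferPrepared];
ZQuery1.SQL.Text := 'select * from customers where id = :id';
ZQuery1.ParamByName('id').AsInteger := 42;
ZQuery1.Open;
```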

Final words

The optimization process didn't finish here but, to avoid making the customer wait any longer, we branched the app and delivered this quicker version. I must admit we are very happy with the results. Surely we could have created a web version, but as our customer's budget is thin and they needed to use the application within two weeks, I think we did the right thing.

Monday, September 09, 2013

Lazarus + PostgreSql's JSON (your secret weapon)

Hi, today I read the news about the new PostgreSql 9.3 and, as usual, I went directly to the Release Announcement; pretty impressive stuff. The part that caught my attention was the JSON-related features, so I read a little more here and couldn't resist the desire to do some tests.

So, the first thing was installing the brand new version on my Ubuntu 12.04:
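The commands didn't survive in this copy of the post; the usual route at the time was the PGDG apt repository (Ubuntu 12.04 is the "precise" release; adjust for yours):

```shell
# Add the PGDG apt repository and install PostgreSql 9.3
sudo sh -c 'echo "deb http://apt.postgresql.org/pub/repos/apt/ precise-pgdg main" \
  > /etc/apt/sources.list.d/pgdg.list'
wget --quiet -O - https://www.postgresql.org/media/keys/ACCC4CF8.asc | sudo apt-key add -
sudo apt-get update
sudo apt-get install postgresql-9.3
```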

Now, from your Lazarus/fpc program you can easily get the JSON string.
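A sketch with Free Pascal's SQLdb (a TSQLQuery wired to a TPQConnection is assumed, and the "customers" table is an example): PostgreSql builds the JSON server-side, and the client just reads one string field.

```pascal
// array_to_json(array_agg(row_to_json(t))) turns the whole resultset
// into a single JSON array of row objects
Query.SQL.Text :=
  'select array_to_json(array_agg(row_to_json(t))) ' +
  'from (select * from customers) t';
Query.Open;
JsonStr := Query.Fields[0].AsString;  // the complete resultset as JSON
Query.Close;
```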

One possible usage that comes to my mind is a CGI app that returns a resultset to an ExtJS grid. The CGI just executes the query and returns the string in the TResponse handler with ContentType "application/json".

Wednesday, March 06, 2013

Quick CGI SpellChecker

Here's a small CGI program that provides spellchecking services based on GNU Aspell. I use it on a local network where Delphi Win32 clients connect to this CGI, hosted on an Apache server running on Linux, to ask for spelling suggestions.

This simple spellchecker has a small API that does the very basics, that is: check a word's spelling, add a word to the dictionary, and delete a word from the dictionary. The client side can be done in any language capable of doing HTTP GET requests and handling JSON responses.
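Under the hood the CGI drives Aspell's Ispell-compatible pipe mode ("aspell -a"); you can try it by hand (the --personal path is illustrative):

```shell
# Feed words on stdin; aspell prints a version banner, then one line per word:
#   "*"                               -> the word is correct
#   "& word N offset: s1, s2, ..."    -> misspelled, with suggestions
echo "helo" | aspell -a --lang=en --personal=./personal.en.pws
```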

Prerequisites:

Of course, the first prerequisite is to install Aspell and one or more dictionaries. On apt-get based systems, you'll install them like this:

sudo apt-get install aspell
sudo apt-get install aspell-en

The next step is to create a personal dictionary; this is just a plain text file where new words will be added. Initially the file must contain just one line:

personal_ws-1.1 en 0

If the dictionary will hold, for example, Spanish words, replace "personal_ws-1.1 en 0" with "personal_ws-1.1 es 0" and install the Spanish dictionary:

sudo apt-get install aspell-es

IMPORTANT: please set read/write permissions on the file for everyone, so the web server user can add words to it.
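Putting the last two steps together (the file name and location are illustrative; use whatever path your CGI reads):

```shell
# Create the personal dictionary with its mandatory header line
echo "personal_ws-1.1 en 0" > personal.en.pws
# Allow read/write by everyone, so the web server user can update it
chmod a+rw personal.en.pws
```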