On Friday, November 16, 2012 7:52 PM Cédric Villemain wrote:
> Le vendredi 16 novembre 2012 15:08:30, Amit Kapila a écrit :
> > On Thursday, November 15, 2012 8:18 PM Amit kapila wrote:
> > > On Wednesday, November 14, 2012 12:24 AM Robert Haas wrote:
> > > On Mon, Nov 12, 2012 at 10:59 PM, Amit kapila
> > > <amit(dot)kapila(at)huawei(dot)com>
> > >
> > > wrote:
> > > > Uh, no, I don't think that's a good idea. IMHO, what we should do
> is:
> > > >
> > > > 1. Read postgresql.conf.auto and remember all the settings we saw.
> > > > If
> > >
> > > we see something funky like an include directive, barf.
> > >
> > > > 2. Forget the value we remembered for the particular setting being
> > >
> > > changed. Instead, remember the user-supplied new value for that
> > > parameter.
> > >
> > > > 3. Write a new postgresql.conf.auto based on the information
> > >
> > > remembered in steps 1 and 2.
> > >
> > > Attached patch contains implementation for the above concept.
> > > It can be changed to adapt the write file based on GUC variables as
> > > described by me yesterday or in some other better way.
> > >
> > > Currently, to remember and forget, I have used the below concept:
> > > 1. Read postgresql.auto.conf into memory.
> > > 2. Parse the GUC file for the exact location of the changed variable.
> > > 3. Update the changed variable in memory at the location found in step 2.
> > > 4. Write out postgresql.auto.conf.
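The read/remember/forget/write steps quoted above can be sketched as follows (a minimal illustration in Python; the actual patch is C code inside the server and must also handle quoting, comments, and rejecting include directives):

```python
# Sketch of the read-remember-forget-write cycle for postgresql.auto.conf.
# Illustration only; format handling is deliberately simplified.

def change_auto_conf(lines, name, value):
    """Return new file lines with `name` set to `value`."""
    out = []
    found = False
    for line in lines:
        stripped = line.strip()
        # Remember existing settings; forget the one being changed.
        if stripped and not stripped.startswith("#"):
            key = stripped.split("=", 1)[0].strip()
            if key == name:
                out.append("%s = '%s'" % (name, value))
                found = True
                continue
        out.append(line)
    if not found:
        out.append("%s = '%s'" % (name, value))
    # The caller then writes these lines back out as the new file.
    return out

old = ["# Do not edit this file manually", "work_mem = '4MB'"]
new = change_auto_conf(old, "work_mem", "8MB")
print(new)
```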
> > >
> > > Overall changes:
> > > 1. Include dir in postgresql.conf at time of initdb.
> > > 2. New built-in function pg_change_conf to change configuration settings.
> > > 3. Write the file as per the logic described above.
> > >
> > > Some more things left are:
> > > 1. Transactional capability to command, so that rename of .lock file
> > > to .auto.conf can be done at commit time.
> >
> > About transaction capability, I think it will be difficult to implement
> > inside a transaction block, because during Rollback to savepoint it is
> > difficult to roll back (delete the file), as there is no track of the
> > changes with respect to the savepoint.
>
> not a problem.
> consider that pseudo code:
>
> begin serializable;
>
> update pg_settings; -- or whatever the name of the object (can include
> creation of a table, etc...)
>
> savepoint...
>
> update pg_settings;
>
> rollback to savepoint;
>
> commit; <-- here the deferred trigger FOR STATEMENT on pg_settings is
> fired and is responsible to write/mv the/a file.
>
> Is there something obvious I'm not seeing ?
I think transaction handling is better with the way you are mentioning.
Here is what I am able to make of your idea:
1. Have a system table pg_global_system_settings(key, value).
2. On SQL command execution, insert the value if it doesn't exist, or update
it if it already exists.
3. On commit, a deferred trigger will read the rows from the table and write
all of them to a .auto flat file.
4. In the deferred trigger, we may need to take a lock while writing to the
file, so that two backends writing at the same time do not garble the file.
I am not sure whether the lock is required or not?
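On point 4, one common way to keep readers from ever seeing a half-written file is to write to a temporary file and atomically rename it into place, which also matches the ".lock file renamed at commit" idea mentioned earlier in the thread. A hedged sketch in Python (the server would do this in C, and some lock would still be needed to serialize two backends' read-modify-write cycles):

```python
import os
import tempfile

def write_settings_atomically(path, settings):
    """Write key/value settings to `path` via temp file + rename, so
    readers never see a half-written file.  A lock is still required to
    serialize concurrent read-modify-write cycles by two backends."""
    dirname = os.path.dirname(os.path.abspath(path))
    fd, tmp = tempfile.mkstemp(dir=dirname, suffix=".lock")
    try:
        with os.fdopen(fd, "w") as f:
            for key, value in sorted(settings.items()):
                f.write("%s = '%s'\n" % (key, value))
            f.flush()
            os.fsync(f.fileno())   # make sure contents hit disk first
        os.rename(tmp, path)       # atomic on POSIX filesystems
    except Exception:
        os.unlink(tmp)
        raise

write_settings_atomically("postgresql.auto.conf",
                          {"work_mem": "8MB", "shared_buffers": "128MB"})
print(open("postgresql.auto.conf").read())
```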
Advantages of this approach:
1. Transaction handling can be better.
2. Updating a config value's row can be easier.
Problem which needs thought:
Synchronization between the flat file .auto.conf and the system table.
Case-1
a. During commit, we write into the file (deferred trigger execution)
"before" marking the transaction as committed.
b. After writing to the file, any error or system crash means the table and
the file will have different contents.
Case-2
a. During commit, we write into the file (deferred trigger execution) "after"
marking the transaction as committed.
b. Any error or system crash before the write into the file can lead to
different contents in the table and the flat file.
Resolution
Maybe during recovery we can try to make the table and the file consistent,
but it can be tricky.
Comparison with the approach I have implemented:
1. Because it will be done in serializable isolation, two users trying to
modify the same value will get an error.
However, in the current approach, the user will not get this error.
2. The lock time will be shorter with the system-table approach, but I don't
think it will matter, because this is a rarely used command.
I think some other people's thoughts are also required, to see whether there
is any deeper design issue that I could not see in this approach, and whether
it can clearly score over the approach with which it is currently
implemented (operating directly on a file).
Suggestions/Thoughts?
With Regards,
Amit Kapila.