Most VFP developers begin their careers by developing small desktop apps meant to be used by single users on a single machine. Indeed, I have several of these applications running right now. The likelihood of there ever being more than one user accessing the data at one time is remote. On the other hand, there is a possibility that as the organization grows, you will eventually want that application to be able to handle multiple users. Let’s face it, the whole idea behind using a database is that you have a central repository for data that can be used by a number of users.

Before you can turn your app into a multi-user app, you will need to make some very basic changes:

1 - In your Main.Prg, you must SET EXCLUSIVE OFF. As the name suggests, if you SET EXCLUSIVE ON, you are asking that your database and tables be opened for the exclusive use of this user. That is the opposite of what you are trying to do here.

2 - When you open tables or databases, use the SHARED clause: OPEN DATABASE MyDatabase.DBC SHARED or USE MyTable SHARED.

3 - If you are dropping tables onto the form’s Data Environment, then you need to ensure that you set the .Exclusive property for that table to .f.
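Taken together, these settings typically land at the top of Main.Prg. A minimal sketch (the database and table names are placeholders):

```foxpro
* Main.Prg - environment setup for shared data access
SET EXCLUSIVE OFF                       && never demand exclusive use of tables
OPEN DATABASE MyDatabase.DBC SHARED     && open the database container shared
USE MyTable IN 0 SHARED                 && open a table shared, in a new work area
```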

Now, there are a number of commands that require exclusive use of the tables. For example, PACK requires exclusive use of the table. “But wait,” you say. “If I cannot open the table exclusively, how can I PACK the table?” The answer is: “you don’t use the PACK command.” In your Main.Prg, you need to issue SET DELETED ON. This prevents your users from seeing deleted records – VFP simply “ignores” them so they do not appear for the users. Note that SET DELETED is one of those commands that is “scoped” to the data session. If you are only working in the default data session, a single SET DELETED command in your Main.Prg will suffice. On the other hand, if you are using a private data session in a form or report, you will need to double-click the data environment to bring up its code window, find the .BeforeOpenTables() method, and add the SET DELETED ON command to that method.
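For a form that uses a private data session, the fix is one line in the data environment. A sketch:

```foxpro
* DataEnvironment.BeforeOpenTables()
* A private data session starts with its own default SETtings,
* so the setting must be repeated here.
SET DELETED ON
```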

Again, “But wait” you say, “I eventually need to get rid of those deleted records! I don’t want my tables half filled with deleted records!” At some point, you will need to PACK these tables – but you do not need to do this when there are other users trying to access the data. I use something called a “Pack and Re-index” utility. This is a separate EXE file that runs as a scheduled (using Windows Scheduler) application once a month in the middle of the night when there are no other users on the system. Yes, in this application, I open the tables exclusively. Once the pack and re-index is done, it closes all of the tables, and the next morning, the users can go back to shared access.
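A pack-and-re-index utility can be very small. The sketch below shows the core of the idea (the table name is a placeholder; a production version would loop over all tables and handle the case where a user has left a table open):

```foxpro
* PackTables.prg - scheduled to run off-hours via Windows Scheduler
SET EXCLUSIVE ON
USE MyTable EXCLUSIVE      && fails if any user still has the table open
PACK                       && physically remove the deleted records
REINDEX                    && rebuild the index tags while we have the table
USE                        && close the table; shared access resumes in the morning
```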

Another of these “basics” is the SET MULTILOCKS ON command. You will probably want to be able to modify several records at a time. This being the case, you need to be able to lock more than one record at a time. SET MULTILOCKS ON in your Main.prg. Note that this is another of those commands that is scoped to the data session so if you are using a private data session in your form, you will need to add the SET MULTILOCKS ON to the data environment’s .BeforeOpenTables() method.

For the most part, the real difference between a multiple user app and a single user app is something called “buffering.” As the name suggests, the buffer is something that sits between the user and the database. The user enters the data into a buffer and then later, “commits” that data from the buffer to the database. Note the following:
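A minimal sketch of this form-as-buffer idea, assuming a table MyTable and unbound textboxes named txtFirstName and txtAddress (all names are placeholders):

```foxpro
* Form.Init() - copy field values into unbound controls
thisform.txtFirstName.Value = MyTable.FirstName
thisform.txtAddress.Value   = MyTable.Address
```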

This actually sets up a type of buffer. In this case, the form itself acts as a buffer. When the form is run, the user may modify the data in these controls. Now, when you wish to save this data back to the original tables, you need a save routine, so you do something like this:
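A save routine for this approach might look like the following sketch (again, the control and field names are placeholders):

```foxpro
* Form.Save() - write the control values back to the table
IF RLOCK([MyTable])           && lock the record before writing
   REPLACE MyTable.FirstName WITH thisform.txtFirstName.Value, ;
           MyTable.Address   WITH thisform.txtAddress.Value IN MyTable
   UNLOCK IN MyTable
ELSE
   MESSAGEBOX("Another user has this record locked. Try again later.")
ENDIF
```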

(Note: Another variant of this concept is the SCATTER/GATHER method that we became familiar with in FP 2.X. The SCATTER command used a memory variable as a buffer. The GATHER command then took the data out of the memory variable and stored it in the table.)
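For those who remember it, the SCATTER/GATHER pattern looked roughly like this:

```foxpro
USE MyTable SHARED
SCATTER MEMVAR            && copy the current record into m.FirstName, m.Address, ...
m.FirstName = "Kenneth"   && edit the memory variables, not the table
GATHER MEMVAR             && write the memory variables back to the record
```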

While this will work in a multi-user environment, it is a heck of a lot of work. You first have to create a method to populate the form controls’ .Value properties and then you have to create a method to save the data back into the table. What if there were a way to accomplish this without having to create and maintain all this code? Thankfully, VFP gives us this capability. Start by reading the help file topic “Buffering Data.”

VFP gives you two basic types of buffers – row buffers and table buffers. A row buffer only deals with one record at a time. When you move the record pointer, VFP will issue an implicit TABLEUPDATE() and the data will be saved to the table. While this sounds wonderful, it actually isn’t very good at all. What happens if the TABLEUPDATE() fails? TABLEUPDATE() returns a .T. or a .F. upon success or failure. You need to explicitly issue a TABLEUPDATE() and check the return value. If you let the record pointer move trigger an implicit TABLEUPDATE(), how are you going to check the return value? Bottom line, you ALWAYS need to explicitly issue the TABLEUPDATE():

IF NOT TABLEUPDATE(.T., .F., [MyTable])
   && The update failed so do something
ENDIF

The other type of buffer is the table buffer. This type of buffer allows you to store changes to several records in that buffer. For example, if you were dealing with invoice details, you could create a number of invoice detail records in the buffer, and then when you are done, you issue a single TABLEUPDATE() and it saves all of those records to the table. The downside with table buffering is that you have to explicitly issue the TABLEUPDATE(). You need to issue the TABLEUPDATE() anyway – that is the only way that you are going to be able to check the return value, so there really isn’t any downside here. If you think about it, you can use table buffering with changes to a single record or to hundreds of records, so what is the point of using record buffering? Table buffering will do anything that record buffering will do, but the reverse is not true. I always use table buffering.
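To make the invoice-detail example concrete, here is a sketch of several buffered records being saved with one call (it assumes InvoiceDetail is already open with table buffering enabled):

```foxpro
* Create three detail records in the buffer, then save them all at once
FOR lnItem = 1 TO 3
   APPEND BLANK IN InvoiceDetail
   REPLACE InvoiceDetail.Quantity WITH lnItem IN InvoiceDetail
ENDFOR
IF NOT TABLEUPDATE(.T., .F., [InvoiceDetail])   && one call commits every buffered row
   && the update failed so do something
ENDIF
```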

(Note: You should now be able to see why I always use SET MULTILOCKS ON. I always use table buffering and if I wish to be able to modify more than one record, I need to be able to have multiple record locks.)

When you save changes to a record, you must lock that record. It would not do to have some other user trying to save his/her changes at the same time that you are. You could end up with the first half of the record containing your changes and the second half of that record containing that other user’s changes or worse – you could have a field containing half of the data from one user and the other half from you.

When you add a new record to a table, you need to lock the entire table. A new record means that the record count will be increased and therefore the table’s header must be modified (the header is where VFP stores the record count). There is only one header, so when VFP locks it, it essentially locks the entire table. Thankfully, in VFP you really do not normally have to worry about record or table locking – VFP does it for you.

“Wait a minute, if VFP does this locking for me, why do I care about this?”

I mention this because when you are using buffered tables you need to choose between two methods of locking: Pessimistic and Optimistic. Basically, pessimistic locking is a little more secure (it locks the record while you are making changes to it) but as can be imagined, it holds that lock longer. Optimistic locking only locks the record while it is being written and is therefore a bit faster. I have never had the need to use pessimistic locking.

When you choose the type of buffer to use, you get 5 choices:

1 – No buffering
2 – Record buffering with pessimistic locking
3 – Record buffering with optimistic locking
4 – Table buffering with pessimistic locking
5 – Table buffering with optimistic locking

All I ever use is 5 – table buffering with optimistic locking.

You have two ways to set this. If you are dropping your tables onto a form, you can go into the data environment and set the table’s .BufferModeOverRide property to 5. If you are doing this programmatically, you can use the CURSORSETPROP("Buffering", 5) command. Take a look at the CURSORSETPROP command and the .BufferModeOverRide property in the help file.
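The programmatic route is only a couple of lines (the table name is a placeholder):

```foxpro
USE MyTable SHARED
SET MULTILOCKS ON                           && required before table buffering
CURSORSETPROP("Buffering", 5, "MyTable")    && 5 = table buffering, optimistic locking
```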

Now that you have your tables buffered, take a look at the .ControlSource property in the help file. This is the key to making your forms work with buffered tables. In the example I give above, you need to populate your form control’s .Value property and then when you wish to save the data to the table you have to use a .Save procedure. Take a look at the following:
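A minimal sketch of the .ControlSource approach, assuming a textbox named txtFirstName:

```foxpro
* Form.Init() - bind the control directly to the buffered field
thisform.txtFirstName.ControlSource = "MyTable.FirstName"
* No code is needed to populate .Value or copy it back -
* VFP keeps the control and the buffered field in sync.
```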

Of course, you can also do this visually much faster. If you simply drag the field to your form it will create the textbox, checkbox, editbox, etc. automatically and it will set the .ControlSource property to that field at the same time. You will still need to have a .SaveMethod, but as you can see, the code will be MUCH smaller:
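With the ControlSources in place, the save method shrinks to a single TABLEUPDATE() call:

```foxpro
* Form.Save()
IF NOT TABLEUPDATE(.T., .F., [MyTable])
   && the update failed so do something
ENDIF
```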

This touches on the reasons why a TABLEUPDATE() might fail. Let’s assume that two users are working on the same record. They both open the record; then user1 changes the “FirstName” field from “Ken” to “Kenneth” and user2 changes the Address field from “9 Saint James St.” to “Box 107.” User1 now saves his/her changes and then user2 saves his/her changes. If user2’s save simply overwrote the record, user1’s changes would be lost – the FirstName field would go back to “Ken.” Take a look at the TABLEUPDATE() function in the help file. The second parameter, “Force,” will allow you to overwrite changes made by another user if you set it to .t. If you leave it at the default, .f., your TABLEUPDATE() will fail if another user has saved changes since you began editing that record. If you check the return value of TABLEUPDATE(), you can then decide what to do in this situation. In the example above, I simply ask the user if he/she wishes to save his/her changes and overwrite the other user’s changes.
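Handling that conflict might look like the following sketch: try the normal save first, and only force the update if the user agrees (the prompt text is just an example):

```foxpro
IF NOT TABLEUPDATE(.T., .F., [MyTable])
   IF MESSAGEBOX("Another user has changed this record." + CHR(13) + ;
         "Overwrite his/her changes with yours?", 4 + 32, "Save") = 6   && 6 = Yes
      TABLEUPDATE(.T., .T., [MyTable])   && Force = .T. overwrites the other user's changes
   ELSE
      TABLEREVERT(.T., [MyTable])        && throw away our buffered changes
   ENDIF
ENDIF
```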

There is one last thing to look at – transactions. Begin by reading the help file on BEGIN TRANSACTION. Basically, a transaction is like another layer of buffering. Let’s say that you are working on something like an invoicing application where you have an InvoiceHeader and InvoiceDetail table (or any other situation where you have parent and child tables.) What happens if you TABLEUPDATE() your InvoiceDetail records and it succeeds but when you TABLEUPDATE() your InvoiceHeader record, it fails? You could end up with a situation where you have all of these InvoiceDetail records floating around but no InvoiceHeader record to go with them. In this sort of situation you want to ensure that all tables are updated or none of them are. This is where the transaction comes in. Take a look at the following:
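A sketch of the parent/child save wrapped in a transaction (the table aliases are placeholders):

```foxpro
BEGIN TRANSACTION
IF TABLEUPDATE(.T., .F., [InvoiceDetail]) AND ;
   TABLEUPDATE(.T., .F., [InvoiceHeader])
   END TRANSACTION      && both updates succeeded - commit them
ELSE
   ROLLBACK             && undo whichever update did succeed
   && after the rollback the changes are back in the buffers,
   && so you can fix the problem and retry, or TABLEREVERT()
ENDIF
```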

You basically begin the transaction and then start issuing TABLEUPDATE()s. If you run into a failure, you issue a ROLLBACK. If all the tables update successfully, you issue an END TRANSACTION and your updates will be committed to the tables.

In summary, developing multi-user applications is not really all that difficult – in fact, it is actually quite simple. All you need are a few basic concepts: working with deleted records and SET DELETED ON; buffering and transactions; and using the .ControlSource property. Give it a bit of a play. You can create a small multi-user app and then open up two instances of that application to simulate two users.

Suggested reading: I HIGHLY recommend that you take a look at Andy Kramek’s excellent article “A Guide to Buffering and Transactions in VFP” at his web site: http://www.tightlinecomputers.com/Downloads.htm This well written little article is a MUST READ for anyone who is working with buffering and transactions.

Hope this helps.

ABOUT THE AUTHOR: KEN MURPHY

Ken Murphy is a long time FoxPlus, FoxPro and Visual FoxPro developer who specializes in developing data-centric solutions for charities. Ken posts in a number of forums but can most often be found at www.foxite.com where he is also a site moderator. Ken was recently named as a Microsoft MVP (Visual FoxPro) 2007/2008.
Ken is active in ministry and serves as a Roman Catholic chaplain at his local hospital, nursing home and at the medium security penitentiary Springhill Institution. Ken is currently taking courses towards a Masters degree in Theology and discerning a call to ordained ministry as a Roman Catholic Deacon. Ken and his wife Gail are active in their community and belong to a small theater group "The Pit Players" doing musical dinner theaters as fundraisers for local charities. In the past 5 years, the Pit Players have raised over $60,000 for these charities. Ken is the Musical Director (and sometimes actor) for the Pit Players.
Ken lives in Springhill Nova Scotia, Canada - home of the famous singer Anne Murray, and site of the famous coal mining disasters of 1956 and 1958.

Glenn, you can use a single EXE file stored on the server, or you can install a copy of the EXE on each computer. Leaving the EXE on the server makes it easier for you when you need to roll out a new version: you don't have to go around to each machine with the new version. On the other hand, it does increase your network traffic – each time the EXE is run, it first has to be downloaded from the server. On a slow or overworked network, you probably want to install the EXE on each machine. An option for you might be to store your EXE on the server while beta testing. Once you have a final version, you can then install it on each machine.

Great article, it presented the whole single/multi-user scenario in a nutshell on a single page, WOW! I have been designing such applications for 12 years, but this helped me to clear up some of the misunderstood concepts. Thanx Ken.

This article was written for a VFP backend. With a VFP back end, you get a "file server" application, not a "client server" application. In a client server environment, you obtain a cursor containing the records you want from the server. That cursor now acts as the buffer. Now when you save the data, you save the data from the cursor back to the backend. This is similar to what we used to do with the SCATTER/GATHER commands, but in this case, you are using a cursor rather than a memory variable.

In a client server situation you have three basic choices – you can use a Remote View, a Cursor Adapter, or you can use SQL Pass Through (SPT). Note that both the remote view and the cursor adapter have a SQL SELECT statement as a primary building block. With SPT, you actually "pass" a SQL SELECT "through" to the backend.
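A minimal SPT sketch (the ODBC data source name, table, and column are placeholders):

```foxpro
* SQL Pass Through - the WHERE clause is evaluated on the backend server
lnHandle = SQLCONNECT("MyDataSource")
IF lnHandle > 0
   SQLEXEC(lnHandle, "SELECT * FROM Customers WHERE Country = 'Canada'", "crsCustomers")
   && crsCustomers is now a local cursor holding only the matching rows
   SQLDISCONNECT(lnHandle)
ENDIF
```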

Will this slow down the network? Actually, using a client server solution will reduce your network traffic. In a file server situation, the evaluation of a WHERE or FOR clause happens at the workstation. This means that all of the records might have to be downloaded to the workstation to see if the WHERE or FOR clause is satisfied. (Note that if you have the correct indexes in place, only the index will have to be downloaded to the workstation, but you still have to download the entire index.) In a client server situation, the WHERE clause is evaluated at the server. This means that only the results are downloaded to the client.

Dear friend, I have an application in VFP9 running on a server with Windows XP and 2 users (one Win7 and one Windows Server 2008). Sometimes it happens that records from one user are stored in the reference of another user. It never happened before, and I am using pessimistic table buffering. Is the problem because of the different Windows versions? I must have XP on the server since the salesmen are using old dot matrix printers. Thank you very much. kamal (kamal@mbcc.ae)