Then I get for a binary read call: 0.59 seconds, which corresponds to 460 MB/s. This could very well be the max SSD read speed.

Edit: Found the disk specs: Samsung 850 EVO 500 GB, sequential read speeds up to 540 MB/s.

Additional code for this:
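A minimal sketch of what such a one-shot binary read could look like (the filename products.csv and the timing wrapper are assumptions, not the original code):

Dim As Double t0 = Timer
Dim As Integer f = FreeFile
Open "products.csv" For Binary Access Read As #f
Dim As String buffer = Space(Lof(f))   ' one buffer the size of the whole file
Get #f, 1, buffer                      ' single binary read of the entire file
Close #f
Print "Read "; Len(buffer) \ (1024 * 1024); " MB in "; Timer - t0; " seconds"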

So now a routine to search this big binary blob for the right barcode is needed...

Also found an external USB 2.0 160 GB disk. It's heavy. Let's see if it still works...

Edit: Judging by its weight, I would expect a nuclear power plant inside it, but no, I had to go look for the power adapter. It still works...

Edit: 14 MB/s now, much better :-)
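A sketch of how that blob search could go, continuing from the buffer loaded above (variable names like BarcodeNumber are placeholders; it assumes the barcode is the first field of each CSV line, and the very first line of the file would need a separate check, omitted here):

Dim As Integer pos = InStr(buffer, Chr(10) & BarcodeNumber & ",")
If pos > 0 Then
    Dim As Integer lineEnd = InStr(pos + 1, buffer, Chr(10))
    If lineEnd = 0 Then lineEnd = Len(buffer) + 1
    ' extract the full CSV record surrounding the match
    Dim As String record = Mid(buffer, pos + 1, lineEnd - pos - 1)
    Print "Found: "; record
End If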

Even if you're already rocking a fast SSD (one of the best upgrades you can make), you can still improve your computer's performance by adding more memory and turning part of it into a RAM disk, which can be as much as 70 times faster than a regular hard drive, or 20 times faster than an SSD.

The data has to get into the RAM disk from the slower disk, and back to the disk at the end. So having your program simply load the database into memory, and save it back out at the end, works out to the exact same speed as a RAM disk; if the database fits on a RAM disk, it would certainly fit into memory. On the other hand, if you had several programs or processes that wanted to use the database files, a RAM disk could be a fast way to do that. Although on modern OSes, any files shared between processes are cached and memory-mapped anyway, so the effect is the same.

My conclusion from this is: follow MrSwiss' advice, buy SSDs? Install a modern OS? At least under Linux there is no speed gain from reading the whole file at once. Probably the OS is smart enough to read more data than what is needed for one "Input #ProductFileNumber, etc." call. Not sure how FreeDOS handles this.

If the OS does not cache, then do it yourself: read all data at start, or after a button press. When adding/updating an item, add it to both the cache (memory) and the file. That becomes a problem if multiple clients need the same data, though; then you have to keep track of the changes somehow. A sketch of this write-through idea follows.
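A minimal sketch of that write-through cache (the Product type, its fields, and AddProduct are all hypothetical names, not from the original code):

Type Product
    barcodenumber As String
    description   As String
End Type

Dim Shared Products() As Product    ' in-memory cache, filled once at startup

Sub AddProduct(ByRef p As Product, ByVal filename As String)
    ' update the cache...
    ReDim Preserve Products(UBound(Products) + 1)
    Products(UBound(Products)) = p
    ' ...and append the same record to the CSV file, keeping both in sync
    Dim As Integer f = FreeFile
    Open filename For Append As #f
    Print #f, p.barcodenumber; ","; p.description
    Close #f
End Sub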

As you can see, for now I have commented out some of the functions. All of the fields starting with Product_ have been declared at the start of the module so they can be accessed throughout it (this module is called Database.bi).

I would like to find a solution for this, as my NPoS application (one that is designed to run on Linux and Windows) also uses CSV as its data structure, so a better approach would help there too (I am looking into direct MySQL support for the version that runs on Linux, so it can access the MySQL server).

7000 records is very small. Even if you opened the file and read it one record at a time each time you needed to query it, it would still take a fraction of a second (disk caching would essentially make it an in-memory operation). So something isn't quite right with your algorithm.

Do Until EOF(ProductFileNumber)
    Input #ProductFileNumber, Product_....
    If Trim(BarcodeNumber) = Trim(Product_barcodenumber) Then
        ProductFound = 1
        CloseAllFiles
        Exit Do
    End If
Loop

Do the following (a sketch follows the list):
- read one full line at a time
- check if the string contains the barcodenumber
- then parse the string to get the other items
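One possible way to code that up (filename and variable names are assumptions; it also assumes a well-formed CSV with the barcode in the first field):

Dim As Integer f = FreeFile
Dim As String rawLine
Open "products.csv" For Input As #f
Do Until EOF(f)
    Line Input #f, rawLine                      ' read one full line
    If InStr(rawLine, BarcodeNumber) > 0 Then   ' cheap substring test first
        ' only a matching line gets parsed: pull out the first field
        Dim As Integer comma = InStr(rawLine, ",")
        If comma > 0 AndAlso Trim(Left(rawLine, comma - 1)) = Trim(BarcodeNumber) Then
            ProductFound = 1
            Exit Do
        End If
    End If
Loop
Close #f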

That will work in under a second. If the parsing looks too complicated:
- open the file in binary mode
- store the position before the Line Input #1, string
- if the line contains the barcode, Seek #1, oldposition, then do the Input #ProductFileNumber, Product_.... etc. (sketched below)
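A sketch of that seek-back variant (names are placeholders; the two Product_ fields stand in for whatever the real record contains):

Dim As Integer f = FreeFile
Dim As String rawLine
Dim As LongInt oldPosition
Open "products.csv" For Binary Access Read As #f
Do Until EOF(f)
    oldPosition = Seek(f)           ' byte position before reading the line
    Line Input #f, rawLine
    If InStr(rawLine, BarcodeNumber) > 0 Then
        Seek #f, oldPosition        ' jump back to the start of that line
        ' let Input # do the field parsing on the one matching line
        Input #f, Product_barcodenumber, Product_description
        ProductFound = 1
        Exit Do
    End If
Loop
Close #f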

Reading a bunch of parameters for every line is indeed very slow; that's why I proposed the other solution above. It would be good to analyse where your code is slow, but it throws so many build errors that my enthusiasm to dig deeper is very low.