Topic: Easiest way to persist data?

For now, I need to find the simplest way to persist data. I tried looking at TZMSQL, but could not even figure out how to install or use it.

Right now I need the simplest approach to store data that will only be used by my application, database or not (hence why I did not post in the database area).

Any suggestions? My initial needs, which will grow later, are:

* Ability to store multiple lines in a file.
* Ability to load data from a file into memory.
* Ability to save data from memory to a file. It is OK to re-write the entire file rather than update it in place.

Use case: My utility, runcontrol, will initially just help some shell scripts avoid executing too often. So I want to save something like:

20170226-17|1

The format is:

YYYYMMDD-HH|#

where "#" is just a number.

For my "phase 1" I just want to limit how many times something runs, so I will be saving the date+hour and how many times it has run so far.
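As a sketch of producing that date+hour key in Object Pascal, SysUtils.FormatDateTime can build it directly; the pipe-separated count is the format described above (note that 'hh' is the 24-hour clock here because no am/pm specifier follows it):

```pascal
program makekey;
{$mode objfpc}{$H+}
uses
  SysUtils;
var
  Key: string;
begin
  // Build the YYYYMMDD-HH part of the control record
  Key := FormatDateTime('yyyymmdd"-"hh', Now);
  WriteLn(Key, '|1');  // e.g. 20170226-17|1 for a first run in that hour
end.
```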

Eventually I will move to a database format, but so far I am spending more time trying to figure out which library to use, and how to use it, than it would likely take me to just do something simple myself. However, I would prefer to at least get familiar with some existing library, both to avoid re-inventing the wheel and to keep getting familiar with what is available.

Any pointers greatly appreciated.


sky_khan

Create a TStringList; if there is a file saved from before, load it with TStringList.LoadFromFile, process it however you like, add or delete some lines, and save it back with the SaveToFile method. Problem solved?
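That load/process/save cycle might look roughly like the sketch below; the file name and the stored line are just placeholders for illustration:

```pascal
program persist;
{$mode objfpc}{$H+}
uses
  Classes, SysUtils;
var
  Lines: TStringList;
begin
  Lines := TStringList.Create;
  try
    if FileExists('state.txt') then     // reload previous state, if any
      Lines.LoadFromFile('state.txt');
    // ...process, add or delete lines as needed...
    Lines.Add('20170226-17|1');
    Lines.SaveToFile('state.txt');      // rewrite the whole file
  finally
    Lines.Free;
  end;
end.
```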

Just curious. You have not said what OS you are using, but let's suppose it is Linux: why are you not using cron to control the shell script frequency, or using the shell scripts themselves to control whether they should be executed or not?

Uhhhmm... Maybe because we are programmers? And cron is written in the "wrong language"? So we can do better? Note that Windows also has perfectly capable scheduling software as standard. That's certainly not a prerogative of Linux. < grumpy mode >

I also would suggest a TStringList, but note that it has its limitations for that kind of job, and you may want to implement a rotation scheme (e.g. limit the size of the file, then rename it).
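A rotation scheme along those lines could be sketched like this; the file names and the size cap are hypothetical:

```pascal
program rotate;
{$mode objfpc}{$H+}
uses
  SysUtils;
const
  MaxBytes = 64 * 1024;   // hypothetical size cap before rotating
var
  Info: TSearchRec;
begin
  if FindFirst('runcontrol.dat', faAnyFile, Info) = 0 then
  begin
    if Info.Size > MaxBytes then
    begin
      DeleteFile('runcontrol.dat.1');                    // drop the previous backup, if any
      RenameFile('runcontrol.dat', 'runcontrol.dat.1');  // keep one rotated copy
    end;
    FindClose(Info);
  end;
end.
```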

...let's suppose it is Linux, why are you not using cron to control the shell script frequency

The OS is Ubuntu.

We use both the built-in cron as well as some commercial "orchestration" software to manage crons across many machines.

The first programs I want to use my program with are monitoring programs. Because we want to be alerted ASAP when there is an event, we have sometimes set cron to run every two minutes. Depending on the issue/event, we could get a flood of emails every two minutes until someone logs in to manually comment out the cron jobs, potentially on multiple machines. So my first target for my program is to set limits on how many alerts should go out per hour.

or using the shell scripts themselves to control whether they should be executed or not?

I did not write the bash shell scripts and python programs that I am hoping to use my software with. It would be far more work to go program by program and add logic than it would be to write the program I am trying to create and run a "pre-check" with it. I am also scheduled to "inherit" some of these systems soon, and it will take me time to get familiar with them.

Additionally, with my program we can continue to run the checks every 2 minutes and only have the alerting part of the program check if it is ok to run by calling my program.

Moreover, my long-term goal is to have dependency prioritization:

* Program A should run first.

* Program B should run after A has run.

* Program C should run after B has run.

The above is a trivial example. The actual dependencies are far more complex. Currently multiple teams try and estimate, based on historical data, how long a process has taken and then try and figure out how to schedule it all.

Recently we did some re-organization of databases which caused some jobs to run faster; that should be a good thing, except that it totally threw off the dependencies because now some jobs are trying to do parts before some other parts are done.

Long term, my goal is for my program to handle dependencies so that not only do we avoid jobs running out of sequence, but we also waste less time. Right now, if a job takes 1 to 2 hours, we may schedule the follow-up job 3 hours after the dependency to make sure the other finishes. With my program, eventually we will be able to run programs shortly after their dependencies are done.

Lastly, I want to learn Pascal, and this seemed like a good use case. If I had done this with Python, I would have to worry about installing dependencies and potentially OS modules on machines where I may not have root. As for doing it in Bash, it would likely have been OK for the first phase of simple controls, but unlikely to work for the dependency part, where I will need to run queries against a DB.

Thaddy, you are not being constructive here. And you are not always right; as a matter of fact, nobody is. francisco1844 has said that he is trying to use binaries to control shell scripts, which is usually done by OS scheduling tools, such as cron on Linux. Any average programmer knows that ANY OS has scheduling tools; I gave an example on Linux and you gave another on Windows. What's the difference? I can understand that Microsoft Windows is so important to you, but not everybody here uses that OS.

I am trying to understand francisco1844's problem to see if I could help him, and your sarcasm is useless on this thread. That is a pity, because you seem to be a good person and a competent professional, and you usually help many of us. I don't know how your bad humor or being rude can make you happier, but there is no joy in it for the rest of us, especially for the newcomers.

I used to see the same problem with some clients, and my suggestion is always very similar to what you are doing:

- review all processes;
- create stored procedures on the databases, one stored procedure for each task;
- for batch operations, decide whether to use binaries or shell scripts or both;
- create one binary or shell script or both for each task;
- create one binary or shell script or both to control (start/finish) all tasks and report all successes and errors in log text files (usually) or a database (rarely), for checking and statistical purposes;
- this program that controls everything can send email, SMS, Facebook, WhatsApp, Telegram, etc. messages to a group when bad things happen.

I am currently a PostgreSQL DBA. Most of the monitoring scripts I am about to inherit were written by a team member. For those, I just need to ensure we don't spam out alerts when something goes down.

The coordination part involves ETL jobs done by multiple teams/people.

I am hoping to present my open-sourced program to the different people/teams and see if we can (long term) have a sort of centralized dependency tree. Right now, as I mentioned, we have some system (I don't use it myself) for managing crons, but that is purely a distributed type of cron system. It doesn't know about dependencies.

I understand what you are saying about binary or script, but if I were to go to all these people/teams with these two options:

* Fix your program so it doesn't spam / coordinate with other teams more tightly so jobs run in the proper slot
* Use my script to avoid alerts from spamming, and use my script so you can easily have your bash/python script run when it is supposed to

I think the second option is likely going to work better, if for no other reason than that it will be less work for all those other teams/people.

Other than the scripts I am about to inherit, most of the issues don't impact/involve me. But I thought I could create something to make the work easier for the other teams.

I am not trying to log, but to keep what is basically control data: how many times has this shell script run this hour? For now, to keep things simple, my first phase will create a single file per script. Later I could use something like SQLite to centralize everything into a single file.

With my little program I just need to have the programs call my utility with something that will likely look like:

if (runcontrol -h 1 monitoring_program1)
then
    do work
else
    exit
fi

Same for each monitoring program. I will then create one control file in ~/runcontrol/monitoring_program1 with how many times the program has run in the given hour (as an example). So, in this initial phase, I will just have one control file per shell script.
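Putting the pieces together, a minimal runcontrol could look something like the sketch below. The -h flag, the argument order, and the ~/runcontrol/<name> layout follow the description above, but the argument parsing is deliberately simplified (no real option handling):

```pascal
program runcontrol;
{$mode objfpc}{$H+}
uses
  Classes, SysUtils;
var
  Limit, Count, P: Integer;
  Key, FileName: string;
  Lines: TStringList;
begin
  // Simplified usage: runcontrol -h <max-per-hour> <name>
  Limit := StrToIntDef(ParamStr(2), 1);
  FileName := GetEnvironmentVariable('HOME') + '/runcontrol/' + ParamStr(3);
  Key := FormatDateTime('yyyymmdd"-"hh', Now);  // current date+hour key
  Count := 0;
  Lines := TStringList.Create;
  try
    if FileExists(FileName) then
    begin
      Lines.LoadFromFile(FileName);
      if Lines.Count > 0 then
      begin
        P := Pos('|', Lines[0]);
        // Reuse the stored count only if it belongs to the current hour
        if (P > 0) and (Copy(Lines[0], 1, P - 1) = Key) then
          Count := StrToIntDef(Copy(Lines[0], P + 1, MaxInt), 0);
      end;
    end;
    if Count >= Limit then
      ExitCode := 1   // over the hourly limit: caller sees non-zero and skips work
    else
    begin
      Lines.Clear;
      Lines.Add(Key + '|' + IntToStr(Count + 1));
      ForceDirectories(ExtractFileDir(FileName));
      Lines.SaveToFile(FileName);  // rewrite the whole file, as planned
    end;
  finally
    Lines.Free;
  end;
end.
```

With this exit-code convention, the shell-side `if (runcontrol -h 1 monitoring_program1)` test shown above works unchanged.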

I understand what you mean. My experience goes in a different direction:

- centralized decisions;
- a centralized server for production and a centralized OLAP server for business intelligence;
- all branch servers work independently and replicate both ways with the centralized production server;
- all ETL stuff happens out of working hours, in batch mode, on only one BI server, done by only one BI team working strictly with the DBA;
- all other BI stuff, such as analysis cubes, drills, and reporting, happens only on one OLAP BI server separated from the production server;
- I rarely see OLTP data warehouses spread across the branches of a large private company, and that is really hard to work with.