a> i was just wondering if it is advisable to open/close a file that
a> fast? is there any better approach to do this?

That file is on a special procfs filesystem so the usual penalties don't apply. Even on a regular filesystem this would not be a big deal, 1 Hz is nothing in the context of modern CPUs and disks.

By the way, if you want *precise* readings, you don't want to use sleep(1). You're actually sampling every 1 sec + (the time to open, read, and close the file), and sleep() itself is not very precise, so you may end up sampling at irregular intervals. Doing this correctly is not easy. At the very least, look at the caveats documented in `perldoc -f sleep'.
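One common way to keep the sampling interval steady is to schedule each sample against an absolute deadline rather than sleeping a fixed amount, using Time::HiRes (a core module since 5.8). A minimal sketch -- the sampling body is left as a comment, and the 1-second interval is just an example:

```perl
use strict;
use warnings;
use Time::HiRes qw(time sleep);   # float-precision time() and sleep()

my $interval = 1;                 # seconds between samples
my $next     = time();

while (1) {
    # ... take the sample here (open/read/close /proc/meminfo) ...

    $next += $interval;
    my $delay = $next - time();   # absorbs sampling overhead and drift
    sleep($delay) if $delay > 0;
}
```

Because the next deadline is computed from the previous one, the time spent taking the sample is subtracted from the sleep instead of accumulating as drift.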

You can, however, run a pipe on vmstat: do

open VMSTAT, "vmstat 1|";

and read from it for as long as you need. You'll get updates every second.
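For example, with the three-argument list form of open (which avoids the shell), the loop might look like this. Note that the column layout of vmstat varies between versions, so the field index below is an assumption you should check against your own vmstat's header:

```perl
use strict;
use warnings;

# '-|' runs the command and gives us a read handle on its stdout
open(my $vmstat, '-|', 'vmstat', '1')
    or die "can't run vmstat: $!";

while (my $line = <$vmstat>) {
    next if $line =~ /^\s*(procs|r\b)/;   # skip the two header lines
    my @fields = split ' ', $line;
    # on procps vmstat, free memory is the 4th column -- verify locally
    print "free: $fields[3] kB\n";
}
close $vmstat;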

You could also look at Sys::Statistics::Linux, which I found via:

http://search.cpan.org/search?query=procfs&mode=all

it seems to provide much more than just memory info, so you may find it useful.
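From memory of that module's documentation, usage is along these lines -- check the POD for the exact statistic names, since the hash keys here (memused etc.) are from its memstats group and may differ by version:

```perl
use strict;
use warnings;
use Sys::Statistics::Linux;

# enable only the memory statistics
my $lxs  = Sys::Statistics::Linux->new(memstats => 1);
my $stat = $lxs->get;

printf "used: %s kB, free: %s kB\n",
    $stat->{memstats}{memused}, $stat->{memstats}{memfree};
```

The module reads procfs for you, so it sidesteps the open/close question entirely.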

Ted

Tue, 20 Mar 2012 16:04:58 GMT

Martijn Lievaar#4 / 4

open() close() same file many times

Quote:

> hello list

> i am writing a script to check memory usage on linux.

> the script opens and closes /proc/meminfo 5 times in 5 seconds to
> calculate memory usage on a 5 second average, it goes something like

> i was just wondering if this is advisable to open/close file that fast?
> is there any better approach to do this?

Juergen already pointed out that you could use seek(), but I would add: does it really matter? This is all in-memory stuff, and unless it runs on a severely handicapped machine (embedded processors can be seriously underpowered) you probably won't even be able to measure the difference.
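If you do want to try the seek() approach, the idea is to open /proc/meminfo once and rewind to the start before each re-read; procfs regenerates the contents on the next read. A sketch (the fields printed are just an example):

```perl
use strict;
use warnings;

open(my $fh, '<', '/proc/meminfo') or die "can't open /proc/meminfo: $!";

for my $i (1 .. 5) {
    seek($fh, 0, 0) or die "seek failed: $!";   # rewind instead of reopening
    while (my $line = <$fh>) {
        print $line if $line =~ /^(MemTotal|MemFree):/;
    }
    sleep 1 unless $i == 5;
}
close $fh;
```

As noted above, the saving over reopening the file is unlikely to be measurable, but it does keep the code down to a single open().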