nanosleep messing things up.

This is a discussion on nanosleep messing things up, within the C Programming forums, part of the General Programming Boards category.


As the title says, it messes things up. My program is supposed to read in char by char, which works fine without nanosleep; when I add it in, it messes things up. Yes, I MUST use nanosleep, which is why I'm using it. Why is it not working where I put it? And where does it belong?

Do you really want to set KEY, nIN, nWORK and nOUT to the value of the first char in the first 4 command line arguments? Or should you be using atoi() there?

I want it to just print out a file; the other things are excess I guess I should have taken out. But the key thing is that it works without nanosleep. I want it to sleep for a random time, pick up a char and store it in the buffer, and keep sleeping randomly after that. It was able to print the whole file before, but with nanosleep it only prints a little over half or so and just cuts off.

It's because you're not waiting for your threads in main. With nanosleep, your threads sleep long enough for main to end before the file is fully printed out. One way to wait is to simply add a pthread_exit(NULL); as the last statement in main (besides a possible return 0; that's never actually reached).

I also found a nifty way for a program to wait that is "portable" (which I think means it works across operating systems).

Are you serious? A lot of compilers will just optimise that out, some machines will take fractions of a second to do it (especially if they can keep those values in registers!) and others will take much, much longer. Hell, different optimisation levels in the same compiler on the same computer will produce vastly different lengths of pause. On my copy of gcc, that loop takes 280 ms at -O0, and LITERALLY 0 ms at -O3. You can even whack the numbers up even further - it makes no difference, because the compiler's optimisation wipes the loop out entirely.

Now think what that does on different machines, different compilers, different operating systems, etc.

Why not just loop in a while loop until the number of seconds passed is one (or the number of milliseconds passed is one thousand, or whatever)? All you need is a SINGLE function call that can give you time accurate to, say, half a second or better, and one is part of the C standard ("clock"). That works, works the same on ALL machines, works consistently in terms of timing, and can't be optimised away by the compiler.

- Compiler warnings are like "Bridge Out Ahead" warnings. DON'T just ignore them.
- A compiler error is something SO stupid that the compiler genuinely can't carry on with its job. A compiler warning is the compiler saying "Well, that's bloody stupid but if you WANT to ignore me..." and carrying on.
- The best debugging tool in the world is a bunch of printf()'s for everything important around the bits you think might be wrong.