It's as easy as declaring an int, incrementing it every time the buffer swap occurs, then displaying the value every 1000 milliseconds and resetting it.

Granted, this isn't the optimal method of calculating frame rate with any real precision.

Personally, depending on the project, I use three different methods:

1. glutTimerFunc() and a global int to hold the number of frames drawn per second
2. sample the system time with timeGetTime() and use its return value to measure (down to the millisecond) how long it took to draw a certain number of frames
3. or, use the method described in #2, but with the time(NULL) function instead

The difference between timeGetTime() and time() is that the former returns the system time in milliseconds since the machine was booted, while the latter returns the number of seconds that have passed since January 1st, 1970 (the Unix epoch).