Mmm... the transformation that (most) lossy codecs apply from the time domain (samples) to the frequency domain (intensity and phase per frequency band) does not compress anything by itself. It might even need more data, depending on the precision.
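To see why, here is a minimal pure-Python sketch (a naive DFT, nothing codec-specific): N real samples come out as N complex coefficients, i.e. twice as many real numbers at full precision.

```python
import cmath
import math

def dft(samples):
    """Naive DFT: N real samples -> N complex coefficients (intensity + phase)."""
    n = len(samples)
    return [sum(samples[t] * cmath.exp(-2j * math.pi * k * t / n)
                for t in range(n))
            for k in range(n)]

# 8 real samples of a pure tone...
samples = [math.sin(2 * math.pi * t / 8) for t in range(8)]
coeffs = dft(samples)

# ...become 8 complex coefficients = 16 real numbers, so the transform
# alone stores MORE raw data, not less.
print(len(samples), "real values in ->", 2 * len(coeffs), "real values out")
```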

What codecs actually do to reduce the bitrate is allow the frequency values to be less precise (quantizing the range of possible values), coupled with other compression techniques (joint stereo with fewer bits for the side channel, Huffman coding, parametric audio reconstruction, etc.).
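The quantization step can be sketched like this. Note this is a hypothetical uniform quantizer for illustration only; real MP3 quantization is non-uniform and driven by a psychoacoustic model:

```python
import math

def quantize_indices(coeffs, step):
    """Map each coefficient to an integer level index (the lossy step)."""
    return [round(c / step) for c in coeffs]

coeffs = [0.8231, -0.4157, 0.0032, 0.2968]
idx = quantize_indices(coeffs, 0.1)  # coarse step -> few distinct levels

# Fewer distinct levels means fewer bits per coefficient; the rounding
# error (e.g. 0.8231 stored as level 8 ~ 0.8) is the information lost.
bits = math.ceil(math.log2(max(idx) - min(idx) + 1))
print(idx, "->", bits, "bits per coefficient")
```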

But going back: what you usually get from the transformation is not "frequency 1 Hz, intensity x; frequency 2 Hz, intensity y; ...". To get that, you would need an FFT whose size equals the sample rate, and in that case you would still need to say which size it is (effectively stating the sample rate).
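The relation is simple to sketch: bin k of an N-point transform at sample rate fs sits at k*fs/N Hz, so the spacing is exactly 1 Hz only when N equals fs.

```python
def bin_freq(k, fs, n):
    """Center frequency (Hz) of bin k for an n-point transform at sample rate fs."""
    return k * fs / n

# 1 Hz per bin only when the transform size equals the sample rate:
print(bin_freq(1, 32000, 32000))  # 1.0 Hz spacing
print(bin_freq(1, 32000, 1024))   # 31.25 Hz spacing -- the usual, coarser case
```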

Generally, though, a fixed-size transformation is used (sized so there is enough definition at the intended sample rates). In MP3 it is an overlapped window of 1152 samples in the case of long blocks (please correct me if I am wrong!), which generates 576 frequency bands (and their phases). Those 576 values by themselves don't mean a thing, because you get the same number of bands from a 32 kHz wave as from a 48 kHz wave.

Concretely, the bands split the range from 0 Hz up to half the sample rate (the Nyquist limit), so band 576 at 32 kHz contains frequency information from roughly 15972 Hz up to 16000 Hz. At 48 kHz, that same band covers roughly 23958 Hz to 24000 Hz. It is only on playback that each band gets an actual frequency meaning. To build an intuition for it, think about what happens when you play a 22 kHz file at 44 kHz, or play a 33 RPM LP at 45 RPM.
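This band-to-frequency mapping can be sketched in a few lines (assuming, as above, 576 bands spread evenly from 0 Hz to half the sample rate):

```python
def band_edges(band, fs, n_bands=576):
    """Frequency range (Hz) covered by a 1-based band index at sample rate fs.
    The n_bands slices split 0..fs/2 (the Nyquist frequency) evenly."""
    width = (fs / 2) / n_bands
    return ((band - 1) * width, band * width)

# Same band index, different meaning depending on the sample rate:
print(band_edges(576, 32000))  # last band at 32 kHz: ~15972 to 16000 Hz
print(band_edges(576, 48000))  # same band at 48 kHz: ~23958 to 24000 Hz

# Playing the 32 kHz stream as if it were 48 kHz would shift everything
# up by 48/32 = 1.5x -- the LP-at-the-wrong-speed effect.
```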