In the physics texts I have read, and from other online sources, I gather that Planck's constant is the quantum of action, or that it is a constant specifying the ratio of a particle's energy to its frequency. However, I still don't understand exactly what it is.

From other things I have read, I understand that Planck did a "fit" to data from others' experiments and came up with this value; exactly what data did he fit to arrive at this really small value? Or did he arrive at it some other way? Perhaps an answer about its origins will help me understand my first question better.

The problem arose with the so-called "blackbody radiation". On the one hand, there was the energy distribution per frequency as derived from classical physics, which yielded the Rayleigh-Jeans formula for blackbody radiation:

$\rho_T(\nu) \, d\nu = \frac{8 \pi \nu^2}{c^3} kT \, d\nu$

($\nu$ here is the frequency. For some reason, in spectroscopy, people use $\nu$ rather than $f$.)

Of course, this contained no $h$ in it, because Planck had not introduced it yet. It agreed well with experiment at low frequencies.
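To see the trouble numerically, here is a minimal sketch in plain Python, using the standard SI form of the Rayleigh-Jeans density, $8\pi\nu^2 kT/c^3$. The density grows as $\nu^2$ without bound, so the total energy integrated over all frequencies diverges:

```python
import math

k = 1.380649e-23   # Boltzmann constant, J/K
c = 2.99792458e8   # speed of light, m/s

def rayleigh_jeans(nu, T):
    """Classical (Rayleigh-Jeans) spectral energy density, J m^-3 Hz^-1."""
    return 8 * math.pi * nu**2 * k * T / c**3

T = 300.0  # room temperature, K
for nu in (1e9, 1e12, 1e15, 1e18):
    # Grows as nu^2: doubling the frequency quadruples the density.
    print(f"nu = {nu:.0e} Hz  ->  rho = {rayleigh_jeans(nu, T):.3e} J m^-3 Hz^-1")
```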

On the other hand, at high frequencies there was experimental data showing an exponential decay (and the formula describing it contained no $h$ either, both because Planck had not introduced it yet, and because it was freaking experimental data).

The whole business was called the "Ultraviolet Catastrophe". (In those days, some people thought that was the last physics problem there was left to solve before shutting down the physics departments... Oh well.)

The catastrophe arises because, classically, a standing electromagnetic wave can have any energy, and the number of standing-wave modes grows with the square of the frequency, hence the blow-up at high frequencies. Planck, just looking at the numbers and looking for a way out (and a way to fit the experimental data; agreeing with the data is a very good way to get published), realized that if, instead of allowing every possible value for the energy of a standing wave, he postulated that energy could take only discrete values, then the divergence would not happen any more!

He followed that line of thought and made the simplest possible assumption: that the energy could take equally spaced values, separated by... some stupid constant! And said:

$E = n h \nu$
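A step worth filling in (this is the standard textbook sketch; Planck's own 1900 derivation actually went through entropy): with the levels $E = nh\nu$ weighted by Boltzmann factors $e^{-nh\nu/kT}$, the average energy of a mode becomes

$\bar{E} = \frac{\sum_{n=0}^{\infty} n h \nu \, e^{-n h \nu / kT}}{\sum_{n=0}^{\infty} e^{-n h \nu / kT}} = \frac{h\nu}{e^{h\nu/kT} - 1}$

instead of the classical $kT$. For $h\nu \ll kT$ this tends to $kT$ (recovering the Rayleigh-Jeans result), while for $h\nu \gg kT$ it dies off like $e^{-h\nu/kT}$, which is exactly the observed exponential decay.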

The $h$ was just a "mathematical tool" to start with. Then he saw that this assumption reproduced the classical result in the low-frequency limit, and could be made to fit the whole of the experimental data if he just chose a "wise" value for his constant $h$. So, he chose wisely:

$h_{original} = 6.55 \times 10^{-34} J\cdot s$

This is remarkably close to the modern value:

$h_{modern} = 6.62607015 \times 10^{-34} J\cdot s$

(Since the 2019 redefinition of the SI base units, this value is exact by definition.)
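With that number in hand, a quick numerical check of what the fit achieves (a sketch in plain Python; the form used below, $\rho_T(\nu) = \frac{8\pi\nu^2}{c^3}\frac{h\nu}{e^{h\nu/kT}-1}$, is the standard modern statement of Planck's law):

```python
import math

h = 6.62607015e-34  # Planck constant, J*s
k = 1.380649e-23    # Boltzmann constant, J/K
c = 2.99792458e8    # speed of light, m/s

def planck(nu, T):
    """Planck spectral energy density, J m^-3 Hz^-1."""
    return (8 * math.pi * nu**2 / c**3) * h * nu / math.expm1(h * nu / (k * T))

def rayleigh_jeans(nu, T):
    """Classical spectral energy density, J m^-3 Hz^-1."""
    return 8 * math.pi * nu**2 * k * T / c**3

T = 300.0
# Low frequency (h*nu << k*T): Planck agrees with Rayleigh-Jeans.
print(planck(1e9, T) / rayleigh_jeans(1e9, T))    # very close to 1
# High frequency (h*nu >> k*T): Planck is exponentially suppressed,
# matching the observed decay instead of the classical blow-up.
print(planck(1e15, T) / rayleigh_jeans(1e15, T))  # astronomically small
```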

It is widely accepted that he did not fully grasp how far-reaching the consequences of this simple assumption would be: the quantum of action, the basis of all of quantum mechanics, blah, blah, blah... He just found a mathematical solution to a practical problem. He fit the data. And the universe unfolded.

Fascinating stuff, really. So is the rest of the discovery of the quantum world...