Nyquist Frequency

There seem to be varying answers on what the definition of "Nyquist frequency" is, among my professor, myself, and various internet sites.

One group says:

"The Nyquist Frequency is 1/2 the sampling rate"

and the other group says:

"The Nyquist Frequency is twice the signal's bandwidth"

Both groups agree that your sampling rate must be at least twice the signal's bandwidth to avoid aliasing, but when a problem simply says "the sampling rate is x and the signal's bandwidth is y; what is the Nyquist frequency?" I get different answers depending on which definition I use.

Which is the correct definition, and how can I argue this to my professor?

To me there is no right or wrong here. Both are right. Words can have multiple meanings, even in a technical context. What is wrong is insisting that only one sense is right and the other wrong.

Picture yourself as a designer of some digital system. Here you are worried about what the sampling frequency has to be. You find the highest frequency of concern, double it to yield the minimum sampling frequency, and then set the sampling frequency even higher than that for safety. The Nyquist frequency is based on system design parameters, and it is this design-dictated Nyquist frequency that defines the sampling frequency.

Now picture yourself as an operator of this system. The system was built and deployed long ago, and it's getting on in years; things are getting a bit rough around the edges. Can you see some noise signal in the telemetered data? If the frequency of that noise is more than half the sampling frequency, no, you can't. The sampling frequency was set in the design and you can't change it, so now the sampling frequency defines the Nyquist frequency.
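The operator's predicament can be sketched numerically. The sampling rate and noise frequency below are made-up numbers for illustration, not taken from the question:

```python
import math

# Hypothetical numbers: a deployed system sampling at 1 kHz picks up
# 700 Hz noise, which is above the folding frequency fs/2 = 500 Hz.
fs = 1000.0      # sampling rate, fixed long ago by the design
f_noise = 700.0  # noise frequency, above fs/2

# Sample both the true noise tone and its alias at fs - f_noise = 300 Hz.
true_tone  = [math.cos(2 * math.pi * f_noise * k / fs) for k in range(8)]
alias_tone = [math.cos(2 * math.pi * (fs - f_noise) * k / fs) for k in range(8)]

# The two sample sequences are identical, so in the telemetered data the
# 700 Hz noise is indistinguishable from a 300 Hz tone.
assert all(abs(a - b) < 1e-9 for a, b in zip(true_tone, alias_tone))
```

The noise hasn't vanished; it simply shows up folded to a frequency below half the sampling rate, where it can masquerade as something else.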

EDIT: there is confusion between Nyquist frequency and Nyquist rate.

The Nyquist frequency is by definition half the sampling rate, since that frequency and all frequencies below that frequency will not be aliased when sampled.

The Nyquist rate is the minimum sampling rate at which a given signal will not be aliased.
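A small numeric sketch of the distinction; the 48 kHz rate and 20 kHz bandwidth are illustrative values, not from the question:

```python
fs = 48_000.0         # sampling rate in Hz: a property of the sampler
bandwidth = 20_000.0  # highest frequency in a baseband signal: a property of the signal

nyquist_frequency = fs / 2    # half the sampling rate
nyquist_rate = 2 * bandwidth  # minimum alias-free sampling rate for this signal

print(nyquist_frequency)  # → 24000.0
print(nyquist_rate)       # → 40000.0

# Aliasing is avoided because the sampling rate meets the Nyquist rate:
assert fs >= nyquist_rate
```

The two numbers answer two different questions, which is exactly why the homework phrasing "the sampling rate is x, the bandwidth is y" yields different answers under the two definitions.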

The Nyquist rate is not simply twice the maximum signal frequency. For example, if a signal occupies a band of width W Hz lying between mW and (m+1)W Hz, with m an integer (i.e., a pass-band signal), the Nyquist rate is 2W, not 2(m+1)W.
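The pass-band case can be checked numerically. Assuming W = 1000 Hz and m = 3 purely for illustration, sampling at the Nyquist rate 2W maps the 3-4 kHz band one-to-one onto 0-1 kHz:

```python
W, m = 1000.0, 3
fs = 2 * W  # Nyquist rate for this pass-band signal: 2 kHz, not 2*(m+1)*W = 8 kHz

def alias(f, fs):
    """Apparent frequency after sampling at fs, folded into [0, fs/2]."""
    a = f % fs
    return fs - a if a > fs / 2 else a

# Frequencies spanning the band m*W .. (m+1)*W, every 100 Hz:
band = [m * W + 100.0 * k for k in range(11)]
images = [alias(f, fs) for f in band]

# No two band frequencies collide after sampling, so no information is lost:
assert len(set(images)) == len(images)
print(images[0], images[-1])  # → 1000.0 0.0
```

The band is translated down (and reversed) rather than destroyed, which is the whole point of deliberate undersampling.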

I would advise not paying too much attention to the nomenclature and concentrate instead on the theory.

The sampling rate can be anything; in fact, deliberate undersampling is often done. The answer is that the Nyquist rate is twice the highest frequency in a baseband signal (one extending from 0 to f), i.e. 2f.

But if the frequency range is limited to f1 to f2, then the Nyquist rate (the minimum sampling frequency to avoid aliasing) takes a more complex form and is almost always less than 2·f2.
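One common form of that bound, offered here as a sketch rather than taken from the answer above: for a band confined to f1..f2, an alias-free rate as low as 2·f2 / ⌊f2 / (f2 − f1)⌋ is possible. The frequencies below are made-up values:

```python
import math

def min_bandpass_rate(f1, f2):
    """Minimum alias-free sampling rate for a signal confined to f1..f2 Hz."""
    n = math.floor(f2 / (f2 - f1))  # largest usable integer wrap count
    return 2 * f2 / n

f1, f2 = 20e6, 22e6  # a 2 MHz band near 21 MHz (illustrative numbers)
print(min_bandpass_rate(f1, f2))  # → 4000000.0, far below 2*f2 = 44 MHz
```

When the band edges happen to fall at integer multiples of the bandwidth, this formula reduces to exactly twice the bandwidth.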

In common usage, the Nyquist frequency is the folding frequency, which is half the sampling rate. People are very sloppy about this, however, as you have noted.
It is interesting to read Nyquist's papers (1924 and 1928) on the subject of signalling, because he never discussed sampling as it is presently done in digital systems. His interest was in the frequency spectrum of telegraph signals, which he examined through their Fourier series.

Claude Shannon later used "Nyquist interval" for the period corresponding to the folding frequency (half the sampling frequency). Other authors coined "Nyquist rate" for the sampling frequency, and on it went. I leave Nyquist's name out of my discussions and talk in terms of the sampling frequency, half the sampling frequency, and so on, so there is no confusion.