I mean, why stop there? Wouldn't you then want a camera that could shoot photos with vivid, saturated colors at fast shutter speeds just by the light of a waning crescent moon?

Ideally, yes.

I'd want that too, but today's best sensors have very little room left for improvement, so waiting for this to happen is a pipe dream.

Modern digital cameras contain electronic sensors that have predictable properties. Foremost among those properties is their relatively high Quantum Efficiency, the ability to absorb photons and generate electrons. Second, the electronics in most cameras are so good that read noise from the sensor's amplifier is under 2 electrons and rarely worse than about 15 electrons. With the low noise and high Quantum Efficiency, along with the general properties of how the sensors collect the electrons generated from photons, it is possible to make general predictions about camera performance. An important concept emerges from these predictions: we are reaching fundamental physical limits on the dynamic range and noise performance of sensors.
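That prediction can be sketched with a simple pixel-level SNR model. This is a minimal illustration, not anyone's actual sensor model: the function name and the default QE and read-noise values are made up for the example; the physics (Poisson shot noise plus a fixed read-noise floor, added in quadrature) is the standard textbook treatment.

```python
import math

def pixel_snr(photons, qe=0.5, read_noise=3.0):
    """Rough signal-to-noise estimate for one pixel.

    photons    -- photons arriving at the pixel during the exposure
    qe         -- quantum efficiency (fraction of photons converted to electrons)
    read_noise -- read-amplifier noise in electrons (RMS)

    Collected electrons follow Poisson statistics, so shot noise is
    sqrt(signal); read noise adds in quadrature.
    """
    signal = photons * qe                       # electrons collected
    noise = math.sqrt(signal + read_noise ** 2) # shot + read noise
    return signal / noise
```

The takeaway is visible in the formula: once QE is near 1 and read noise is a few electrons, SNR is dominated by sqrt(signal), i.e. by photon statistics. Even a perfect sensor (QE = 1, zero read noise) cannot beat shot noise, which is why the remaining headroom is so small.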

Which is why the tremendous improvement of the D3 and D700 sensor was followed by a healthy but smaller improvement in the D3s's sensor, which in turn was followed by only a very small improvement in the D4's sensor. Fuji's sensors similarly have little room left for improvement. Canon's another story.

The human eye doesn't violate the laws of physics. That being the case, whenever sensor performance is inferior to that of the human eye, the problem is one of technology, not physics.

No, the human eye doesn't violate any laws of physics or optics, but it doesn't work the same way in very low light. The high-resolution cones that are responsible for color vision are densely packed in a small part of the center of the retina, while the very low resolution rods that surround this area, filling the rest of the retina's surface, are achromatic, providing only monochrome (essentially B&W) vision. The human brain may also help a bit by integrating what is seen over time. I've also written before that my cameras are able to take usable pictures in light so dim that I can't see what I'm shooting, either with the naked eye or through a DSLR's viewfinder. Some P&S cameras do better by increasing the LCD/EVF gain in very low light, so even if the resolution isn't great I can still see enough to accurately point the camera at the intended subjects.

The human retina contains about 120 million rod cells and 6 million cone cells.

There are two types of photoreceptors in the human retina, rods and cones.

Rods are responsible for vision at low light levels (scotopic vision). They do not mediate color vision, and have a low spatial acuity.

Cones are active at higher light levels (photopic vision), are capable of color vision, and are responsible for high spatial acuity. The central fovea is populated exclusively by cones. There are 3 types of cones, which we will refer to as the short-wavelength sensitive cones, the middle-wavelength sensitive cones, and the long-wavelength sensitive cones, or S-cones, M-cones, and L-cones for short.

The bottom figure shows the distribution of rods and cones in the retina. This data was prepared from histological sections made on human eyes.

In the top figure, you can relate visual angle to the position on the retina in the eye.

Notice that the fovea is rod-free and has a very high density of cones. The density of cones falls off rapidly to a constant level at about 10-15 degrees from the fovea. Notice the blind spot, which has no receptors.

At about 15°-20° from the fovea, the density of the rods reaches a maximum. (Remember where Hecht, Shlaer, and Pirenne presented their stimuli.) A longitudinal section would appear similar; however, there would be no blind spot. Remember this if you want to present peripheral stimuli and you want to avoid the blind spot.