On the other hand, do nocturnal animals see random noise at night? My guess: no.

I know that the human brain does a lot of tricks to compensate for the flaws in our vision. I would assume the same happens for those nocturnal animals.

Yes, you're absolutely right. Sometimes, when you close your eyes, you can see (or not) an effect like that (or something similar; I'm not sure we can call it "noise") alongside retinal persistence. And we see it (or not) because our brain can choose to select it (or not...)

Current NR algorithms tend to smooth it out, but not with a threshold the photographer can adjust, nor according to a better exposure (ETTR workflow), nor by aligning the red channel as Iliah Borg suggests (by filtering "high-frequency" vs. "low-frequency" photons).

All of the above could optimise processing for low-light conditions, but none of it seems to be fully implemented today (IMHO).
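To make the first wish concrete, here is a minimal sketch of what a photographer-adjustable noise-reduction threshold could look like. This is purely illustrative: the function name, the 1-D row of pixels standing in for a real 2-D image, and the threshold units (fractions of full scale) are all my own assumptions, not any camera's actual processing.

```python
def denoise(pixels, threshold=0.02):
    """Smooth a 1-D row of linear pixel values (stand-in for a 2-D filter)."""
    out = []
    for i, p in enumerate(pixels):
        # local mean over a 3-sample window, clamped at the edges
        window = pixels[max(0, i - 1):i + 2]
        mean = sum(window) / len(window)
        # small deviations are treated as noise and pulled toward the mean;
        # larger deviations are kept as genuine detail
        out.append(mean if abs(p - mean) < threshold else p)
    return out

row = [0.10, 0.11, 0.10, 0.45, 0.10, 0.09]
print(denoise(row, threshold=0.02))  # the 0.45 "edge" survives untouched
```

Raising `threshold` smooths more aggressively; lowering it preserves more fine detail (and more noise), which is exactly the trade-off the poster wants to hand to the photographer.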

To achieve that, Iliah Borg was using magenta filters of different strengths:

A back-illuminated sensor, to lose as few photons as possible;

Modern sensors lose less than half of the photons, so there is not much more to gain there.

Hmm! If it's "less than 50%", that still suggests a sizeable margin for progress...
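Both posters can be right here, and a quick back-of-envelope calculation shows why. If a sensor's quantum efficiency (QE) is already above 50%, the most a perfect photon-counting sensor could gain is log2(1/QE) stops. The QE figures below are illustrative assumptions, not measurements of any particular sensor.

```python
import math

def stops_of_headroom(qe):
    """Maximum sensitivity gain, in stops, from capturing *every* photon."""
    return math.log2(1.0 / qe)

for qe in (0.5, 0.6, 0.8):
    print(f"QE {qe:.0%}: at most {stops_of_headroom(qe):.2f} stops left to gain")
```

So yes, there is a margin, but it is bounded: at 50% QE the ceiling is exactly one stop, and at 80% QE it shrinks to about a third of a stop.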

An innovative optical design that would multiply the ambient light optically (such as a converging system that aligns multiple image circles through a prism);

Sounds like a complicated way of achieving the same result as a larger aperture.

It's probably already sitting in some R&D secret drawer, I guess.

No more infrared filter, to capture additional (supernumerary) photons;

An in-camera algorithm that would restore the infrared image and interpret its colours, rendering them as the human eye would normally perceive them.

With current technology, the camera can only know that those photons were infrared if it blocks them for some pixels and allows them for others - just like the Bayer filter we use today to make colour photos. This blocking results in a loss of photons, so you may not gain any net benefit.

On the other hand, if someone invents a sensor which can count all photons and at the same time discern their wavelengths without blocking some of them in a Bayer filter, we would take a step forward in sensitivity. I don't even know if that is theoretically possible (there are laws of physics which say that you cannot measure all properties of some systems precisely at the same time).
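The trade-off described above can be put in numbers with a toy model: a colour filter array tells you a photon's wavelength band only by discarding the photons outside that band. The photon count and the one-third-of-spectrum pass fraction below are illustrative assumptions, not measured transmission curves.

```python
def photons_counted(incident, pass_fraction):
    """Photons surviving a filter that passes `pass_fraction` of the spectrum."""
    return incident * pass_fraction

incident = 900  # photons arriving at one photosite (arbitrary example)

# no filter: every photon is counted, but its wavelength is unknown
print(photons_counted(incident, 1.0))   # 900.0

# idealised one-band colour filter: wavelength band known, ~2/3 of photons lost
print(photons_counted(incident, 1/3))   # 300.0
```

The same arithmetic applies to a hypothetical IR-capable design: dedicating some pixels to an infrared-pass filter blocks the other bands at those pixels, so the extra spectral information is again paid for in discarded photons.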

Who said anything about Bayer? Me? There are already patents for new designs without a Bayer matrix.

But we also have Foveon, and Sony's version of the Foveon sensor.

And/or a combination of the two: a Foveon-style and back-illuminated design!

Here are the standard ones:

Here is the new Sony design (which resolves the Iliah Borg dilemma):

By the way, a new sensor dedicated to low light could have a bit allocation that favours the shadows (not the highlights).

As you can see, with such an inverted bit allocation you get many more gradations available in the shadow areas. Just as our brain can choose to pull more detail out of the deepest shadows...

Because it seems that current bit counts run from the shadows up to the highlights following a linear (not perceptual) pattern of grey levels in the digital world, which blows the highlights quickly - though that hardly matters when we're starved of photons in the shadows. So the encoding favours the highlights (mathematically, far more grey levels there than in the shadows), if you see what I mean...
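The "bit counts favour the highlights" point can be made concrete. In a linear N-bit raw encoding, each stop down from clipping halves the number of code values available, so the top stop holds half of all levels and the deep shadows get almost none. A small sketch (14-bit chosen only as a common raw depth):

```python
def levels_per_stop(bits, stops_below_clipping):
    """Code values available within one stop of a linear N-bit raw encoding."""
    top = 2 ** bits
    upper = top // (2 ** stops_below_clipping)  # level at the top of this stop
    lower = upper // 2                          # level one stop darker
    return upper - lower

for stop in range(5):
    print(f"stop {stop} below clipping: {levels_per_stop(14, stop)} levels")
```

For a 14-bit linear file, the brightest stop gets 8192 levels while the stop four down gets only 512, which is exactly why ETTR works and why a shadow-weighted (or non-linear) allocation could help a low-light-dedicated sensor.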