OK, if we have a sensor with, say, 16 megapixels, we expect the final image it produces to also have 16 megapixels — one pixel per sensel, right?

While this seems common-sensical, are there practical mathematical reasons for a demosaicing algorithm to deliver a different size, not mapping sensels to pixels 1:1? Not simply to resize, but because it delivers superior results in one way or another?
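To make the question concrete, here is a small sketch of the two output sizes I have in mind. The sensor dimensions are just an example I picked to land near 16 MP, and the 2×2 binning alternative is a hypothetical illustration of a non-1:1 mapping, not a claim about any real algorithm:

```python
# A Bayer sensor records one color channel (R, G, or B) per sensel.
# Conventional demosaicing outputs one full-RGB pixel per sensel,
# interpolating the two missing channels at each site.
# A hypothetical alternative: collapse each 2x2 RGGB cell into a
# single RGB pixel -- no interpolation, but a quarter of the pixels.

width, height = 4928, 3264                     # ~16.1 MP (example dimensions)
sensels = width * height

one_to_one_pixels = sensels                    # 1:1 demosaic output
binned_pixels = (width // 2) * (height // 2)   # 2x2 "superpixel" output

print(sensels)            # 16084992 sensels
print(one_to_one_pixels)  # 16084992 pixels, 2/3 of values interpolated
print(binned_pixels)      # 4021248 pixels, no interpolation needed
```

So the question is whether some mathematically motivated output size between (or beyond) these two extremes gives demosaicing an actual quality advantage.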