As do I, as long as they are quality pixels and improve upon the past.

There are three primary aspects to a pixel's "quality":

QE (Quantum Efficiency -- the proportion of light falling on the pixel that is recorded)

Read Noise (the additional noise added by the pixel and supporting hardware)

Saturation Capacity (the maximum charge a pixel can record before clipping)

There's no evidence that the size of a pixel has any substantial effect on QE: the pixels of the 6D have basically the same QE as the pixels of the G15, despite the fact that the 6D pixels have 11.5x the area of the G15 pixels.

As for read noise, we need to compare area for area, not pixel for pixel. For example, the combined read noise of 11.5 G15 pixels (the same area as one 6D pixel) is 8.5 electrons vs 26.8 electrons for the 6D at base ISO (a 1.7 stop advantage for the smaller pixels), but 5.8 electrons vs 2.0 electrons at ISO 6400 (a 1.5 stop advantage for the larger pixels).
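The stop figures above can be checked with a short sketch. The standard assumption (not stated explicitly in the post) is that independent per-pixel read noise adds in quadrature, so N pixels combine to sqrt(N) times the per-pixel noise; the function names here are just for illustration.

```python
import math

def combined_read_noise(per_pixel_noise_e, n_pixels):
    """Read noise (electrons) of n_pixels summed together, assuming
    independent per-pixel noise that adds in quadrature."""
    return per_pixel_noise_e * math.sqrt(n_pixels)

def stop_difference(noise_a, noise_b):
    """Advantage of the quieter option, in photographic stops (log base 2)."""
    return math.log2(noise_a / noise_b)

# Figures from the text: 11.5 G15 pixels cover the area of one 6D pixel,
# and their combined read noise at base ISO is 8.5 electrons, which
# implies a per-pixel read noise of 8.5 / sqrt(11.5) electrons.
g15_combined = combined_read_noise(8.5 / math.sqrt(11.5), 11.5)
print(round(g15_combined, 1))                 # 8.5 electrons
print(round(stop_difference(26.8, 8.5), 1))   # base ISO: 1.7 stops
print(round(stop_difference(5.8, 2.0), 1))    # ISO 6400: 1.5 stops
```

Running this reproduces the 1.7 stop and 1.5 stop gaps quoted above.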

Lastly, so long as saturation capacity scales in proportion to pixel area, neither has a relative advantage. For example, 11.5 G15 pixels have a combined saturation of 84630 electrons, as opposed to one 6D pixel, which saturates at 76606 electrons (a negligible difference -- 0.14 stops).
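The 0.14 stop figure is just the log-base-2 ratio of the two full-well capacities, which a one-liner confirms (variable names are mine, the electron counts are from the text):

```python
import math

g15_combined_sat = 84630  # electrons, 11.5 G15 pixels combined
sixd_sat = 76606          # electrons, one 6D pixel

# Stop difference = log2 of the capacity ratio.
print(round(math.log2(g15_combined_sat / sixd_sat), 2))  # 0.14 stops
```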

So, as we can see, for the same generation of tech, sensor efficiency is essentially the same across a huge range of pixel sizes. In other words, a 230 MP FF sensor made with G15 pixels would have been as efficient as the 20 MP 6D sensor.

Thus, any argument against more pixels has to be an operational one (frame rate, file size, processing time) rather than an IQ one, since 230 MP of equally efficient pixels would most certainly have the IQ advantage over 20 MP.