First, an engineer thought, 'why not use software to correct chromatic aberration and distortion, so the lenses themselves can be smaller?' Great idea, but not taken far enough.
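To make the idea concrete, here is a toy sketch of that kind of software correction in Python. The radial polynomial form is the standard approach, but the function name, coefficients, and coordinates are illustrative, not any manufacturer's actual pipeline:

```python
import numpy as np

def correct_radial(xy, k1, k2=0.0, center=(0.0, 0.0)):
    """Shift points by a simple radial polynomial, the kind of model
    used to correct barrel/pincushion distortion in software.
    A real camera ships calibrated per-lens coefficients and resamples
    whole frames; chromatic aberration is handled the same way, with
    separate coefficients per color channel."""
    xy = np.asarray(xy, dtype=float)
    c = np.asarray(center, dtype=float)
    d = xy - c                                   # offset from optical center
    r2 = np.sum(d * d, axis=-1, keepdims=True)   # squared radius
    scale = 1.0 + k1 * r2 + k2 * r2**2           # radial gain
    return c + d * scale
```

With a negative k1 (barrel distortion), points near the frame edge get pulled back toward the center, straightening lines the glass alone could not.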

With sensors improving at an astonishing rate, why not have one with core attributes (for example, the basic 20 megapixels of the RX100), plus a co-sensor whose primary function is to enhance whichever interchangeable lens is mounted?

We are under the impression that recreating a scene requires 100% of the original data, but this is proven false every day in the ultra-high-end home theater world. When the video circuitry uses artificial intelligence to double the horizontal and vertical resolution, it recreates the scene from only about a quarter of the original data, and to our eyes the results are magical.
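As a rough illustration of reconstructing a frame from a quarter of the data, here is plain bilinear 2x upscaling in Python. Real video processors use learned super-resolution models; this interpolation is only a simple stand-in for that idea:

```python
import numpy as np

def upscale2x(img):
    """Double the linear resolution of a grayscale image by bilinear
    interpolation. The input holds only a quarter of the output's
    pixels; the other three quarters are reconstructed."""
    h, w = img.shape
    # Map each output pixel center back to input coordinates.
    ys = (np.arange(2 * h) + 0.5) / 2 - 0.5
    xs = (np.arange(2 * w) + 0.5) / 2 - 0.5
    y0 = np.clip(np.floor(ys).astype(int), 0, h - 1)
    x0 = np.clip(np.floor(xs).astype(int), 0, w - 1)
    y1 = np.minimum(y0 + 1, h - 1)
    x1 = np.minimum(x0 + 1, w - 1)
    wy = np.clip(ys - y0, 0.0, 1.0)[:, None]  # vertical blend weights
    wx = np.clip(xs - x0, 0.0, 1.0)[None, :]  # horizontal blend weights
    top = img[y0][:, x0] * (1 - wx) + img[y0][:, x1] * wx
    bot = img[y1][:, x0] * (1 - wx) + img[y1][:, x1] * wx
    return top * (1 - wy) + bot * wy
```

An AI upscaler replaces the fixed blending weights with learned inference about edges and textures, which is why its output looks far better than interpolation from the same quarter of the data.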

Applied to lenses, this means they can be dramatically reduced in size.

Through a combination of optics miniaturization, software corrections, AI data extrapolation, and co-sensors matched to specific lenses, I believe we can do an end run around the laws of optics.

A 1" sensor camera that is shirt-pocketable not only with the kit zoom but with the majority of focal lengths we typically use is not just technically possible; I would be surprised if we don't see one within two years.