The missing features are being "hallucinated". Something like this could be trained for normal maps etc., and then we wouldn't have to bake large maps anymore. Not perfect, but it worked quite well in some areas of my test image.

It works especially well with high-quality content, or with down-resized photos. It works less well with noisy, grainy, low-detail pics.

Original = the original resolution image. I took this, resized it down 400% (so I had a well-defined, low-grain image), and then up-scaled it back up 400%, once with the website and once with Photoshop. The website's algorithm seems to be able to "make up" or invent new details, including grain and surface detail, so the result closely matches the original image. It's better than Photoshop's new "Preserve Details 2.0" algorithm, and much better than a sharp bicubic or Lanczos algorithm.

Some parts of the up-scaled image look almost the same as the original, or even better, even though something like 75% of the original image content is missing.
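The round-trip test described above can be sketched in a few lines. This is a minimal illustration with plain Python lists standing in for a real image (in practice you would use Pillow or OpenCV and the actual upscalers); "400% down" is read here as scaling to 1/4 size, and the synthetic "grain" pattern is an assumption for the demo.

```python
import math

def downscale4(img):
    # average each 4x4 block -- fine detail above this scale is lost
    h, w = len(img), len(img[0])
    return [[sum(img[y*4+dy][x*4+dx] for dy in range(4) for dx in range(4)) / 16.0
             for x in range(w // 4)] for y in range(h // 4)]

def upscale4_nearest(img):
    # classical upscaler: just repeats pixels, invents no new detail
    return [[img[y // 4][x // 4] for x in range(len(img[0]) * 4)]
            for y in range(len(img) * 4)]

def rmse(a, b):
    # root-mean-square error between two images of the same size
    n = len(a) * len(a[0])
    return math.sqrt(sum((pa - pb) ** 2
                         for ra, rb in zip(a, b)
                         for pa, pb in zip(ra, rb)) / n)

# synthetic 64x64 test image with high-frequency "grain"
orig = [[(x * 7 + y * 13) % 256 for x in range(64)] for y in range(64)]

restored = upscale4_nearest(downscale4(orig))
print(round(rmse(orig, restored), 1))  # large error: the grain is gone
```

An "AI" upscaler wins this comparison precisely when it re-invents plausible grain, which classical filters cannot do.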

Found another one if you're interested: https://topazlabs.com/downloads#gigapixel. It offers a 30-day trial with no image limit (?). From my initial tests it seems slightly worse than letsenhance, but it offers more options (e.g. 600% upscaling).

It would be interesting if you could apply this to an image sequence (a movie). I sent a message to the letsenhance people about it but got no reply. It would be expensive anyway. But programs like these, which invent new details when resizing, would probably have no temporal coherence; I'm guessing the result would flicker from frame to frame.
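The suspected flicker could actually be measured. Here is a hedged toy sketch: an upscaler that invents detail independently per frame (simulated below with random noise, since I don't know how any of these services actually behave) raises the frame-to-frame difference even when the underlying content doesn't change at all. All functions are illustrative, not any real API.

```python
import random

def frame_diff(a, b):
    # mean absolute difference between two frames (flat pixel lists)
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def fake_detail_upscaler(frame, rng):
    # stand-in for a non-temporally-coherent "AI" upscaler: the
    # hallucinated detail is re-rolled from scratch on every frame
    return [p + rng.uniform(-8, 8) for p in frame]

rng = random.Random(0)
# a static scene: the same source frame twice
src = [float((i * 31) % 256) for i in range(1024)]
up1 = fake_detail_upscaler(src, rng)
up2 = fake_detail_upscaler(src, rng)

print(frame_diff(src, src))      # 0.0 -> the source is perfectly stable
print(frame_diff(up1, up2) > 0)  # True -> the invented detail flickers
```

A temporally coherent method would have to condition the invented detail on neighbouring frames so this difference stays near zero for static content.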

This is possible, and some other services even specialize in it. Most of them are in-house, and the only public one is not available even for beta testing yet; it's part of the Artomatix suite (I am a beta tester, but the current focus is texture synthesis). They did showcase upscaling to 4K using this tech, and it was temporally well stabilized; you couldn't tell. But the detail enhancement wasn't that great compared to others I have seen, not sure why.

They are keeping this in-house, closed to the public, or web-based only, because they think it's something revolutionary and people will pay a lot of cash for each picture resized. Well, it is something revolutionary, but I think the technology will soon leak to the public in the form of After Effects plugins and so on, like it always does eventually.

"AI" resizing for everybody ! :)

Perhaps with only a 200% resize, the artefacts will be almost invisible. If anyone does a video up-size test with this, or the topazlabs thing, please post here.

By the way, does some kind of AI upscaling in the Corona VFB sound interesting to you? The idea would be: render something, press a button, wait, and you get double the output size. I guess it would be interesting for prints and for fast previews that turn out good enough.

It's not something the devs are working on, just a wild fantasy, but I am curious how many people would be interested in this. :)

In my opinion not in this way, but I could see it as an integrated feature of interactive rendering. nVidia is basically doing this with their latest DLSS, which is a sort of up-scaling/AA technique. It would be another trick to massively boost interactive performance, with very small visual detriment, if any.

The tech is super-fast, akin to OptiX, so it's responsive enough for interactive use.

So basically you are IRing a lower-res image, which is then upscaled in almost real time?

Yeah :-). In fact, I could imagine this being the best order of operations:

1) Sample at half the resolution for best quality, or 1/4 for best speed.
2) Up-sample in real time.
3) Denoise the up-sampled result.

That said, it remains to be seen whether this gives better quality than sampling at the true resolution and then denoising, but imho it could give the denoiser a lot more information. This is also how nVidia advises it to be used.
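The order of operations above can be sketched as a trivial pipeline. All three stages here are stubs of my own invention (the real thing would call the renderer, an ML upscaler, and a denoiser such as OptiX); the sketch only shows that both the half-res and quarter-res paths end at the same full output size, with denoising applied last.

```python
def render_at(scale):
    # stub: pretend to render a (width, height) buffer at reduced size;
    # the 1920x1080 base resolution is just an example
    base_w, base_h = 1920, 1080
    return int(base_w * scale), int(base_h * scale)

def upsample(buf, factor):
    # stub for the real-time up-sampler (step 2)
    w, h = buf
    return w * factor, h * factor

def denoise(buf):
    # stub for step 3 -- denoising last, so it sees the full
    # up-sampled information rather than the small noisy buffer
    return buf

half = denoise(upsample(render_at(0.5), 2))      # best quality
quarter = denoise(upsample(render_at(0.25), 4))  # best speed
print(half, quarter)  # both paths end at the full 1920x1080 output
```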

If the upscale were even 70% as good as letsenhance... Rendering and tweaking in IR at maybe a very small 480 x 360 pixels would be super fast, and then having that upscaled on the fly by the GPU to maybe 960 x 720 or more with minimal loss? That sounds awesome.