It’s been a week since our release of NNSuperResolution, and we are happy to have received a lot of encouraging comments from artists all across the globe, especially in this thread on LinkedIn. The next thing on our agenda is to look into how we can potentially make it more temporally stable. As mentioned on the product page, the current super resolution solution is based on a neural network trained on still frames. This produces very nice and sharp high-res still images, but doesn’t necessarily result in smooth image sequences / video. Image detail may stutter and flicker between frames, depending on how the input images look. Recent research offers several approaches for achieving a more temporally stable result without degrading the upscaling quality too much. We are currently looking into the methods presented in the paper “Single-frame Regularization for Temporally Stable CNNs”.
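To give a feel for the idea behind that line of work: the intuition is that if a network produces similar outputs for slightly perturbed versions of the same input, it should also behave consistently across consecutive video frames, which differ only slightly. Below is a minimal sketch of such a stability penalty, using a toy nearest-neighbour "upscaler" as a stand-in for the real network. The function names and the noise-based perturbation here are illustrative assumptions, not NNSuperResolution's actual training code.

```python
import numpy as np

def upscale(img):
    # Toy stand-in for a super-resolution network (hypothetical):
    # a simple nearest-neighbour 2x upscale.
    return img.repeat(2, axis=0).repeat(2, axis=1)

def stability_regularizer(model, img, noise_std=0.01, rng=None):
    """Single-frame stability penalty (sketch).

    Perturb the input slightly and penalize the difference between the
    model's output on the original and on the perturbed copy. The idea:
    small input changes (like those between adjacent video frames)
    should produce small output changes, reducing flicker.
    """
    rng = np.random.default_rng() if rng is None else rng
    perturbed = img + rng.normal(0.0, noise_std, img.shape)
    diff = model(img) - model(perturbed)
    return np.mean(diff ** 2)
```

In an actual training loop, a term like this would be added (weighted) to the usual reconstruction loss, trading a little per-frame sharpness for smoother sequences.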

Investigating temporal stability