New product called NNFlowVector released!

We are very excited to announce the release of a new product called NNFlowVector. It’s an optical flow plugin for Nuke, powered by state-of-the-art AI/machine learning algorithms and capable of generating very clean, stable and accurate motion vectors. It can also generate the more complex and advanced Smart Vectors used by NukeX’s powerful tools like VectorDistort, VectorCornerPin and GridWarpTracker.

Here is a quick demo of the features:

Be sure to check out the product page for NNFlowVector, as well as the page for the bundled toolset, NNFlowVector Utils.


Fixing an annoying Nuke “feature”

In the middle of developing the next optimised release of NNSuperResolution, as well as our upcoming Nuke plugin in the NN family (be sure to check back soon for the release!), I took a little detour to try to fix one of Nuke’s annoying, long-standing “features”: the dreaded “middle click to zoom out in the DAG”. I haven’t met a single compositor who likes or uses this feature; rather the opposite, people seem to more or less hate it.

The problem is that Nuke doesn’t natively support turning this behaviour off. There is no preference for it, and no Python API call either. To tweak the behaviour you instead have to hack into the Qt widget stack that makes up Nuke’s GUI. You have to find the correct widget, in this case the DAG, and install an event filter on it, which gives you a callback to your own function whenever an event you are listening for occurs. In this case we listen for middle clicks. We catch that event and apply a few tricks to get the behaviour we are after. There is some filtering so that the override only triggers on an actual middle click (technically a middle press followed by a middle release close to the same coordinates), and not when you are using the middle mouse (read: Wacom pen) button to pan around. We also create and delete a Dot node, and send a left mouse button click to the DAG in place of the middle click, to make it all work seamlessly in the background.
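The click-versus-pan test can be sketched in isolation (plain Python; the function name is my own, but the 5 pixel tolerance mirrors the check in the code further down):

```python
# Sketch of the click-versus-pan test: a middle "click" only counts if the
# cursor barely moved between press and release. Function name is illustrative.

def is_middle_click(press_pos, release_pos, tolerance=5):
    """True if the cursor moved less than `tolerance` pixels in both x and y."""
    dx = abs(release_pos[0] - press_pos[0])
    dy = abs(release_pos[1] - press_pos[1])
    return dx < tolerance and dy < tolerance

print(is_middle_click((100, 100), (102, 101)))  # True -> block the zoom out
print(is_middle_click((100, 100), (160, 130)))  # False -> it was a pan, let it through
```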

It wasn’t super straightforward to code and needed quite a lot of trial and error, but after a few hours I managed to produce something that works well (for me at least). Here is the code for you to use if you would like:

import nuke
from PySide2 import QtWidgets, QtGui, QtCore, QtOpenGL

class MouseEventGrabber(QtCore.QObject):
    def __init__(self):
        super(MouseEventGrabber, self).__init__()
        self.middleclicked = False
        self.clickpos = None
        self.dag = self.findDag()
        if self.dag:
            print("Installing DAG event filter")
            self.dag.installEventFilter(self)
        else:
            print("Couldn't install event filter, DAG not found")

    def findDag(self):
        '''Find the QGLWidget of the "Node Graph" pane.'''
        app = QtWidgets.QApplication.instance()
        dags = [widget for widget in app.allWidgets() if widget.windowTitle() == "Node Graph"]
        if not dags:
            return None
        # If several "Node Graph" panes exist, pick the smaller one
        if len(dags) > 1 and dags[0].size().height() > dags[1].size().height():
            return dags[1].findChild(QtOpenGL.QGLWidget)
        return dags[0].findChild(QtOpenGL.QGLWidget)

    def eventFilter(self, widget, event):
        '''Grab mouse events from the DAG.'''
        if event.type() == QtCore.QEvent.MouseButtonPress and event.button() == QtCore.Qt.MouseButton.MiddleButton:
            self.middleclicked = True
            self.clickpos = QtGui.QCursor.pos()
        if event.type() == QtCore.QEvent.MouseButtonRelease and event.button() == QtCore.Qt.MouseButton.MiddleButton and self.middleclicked:
            newpos = QtGui.QCursor.pos()
            self.middleclicked = False
            # Only trigger if press and release happened within 5 pixels of
            # each other, i.e. a click rather than a middle button pan/drag
            if abs(newpos.x() - self.clickpos.x()) < 5 and abs(newpos.y() - self.clickpos.y()) < 5:
                print("Blocked zoom out from middleclick")
                # Create and delete a Dot node, then send a left click to the
                # DAG in place of the middle click
                dot = nuke.createNode("Dot", inpanel=False)
                nuke.delete(dot)
                self.dag = self.findDag()
                if self.dag:
                    QtWidgets.QApplication.sendEvent(self.dag, QtGui.QMouseEvent(QtCore.QEvent.MouseButtonPress, self.dag.mapFromGlobal(newpos), QtCore.Qt.LeftButton, QtCore.Qt.LeftButton, QtCore.Qt.NoModifier))
                return True
        return False

def SetupEventfilter():
    global mouseEventFilter
    if "mouseEventFilter" not in globals():
        mouseEventFilter = MouseEventGrabber()

Place the code above in, for example, your “” python file (in your “.nuke” folder). You then need to register it to be called whenever you open a script. This is because I haven’t found a way to make the code above work automatically at Nuke startup: the DAG widgets aren’t fully created when the startup code runs, so installing the event filter fails. Instead we register it to run when scripts are opened, using the OnScriptLoad callback as such:

nuke.addOnScriptLoad(SetupEventfilter)

That’s it, I hope you like it!
Cheers, David

New major release of NNSuperResolution

After several months of training neural networks, we are pleased to finally announce v3.0.0 of NNSuperResolution. The main updates are:

  1. 2x upscale option (as a complement to the already existing 4x upscaling).
  2. Addition of a “details” knob where you can tweak the amount of introduced generated detail in the upscale process.
  3. Fully re-trained neural networks for sequence mode, with a higher neural capacity (improved quality results).

We have also made NNSuperResolution compatible with upscaling anamorphic formats.

All the different Nuke and platform variants are available now from the Downloads page. If you want to test it out without any added watermark noise, please request a free trial license.

Cheers, David

Demo video on YouTube

We released a demo video presenting a couple of new examples of upscaling material using NNSuperResolution (including a CG sequence). The video also showcases the plugin in action directly in Nuke which gives you an idea of the performance as well as the ease of use for the artist. Have a look:

Development update – Windows versions are released!

The Windows versions of NNSuperResolution have been in the wild for about a month now! We are very excited about this since it’s something a lot of people have been asking for. It’s also worth mentioning that they are Nuke Indie compatible. You can download a copy directly from the downloads page to test it out. If you want to test it without the added watermark/noise, please request a trial license. It’s free and quick, and will let you run the plugin unrestricted for 10 days. If you need more time to evaluate it, please get in contact using the comments field on the trial request page and we’ll organise something suitable.

The next couple of things we are looking into for NNSuperResolution are enabling the “still mode” to upscale CG (RGBA), just like the “sequence mode” already can, and training variants of the neural networks that upscale only 2x. Currently all upscaling with NNSuperResolution is 4x, but you don’t always need to go that large. Maybe you already have full HD material (1080p) and want it remastered as UHD (4K); then 2x would be good to have available directly as an option.
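As a quick illustration of the difference between the two scale factors (a trivial sketch; the helper function is my own, not part of the plugin API):

```python
def upscaled_resolution(width, height, factor):
    """Output resolution after upscaling by an integer factor."""
    return (width * factor, height * factor)

# 2x takes full HD (1080p) straight to UHD (4K)
print(upscaled_resolution(1920, 1080, 2))  # (3840, 2160)

# 4x, the current behaviour, goes all the way to 8K territory
print(upscaled_resolution(1920, 1080, 4))  # (7680, 4320)
```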

We are now featured on the official Plug-ins for Nuke page at Foundry’s website. 🙂

We are also working on producing a demo video to show the plugin in action directly in Nuke. While the best way is always to try things out for yourself using your own material, it can also be nice to see the thing in action on YouTube.

Stay tuned!
Cheers, David

Major update of NNSuperResolution!

It’s been a while since we posted any updates here. That is not because nothing has happened, rather the opposite: we’ve been very busy working on a new and improved version of NNSuperResolution, v2.5.0! It has just been released and is available for download now. The major new features are as follows:

  • New and improved upscale solution for plates, featuring sharper and more detailed results. This has been made possible by fully re-training the neural networks using a lot more, and higher quality, filmed plates shot using the Arri Alexa motion picture camera.
  • A first release of an upscale solution for CG, i.e. for rendered computer graphics. This means you can now upscale renders including the alpha channel (and also custom lightgroups/AOVs).
  • Nuke Indie support

We are very happy to finally have this release in the wild, and would like to hear what you think about it! Download and take it for a spin. If you need a trial license, don’t hesitate to request one for free on the request trial license page.


Major new release of NNSuperResolution!

We are really proud of having released v2.0 of NNSuperResolution to the public. It’s available now from the downloads page! The big new feature is a sequence mode for upscaling video material while keeping it temporally stable. This is a game changer as you can now quadruple the resolution on any video, which for example means that 1K can easily become 4K!

You need to see this in action, and to help you do this we’ve launched a new YouTube channel:

Below is a very quick sneak peek of an example before & after:

Input material
Upscaled result (sequence mode)

We hope you will enjoy this as much as we do!
Cheers, David

Added support for NVIDIA RTX30xx graphics cards

Zoom-in before & after example of running NNSuperResolution

This weekend we rebuilt our development environment from the ground up. This includes updating the graphics driver, CUDA and cuDNN, and recompiling multiple layers of software to support the latest line of graphics cards from NVIDIA, the RTX30xx series (RTX3070, RTX3080 and RTX3090). In more technical terms, it means we now support compute capabilities 8.0 and 8.6. The newly built version variants of NNSuperResolution are available from the Downloads page as usual. If you do own an RTX30xx card, be sure to look for the versions compiled against CUDA 11.2.


More example images, trial licences and bundled CUDA libs

It’s been another month and lots of developments!

In the middle of January we did a series of before & after examples of the output of NNSuperResolution on Instagram and Facebook. For easy access, we’ve also posted these examples on this page here on the website.

We have created a dedicated page to make it easy to request a free trial license for NNSuperResolution. You can request either node locked or floating licences. By default we create a free test license for you that expires after 10 days. This way you can test the plugin fully, without any watermarking/noise, and properly evaluate the results on your own material.

After talking to some clients about their installation experience, we’ve decided to also provide downloads of the NNSuperResolution plugin bundled with the needed CUDA and cuDNN libraries. This makes for a much easier installation procedure if you don’t already have the NVIDIA CUDA Toolkit and NVIDIA cuDNN libraries installed on your system. The bundled CUDA & cuDNN libs can be installed into the same NUKE_PATH directory as the main “” is installed into, and the plugin will find and use them directly from there. These new versions are available on the downloads page.

We are continuing our development journey towards a good super resolution solution for sequences, one that produces a much more temporally stable result. While the translation invariance loss from the previously mentioned paper does help produce a more stable result in general, it doesn’t produce sequences as temporally stable as we want. We are currently looking into the methods presented in the paper “Learning Temporal Coherence via Self-Supervision for GAN-based Video Generation”.