All downloads should now be much faster

We have successfully migrated all of our downloadable files to Amazon’s AWS S3 cloud storage. We received your feedback about our downloads being painfully slow (for some of you it even took days to download the latest builds of our plugins). We agree that this was not acceptable, and we have now solved it by upgrading to a much more robust solution. We hope this will provide good download speeds going forward, no matter where in the world you are located.

Cheers,
David

Bug fix release of NNFlowVector and some other updates

We have just released a new version of NNFlowVector, v1.5.1. It’s a patch/bug fix release with the following release notes:

  • Patch release fixing a streaking error that occurred in the last processing patch (the bottom-right area of the processed image) at some resolutions (those that needed padding to become divisible by 8).
  • Improved the blending of the seams between processing patches. The problem was not always visible, but became apparent with some specific combinations of the maxsize, overlap and padding values.

We have noticed that the NNFlowVector Utility nodes “MotionVector_DistortFrame” and “MotionVector_FrameBlend” don’t work in Nuke13.1 and Nuke13.2. They do, however, work in Nuke13.0 and earlier versions. We investigated this and found the cause to be a bug in the way Nuke handles motion vector channels in the IDistort node when they are time shifted using a TimeOffset node. If you are interested, here is the bug ticket (ID 518631) at Foundry’s site: https://support.foundry.com/hc/en-us/articles/7496106165010-ID-518631-The-Viewer-outputs-a-grey-image-when-there-is-an-IDistort-downstream-of-a-time-node
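
For reference, here is a minimal sketch of a setup that should trigger the bug (the “Read1” node name and the “motion” channel set are placeholders for whatever your script uses, and the knob names are from memory, so please double check them):

import nuke

# Placeholder: a plate that carries motion vectors in a "motion" channel set.
plate = nuke.toNode("Read1")

# Time shift the vectors, then use them in an IDistort downstream.
offset = nuke.nodes.TimeOffset(inputs=[plate])
offset["time_offset"].setValue(1)

distort = nuke.nodes.IDistort(inputs=[offset])
distort["uv"].setValue("motion")
# Viewing the IDistort in Nuke13.1/13.2 now outputs a grey image.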

We have recently relocated our website to a new hosting service. It’s now much faster and more reliable. We are currently looking into moving our file hosting as well, to get all downloads working better (we know that they have been painfully slow at times; thanks for letting us know!).

Due to client requests, we have added a new license type for our plugins called a “Global license”. This type of license is tailored for large companies that operate worldwide, with operations in many different geographical locations. You will find the option in the Shop next to the other license types (“node locked”, “floating” and “site license”).

Have a nice weekend!
Cheers,
David

Bug fix update to NNSR and some future plans

We released an important bug fix update to NNSuperResolution yesterday (12th of September). It’s called v3.2.1 and fixes a regression where overscan support didn’t work as intended in sequence mode (v3.2.0 actually crashed if you tried to calculate a sequence that had overscan). The plugin is now patched and fully working again, so please go ahead and install the new version (there are no other changes in this update).

We wanted to take the opportunity to share some of what we are aiming to work on going forward. The single most requested feature for NNFlowVector is mask support, i.e. the ability to exclude some local object movements from the solve of the resulting optical flow. We are currently implementing this, but it’s a rather complex undertaking that involves gathering a lot of example data, training another neural network, and rewriting parts of the plugin. Hence we are not setting any hard time frames at this point. If everything goes according to plan, it will be released sometime next year.

We are also planning to investigate if we can squeeze some extra quality out of NNSuperResolution’s upscaling in sequence mode. The idea is to replace its rather simple internal optical flow engine with the much more capable engine implemented in NNFlowVector. That way the solution could lean even more on temporal features than it does today, which will hopefully result in higher end quality.

We are also slowly (and secretly) working on our third Nuke plugin powered by AI/ML! It’s too early to say what it is at this point, so you’ll just have to check back on our website from time to time to stay in the loop.

All the best,
David

New version release of NNFlowVector, v1.5.0

We are proud to finally release the first major update to our NNFlowVector plugin. This is a release with a bit of everything in it: improved neural networks, better UI and user control, better overall performance, better compatibility with Nuke13.x, etc. Here are the full release notes:

  • Fully re-trained the optical flow neural networks with optimized settings and pipeline. This results in even higher quality of generated vectors, especially for object edges/silhouettes.
  • To better handle high dynamic range material, all training has internally been done in a logarithmic colorspace. This made the “colorspace” knob unnecessary, so it has been removed. (Please create new node instances if you are updating and your Nuke scripts contain nodes from the old version.)
  • Implemented a “process scale” knob that controls the resolution at which the vector calculations happen. A value of 0.5 will, for example, process the vectors at half resolution and then scale them back to the original resolution automatically.
  • Improved the user control of how many iterations the algorithm performs while calculating the vectors. The “iterations” knob is now an integer knob instead of a fixed drop-down menu.
  • Added a knob called “variant”, to let the user choose between several differently trained variations of the optical flow network. All network variants produce fairly similar results, but some might perform better on certain types of material, so we encourage you to experiment. If you are unsure, go with the default variant “A”.
  • Speed optimizations in general. According to our own internal testing, the plugin is now about 15% faster to render overall.
  • Added an option for processing in mixed precision. This uses a bit less VRAM and is quite a lot faster on GPU architectures that support it (RTX).
  • Added an option for choosing which CUDA device ID to process on. This means you can pick which GPU to use if you have a workstation with multiple GPUs installed.
  • Optimized the build of the neural network processing backend library. The plugin binary (shared library) is now a bit smaller and faster to load.
  • Compiled the neural network processing backend with MKLDNN support, resulting in a vast improvement in rendering speed when using CPU only. According to our own testing it sometimes needs less than 25% of the render time of v1.0.1, i.e. over 4x the speed!
  • Updated the NVIDIA cuDNN library to v8.0.5 for the CUDA10.1 build. This means we fully match what Nuke13.x is built against, so our plugin can co-exist with CopyCat nodes as well as other AIR nodes by Foundry.
  • Compiled the neural network processing backend with PTX support, which means that GPUs with compute capability 8.0 and 8.6, i.e. Ampere cards, can now use the CUDA10.1 build if needed (see above). The only downside is that they have to JIT compile the CUDA kernels the first time they run the plugin. Please see the documentation for more information about setting the CUDA_CACHE_MAXSIZE environment variable (there is a small sketch of this right after this list).
  • Internal check that the bounding box doesn’t change between frames (animated bboxes are not supported). The plugin now throws an error instead of crashing.
  • Better error reporting to the terminal.
  • Added support for Nuke13.2.
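
Regarding the CUDA_CACHE_MAXSIZE note above: here is a minimal sketch of how you could raise the JIT cache size from, for example, your “init.py”, so the compiled kernels survive between sessions (the 2 GB value is just an illustration; check the documentation for a recommended size):

import os

# CUDA_CACHE_MAXSIZE is measured in bytes; 2147483648 = 2 GB (illustrative value).
# It must be set before the CUDA runtime initializes, i.e. before the plugin loads.
os.environ["CUDA_CACHE_MAXSIZE"] = "2147483648"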

Hope you like it, and that you find it even more useful in production!
All the best,
David

New version release of NNSuperResolution, v3.2.0

We are happy to announce that we’ve just released a great update to NNSuperResolution, v3.2.0. It’s mainly a stability and optimization release, and it’s available now from our downloads page. Release notes as follows:

• Speed optimizations overall. According to our own internal testing, sequence mode is now about 30% faster to render.
• Added an option for processing in mixed precision. This uses a bit less VRAM and is quite a lot faster on GPU architectures that support it (RTX).
• Added an option for choosing which CUDA device ID to process on. This means you can pick which GPU to use if you have a workstation with multiple GPUs installed.
• Disabled initial heuristics to fix a slowdown that occurred on the first processed frame on certain GPU architectures.
• Optimized the build of the neural network processing backend library. The plugin binary (shared library) is now a bit smaller and faster to load.
• Built the neural network processing backend with MKLDNN support, resulting in a vast improvement in rendering speed when using CPU only. According to our own testing it sometimes needs less than 25% of the render time of v3.0.0 (in sequence mode), i.e. over 4x the speed!
• Updated the NVIDIA cuDNN library to v8.0.5 for the CUDA10.1 build. This means we fully match what Nuke13.x is built against, so our plugin can co-exist with CopyCat nodes as well as other AIR nodes by Foundry.
• Built the neural network processing backend with PTX support, which means that GPUs with compute capability 8.0 and 8.6, i.e. Ampere cards, can now use the CUDA10.1 build if needed (see above). The only downside is that they have to JIT compile the CUDA kernels the first time they run the plugin. Please see the documentation for more information about setting the CUDA_CACHE_MAXSIZE environment variable.
• Internal check that the bounding box doesn’t change between frames in sequence mode (animated bboxes are not supported). The plugin now throws an error instead of crashing; a small pre-flight check sketch follows after this list.
• Bug fixes to the “frame range knobs” handling and the “reset frame range” button.
• Better render status logging to the terminal of which layer is processing.
• Better error reporting to the terminal.
• Added support for Nuke13.2.
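
Since sequence mode now errors out on animated bounding boxes, a small pre-flight check can save you a failed render. Here is a rough sketch (the “Read1” name is a placeholder, and it assumes the bbox of the upstream node can be sampled per frame like this):

import nuke

node = nuke.toNode("Read1")  # placeholder: the node feeding NNSuperResolution
first = nuke.root().firstFrame()
last = nuke.root().lastFrame()

boxes = set()
for frame in range(first, last + 1):
    nuke.frame(frame)  # move the current frame so the bbox evaluates there
    box = node.bbox()
    boxes.add((box.x(), box.y(), box.w(), box.h()))

if len(boxes) > 1:
    nuke.message("The bounding box animates over the frame range!")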

Hope you like it!
Cheers, David

New product called NNFlowVector released!

We are very excited to announce the release of a new product called NNFlowVector. It’s an optical flow plugin for Nuke, powered by state-of-the-art AI/machine learning algorithms and capable of generating very clean, stable and accurate motion vectors. It can also generate the more complex and advanced Smart Vectors to be used with NukeX’s powerful tools like VectorDistort, VectorCornerPin and GridWarpTracker.

Here is a quick demo of the features:

Be sure to check it out on our product page, NNFlowVector, and also have a look at the bundled tools, NNFlowVector Utils.

Cheers,
David

Fixing an annoying Nuke “feature”

In the middle of developing the next optimised release of NNSuperResolution, and also our upcoming Nuke plugin in the NN family (be sure to check back pretty soon for the release!), I did a little detour to try to fix one of Nuke’s annoying and long-standing “features”: the dreaded middle click to zoom out in the DAG. I haven’t met a single compositor who likes or uses this feature; rather the opposite, people seem to more or less hate it.

The problem is that Nuke doesn’t natively support turning off this behaviour. There are no preferences for it, and there are no Python API calls either. To tweak it you instead have to hack into the Qt widget stack that makes up Nuke’s GUI. You have to find the correct widget, in this case the DAG, and install an event filter on it, which gives you a callback to your own function whenever an event you are listening for occurs. In this case we listen for middle clicks. We then catch that event and apply a few hacks to get the behaviour we are after. There is some filtering so that our override only triggers on an actual middle click (a middle press followed by a middle release close to the same coordinates), and not when you are using the middle mouse (read: Wacom pen) button to pan around. We also create and delete a Dot node, and send a left mouse button click to the DAG instead of the middle click, to make it all work seamlessly in the background.

It wasn’t super straightforward to code and needed quite a lot of trial and error, but after a few hours I managed to produce something that works well (for me at least). Here is the code for you to use if you like:

import nuke
from PySide2 import QtWidgets, QtGui, QtCore, QtOpenGL

def findDagWidget():
    '''Find the QGLWidget of the main DAG ("Node Graph").'''
    app = QtWidgets.QApplication.instance()
    dags = [widget for widget in app.allWidgets() if widget.windowTitle() == "Node Graph"]
    if not dags:
        return None
    # If there are several "Node Graph" widgets, use the one with the
    # smallest height (the main DAG). This also handles the case where
    # only a single DAG exists.
    dags.sort(key=lambda w: w.size().height())
    return dags[0].findChild(QtOpenGL.QGLWidget)

class MouseEventGrabber(QtCore.QObject):
    def __init__(self):
        super(MouseEventGrabber, self).__init__()
        self.middleclicked = False
        self.clickpos = None
        self.dag = findDagWidget()
        if self.dag:
            print("Installing DAG event filter")
            self.dag.installEventFilter(self)
        else:
            print("Couldn't install event filter, DAG not found")

    def eventFilter(self, widget, event):
        '''Grab mouse events in the DAG.'''
        if event.type() == QtCore.QEvent.MouseButtonPress and event.button() == QtCore.Qt.MouseButton.MiddleButton:
            self.middleclicked = True
            self.clickpos = QtGui.QCursor.pos()
            #print("Set middle clicked: True (position: %d, %d)" % (self.clickpos.x(), self.clickpos.y()))
        if event.type() == QtCore.QEvent.MouseButtonRelease and event.button() == QtCore.Qt.MouseButton.MiddleButton and self.middleclicked:
            newpos = QtGui.QCursor.pos()
            #print("Set middle clicked: False (position: %d, %d)" % (newpos.x(), newpos.y()))
            self.middleclicked = False
            # Only trigger on an actual click: press and release within
            # 5 pixels of each other. Otherwise it's a pan, leave it alone.
            if abs(newpos.x() - self.clickpos.x()) < 5 and abs(newpos.y() - self.clickpos.y()) < 5:
                print("Blocked zoom out from middleclick")
                import nukescripts
                nukescripts.clear_selection_recursive()
                # Create a temporary Dot node and send a left click to the DAG
                # instead of the middle click, then clean the Dot up again.
                dot = nuke.createNode("Dot", inpanel=False)
                self.dag = findDagWidget()
                if self.dag:
                    QtWidgets.QApplication.sendEvent(
                        self.dag,
                        QtGui.QMouseEvent(
                            QtCore.QEvent.MouseButtonPress,
                            self.dag.mapFromGlobal(newpos),
                            QtCore.Qt.LeftButton,
                            QtCore.Qt.LeftButton,
                            QtCore.Qt.NoModifier))
                nuke.delete(dot)
                return True
        return False


def SetupEventFilter():
    global mouseEventFilter
    if "mouseEventFilter" not in globals():
        mouseEventFilter = MouseEventGrabber()

Place the code above in, for example, your “menu.py” file (in your “.nuke” folder). You then need to register it so it gets called whenever you open a script. This is because I haven’t found a way to make the code work automatically at Nuke startup: the DAG widgets aren’t fully created when the startup code runs, so installing the event filter fails. Instead we register it to run when a script is opened, using the OnScriptLoad callback as such:

nuke.addOnScriptLoad(SetupEventFilter)

That’s it, I hope you like it!
Cheers, David

New major release of NNSuperResolution

After several months of training neural networks, we are pleased to finally announce v3.0.0 of NNSuperResolution. The main updates are:

  1. 2x upscale option (as a complement to the already existing 4x upscaling).
  2. Addition of a “details” knob where you can tweak the amount of generated detail introduced in the upscale process.
  3. Fully re-trained neural networks for sequence mode, with higher network capacity (improved quality results).

We have also made NNSuperResolution compatible with upscaling anamorphic formats.
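
If you set up your scripts from Python, the new options should be settable like any other knobs. Here is a quick sketch (the knob names “scale” and “details” are guesses on my part; check the actual names on the node in Nuke):

import nuke

sr = nuke.createNode("NNSuperResolution")  # assumes the plugin is installed
sr["scale"].setValue("2x")   # hypothetical knob name for the 2x/4x option
sr["details"].setValue(0.5)  # the new 'details' knob from the release notes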

All the different Nuke and platform variants are available now from the Downloads page. If you want to test it out without any added watermark noise, please request a free trial license.

Cheers, David

Demo video on YouTube

We released a demo video presenting a couple of new examples of upscaling material using NNSuperResolution (including a CG sequence). The video also showcases the plugin in action directly in Nuke, which gives you an idea of the performance as well as the ease of use for the artist. Have a look:

Development update – Windows versions are released!

The Windows versions of NNSuperResolution have now been in the wild for about a month! We are very excited about this since it’s something a lot of people have been asking for. It’s also worth mentioning that they are Nuke Indie compatible. You can download a copy directly from the downloads page to test it out. If you want to test it without the added watermark/noise, please request a trial license. It’s free and quick, and will let you run the plugin unrestricted for 10 days. If you need more time to evaluate it, please get in contact using the comments field on the trial request page and we’ll organise something suitable.

The next thing we are looking into for NNSuperResolution is getting the “still mode” able to upscale CG (RGBA), similarly to the “sequence mode” which is already capable of this. We have also started training variants of the neural networks that upscale only 2x. Currently all upscaling with NNSuperResolution is 4x, but you don’t always need to go that large. Maybe you already have full HD material (1080p) and want it remastered as UHD (4K); then a 2x option would be good to have available directly.

We are now featured on the official Plug-ins for Nuke page at Foundry’s website. 🙂

We are also working on producing a demo video to show the plugin in action directly in Nuke. While the best way is always to try things out for yourself using your own material, it can also be nice to see the plugin in action on YouTube.

Stay tuned!
Cheers, David