We are proud to finally release the first major update to our NNFlowVector plugin. This release has a bit of everything in it: improved neural networks, better UI and user control, better performance overall, better compatibility with Nuke 13.x, and more. Here are the full release notes:
- Fully re-trained the optical flow neural networks with an optimized pipeline and settings. This results in even higher-quality generated vectors, especially around object edges/silhouettes.
- To better handle high dynamic range material, all training is now done internally in a logarithmic colorspace. This made the “colorspace” knob unnecessary, so it has been removed. (Please create new node instances if you are upgrading and your Nuke scripts contain nodes from the old version.)
- Implemented a “process scale” knob that controls the resolution at which the vector calculations happen. A value of 0.5, for example, processes the vectors at half resolution and then automatically scales them back to the original resolution.
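The idea behind the “process scale” knob can be sketched as follows. This is an illustrative approximation, not the plugin’s actual implementation; note in particular that the vector *values* must be rescaled along with the image, since a 1-pixel motion at half resolution corresponds to a 2-pixel motion at full resolution:

```python
import numpy as np

def upscale_flow(flow_half, scale=0.5):
    """Resize a flow field computed at `scale` of full res back to full res.

    flow_half: (h, w, 2) array of motion vectors in half-res pixel units.
    """
    h, w, _ = flow_half.shape
    full_h, full_w = int(h / scale), int(w / scale)
    # Nearest-neighbour resize for simplicity (a real implementation
    # would interpolate).
    ys = (np.arange(full_h) * scale).astype(int).clip(0, h - 1)
    xs = (np.arange(full_w) * scale).astype(int).clip(0, w - 1)
    flow_full = flow_half[ys][:, xs]
    # Divide by the scale so magnitudes are in full-res pixel units.
    return flow_full / scale

# Example: half of a 1080p frame, uniform 1-pixel motion.
flow_half = np.ones((540, 960, 2), dtype=np.float32)
flow_full = upscale_flow(flow_half, 0.5)
# flow_full has shape (1080, 1920, 2) and magnitude 2.0 everywhere.
```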
- Improved user control over how many iterations the algorithm runs while calculating the vectors. The “iterations” knob is now an integer knob instead of a fixed drop-down menu.
- Added a knob called “variant” that lets you choose among several differently trained variants of the optical flow network. All variants produce fairly similar results, but some may perform better on certain types of material, so we encourage you to experiment. If you are unsure, go with the default variant, “A”.
- General speed optimizations. In our internal testing, the plugin now renders about 15% faster overall.
- Added an option for processing in mixed precision. This uses slightly less VRAM and is considerably faster on GPU architectures that support it (RTX).
- Added an option for choosing which CUDA device ID to process on. This means you can pick which GPU to use if you have a workstation with multiple GPUs installed.
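As a complement to the knob, CUDA itself lets you restrict which GPUs any CUDA application can see via the standard `CUDA_VISIBLE_DEVICES` environment variable, set before Nuke launches (e.g. in a wrapper script or `init.py`). This is generic CUDA behaviour, not plugin-specific:

```python
import os

# Expose only the second physical GPU (ID 1) to CUDA. Visible devices
# are then re-indexed, so inside Nuke it appears as device 0.
os.environ['CUDA_VISIBLE_DEVICES'] = '1'
```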
- Optimized the build of the neural network processing backend library. The plugin binary (shared library) is now a bit smaller and faster to load.
- Compiled the neural network processing backend with MKLDNN support, resulting in a vast improvement in rendering speed when using CPU only. In our testing it sometimes uses less than 25% of the render time of v1.0.1, i.e. more than a 4x speed-up!
- Updated the NVIDIA cuDNN library to v8.0.5 for the CUDA 10.1 build. This fully matches what Nuke 13.x is built against, which means our plugin can co-exist with CopyCat nodes as well as Foundry’s other AIR nodes.
- Compiled the neural network processing backend with PTX support, which means that GPUs with compute capability 8.0 and 8.6, i.e. Ampere cards, can now use the CUDA 10.1 build if needed (see above). The only downside is that they have to JIT compile the CUDA kernels the first time they run the plugin. Please see the documentation for more information about setting the CUDA_CACHE_MAXSIZE environment variable.
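A quick sketch of the cache setup mentioned above: `CUDA_CACHE_MAXSIZE` is NVIDIA’s standard environment variable controlling the size (in bytes) of the driver’s JIT compilation cache. Raising it helps ensure the PTX-compiled kernels stay cached, so the JIT cost is only paid on the very first run. Set it before Nuke starts (the 4 GiB value here is an illustrative choice, not a recommendation from the plugin docs):

```python
import os

# Allow the CUDA driver's JIT cache to grow to 4 GiB so compiled
# kernels are retained between sessions.
os.environ['CUDA_CACHE_MAXSIZE'] = str(1 << 32)  # 4294967296 bytes
```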
- Added an internal check that the bounding box doesn’t change between frames (animated bboxes are not supported). The plugin now throws an error instead of crashing.
- Better error reporting to the terminal.
- Added support for Nuke 13.2.
We hope you like this release and find it even more useful in production!
All the best,