
Fun With Copy Cat

  • Writer: Max Austin
  • Aug 10, 2021
  • 4 min read

No one was more excited than I was when The Foundry announced they were adding new machine learning tools to Nuke 13! I've been very busy at work the past few months, but I finally had time to sit down and see what it could do.


I'll walk through three tests I've done and share the results and what I learned from them.


I would like to preface this by saying these tests were done on my personal PC and not a workstation, so my GTX 1060 did its best. I'm sure training would have been a lot faster with a more recent GPU, but in the end the results would be the same.


CopyCat has been most widely advertised as a tool to speed up roto (more in a garbage-matte way than an articulate-roto way), so that was the logical first test. I grabbed a plate off Pexels with some areas I thought would be challenging.
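Before getting into the numbers, here's the basic wiring for anyone who hasn't opened the node yet: a minimal Nuke Python sketch of the roto setup. I built mine in the UI, so take the knob names (dataDirectory, modelFile) as guesses and verify them with node.knobs() in your own session.

```python
# A minimal sketch of the CopyCat roto setup via Nuke Python, assuming the
# plate frames and matching hand-rotoed mattes are already assembled into
# two clips (AppendClip1/AppendClip2). Knob names like "dataDirectory" and
# "modelFile" are assumptions -- check node.knobs() before relying on them.
import nuke

plate      = nuke.toNode("Read1")        # the full plate
plate_refs = nuke.toNode("AppendClip1")  # plate frames at the chosen ref frames
roto_refs  = nuke.toNode("AppendClip2")  # the matching hand-rotoed mattes

copycat = nuke.createNode("CopyCat")
copycat.setInput(0, plate_refs)          # Input: what the network sees
copycat.setInput(1, roto_refs)           # Ground Truth: what it should produce
copycat["dataDirectory"].setValue("/training/mountain_v001")  # assumed knob name

# Training writes out a .cat file; an Inference node applies it to every frame.
inference = nuke.createNode("Inference")
inference.setInput(0, plate)
inference["modelFile"].setValue("/training/mountain_v001/Training.cat")  # assumed knob name
```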


The first three tests were just upping the epoch count to see how that worked. It's essentially: more epochs = better results, with diminishing returns and slower training. I've posted some data from the roto tests below (times are h:mm), along with a quick throughput check after the list.


Mountain Test v001

-Run at 10,000 epochs

-Time: 1:30

Mountain Test v002

-Run at 20,000 epochs

-Time: 3:30

-Noticeable improvements around edges

Mountain Test v003

-Run at 30,000 epochs

-Time: 4:30

-Minor improvements, not worth the extra hour

Mountain Test v004

-Run at 20,000 epochs

-Time: 4:00

-Added 3 extra ref frames around trouble spots
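For anyone curious about where the time goes: training cost scales roughly linearly with epoch count (and climbs with extra reference frames), so it's the quality gain per hour that dries up, not the machine's throughput. A quick back-of-napkin check in Python, using only the numbers above:

```python
# Rough throughput check using the mountain test numbers above (times in h:mm).
tests = {
    "v001": (10_000, "1:30"),
    "v002": (20_000, "3:30"),
    "v003": (30_000, "4:30"),
    "v004": (20_000, "4:00"),  # same epochs as v002, plus 3 extra ref frames
}

for name, (epochs, hmm) in tests.items():
    h, m = map(int, hmm.split(":"))
    minutes = h * 60 + m
    print(f"{name}: ~{epochs / minutes:.0f} epochs/min over {minutes} min")

# Throughput stays around 80-110 epochs/min on the GTX 1060; tripling the
# epochs roughly triples the wait while the visual gains flatten out.
```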


The Results

Of course, I could keep tweaking, stencil bits out, and add more reference frames to improve the matte, but I just wanted to see how much time this could potentially save with minimal effort.


Difference between test 1 and final test.


A few key areas I was looking at: hair detail, the backpack straps, foot interaction, and her shorts being a similar color to the sky.

It didn't pick up on any fine hair details, but I wasn't expecting it to. It did an excellent job picking up the backpack straps blowing in the wind; that part impressed me most. If I were doing this in production, I'd probably have to roto the feet manually, since there's too much occlusion and shadowing for CopyCat to get a good read on them. It did struggle a bit with the shorts, but that would be a relatively simple roto to do.

CopyCat exceeded my expectations here!


For the second test, I wanted to see how CopyCat would do at replicating a change in color over a larger frame range. The plate I found on Pexels was of a man fanning out a hand of cards; there was a fair amount of movement and lighting change, so it was ideal for a test.


I started with 10 reference frames evenly spread over 546 frames. The initial results weren't too bad but had issues when the cards reached the side of the frame or hit a bigger lighting change. The second test was to see how much it could correct itself just by increasing the epochs. After that, I added 2 reference frames based on areas it was still having trouble with. The time it took to complete these tests more than doubled between the first and third test. The specifics can be found below.
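Picking evenly spread reference frames is trivial, but here's the kind of helper I'd use to keep the spacing honest (assuming the clip runs frames 1-546; the helper name is mine):

```python
# A trivial helper for spreading N reference frames evenly over a clip,
# assuming this plate runs frames 1-546.
def spread_refs(first, last, n):
    """Return n frame numbers evenly spread over [first, last], inclusive."""
    step = (last - first) / (n - 1)
    return [round(first + i * step) for i in range(n)]

refs = spread_refs(1, 546, 10)
print(refs)  # [1, 62, 122, 183, 243, 304, 364, 425, 485, 546]

# Trouble spots then get added by hand -- frames 269 and 313 for v003.
refs += [269, 313]
```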


Cards Test v001

-Run at 10,000 epochs

-Time: 4:30

-Few problem areas at the edge of frame, color slips in lighting changes


Cards Test v002

-Run at 10,000 epochs

-Time: 5:00

-Slight improvements in both areas


Cards Test v003

-Run at 20,000 epochs

-Time: 10:00

-Added frames 269 & 313

-Large improvements, still some red spill in movement


The Results

Overall, I was really impressed with the way this one turned out. It did a good job picking up lighting changes, especially once I added a ref frame near the edge of the screen. There's still some red spill when the cards move, but again, it did an impressive job.


For my final test, I wanted to do something outside of the box (literally). I did a quick render of a CG cube slowly rotating, and I wanted to see how CopyCat would handle applying a texture. This one wasn't a 'production scenario,' but I thought it would be useful because I'd have a rendered version to compare against.

Below are the results from the tests. Since I had a perfect reference for every frame, it was only a matter of increasing the frequency of reference frames and messing with the number of epochs.
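Because the reference sets here are just "every N frames," this is the one test where I'd script the FrameHold/AppendClip wiring instead of dropping nodes by hand. A sketch, with the caveat that the Read node names and the 1-100 frame range are made-up placeholders:

```python
# Sketch: build a reference clip from every Nth frame of a source, using
# FrameHold + AppendClip. Node names (Read1/Read2) and the 1-100 frame
# range are illustrative assumptions, not values from the actual tests.
import nuke

def build_ref_clip(src, frames):
    """FrameHold each chosen frame of src and string them into one clip."""
    append = nuke.nodes.AppendClip()
    for i, f in enumerate(frames):
        hold = nuke.nodes.FrameHold(first_frame=f)  # FrameHold's frame knob
        hold.setInput(0, src)
        append.setInput(i, hold)
    return append

untextured = nuke.toNode("Read1")  # the untextured cube (CopyCat's Input)
rendered   = nuke.toNode("Read2")  # the textured render (Ground Truth)

frames = list(range(1, 101, 15))   # ref every 15 frames (v001/v002)
# frames = list(range(1, 101, 7))  # roughly every 7-8 frames (v003)

in_clip = build_ref_clip(untextured, frames)
gt_clip = build_ref_clip(rendered, frames)
```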


Cube Test v001

-Run at 10,000 epochs

-Time: 3:00

-Ref every 15 frames


Cube Test v002

-Run at 20,000 epochs

-Time: 5:00

-Ref every 15 frames


Cube Test v003

-Run at 10,000 epochs

-Time: 4:00

-Split ref to every 7 or 8 frames


It's nowhere near usable, but it was still interesting to see the results. I was disappointed it didn't pick up anything in the reflection on the back wall, but it tried its best.


To answer the question in the thumbnail, yes, I do believe that machine learning will play a huge role in the future of VFX. I look forward to seeing other artists' uses of CopyCat as well as the other machine learning tools The Foundry adds to Nuke. Feel free to leave a comment and let me know what you've done with CopyCat!
