Nuke: Making Stereo Noise

After seeing that The Foundry tweeted a tutorial today on converting a 2D image of fire into a stereo image, I wondered if you could use a similar method to generate stereo noise clouds within Nuke.

Here’s a script I knocked up very quickly this evening to demonstrate the idea; please feel free to expand and improve on it!

stereo_noise_grab

Download the example script

  1. Set your new project to have stereo views (in the Project Settings).
  2. Make a Noise node. By default it generates a nice cloudy texture.
  3. Split off the alpha channel, invert it, then copy it back into the forward.u channel.
  4. Make an iDistort node, set the channels to RGB, the UV channels to motion (and just check the u box), set the UV scale to 15 or so.
  5. Make a JoinViews and plug the iDistort into the left side and the original Noise into the right.
  6. Stick an Anaglyph node under that and put on your red/cyan glasses to view your noise cloud in wonderful stereo.

This uses the motion channel values to distort the image horizontally: the black (0) areas are unaffected, the white (1) areas are distorted fully, and the shades in between are shifted proportionally. That has the effect of pushing the white ‘peaks’ to the left while keeping the black ‘valleys’ where they are, giving the illusion of depth when viewed in stereo.
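To make the distortion step concrete, here is a minimal toy sketch of what iDistort is doing on one scanline, assuming a simple nearest-neighbour gather where each output pixel samples the input at an x-offset of u × scale. The function names and the sample data are illustrative, not Nuke API:

```python
def distort_row(row, u, scale):
    """Shift each pixel of `row` horizontally by u[x] * scale pixels.

    row   -- list of pixel values (one scanline of the noise)
    u     -- displacement values in [0, 1], e.g. the inverted alpha
    scale -- the 'UV scale' from the iDistort node (e.g. 15)
    """
    width = len(row)
    out = []
    for x in range(width):
        # Gather: sample the source at x + u*scale, clamped to the image.
        # u = 0 leaves the pixel alone; u = 1 shifts by the full scale.
        src = min(width - 1, max(0, int(round(x + u[x] * scale))))
        out.append(row[src])
    return out

# One scanline of 'noise' and its inverted alpha, as in step 3 above.
row = [0.0, 0.2, 0.9, 1.0, 0.9, 0.2, 0.0, 0.0]
u = [1.0 - v for v in row]
left_eye = distort_row(row, u, scale=2)   # distorted view for one eye
# The undistorted `row` serves as the other eye, joined with JoinViews.
```

The per-pixel offset is what creates the parallax between the two views; a larger UV scale means more apparent depth.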

It’s not really like smoke, because you can’t see the other side, but perhaps a combination of several of these could simulate it better? Suggestions welcome.

If the noise isn’t animating, and you use the translate tab to move the noise around, you get the illusion of looking down onto a bumpy landscape with peaks and valleys. Could be useful.

Will play with this more; it may not be of much use, but there feels like potential for generating stereo elements to put onto stereo live action (if you can’t get hold of real elements, of course).


Nuke: Stabilizing a plate with CameraTracker

Discovered today that you can easily use a 3D camera track in Nuke to stabilize a plate – and what’s more, you can choose which plane of the image you want to lock down (something in the foreground, mid-ground or background).

CameraTracker Stabilize screengrab

  1. Get a good 3D camera track, masking out anything that moves first.
  2. Solve the camera and go to a frame you’d like to use for reference, probably somewhere in the middle of the sequence.
  3. Copy and paste the camera so that you have two, renaming them to something like ‘CameraMain’ and ‘CameraProject’.
  4. Open the properties of CameraMain and remove any animation (right-click the parameter and choose ‘No Animation’).
  5. Create a Project3D node and connect it to CameraProject. Also plug the plate into the Project3D.
  6. While viewing the plate, bring up the CameraTracker properties pane so that you can see the solved track points over the plate. Choose one that sits on the plane you want to stabilize, e.g. on the back wall. Then right-click the point and choose ‘create card’ to create a card in 3D space at that position.
  7. Plug the Project3D into the card.
  8. Switch to the 3D view and you should see the image projected from CameraProject onto the card. Remove all rotation from the card in the properties pane so that it’s flat-on to CameraMain, then use the uniform scale on the card to scale it up nice and large so that none of the picture gets cut off.
  9. Now, create a ScanlineRender, plug in the card and CameraMain (your stationary camera), then view the result. If you are still on the same reference frame, the output should be identical to the plate.
  10. Play it back and you should see a stabilized image; you’ll then need to crop it to remove the black edges that appear. To lock to a different plane, just repeat steps 6–8 using a different point from the track (e.g. something in the foreground). You can do this several times and keep the cards in your script, swapping them out to find which one gives the best result. Alternatively, you can just play with the Z translation of the first card until you get what you want, but choosing an actual point on a plane will probably be quicker.

So, to summarize: the viewing camera and the card do not move at all, but the projected image moves around on the card in step with the camera move in the shot, cancelling out the movement.
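That cancellation can be sketched in a toy 1D model, assuming the camera move is a pure horizontal pan of a few pixels per frame (a real solve is a full 3D camera; this just illustrates why the motion cancels). All names and data here are made up:

```python
def shoot(scene, pan):
    """The plate a panning camera sees: the scene shifted by `pan` pixels."""
    w = len(scene)
    return [scene[(x + pan) % w] for x in range(w)]

def stabilize(plate, pan):
    """Project the plate back through the animated camera (CameraProject)
    onto a static card, then view it from the static camera (CameraMain):
    equivalent to undoing the per-frame pan."""
    w = len(plate)
    return [plate[(x - pan) % w] for x in range(w)]

scene = [0, 1, 2, 3, 4, 5, 6, 7]
pans = [0, 2, 5, 3]                        # per-frame camera offsets
plates = [shoot(scene, p) for p in pans]   # the moving footage
stabilized = [stabilize(f, p) for f, p in zip(plates, pans)]
# Every stabilized frame matches the static scene: the move is cancelled.
```

The same per-frame offset appears once in the projection and once (negated) in the render, which is why the two cancel exactly as long as the cameras match.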

EDIT: After using this technique quite a lot for various tasks, I realise that for best results you should create a card at the angle of the plane you want to stabilise (by selecting multiple points on that plane and creating a card), rather than pick one point and remove rotation. That’s what you’d do for a frame hold patch projection, so you’d do the same here. Only difference is you swap the cameras.

This means that after you stabilise, you can copy the card and project the result onto the new card with the cameras swapped, match-moving it back to the original. That’s useful for creating mattes, but not advised for patches because of the slight softness produced, as it breaks concatenation between the two stages. I’ve been using it recently to create rain mattes: remove the camera movement using the 3D track, Min three adjacent frames together to remove the bright rain drops, use a Difference node between the stabilised plate and the Min, then match-move using the same camera setup. This gives a nice alpha of the rain, as that’s the only thing moving in the plate.