How Does A 3D Cinema Camera Work

INTRODUCTION TO 3D

Over the years 3D cinema has seen a few periods of resurgence and decline.

Although it's in a bit of a dip at the moment, it does still exist as a niche format with its own unique method of capturing native footage on specialist 3D cinema cameras.

So let’s go over how these cameras work, the extra crew needed to operate them, and some methods that work best with this format.



HOW 3D CAMERAS WORK

Unlike regular 2D cameras, which capture and project a flat image onto a flat surface, 3D cameras create the impression that footage has depth, with subjects appearing to sit on different planes.

They do this by capturing images in the same way that human eyes see.

Light enters each of our two eyes from a slightly different angle, and the brain processes the two views to give us depth perception. In the same way, two cameras can be placed near each other at slightly different angles to capture two similar images.

When each image is viewed by a different eye, the brain is tricked into seeing the flat images as having different planes of depth.

Presenting slightly offset images to the left and right eyes is also called stereoscopic display, and it is the same concept behind other forms of 3D depth perception, such as VR.
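
To put rough numbers on this idea, standard stereo geometry relates the offset between the two images to the lens separation and the subject's distance. Here's a minimal sketch; the focal length and separation values are illustrative assumptions, not from any real rig:

```python
# Standard stereo geometry: for two parallel cameras, the horizontal
# offset (disparity) between the left- and right-eye images of a point
# at distance Z is roughly  d = f * b / Z,  where f is the focal length
# and b is the separation between the two lenses. All values below are
# illustrative, not taken from any real rig.

f = 0.035   # focal length in metres (a 35mm lens)
b = 0.065   # lens separation in metres (about human eye spacing)

for Z in (2.0, 5.0, 20.0):   # subject distances in metres
    d = f * b / Z            # disparity at the sensor, in metres
    print(f"subject at {Z:>4} m -> disparity {d * 1000:.2f} mm")

# Closer subjects produce a larger offset between the two images,
# which the brain reads as the subject sitting nearer in depth.
```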

In 3D cinemas, different images are fed to each eye using 3D glasses. There are a few varieties, such as anaglyph glasses, whose coloured lenses each filter out everything except the projected image of a matching colour, or glasses with differently polarised lenses that allow each eye to see a different picture.

It is now also possible to create a 3D film without a 3D camera, by manipulating 2D footage with post production software that separates the image into different planes. This is the case for some movies that were shot in regular 2D and later re-released in 3D. But that’s a topic for another day.




3D CAMERA BUILD

There are two types of 3D camera rigs that can be used to get the two cameras close to each other to shoot these slightly offset images. 

The first is called a parallel rig - where the cameras are placed horizontally alongside each other on a sliding plate. The second type of 3D camera configuration is called a beamsplitter. 

This is where the two cameras are placed at a 90 degree angle to each other, pointing into a box that houses a mirror angled at 45 degrees. This mirror works like the one way mirror you see in police interrogation scenes: one camera captures the light that bounces off the reflective side of the 45 degree slope, while the other, level camera shoots straight through the glass - capturing the scene from a slightly offset angle.

As the footage from both cameras needs to match almost exactly, it’s important that the cameras, lenses and any accessories used are all identical.

This means renting two sets of everything in the camera package. Shooting with one 3D camera rig requires two cameras and two sets of lenses; shooting with two 3D rigs needs four cameras and four sets of lenses.

This also applies to accessories like focus motors. Often the wireless follow focus will come as part of the specialised 3D camera package. If using a system such as a Preston with a zoom lens, this may mean using two MDR boxes and six motors: two for focus (one on each camera), two for zoom and two for iris.

However, some focus solutions minimise all those extra cables and MDR boxes, such as the CMotion CVolution - which has a single motor box that can drive up to eight motors at a time, allowing focus to be pulled from one wireless handset across multiple motors on multiple cameras.

All of this gear, combined with aligning two cameras alongside each other, means that 3D cameras are very bulky, heavy and awkward to move around. For that reason there are some best practices for working with this camera system, as well as a unique crew member that you don’t have on a regular 2D shoot.



CREW ROLES

There are two important control settings which are unique to native 3D cameras, and both are controlled by an extra crew member called a stereographer.

These two 3D settings are the interocular distance and the convergence.

The interocular distance refers to the space between the lenses on the two respective cameras. On a parallel rig this can be adjusted by sliding the cameras further apart or closer together on the horizontal axis, while on a beamsplitter it can be changed by adjusting the distance of the lenses from the mirror.

Usually this is controlled by the stereographer with an extra motor that can turn gears to adjust this distance between the cameras. 

This distance between the lenses may need to change for different shots because, as a general rule, the further away the subject is from the camera, the greater the distance between the two cameras should be, while the closer you are to the subject, the nearer the lenses have to be to each other.

So wider shots, such as 3D aerial footage, may need to be shot with a parallel rig where the cameras can be more easily separated, while a beamsplitter 3D camera may work better for a close up where the camera lenses can be brought very close together using the mirror as a divider.
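
To put rough numbers on that rule, stereographers sometimes cite a '1/30 rule' of thumb: start with an interocular distance of about one thirtieth of the distance to the nearest subject. Here's a minimal sketch; the rule is only a rough starting point and the distances are illustrative:

```python
# A common stereography rule of thumb (the "1/30 rule"): start with an
# interocular distance of roughly 1/30th of the distance to the nearest
# subject. It's an approximation, not a fixed standard - in practice the
# stereographer tunes the setting by eye on a 3D monitor.

def suggested_interocular(nearest_subject_m: float) -> float:
    """Return a starting interocular distance in metres."""
    return nearest_subject_m / 30.0

for distance in (0.5, 3.0, 30.0):   # close-up, mid shot, distant wide
    io = suggested_interocular(distance)
    print(f"nearest subject {distance:>4} m -> interocular ~{io * 100:.1f} cm")

# At 0.5 m the lenses would need to sit under 2 cm apart - only really
# possible on a beamsplitter - while at 30 m they can sit around 1 m
# apart on a parallel rig.
```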

The stereographer will monitor this setting as well as the convergence: the angle at which the cameras are toed in towards each other, which sets the screen plane and ensures the two images overlap and align correctly.
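
For a sense of the geometry, the convergence angle follows from the interocular distance and the distance to the plane that should appear at screen depth. A minimal sketch, with illustrative values:

```python
from math import atan, degrees

# Simple convergence geometry: if the cameras sit an interocular
# distance b apart and their axes should cross at the plane that will
# appear at screen depth, a distance Z_c away, each camera is toed in
# by atan((b / 2) / Z_c). Values here are illustrative only.

b = 0.065    # interocular distance in metres
Z_c = 4.0    # distance to the convergence (screen) plane in metres

toe_in = degrees(atan((b / 2) / Z_c))
print(f"each camera toes in by ~{toe_in:.2f} degrees")   # ~0.47 degrees
```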

If using zoom lenses, this also includes making sure that both lenses are set to exactly the same focal length, and that the stop, exposure level and other settings on each camera match correctly.

The job of the cinematographer on a 3D film may involve some added technical complications, but otherwise their framing, lighting and other aspects of the cinematography will remain mostly the same as when shooting a 2D film.

The camera operator will have to deal with the implications of a heavier, more cumbersome camera build, but otherwise will also retain their usual role, with an important difference.

They will usually need to frame their shots a little wider than they normally would, allowing for extra room at the edges of the frame. This is because the 3D images may need to be slightly cropped in during post production if the convergence of the images needs adjusting.

For this reason they need to keep important parts of the frame away from the edges of the shot - for example, fully framing the top of the actor’s head without giving them too much of a ‘haircut’.

Custom frame lines can also be added to the monitor, which crop in on the fully captured image, so that operators know the bounds of framing that they need to abide by.
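
As an illustration of how those frame lines relate to the full captured image, here's a small sketch that derives the inner frame-line rectangle from a chosen safety margin. The 5% margin and the 4K frame size are assumptions for the example, not industry standards:

```python
# Derive inner frame-line bounds from a safety margin so operators know
# how far to keep important action from the edges. The 5% margin and
# the 4K frame size are illustrative assumptions, not fixed standards.

def frame_lines(width: int, height: int, margin: float):
    """Return (left, top, right, bottom) of the cropped frame line."""
    dx = int(width * margin)
    dy = int(height * margin)
    return dx, dy, width - dx, height - dy

left, top, right, bottom = frame_lines(4096, 2160, margin=0.05)
print(f"frame line runs from ({left}, {top}) to ({right}, {bottom})")
# frame line runs from (204, 108) to (3892, 2052)
```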

VT - the department that transmits a live video feed from the cameras to monitors and plays back that footage - is another job on set which may get a bit trickier.

For example, on Prometheus, this involved setting up a video village for a multicam shoot with two 3D monitors, and two 2D monitors that could be flipped between seeing the left eye and right eye camera on a single 3D camera rig.    



SHOOTING 3D

On top of these layers of complication, there are some other best practice rules that are often followed when shooting 3D.

Cinematographers may be advised to light with a little more contrast between the dark and light parts of the frame than they normally would. Creating different planes of light and shadow helps sell the illusion of depth better than lighting everything flat, with a low level of contrast.

For purely practical reasons it may also be easier for DPs to work with zoom lenses rather than primes. This is because changing to a new prime lens focal length requires realigning the cameras each time, while zooming in and out minimises the setup time it takes to correctly overlap the images.

Using two cameras rather than one also means that each rig is significantly heavier and bulkier. If you want to move the camera, you will have to use rigs that can take a heavier payload - such as a remote head on a Technocrane.

Or if something like a Steadicam is required it will take careful technical preparation to make sure the rig can correctly balance.

The large size of these cameras also makes it difficult, if not impossible, to shoot in tight spaces such as car interiors, to get very low angle shots, or to place the camera in any situation that usually requires a small build.

When it comes to moving the camera, stability of motion is usually encouraged. Shooting handheld is advised against, as too much movement can be off-putting and mess with the audience’s sense of motion. This is why 3D cameras often rely on shots operated from a base, like a tripod or dolly, or shots with stable motion from a Technocrane or Steadicam.


CONCLUSION

It’s easy to see why 3D remains a niche format. Capturing native 3D images is quite a challenge, both technically and financially. It adds layers of complication to the production workflow which can, in many ways, limit filmmaking choices.

However, in the right context 3D can work really well, recreating the feeling of being in a real world environment and increasing the audience’s engagement in the world of the story. At other times it can feel a bit gimmicky, distracting and unnatural, and actually pull the audience out of the movie.

Whether or not it will make a mainstream cinema comeback again remains to be seen. 

But maybe these 3D rigs will be reimagined, reconfigured and applied to new visual projection experiences built on similar principles, like the new Sphere or the world of VR. Only time will tell.
