Grip Rigs For Cinematic Camera Movement (Part 2)
To move a cinema camera in different ways requires different types of mechanical rigs. In this video let’s go over some of the common, interesting and even unusual rigs that are used in the film industry to create motion.
INTRODUCTION
There are many reasons to move the camera in filmmaking. It can be used to reveal more of a space and establish the geography of a scene. It can elevate action in fight sequences. Evoke an emotion or a tone. Or even provide an unusual perspective to a scene.
To move a cinema camera in different ways requires different types of mechanical rigs. In this video let’s go over some of the common, interesting and even unusual rigs that are used in the film industry to create motion.
BOLT
The Bolt is a specialised robotic arm rig designed to move the camera at extremely high speeds with extreme precision. It is built by Mark Roberts Motion Control and is the go-to robotic arm for industry level film work.
So, how does it work? This cinebot has a 6-axis robotic arm - which means it has six different points where the arm can swivel, rotate, pan, tilt and roll the camera. This arm is attached to a heavy base which is designed to slide along a track - which comes in 3-metre sections - giving it an additional lateral movement axis.
This total of seven axes of movement means that it can move the camera in very complex ways, almost anywhere within a confined area. What makes the Bolt special is that it comes with software called Flair that is used to program each move that it makes, frame by frame.
Once a move is programmed it can be saved and repeated as many times as necessary in frame perfect passes. In other words it can perform the exact same motion multiple times, so that each move records exactly the same image, even when broken down frame for frame.
This allows filmmakers to record multiple plate shots of the same take - where they can record different details in different parts of the frame multiple times, then layer different sections of each plate on top of each other in post production.
For example, this is a shot from a commercial that I camera assisted on a few years ago. The Bolt could be used to record two passes. One plate shot of the boy drinking orange juice, and another plate with a dog being cued to jump by an animal wrangler.
In post, the animal wrangler could be cropped out and the motion of the dog jumping overlaid on top of the shot of the boy, so that it looked like it was recorded in a single take. This is made easy by the Bolt’s frame perfect, repeatable, programmed camera moves.
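If it helps to picture what a ‘programmed, frame perfect move’ actually is, here’s a tiny, purely illustrative Python sketch. It isn’t Flair’s real format or software - just the underlying idea of storing one camera pose per frame so a move can be replayed identically on every pass:

```python
# Purely illustrative sketch - not Flair's actual file format or API - of the
# core idea: a camera move stored as one pose per frame, so every pass replays
# exactly the same positions and angles.

from dataclasses import dataclass

@dataclass
class Pose:
    x: float        # arm position in metres
    y: float
    z: float
    pan: float      # head angles in degrees
    tilt: float
    roll: float

def blend(a: Pose, b: Pose, t: float) -> Pose:
    """Linearly interpolate between two poses, t running from 0.0 to 1.0."""
    mix = lambda p, q: p + (q - p) * t
    return Pose(mix(a.x, b.x), mix(a.y, b.y), mix(a.z, b.z),
                mix(a.pan, b.pan), mix(a.tilt, b.tilt), mix(a.roll, b.roll))

# Two keyframes define a simple 2-second move; sampling at 24 fps gives 48 poses.
start = Pose(0.0, 1.2, 0.5, 0.0, -5.0, 0.0)
end = Pose(1.5, 1.2, 0.3, 30.0, -10.0, 0.0)
move = [blend(start, end, frame / 47) for frame in range(48)]

# Because the list of poses is fixed, pass two, three or ten of the same take
# hit identical frames - which is what lets the plates line up in post.
```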
The Bolt is often combined with a high frame rate camera, like a Phantom, to shoot slow motion because the Bolt can move at extremely high speeds. When shooting slow motion, everything, including camera motion, gets slowed down. This means that to shoot extreme slow mo and still get a normal tracking movement, the camera needs to move at a much faster speed than normal.
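To put rough numbers on that, here’s a quick back-of-the-envelope calculation. The frame rates and speeds are assumed examples, not figures from a real shoot:

```python
# Footage captured at a high frame rate is slowed down by capture_fps / playback_fps,
# so the physical camera move has to be sped up by the same factor to read as a
# normal tracking move on screen.

def required_speed(desired_onscreen_speed_m_s, capture_fps, playback_fps=24):
    slowdown = capture_fps / playback_fps
    return desired_onscreen_speed_m_s * slowdown

# Assumed example: a move that should read as 0.5 m/s on screen,
# captured at 1000 fps for extreme slow motion.
print(round(required_speed(0.5, 1000), 1))   # ~20.8 m/s - far beyond what a human operator could push
```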
It can also be used to get super fast camera motion when shooting with the camera at a normal frame rate.
It’s actually a bit scary how fast this heavy chunk of metal can move. That’s why the Bolt operators will usually either cordon off the area that the arm moves in or give a stern warning to cast and crew not to go anywhere near the arm, unless the operators give permission. Because if this thing were to hit anything at a high speed it’d be super dangerous if not fatal.
For this reason, camera assistants will usually strip the camera of a monitor, mattebox, eyepiece and any additional weight that could offset balance or upset smooth movement or even pieces that could fly off while the arm moves and stops at extreme speeds.
Another use case for the Bolt is to program it to do very specific, macro moves. Using the Flair software and a special focus motor, the focus distance can also be programmed for each frame - since pulling focus at these extreme speeds manually is very difficult, if not impossible.
This means it can repeat moves in macro shots, get multiple plates, all while maintaining perfect preprogrammed focus.
Although you can do incredible things with the Bolt, it’s usually reserved for specialised, pre-planned shots only, as it's both an expensive toy to rent and because moving it around and programming it takes a lot of time to do.
TOWERCAM
Another piece of equipment which is designed for a very niche type of camera movement is the Towercam. This is a telescoping camera column which is designed to get completely vertical, booming camera motion. It is remote controlled by an operator near the base of the rig.
Unlike a Technocrane, which is more of an angled telescoping arm, the Towercam is an arm that moves completely vertically and can either be rigged from the ground or rigged from above and telescope up and down.
Although the hydraulic arm of a dolly can also be used to do vertical up and down moves, the range of its arm is much more limited - to around 1 metre of vertical boom movement. There are different versions of the Towercam, but the XL can extend the height of the camera to almost 10 metres.
This is a great tool for getting large, symmetrical, up and down moves - which is why Robert Yeoman often uses it when shooting with Wes Anderson, who loves himself some symmetry: a dolly for horizontal tracking moves and a Towercam for vertical ones.
But it can also be rigged with a remote head, which allows an operator on the ground to pan and tilt the camera while it moves vertically - which is great for this kind of shot, tracking an actor walking up a flight of spiralling stairs.
It can also be used for doing fast vertical moves, capturing live events, nature documentaries, or any other application where straight, vertical motion is required.
3-AXIS GIMBAL
Let’s move on to a piece of gear that most people may know: a 3-axis gimbal. This is a stabilisation device that is used to keep the camera steady while capturing fluid motion.
3-axis refers to the three motorised points on the rig that counteract any unwanted camera shake. These axes control the tilt (sometimes called pitch), which is the vertical up and down motion; the pan (sometimes called yaw), which is the horizontal left and right motion; and the roll, which is the side to side camera motion.
By stabilising the camera across these three points, these motors minimise shakes and vibrations, and give the camera movement a smooth, floating, fluid feeling - very similar to what a Steadicam does.
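As a very loose illustration of that principle, here’s a hypothetical sketch of a single stabilisation axis: the motor keeps nudging the measured angle back towards the target angle. Real gimbals use IMU sensor fusion and properly tuned PID loops, so treat this only as the concept:

```python
# Minimal sketch of one stabilisation axis: each control step, the motor applies
# a small correction proportional to the error between where the camera is
# pointing and where it should be pointing.

def stabilise_axis(measured_deg, target_deg, gain=4.0, dt=0.002):
    """Return the correction (in degrees) the motor applies in one control step."""
    error = target_deg - measured_deg
    return gain * error * dt

# Simulate a bump on the tilt axis being pulled back to level over time.
tilt = 5.0                      # degrees of unwanted tilt after a footstep
for _ in range(2000):           # roughly 4 seconds of 500 Hz control steps
    tilt += stabilise_axis(tilt, 0.0)

print(round(tilt, 3))           # effectively 0.0 - the bump has been damped out
```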
Probably the two most popular gimbals on the market for industry level film production are currently the DJI Ronin 2 and the Freefly Movi Pro.
There are many reasons these stabilising devices have soared in popularity in recent years. For one, they make it very quick to set up and adjust a shot with stable movement - much quicker than setting up and levelling a dolly and tracks.
For another, their relatively light weight makes it easy to shoot in hard to access locations, where bringing other heavier gear like a crane or a dolly isn’t possible.
They are also flexible in their applications. For example, they can double up as a stabilised head - which can be operated remotely with camera wheels or a joystick - or they can be operated in a handheld configuration using the natural drift of the motors.
Whereas in the past it may have been necessary to hire a Steadicam, a Steadicam operator, a stabilised Libra head and a Libra head technician, now productions can often get away with hiring one gimbal technician with a gimbal - which can be switched between different rigs, including a drone.
Their flexibility also extends to how they can be operated. For example, shooting from a dolly on a track locks you into one lateral line that you can shoot from, while shooting with a gimbal is like shooting with a stabilised handheld rig. It can go anywhere your feet go, which makes it easy to adjust to the movements of actors changing position during a take.
However, there are a few reasons why some DPs don’t like using them much.
The camera needs to be balanced so that no undue strain is applied to one of the motors. Sometimes this is easier said than done, especially when using a heavier camera build or longer lenses. Any time a lens is changed it needs to be rebalanced. If a certain weight is exceeded it may not be possible to balance it at all.
Even when accompanied by a gimbal technician, these machines can run into technical issues. In my experience, working with a Steadicam and working with a gimbal is like night and day. A Steadicam is mechanically simple and almost never has technical issues, whereas gimbals almost always need to be tweaked by assistants to deal with glitches that pop up.
Also, unless the gimbal is being operated by a second operator with wheels or a joystick, it can be difficult to get precise framing, as the motors create a natural drift of movement as you move or stop, which can be hard to predict.
One final advantage of gimbals is that they can sometimes be combined with other grip rigs for a special move. For example, when I assisted on this show, the key grip came up with an overhead sliding rail system with a bungee cord that the gimbal could be attached to. This took some of the strain of operating the heavy camera away and made it quick and easy to shoot fight scenes.
Or there are special shots like this, which combine a gimbal with the final rig we’ll talk about: a Spydercam.
SPYDERCAM
A Spydercam is a cable suspended rig that allows for smooth, controlled aerial movement.
Setting one up begins by rigging several high tensile strength cables which need to be securely anchored above the shooting area so that the cables don’t touch the ground.
A camera rig with a stabilised remote head is then suspended from those cables, and its horizontal and vertical movement is controlled by motorised winches that are connected to the suspension cables.
This way the physical position of the overhead camera can be changed by an operator, called the pilot, using software while at the same time another camera operator can pan, tilt or roll the camera to change the frame that it sees.
The Spydercam was regularly used in films before the widespread use of commercial drones, to get controlled, smooth aerial footage within confined spaces, which couldn’t be shot by using a helicopter.
For example, aerial plate shots which swing or fall through a high-rise city, into which they could later drop shots of characters.
Even after drones became popular production tools, there are still applications for the Spydercam - most notably for indoor scenes or scenes shot in a studio that require controlled aerial camera moves.
For example, the shot mentioned before used a Spydercam with a specially designed release. This allowed a stabilised gimbal to float from one floor to the next, which the camera operator could then grab on the second floor and begin shooting with in a handheld configuration.
Another reason filmmakers may want to use a Spydercam is for a shot which starts close to actors. Having a drone start this close to the ground, objects or actors would cause visible wind on screen and would also be a bit dangerous. Instead, a precisely controllable Spydercam can be used to start on a nice, tight shot, then be winched up the cables into a massive aerial wide.
How To Shoot Action: John Wick 4
Let’s break down how cameras, lights and grips were used to elevate action scenes in John Wick: Chapter 4.
INTRODUCTION
The first thing that I thought about while watching the fight scenes in the new John Wick movie, was another fight scene in Oldboy. And it’s not only because of the great action choreography.
Both movies use cinematography in a very deliberate way to enhance these fight scenes: wide shot sizes, vividly saturated lighting and a smoothly moving camera.
So, let’s break down how cameras, lights and grips were used to elevate action scenes in John Wick: Chapter 4.
APPROACHES TO SHOOTING ACTION
Watching this movie got me thinking about the different ways that fight scenes can be presented by filmmakers. I think a large part of how we as an audience interpret or get involved in these scenes comes down to the kind of visual style in the cinematography.
There are basically two directions that filmmakers can go in when shooting action: one is more realistic, immersive and first-person, the other approach is more elevated, observational and third-person.
John Wick very much falls into the second camp. The fight scenes are larger than life and something of a spectacle. Somewhere in between semi-reality and a comic book world.
This look is in a large part created by three aspects of the cinematography: the shot sizes used, the lighting and the camera movement. Lots of slower cutting wide shots are used. The lighting is hard and vividly colourful. And the camera moves are smooth.
Compare this to a fight scene in The Bourne Identity. There are lots of quick cutting close ups. The lighting is softer, more motivated and naturalistic. And the camera moves are rough and handheld.
To me, this way of shooting feels much more like the audience is being forced into the more realistic perspective of the main character.
While this way of shooting feels like the audience is being invited to sit back and enjoy a heightened, action packed spectacle.
Of course, this isn’t a hard and fast rule that has to apply to each and every fight scene in a movie. For example, Oldboy has one long-take fight scene that feels almost like you are watching a video game from a wide, 3rd person perspective.
And, it has another fight scene where the perspective shifts to first person, includes POV shots, and covers the scene with close ups, quicker cuts and a rougher camera movement.
SHOT SIZE
Part of what makes the fight scenes in John Wick impressive from a technical perspective is that they often play out in wider shot sizes, with slower cutting. This relies on the actors performing the stunts at a high level, rather than getting stunt doubles to do the action and using lots of quick cutting to break each stunt down shot by shot.
Cinematographer Dan Laustsen enhanced this feeling of width through his camera selection.
“I think that is the goal or the key. To shoot it wide and beautiful. As wide as we can.” - Dan Laustsen
The previous John Wick movies had been shot on Super 35 cameras with Master Anamorphic lenses. One way to pull off an even wider look is to shoot on a camera with a larger sensor - the Arri Alexa LF.
The bigger the sensor, the wider the field of view it is able to capture. However, they ran into a snag. Previously you could either shoot with a large format camera and spherical lenses, then crop to widescreen in post, or you could shoot with a Super 35 size sensor and anamorphic lenses that natively create a widescreen aspect ratio.
To overcome this issue, they turned to Arri, who had recently developed a set of anamorphic lenses which were able to cover large format sensors. These lenses are called the Arri Large Format Anamorphics, or ALFAs.
This gave them the combination of a large format wide field of view and the anamorphic look - all in one. These were used to shoot 95% of the movie.
Since the widest ALFA lens is a 32mm, the remaining 5% was mainly made up of super wide angle shots using spherical Signature Prime lenses - probably with a 12mm. You can tell which are the spherical lens shots in the movie because of their almost fish eye look and round bokeh, while the ALFAs have oval bokeh.
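To put some rough numbers on the ‘bigger sensor, wider view’ idea, here’s a small sketch using the standard spherical field of view formula. The sensor widths are approximate published figures, and an anamorphic lens like the ALFAs captures wider still because of its squeeze - so the comparison matters more than the exact degrees:

```python
# Horizontal field of view for a spherical lens: fov = 2 * atan(sensor_width / (2 * focal_length)).
# Sensor widths below are approximate published figures.

import math

def horizontal_fov(sensor_width_mm, focal_length_mm):
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_length_mm)))

SUPER_35_WIDTH = 24.89   # mm, traditional Super 35 camera aperture
ALEXA_LF_WIDTH = 36.70   # mm, Alexa LF open gate

print(round(horizontal_fov(SUPER_35_WIDTH, 32), 1))   # ~42.5 deg: a 32mm on Super 35
print(round(horizontal_fov(ALEXA_LF_WIDTH, 32), 1))   # ~59.6 deg: same focal length, much wider on LF
print(round(horizontal_fov(ALEXA_LF_WIDTH, 12), 1))   # ~113.6 deg: the near-fisheye look of a 12mm
```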
“We also shoot wide so Keanu could do what Keanu had to do. And then getting into the close ups for whatever reason super fast because Keanu is running out of speed, of course, because he’s not a stunt guy.” - Dan Laustsen
So, they mainly used wide focal length lenses to capture the stunt scenes in two or three full takes of wide shots, then used long focal length lenses mainly for close ups during dialogue scenes, without using many focal lengths in between.
LIGHTING
Another large part of what elevates the John Wick movies from reality, and places them in an exaggerated, comic book like world is due to the lighting.
Laustsen is known for lighting with exaggerated colours, often mixing steel blue, cyan, or green tones, with warmer light. In fact it was partly his use of colour in Crimson Peak that won him the job to shoot the franchise with the director.
“I think he loved Crimson Peak because he loved the colours. We talked a lot about colour. He really likes colour. You know, he loved this heavy world.” - Dan Laustsen
This coloured lighting, that lifted the story out of the normal, realistic world, was created in two ways. Firstly, with practical lighting and secondly by using film lights.
Practical lights are sources that are included in the frame of the shot. These were normally built into the sets themselves using different coloured strips of LEDs. Or, in this movie, they were done by rigging hundreds of RGB Astera Titan Tubes across the set, which were placed inside fluorescent housings.
These fixtures could then be wirelessly controlled through a DMX board to tweak their brightness or colour.
Often these tubes would be set to a colour that contrasted with the colour of the key light. For example if the key was set to a cyan-green, then the practical tubes could be set to an orangey-red. This not only creates a vivid, exaggerated, colourful palette, but it also separates the background from the foreground.
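For anyone curious what that looks like on the data side, here’s a hypothetical sketch of a DMX universe with two RGB fixtures set to contrasting colours. The four-channel layout is an assumed generic profile, not Astera’s actual fixture personality, and actually transmitting it over Art-Net or sACN is left out:

```python
# A DMX universe is just 512 byte values; a fixture listens on a block of
# consecutive channels. Here we assume a simple 4-channel profile:
# intensity, red, green, blue.

def set_rgb_fixture(universe, start_channel, intensity, r, g, b):
    """Write a simple 4-channel fixture's values into a 512-byte DMX universe."""
    for offset, value in enumerate((intensity, r, g, b)):
        universe[start_channel - 1 + offset] = max(0, min(255, value))

universe = bytearray(512)

# Key side of the frame pushed towards a cyan-green...
set_rgb_fixture(universe, start_channel=1, intensity=255, r=0, g=200, b=180)
# ...while a background practical tube is set to a contrasting orange-red.
set_rgb_fixture(universe, start_channel=5, intensity=180, r=255, g=90, b=20)
```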
These practicals also served an extremely important practical purpose - they could be seen in the shot. This is especially necessary for the kind of super wide shot sizes they were shooting, where the camera saw everything. Like this scene, where they rigged these practicals next to the canal for an incredible 2km.
When you’re shooting in such an enormous space at night, there is little to no place to hide film lights anywhere near the actors without them being seen.
“You know, that is my goal with lighting all the time. I want to have the light as far away as possible. Of course if you do a close up of something I have light close to the camera but normally all my light is far, far, far away.” - Dan Laustsen
The further you place a light source from an actor, the less effect that light will have. The only way to counteract this is to use stronger sources that compensate for the light lost over distance. This means either hiring higher output sources or increasing the number of sources - which, either way, is an expensive endeavour.
The wider you want your night exterior shots to be, the more difficult and expensive it becomes to light.
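The maths behind that cost is the inverse square law: light falls off with the square of the distance, so every doubling of distance loses two stops. A quick sketch with made-up numbers:

```python
# Inverse square law: relative intensity falls off with 1 / distance^2,
# and every doubling of distance costs two stops that the fixtures have to make up.

import math

def relative_intensity(distance_m, reference_distance_m=1.0):
    return (reference_distance_m / distance_m) ** 2

def stops_lost(distance_m, reference_distance_m=1.0):
    return math.log2(1 / relative_intensity(distance_m, reference_distance_m))

# Hypothetical numbers: a lamp that exposes correctly at 10 m,
# pushed back to 40 m to stay out of a very wide night exterior frame.
print(stops_lost(40, 10))   # 4.0 stops lost -> you need 16x the output to compensate
```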
They also shot basically everything at night - even the day interiors and a scene that was meant to take place at dawn. This is because it was the only way to control the level of ambient sunlight and maintain the kind of strong, hard, consistent light for long periods of time without competing with sunlight - which would have overpowered the film lights.
His team either lit the scenes with very high output, hard sources, like T12 or T24 tungsten units, or they used loads - and by loads I mean around 85 units - of lower output, RGB LED Creamsource Vortex 8s. These gave an even spread of RGB light and were also water resistant - which was very necessary for certain scenes with lots of water.
CAMERA MOVEMENT
The camera movement generally tracks the action of the choreography with slow, fluid, smooth movement.
The technical team pulled out almost every piece of gear they could to do this in a variety of different situations: from a drone, to a Spidercam, to a Technocrane. For example, this long take, top-down God’s eye view shot meant rigging a line with a Spidercam, that could move the camera precisely above the large set.
However, the two main tools that they used to move the camera for the fight sequences were a Steadicam and a dolly - with A-cam on dolly and B-cam on a Steadicam AR.
The AR allows operators to quickly change from high mode to low mode, while keeping the camera completely stabilised.
Shooting from a Steadicam meant that they could get smooth, flowing movement that followed the action, but which also allowed the operator to be flexible and adjust their movements to the movements of the actors.
Because the fight scenes were not easy on the actors, they didn’t want to have to do more takes than were necessary. Therefore, the whole movie was shot at quite a deep stop of T/4, which gave the focus puller a little bit of leeway for the difficult, fast moving action scenes.
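To see roughly how much leeway a deeper stop buys, here’s a standard depth of field calculation. The focal length, focus distance and circle of confusion are assumed example values (and T-stops are treated as f-stops here), not production figures from the film:

```python
# Standard hyperfocal-based depth of field approximation.

def depth_of_field(focal_mm, f_stop, subject_m, coc_mm=0.029):
    s = subject_m * 1000.0                                 # work in millimetres
    h = focal_mm ** 2 / (f_stop * coc_mm) + focal_mm       # hyperfocal distance
    near = h * s / (h + (s - focal_mm))
    far = h * s / (h - (s - focal_mm)) if s < h else float("inf")
    return (far - near) / 1000.0                           # total depth of field in metres

# An assumed 32mm lens focused at 3 m:
print(round(depth_of_field(32, 2.0, 3.0), 2))   # ~1.04 m of total depth at T/2
print(round(depth_of_field(32, 4.0, 3.0), 2))   # ~2.26 m at T/4 - more than double the leeway
```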
Because of the speed of the scenes, the 1st AC also often chose to pull focus by judging distance by eye, since it’s easier to pre-empt where the actors, and therefore the focus, need to be when you are watching their movements rather than a monitor.
CONCLUSION
John Wick 4 is a great example of how action and the world of the story can be made to feel more elevated due to the cinematic choices made by the filmmakers.
All three elements of wide shots, vibrant, colourful lighting and fluid movement combined to make the action scenes in John Wick feel like an entertaining, extravagant spectacle.
The Most Popular Cinema Lenses (Part 6): Zeiss, DZOFilm, Leica, Angénieux
In part six of this series, I’ll go over four more popular cinema lenses that are used in the high end film industry, break down the images they produce, their practical design and what kinds of filmmaking they are useful for.
INTRODUCTION
The choice of which cinema lens to select for your shoot is a pretty important one. Each lens varies not only in a practical and ergonomic sense, but also in the kind of look that they produce.
So, in part six of this series, I’ll go over four more popular cinema lenses that are used in the high end film industry, break down the images they produce, their practical design and what kinds of filmmaking they are useful for.
DZOFILM VESPID RETRO
Starting with something a bit unusual, the Vespid Retros are a limited edition set of seven prime lenses. They are modelled on the same optical and mechanical construction as the original Vespid prime set that preceded them.
They come in a pretty standard set of primes, ranging from 16mm to 125mm with a stop of T/2.1 - except for the 16mm, which has a T/2.8 aperture.
The Retros cover a 46mm image circle, which means they can cover all of the recent Vista Vision, Full Frame and LF digital cinema sensors without vignetting.
Despite this coverage, they come in a very compact, lightweight form factor, which makes them a good choice for camera rigs that need balancing, such as gimbals. They come with a standardised 80mm front diameter across the range of primes, which makes it easy to quickly swap out a mattebox when changing lenses. All the lenses except the 16mm also feature a 77mm filter thread which can be used to attach screw-in filters.
They come with a PL mount, but it’s also possible to swap out the mount to a Canon EF mount. This means that they come with all the modern features that make them quick and easy for camera assistants to work with - just like the original Vespid primes.
So what makes them different from the original Vespids? Well they come with a bit of a party trick. The glass is finished with a golden coating. This coating produces warm, amber flares that haze up the lens when you point it at a direct light source. They resolve fine details, but also offer a subtle softness. All of this combines into what they call a ‘retro’ or ‘vintage’ look.
This warm, dreamy look is good for more experimental applications, like shooting dream or flashback sequences, or for capturing films where you want to play with light and texture.
For example, I recently rented these primes out on a music video shoot, which I lit with lots of backlight in order to get maximum flares for a more dreamy, cosmic musical performance.
And, well, they’re silver, which looks kind of cool.
ANGENIEUX OPTIMO 15-40MM & 28-76MM
Now, let’s move away from primes and take a look at a pair of popular zoom lenses.
The Angenieux Optimo 15-40mm and 28-76mm are a pair of lightweight zooms with a 2.7x zoom ratio. This means that the longest focal length is 2.7 times longer than the widest focal length in the zoom range.
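As a quick sanity check of that ratio, with the focal lengths from the lens names:

```python
# Zoom ratio is simply the longest focal length divided by the widest.
def zoom_ratio(widest_mm, longest_mm):
    return longest_mm / widest_mm

print(round(zoom_ratio(15, 40), 2))   # 2.67x for the Optimo 15-40mm
print(round(zoom_ratio(28, 76), 2))   # 2.71x for the Optimo 28-76mm - both marketed as roughly 2.7x
```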
The reason I pair these two zooms together is because they are constructed to an identical weight and size, and are often rented as a pair as interchangeable wide angle zooms that can cover from wide to medium focal lengths, while maintaining a high degree of optical quality and an aperture of T/2.6.
With an image circle of around 31mm, these lenses are designed to cover ever so slightly wider than Super 35 size sensors, but they won’t cover the full width of large format sensors, like the Alexa Mini LF.
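The coverage question boils down to simple geometry: a lens covers a sensor when its image circle diameter is at least the sensor’s diagonal. Here’s a rough check using approximate sensor dimensions:

```python
# Coverage check: image circle diameter versus sensor diagonal.
# Sensor dimensions below are approximate, for illustration only.

import math

def sensor_diagonal_mm(width_mm, height_mm):
    return math.hypot(width_mm, height_mm)

IMAGE_CIRCLE = 31.0                            # mm, the figure quoted for these Optimos
super35 = sensor_diagonal_mm(24.9, 14.0)       # ~28.6 mm, a typical Super 35 3-perf / 16:9 area
mini_lf = sensor_diagonal_mm(36.70, 25.54)     # ~44.7 mm, Alexa Mini LF open gate

print(IMAGE_CIRCLE >= super35)   # True  - covers Super 35 with a little to spare
print(IMAGE_CIRCLE >= mini_lf)   # False - the corners of a large format frame would vignette
```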
At 1.92kg these zooms are lightweight enough to be used on a Steadicam, a gimbal or handheld on the shoulder. The zoom and focus happen internally in these lenses, which means they have a constant lens volume.
In other words the lens barrel doesn’t extend and twist as it might with some vintage lenses. This means a balanced, stabilised rig like a gimbal will remain balanced even if the lens is ‘zoomed’ during a take.
In terms of the look, they have a good degree of contrast and no vintage softness or aberrations. Basically it’s a good, sharp, clean zoom. The Optimo range retains homogeneity of colour, contrast and resolution across the range. This makes it easy to use multiple zooms in the range on the same shoot, without worrying about optical differences in quality or colour.
On the wide 15mm end the lens also has minimal distortion.
As mentioned, these lenses are perfect for anyone who needs a wide angle zoom on a Steadicam and gimbal, to make quick and easy adjustments to focal length or zoom the lens in a shot.
They are also used by solo operators in documentary situations - for example, I recently used these lenses in a handheld build with an Amira to shoot a doccie-style commercial. Zooms also save time swapping out lenses, and are often carried on long form jobs to accompany a set of primes.
ZEISS STANDARD SPEED
The Zeiss Standard Speeds, sometimes also called the Zeiss T/2.1s, are a set of lightweight primes. They were originally manufactured somewhere between the late 60s and mid 70s by Carl Zeiss for Arri to be paired with their Arriflex cameras.
They originally came in five focal lengths: 16mm, 24mm, 32mm, 50mm and 85mm. However, later Mk II versions of these lenses in the 1980s included additional focal lengths.
The Mk II versions of this lens are the ones most commonly found at rental houses and feature a similar build and coating as the faster T/1.3 Zeiss Super Speeds. For this reason they are often used by rental houses to ‘build out’ or add extra focal lengths to a Super Speed lens set, especially on the longer end with the 100mm and 135mm focal lengths. But also for DPs that like focal lengths in between the traditional ones, like a 40mm.
A mechanical difference between the Super Speeds and the Standard Speeds is that the actual glass inside the housing of the T/2.1s is pretty tiny - around the size of the glass in another vintage prime set from the same era, the Cooke Speed Panchros - while the glass used for the Super Speeds takes up the entire housing.
Regardless, the T/2.1s are still able to cover Super 35 size sensors without vignetting. Because of their small glass elements, these lenses are very light, yet still solidly constructed and easily usable by camera assistants - as long as they have been well maintained and calibrated by the rental house.
What I love about these lenses is the look. It is a very comparable look to the Super Speeds as, from what I can gather, the T/2.1 Mk I and Mk II sets both have the same T* multicoating on the glass as the Super Speeds.
Because of the older optical design of these lenses they have some vintage characteristics. For example, when shot with a wide open aperture they have lower contrast than modern lenses, like the Cooke S4s or Master Primes. This gives the images more of a washed out feel.
These old lenses have a simple, symmetrical lens design which produces beautiful bokeh which is slightly swirling. When I’ve shot with them I’ve found the Standard Speeds, especially the 40mm, to have really beautiful, creamy, slightly off kilter, oval bokeh - which I actually prefer to the Super Speeds. Despite their age, these primes are still quite sharp when you hit the focus sweet spot.
So, if you’re looking for compact cinema prime lenses that don’t have to have a super fast aperture, with a vintage, slightly low contrast look, and very pretty bokeh, then the Standard Speeds are an often overlooked choice which are worth considering.
LEICA SUMMICRON-C
Moving from the past back to the present, let’s take a look at a modern set of spherical primes - the Leica Summicron-Cs.
In the same way that the Zeiss Standard Speeds are a slower aperture version of the Zeiss Super Speeds, the Summicron-Cs are a slower aperture version of the much more expensive Summilux-Cs. However, at T/2 they are still fast enough for most shoots.
They come in a set of 11 focal lengths from a 15mm up to a 135mm.
With a 36mm image circle the Summicrons cover an enlarged Super 35 size sensor, but won’t cover most full frame or large format cameras without vignetting.
One of the big advantages of the Summicrons is their build. For modern cinema lenses with a fast stop they are very compact and lightweight, but still have a long focus throw. This means they have lots of well spaced distance markings that 1st ACs can use to more accurately program focus distances into a handset and pull focus.
The 18mm to 100mm lenses are also all the same physical length, at exactly 101mm. This small, standardised sizing makes them perfect for rigs that need balancing, like drones, gimbals or Steadicams.
The images that these lenses create are pretty typical of sharp, modern cinema glass. They have authentic colour rendition, good contrast, even wide open, and render fine details with crisp, sharp edges. This look is consistent across the entire set.
Therefore these lenses are well suited to productions shooting on a Super 35 camera, that want a reliable, versatile set of standardised, compact lenses that produce a crisp, consistent, modern look with a fast aperture.
And for those that need all of the above but want to shoot in very low light conditions or are looking for just a tad shallower depth of field - the T/1.4 Summilux-Cs are also a viable choice.
FPV Drones Are Changing Filmmaking
How do FPV drones work and what makes them so special? Let’s find out.
INTRODUCTION
First, there were helicopters. Then there were drones. And now, there are FPV drones.
These little, flying, cinema cams are now able to pull off shots that could have only been dreamt of decades ago, or even a couple years ago. But how do these FPV drones work and what makes them so special? Let’s find out.
WHAT IS AN FPV DRONE?
One of the main differences between a regular drone built for filmmaking and an FPV drone is in the name. FPV stands for First-Person View. This means the drone provides a real-time video feed from its onboard camera to a display - usually in the form of goggles worn by the pilot.
This allows the pilot to see a front-on perspective of exactly what the drone sees, as if they were sitting inside the drone and flying it. This immersive experience provides a sense of flight and enables more precise control and navigation of the drone.
FPV drones were primarily designed for sport, where pilots race each other to see who can manoeuvre their drone most nimbly and quickly.
Because of this they are usually way smaller than the big production drones, like the Freefly Alta 8 or Alta X, which are designed to carry heavy payload cinema packages on a gimbal like a Movi Pro or DJI Ronin.
An FPV drone’s smaller frame and lighter weight make it far more agile and fast - able to fit through small gaps, all with enormous acceleration.
FPV DRONE SETUP
Because cinema FPV drones are a bit of a niche product, they are often custom built by their pilots. However, it’s also possible to buy some that come assembled straight out of the box.
Let’s break down what a cinema FPV drone may look like.
The frame is what the camera sits on and is made up of a body - that supports the camera and battery - and arms - that hold the motors that allow it to fly.
Usually frames are constructed from carbon fibre as it is a lightweight material that is also durable. When it comes to building a rig, the name of the game is to keep weight to a minimum.
An example of a frame that may be used for a cinema FPV drone setup is the Hevi frame from Shendrones. This is a coaxial octocopter, meaning it has four arms, each carrying a pair of stacked motors and rotors - eight in total.
An FPV camera is then attached to the front of this frame, along with a transmitter that sends the video signal via antennas from the FPV camera to a headset the pilot wears, such as DJI FPV goggles.
Inside this headset the pilot can also set frame lines that mimic what the cinema camera is recording, which are usually cropped in from the wider FPV camera view.
This FPV camera is built to be as lightweight as possible and is only used for monitoring and transmission purposes so that the pilot can see where they are flying. It’s not used for recording.
For the actual video recording you’ll need a cinema grade camera, which is as lightweight as possible. A Red Komodo is currently the most popular.
This is due to the tradeoff between its light, tiny body and its ability to shoot in 6K resolution - which gives room for image shake to be stabilised in post - its RAW codec with a high dynamic range, and its global shutter - which reduces rolling shutter artefacts, like skew, when filming objects moving at a high speed.
When using the Komodo like a crash cam, to shoot in dangerous situations where there’s a possibility that the camera may get damaged, it’s useful that it has no buffering in writing data to the card.
In other words, the camera is able to transfer the footage to the CFast card immediately as it shoots it, without a delay. So, even if the camera crashes and you are able to pull the card out, all the data should be preserved up to the moment that the camera crashed.
The Komodo will usually be paired with a wide angle, compact, lightweight cinema lens, such as a Laowa Zero-D Cine 9mm T/2.9 or a 15mm T/2.1. If shooting in bright conditions, an ND filter can be screwed into the thread of the cinema lens.
Because no focus motor is used, the focus distance on the lens will be set at a fixed distance, normally at, or just off of, infinity.
A LiPo battery will be strapped to the bottom of the rig. The longer you want to be able to fly the drone for, the bigger the battery has to be. So it comes down to a tradeoff between a lighter, more manoeuvrable drone and a heavier, less manoeuvrable one that flies for longer.
A 4,500mAh battery may give the pilot somewhere between three and five minutes of flight time, depending on how aggressively they fly it.
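As a very rough, hypothetical way to estimate that flight time - the average current draw here is assumed purely for illustration and swings wildly with how hard the drone is flown:

```python
# Back-of-the-envelope flight time: usable capacity divided by average current draw.

def flight_time_minutes(capacity_mah, avg_current_a, usable_fraction=0.8):
    """Usable fraction accounts for not draining a LiPo completely."""
    usable_ah = (capacity_mah / 1000.0) * usable_fraction
    return usable_ah / avg_current_a * 60.0

print(round(flight_time_minutes(4500, avg_current_a=60), 1))   # ~3.6 min at a hard 60 A average draw
print(round(flight_time_minutes(4500, avg_current_a=45), 1))   # ~4.8 min flown more gently
```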
HOW ARE FPV DRONES CHANGING FILMMAKING?
Aerial shots have two noticeable attributes, one, the perspective they provide, and two, the kind of camera movement they achieve.
Let’s start with perspective. Aerial shots were first done by shooting from helicopters, which provided filmmakers with a bird’s eye view perspective that could give context to a landscape and provide a sense of scope. For this reason, helicopter shots were often used as a way of establishing a location, before cutting into a scene at that location.
Because of the safety precautions necessary when flying a large vehicle, like a helicopter, this usually limited filmmakers to shooting these establishing shots as wides, far from any building or from the ground.
When drones started taking over, they usually adopted this same, wide, bird’s eye view perspective.
Even though some stunt helicopter pilots are incredibly skilled and can do amazing manoeuvres, their size is still a limiting factor.
FPV drones are different. Because of their much smaller size, and increased manoeuvrability, these drones can physically get much closer to objects or to the ground than ever before. Flying closer to objects with a wide angle lens at a high speed makes aerial establishing shots incredibly dynamic.
“You’d usually have wide interstitials of the location as you move from location to location. Ant and I are just bored by that convention and we wanted to do something more kinetic and more dynamic that would whip you from location to location and just keep the movie moving forward at a breakneck pace.” - Joe Russo, Director
One of these dynamic establishing shots even involved blending an FPV shot that wove through a hospital entrance with a Steadicam shot that started the scene on the actors, using a quick panning movement to hide the cut.
This kept the intensity of the pacing high, while at the same time clearly establishing the space that the scene is unfolding in.
This shot also shows off the second noticeable attribute of aerial footage - movement.
Although helicopters are able to pick up speed, and create dynamic footage, FPV drones are different in that they can make turns and accelerate far quicker, while also using their small size to fit through gaps that wouldn’t have been possible before.
Shooting with a wide lens, close to action has a very different feel to shooting action from far away on a telephoto lens. The physically closer you are to the action the more visceral, kinetic and immersive it is.
“Drone work is really boring in movies and I wanted to hip it up. I wanted the energy because this is a very intense movie and it just keeps that motion going. And I’m kinda known for how I move my cameras.” - Michael Bay, Director
So, being able to film aerial footage, in both interiors or exteriors, at high speeds, incredibly close to actors and objects, allows filmmakers to get highly dynamic footage, without using visual effects. This is all possible thanks to the combination of FPV drones and incredibly compact digital cinema cameras.
Although this technology unleashes incredible possibilities, it still has quite a few limitations. They mainly revolve around the weight issue.
To be as lightweight as possible, these little drones normally can’t use larger cine lenses, a mattebox, or 4x4 or 4x5.65 inch filters to control exposure or create a filter effect.
This also means that when shooting day exteriors, much of the exposure control will happen by stopping the lens down to a deep stop. However, a deep stop is also necessary anyway, since these drones usually can’t take the weight of a focus motor, nor afford the power to run one.
This means that wide lenses with a deep stop need to be used. Therefore shooting objects close with a shallow depth of field isn’t possible.
Filmmakers are also limited to very small flight times before they need to change the battery. This makes it impractical to roll for long takes.
Also, unlike remote heads on helicopters, or larger drones that use a gimbal system, the camera cannot be panned, tilted or rolled by an operator. Instead the pilot is the camera operator and needs to position the front of the drone precisely where they want to shoot.
Finally, there’s also the risk factor. Since a lot of these manoeuvres are incredibly fast it can be dangerous, not only for actors, but for the camera. In fact I remember hearing a story a few years ago about a shoot that was using the first Red Komodo in South Africa on an FPV drone for a commercial. The pilot was skimming it over the ocean at a high speed until, suddenly, it wasn’t just above the water, but under it. And that was the end of the first Komodo in the country.
CONCLUSION
Cinematographers often talk about film equipment, like lights, cameras and grip rigs as tools, which they can choose from to chisel each story into the correct shape. An FPV drone is another tool now available to filmmakers, which allows them to get action packed, dynamic aerial movement like never before.
How Cinematographers Use Hard Vs. Soft Light
In this video I’ll explain what hard and soft light is and how each can be used in different applications to impact the visual storytelling of a film. Then I’ll go over some film equipment and techniques that are commonly used to alter the quality of the light.
INTRODUCTION
Cinematography at its core is all about being able to manipulate and control lighting to tell a story. Light can be cool or warm, have a high output or a low output and, importantly, be either soft or hard.
In this video I’ll explain what hard and soft light is and how each can be used in different applications to impact the visual storytelling of a film. Then I’ll go over some film equipment and techniques that are commonly used to alter the quality of the light.
HARD VS SOFT LIGHT
Before we get into talking about how different qualities of light can be used and are created, we need to know what the difference is between hard and soft light.
Hard light refers to a type of illumination that creates distinct and sharp shadows with defined edges. In other words, the area where the shadow meets the light has a very small, if any, gradient.
Hard light is usually achieved by using a direct light source, such as a bare light bulb or direct sunlight, that casts a focused beam of light onto the subject.
Soft light on the other hand creates diffused and gentle shadows.
So, unlike hard light that has no gradient between where shadows and light meet, soft light has a gradual transition between light and dark areas. Soft light is achieved by using a more diffused or indirect light source, such as sunlight that passes through a sheer curtain, that spreads light evenly over the subject.
Although I’m talking about the quality of light in quite defined terms, it’s important to remember that the softness or hardness of light occurs on a spectrum.
Usually cinematographers will use a combination of both hard and soft light either in different scenarios or even within the same shot. How light is used is dependent on the context of the location and story that is being told.
HOW IS SOFT LIGHT USED?
Generally, it’s more difficult for cinematographers to create softer light - which we’ll get to a bit later. So what are some reasons and situations why they might want to do that?
Well, firstly, using softer light is seen as a more flattering way of lighting actors. Because of its gentler gradient of shadows and decreased harshness, it can help to smooth out skin texture and highlights in a more subtle and appealing way.
If you compare this to a harder light source, you’ll see that harsh shadows and strong direct light accentuates textures, wrinkles and skin imperfections, whereas softer light smooths out faces in a gentler way.
For this reason, soft light has historically been used for beauty commercials as a way to smooth out the skin of the on screen talent. Back in the days of the Golden Age of Hollywood, it was also common to use soft, flattering light on the lead actress, while using a harder light with a more rugged feel for the lead actor.
Another, more modern reason that cinematographers use soft light now is because it can feel more organic, natural and realistic. Of course natural sunlight is not always soft. If you’re shooting outside, in direct sun, the source will be very hard.
However, if you turn off all the lights in a room, and observe how the natural light from the sun hits a space, there’s a good chance the light will be soft.
This is because, unless it comes directly through a window, sunlight usually bounces and reflects off different surfaces within a room - and, as we’ll find later, bounced light is soft.
Also, if there are clearly defined shadows, such as on a wall, it is clear that artificial light, such as a placed film light, is being used and it’s easy to tell which direction it’s coming from. If instead the shadows are more gentle, it may give a feeling that the light is natural and coming from a motivated source other than a film light.
Finally, soft light also creates a tonal atmosphere or a feeling. Because it’s literally softer and lighter, it can be used to create a sense of calmness, lightness or tranquillity in scenes that have a more peaceful, comedic, romantic or introspective tone.
HOW IS HARD LIGHT USED?
Hard light can be used to create a more dramatic and higher contrast look. Because it creates less of a gradient in shadows, hard sources of light can be good for stories that need to be played out in lower key lighting with lots of shadow and contrast.
This can be particularly effective in creating a sense of tension or danger in scenes with a lot of action or suspense, or to make characters seem more imposing or dangerous. As we mentioned, it is less flattering on the skin, so can be used to portray characters as more rugged and less perfect.
In an exterior, hard light can also be used to accentuate the strength of the natural elements, by making the sun’s rays feel hot and strong.
A harder backlight can also be placed behind characters in a scene to separate them from the background so that they stand out.
Because it is easier to create contrast with hard light it’s also a good tool for cinematographers that like to illuminate different pockets of light and shadow within the same frame.
These pockets of shadow can be used either to disguise details or the identity of a character within a shot, or be used to create a more stylised visual interest.
Hard light even has associations with certain genres, such as film noir, which has extensively used it to elevate stories from realism and evoke a darker, ominous mood by creating carefully shaped, textured shadows - such as the classic office blind shadow.
EQUIPMENT FOR SOFT LIGHT
There are two main techniques that can be used to transform hard light to soft light: diffusion and bounce.
To diffuse light, a type of translucent material needs to be placed between the light source and the subject the light will hit.
There are many different types of diffusion, but perhaps the most basic are diffusion gels - which come in rolls of white polyester plastic that can be cut into a size that can be directly clipped onto the light or a diffusion frame in front of the light.
A popular diffusion gel is a 216 White Diffusion from Lee Filters. This greatly spreads and softens the light source, but also takes away a stop and a half of light. The same is true of any diffusion. While softening the light source it also reduces the amount of light that hits the subject. The more that you soften light the dimmer the light will be.
Therefore, DPs that want very soft light may need to use high output fixtures in order to have a bright enough light to expose properly.
There are many other types of diffusion gels available which spread and soften the light in different ways, such as a Frost, Brushed Silk, Opal or Hampshire Frost. Some gels like a white diffusion gel also come in half, quarter and eighth strengths if DPs need diffusion with less softness that lets through more light.
Another type of diffusion is to use a textile rather than a plastic gel. This is a cloth material that can be tied onto frames of various sizes such as 8x8, 12x12 or 20x20.
In my experience there are three main fabrics which are used for diffusion. Silk is a very sheer, natural diffusion fabric and comes in different strength gradients. It offers a light to medium diffusion of light.
Grid cloth is a popular, waterproof fabric that comes in various strengths, such as 1/2, 1 or 2. Each number refers to how many stops of light it will reduce, so a grid cloth 1 will reduce the strength of your light by 50%.
Finally, there is muslin. This is a much thicker textile which comes in either white, called bleached muslin, or a warmer cream, called unbleached muslin. Because of its thickness it takes away a lot of light when used as diffusion, which means it requires very strong, high output film lights to be used.
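If you want to turn those ‘stops of loss’ into actual fractions of light, the maths is simple: each stop halves the light. Here’s a quick sketch using the figures mentioned above:

```python
# Each stop halves the light, so the transmitted fraction is 1 / 2^stops.
# Stop-loss figures are the ones quoted above; real-world loss varies with the
# exact gel or fabric and how it is rigged.

def transmitted_fraction(stops_lost):
    return 1.0 / (2 ** stops_lost)

print(round(transmitted_fraction(1.5), 2))   # ~0.35 -> roughly a third of the light gets through a 216
print(round(transmitted_fraction(1.0), 2))   # 0.5   -> a grid cloth 1 passes half the light
print(round(transmitted_fraction(2.0), 2))   # 0.25  -> two stops of loss leaves a quarter
```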
Other equipment, such as a softbox, uses this same idea of placing a diffusion substance in front of the light, but attaches it directly onto the front of the fixture.
Diffusion can be used in front of artificial fixtures, but it can also be placed between a subject and the biggest light source there is - the sun. Typically this will be done with one of the textiles mentioned on a metal frame such as a 12x12.
The other way to soften a light source is to bounce it. A light will be placed so that it faces away from the subject, but towards a surface that is used to bounce that light onto the subject. Like with diffusion, bouncing light can greatly diminish the strength of the source - so you need a higher output light to compensate. The further a bounce is from the subject the weaker the light will be, but the softer it will be.
Again, light is usually bounced off a textile, such as a muslin. However other surfaces, such as a poly board or even the walls inside a house could be used as a bounce. However, remember that whatever colour the bounce is will affect the colour of the bounced light.
Finally, a book light takes this idea to the extreme, by first bouncing a light and then diffusing the light that comes from the bounce. This makes it super soft, but also means that very high output lights are needed to compensate for the large amount of light that is lost.
EQUIPMENT FOR HARD LIGHT
Creating hard light is much simpler. The easiest way to get hard light is simply by positioning a direct source of light straight at the subject without diffusing or bouncing it. This could be direct sunlight or a direct film light. The stronger the source of light is, the harder and more contrasty it will appear.
Also, minimising the amount of light sources is a good way of creating hard, high contrast light with more shadows. The more light sources that are added to a space, the more the light will get evened out.
A cheaper way of getting strong, hard light, without putting up big film lights, is to use a mirror. Mirrors can be used to directly angle a hard beam of light at a subject, without diffusing that light.
You can also position practical sources which you see in the shot, such as a bare lightbulb. As long as these don’t have diffusion they will create a hard source of light.
Editing Transitions Every Filmmaker Should Know
In this video I’ll go over a few types of edits, so that whether you’re a cinematographer, a director, an editor or an audience member, you’ll have a better idea of how and why films get assembled the way they do.
INTRODUCTION
If shooting a movie is like growing produce and gathering a bunch of different ingredients together, then editing is where those ingredients are organised, selected and prepared in a specific way to create a meal.
From simple cuts, to graphic matches, to whatever is happening here. But we’ll get to that later.
There are many different editing techniques and types of transitions that can be used to put stories together.
In this video I’ll go over a few types of edits, so that whether you’re a cinematographer, a director, an editor or an audience member, you’ll have a better idea of how and why films get assembled the way they do.
CUT
Let’s start by looking at a basic cut and then examining some of the creative ways it can be used.
A cut is simply the act of taking a portion of a video clip and joining it with another clip using editing software, resulting in a seamless transition from one shot to another.
Cuts are often used to create a sense of continuity or, alternatively, to establish a clear change in time or location. The speed at which a series of cuts happens can also unconsciously affect the way that the audience interprets the film: whether it feels more hurried and panicked, or more slow and realistic.
In a dialogue scene, cuts can be used to switch between the person speaking and the person listening, maintaining the flow of the conversation. This is known as a shot reverse shot.
SHOT REVERSE SHOT
Often, cinematographers shoot these shots from the same side of an invisible line - following what’s called the 180 degree rule - and use framing to balance each shot with its reverse shot.
So if character A appears on the left of frame for a close up and character B’s shoulder is on the right of frame, this will get switched for the reverse, where character B appears on the right of frame and character A’s shoulder is on the left of frame.
SHOT SIZES
Editors can also cut in or out of different sized shots. For example from a medium shot to a medium long shot. Cutting from tight to wide could be used as a way of giving more context or information to the audience. For example, here we cut wider as the character’s attention drifts to the bag on her lap.
Cutting from wide to tight focuses the information that the audience is given. For example, cutting from a wide to a tight close up shows which character is most important and allows us to more clearly read their reaction or how they are processing a moment.
INSERT
A variation of this cut is called an insert. This cut shows a close-up of a specific detail or object within a scene. It is usually a brief shot that is inserted into the sequence to provide a visual emphasis or to provide additional information to the audience. It’s a way for the editor to say, “this thing is important.”
All of the shot transitions we’ve discussed so far have been continuous, meaning they portray the scene as unfolding in real time, and try to uphold the suspension of disbelief.
JUMP CUT
The jump cut is a way of breaking continuity in a jarring and abrupt way.
Jump cuts are created by removing a portion of the footage from the middle of a shot, causing the subject to appear to "jump" from one position to another within the same sized frame. This effect can be used to create a feeling of unease or tension, as it disrupts the viewer's sense of time and continuity.
However, in other contexts, such as vlogs, where this cut is made as a practical way of quickly removing mistakes or filler content, the jump cut is more whimsical in tone and conveys the passing of time.
CROSS CUTTING
To amp up tension and create a feeling of suspense, some filmmakers may choose to alternate between shots that take place in two or more different locations - that would otherwise be separate scenes unto themselves.
This is called cross cutting. It is a way of showing different actions or events happening simultaneously or in close proximity to each other. A traditional example of this would be cutting between the perspectives of different characters in different locations during action sequences to provide a contrast between overlapping scenes while simultaneously injecting a faster energy into the cut.
Anticipation would be reduced if each scene was allowed to play out in full, before cutting to the next scene.
Or a more unconventional example is how it is used here to intercut between scenes with the same characters, happening at the same time, in different parallel universes.
MATCH ON ACTION
Not only is this transition an example of cross cutting, but it’s also what we call a match on action or graphic match cut.
This is used to create a visual connection between two shots that share a visual similarity in shape, colour, texture, or movement.
In this case, it cuts from a close up, to another close up, where the actor is in the same central position in the frame and occupies the same space. This is a technique that The Daniels did often in this movie.
These shots can also be a bit subtler and based more on a thematic connection or on movement. A famous example is in 2001: A Space Odyssey where we start on the thematic height of technology in a pre-human world - a bone used as a club - and cut forward thousands of years to a similarly shaped spaceship - the new height of technology.
But in order to make this kind of cut in the edit, it usually needs to be planned for and shot during production. There are a few ways of doing this. You could use a screengrab from the first shot and use that to approximately line up the actor in the same position for the second shot.
Or, you can use what is called mix and overlay. This is performed by the VT operator on the set using software such as QTake. They take a freeze frame from the first shot, reduce its opacity, and overlay it on top of a live image of what the camera is seeing on a monitor. The camera can then be lined up until it matches the framing of the first shot.
If you don’t have a VT operator and need to do an exact match, the old school way of doing it is to take a see through sheet of plastic, overlay it on a monitor and trace the outline of whatever needs to be matched with a pen. Then when you need to shoot the second shot, you can take this stencil and match the frame to the traced outline.
CONTRAST CUT
So far we’ve only talked about the visual side of making editing transitions, however sound can also play a major role in edits.
A contrast cut can be created by placing shots which are noticeably different next to each other, whether through differences in colour, tone, texture, shape, movement, or sound.
Here, a series of cuts get louder and louder until a contrast cut is made to a far quieter shot. This supports the moment that the character goes from a constantly growing agitation and non-stop frenetic life, to a breakdown moment of reflection.
J & L-CUTS
Another way that sound can be used in transitions is by extending it, so that it overlaps the two shots.
When the audio from the next shot is heard before the video actually cuts to that shot, it’s called a J-cut, which forms a J shape on the timeline. In this example from Apocalypse Now, the main character starts to hear the building sound of a helicopter starting up before we cut to it.
The opposite is called an L-cut, where the sound from the first shot extends into the next shot. Here, the choppy sound of helicopter blades bleeds into shots of a soldier alone in a room.
Throughout this movie, scenes and shots bleed into each other, blending time and the events together in a dissociative way.
DISSOLVE
Rather than using normal cuts, the editor often used a series of dissolves to enhance this feeling.
This is where one shot gradually fades out while the next shot gradually fades in, overlapping briefly in the process. This creates a blending effect where the two shots appear to merge or dissolve into each other.
Dissolves are often used to indicate the passage of time, to suggest a change in location or mood, or to create a dream-like or nostalgic effect.
This sequence dissolves three different shots over each other: a shot of helicopters, a shot of a soldier and a shot of a fan. The fan and the helicopter blades also provide a subtle graphic match. The effect is a dreamy one, that, at the same time, hints that these past memories of combat haunt the soldier.
Dissolves can also be used more sequentially, one after each other to convey a passing of time.
FADE TO BLACK
When a shot is dissolved against a black screen, this is called a fade or a fade to black. This effect is often used to indicate the end of a scene, to create a sense of closure, or to suggest the passage of time - kind of like the end of a chapter in a book.
In this case a fade to black is used to end a turning point in the screenplay. When the image cuts back in from a black screen, it is clear that a portion of time has passed and we are viewing the final epilogue.
WIPE
Occasionally filmmakers will use a more stylised transition, like a wipe.
This is where one shot is replaced by another shot in a distinctive manner that involves a visual element "wiping" across the screen, gradually revealing the new shot. The visual element can be a simple line, a geometric shape, or any other kind of graphic element.
If we break this transition down frame by frame we see that a wipe is combined with a dissolve and a graphic match to magically transport a character from one location to another. If we play it back at full speed it creates the illusion that this happens in one shot.
HIDDEN CUT
This is a type of hidden cut, where the transition happens on a moment of movement to disguise that an edit took place.
If we go all the way back to the example from the beginning of the video and break it down - we’ll see another hidden cut, or in fact six hidden cuts.
Each cut happens on a moment of fast movement which is a continuation of the movement from the prior shot. When these clips are organised and played back at full speed each cut is magically disguised.
Cinematography Style: Bill Pope
In this episode of cinematography style I’ll go over how Bill Pope broke into the world of film, look at his philosophy and ideas on cinematography and go over some of the gear and equipment that he uses to execute his work.
INTRODUCTION
After starting out by shooting music videos in the 80s, Bill Pope progressed into being a highly experienced feature film DP responsible for photographing many high budget Hollywood movies.
At the core of his work, he focuses on collaboration and on using the structure that directors provide to fill in the story’s tone visually.
In this episode of cinematography style I’ll go over how Bill Pope broke into the world of film, look at his philosophy and ideas on cinematography and go over some of the gear and equipment that he uses to execute his work.
BACKGROUND
After finishing graduate school at NYU, Bill Pope didn’t stay in New York for long before he decided to make the move out to California, where there was far more work available in the film industry.
“I just kept hammering at it. I’d just shoot for free. I went over to AFI and volunteered to shoot movies for free and I did. I shot many. You fall in with a group of people and I would always just shoot for free. And I worked for free for years. During the day I would be a PA and then I’d take time off to shoot whatever was necessary.”
While working as a camera assistant on a Roger Corman movie, he even used the camera and spare short ends - the unused film stock left on a reel - to shoot a short film in his garage on weekends.
One of the networks he built during this time was with a friend whose production company made music videos with budgets between $2,000 and $5,000. Later, when MTV kicked off and they were looking for DPs with music videos on their reel, Pope’s name came up.
During this time he spent about six to eight years shooting music videos for various well known artists, before director Sam Raimi persuaded Paramount to let Pope shoot his first feature: the superhero movie Darkman.
From there he became a regular features DP, who over the years worked for many other well known directors such as The Wachowskis, Edgar Wright, Jon Favreau and Joe Cornish.
PHILOSOPHY
“I would say to a young cinematographer the most important thing you have is relationships. And the earlier you form them the better you’re going to be. You can have all the talent in the world but if you don’t know anybody you’re not going to go anywhere.”
Not only does building relationships with directors have the most obvious and most important effect of allowing you to get work, but it also has other downstream effects. Working for a range of different directors exposes cinematographers to many different situations and many different ways of working. It also allows you to pick up on various strengths and weaknesses that directors may have, which will strengthen your skillset as a DP.
“We don’t make the framework. We bring the atmosphere. We bring the thread. We bring the tinsel but they bring the tree.”
Directors will first approach Pope with a meeting to talk about a project or by sending a screenplay, and then a bit later by sending through a shot list or a storyboard.
Although he has compiled shot lists for certain directors and certain projects, he usually prefers to be a sounding board for the ideas, shots or storyboards that the directors put forth, and then gives his unfiltered feedback to them.
“You’re just bouncing things back and forth. In a way, I’m the first person to see the movie. So, I’m the first audience member and I just say to directors, “When I start, look I can be really annoying, because I’m going to say what is in my head and I’m not going to filter it too much. I’m gonna be direct feedback to you because I feel like that’s my job.”
This may involve him proposing anything visual, such as an alternative shot, tone, angle, lens, costume palette or art direction to the directors that he feels better conveys the story. These ideas can either be rejected, accepted, or through collaboration, morphed into a new idea.
This process happens both during prep, as well as on the day of shooting, where he needs to be adaptive and quick on his feet to make loads of quick decisions about how to best tell the story.
This is especially necessary when working for a director such as Edgar Wright who likes to cut quickly to a variety of angles - which makes the shooting schedule very tight.
Making good decisions, but also making them quickly is an important quality for a DP to have.
Using his analogy, the director brings the script and structure, or tree, and then they both decide what ornaments, like the tone, shots, or palette, they can add to it, to better accentuate it.
GEAR
Since he started his career in the 80s, long before the first viable digital cinema cameras were released around the early 2010s, much of what he has shot was done using film. Although his recent projects have mainly been shot digitally on versions of the Alexa, he does occasionally still get opportunities to work photochemically.
“It’s just a discipline. You know the mag is going to last 10 minutes long. The slate comes in fast. Everyone is on their toes. It becomes sort of flaccid in the digital world. They’re on their phone, they’re like whatever. On film it’s a complete old fashioned discipline. It’s gotta be in a can. It’s gotta be split up. It’s gotta go to the lab. It’s gotta be returned. Everything’s got a deadline to it and a precision to it that people step up to.”
On top of film changing the set dynamics, he also values it for its look - which renders images with an elevated realism. On Baby Driver he made the decision to shoot the vast majority of the movie on a single film stock, the more sensitive Kodak 500T. He did this in order to preserve a similar continuity of grain across the entire movie. The more sensitive a film stock is to light the more prominent the grain will be - so a 50D stock will have finer grain than a 500T stock.
However, he did occasionally use the less sensitive Kodak 250D for certain car mount shots where there wasn’t enough room to add a mattebox to the front of the lens - which, with a 500 speed stock, would have been needed to hold ND filters to cut down on light.
“Ordinarily I was shooting it all with 5219. Just so that I have enough speed to deal with polarizers and overcranking.”
Polariser filters are pieces of glass that let through light waves oriented in one direction while blocking others. Practically, this filter can be used either to introduce more contrast to darker tones, most commonly in the sky, or to reduce or manage reflections - for example cutting down the reflections seen when shooting through a car window.
However, this filter also reduces the amount of light let into the camera by around one stop. Therefore, Pope chose a faster film stock that was more sensitive to light to counteract this.
He also mentions ‘overcranking’. This means shooting at a frame rate which is faster than the regular 24 frames per second most commonly used. If the frame rate is doubled to 48 frames per second, it creates slow motion, but it also reduces exposure by 1 stop.
So, again, choosing a more sensitive stock helped counteract that.
This overcranking was taken to the extreme on The Matrix, which was also shot using sensitive 500T for interiors. Because many of the fight scenes had super slow motion shots in them, Pope had to light these sets so that the camera could shoot at up to 300 frames per second. This was done on a high speed Photosonics film camera.
300 frames per second requires about 4 stops more light to shoot at than regular 24 frames per second. This means that to get to his shooting stop on the lens of T/2.8 he had to light spaces at least 4 stops brighter, at T/11.
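To put rough numbers on that relationship, here is a minimal sketch of the arithmetic - my own illustration, not anything from the production - assuming a fixed shutter angle so that each frame’s exposure time scales inversely with the frame rate:

```python
import math

def stops_lost(frame_rate: float, base_rate: float = 24.0) -> float:
    """Exposure lost (in stops) when overcranking, assuming a fixed shutter angle.

    Doubling the frame rate halves each frame's exposure time,
    which costs one stop of light.
    """
    return math.log2(frame_rate / base_rate)

print(round(stops_lost(48), 2))   # 1.0  - the one stop mentioned for 48 fps
print(round(stops_lost(300), 2))  # 3.64 - roughly the "4 stops" quoted for 300 fps
```

Counting those four stops up the aperture scale from T/2.8 (2.8 - 4 - 5.6 - 8 - 11) is how the sets end up needing to be lit to around T/11.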
This is fine when shooting exteriors lit by strong sunshine, but is far more difficult when shooting the many interior sets or night scenes - which required many, high output film lights.
For the lobby scene which was shot with high frame rates, the idea was to add an ambient top light to the space and then key the scene with a side light. This sounds simple in principle, but getting to this super bright level of light, required doing this on a massive scale.
His team rigged 1,000 tungsten par cans to the roof for ambience and about nine high output tungsten dino lights that side lit the scene through 12x12 grid cloths that softened the light.
It also meant that any practical lights that could be seen in the shot had to be custom built using pyrex, and specially fireproofed, so that the extremely high output, hot sources of light didn’t start any fires.
While most shots in the Matrix were achieved on practical sets, there was also some visual trickery, such as the iconic bullet dodge shot. Rather than trying to physically move the camera around the actor, extremely quickly and shoot in extreme slow motion, Pope’s team instead created a rig that mounted many stills cameras placed around the actor in a green screen studio.
They could then use an image from each camera sequentially, so that it gave the illusion that the camera was moving. This shot of the actor was then cleaned up and dropped into an environment, to achieve the famous circular slow motion shot.
One of the metaphorical ornaments that was hung on The Wachowskis’ metaphorical tree was how they used colour. The real world in the film was devoid of sunlight, so it was always lit to a cool, bluish colour temperature, while the artificial, computer generated world of the Matrix had a sickly green tint to it, inspired by the colour of the cursors of the time.
When working on these larger action based movies that require getting a lot of different shots, he’ll often shoot with multiple cameras to maximise the coverage he can get.
This means hiring crew, such as camera operators and dolly grips, who will technically collaborate with him to frame the images and influence the camera movement.
“Any director likes to have other filmmakers around them. You don’t want a piece of meat who isn’t going to give you an opinion. You want people who are smart and who are contributing all the time. I mean, I always consider operators and dolly grips to be part actor. They have to be able to interact with the actors themselves and feel the drama because the actors are never going to do it twice the same way. You have to feel it. You have to be one with them.”
The movies he shoots often use smooth dolly movement to tell stories by doing things like pulling out from an important object to reveal context, or very slowly pushing in with the camera to elevate an emotional scene or experience the character is undergoing.
Each camera move is another little ornament for the tree.
He’s used both anamorphic and spherical lenses, but does have a tendency of shooting grander, action stories in the anamorphic format, often on Panavision anamorphics, such as the G-series, T-Series or older C-series lenses.
When deciding where to place multiple cameras, it’s useful to have a viewfinder or pentafinder to more accurately find the frame before placing the heavy cinema cameras in place or laying dolly tracks.
There are a few photos of him using what looks to be an interesting viewfinder from Kish optics, which has a video tap inside, which can send an image to a little monitor. This makes it easy for both himself and the director to evaluate and find the right frame.
CONCLUSION
The diversity of the films that he has shot makes it tricky to pin Pope down to only one particular photographic style. Rather, he works in a close collaborative relationship with each director in a different way, acting as a sounding board for their ideas and providing his own perspective on how best to elevate and execute each story visually, using his deep experience and knowledge of cinematography.
How James Cameron Shoots A Film At 3 Budget Levels
In this episode I’ll look at three movies shot at three increasing budget levels by James Cameron - Piranha II, The Terminator and Avatar: The Way Of Water - to uncover the similarities between them and how he built his career and style as a director.
INTRODUCTION
There are few directors who are as well known by mainstream audiences as James Cameron. After all, he’s directed some of the biggest Hollywood blockbusters of all time.
In this episode I’ll look at three movies shot at three increasing budget levels by James Cameron - Piranha II, The Terminator and Avatar: The Way Of Water - to uncover the similarities between them and how he built his career and style as a director.
PIRANHA II - $600,000
“I started working in special effects and I did a little art direction. In fact I got my first directing gig which was Piranha II as a result of Galaxy of Terror. I was doing second unit directing on that film. And they needed a cutaway to a dismembered arm lying on the ground with worms crawling around it. So I have them put a piece of zip-cord, strip it down and lay it underneath the arm and we make it wet so that it’ll conduct and there’s an electrician around behind the set. And the only cue I could think of was action - to tell him to throw the switch.
What I didn’t know was that these two producers who were looking for a director for Piranha II had come in behind me and they were looking over the setup. And I said, ‘Alright, roll camera’ and they rolled, ‘And action!’ and the electrician behind plugged in and the worms all started to move I said ‘that’s good that’s a cut’. He unplugs. They didn’t see him. Worms stop moving. I turn to them and they say, ‘Come with us we’d like to talk to you about a film we’re making.’”
As the title suggests, Piranha II was the sequel to Piranha, which was basically a B-movie rip off of Jaws that replaced the shark with genetically modified flying Piranhas that could somehow survive out of water. The low budget sequel was taken on by an Italian producer, Ovidio G. Assonitis, known for making B-horror movies.
James Cameron, who had built up a career as a production designer and special effects director, was initially brought onto the project to do the special effects, but after the first director on the project was quickly fired, due to a creative disagreement with Assonitis, Cameron was promoted to director of the movie.
Filming of the exteriors took place in Jamaica, with some interior scenes filmed in a soundstage in Rome.
Due to the low budget of $600,000, and an allegedly controlling Italian producer, almost the whole crew was made up of Italians, including the cinematographer, whose name was spelled incorrectly in the credits. This quickly made communication difficult for Cameron, as apparently most of the crew could not understand English.
As the story goes, Cameron was eventually fired from the shoot as director due to constant interference from and disagreements with the producer. After firing two different directors, the producer finished directing the rest of the B-movie himself. However, as Assonitis felt that the movie needed an anglicised director name in order to sell, he kept Cameron’s name in the credits.
Cameron was also allegedly not allowed to view rushes of the footage he shot, and was kept out of the editing process. Although he would later buy back the footage from the distributor so that he could recut, re-score and release the originally intended version for home video.
Although Piranha II will probably not go down in history as a cinematic masterpiece, the low budget horror movie still contains a few directorial stylistic threads that would continue throughout his career.
One, his fascination with the ocean and setting films in or around water.
Two, telling stories in a way that draws the audience into the magical world of cinema where realism is replaced by spectacle and plausibility is replaced with a sprinkle of cinematic magic.
Three, his fascination with using special effects and pushing the boundaries of whatever technology he has at his disposal.
And, finally, presenting larger than life stories with lots of action and set pieces, which are told through the eyes of a focused cast of main characters.
THE TERMINATOR - $6 Million
In the wake of the release of Piranha II, James Cameron fell ill and had a fever dream one night about a metallic, robotic torso. This sparked the idea for his next script, a sci-fi slasher about a cyborg assassin sent back in time to kill someone. He called it The Terminator.
“The first film that I directed that I got fired off of was called Piranha II and I think it was about $600,000. And the second film was The Terminator and that was $6 million.”
Although it was a substantial jump from his first low budget flick, $6 million was still considered a fairly limited budget to tell the kind of action heavy, ambitious sci-fi movie he had written.
“It’s not really about machines from the future. It’s sort of about our relationship with technology. But it’s really kind of about our human potential for dehumanising ourselves. You see cops as examples. And then, you know, the other idea not to trust technology and even not to trust the fabric of reality.”
Cameron employed Adam Greenberg as the cinematographer on the project, who worked with a relatively small camera and lighting package - without any expensive, elaborate gear setups.
Terminator was shot on an Eastman 250T film stock with an Arri 35 BL for dialogue scenes and an Arriflex 35 III for MOS, action or insert shots that didn’t need synchronised sound.
Since the script featured lots of night scenes, Greenberg chose a set of spherical prime lenses with a fast aperture that could let in lots of light - the Zeiss Super Speeds. Also, based on the classic 80s halation that you can see around the highlights, it’s likely that he used a diffusion filter, such as a Tiffen Low Con.
When it comes to action, Cameron edits it tighter and more coherently by creating little pockets of quick cuts that group different shots of the same action or idea together, rather than just cutting back and forth between various unrelated shots.
“I think it’s a very analytical process. I remember literally having a discussion with the editor and saying, ‘Look, in this battle sequence every single time we’ve improved the battle we’ve done it putting like with like, by putting the attack with the attack, the retreat with the retreat and grouping things that way.”
The cars in the chase sequences were never allowed to drive faster than 40 miles per hour. So, Greenberg helped to create a faster sense of motion with light. He placed film lights with operators on moving vehicles next to the shooting vehicle. The operators could quickly pan and move the source around to create lighting and reflections that gave the illusion that the streetlights were passing by quicker than they actually were.
Another analytical way of creating a menacing tone for The Terminator was to often shoot the Terminator from a low angle, while shooting Sarah Connor at a more comfortable, familiar, neutral angle.
Overall he also used a large amount of backlight - placing hard light sources behind the actors in almost every night scene. Generally using softer sources for Sarah Connor and harder backlight for other characters. And don’t forget about the 80s hair light. This created a more silhouette-y shadow-y look which both helped to hide some of the visual effects and created the moody noir feel that they were after.
Another way of masking special effects was through a combination of clever angles and editing. Cameron’s philosophy was if you supply the audience with a shot A and a shot B, the audience will quickly fill in shot C using their own imagination. A good example of this is in the famous eye extraction scene. By showing shot A, a profile shot of the blade lifted up to an eye, and shot B, blood dripping into water, the audience fills in the gruesome shot C, the cyborg cutting out its own eye, by themselves.
AVATAR: THE WAY OF WATER - $400 Million
After the huge financial success of Avatar in 2009, a sci-fi epic which pushed the technological boundaries of motion capture and CGI, Cameron promised a series of sequels.
It took over a decade of development and production to create the first in a series of follow ups - which were all filmed simultaneously.
Avatar combines live action capture, as well as motion capture that is later used to create computer generated characters, in computer generated environments. As I mentioned before, a trademark of Cameron’s work has been how he pushes the technical limits of gear and technology. He has shown an interest in visual effects his whole career, so much so that he started his own VFX company, Digital Domain, in the 1990s.
Since Avatar uses an unusual process to make the movie, let’s break down the full workflow of how it was prepped, shot and then finished in post.
Cameron selected his old cinematographer Russell Carpenter, who worked on Titanic, to shoot the film. The photographic process started in 2018, when Carpenter oversaw the virtual lighting of the CGI sequences using a programme called Gazebo, developed by the visual effects house Weta FX. He pushed for lighting with different colour nuances for different virtual spaces. This virtual lighting was also valuable as he would have to match it in the next step of the process: shooting live action scenes with real human characters.
These scenes were filmed in 3-D, using a 3-D rig designed by one of Cameron’s companies, with two Sony Venice cameras and Fujinon zoom lenses. They had to frame for two different aspect ratios during shooting. The taller 1.85:1 was used for 3-D presentation, while the wider 2.39:1 ratio was used for regular 2-D screenings.
They also made the unusual decision to shoot at 48 frames per second, instead of the more regular 24, which decreased the amount of motion blur. To compensate for the one stop loss of light that comes from shooting at a higher frame rate, they shot using the Venice’s higher ISO base of 2,500.
Once the live action portions were done they turned to capturing the virtual performances using motion capture technology.
“So, it’s a set without a set. Your coral reef is going to be some pieces of tubing which are screwed together to create contact points for the actors. So if they’re pulling themself through coral we don’t build the coral. We build that which they need to touch and then we warp reality to fit their hand contacts later.”
Motion capture doesn’t really use cameras in the traditional sense. Instead, actors are put in skintight suits that have reference markers or sensors on them. A few cameras, or sensors are then placed around them that are able to capture how the actor moves in a 3-D space.
This data is then later used in post production to give a digital character the same movement as the actor on set.
For the underwater sequences with digital characters, Cameron played out scenes with his actors in mocap suits in a custom built pool in a studio. Actually shooting underwater gave a more realistic sense of motion but came with some challenges. They had to construct underwater housings for the mocap cameras, and shoot at a wavelength of light near ultraviolet which would better capture motion through water. The pool was lit by rigging 60 Skypanel S-60s over the water. They then placed little plastic beads on the water’s surface. This both diffused the light and minimised capturing reflections underwater - which would cause false data.
After all the live action and motion capture footage had been acquired they could then plug the footage back into the pre-designed lighting and virtual environments. From there an intensive post production team was used to build and finish the characters, movements and environments in the film.
How Cinema Cameras On Movies Are Built
This video is a template for how to put all the pieces of a cinema camera together, as well as a guide to what camera accessories are commonly used and rented on industry level jobs.
INTRODUCTION
In one of the early videos on my channel I created a guide for how cinema cameras are built and all the accessories that are usually attached to them on an industry level production.
Since that video has been removed, due to copyright issues, I thought I’d update that video and create a new guide that can act as a template for how to put all the pieces of a camera together, as well as a guide to what camera accessories are commonly used and rented on industry level jobs.
Before we begin, just a reminder that camera technology and gear gets updated often, and although the gear shown in this video may soon be outdated, the types of accessories shown are pretty standardised across the industry and should at least remain relevant in concept in the future.
BASE PLATE
Before we begin adding accessories, we first need to be able to stabilise the camera so that it can be worked on. This means mounting the bottom of the camera to a base plate which can be fitted to a tripod head.
For a standard studio build, the base plate gets screwed into the bottom of a sliding dovetail plate. To attach the camera to the dovetail requires a bridge plate, such as an Arri BP-8.
Bridge plates are designed with two holes that can accept bars or rods which come in either 19mm or 15mm sizes. These rods are used to mount other accessories which we’ll get to later.
The release on the side of the bridge plate is moved to lock and unlock the plate from sliding. Once unlocked, the camera is able to slide into the dovetail base. Sliding the camera is also a quick way of balancing the weight of the camera on the tripod head so that it is not too front heavy or too back heavy.
Base plates and bridge plates on industry level cinema cameras are standardised to the same design and size so that the different cameras, from an Alexa to a Venice to a Red, can be securely mounted onto other rigs, such as a car mount, or a stabilised head, without having to change the mounting accessories on the camera.
LENS
Next, we can put on a lens. The most common cinema mount found at the back of the lens is a PL, or positive lock, mount, although there are also PV mount lenses offered by Panavision, as well as large format LPL mounts. The principle for attaching a lens to the camera is the same.
For a camera to accept a lens the mount at the back of the lens needs to be compatible with the mount on the camera.
Line up the locating pin, found at 45 degrees, with the lens, and gently insert it into the mount. Once the lens is in, turn the PL release clockwise to lock the lens in place. To remove a lens you have to hold it in place and then turn the release counter-clockwise until the lens is loose and can be taken out.
MATTEBOX
To the front of the lens we’ll attach a mattebox, which has two main purposes: to flag light from hitting the lens, which reduces flares, and to hold filters.
There are two types of matteboxes: clip on matteboxes, which are usually smaller, take 4” or 4x5.6” filters and can be clamped straight onto the front of a lens, and rail mounted matteboxes, which are usually larger, take 6” filters, and need to be slid on and clamped to 19mm, or sometimes 15mm, rods.
For clip on matteboxes it’s important to make sure that the back of the mattebox is the same size as the front diameter of the lens that you will attach it to. For example, if you’re using Zeiss Super Speeds with an 80mm front diameter, then your mattebox back must also be 80mm wide.
To attach a matte box we loosen the release, push it onto the lens and then tighten or clamp the release until it holds securely onto the front of the barrel.
POWER
Before we go any further we’ll need power for both the camera and any accessories that need to be run. Cinema cameras typically have two methods of powering the camera, on-board power, where batteries are mounted onto the back of the camera, and block power, where cameras are tethered to a large battery with a cable.
On-board batteries are usually used for handheld camera builds or other builds where the camera will move a lot and be mobile. While block batteries are often used for studio setups, such as when the camera is on a tripod or a dolly - which has a platform where the battery can rest.
Being on block power also enables on-board batteries to be replaced, or hot swapped, without powering down the camera.
In some cases an on-board battery plate for V-mount, gold mount or B-mount batteries may be slid onto the 15mm rods at the back of the camera. These batteries don’t power the camera for as long as block batteries, but they allow the camera to move untethered.
MONITOR
Now that the camera has powered up and we can see an image, let's attach a monitor onto the camera that the DP or operator can use to assess exposure and frame.
Using an articulating arm, such as a Noga arm, we’ll screw one side into the base of the monitor and the other side into a quarter inch mounting point on the camera, such as into the handle or the top plate.
There are two cables that need to be plugged in to get the monitor working. The first is a power cable, such as a D-Tap or a two pin, which we’ll plug into a port on the battery plate, into the battery itself or into a port on the camera. It’s also possible to power monitors with their own batteries.
The second cable is an SDI which provides a video feed from the camera to the monitor. This is plugged from an SDI output on the camera into an SDI input on the monitor.
TRANSMITTER
Occasionally, directors and focus pullers will stand by the camera and watch its on-board monitor, but more often than not they like to have a transmitted video feed to their own monitor.
To set this up we’ll attach a wireless transmitter, such as a Bolt 500, to the camera. Again this requires two cables: a power cable and an SDI video feed.
This Bolt will then transmit a live video feed wirelessly from the camera to a receiver. The signal is either received by a VT operator at their station, where it will be output to different monitors, or it can be directly attached and cabled to a director’s monitor, or paired with a sidekick receiver that goes into a focus puller’s monitor - so that they can see a live video feed of what the camera is recording while remaining physically separated from it.
FOLLOW FOCUS
Next we’ll need to be able to change the focus on the lens.
There are two types of gear we can use: a standard follow focus, that needs to be operated by standing next to the camera, or a wireless follow focus which is operated remotely using a handset.
The 1st AC will attach a wireless follow focus motor such as from Arri, or Preston, by mounting it onto a rod. This could be top mounted from a short rod, or bottom mounted onto one of the 15mm or 19mm rods on the bridge plate. The teeth of the focus motor need to be aligned with the gears on the barrel of the lens.
You then need to plug a power cable from the camera to the motor. Sometimes an additional MDR box may need to be mounted on the camera, which the motor will be plugged into. The focus puller can then alter the focus of the lens remotely by turning the knob on the handset.
FOCUS ASSIST
A modern tool that some 1st ACs use is a focus assist device, like a Cine Tape, which sends out an ultrasonic signal to measure the distance between the camera and a subject. This allows the 1st AC to more accurately judge and pull focus.
This tool is especially useful when shooting on film, where focus needs to be judged more off distance than by looking at an image on a monitor.
The focus assist is mounted using an articulating arm. It usually comes with a readout box that also needs to be mounted onto the camera, and both the box and the focus assist itself need to be plugged into a power source. The unit then wirelessly sends a distance readout to the box as well as to the focus puller’s handset.
SOUND
Finally, sound equipment may be attached. This could either be in the form of a small box, such as a tentacle, which is used to maintain the same timecode between the audio recorder and the camera - which is plugged into the timecode port, or a device which feeds the audio signal from a mixer into the camera through an input such as a 6-pin Lemo.
CONCLUSION
Cinema cameras are modular by design and therefore capable of being configured into an array of different shapes and sizes depending on what rigs will be used, or how the DP would like to operate the camera.
If you’re new to cinema cameras and don’t have much hands-on experience, the easiest way to practise assembling cameras is at a gear rental house. Reaching out to them to intern there or asking them if you could pop in and job shadow a bit will allow you to see various cameras in the flesh and get experience on how to assemble them.
It’ll also help you to better learn the gear and be able to put together a camera gear list of what you need to rent for a shoot. So that, hopefully, when you see a cinema camera on your next job, you’ll have some experience and knowledge on how to go about setting one up.
How Pro Filmmakers Gear Check Before Shooting
Let's look at how professional crew members gear check cinema cameras before shooting with them.
INTRODUCTION
If you’re new to filmmaking you may not know that prior to shooting, the technical crew always attends a gear check to carefully go through all of the equipment that will be used to shoot a film.
In this video I’ll explain why this is done, then take you through the entire process of how a cinema camera is gear checked, so that you can do it the same way as the pros on your next shoot.
WHY DO A GEAR CHECK?
While smaller corporate, documentary or solo shooter jobs may be done by cinematographers who own, rent and operate their own gear, most industry level jobs are shot with high end equipment which is rented from gear houses.
For commercials or short form content, crew is usually booked and paid for a half day to do a gear check at the rental house a day prior to shooting. While long form crew on features or series may be booked to do a few days, or even a week long, gear check that requires a greater degree of technical prep.
There are three main reasons why industry productions do gear checks.
The first and most important reason for a gear check is to ensure that all the equipment that is being rented is in proper working order and that the crew has all the necessary bits and pieces to make the gear package work. For example, you never want to just grab the gear the night before a shoot only to find out as you’re setting up to do the sunrise shot that you’re missing a cable that you need to power the focus motor. I’ve even seen, with my own eyes, an extra Alexa Mini LF, that wasn’t gear checked, brought straight onto a job only to find out that it was missing a sliding dovetail plate and couldn’t be rigged onto a car mount to do a sunrise driving shot.
The second reason to perform a gear check is to prevent accepting faulty or broken gear from the rental house. It’s much easier to immediately flag an issue with gear as you are testing it, than to report faulty gear to the rental house on set and then go through the blame game of when exactly the gear broke and who is responsible for fixing or replacing it.
The final reason is a bit of a rarity, but occasionally, in the event of an insurance claim related to faulty equipment, such as dead pixels on the sensor that spoil the footage, or problems from a film stock, the camera tests that were shot during the gear check may be reviewed by the insurance company or assessor.
GEAR CHECK
So, now that we know why it is so important to do a gear check prior to shooting, let’s go over how industry level crew perform a gear check from start to finish.
The easiest way to make sure that a cinema camera is working properly and that all the necessary accessories are there is to build the camera into the same configuration that it will be built into on set.
The focus puller will assemble the camera and attach all of the accessories, such as a lens, a mattebox, a focus motor, a monitor and a transmitter. They will build the camera into a neat form factor and make sure that all of the necessary accessories are getting power and are functioning properly.
Crew may need to get the floor workers at the gear house to get small bits and pieces of gear to make their build work, such as finding the right mattebox back or reduction ring for a unique lens, getting extra backup cables, or changing out the gear on a follow focus to a different pitch.
The VT operator will test to make sure that they are receiving a video feed to their station, and check that the focus puller is receiving a transmitted signal to the sidekick receiver that is going into their monitor.
If audio will be recorded on the job then the sound recordist will also check that their gear is working and that there is a transmitted audio feed which they can send to the director or clients. They may also rig something like a tentacle onto the camera which will keep the timecode in sync with their external recorder.
Over in another section of the gear house, the grips and electrical department will make sure their equipment is also in working order using the same idea of setting up, powering and testing the gear.
Now that the camera has been built into its shooting configuration, it’s time to start programming or marking up the lenses. The specifics of this will change depending on the kind of wireless focus handset the focus puller has, although the concept is the same - it even applies to old manual follow focus units.
Each cinema lens has markings on the barrel which indicate at what distance the lens will render sharp focus. The 1st AC will now sit and transcribe these markings from the barrel of the lens to the focus ring on their wireless unit.
Some lenses, such as the LDS Ultra Primes or the Cooke S8/i series, have electronic contacts (Arri’s LDS or Cooke’s /i protocol) that send metadata from the lens, including focus distance, to the camera. This means that when using these lenses with certain compatible focus handsets, they don’t need to be manually programmed, as the focus distance will automatically display on the handset.
However, for most vintage lenses or incompatible handsets, the focus puller will need to manually program them. This is done by first calibrating the lens, so that the focus motors find an end stopping point for infinity and close focus. The focus puller will then systematically align each distance marking with the centre indicator mark. For example, it may start at infinity.
This distance can then be programmed into the handset so that the handset knows that for this lens, at this position on the focus motor, the reading is at infinity. This same method can also be done by manually marking old focus rings with a pen.
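As a rough illustration of what that mapping amounts to, here is a hypothetical sketch - the motor positions, distance marks and linear interpolation below are all made up for the example; real handsets from Preston or Arri handle this internally and with far more care:

```python
import bisect

# Hypothetical pairs of (motor position from 0.0 to 1.0, barrel distance mark in metres).
# 999.0 stands in for the infinity mark; 0.6 is the close focus end stop.
lens_map = [
    (0.00, 999.0),
    (0.20, 10.0),
    (0.45, 3.0),
    (0.70, 1.5),
    (1.00, 0.6),
]

def distance_at(position: float) -> float:
    """Interpolate a focus distance readout for a motor position between two marks."""
    positions = [p for p, _ in lens_map]
    i = bisect.bisect_left(positions, position)
    if i == 0:
        return lens_map[0][1]
    if i == len(lens_map):
        return lens_map[-1][1]
    (p0, d0), (p1, d1) = lens_map[i - 1], lens_map[i]
    t = (position - p0) / (p1 - p0)
    return d0 + t * (d1 - d0)

print(distance_at(0.575))  # 2.25 - halfway between the 3 m and 1.5 m marks
```

In reality focus scales are not linear between marks, which is exactly why the 1st AC transcribes as many engraved markings as the lens offers rather than just the end stops.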
Once each lens in the set has been marked up, the 1st AC is ready to shoot camera tests. A slate will be marked up with the name of the production, the date, the camera’s serial number, the lens’s focal length and serial number, as well as any other info that may be requested, such as the stop, white balance or the focus distance.
The camera will be positioned in front of a focus chart. The 1st will measure the distance between the chart and the film plane of the camera and set the distance marking on the lens to that same measurement. They will check that the image is sharp - which it should be. If it’s not sharp then the lens may need to be sent off to the lens technician at the rental company for adjustment.
Before shooting the DIT will dial in the correct specs, such as resolution and codec, and any other settings specified by the cinematographer - such as a LUT or frame lines for the aspect ratio. They will then put in the slate with the relevant info and shoot a short clip for a couple of seconds to record that the lens was indeed sharp. They will repeat this process for every lens that will be used on the production. These tests are also a good way of checking that each lens will cover the full sensor and that there will be no vignetting, especially at wider focal lengths.
Sometimes, jobs that require VFX work will also ask for lens grids or distortion charts to be shot. These are especially necessary for anamorphic lenses or wider focal lengths that have a lot of curvature to how the image is captured.
Distortion charts have a straight black and white checker block pattern that can be used to measure the degree of distortion a lens has in post production, and then straighten, or unwarp, the image to make 3D tracking easier.
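To make that idea a bit more concrete, here is a deliberately over-simplified sketch of a radial distortion model - a single made-up coefficient, rather than the multi-parameter solves that VFX software actually performs from a shot chart:

```python
def distort(x: float, y: float, k1: float = -0.05):
    """Apply simple one-coefficient radial distortion to a normalised point (x, y)."""
    r2 = x * x + y * y
    scale = 1.0 + k1 * r2
    return x * scale, y * scale

# Points near the edge of frame are displaced far more than points near the centre,
# which is why wide and anamorphic lenses bow straight lines and need unwarping
# before 3D tracking.
print(distort(0.1, 0.0))  # barely moves
print(distort(0.9, 0.0))  # pulled noticeably towards the centre
```

Measuring how the known straight lines of the checker pattern bend lets post production solve for coefficients like this, and then invert them to straighten the image.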
When shooting these tests it’s important to line up the camera to the centre of the chart and make sure that the chart is straight on all axes. Like with the focus tests, a front slate should be used to denote which lens is being shot as even the same brand and focal length lens in a different set may have minor differences in distortion.
Once all the tests have been shot, the card can be handed over to the DIT who will copy over the footage onto the prepared hard drives and examine the integrity of the tests to make sure that there aren’t any issues.
If there are specialised rigs on the job like a Steadicam, gimbal or remote head, then the camera needs to be built and tested for these setups. It’s important to ensure that the camera will be able to balance on these rigs. This might involve stripping the camera into a smaller form factor.
While all of this is going on, the 2nd AC will usually be working on labelling gear and organising it into cases. They will apply long strips of coloured gaffer tape onto the outside of the lens box and use a permanent marker to write down each focal length inside the box, its stop and its minimum focus distance. This makes it easy to identify the lenses you have and give quick information on the stop and MOD to the focus puller or cinematographer.
Long form jobs may involve a lot of labelling and organising of various other gear, such as filters, into the easiest manageable workflow. It’s a loader's job to be able to know and pull out every piece of gear that is on the job at a moment’s notice.
Finally, the camera may get packed into what’s called a coffin, or a standby box, that allows the camera to easily and safely be packed away without deconstructing the whole build.
What A DIT Does On A Film Set
What exactly is a DIT? Let's find out.
INTRODUCTION
In the transition from shooting on film to working with digital cinema cameras, a new job appeared on movie sets - almost overnight. That job is the DIT or digital imaging technician.
You may not have heard of this crew member, but they are now indispensable on any industry level set. So, what exactly is a DIT and what do they do that makes their job so important? Let’s find out.
ROLE
The job of the DIT came about during the transition to shooting with digital cameras. Before then, exposed film stock would be passed on from the camera team to the film lab where it would be developed and later enter a post production workflow.
Shooting digitally suddenly meant that camera cards with footage on them needed to be transferred onto hard drives during shooting, and that digital footage needed to be prepped on set for post production.
This is because each card that a digital camera records onto contains a finite amount of recording space. For example, when shooting in open gate ARRIRAW on an Alexa 35, each 1TB card will be able to roll for 28 minutes before running out of space. If you get 3 cards in a camera kit, this means that after an hour and a half of roll time you will run out of recording space, unless DITs transfer the digital files from the cards onto hard drives.
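As a back-of-envelope version of that calculation - the data rate below is an assumption I’ve reverse-engineered to be consistent with the 28-minute figure, not an official Arri specification:

```python
def minutes_per_card(card_size_gb: float, data_rate_mb_per_s: float) -> float:
    """Rough roll time for one card at a sustained camera data rate."""
    seconds = (card_size_gb * 1000) / data_rate_mb_per_s
    return seconds / 60

# Assuming roughly 600 MB/s for open gate ARRIRAW at 24 fps,
# a 1 TB card gives in the region of half an hour of roll time.
print(round(minutes_per_card(1000, 600)))  # ~28 minutes
```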
There may be a bit of a misconception out there that this is all that a DIT does: dump footage onto hard drives and back it up. This is a better description for another crew member that may work for the DIT, a data wrangler. However, DITs on high end jobs do far more than just sit behind a computer in a van, well, at least they should.
Overall, the DIT acts as a connector between the on-set production and capture of the footage and how that footage is set up for post production.
They collaborate with the DP to correctly set up digital cameras and dial in the settings: such as the codec, resolution and frame rate. Some DPs also prefer that the DIT handles any setting changes that may happen during shooting, such as if they want to shoot at a high frame rate for a single shot, which means that the camera must drop down in resolution.
Importantly they also ensure, or oversee, the transfer of footage and metadata from camera cards onto hard drives. Once the footage has been secured they are responsible for quality control checking the integrity of it, by examining things like the sharpness of the focus, the exposure, the noise, any unusual colour tints or irregularities from filters, and to check for dead pixels, or any unwanted optical defects like lights strobing, or ghosting.
They will establish a file structure on the hard drives and come up with a workflow for that data where necessary. Depending on the needs of the project they may be tasked with preparing the files for post production so that the offline editor receives the drive with files that are ready to work with.
This may include doing transcodes or creating proxies from high resolution footage or syncing up the video with the sound clips on a timeline.
At the same time, DITs on big jobs often work on-set with the cinematographer to provide advice or manage how the images are captured, by ensuring that they are correctly exposed.
As such, DITs need to be versed both in the on-set camera side of capturing images, as well as in the post-production computer side of managing camera data and creating a workflow.
AVERAGE DAY ON SET
Before production begins, the DIT must attend the gear check. This is where the camera that will be used on the production is set up at the gear rental company and tested by the crew to make sure all is in working order.
The DIT will program the correct capture settings into the camera that they received from the cinematographer, such as resolution, RAW or ProRes, or adding the correct de-squeeze factor if using anamorphic lenses.
They will also add and overlay the frame lines for the final aspect ratio or ratios of the project, and, at the request of the DP may add a custom LUT - so that the image may be viewed with a specific look.
Once the camera is built, the focus puller will shoot a test roll of a camera chart on each lens that will be used on the job. The DIT will then transfer these tests to the drive, and carefully examine them to make sure the focus on each lens is sharp, that there are no unwanted aberrations and that the sensor is clean and free of any dead pixels.
They usually also shoot a black - where the camera rolls with the turret covering the mount - and a white - where a piece of paper is passed through the frame and recorded. This is a good way of checking the integrity of the sensor.
At the beginning of every shoot day, the first thing that they will do is to take the turret off the camera and physically check the sensor with their eyes to make sure there is no dust on it. If there is dirt on the sensor which may have been collected in between lens changes then it is the job of the DIT to very carefully dust it or clean it with a sensor swab.
A card will be loaded into the camera, they will double check that all the settings are correct and the camera is then ready for shooting.
On big jobs with multiple cameras, the DIT may be called upon by the cinematographer to help maintain a consistent exposure across all of the cameras. This usually involves getting a transmitted video image from each camera and using exposure tools such as a waveform and false colour to find an exposure that is consistent. They will then get the camera assistants to adjust the aperture or filtration on each camera until all exposures are uniform. Or they may use a remote iris control to adjust the aperture.
Once enough footage has been shot they will call on the 2nd AC to reload the camera. The AC will hand them the card with the footage on, they will head to the DIT van, and they, or their data wrangler, will immediately start transferring and backing up the footage to hard drives on a computer using software such as Silverstack - which uses checksum verification to make sure that all the files from the source get copied over to the correct destination on the drive without any missing files or duplication.
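This is not Silverstack itself - just a bare-bones sketch of the idea behind any checksum-verified offload: hash the source clip, copy it, hash the copy, and only call it secure if the digests match.

```python
import hashlib
import shutil
from pathlib import Path

def md5_of(path: Path, chunk_size: int = 8 * 1024 * 1024) -> str:
    """Hash a file in chunks so large camera clips never need to fit in memory."""
    digest = hashlib.md5()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verified_copy(src: Path, dst: Path) -> bool:
    """Copy src to dst and confirm the destination matches the source checksum."""
    dst.parent.mkdir(parents=True, exist_ok=True)
    source_digest = md5_of(src)
    shutil.copy2(src, dst)
    return md5_of(dst) == source_digest

# Hypothetical usage: offload every clip from a card to a master and a backup drive.
# card = Path("/Volumes/A001R1AA")
# for clip in card.rglob("*.mxf"):
#     for drive in (Path("/Volumes/MASTER"), Path("/Volumes/BACKUP")):
#         ok = verified_copy(clip, drive / "A001" / clip.name)
#         print(clip.name, "OK" if ok else "CHECKSUM MISMATCH")
```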
During this process they will check the clips on a monitor to make sure there are no optical or camera related issues with the files. If, for example, one shot was out of focus on every take, the DIT will quickly go to set and inform the focus puller or DP that there was an issue. They will then determine whether it is possible to re-shoot the shot.
Then, depending on what was requested from the person or company doing post production, they will prepare the drive for the edit. This usually means transcoding the footage to a lower resolution and codec that the offline editor can work with, making sure that the file names are the same and will correctly align again for the online edit.
Usually DPs will ask them to add a LUT to the proxy files. Some cinematographers will also ask the DIT to colour correct each clip that is transcoded so that they can be viewed in the edit with a look that will be as similar as possible to the final grade.
Once a card has been transferred and backed up onto at least two drives, the DIT will then return the card to the 2nd AC. Any card that they return means that the footage on it has been secured and it is safe to erase or format the card and start recording a new roll of footage over it.
TIPS
The worst fear of any DIT is that footage will get lost. It’s also probably the easiest way to get fired and never hired again.
That’s why good DITs have a system that they always stick to that ensures that no data is ever lost. This may be getting the loader to label and seal each exposed card with a roll number. The DIT can then put this sticker next to the card reader when the card is inserted and the footage is copied over, then take the label off the card and case once the footage has been copied and backed up.
That way, even if you are very busy, you’ll always know what footage needs to be copied and what footage has already been copied.
Some DITs also use software to do this, such as Parashoot. This software will detect when all the files from a card have been copied over and backed up. Once this card is handed back to the 2nd AC and loaded into the camera, a message will pop up on the screen asking if you’d like to erase the card. If this message pops up then the loader knows that the card has been backed up and is ready to format.
Another important skill for a DIT is to effectively manage the flow of cards. This may mean that they call for the camera to be reloaded early, before it is completely filled up, so that they can start copying and backing up the footage. If they let too many cards fill up without copying them over, then they may get into a situation where the production runs out of available cards to shoot on - another easy way of getting fired.
Another trick most DITs do, is to reload the camera after the first shot on day one is completed. They can then double check that there are no technical issues with the footage, before an entire card of shots gets filled up with unusable takes.
Being a DIT means that you need to always stay abreast of the newest technology. Most DITs have their own van, or cart, that they custom build into a mini post production studio with a computer, monitor, RAID and a UPS solution.
They always need to ensure that they have the necessary gear to deal with whatever camera is on the job. For example, if doing a shoot with two Alexa 65 cameras, you will need to have a lightning fast computer and hard drives to keep up with copying and processing the enormous data files.
Being able to calculate how much storage you need is another important skill for DITs to have. Hard drives are expensive, so you don’t want to overestimate your storage needs and get production to purchase lots of storage which isn’t used, but at the same time you never want to be in a position where you run out of space on the drives.
There’s a good tool on the Arri website where you can plug in the camera and settings you are using and it’ll spit out how much recording time you’ll get per card. This may be a good starting point for making this calculation.
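For a very rough, assumption-heavy version of that estimate in code - the roll time and data rate below are placeholders you would replace with the figures from the manufacturer’s calculator:

```python
def storage_tb(shoot_days: int, hours_rolled_per_day: float,
               data_rate_mb_per_s: float, copies: int = 2) -> float:
    """Total storage in TB, including however many backup copies are kept."""
    seconds = shoot_days * hours_rolled_per_day * 3600
    total_mb = seconds * data_rate_mb_per_s * copies
    return total_mb / 1_000_000

# e.g. a 5-day job rolling about 1.5 hours a day at ~600 MB/s, backed up twice:
print(round(storage_tb(5, 1.5, 600), 1))  # ~32.4 TB
```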
How Cinematographers Set Their Exposure
Exposure is determined by six factors: the sensor’s sensitivity or ISO, the shutter speed, the lens’ aperture, the frame rate, ND filters and of course the overall amount of light in the scene that is being photographed. Let's take a look at how cinematographers expose.
WHAT IS EXPOSURE?
You’ve no doubt seen how some movies look darker than others. Comedies may be bright and colourful, while dramatic crime films may be dark and moody. This is, to a large part, determined by how cinematographers set their exposure: the level of brightness across a frame.
Although it may seem simple on the surface, being able to expose properly is one of the most crucial skills that every cinematographer needs.
Exposure is determined by six factors: the sensor’s sensitivity or ISO, the shutter speed, the lens’ aperture, the frame rate, ND filters and of course the overall amount of light in the scene that is being photographed. Before we talk about how to get to the correct exposure it’s important to be aware of a few terms.
The amount of light present in an image will almost never be completely even across the whole frame. Part of the job of the cinematographer is to use lighting to create different pockets of light and shadow in an image - called contrast. Sometimes that contrast might be subtle, other times it may be strong.
The difference between the brightest and darkest part of the image is called the contrast ratio. Traditionally in cinematography this refers to the difference in illumination between the brightness of the key light and brightness of the fill.
Similarly, a camera also has a threshold of how much of the difference between light and dark it can capture before the information turns to pure white or pure black. This is called dynamic range.
Modern digital cinema cameras usually have around 15 to 17 stops of dynamic range, while consumer video cameras may have less than 14 stops. This is one of the reasons that it may be more difficult to achieve a balanced exposure on cheaper cameras.
If the strength of the contrast ratio exceeds the dynamic range of the camera, then the image will either blow out - and turn to pure white - or it will get crushed - and turn to pure black.
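Since contrast ratios and dynamic range are both usually talked about in stops, it helps to remember that every stop is a doubling of light. Here's a quick, purely illustrative Python sketch of that relationship:

import math

def ratio_to_stops(contrast_ratio):
    return math.log2(contrast_ratio)

def stops_to_ratio(stops):
    return 2 ** stops

# An 8:1 key to fill ratio is a 3 stop difference, while a camera with
# 14 stops of dynamic range can hold a brightness ratio of roughly 16,000:1.
print(ratio_to_stops(8))          # 3.0
print(round(stops_to_ratio(14)))  # 16384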
Usually cinematographers like to light scenes so that the total exposure falls within the dynamic range of the camera. In other words, they like to create images in which the highlights do not blow out and the shadows are not completely crushed - in a similar way to how the human eye perceives brightness in the real world.
HOW TO FIND THE RIGHT EXPOSURE?
So, we know what exposure is, but how do we find the right exposure using a camera?
Cinematographers, or sometimes the DIT, regularly use different tools to determine the best exposure - which we’ll get to a bit later. But good exposure on digital cameras can also be achieved just by referring to the image on the monitor.
The thing is, what is considered a ‘good’ exposure is a bit subjective. For example, The Batman is lit and exposed at a very low level of illumination, while a comedy like Palm Springs exposes at a much brighter level. I’d say this lower level of exposure helps to accentuate the moody, emotionally dark tone in Batman, while the brighter exposure is, literally, lighter, and brighter in tone.
It doesn’t necessarily mean an image is bad, or incorrectly exposed if a small part of the frame is blown out, or crushed to black. As long as its exposure suits the tone of the story and the environment being photographed.
Although exposure is subjective, most cinematographers still try to avoid one thing - blowing out highlights to pure white. There are always exceptions to this rule and situations where this might be impossible to avoid, but for the most part a good rule of thumb with exposure is to try to set it so that any large, bright part of the frame maintains detail.
This is called exposing for, or protecting, the highlights.
Cinematographers using this technique will first dial down the exposure to the point where the brightest highlights still preserve colour information. This may mean that other areas of the image are now too dark.
To counteract this, DPs may then add light - using artificial sources - to lift up the brightness of the shadows until there is detail present.
They may also use lighting to bring down the levels of any highlights to an acceptable level, for example using a dimmer on a practical lamp, or adding an ND gel or a net to a window.
That’s why the best way to control the contrast ratio in an image is usually by lighting.
Sometimes cinematographers may be forced to blow out bright highlights that they can't control and that take up only a small portion of the image, such as windows, flames or practical fixtures.
As a side note, film stocks tend to deal with overexposure of highlights better than digital cameras do. Film gives more of a soft halation, a gradual fall off and renders highlights in a creamier colour, as opposed to the uglier, pure white of digital.
However, there are ways to counteract blown digital highlights, both in the grade, as well as by trying to introduce a softer highlight rolloff using diffusion filters.
EXPOSURE TOOLS
Although exposing by just looking at the monitor is popular, it also comes with a few downsides. The first of which is that, unless you’re working with a specially calibrated monitor, every screen produces a different level of brightness and represents colours and exposure values in different ways. This could trick you into exposing incorrectly.
Light conditions on set may also affect how you see exposure. Even if you're working with an older monitor like a TV Logic 056, which has lovely, accurate colours, your exposure may still be off because the monitor's picture appears very dim in bright outdoor conditions.
To avoid making exposure mistakes, cinematographers either use physical tools, like a light meter, or digital software tools built into the camera such as a waveform or false colour - which is the most common exposure tool on high end cinema cameras from Arri and Red.
When activated, false colour displays images on a monitor in a specific colour spectrum - ranging from a warm red, to a cool purple or dark blue - with green sitting somewhere in the middle. Each colour represents an IRE value - a measure of brightness on a scale of 0 - being complete black - to 100 being complete white.
Cinematographers can use false colour to check that their highlights aren't red and blowing out and that their shadows aren't purple and getting crushed to black. People often recommend that, to get perfect skin tones, you should expose to where the skin tone turns green or a light grey on the false colour reading.
However, as I mentioned before, some cinematographers may want to disregard this and expose darker in the dark grey to blue range to achieve a moodier feel.
Another useful exposure tool to understand is the waveform. This is made up of a vertical axis that expresses exposure in IRE values from 0 to 100 and a horizontal axis that represents the width of the frame.
For example if there is one bright highlight in the centre of the image it will be represented by a spike in IRE value in the centre of the waveform.
Like with false colour, a waveform can also be consulted to see whether areas of the image are exceeding 100 IRE and blowing out, or sitting on 0 IRE and getting crushed.
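To make the waveform idea a little more concrete, here's a minimal sketch, assuming a frame stored as an 8-bit greyscale NumPy array - my own simplification. It reduces each column of the image to its brightest value in IRE, which is enough to spot clipping:

import numpy as np

def simple_waveform(frame):
    """Return the peak brightness of each image column as an IRE value.

    A real waveform monitor plots every pixel in each column, but the
    per-column peak is enough to see which parts of the frame clip.
    """
    ire = frame.astype(np.float64) / 255.0 * 100.0  # map 0-255 to 0-100 IRE
    return ire.max(axis=0)  # one value per column, left to right

# Hypothetical frame: mid grey with a blown-out vertical strip in the centre.
frame = np.full((1080, 1920), 120, dtype=np.uint8)
frame[:, 900:1000] = 255

peaks = simple_waveform(frame)
print(np.where(peaks >= 100)[0][[0, -1]])  # columns sitting at 100 IRE: [900 999]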
Usually when consulting a monitor for exposure it's better to judge with a LUT applied rather than looking at the log image. Because a log image is so flat, it's easy to under or overexpose while still seeing detail in the log. Later in post production, once you add a LUT or grade the footage, you may find that it is under or overexposed and doesn't have enough detail to work with.
CONTROLLING EXPOSURE
As I mentioned before, there are six exposure variables. Each of these variables can be manipulated to arrive at an acceptable exposure, however, some of these variables also have other effects on the image.
For example frame rate changes the speed at which motion is captured, the shutter angle changes how blurry that motion is, the ISO affects how grainy or noisy the image is, and the aperture affects to what extent the background is out of focus.
For this reason, many cinematographers maintain the same or a similar frame rate, shutter, aperture and ISO throughout a project and change the other two variables - the amount of light on a set and the strength of the ND filter - to control exposure.
ND or neutral density filters are made up of glass which reduces the amount of light that hits the sensor without changing the colour characteristics of that light.
When lighting a scene it's important to ensure that there is enough light to properly expose the image. It's better to have too much light than too little. You can always cut down the amount of light by using ND filters, but if a space is too dark you can't get a brighter exposure without increasing the amount of grain or noise by lifting the camera's sensitivity.
The best way to go about controlling your exposure with a cinema camera is to first ensure your ISO, shutter and frame rate are correctly dialled in. Then stop down the aperture on the lens until you arrive at an exposure which you like. This may leave the lens at a stop of, for example, T/8.
Next, you need to determine what stop or aperture you would like to shoot at. For example maybe you'd like to shoot at T/2 to get a shallow depth of field.
You then need to calculate the difference in stops between T/8 and T/2 - one, two, three, four. This means you need to add four stops of ND to get to a shooting aperture of T/2.
Each stop of ND filter is measured in increments of 0.3. So four stops of ND would be ND 1.2. This means you can add an ND 1.2 either with internal ND filters or with external glass ND filters in a mattebox and set the lens to T/2.
You now have the correct exposure, while also having your desired aperture of T/2 that produces a beautiful, shallow depth of field.
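Here's a small Python sketch of that arithmetic. The 0.3-per-stop ND notation is standard; the function itself is just an illustration:

import math

def nd_for_target_stop(metered_stop, target_stop):
    """Return the ND value needed to open up from a metered T-stop
    to a wider shooting T-stop, in the usual 0.3-per-stop notation."""
    # T-stops scale by the square root of 2 per stop, so the number of
    # stops between two values is 2 * log2 of their ratio.
    stop_difference = 2 * math.log2(metered_stop / target_stop)
    return round(stop_difference * 0.3, 1)

# The example above: metered at T/8, want to shoot at T/2.
print(nd_for_target_stop(8, 2))  # 1.2 -> add an ND 1.2, i.e. four stops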
Cinematography Style: Polly Morgan
Polly Morgan’s work as a cinematographer lies at the intersection between naturalism and traditionalism. Her work is fueled by capturing an expression of human experiences while at the same time often embracing a more traditionally cinematic style of working. Let's unpack her work further in this edition of Cinematography Style.
INTRODUCTION
“Stories that really excite me are the stories that are really representative of the human experience. The pain and the suffering or the joy and the love and all of these things that we all go through as people.”
Polly Morgan’s work as a cinematographer lies at the intersection between naturalism and traditionalism. Her work is fueled by capturing an expression of human experiences while at the same time often embracing a more traditionally cinematic style of working.
This video will unpack this further by going over how she got her start in the industry, looking at her philosophy on capturing images and breaking down some of the gear which she has used to do so.
BACKGROUND
“As a young girl I was always just a massive fan of movies. When I was 13 I was lucky enough to experience a film crew come onto our farm house and using it as base camp. You know, they let me look through the camera and it was really then that my love of cinematography was born.”
To turn this early dream into a reality took some time and a lot of work. After getting a Bachelor of Arts in broadcasting where she got hands-on experience with operating cameras, she spent about 10 years working in film as a crew member.
One of her early jobs in the industry was working as a production assistant for RSA, Ridley Scott’s commercial production company. After putting in time as a PA she worked as a camera trainee and then moved up to working as a loader. As an assistant she worked on some big studio films, such as Inception, and worked for many well known DPs such as Haris Zambarloukos, Wally Pfister and Dan Mindel.
This experience allowed her to study at the AFI Conservatory, after which she was able to launch into a career as a cinematographer - first shooting independent movies, before progressively shooting larger studio films.
PHILOSOPHY
When she starts a new project she’ll work closely with the director in pre-production to determine the emotional arc of the story, and then find a visual camera language that is representative of that story and the characters in it.
Her goal, along with the director she is collaborating with, is to craft a look that embodies a feeling. A look that supports the story so well that it becomes invisible to the audience, to the point that they are unaware that their emotions and impressions of the story are being manipulated by the camerawork.
“The one thing about cinematography is you don’t want the viewer to be aware of it. You don’t want someone to watch a film and say, ‘uh, the film was terrible but wow that cinematography.’ Because my job is to serve the story. My job is not to make beautiful imagery. If you’re telling a story how do you want the viewer to feel?”
This look and feeling that she creates may be different depending on the subject matter and genre she is working on. Suspenseful horror may involve a more creeping movement and a darkly exposed image, whereas a lighthearted comedy will have a brighter colour palette, lighter exposure and a more stable camera.
Perhaps her ability to alternate between different techniques and working styles comes from her diverse filmography.
“I think I’ve been incredibly fortunate in that I managed to shoot commercials, TV, feature films and documentaries. It means that you’re working with a variety of budgets and you’re working with a variety of crew size and a variety of time.”
Although she is capable of changing the look of the cinematography to suit the story, I feel like at the core of her look she balances naturalistic feeling techniques - such as using natural, soft light and handheld movement - with more traditional, conventional, classical techniques - like using longer focal lengths for close ups, smooth cinematic movement or using a blue-ish moonlight and haze look for night exteriors.
In this way she goes between using newer school techniques and more traditional techniques to tell stories.
So much of the role of the DP involves focusing on the technical side of creating images, but an equally important part which she focuses on is how she manages people to get the most from her crew and create the right on set environment.
“It takes a crew of people to make a film. It’s not just being an artist. You also have to be a manager and you have to be a leader. It’s just a juggle of, like, how to get the best out of people. How to be, sort of like, a collaborator and how to be a mediator. If people see the people that they are working for get stressed out or lose their cool, I think then it really creates an atmosphere which is not a conducive one to do good work.”
GEAR
Since I keep mentioning how she contrasts naturalistic techniques with more traditional ones - let me break this down a bit by defining some examples of how, and with what gear, she does this.
Let’s look at her lighting, starting with the naturalistic side. Ambient light, which doesn't come from man-made fixtures, tends to be quite a soft source of illumination. Natural sunlight often bounces off various objects, like walls, grass, or even the moon. Because bounced light is less direct, it is softer. Therefore she often uses softer, diffused or bounced sources of light to create a naturalistic feel.
“A more modern approach to lighting has been softer light. Now that we have more sensitive cameras and we don’t have to worry like they did 100 years ago on very slow film stocks, we don’t have to light with very strong, hard light.”
An example of her using a more sensitive camera can be found in 6 Balloons, which she shot on the Panasonic Varicam 35 - one of the earliest high end cinema cameras to feature a dual base ISO that went up to 5,000.
Having such a sensitive camera meant that she could use minimal lighting for night scenes on the street to achieve a more realistic look - one shaped by the real world light sources the characters passed. For car driving scenes, which were shot on a process trailer, she accentuated the natural light by rigging RGB Arri Skypanels on the trailer outside the car to create the look of stop lights.
For other films she’s also leaned heavily into a soft, diffused, more neutral, natural, pretty ambient light look - especially for day exteriors. She’s flipped this convention in other stories, by lighting scenes using a more traditional, old school convention - blue backlit moonlight with haze.
On A Quiet Place II, she shot on a far less sensitive Kodak 500T film stock, which she rated at 400 ISO. Shooting at 400 ISO, compared to 5,000 meant that she needed much more light to expose the image properly. She used a combination of large HMI units and the high output LED Skypanel 360 - which she brought to a colour temperature of 4,400K. When shot with the 3,200K film stock it produced a cool, blue look.
In a traditional, old Hollywood style, she backlit the actors with these hard sources, so that the lights were facing towards the camera. This subtly lifts the background and leaves characters in silhouette, illuminated by an edge highlight. She then added haze, which accentuates the light source and raises the exposure of the midtones in the background when the smoke is hit by light.
This technique of creating a cool moonlight is quite a conventional way of lighting night exteriors. Even on other movies which she’s shot digitally, such as The Woman King, she’s repeated this technique.
So her lighting of night scenes is one way that she has contrasted more naturalistic photography with a more conventional approach. Another technique is how she uses camera movement. The films that she shoots often have a rougher, on the shoulder, handheld movement which tracks the characters with a more documentary-esque feeling of naturalism.
She contrasts this with smooth, fluid, camera moves done on dollies, Steadicam or a Technocrane which are more conventionally cinematic.
When it comes to choosing a camera, she's opted for both film and digital acquisition. She's used film cameras, such as Panavision's Millennium XL2, with Kodak 250D and 500T stocks. However, she mainly uses digital cameras. Some examples include Panavision's DXL2, the Alexa 65, Mini and Mini LF and Panasonic's Varicam 35.
Likewise she’s used both anamorphic lenses, like the T-Series and Cooke Xtal Express, and spherical lenses, like the Panavision Primo 70s and the Zeiss Super Speeds.
She’s often drawn to Panavision glass and even shot Where The Crawdads Sing on unreleased prototype large format spherical lenses made by Panavision which had imperfections, focus falloff and a softness to them which are reminiscent of anamorphic glass - again, evoking a more textured, organic, naturalistic feeling.
While, at the same time, she often uses longer focal length lenses for close ups, another more traditional technique.
Netflix Approved Cinema Cameras
If you want to shoot original content for Netflix then you have to do it with one of their approved cinema cameras. Why exactly is this and what cameras are approved?
INTRODUCTION
The age of digital streaming has brought about many changes and has for the most part shifted the movie experience from the cinema to the couch. Perhaps the largest and most influential player in this space is Netflix. If you want to shoot original content for them you have to do it with one of their approved cinema cameras. Why exactly is this and what cameras are approved? Well, let’s find out.
WHY NETFLIX APPROVES CAMERAS
If you go to Netflix’s website, the official reason that they give for mandating their list of approved cameras is “to help creatives produce their best work and create compelling visual experiences for our audience.”
Although this is a pretty general statement, they do go on to say that some of the image capture benchmarks that they take into account when approving a camera include: a high dynamic range, colour accuracy, detail rendition, low grain and high resolution.
For Netflix original fiction, 90% of the footage must be captured on one of their approved cameras. That leaves a 10% allowance for footage that may be needed from specialty cameras, such as smartphones, high frame rate cameras, drones, or action cameras.
For non-fiction content they are a little bit more flexible with this ratio and may allow, for example, more than 10% of the footage to be shot with non-approved cameras, such as when using old stock footage in documentaries.
An interesting question however is whether things like higher resolution truly do provide, in their words, “a compelling visual experience” or a visual experience which best suits the story.
Some of the shows on Netflix with the highest numbers of streams were in fact not original content, and were therefore quite often captured on cameras which don't come close to Netflix's image capture benchmarks of today.
For example, if The Office, the highest streamed TV show on the platform in 2020, was pitched to Netflix today it would have to be captured with a far cleaner, higher resolution, larger sensor, low noise look, which would go against the intended fly-on-the-wall, TV broadcast, 1080p video look of the show.
It should be noted that Netflix’s regulations about approved cameras only apply to newly filmed, Netflix original content. Acquisition of older content is not bound to their image capture benchmarks.
CAMERA SPEC REQUIREMENTS
Let’s dive further into the specific minimum capture requirements that all Netflix approved cameras need to adhere to.
The first, and probably most important minimum requirement that Netflix sets is resolution. Netflix originals need to be captured by a camera in a resolution of at least 3840 horizontal photosites - also called 4K UHD.
One of the reasons I imagine they created this resolution requirement was so that they could offer a dedicated Premium price tier, different from their Standard Full HD tier, which allows users to stream in 4K UHD.
When this resolution requirement was first introduced a number of years ago, it controversially cut out most Arri cameras from approval, including the most popular cinema camera of the time, the Alexa Mini - which has 3424 horizontal photosites - just short of the 3840 that are required.
Since then, Arri has released a few cameras capable of recording at Netflix’s required resolution - which we’ll get into later.
Netflix approved cameras need to be able to capture using either a lightly compressed or uncompressed RAW codec, or an Intra Frame 4:2:2 codec, capable of 10-bit or greater recording with a data rate at or above 240 Mbps in a wide or log scene-referred colour space.
These requirements ensure that cameras capture a good level of colour and information which can be worked with intensively in post-production. Most high-end cinema cameras stand up to these image standards - but it does eliminate some smaller mirrorless cameras - such as the Sony A7 III - which only does 8-bit 4:2:0 internal recording.
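To make those thresholds a bit more concrete, here's a toy Python check of a camera's recording specs against the minimums summarised above. The example figures are illustrative only, not quoted from Netflix or any manufacturer, so always confirm against Netflix's current documentation.

MIN_HORIZONTAL_PHOTOSITES = 3840
MIN_BIT_DEPTH = 10
MIN_DATA_RATE_MBPS = 240

def meets_capture_minimums(horizontal_photosites, bit_depth, data_rate_mbps, chroma_subsampling):
    return (horizontal_photosites >= MIN_HORIZONTAL_PHOTOSITES
            and bit_depth >= MIN_BIT_DEPTH
            and data_rate_mbps >= MIN_DATA_RATE_MBPS
            and chroma_subsampling in ("4:2:2", "4:4:4", "RAW"))

# Illustrative figures only.
print(meets_capture_minimums(3840, 10, 280, "4:2:2"))  # True
print(meets_capture_minimums(3840, 8, 150, "4:2:0"))   # False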
Finally, Netflix requires that approved cameras are capable of jamming timecode to an external source and recording that timecode as metadata. This is mainly used for synchronising sound in post-production which is captured on location with an external recorder.
Again, this eliminates many smaller body cameras, such as the Sony A7s III, which can’t jam or write timecode as metadata.
It’s also possible to shoot on film - which will involve some technical coordination with Netflix - and usually means scanning the film negative at a 4K resolution to comply with their UHD standards.
NETFLIX APPROVED CAMERAS
Since there is quite a long list of Netflix approved cameras to choose from I’ll just be highlighting some of the most popular ones, breaking them into two camps: documentary and lower end cinema cameras and high end cinema cameras. If you’d like to review all the approved cameras you can do so on Netflix’s website.
Starting at the lower end is Panasonic’s S1H, the only camera on the list with an SLR style body that takes both stills and video. This sits at a similar level as Canon’s C70 and Sony’s FX3, which both make the list, occupying the positions of being affordable Netflix approved options that come in a small form factor.
These cameras are great as B or C cams where they may accompany a higher end A camera. Their smaller size and light weight also means they can be rigged into tight or less accessible spaces.
When it comes to observational documentaries, the choice of camera often boils down to a decision between Canon or Sony. Sony’s cheaper FX6 and more expensive FX9 are both approved and take up Sony’s prime spots for single operator documentary cameras. Older Sony models like the FS7, F55 and some broadcast cameras also make the list.
The main Canon single shooter documentary competitor is the C300 - with both the mark two and mark three making an appearance. While the C500 and C700 come in at a higher price point.
Alternatives in this single operator, midrange budget level include Blackmagic’s Ursa Mini and Ursa Mini Pro, as well as Panasonic’s lower budget EVA1 and their higher budget Varicam LT.
Finally, we get into the high end cinema camera range. These cameras are used to shoot the majority of Netflix's series, movies and high end documentaries. This category is dominated by Red, Panavision - whose cameras are built around repackaged Red sensors - Sony and Arri.
Earlier I mentioned that when these approval guidelines were first released, most Arri cameras were excluded because they couldn't record at high enough resolutions - less a technical shortcoming than a lack of interest on Arri's part. It was only their 6K, 65mm format Alexa 65 which made the cut - albeit at a very high budget point.
To address this, they have since added three other 4K capable, Netflix approved cameras to their lineup: the large format Alexa LF, the smaller body version the Mini LF and their recent Super35 camera, the Alexa 35. All of which can be used for both regular spherical and anamorphic capture.
Arri’s early exclusion meant that Red, with their focus on high resolution, dominated the early days of the streaming service. Although much of that early dominance has been eroded in recent years, Reds are still popular cameras of choice for original Netflix productions.
They have loads of camera and sensor variations that are approved - most of which are capable of anamorphic capture. Some of the most popular are the full frame, 8K Monstro, their Super35 Dragon and Helium cameras, the newer body Red Ranger, and their little, low price point Red Komodo.
Panavision also offers the DXL2 which takes the Red Monstro sensor and places it in a more traditional studio style cinema camera body with great ergonomics.
The final high end competitor is Sony, with their Venice and Venice 2 which are approved for spherical and anamorphic capture. This has been a great product for Sony which has won them a lot of market share in this high end cinema camera category.
ASPECT RATIO REQUIREMENTS
So we know that Netflix original content requires using a camera capable of capturing at least 4K UHD. But, what happens after filming is completed and the final files need to be delivered to Netflix?
The first thing to be aware of is aspect ratio. This is displayed as the value of a file’s width in relation to its height. The easiest way to get to this ratio is to divide the number of pixels that make up the width by the number of pixels that make up the height.
So if we take video captured in Netflix’s desired 3840x2160 resolution, we divide 3840 by 2160. This equals 1.78. Therefore it has an aspect ratio of 1.78:1.
All original Netflix content needs to be delivered in 4K UHD in a container aspect ratio of 1.78:1 otherwise called 16:9 - so 3840 pixels by 2160 pixels. This doesn’t however mean that aspect ratios besides 16:9 can’t be used.
Netflix supports a host of different established active picture aspect ratios, such as 1.33, 1.78, 1.85 and 2.39:1. These ratios are a creative choice made by filmmakers.
When productions choose an active picture aspect ratio different from the container aspect ratio of 1.78:1, there will be black matte bars on either the sides, called pillar boxes, or on the top and bottom, called letter boxes.
So, the total video file, including the black bars, is called the container ratio - which is 1.78:1 and 3840x2160.
While the part of the file that isn’t black and displays only the filmed content is called the active picture aspect ratio - which could be for example 1.33:1 and 2880x2160 or 2.39:1 and 3840x1610.
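Here's a small Python sketch of that relationship, using the resolutions quoted above. The rounding is mine, so the exact pixel counts may differ slightly from formal delivery specs:

def active_picture_size(container_w, container_h, picture_aspect):
    """Return the active picture dimensions inside a fixed container.

    A picture wider than the container is letterboxed (bars top and
    bottom); a narrower one is pillarboxed (bars at the sides)."""
    container_aspect = container_w / container_h
    if picture_aspect >= container_aspect:
        return container_w, round(container_w / picture_aspect)  # letterbox
    return round(container_h * picture_aspect), container_h      # pillarbox

# Inside the 3840x2160 (1.78:1) container:
print(active_picture_size(3840, 2160, 2.39))   # (3840, 1607) - letterboxed
print(active_picture_size(3840, 2160, 4 / 3))  # (2880, 2160) - pillarboxed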
Some content, like that which was not filmed on 4K UHD capable cameras, must still be delivered to Netflix in a 1.78:1 container ratio, but at a lower 1920x1080 resolution.
How To Build A Career As A Cinematographer
On a film set with tens or even hundreds of crew members, there is only one head of department role of director of photography. In this competitive environment what are some of the ways that you can increase your chances of landing this job and sustaining a career as a DP?
IT’S NOT AS EASY AS IT SEEMS
Every job is impacted by the nature of supply and demand. Each year film schools pump out a ton of graduates who leave with expectations that they’ll go straight into getting a job as a cinematographer on movies. However, what isn’t always mentioned is that these jobs are in very high demand, with an oversupply of people vying for them.
On a film set with tens or even hundreds of crew members, there is only one head of department role of director of photography. So, in this competitive environment, what are some of the ways that you can increase your chances of landing this job, and sustaining a career as a DP?
STARTED FROM THE BOTTOM
The reason it’s difficult to give generic advice on this topic is that there is no single, straightforward path that will lead you to guaranteed success in the film industry.
When you talk to different DPs you’ll quickly find out that each of them kickstarted their careers in different ways. I know some who began working as a still photographer and then transitioned to shooting moving images, others went to film school and built a network of collaborators there, while some did it the old fashioned way and worked their way up through different crew positions over years or even decades.
That isn’t to say that there aren’t some commonly identifiable paths that are regularly taken by many to build a career as a cinematographer. The most traditional path, that usually also takes the longest, is to build up on set experience by working in different crew roles.
Although there is only one job of director of photography on a set, there are far more roles available in the production, camera, electrical and grip departments. Playing the numbers game by trying to land any crew role that you can is a good way of getting onto a set.
“I always wanted to light and to shoot. I started out working at rental houses, fixing up equipment for various shoots. I was a trainee and then I was out loading, yeah. I launched myself into my career at that end. You know I was a clapper loader for four or five years. I did a period as a focus puller. It was a slow progression but a very rewarding one.” - Ben Davis, Cinematographer
Very rarely do people in the film industry become DPs overnight. More often than not it takes years to build a career.
This trajectory is kind of like a standard career where you work for a company and then progressively get promoted up the ranks over time until you land an executive job.
However, as you'll be a freelance worker, the onus is on you to gather up skills, and then, when you are able to, push to move to a higher up crew position and cease taking on jobs in your previous position. By being on set you’re not only being exposed to the more obvious things like gear, how to build and operate a camera, how to light and how a dolly works.
You’re also learning the nuances of how a film set operates: how people interact, the slang and words for certain things, what crew you want to work with, the structure and conventions of how shots are set up and executed and how to adapt your mind and body to a sometimes stressful work environment.
In this way, starting from the bottom and working up provides a valid platform to kick start a career.
FILM SCHOOL
Although getting a job as a DP often takes a long time to happen, there are always a few cases where people are able to work as a cinematographer at a fairly young age. More often than not this is achieved by film school graduates.
“It’s so much about who you meet and the work that you put out there right up front. And I think I was fortunate enough to go to AFI and there be an amazing group of students that we continued to stay friends and, you know, my classmates would recommend me and I was able to meet some really influential people. You know I’d do a small project, maybe like a fashion film. Stuff like that. And then you start to build my reel.” - Autumn Durald Arkapaw, Cinematographer
A key component of working in the film industry is building up a network of contacts and collaborators who you can work with. For DPs this means forging bonds with directors and producers who will hire them.
Probably the easiest way to quickly build up a network of collaborators is through attending a film school where you’ll all be put in the same class and forced to work together on different projects.
Graduates leave film school with a network of like minded collaborators who may employ them in the future.
The second part to early success in this equation is building a body of work. Shooting something which gains traction online, or which gets accolades at a film festival is the best way of promoting yourself and will encourage new directors to reach out to you.
SHOOTING YOUR WAY UP
“I ended up being a news cameraman through desperation. I was broke, living in England and I did know how to use a camera. In fact the first job that I got was as a news sound man and I pleaded with them and said, ‘You know, I can shoot.’ The Falklands War occurred and they were desperate for cameramen so they gave me a camera and I was given 10 years as a news cameraman to make every possible mistake you can make.” - Sean Bobbitt, Cinematographer
Another way of building up your skillset as a camera person - outside of attending film school or working on an industry set - is to find jobs where you shoot in whatever scenario you can. This may be shooting weddings, no budget music videos, your own documentaries or landing a job in TV.
As much as you can build up theoretical knowledge by studying or watching videos online, the real training ground is actually shooting. There’s a big gap between knowing how to expose and operate a camera and being a director of photography. One of the reasons that it’s rare to find super young DPs shooting high end work is purely because you need lots of time behind the camera to sharpen your skills and build experience.
Although I think those three paths of working as crew on a set, attending film school and shooting smaller films are probably the three most common ways of building a career as a DP, they are of course not mutually exclusive. Most cinematographers will do some if not all of these things in the early stages of their careers.
PATH TO SUCCESS
With those jumping off points out of the way, let’s go over six tips, or areas of advice that I’ve focused on and found useful when building my career - which is still a bit of a work in progress by the way.
One, it’s important to know the kind of lifestyle that working in film comes with and you should prepare to set yourself up for that. Working in the film industry means that you will be employed on a freelance basis. You must be happy with the risk of not having a guaranteed salary. This means that in the beginning you may get called up for a job the night before it happens.
Therefore creating a life for yourself that ensures as much flexibility in your schedule as possible, so that you can take any opportunities that arise, is necessary.
In the beginning this may also mean finding another freelance or flexible source of employment income that will cover your expenses when there is a lull in film work.
Two, it’s important to have a clear goal and sense of direction. Being a cinematographer means being a specialist, not a generalist. Do you mainly want to shoot narrative films, documentaries, or commercial work? Or a step further, do you want to shoot car commercials, beauty commercials or food commercials? Even though you may need to take any job you can get in the beginning, try and focus on identifying and working towards your niche as soon as you can.
Three, time. Perhaps it’s because it’s not in a film school’s interest to sink dreams, but you need to prepare yourself for the fact that to become an industry level DP will take time. This may take less time for some than for others, but nonetheless you should be aware that you are setting a long term career goal, which likely won’t happen in a couple months or even a couple years.
Four, be proactive. Since you’ll be self employed, it’s on you to be proactive about seeking opportunities, reaching out to collaborators or mentors, being open to doing free work, shooting passion projects or finding an agent. You need to put yourself in the right environment for opportunities to present themselves.
Five, you will get hired based on your reputation. It’s pretty obvious, but important, to remember that however you act on set, and whatever work you produce will be what gets you hired on the next job. Film industry circles are usually pretty small, where many people know each other and rely on word of mouth references for crew. If you have a reputation for being good to work with and a reel to back that up, you’ll get more jobs in the future.
Finally, six, always get better. As you build a career you are building three things: your skills and experience, your network of contacts, and your reel. I personally have found documentaries to be a great training ground for improving my speed, awareness of light, framing and a host of other skills. You should constantly be building all three so that when an opportunity does present itself you are in the best possible position to take it.
Shutter Angle In Cinematography Explained
Let's unpack what shutter angle is, explain the differences between shutter angle and shutter speed and then look at some examples from movies that have used the shutter to create interesting visual effects.
INTRODUCTION
One of the key things that makes cinematography different from photography, painting or illustration is, of course, that cinematographic images create an illusion of movement.
How smooth, or jerky this motion appears when captured is dependent on the important camera setting of shutter angle or shutter speed. Let’s use this video to unpack what shutter angle is, explain the differences between shutter angle and shutter speed and then look at some examples from movies that have used the shutter to create interesting visual effects.
WHAT IS SHUTTER ANGLE?
A useful exercise is to think of a filmed clip not as one glob of video, but rather as a series of individual photographs, or frames, that are played back fast enough to create an illusion of motion.
Shutter angle controls how much motion blur there is in a single frame. An angle of 360 degrees will create more of a streaky blur when a character moves, while an angle of 45 degrees will create a sharper, crisper, stuttering image. But what exactly is a shutter angle and how do we arrive at those numbers?
Inside old film cameras, an image is created when light passes through a lens and hits a frame of film. In between the lens and the film there is a rotating circular disk which is called a shutter that determines how much light is let through.
Usually these old film camera shutters could be mechanically opened up to different angles from around 11 degrees to a maximum of 180 degrees. Modern digital cinema cameras however are capable of digitally dialling in specific shutter angles all the way up to 360 degrees.
The amount that this circular shutter is solid and blocks light versus open and lets light in is called the shutter angle.
For example, if the shutter is set to 45 degrees it means that it is open for 45 degrees of the 360 degree circle and closed for 315 degrees. Light will hit and expose each piece of film only when it passes through the small open portion of the 45 degree shutter. This small exposure time means that a minuscule moment is frozen in every frame, which means the image will have less motion blur. Therefore, when multiple frames captured with a 45 degree shutter that all look sharp and crisp are played back it gives a stuttering effect and renders small moving details sharply.
For an opposing, theoretical example let’s imagine we could open the shutter up all the way to 360 degrees. This would mean each frame is exposed for much longer and therefore this shutter angle would let through more light. Because the exposure time is much longer, there is a greater period to capture movement, which means each frame will have more motion blur. Therefore when multiple frames captured with a 360 degree shutter are played back it gives a blurred, lagging feeling to movement and renders small moving details with a hazy smoothness.
Both of these shutter angles are generally considered to give an exaggerated appearance of motion that is unnatural - one that is overly choppy and one that is overly blurry.
The middleground is a 180 degree shutter. This is widely considered to be the default shutter angle setting on a camera as it produces motion which looks most natural to the human eye. The vast majority of movies that you watch will use a 180 degree shutter for most scenes.
Many digital video cameras come with the ability to adjust their shutter, however this isn’t done by changing a physical mechanical shutter like on older film cameras. This adjustment is instead done by changing the amount of time that the digital sensor collects light for each frame which is captured.
So to sum this up, a smaller shutter angle means less light will hit the film plane, and that images will be sharper and appear more choppy and stuttering when played back. While a larger shutter angle means more light will be let through, images will be more blurred and will have a smoother motion when played back.
SHUTTER SPEED VS SHUTTER ANGLE
If you use a broadcast or consumer video camera rather than a cinema camera, the measurement of the shutter will likely be displayed as a shutter speed rather than a shutter angle. Shutter speed is rooted in the same fundamental concepts as shutter angle but with a few small differences.
Instead of being displayed as the angle of the shutter that blocks the light, it is displayed as the amount of time that each individual frame is exposed to light for.
A good rule of thumb to get to an equivalent 180 degree shutter angle is to take the frame rate that you are shooting at, for example 25 frames per second, and double it - so a shutter speed of 1/50th of a second.
This is the reason that tutorial videos talking about camera settings will advise you to use a 50th of a second shutter speed on DSLR or mirrorless cameras.
The issue with using shutter speed, and why it isn't used on high end cinema cameras, is that it requires you to do slightly clunky calculations every time you want to change the frame rate on the camera.
Many cinematographers want to maintain the same feeling of normal motion blur by using a 180 degree shutter angle across the whole project. On cinema cameras you can set it to 180 degrees and leave it there, so that whether you are shooting at 24, 33, or 200 frames per second, you will always have the same shutter angle that produces the same feeling of motion blur. However if you’re using shutter speed and you want to maintain the same 180 degree shutter angle then you need to perform a calculation and change the shutter speed setting every time that you want to shoot at a different frame rate.
If you're shooting at 24 frames per second and want a 180 degree shutter angle, the denominator of the shutter speed must be double the frame rate - so 1/48. If you change to shoot 60 frames per second then you need to change the shutter speed to 1/120. At 200 frames per second your shutter needs to be 1/400th of a second.
Using shutter angle eliminates the need to change the shutter speed when you alter the frame rate on the camera. It does it automatically so to speak.
Another reason that cinematographers may want to use angle rather than speed is that it gives you the ability to dial in a very specific shutter. For example, sometimes DPs shooting at 24 frames per second like to shoot with a shutter angle of 172.8 degrees. This is because in countries that use 50Hz electricity to power lights, a 172.8 degree shutter at 24 frames per second gives an exposure of exactly 1/50th of a second per frame, which lines up with the mains cycle and avoids an undesirable flickering or strobing effect.
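Here's a minimal Python sketch of the conversion in both directions - it's plain arithmetic, nothing specific to any camera:

def exposure_time(frame_rate, shutter_angle):
    """Exposure time per frame, in seconds, for a given shutter angle."""
    return (shutter_angle / 360.0) / frame_rate

def angle_for_speed(frame_rate, speed_seconds):
    """Shutter angle that gives a particular exposure time per frame."""
    return speed_seconds * frame_rate * 360.0

print(1 / exposure_time(24, 180))   # 48.0  -> 1/48th of a second
print(1 / exposure_time(200, 180))  # 400.0 -> 1/400th of a second
print(angle_for_speed(24, 1 / 50))  # 172.8 -> flicker free under 50Hz mains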
CREATIVE USES
Let’s take a look at some different examples of how shutter angle can be used to subtly further storytelling in movies.
The first shows how a narrow shutter angle can be used to introduce a jittery feeling of urgency and panic that recreates the emotional reality of on the ground warfare.
“Saving Private Ryan was never meant to be thrilling. It was never meant to be an adventure. It was meant to be a recreation, a kind of a documentation, of the closest I could get to the experiences of those who fought there. All the research that we did really brought that scene to life because it was ugly. Because it was honest. And it had to be ugly because that’s the way it was.” - Steven Spielberg, Director
Cinematographer Janusz Kamiński decided to shoot most of the famous opening scene in Saving Private Ryan using either a 45 degree or a 90 degree shutter. This, along with the use of a handheld camera gave the footage a crisper shake and imparted an added sense of reality and urgency to the scene.
Using a narrow shutter angle also had a pronounced effect on the explosions that were happening around the characters. If we pause on a frame with an explosion you can see individual chunks of sand flying through the air. Because of the shorter exposure time there is less motion blur so every individual grain of sand stands out as crisp - giving the scene a literal grittiness.
Digital cinema cameras came onto the scene with an added ability that film cameras didn’t have. Being able to shoot with a wide shutter angle past 180 degrees. One of the earliest examples of a filmmaker using this technology reportedly occurred in the shooting of Michael Mann’s Collateral.
Shooting with a wide open shutter angle served two purposes. First and foremost it let more light into the camera. This allowed them to shoot night exterior shots in very low light conditions - which they otherwise wouldn’t have had enough light for.
Visually it also gave the city a more blurry, hazy, glowing feel which made the city itself almost feel like one of the movie’s characters. It’s an interesting example of letting the camera shape the feeling of the environment on screen.
“It was night time and I just cranked the shutter open and went down to 6 frames. If you can’t afford to do something it’s good to find a solution that gives the feeling of something. You have the impression of a battle and a fight rather than saying, ‘Oh, he was there and he went that way and the horse went this way.’ It was the feeling of it. It was the emotion of being in a battle rather than the historical outlay of who and where and when they crossed the line.” - John Mathieson, Cinematographer
John Mathieson pulled from two different techniques - shutter angle and frame rate - when shooting the battle scenes in Gladiator to shape the feeling of motion in the image. On the one hand, he used a choppy 45 degree shutter angle to reveal the sword's movement as it swung around wildly, which increased the feeling of intensity and made the combat feel more aggressive and sharper.
On the other hand he employed a different technique, separate from shutter angle, to increase motion blur and slow time down. He did this by shooting with a slow frame rate of 6FPS that allowed each frame of film to be exposed for much longer and therefore blurred any movement. These 6 frames were then each printed four times using step printing to arrive back at 24 frames per second.
Combining these two different approaches to movement meant that action scenes were shot in a more experimental way that emphasised the emotions and feeling of being in a battle rather than just trying to portray it accurately.
How Movies Are Shot On Digital Cinema Cameras
Let’s break down the workflow of how most movies these days are shot on digital cinema cameras: all the way from choosing and setting up a camera to exporting the final, finished product.
INTRODUCTION
In a prior video I gave an overview of how movies today are shot using film. While it’s good to know, it probably won’t be applicable to the vast majority of movies which are mostly captured, edited and presented using digital technology.
So, let’s break down the workflow of how most movies these days are shot on digital cinema cameras: all the way from choosing and setting up a camera to exporting the final, finished product.
CAMERA SETUP
The digital cinema camera that cinematographers choose to shoot a movie on will likely be influenced by three decisive factors. One, the camera’s ergonomics. Two, the camera’s internal specs. And three, the cost of using the camera - because as much as you may want to shoot on an Alexa 65 with Arri DNAs it may be beyond what the budget allows.
Once you have an idea of what budget range the camera you select must fall into, it's time to think about the remaining two factors.
Ergonomic considerations are important. You need to think about the kind of camera movement you may need and what camera can be built into the necessary form factor to achieve that. If it’s mostly handheld work you may want something that is easy to operate on the shoulder. If you need to do a lot of gimbal or drone shots then a lighter body will be needed.
Also think about what accessories it’ll be paired with. What lens mount does it have? What are the power options? Do you need a compatible follow focus? What video out ports does it have? Does it have internal ND filters? If so, how many stops and in what increments?
These are all questions that will be determined by the kind of project you are shooting.
The second consideration is the internal recording specs that the camera has. What size is the sensor? Do you need to shoot in RAW or ProRes? Does it have a dual ISO? Do you need to shoot at high frame rates? What kind of codec, dynamic range and colour depth does it record? How big are the file sizes?
Once you’ve chosen a camera that best fits the needs of the project it’s time to set it up properly before the shooting begins.
On high end productions this will be done by the DIT or digital imaging technician, under the direction of the DP. At the beginning of every shoot day or at regular intervals the DIT will continue to check that the specs are correctly dialled into the camera.
They will start by setting the codec that the camera records in: such as Arriraw or ProRes. Next, they’ll make sure that the correct sensor coverage is chosen. For example if using anamorphic lenses a fuller, squarer coverage of the sensor may be desired and a de-squeeze factor applied.
They’ll then dial in the resolution required, such as 4K, 4K UHD or 2K. Sometimes this might change during shooting if cinematographers want to capture clips at higher frame rates than their base resolution allows.
Next, they’ll set the base frame rate for the project. Even if the cinematographer decides to change the frame rate during shooting, such as to capture slow motion, the base frame rate will never change. This is the same frame rate that the editor will use when they create their project file.
With the basic settings dialled in, the DP may now either be happy to shoot with a regular Rec709 LUT or they may ask the DIT to upload a custom LUT that they’ve downloaded or created.
Cinema cameras are set to record a flat colour profile in order to maximise how the images can be manipulated in post. However it can be difficult to get an idea of how the final image will look when working with a flat log reference. So, a LUT is added on top of the image - which isn’t recorded onto the footage. This applies a look, like a colour grade, to the image so that cinematographers can better judge their exposure and what the final image will look like.
Finally, frame lines will be added and overlaid over each monitor so that operators can see the frame with the correct aspect ratio that has been chosen for the project.
Now, the camera is ready to go.
SHOOTING
While shooting the DP will usually manipulate the camera's basic settings themself and set exposure. These settings include the EI, white balance, shutter speed, frame rate, internal ND filters and the aperture of the lens.
There are different ways of judging exposure on digital cinema cameras. Most commonly this is done by referring to how the image itself looks on a monitor and occasionally also referring to the camera's built-in exposure tools. On high end cameras the most used exposure tool is false colour, which assigns a colour reading to different parts of the image based on how brightly or dimly exposed they are.
If you see red it means an area is bright and overexposed. Green refers to an evenly exposed 18% middle grey - a good reference for skin tones. While a dark blue or purple indicates underexposure.
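As a toy illustration of how a false colour overlay classifies brightness, here's a small Python sketch. The IRE bands below are my own approximations - each manufacturer uses its own thresholds - so treat the numbers as placeholders:

def false_colour_band(ire):
    # Illustrative bands only; real cameras publish their own values.
    if ire >= 99:
        return "red - clipped highlight"
    if ire >= 97:
        return "yellow - just below clipping"
    if 38 <= ire <= 42:
        return "green - around 18% middle grey"
    if ire <= 2.5:
        return "purple - crushed black"
    return "grey - normally exposed"

print(false_colour_band(100))  # red - clipped highlight
print(false_colour_band(40))   # green - around 18% middle grey
print(false_colour_band(1))    # purple - crushed black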
There are also other exposure tools, often found on broadcast cameras, such as histograms, waveforms or zebras, which cinematographers may also use to assess their exposure. Alternatively, exposure can also be measured by DPs with a light meter, however this is becoming increasingly rare when working with digital cameras.
On bigger jobs with multiple cameras, the DP may also request that the DIT help set the exposure of each camera. For example, I’ve worked on jobs where there are say four to six cameras shooting simultaneously. The loader or assistant for each camera will be hooked up to a radio with a dedicated camera channel.
The DIT will have a station set up in a video village where they get a transmitted feed from all of the cameras to a calibrated monitor with exposure assist tools. While setting up for each shot they will advise each camera over the radio channel whether the assistants need to stop up or down on the lens and by how much so that all the cameras are set to the same exposure level and are evenly balanced.
For example they may say, ‘B-Cam open up by half a stop’. The assistant will then change the aperture from T4 to T2.8 and a half. On other shoots they may even be given a wireless iris control which is synced up to an iris motor on each camera - such as an Arri SXU unit. They can then remotely adjust the stop on each camera while judging exposure on a calibrated monitor.
The DIT, under the direction of the DP, may also change to different LUTs for different scenarios. For example, if they are shooting day for night, a specific LUT needs to be applied to get the correct effect.
DATA MANAGEMENT
Once the big red button has been pushed and some takes have been recorded digitally onto a card inside the camera, it’s time to transfer that footage from the card or capture device to a hard drive. This secures the footage and organises it so that editors can work with it in post production.
This is done either by the DIT or by a data wrangler using transfer software. The industry standard is Silverstack which allows you to offload the contents of a card onto hard drives, backup, manage and structure how the files are organised - usually by shoot day and the card name.
The standard rule is to always keep three copies of the footage - one master copy and two backups. This is to ensure that even if one of the drives fails or is somehow lost or stolen that there are still copies of the precious footage in other locations. Even though data storage can be expensive, it's almost always cheaper than having to reshoot any lost footage.
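Software like Silverstack automates this, but as a hedged sketch of the underlying idea - copy each clip to every destination drive and verify each copy against a checksum of the source - something like the following Python would do. The paths in the commented example are hypothetical.

import hashlib
import shutil
from pathlib import Path

def checksum(path):
    """Return the hex digest of a file, read in 1 MB chunks."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1024 * 1024), b""):
            h.update(chunk)
    return h.hexdigest()

def offload_clip(source, destinations):
    """Copy one clip to every destination folder and verify each copy."""
    source_sum = checksum(source)
    for dest_dir in destinations:
        dest_dir.mkdir(parents=True, exist_ok=True)
        copy = dest_dir / source.name
        shutil.copy2(source, copy)
        if checksum(copy) != source_sum:
            raise IOError(f"Checksum mismatch on {copy}")

# Hypothetical layout: one master drive and two backups.
# offload_clip(Path("/Volumes/A001/CLIP_0001.mov"),
#              [Path("/Volumes/Master/Day01/A001"),
#               Path("/Volumes/Backup1/Day01/A001"),
#               Path("/Volumes/Backup2/Day01/A001")])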
Once this footage has been secured on the hard drives the card can be handed back to the loader who can format it in camera. The footage is carefully viewed by the DIT to make sure that the focus is sharp and there aren’t any unwanted artefacts, aberrations, pulsing light effects or dead pixels - which may require the camera to be switched out.
The next job of the DIT or data wrangler is to prepare the drive for the editor. Because footage from high end digital cinema cameras comes in huge file sizes and is very intensive for computers to work with, smaller file size versions of each clip need to be created for the editor to work with so that there is no playback lag when they are editing. These files are called proxies or transcodes.
This is usually done in software like DaVinci Resolve. The DIT will take the raw footage, such as log 4K ProRes 4444 files, apply whatever LUT was used during shooting on top of that log footage, then process and output a far smaller video file, such as a 1080p ProRes Proxy clip. Importantly, these proxies should be created with the same clip names as the larger original files. This will be important later.
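On real jobs this transcode is done inside Resolve or similar software, but as a hedged illustration of the workflow, here is a rough Python sketch that shells out to ffmpeg (assumed to be installed) to bake in a viewing LUT, downscale to a 1080p ProRes Proxy and, crucially, keep the original clip names. All paths and the LUT filename are hypothetical.

```python
# A rough sketch of proxy creation outside of Resolve, shelling out to ffmpeg.
# The crucial detail is that each proxy keeps exactly the same clip name as
# the original - only the folder differs. Paths and LUT name are hypothetical.
import subprocess
from pathlib import Path

def make_proxies(originals: Path, proxies: Path, lut: Path) -> None:
    proxies.mkdir(parents=True, exist_ok=True)
    for clip in sorted(originals.glob("*.mov")):
        subprocess.run([
            "ffmpeg", "-i", str(clip),
            "-vf", f"lut3d={lut},scale=1920:1080",   # bake in the viewing LUT, downscale to HD
            "-c:v", "prores_ks", "-profile:v", "0",  # ProRes 422 Proxy
            "-c:a", "copy",
            str(proxies / clip.name),                # same clip name as the original
        ], check=True)

# make_proxies(Path("/Volumes/MASTER/DAY_01/A001"),
#              Path("/Volumes/MASTER/DAY_01/PROXIES/A001"),
#              Path("show_lut.cube"))
```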
Data wranglers may also be tasked with doing things like creating a project file and syncing up sound to video clips to help speed up the edit.
POST PRODUCTION
Once shooting wraps the precious hard drive which includes the raw footage as well as the proxies will be sent over to the editor. There are two stages to the post production process: the offline edit and the online edit.
The first stage, offline, refers to the process of cutting the film together using the smaller, low res transcoded proxy files with video editing software, such as Avid or Adobe Premiere Pro. This will be done by the editor and director.
They will usually go through multiple cuts of the movie, getting feedback and adjusting things along the way, until they arrive at a final cut of the film. This is called a locked cut or a picture lock - meaning that all the footage on the timeline is locked in place and will no longer be subject to any further changes.
Having a locked final cut marks the end of the offline edit and the start of the online. Online is the process of re-linking the original, high res, raw footage that came from the camera.
To do this the offline editor will export a sort of digital ledger of every cut that has been made on the timeline - in the form of a translation file such as an EDL or XML.
This file is used to swap out the low res proxies in the final cut timeline with the high res log or RAW footage that was originally on the camera card. This is why it is important that the proxies and the original files have the same file names so that they can easily be swapped out at this stage without any hiccups.
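Conceptually, the relink is little more than a filename lookup. Here is a toy sketch of that idea - the clip name and paths are made up, and real conform tools also rely on timecode and reel metadata rather than filenames alone.

```python
# A toy illustration of why identical clip names matter at the online stage:
# conforming is essentially a lookup from the clip name referenced in the
# EDL/XML to the full resolution file on disk. Paths and names are made up.
from pathlib import Path

def build_index(originals_root: Path) -> dict:
    """Index every original camera file by its clip name."""
    return {f.name: f for f in originals_root.rglob("*.mov")}

def relink(index: dict, clip_name: str) -> Path:
    """Swap a proxy reference from the locked timeline for the original file."""
    try:
        return index[clip_name]
    except KeyError:
        raise FileNotFoundError(f"No original found for {clip_name}") from None

# index = build_index(Path("/Volumes/MASTER"))
# relink(index, "A001_C012_0704XJ.mov")  # returns the path to the full-res clip
```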
The original files can now go through a colour grade and any VFX work needed can be performed on them. The colourist will use any visual references, the shooting LUT, or perform a colour space transform to do basic colour correction to balance out the footage. They’ll then apply the desired grade or look to the original footage, all the while getting feedback from the director and cinematographer.
The main industry standard packages for colour grading are Baselight and DaVinci Resolve. Once the grade is complete, the final, fully graded clips are exported. They can then be re-linked back to the timeline in the original video editing software. A similar process happens with the final sound mix, which is dropped in to replace the raw sound clips on the editing timeline, so that there is a final video file and a final audio file on the timeline ready for export.
The deliverables can then be created. This may be in the form of a digital video file or a DCP that can be distributed and played back on a cinema projector.
Cinematography Style: Wally Pfister
In this edition of cinematography style let’s unpack Wally Pfister’s career as a director of photography.
INTRODUCTION
How is it that Wally Pfister went from shooting quote unquote ‘erotic thrillers’ to being the cinematographer responsible for bringing some of the biggest, most iconic blockbusters of the 2000s to the big screen for director Christopher Nolan?
In this edition of cinematography style let’s unpack Wally Pfister’s career by going over some of his philosophy, thoughts and ideas about filmmaking and then looking at some examples of the film gear he used to execute those ideas.
BACKGROUND
After graduating from high school, with a keen interest in filmmaking, Pfister started at the bottom as a production assistant at a television station. He was allowed to borrow their CP-16 16mm film news cameras on weekends to shoot visual essays. After showing them to the production manager he was brought on to shoot low budget PSAs. This later developed into a full time career as a news cameraman who also shot TV documentaries for PBS.
After an encounter with Robert Altman on a series he directed, he made the decision to move to LA and enrol at the AFI to study cinematography. Some of his AFI classmates included Janusz Kamiński and Phedon Papamichael who he sometimes worked with as a grip or electrician.
“When you come out to Hollywood, I came out here to be a cinematographer, to shoot movies. You end up shooting a lot of different things. MTV shows. And I did Animal Instinct. I did all of these erotic thrillers that were not pornography but kind of R-rated. You know, they called them erotic thrillers. Honestly that’s where I cut my teeth and learned how to light. And learned how to light fast. We shot movies in 15 days. You know, if you had a big budget one you shot it in 18 days.”
After working on a string of B-movies, he started to gain traction shooting films such as The Hi-Line which went to Sundance and received festival acclaim. At a festival he met a director called Christopher Nolan who was looking to shoot his second feature Memento.
This was the start of a collaborative relationship which would last many years.
PHILOSOPHY
A core tenet of his philosophy comes from the idea that the job of the cinematographer is at the intersection between being a technician and being an artist.
“What you have to understand is that cinematography is one part artistry and one part craftsmanship.”
Much of the early development of his skills as a craftsman happened through working on R-rated movies, where being able to light very quickly and well became an important skill to have. These practical skills are then intermingled with creative thinking about how to best visually represent each story on screen.
This process of thinking about the story begins in pre-production, from the moment he’s first presented with a screenplay.
“My first read of the script is really just as a viewer and reacting to it as an audience member. And then the subsequent passes are just breaking it down, coming up with questions really. My process with Chris is that I just make pages and pages and pages of questions and I walk into his office and say, you know, ‘What was your thought about doing this? What do you think about doing this?’ And then really it’s easier for me to start thinking creatively after all of those sort of those logistical…and once I’m in Chris’ head and I understand how he sees it on the screen.”
This process of collaborating with the director carries over to all heads of departments, from the production designer to the editor. Pfister likens cinematographers to generals who report to the first in command.
Good directors will surround themselves with high calibre collaborators whose job it is to present ideas, suggestions, questions and occasional pushback to directors - who then decide whether to accept, reject or use those suggestions to inspire a new creative idea.
A cinematographic concept that he has carried into various films, especially movies such as Inception with its various, simultaneous plot threads, is the idea of providing a unique visual look for each scene.
“One of the sort of challenges creatively was to break this into different looks. And particularly later on when the film is intercutting between the ski slopes of Calgary and, you know, a tumbling hallway and a van on a rainy street in Los Angeles it’s important to distinguish looks between those three elements.”
Not only do these different looks and locations provide a sense of global scale and greater depth to the world of the movie, they also serve as a way of distinctly differentiating between plot lines, dream layers or characters. This more clearly solidifies the story in the minds of the audience and minimises any potential visual confusion between the different plots.
A final concept that both he and Christopher Nolan are drawn to is to try and shoot action scenes as much in camera as possible. This comes from pushing to find practical solutions to difficult shots rather than constructing those shots purely with visual effects work.
Of course certain lines in a script, such as a Parisian street folding in on itself, require an integration of visual effects and live action photography. For these sequences, again, communication with collaborators, such as the visual effects supervisor, is crucial.
“There are not that many sequences that are pure CG like that. I just remember shooting the action bits and then shooting the emotional bits and then of course the main action set pieces have almost no visual effects in them. All the way back to Batman Begins I’ve been involved in the visual effects stuff in that they send me still photographs and comps and I help guide them in terms of the lighting and the colour to match our original photography.”
GEAR
“The important thing is that we’re not treated like technicians, we’re not treated as anything else but the visual storytellers of the film. And in that people should not be dictating what kind of equipment we use to capture our films. We should be the ones, with our expertise, that are able to choose the tools that we want to choose.”
When looking at his films, particularly his large budget work with Christopher Nolan, they maintain quite a similar overall look, while the visual nuances in the storytelling, like framing, lighting, lens selection and camera movement change based on each story's requirements.
I’d describe his look on Nolan films as being bound by three things. One, being shot largely on 35mm film. Two, utilising a combination of the anamorphic film format and the taller, higher resolution Imax large format. And, three, lighting and exposing the negative with a rich level of contrast, colour and saturation.
Pfister shot all of his high end work exclusively on film, even his later movies which were shot after Arri had released their digital Alexa camera. This choice was primarily down to the look of film, but also took into account the way in which film is worked with on set.
“Chris and I are real film guys. The fact that 35 is closer to 8K and there’s no digital 8K camera but it’s also the physical depth of the layers on film that give it the contrast that give it the colour saturation. And, it’s the simplicity of it for Chris and I. All we really have to plug into that film camera is a battery and we have to load a magazine of film on there.”
Most of his high end work was done in the anamorphic format with Panavision anamorphics, such as the Primos, E- and C-Series, and with 35mm Panaflex cameras like the Millennium XL and the Platinum.
In his early work with Nolan on Memento and Insomnia they almost exclusively used the longer 75mm anamorphic focal length lenses. However, for his later films that included more blockbuster action that needed to be captured in wider shots he preferred to shoot on wider 35, 40 and 50mm anamorphic focal lengths with the camera physically closer to the characters and action.
Even if they’re shooting with a single camera at a time, Pfister often likes to have multiple cameras on standby built in different configurations - which saves on the time of building cameras. For example, on Batman Begins they carried one Panaflex Platinum in a studio build and two Panaflex Millennium XLs - one for Steadicam and one for handheld.
He’s also used specialist cameras like an Arriflex 435 or a Photo-Sonics 4C to shoot slow motion film at higher frame rates.
From The Dark Knight onwards this 35mm anamorphic look was mixed with Imax for certain set pieces. This granted them a higher resolution and a larger than life format.
“I take credit for making it happen but it was actually the brainchild of Chris Nolan to shoot on Imax. I put a lot of effort early on in pre-production on The Dark Knight to figuring out what it was all about and what it meant to shoot with Imax cameras.”
This included finding technical ways to move the huge camera, such as with an enormous Steadicam Imax rig. More creatively it also meant thinking about framing. Unusually, they made the decision to cut between footage shot with the different native aspect ratios of the anamorphic and the much taller Imax format.
Pfister found that as long as he kept the focus of the framing in the Imax shots closer to the centre, giving more headroom to characters, cuts between the two formats transitioned more smoothly because the eyes of the audience stayed in the same place.
The third part of Pfister and Nolan’s look comes from a rich looking negative. One part of this is down to how Pfister likes to expose the film. He regularly rated his film stocks at a lower ISO and then did what is called ‘printing down’ or ‘timing down’.
For example, he would take a 500T film stock from Kodak that was recommended to be shot at 500 ISO and instead rate it on his light meter and shoot it at 400 ISO. The film would then be developed normally. This meant that he overexposed the film slightly and then, in the DI or colour grade, brought the exposure down by the same amount to produce a richer, deeper looking negative with nice contrast and saturation.
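To put a rough number on that technique, the amount of overexposure gained by rating a stock below its box speed is just a base-two logarithm of the ratio between the two speeds. A quick sketch:

```python
# How much overexposure does "rating down" give? The difference in stops
# between two exposure indexes is log2(box_speed / rated_speed).
import math

def overexposure_in_stops(box_speed: float, rated_speed: float) -> float:
    return math.log2(box_speed / rated_speed)

print(round(overexposure_in_stops(500, 400), 2))  # ~0.32, roughly a third of a stop
print(round(overexposure_in_stops(500, 250), 2))  # rating at 250 would be a full stop
```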
Another reason for the contrast in his images of course comes from his lighting. He often positions his lights so that his key light on his character comes from the side. Side lighting a character produces shadow on one side of the face, while the other side is exposed brighter.
He also often hits characters with strong sources of backlight so that a highlight around the back of their hair forms that separates them from a darker background. He regularly combines these techniques of side light and back light to produce a negative with strong contrast.
CONCLUSION
Pfister’s career developed slowly, from learning the ropes and how to light quickly on B-movies to eventually shooting some of the biggest films in the world. His style is informed by prioritising collaboration, maintaining a core Hollywood look that stays flexible depending on the story, and shooting visceral, in camera action sequences.
All of this, while operating at the intersection between technology and creativity.
The 1959 Black & White Film That Hollywood Still Shoots Movies With: Eastman Double-X
Let's take a deeper look at Eastman Kodak’s Double-X black and white negative motion picture film.
INTRODUCTION
Usually in the world of film gear things tend to move pretty quickly. Every year there’s a new camera, lens set or piece of lighting kit which is released that DPs flock to try out. Movie technology always advances forward.
That’s why it’s strange that there’s one piece of filmmaking tech which has remained completely unchanged since the late 1950s and still continues to be used on productions to this day. That technology is Kodak, formerly Eastman Kodak’s Double-X 5222 black and white negative film stock.
Let’s take a deeper look at this film stock by unpacking what black and white film does, looking at two different types of black and white film, identifying Double-X’s characteristics and in doing so look at some examples of modern movies shot on the stock.
HOW IS BLACK & WHITE DIFFERENT FROM COLOUR NEGATIVE
“Black and white negative just looks different than colour negative too. It just has its own physical presence. It’s chunks of silver, you know, embedded in an emulsion.” - Jarin Blaschke, Cinematographer
Film is created by taking a cellulose triacetate base - basically plastic - and adding layers of chemicals to it so that when it is exposed to light and then developed it produces an image.
Colour negative film is coated in at least three different colour sensitive layers - sensitive mainly to red, green and blue light. Combined, these three layers represent a full spectrum of colour. When this film is developed and processed it creates a negative, where the inverse of each colour is recorded. This can later be scanned and turned back into a positive.
Black and white film on the other hand usually contains a single layer of silver halide crystals. Rather than recording separate colour channels, it responds to luminance only - so the negative renders dark shadows as white and bright highlights as black. Like colour negative film, this is then scanned and inverted to produce an image in black, white and shades of grey for the exposure values in between.
This means that cinematographers don’t need to worry about the colour temperatures of lights when shooting black and white film and can instead focus purely on contrast - how bright or dark each area of the image is.
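As a tiny illustration of the scan-and-invert step described above, here is a toy example with made-up 8-bit pixel values:

```python
# A minimal sketch of the "scan then invert" idea for black and white negative:
# each pixel value of an 8-bit greyscale scan is flipped around the maximum
# value, so areas that read bright on the negative (the scene's shadows)
# become dark in the positive, and vice versa.
import numpy as np

negative_scan = np.array([[10, 128, 245],
                          [60, 200, 255]], dtype=np.uint8)  # toy 8-bit scan values

positive = 255 - negative_scan
print(positive)
# [[245 127  10]
#  [195  55   0]]
```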
Black and white film also requires a different development process to colour negative or colour reversal film.
PANCHROMATIC VS ORTHOCHROMATIC FILM
When black and white film was initially developed it was only sensitive to blue and green light rays on the colour spectrum. This is called orthochromatic film.
Because orthochromatic film wasn’t sensitive to red tones in an image, and there tends to be a lot of red in skin tones, this meant that skin tones captured on this film appeared darker and that blemishes, shadows and lines on faces were accentuated.
On the other hand, orthochromatic film is much more sensitive to areas of the image with lots of blue, such as skies - which is why skies would read very bright and overexposed.
A great example of orthochromatic film can be seen in the photographic portraits taken by Yousuf Karsh. His subjects were often photographed on orthochromatic film that made their skin appear darkened, textured and more weathered.
Eastman Double-X is different. It was created in 1959 as a panchromatic film. This means that instead of only capturing blue and green tones, it responds to all colours depending on their level of brightness. This allowed redder skin tones to be captured and meant that faces were less harshly represented.
Since the introduction of panchromatic film, black and white orthochromatic stocks ceased to be widely manufactured.
Interestingly, on The Lighthouse, cinematographer Jarin Blaschke wanted to recreate the weathered, old look of orthochromatic film, even though panchromatic film was the only choice available, by shooting with a custom filter.
“We came up with a filter also to emphasise texture. It’s a cyan filter that Schneider made for us. It replicates very early emulsions that were insensitive to red. It tends to heighten local contrast too, especially on skin tones which are mostly red of course. So any little variation of red is further amplified. So any blemishes, pores, you know, if you’re hungover and your cheeks are red, you know it all gets amplified.” - Jarin Blaschke, Cinematographer
EASTMAN DOUBLE-X CHARACTERISTICS
“I think this stock was created in the end of 50s early 60s so this is the same stock they shot those movies on. If we’re paying kind of an homage to a certain 60s Hollywood style. So using this exact stock was a good decision I guess.” - Marcell Rév, Cinematographer
Eastman Double-X comes in two different gauges: 16mm and 35mm. The 16mm version is called 7222 and can be purchased in 100 or 400 foot rolls. The 35mm variant, 5222, comes in 400 or 1000 foot rolls.
The stock has a very specific look. It contains large amounts of thick, dancing grain which is especially noticeable in the grey mid tones and makes footage look unapologetically old fashioned.
5222 produces different pockets of contrast. So skin tones, which may be rendered in a similar contrast range by other stocks or cameras, contain more difference between shadows and highlights on Double-X. This emphasises textures and makes images feel slightly more dimensional and less flat.
Because it’s so old, 5222 doesn’t have a big dynamic range. It has a steep gamma curve which means it reacts a bit more like slide film even though it's a negative film. This limited latitude means that cinematographers need to be careful when exposing it. Overexposure will quickly blow out the highlights to white, while underexposure will muddy up the blacks so that they aren’t pure and crisp.
It also has relatively low sharpness compared to more modern stocks at 100 lines per mm in perfect conditions. However, that slight softness can actually be nice for capturing people and skin tones as it provides a more gentle roll off that subtly smooths out how skin is rendered.
Kodak quotes two different exposure indexes for different lighting conditions - like an ISO rating on a digital camera.
If you’re shooting the film under tungsten light sources then it is recommended to expose at 200 EI. If instead the shot is lit by a source with a daylight colour temperature then it can be shot at 250 EI.
The reason for this difference can be found in the film’s spectral sensitivity curve. Basically, this shows that the film is more sensitive to light at cooler colour temperatures - such as HMIs or daylight - and less sensitive to light at warmer colour temperatures, such as tungsten sources. Hence the different ratings.
Back when this stock was first released it was considered a highly sensitive stock for low light conditions - since it came out in an era when shooting on 50 ASA stocks was the norm.
However, in the modern era, where shooting with an EI of at least 800 on digital cinema cameras has become the norm, this is no longer the case.
“We were shooting on film. The Double-X black and white stock so we were around 250. I underexposed it a little bit just to protect the highlights because they can blow out pretty easily. I would lie if I’d say I wasn’t scared of the low sensitivity. And it goes to Fotokem to develop and then you won’t see it for two days.” - Marcell Rév, Cinematographer
One disadvantage of shooting on such a niche film stock is that there are relatively few film laboratories that can process this film in motion picture quantities. It is still possible but shooting in North America requires shipping the stock to a lab like Fotokem where it will be processed, scanned and then delivered as rushes, usually in a digital file, a few days later.
Another disadvantage is that shooting night scenes on a stock rated at 200 means that large light sources are required to reach exposure. These scenes will therefore tend to have a more old world feeling to the lighting, with actors illuminated by strong, hard sources rather than the soft, dark, ambient lighting style that can be more easily achieved with digital cameras.
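To get a ballpark feel for why a 200-speed stock demands big sources, the textbook incident-light exposure equation can be rearranged to give the illuminance needed on the subject. The meter constant used below (around 250 for readings in lux) varies between meter manufacturers, so treat these numbers as rough assumptions rather than lighting advice.

```python
# A rough feel for the light levels a 200-speed stock demands, using the
# incident-light exposure equation E = C * N^2 / (t * S). The constant C
# (~250 for lux readings) is an assumption; real meters vary.
def required_lux(t_stop: float, shutter_seconds: float, exposure_index: float,
                 meter_constant: float = 250.0) -> float:
    return meter_constant * t_stop**2 / (shutter_seconds * exposure_index)

shutter = 1 / 48  # 24 fps with a 180 degree shutter
print(round(required_lux(2.8, shutter, 200)))  # ~470 lux on the subject for Double-X at T2.8
print(round(required_lux(2.8, shutter, 800)))  # ~118 lux for a digital camera rated at EI 800
```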
“This is more limited. You have a certain kind of contrast built into the stock that you have to embrace. I think if that clicks with the project then it can really benefit from it.” - Marcell Rév, Cinematographer
So, while it may be easier to shoot digitally and try to arrive at a vintage looking black and white image in post, for productions with the right budget and access, shooting on a 60 plus year old film stock is still the best way to get a textured, grainy, contrast laden, classically Hollywood look that is very difficult to accurately recreate otherwise.
A Cinematographer's Guide To Pre-Production
Going over the nine different steps that cinematographers may take when they prepare to shoot a film.
INTRODUCTION
Early on, when I first started working on film sets, I used to think that the job of the cinematographer was the cushiest job around. After all, they have a huge technical team that carries all the heavy gear into remote locations, a gaffer and electrical crew who set up all the lights, someone who’ll bring them a cup of coffee in the blink of an eye, all while they sit on a dolly and play with the camera while a grip pushes it.
The reality is that there is an important reason why some DPs, to the untrained eye, do seemingly little work during production. It’s probably because they’ve done a large amount of the groundwork prior to stepping on to set.
For many cinematographers and HODs on high end jobs their workload is kind of like an iceberg: the little bit at the top that you see is the work done on set, while the massive chunk below the surface happens during prep.
In this video I’ll try to create a basic guide for pre-production by listing all the respective tasks that are usually performed by cinematographers prior to shooting.
Of course, since filmmaking can vary all the way from massive multi-month shoots to single day, single crew member corporate documentaries, this process will never be fixed.
Prep will always be flexible depending on the nature of the project, the working style of others, the budget and many other factors. Nonetheless here’s a basic template for prepping for a film.
1 - PROJECT INTRODUCTION
Getting a job in the first place won’t happen without having some contacts in the industry. Cinematographers usually receive an introduction to each project through directors or producers.
This may involve a meeting where the project is pitched and they have a conversation about it. Or it may start by being sent a copy of a treatment or script. During this early phase when ideas for the project may still be in their infancy, probably the best thing you can do is to focus on building a relationship with the director themself.
The more time you spend together chatting the better you’ll be able to understand their perspective, how they work, references they may like and start to develop an idea of the visual picture that they have in their head.
These early conversations lead into another task.
2 - RESEARCH
Researching the topic of the film is a nice way of familiarising yourself with the subject matter.
What you’re doing throughout this process is slowly building up a better understanding of the story and the world it takes place in, which will inform how the visual language is crafted and, later, how the film will be shot.
This may include noting questions about characters in the screenplay, looking at the architecture of the locations in the story world, familiarising yourself with a historical context for a documentary topic, or researching vintage lighting technology for a period film. What this research covers will be entirely dependent on the context of the film.
3 - VISUAL PLAN
Now that you know what the project is and are better informed on the subject matter, it’s time to put together any ideas in the form of a visual plan. There are a few different ways to do this. One idea is to draw from films as references that hit the kind of tone that the director is after, or address a similar subject matter.
If you want to hit a polished, dark, satirical comedic tone then watch some films by Ruben Östlund. If your story’s world is parched, dry, barren and bleak then you might find something in the language of There Will Be Blood that works.
Other forms of reference such as music, photographs or paintings may also be used. What you're trying to do when coming up with visual ideas is not to directly copy a reference but rather to use them to inspire the kind of tone that is suitable for your project, when thinking about things like shot sizes, framing, camera movement and lighting.
I find it useful to compile a moodboard of images - which can also be used to establish the kind of colour palette that you’re after.
4 - LOCATION RECCE
Having a visual plan will inform the next step of prep, which is location scouting. Sometimes the director and production team will have already locked down shooting locations, but other times cinematographers may be able to provide their thoughts and input on which location best suits the production’s needs.
This can be done by visiting locations in person, by browsing location stills or even by looking on Google Maps.
The crew will then do a recce of the chosen shooting locations. Usually the HODs, like the producer, 1st AD, the director, the cinematographer, the gaffer, the grip, the unit manager and the production designer will attend this scout so that they can make the necessary preparations for the shoot day.
For example, the grip may have to make measurements at the location so that they have the right sized rigging tools, or have enough space to fit large equipment through tight spaces. The unit manager will determine the logistical access to the space and how many vehicles will be able to park nearby. The production designer will have a look at what decor is already inside a location and then determine what art elements they’ll need to bring in.
As well as being practical in nature, a recce also provides a creative opportunity for the cinematographer and director to run through how they imagine the scene in the space and start shaping shots and lighting ideas.
It’s always a good idea to keep records of this process. Whether that’s in the form of written notes, photographs, or by finding frames in the space using a viewfinder app such as Artemis.
5 - SHOT LIST
Using the ideas for shots that were discussed during the recce, the director and cinematographer may now sit down and start compiling a shot list. Usually this is done with a spreadsheet and can include a written description of the shots, drawings, or use location photos or viewfinder images that were taken during the recce.
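For what it’s worth, a shot list really is just structured rows of data. Here is a trivial sketch that writes a couple of hypothetical entries to a CSV that can be opened as a spreadsheet - the column names are just one possible convention, not an industry standard.

```python
# A very simple sketch of a shot list as spreadsheet rows. The columns and
# entries are hypothetical examples, not a fixed format.
import csv

shots = [
    {"Scene": "12", "Shot": "A", "Size": "Wide", "Movement": "Dolly in",
     "Lens": "32mm", "Description": "Establish the kitchen, characters enter"},
    {"Scene": "12", "Shot": "B", "Size": "Close up", "Movement": "Static",
     "Lens": "75mm", "Description": "Reaction as the phone rings"},
]

with open("shot_list.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=list(shots[0].keys()))
    writer.writeheader()
    writer.writerows(shots)
```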
For higher budget projects and commercials an artist may also be employed to create storyboards for the shoot or specific scenes. On even higher budget shoots which involve lots of visual effects work they may also create a previs, which is a rough, animated cut that displays all the shots that are required.
6 - LIGHTING PLAN
This step isn’t a must, but I find that creating a lighting plan - especially for more complicated setups - is a great way to determine what lighting gear needs to be rented, what access to power there is, how long the setup will take to achieve, and to think more carefully about the light that you want.
Sometimes a ‘light study’ may be necessary - where you visit the shooting location at different times during the day to see what the ambient light is doing and whether or not you’ll need to control it.
A lighting diagram can of course be hand drawn or more precisely done using software like Photoshop. It’s also great to have this document to send to the gaffer to give them a clearer idea of your vision and so that they can potentially use it as a guide to pre-light locations before you arrive.
7 - GEAR LIST
By now, you should have enough technical information from the scout, shot list and lighting diagram to determine what gear will need to be rented. Some cinematographers also like to do a day of camera tests for long form projects to determine a lens, camera or sometimes film stock combination that they are happy with.
Usually I break my gear lists down into separate department sections for camera, grips and lighting. Unless you’re working at a higher budget level, there may be a bit of negotiation between the gear rental company and the production team.
Usually I like to first send through a complete gear list which includes everything I’d like to make the shoot happen. Then, if we get a quote from the rental house that is over the allocated gear budget, I will start to provide alternative, cheaper gear options - such as a less expensive zoom lens instead of a set of cinema primes - or start losing gear on the list until we reach a point where the production’s budget and the rental house quote meet.
When you create a gear list it’s also important to consider the post production workflow. Shooting in a higher resolution, or in RAW versus log versus a baked in Rec709 look, will affect whether a DIT, colourist, transcoding, or more expensive hard drives will be required - all of which affects the budget.
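As a back-of-the-envelope example of how that decision ripples through the budget, here is a rough storage estimate. The data rates are loose assumptions - check the camera manufacturer’s published figures for a real job.

```python
# A back-of-the-envelope storage estimate showing how the recording format
# affects drive costs. Data rates and shoot length are rough assumptions.
def storage_tb(data_rate_mbps: float, hours_of_footage: float, copies: int = 3) -> float:
    """Total storage in terabytes for all copies of the footage."""
    gigabytes = data_rate_mbps * 3600 * hours_of_footage / 8 / 1000
    return copies * gigabytes / 1000

hours = 2 * 5  # ~2 hours of recorded material per day over a 5 day shoot
print(round(storage_tb(1100, hours), 2))  # 4K ProRes 4444 at roughly 1100 Mb/s -> ~14.9 TB
print(round(storage_tb(100, hours), 2))   # a baked-in Rec709 HD codec at ~100 Mb/s -> ~1.4 TB
```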
8 - CREW LIST
Now that you have the gear you’ll need crew to run it. Cinematographers will usually either pass on the names of crew they have worked with before to production, hold interview sessions with prospective crew HODs, or go with the recommendations of a crew agent or the production team.
Like with gear, how many crew you are able to hire will be determined largely by the production’s budget but also by the working style that you are after and, of course, the technical needs of the project.
For example, even if you have a large budget for an intimate documentary shoot, you may still only want one director, one cinematographer and one sound person to be on set.
9 - TECHNICAL COORDINATION
The final step involves coordinating all the technical details so that the shoot goes as smoothly as possible. This includes creating a schedule using the shot list.
Usually this is done by the 1st assistant director, however cinematographers may need to provide guidance or feedback on the schedule regarding shooting times, pre-lighting, rigging and what crew will be required where.
For example, some exterior dusk scenes may need to be scheduled so that the wide is shot during the best after sunset light and any close up insert shots are saved for last where they can be lit with artificial light sources once the ambient light has died.
CONCLUSION
Generally, prep for a film starts by being situated in a more creative, thoughtful space - where you can dream up the vision you would like. Throughout the planning process it’ll evolve into being more technical, practical and more about how the shoot will be executed.
By the time you arrive on set, you should already have a good understanding of the world of the story, have formulated a broad visual approach, have all the gear that you need, the crew to run it, and have scheduled things to run as smoothly as possible.
So if you’re on a shoot that is all going according to plan and the DP is watching his crew work, giving bits of feedback all while casually drinking a cup of coffee - it might be because most of the iceberg of prep has already been done.