Gray Kotze

How LUTs Can Elevate Your Cinematography

Let's explain the colour workflow process, what LUTs are, and how they can be used to improve the look of your footage.

INTRODUCTION

If you’ve ever shot something in log and accessed that raw footage straight from the card, you’ll know that it looks extremely flat and visually unappealing. But don’t panic. 

This is because that log footage needs to be processed through a specific colour workflow in order to elevate how it looks. Part of this workflow involves using what is called a LUT. 

If you’ve dived at all into the YouTube rabbit hole of LUTs, you may have been left a little confused, with the added expectation that I will start trying to sell you my special LUT pack straight after the intro… Don’t worry, I won’t.

Instead, I’ll try to provide an overview to clearly explain the colour workflow process, what LUTs are, and how they can be used to improve the look of your footage.

WHAT IS A LUT?

The reason that cinematographers choose to shoot in RAW or with a flat colour profile is that it offers the most colour flexibility in post production, with the widest dynamic range.

Shooting with a colour look already applied, or baked into, the footage that comes out of the camera leaves minimal room for colour corrections to be made or for a different look to be applied to the footage later.

Shooting in a flat colour profile, on the other hand, means that you can later alter the colour and exposure to a much greater degree, manipulate the image, and easily make colour corrections, like changing the white balance, without the footage falling apart.

This is all well and good, but what does this have to do with LUTs and what are they?

LUT stands for ‘lookup table’ and is a way of adjusting the colour and tones in an image. The way I like to imagine a LUT is in terms of layers.

At the bottom layer we have the raw footage that is recorded by the camera. When you download the footage from the card onto a hard drive this is what you will get. As we mentioned, when working with cinema cameras, this is usually recorded in a flat, desaturated colour profile.

A LUT is an extra layer that can be applied on top of the bottom layer. This LUT transforms each pixel’s colour value to give the footage a new look. Different LUTs can be created that output different looks: a more standard, natural look, a warm, romantic look, or a look that tries to emulate a film stock.
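
To make the layer idea concrete, here is a minimal Python sketch of what a lookup table actually does: each input value is looked up in a table and swapped for the stored output value. The five-entry 1D curve and the apply_lut helper are purely illustrative; real grading LUTs are usually 3D grids (e.g. 33x33x33) so that the three colour channels can influence one another.

```python
import numpy as np

# A LUT is literally a table: for each input colour value it stores an
# output value. This tiny five-entry 1D curve is only a sketch.
lut_in = np.linspace(0.0, 1.0, 5)                # input code values
lut_out = np.array([0.0, 0.15, 0.5, 0.85, 1.0])  # outputs: an S-curve that adds contrast

def apply_lut(pixels: np.ndarray) -> np.ndarray:
    """Map every pixel value through the table, interpolating between
    stored entries (applied to each channel independently here)."""
    return np.interp(pixels, lut_in, lut_out)

flat_log_frame = np.array([[0.42, 0.43, 0.40],   # two washed-out pixels (R, G, B)
                           [0.70, 0.55, 0.45]])
print(apply_lut(flat_log_frame))                 # the same pixels with the look applied
```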

The reason I like to think of the raw footage and the LUT as separate layers is that, when using a cinema camera, the LUT is not baked into, or combined with, the raw footage. Rather, the flat footage is recorded onto the card, while the LUT exists as a separate file that can be applied to the footage or deselected at will.

Because the raw footage is so flat it is difficult to judge things like exposure or colour temperature by using it as a reference on a monitor. To get around this, cinema cameras can apply a LUT as a reference on top of the raw footage that the camera will record, so that the cinematographer can better imagine the final image.

If this same LUT is later applied on top of the flat, recorded footage during the colour grade in software such as DaVinci Resolve, then the image will look the same as it did when viewed on set with the same reference LUT.

Alternatively, a different LUT, other than the reference LUT that was used for exposure on set, can be chosen and applied on top of the raw footage in the grade.

If there is a colourist, they may choose to either use a LUT as a starting point for the grade and then make additional colour corrections on top of that, or they may prefer to start from scratch and build a new look during the grade. 

3 WORKFLOW LEVELS

Before I discuss the way in which LUTs are typically used by filmmakers in the industry on movies, series and TV commercials - I think it’s important to address the common colour workflows that are used across three different budget levels: from solo shooter-level, to mid-level, to industry-level. 

Starting at the solo shooter level - wedding videographers, for example - many people within this bracket have their own cameras and also edit and grade the footage that they shoot.

Having the ability to completely control how you want the colour in your image to look at this stage is fantastic - as you can deliver the grade exactly as you imagine it. 

However, there’s sometimes a bit of a misconception that a LUT is this magical colour-thing that can be downloaded online then thrown on top of your footage like a filter when you’re editing to make your footage ‘cinematic’.

While this sometimes works, the issue with applying a LUT after you’ve already shot all the footage comes down to intention and control. What you also want to be doing is using that same LUT that you use in the colour grade to monitor how your colour and exposure look as you’re shooting.

That way you will be able to correctly expose and light the footage in a way that suits the LUT, rather than shooting footage, applying a LUT in the edit and then seeing that you’ve underexposed, overexposed, or lit with an undesirable white balance. 

You want to shoot the footage to suit the LUT, not grade the footage afterwards to make the LUT work.

Once you start shooting more mid-level content, such as for broadcast TV, you may find that although you shoot the footage, it will now get handed over to an editor, and sometimes go through an online edit, which will be done quickly and in which you often have no input.

The next time you see the footage after you’ve shot it is usually when it is broadcast. In my experience this can sometimes go OK, and other times go disastrously wrong - especially if the online editor just throws a random LUT over everything.

Therefore, what I’ve started doing to try and regain control over the look of what I’ve shot is to expose with a LUT that I’ve created in Resolve, get as close as possible to the final look that I want on set, then hand over that same LUT file to the editor to use as the base look. They can then make small colour corrections if necessary - which saves them time and at the same time preserves the look that I want.

Finally, at the high-end industry level, particularly on long form jobs, cinematographers often regain most of that control over the colour. This is because there is now money to spend on a proper colourist, who can help get the footage to the look that you and the director imagine.

INDUSTRY COLOUR WORKFLOW

Before filmmakers transitioned to using digital cinema cameras, productions were shot, processed and printed on film. It was the job of the cinematographer to choose which film stock worked best for the job and how that film stock should be processed, colour timed and printed at the lab. This all determined the ‘look’ of the footage.

After the digital intermediate and digital cameras became the norm, some of this control over the colour and ‘look’ of the footage was taken away from cinematographers - especially if they weren’t allowed to participate in the colour grade.

In recent years many cinematographers have tried to regain control of the look by using a workflow with LUTs that treats them more like you would a film stock back in the day - by exposing with the LUT on set rather than plonking a look onto the footage in post production. 

That way they can get as close to the final look of what they want while they are shooting. They can do this by getting a colourist to create a custom LUT for the film before shooting begins. 

“The process for me in prep is how close I can get things ready so that when we are shooting we’re delivering the best product. You know, we start talking about colour - a lookup table, a LUT. You know, I believe the future is creating the strongest negative you can on set.” - Andrew Wehde, Cinematographer

Earlier we talked about the idea of a LUT being like an extra layer that’s applied on top to enhance colour. Well, there are a few more nuances to the colour workflow.

Before adding a look or a LUT, the flat files that come straight from the camera need to go through colour processing to be converted to the correct colour space. The most common delivery colour space is Rec 709. This conversion adds saturation and contrast so that the colour looks normal or realistic.

In grading software this is often called doing a colour space transform by inputting the colour space of the camera files - such as Arri Log C - and then choosing the output colour space that you want - such as Rec 709.
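
As a rough illustration of what such a transform does under the hood, here is a Python sketch using ARRI’s published LogC3 (EI 800) curve constants and the standard BT.709 transfer function. This is a simplified sketch: a full colour space transform also remaps the primaries (ARRI Wide Gamut to Rec 709) with a 3x3 matrix, which is omitted here.

```python
import numpy as np

# ARRI's published LogC3 (EI 800) curve constants.
CUT, A, B, C, D, E, F = 0.010591, 5.555556, 0.052272, 0.247190, 0.385537, 5.367655, 0.092809

def logc3_to_linear(t):
    """Undo the logarithmic encoding to recover scene-linear light."""
    return np.where(t > E * CUT + F,
                    (10.0 ** ((t - D) / C) - B) / A,
                    (t - F) / E)

def rec709_oetf(light):
    """BT.709 transfer function: re-encode linear light for display,
    which is where the 'normal' contrast comes from."""
    light = np.clip(light, 0.0, 1.0)
    return np.where(light < 0.018, 4.5 * light, 1.099 * light ** 0.45 - 0.099)

# 0.391 is roughly middle grey (18%) in LogC3 - it lands near 0.41 in Rec 709.
logc_values = np.array([0.2, 0.391, 0.6])
print(rec709_oetf(logc3_to_linear(logc_values)))
```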

Now that we have the footage in the correct colour space we can add a LUT layer or a look to the footage. On professional cinema cameras this can be done with either a 3D LUT or what is called a CDL - a colour decision list. 

A CDL is basically a way of tweaking the colour on set as you shoot, by doing things like adding a tint, or controlling how much colour is in the shadows, midtones or highlights. This is usually done using live grading tools on a DIT cart.  
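
The maths behind a CDL is simple and publicly documented by the ASC: three numbers per channel (slope, offset, power) plus one overall saturation value. Here is a minimal Python sketch of that published formula; the helper name and the grade values themselves are hypothetical.

```python
import numpy as np

# ASC CDL transfer, as documented: per channel,
#   out = clamp(in * slope + offset) ** power
# followed by one overall saturation control using Rec 709 luma weights.
def apply_cdl(rgb, slope, offset, power, saturation):
    rgb = np.clip(rgb * slope + offset, 0.0, 1.0) ** power
    luma = rgb @ np.array([0.2126, 0.7152, 0.0722])   # Rec 709 luma weights
    return luma[..., None] + saturation * (rgb - luma[..., None])

# Hypothetical on-set tweak: warm the image by raising the red slope and
# lowering the blue, lift everything slightly, and enrich saturation.
pixel = np.array([[0.40, 0.38, 0.36]])
print(apply_cdl(pixel,
                slope=np.array([1.05, 1.00, 0.95]),
                offset=np.array([0.01, 0.01, 0.01]),
                power=np.array([1.0, 1.0, 1.0]),
                saturation=1.1))
```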

“For about two years now I’ve been working on on set colour and trying to deliver my on set dailies to be as close to final as possible. So I’m doing a large amount of on set adjustments. I have a panel so that I can do my highlight and shadow control, I can do full individual colour channels for high, medium, low and I have tint adjustments. I’m basically pushing the CDL as far as possible. The Bear season two, what you see on TV is my delivered CDL. That’s the first time I’ve proven I can do final colour on set with minor adjustments.” - Andrew Wehde, Cinematographer

His DIT can then create proxies using the look he’s created on set, which are used for editing and passed all the way down the post production pipeline - retaining his look.

Other methods I’ve seen cinematographers use, without live grading tools on set, are to either expose with a specific LUT that their DIT will use to create the proxies, or to get their DIT to grade the dailies on set with software like Resolve, before they create proxies with that look.

Sometimes the DIT will carry an iPad to which they can export graded stills, which the DP can view, get feedback on and approve on set before the proxies with that look are created.

Whatever method is used, I think it’s good to at least have some idea about the kind of colour look you want to create before you start shooting. I personally really like this new trend of DPs trying their best to maintain as much control over the look of the colour as they can by using a CDL or a LUT - both when exposing the footage on set and when finishing it during the grade.

Gray Kotze

Cinematography Style: Rodrigo Prieto

Let’s dive into how Rodrigo Prieto’s philosophy on image making affects the camera and pick out some of the film gear he uses to create his masterful images.

INTRODUCTION

If you put together a list of some of your favourite working directors of the last two decades, there’s a decent chance that Rodrigo Prieto has shot for one of them: Martin Scorsese, Spike Lee, Alejandro Iñárritu, Greta Gerwig, Oliver Stone…the list goes on.

Although his cinematography spans decades, it often contains a deliberate use of rich saturated colours, a widescreen aspect ratio and visually bold decisions, which are always focused on presenting a subjective view of the character to the audience.

So, in this episode, let’s dive into how Prieto’s philosophy on image making affects the camera and pick out some of the film gear he uses to create his masterful images.   


PHILOSOPHY

“I try to make the cinematography subjective. And that’s in every movie, really, I do. I try to make the audience, as much as possible, be in the perspective of the character. That is not only literally the camera angle being in the POV of a character. It’s more making the camera behave, and the lighting, and everything about it - the texture and the feel of the movie - behave like the main person we’re following.” - Rodrigo Prieto

The idea of creating images that put the viewer into the shoes of the protagonist is one of the underlying philosophies in his work. For example, how do we visually convey a character’s progression from a naive, straight-laced graduate to an excessive, risk-taking, paranoid white-collar criminal?

The camera could start by moving with a smooth, steady motion under even, well-exposed light, and later shift to a rough, raw, handheld aesthetic with harder light and stronger shadows.

Or, if we take another story, how do we visually present a series of interweaving timelines and narratives so that the audience doesn’t get too confused?

You could do it by using a different colour cast for each different character arc. Using more sickly, cooler tones for a man in need of medical care, and a much warmer palette for a man trying to hold his life together through his faith.

Or, how do you put the audience in the shoes of a disorientated warrior during a surreal, near-death experience in battle?

How about by radically shifting colour out of the bounds of reality?

You could pick apart each and every one of the films he shoots in this way and uncover a philosophical rationale behind the often bold visual decision making that supports the story.

It’s more about creating a feeling and a tone that is representative of a character’s state of mind than it is about shooting from the physical POV or perspective of the protagonist.

Each director he works for brings a different artistic sensibility, however the camera’s subjectivity is always present.

“Every director is completely different. For example, Ang Lee is very precise and also very methodical. And he likes to pick the focal length of the lens. And he talks to me about the framing and composition. He’ll look through a viewfinder and tell me to have that corner of that window in frame and this and that, you know. Oliver Stone thrives in chaos. So every time I say, ‘Oliver we have this problem with the camera’, there’s a bump in the dolly, whatever, he’d say, ‘Rodrigo, perfect is the enemy of good.’ And Scorsese is maybe a mix of both. He’s very precise in the shot listing he designs and he has a very good idea of the energy he needs the camera to have. But he also leaves space for improvisation by the actors and for new ideas to come.” - Rodrigo Prieto

Being able to adapt to how different directors work is an important skill. Cinematographers should be able to offer both their technical skills and practical advice on how to achieve a desired look or an unconventional shot, as well as lighting each scene.

Some directors like to maintain more control over how each shot is composed, while other directors may just describe a tone or feeling that they want to achieve and then leave room for the DP to offer their own ideas and suggestions as to how to achieve that.

When working with directors that like to maintain creative control over small details, it helps to build up trust in their visual sensibilities and accept the challenge of focusing on the lighting and technical execution of the shots.

Sometimes it may also be necessary to surrender control of minor visual details in order to capture better performances. 

“The performances were the essential thing in this film. So, you know, I had to compromise sometimes the possibilities of the lighting to be sure that we captured the performances of these amazing actors.” - Rodrigo Prieto

On The Irishman, this meant embracing the decision to use multiple cameras to cover dialogue scenes - which allowed them to get greater coverage of the performances.

The reason this may compromise cinematic choices is that the more pieces of gear you place within a set, the more you limit the kinds of angles you can shoot, or the space in which you can place lights without them getting blocked or seen in a shot.

To further complicate things, they had to use an interesting but cumbersome rig that actually accommodated three cinema cameras in order to de-age characters.

GEAR

This three-camera rig consisted of a RED Helium that could shoot high-res 8K files that could later be used for VFX work. This camera was placed in the centre of the rig and captured the actual shot and frame that they would use.

Two special infrared Alexa Minis were then placed alongside the capture camera as ‘witness cameras’, together with an infrared ring light that neutralised shadows - infrared light that could only be picked up by the Minis and wouldn’t show up on the recorded RED image.

They could use these reference clips alongside the recorded clips and combine them with AI and machine learning, powered by an NVIDIA GPU, to de-age the actors.

Prieto got his camera and grip team to reconfigure this large rig and made it more modular so that the ‘witness cameras’ could be moved around to either be alongside the main camera or at the top and bottom. This allowed them to use this hefty setup on a variety of grip rigs.  

Prieto’s photographic decisions are often bold, and use colour expressively. Across his career he has manipulated colour in different ways as the technology has progressed. He’s done it photochemically with film, by using a combination of film and the digital intermediate, as well as with digital cameras and the colour grade.

Let’s compare some of the techniques he’s used - starting with film.

The most common way of shooting on film is to use a colour negative stock and expose and develop it normally. However, Prieto has often experimented with different stocks and development methods.

For example, on Alexander he used the rare Kodak Ektachrome 2443 EIR stock. Sometimes called Aerochrome, it is sensitive to infrared light and displays colour in unusual and often unpredictable ways: turning greens to red, purple or blue. He rated this stock at 125 ISO and used an ND0.3 and a Yellow No. 12 filter to make the effects of infrared light more intense.

Another technique he has used in many films, such as Amores Perros, is a type of development called bleach bypass processing. During the processing of film in a lab, the step of bleaching the film is skipped. This results in a black and white layer that is overlaid on top of the colour image, which reduces the saturation of colour but increases both the contrast and film grain - creating a raw, gritty look.

Instead of doing this technique photochemically on Babel, he did it in the digital intermediate. In other words he processed the film normally, then added a half bleach bypass look to the film in the colour grade. 

This allowed him to control the intensity of the look, creating colour that was somewhere in between a bleach bypass and regular development.

As the technology has shifted more towards digital, he’s been able to do everything digitally instead of photochemically: from picking the look of a stock to choosing a development method, all within the grade.   

On The Irishman, he chose to differentiate the time periods by applying different film emulation LUTs to both the digital and film footage from different eras: a Kodachrome look for the 50s, an Ektachrome look for the 60s and a bleach bypass development look for the 70s onward.

You can see how foliage looks different across these looks: deeper shades of blue and stronger contrast in the shadows for the 50s, a bit of a warmer look in the 60s, and a very desaturated but high contrast look from the 70s onward.

He’s used many lenses over the years, but has often returned to shooting in a widescreen format with anamorphic glass, such as the Hawk V-Lites, the Arri Master Anamorphics or Panavision G-Series.

Prieto also likes using Arri cameras, whether that is the Arricam ST or LT when shooting on film, or on variations of the Alexa when shooting digitally. 

Another example of how he uses colour can be found in how he lights night interiors and exteriors. He often uses quite a classic technique of creating alternate planes of colour in different parts of the image. Specifically, he likes to create pockets of warm light indoors and then place cooler, blue sources of light outside of windows. This creates increased colour contrast and more depth in the frame.

CONCLUSION

Although he often paints with lots of colour and quite strong visual techniques, it is always done deliberately. Prieto uses the technical tools at his disposal to craft images that create a mood that mimics that of the main protagonist.

Whether that’s through his use of colour, lighting or camera movement.

The cinematography remains subjective and deliberate in a way that subtly or sometimes not so subtly helps to support the story.

Gray Kotze

Cinematic Lighting Vs Natural Lighting

In this video I’ll show you two different lighting setups for two different looks and compare how you can get away with using only natural light, or how you can elevate that look a bit more by supplementing natural light in a motivated way.

INTRODUCTION

You may think that cinematography would be all about using a camera. However, the most important part of a cinematographer’s job is actually lighting. 

Scenes are lit to create a look that tonally suits the story, to provide a consistent source of illumination that doesn’t change or affect continuity, and to give the camera enough light to be able to properly expose.

In this video I’ll show you two different lighting setups for two different looks and compare how you can get away with using only natural light, or how you can elevate that look a bit more by supplementing natural light in a motivated way. 


MOTIVATED LIGHTING

Lighting can take two forms. It can be more expressionist and exaggerated, to completely elevate the footage out of the real world. Or it can be naturalistic, where, although artificial lights are used, they are used more subtly in a motivated way to keep the story within the bounds of realism.

Today we’ll focus on creating a naturalistic look by using motivated lighting. What exactly is that?

Motivated lighting involves first examining the natural light sources that are present in the space and then placing additional artificial film lights to supplement the natural light.

Or, sometimes, if a source doesn’t exist, cinematographers will create an imaginary motivation for it in their head (outside of the frame of the shot) and then add artificial light using that idea. 

There are three things to consider when lighting in this way: the direction of the light, the quality of the light and the colour of the light.    

Let’s keep these three factors in mind when we go about creating two different looks: a brighter illuminated high key look and a more shadowy low key look.



HIGH KEY - NATURAL 

Let’s start by showing how we can create a high key look - without many shadows on our subject - using only the natural available light.

When only using ambient light in a space, it’s very important to be aware of what the natural light is doing. 

I chose to shoot the natural light look at a specific time in the morning when the sun was still low enough in the sky that it would angle in through the main window in the space. I checked the forecast beforehand and made sure it was a sunny day. Light scouting, weather observation and scheduling are very important when dealing with only natural light.

Next we need to think about direction. In this room the main source of light comes from a large window on the side and a smaller window from the back. 

Another important part of natural lighting is how you position the subject. Rather than placing her so that she is directly in front of the window and the light source is totally front on and flat, I’ve positioned her so that she is side lit by the sun coming through the window.

Also, placing the main source of light directly behind the camera is normally not a good idea as it may cast the shadow of the camera onto the subject. 

This positioning of the subject means the natural light comes through and creates contrast on one side of the face. Also this little window provides a small backlight which separates her from the background.

Now that direction is sorted we can focus on the quality of the light. I’ve used a muslin curtain to diffuse the light, softening any shadows and reducing the brightness of the illumination outside the window.

When setting the exposure level for a high key look I’ve focused on the illumination of the skin, adjusting exposure - in this case with an ND filter - until I’m happy with the level of light on the face. This may mean that the area by the window blows out a little bit - or turns to pure white - which isn’t ideal, but we can’t control that. Not without lights anyway.

Finally, the colour of our light is that of the natural sun - which also can’t be changed. One reason I usually don’t turn on any overhead house lights when using natural light is that mixing warmer artificial ceiling bulbs with natural daylight may throw off how colour is recorded.

So there we go, a high key look using only natural light.





HIGH KEY - CINEMATIC 

One reason that DPs still use lights when creating a naturalistic look can be seen in this curtain in the background. It’s a bit blown out. In other words, the natural light from outside is much too bright and turns to white, lost information. This is not pleasing to the eye.

So to create a better look I will start by setting the exposure of the camera so that it is balanced to the light in the background by only looking at this window. Now it’s not blowing out, however, it’s much too dark to be a high key look. 

So, we need to add light. Let’s start by thinking about direction. 

Our strongest source of light is coming from the window - we’ll call this our key. Then some of that lighting from the window is coming inside and bouncing back as a soft ambient source - we’ll call this our fill. Then, finally, ambient light from that little window is hitting the back of her head - we’ll call that our backlight.

Using three light sources in this way is called three point lighting.

Now that we’ve identified where the light is coming from, let’s add film lights that mimic the direction of the natural sources. 

With our lights on hand, let’s think about the quality of the light that we want. Because the sunlight coming through the big window is strongest we’ll put our biggest light there - a Nanlite Forza 500B II.

The sunlight coming through the window has been diffused by the curtain and is nice and soft, so we’ll do the same and add a softbox, with a layer of stronger diffusion in front of it to soften it as much as possible. I’ve also added an egg crate grid to it which controls the spread of the light, focusing it more directly on our subject and preventing it from spilling everywhere.

Next, we’ll take our second strongest light, a Forza 60B, and use it to recreate some of the natural ambient fill light. This we’ll also diffuse and make nice and soft by using a lantern. This creates more of a soft spread of light. As you can see here it hits the little plant on the table. This mimics the spread and quality of natural ambient sunlight bouncing off a wall. 

Finally I rigged a little tube light on an extended c-stand arm as a backlight. This ever so slightly adds to the feel of the light coming from the back window. 

Now, for our third variable: colour. To me, the brightness of high key lighting feels like it would go well with a warm, morning look, so I cranked all the colour temps on my lights to 5,000 Kelvin - which is just a bit warmer than normal sunlight. 

The 500B also comes with a cool new feature of being able to adjust the amount of magenta or green tint to the light. So I added a bit of magenta which to my eye helps give a warmth to the skin tones.

And there we have it. A high key look - this time with added artificial lighting that should still feel quite natural.




LOW KEY - NATURAL 

Let’s take away those lights and create a new low key look with only natural light. 

Rather than being bright like the high key look, low key lighting accentuates shadows and darker tones and usually has an overall higher contrast between areas of light and shadow.

Since we’re not allowed to use any lights we’ll keep the same positioning, quality and colour of light as before. However, we are going to change our overall exposure. 

To prevent those window highlights from blowing out like they did for the high key look, we’ll lower the exposure using an ND filter on the front of the lens, until we can still read information on the window and it’s not blown out.

This leaves the frame looking much moodier than before, even though the only change made was in exposure, not in lighting.

This creates plenty of shadows across the frame, which may work as a nice look for some stories, however may be a bit too dark for others.






LOW KEY - CINEMATIC 

So, let’s see if we can’t find a middle ground between the very shadowy low key, natural light look and the high key look - by introducing some film lights.

We’ll use almost the same placement for our key light as before. But this time, instead of being more in front of the character, we’ll bring it around a little bit more until it’s right on the edge of the frame and is lighting more from the side. 

This will create just a little bit more contrast, as less light will fall on the side of her face nearest to the camera.

We’ll go ahead and turn on the same backlight as before. However, this time, we’ll leave our fill light off. 

If you compare the high key lighting shot that uses a fill light with the low key shot without one, you’ll see that not illuminating that side of her face creates a gentle shadow on the side that favours the camera - therefore creating more contrast.

Because I’ve moved the key light around, there is less light that spills on the wall behind her, which also makes it feel like more of a low key look.

On top of this, there is a slight difference in colour. Because the low key look is a bit moodier, I cooled down the colour of my fixtures by raising the colour temperature from 5,000K to 6,000K.

So there we go. A low key look that was achieved with motivated lighting, by simply eliminating the fill.      






ADVANTAGES OF USING ARTIFICIAL LIGHT

Four different looks: two created without any lights and two created using artificial sources. Lighting is always subjective and should change depending on the nature of the story you are telling. 

This look may be better suited for commercial applications, while this look works for a film with more dramatic content.

But besides the look, what other advantages does using lights provide? Perhaps most importantly, using lights creates a consistent look, which will hold up for much longer periods of shooting.

If it takes a couple of hours to shoot a scene using only natural light, the look of the ambience may have completely shifted as clouds came over, or the sun got flagged by a building. This means that the consistency and continuity when cutting to different shots will be off.

Using film lights means that even if the natural light changes, the artificial light should maintain the continuity of the look, which means you will be able to shoot for longer.

Also, relying purely on natural light means you have limited to no control over the look of the image. For this video I could pick a day and a specific time where I knew we would get strong sunlight, but that isn’t always the case. If you need an interior to look warm and sunny, but it rains that day and you don’t have any lights, then there’s not much you can do.

Gray Kotze

2-Perf vs 3-Perf vs 4-Perf: 35mm Film Formats Explained

By far the most popular film format is 35mm. But what you may not know is that there are then three further format choices to decide between: 2-perf, 3-perf or 4-perf. But what is a perf, and how does it affect both the budget and how the footage looks?

INTRODUCTION

The starting point when it comes to choosing which format to shoot a movie on is between digital and film. If film is selected, by far the most popular film format is 35mm. But what you may not know is that there are then three further format choices to decide between: 2-perf, 3-perf or 4-perf. But what is a perf, and how does it affect both the budget and how the footage looks? Let’s find out.


WHAT ARE PERFS?

The manner in which a piece of 35mm film is exposed is determined by the negative pulldown. This is described in what are called perfs. Perf stands for perforation - the little holes that you see in the film, which span the height of each individual frame.

These holes align with sprockets, which turn in order to mechanically pass an unspooling roll of film vertically through the camera. The film is exposed when it is hit by light which is let through the lens. 

35mm film frames can be shot with a pulldown of either 4-perf, 3-perf or 2-perf. As the width of a 35mm frame is standardised at 24.9mm, the number of perfs only affects the height of the frame that is recorded - with 2-perf capturing the skinniest surface area, and 4-perf capturing the tallest surface area.

Exposing a larger area of film to light is kind of like the digital equivalent of recording at a higher resolution - the larger the area, the more clarity and fidelity the image will have. However, the larger the exposure area, the more film needs to be used and the more you will need to pay for film stock and development. So perfs affect both the cost of shooting and the quality or fidelity of the image.

The motion picture camera that is used must be specifically set to record frames with a certain number of perfs by adjusting the speed at which the film runs through the camera as well as the height of the gate that lets through light. Most cameras can record either 4-perf or 3-perf, while only specific cameras can record 2-perf frames.

There are two different steps to the filmmaking pipeline. Capturing images on film with a camera and projecting those images on film by passing light through them.

Image capture can happen on either 4, 3 or 2-perf, however 35mm film projectors are set to work with a 4-perf film print. This means that if you capture film in 2 or 3-perf, you would still need to print the final 35mm projection roll in 4-perf frames.

However, now that digital projection has taken over, it’s possible to capture 35mm in either 2, 3 or 4-perf, scan the film negative and then work with the scan in the same way as a digital file - which can later be sent out to cinemas that use a digital projector or for online distributors to upload the file and stream it digitally.

4-PERF

In the late 1800s and early 1900s when motion picture film technology was undergoing development, 4-perf 35mm film capture and projection emerged as the industry standard. This produced a tall aspect ratio of 1.33:1. 

4-perf offers the largest exposure area of 35mm film at 18.7mm tall. Because more surface area is used the film grain will be smaller and the image will be of a higher quality. 

This large surface area also allows lots of possibilities for aspect ratios. When shooting Super35 with normal spherical lenses, the frame can be used for taller aspect ratios like 1.33:1, or the top and bottom can be cropped to get widescreen aspect ratios like 1.85:1 or 2.40:1.

Before digital, this crop would have been done by printing the final film to a different ratio with a letterbox, or by using a narrower plate that chopped off the top and bottom of the frame when projecting. Now this can be done by scanning the negative and using software to crop the image.

4-perf can also be used with anamorphic lenses. These lenses squeeze the image by a factor of 2, to around a 1.2:1 aspect ratio, so that it is captured as a tall, compressed film frame. It is then later de-squeezed by a factor of 2 to get it to a widescreen 2.40:1 aspect ratio.

Because this method uses such a large portion of the tall 4-perf frame, anamorphic negatives have a higher fidelity and low amount of grain.

Another advantage of 4-perf is that when shooting Super35, the extra recorded area on the top and bottom of the image, that will be cropped out, can help with VFX work, such as tracking.

A disadvantage of 4-perf is that more film must run through the camera at a faster rate, which makes it noisier. It also uses the most film out of the 35mm formats, which means more money must be spent on buying and developing film stock.

It also means that a 400’ roll of film will only be able to record for a mere 4 minutes and 26 seconds before a new roll must be loaded into the camera.
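
That figure follows directly from the geometry of the format: 35mm film has 64 perforations per foot, so the taller the pulldown, the fewer frames fit on a roll. A quick sketch, assuming sound speed of 24 fps, which also confirms the 3-perf and 2-perf run times quoted below:

```python
# 35mm film has 64 perforations per foot, so frames per foot = 64 / perfs.
ROLL_FEET, FPS = 400, 24

for perfs in (4, 3, 2):
    frames = ROLL_FEET * 64 / perfs        # total frames on the roll
    seconds = frames / FPS                 # run time at sound speed
    print(f"{perfs}-perf: {int(seconds // 60)} min {int(seconds % 60)} s")
# -> 4-perf: 4 min 26 s, 3-perf: 5 min 55 s, 2-perf: 8 min 53 s
```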


3-PERF

In the 80s, cinematographer Rune Ericson collaborated with Panavision to produce the first 3-perf mechanism for 35mm cinema cameras. 

Shooting each frame three perforations tall, as opposed to four, produced a shorter frame with a height of 13.9mm and an approximate aspect ratio of 16:9 (24.9mm ÷ 13.9mm ≈ 1.79:1).

When shot with spherical lenses this negative could easily be ever so slightly cropped to get to a 1.85:1 aspect ratio, or more cropped to get to a 2.40:1 aspect ratio. 

Because of the frame’s lack of height, 3-perf wasn’t suitable for use with 2x anamorphic lenses, as it would require too much of the frame width to be cropped and would therefore defeat the point of getting a high quality anamorphic image. 3-perf is therefore best used with spherical lenses.

However, it is possible to use the much less common 1.3x anamorphic lenses with 3-perf film, as they squeeze a roughly 2.40:1 widescreen image onto a 16:9-sized negative.

Due to the decrease in recording surface area, grain will be slightly more prominent in the image than when using 4-perf.

The main reasons for shooting 3-perf rather than 4-perf are financial and practical. 3-Perf uses 25% less film - which means a 25% reduction in the cost of both film stock and processing of the film at a lab. 

It also means that the camera can record for 33% longer than 4-perf. So a 400’ roll gives a total run time of 5 minutes and 55 seconds before the camera needs to be reloaded. This is practically useful especially when shooting during golden hour or in situations where taking the time to reload a camera might mean missing a shot.

2-PERF

2-Perf, first called Techniscope, gained popularity in the 60s when it was used to shoot lots of Spaghetti Westerns. These movies were often made on quite low budgets, yet wanted a wide 2.40:1 aspect ratio to frame the characters in sweeping landscapes.

2-Perf does this by further cutting down on the vertical recording height of the negative, taking it to 9.35mm, creating a native widescreen aspect ratio.

At the same time, this reduction in frame size also equates to a reduction in the amount of film that needs to be used. Since it is about half the height of 4-perf, about 50% can be saved on purchasing film stock and processing film. Therefore 2-perf was a great solution to both save money and create a widescreen aspect ratio.

It also basically doubles the recording time of each roll, allowing you to get 8 minutes and 53 seconds with 400’ of film. This means that it’s possible to either roll for longer takes, or that many more short takes can fit on the roll before needing to reload the camera. 

Because it is so skinny and lacks height it’s not possible to use this format with anamorphic lenses - not that you would need to since you get the same aspect ratio by using spherical lenses. 

It’s also only really suitable for this aspect ratio, as getting a taller ratio would require cropping into the image far too much and would significantly increase the prominence of the film grain.

Although it has the same ratio as anamorphic, it has a different look. Because the surface area is much smaller than 4-perf, the grain shows up as much more prominent. 

In the modern era, where film stocks have become much finer grained and cleaner looking, some cinematographers like using 2-perf to deliberately bring out more filmic texture and make the footage feel a bit more gritty.

I’d say 2-perf 35mm is basically a middle ground between a cleaner 4-perf 35mm look and a grainier 16mm gauge stock.

CONCLUSION

How many perfs you choose to shoot on has an effect on a number of factors. 

4-Perf records onto a greater surface area, which looks cleaner, with less grain, can be used with both anamorphic and spherical lenses, and has room to crop to different aspect ratios.

However, this comes at a higher cost, with a camera that makes more noise and very short roll times. 

On the other hand, 2- and 3-perf use less of the negative, which makes the image a bit grainier, rules out 2x anamorphic lenses, and limits the taller aspect ratios you can choose from. But they’re much cheaper and the camera can roll for longer.

In this way, the choice of 35mm film format is another technical decision filmmakers can make that affects both the look and feeling of the image, while bringing certain technical limitations and advantages.

Gray Kotze

What Directors Do Vs What Cinematographers Do

How much of the look of each film is created by the director and how much is the look influenced by the cinematographer?

INTRODUCTION

In modern cinema the authorship of a movie is always attributed to the director. And much of a movie is made up of how the visual information is presented in shots. 

However, most directors don’t directly operate a camera, pick out the camera gear or determine how each scene is lit. This is usually overseen by the cinematographer, otherwise called the director of photography. 

This begs the question: how much of the look of each film is created by the director and how much is the look influenced by the cinematographer? The answer is…well, it depends.

Some directors like Stanley Kubrick were famous for having a large hand in the cinematography choices - from framing and shot selection all the way to picking out what individual lenses would be used. 

While other directors may be far more concerned with working on the script and the performance of the actors, and leave many of the photographic choices up to the DP.

Normally though, the answer is somewhere in between these two extremes.

VISUAL LANGUAGE

In order to determine the authorship of a film’s look, it helps to define all the individual elements and creative choices which go into creating a visual language.

Each frame is the result of a compilation of choices. This includes: what shot size is used, how the shot is angled and framed, how the actors are blocked within that frame, the arrangement of the production design and what is placed in front of the camera, the choice of medium and aspect ratio, how the camera moves, the choice of lens, how it is lit, how it is graded, and how the shots are placed next to one another and paced through the editing.

There are no doubt other creative choices that also go into creating a visual language, but these are some of the main ones to think about.

Although some directors and some cinematographers may have a hand in guiding each one of those choices, many of these decisions are controlled more strongly by either the director or the DP. 

CREW STRUCTURE

The decision making process on a film set is similar in many ways to how a company operates. It is headed by the director, the CEO, who manages an overall vision and direction, and has to make lots of small decisions quickly to manage the project of making a film.

Below the director are other ‘executives’, who also have a large impact on the film, but who occupy a more specialised role. For example the producer, or CFO, who focuses more on the niche of the finances. 

Or the cinematographer, the CTO, who is responsible for overseeing how technology is used to capture the film.

Then there are loads of other department heads that occupy leadership roles that are increasingly specialised: like the production manager, or the focus puller. 

This analogy isn’t perfect but you get the idea. So, let’s unpack this a bit further by breaking down what a director does versus what a cinematographer does and which visual decisions each is usually responsible for.     

   

WHAT A DIRECTOR DOES VS. WHAT A DP DOES

Creating shots and shot sizes is hugely important in establishing the look. Typically directors and cinematographers collaborate on this, but I’d say more often than not directors have a stronger say, especially in the more structured world of TV commercials - where each shot is storyboarded ahead of shooting.

On larger studio series or films where shooting time is quite expensive, many directors will create a storyboard in pre-production, which will be passed on to the DP when they come onboard.

Even on less expensive movies directors often like to use this technique to express their vision, keep to schedule and not overshoot a lot of coverage. For example, the Coen brothers are known for using storyboards and being quite particular about each frame which is shot.

However, other directors, such as Steve McQueen, prefer to work in a more collaborative fashion, coming up with shots with the DP and choosing how they want to cover scenes once they are in the location with the actors. 

Choosing whether to move the camera and how to do so is built into this decision about creating shots. Often directors will determine what kind of camera moves they would like to build into the shots, such as a push in, or lateral tracking motion. 

The cinematographer will then take those ideas and work out the best way to practically execute those moves: whether that be with a gimbal, a Steadicam, a dolly or handheld on a rickshaw.

In other words taking the overall tonal direction and making it happen practically.  

Which lens is chosen, particularly which focal length, has an effect on how the shot looks. This is an area where the cinematographer usually has more control over the choice than the director.

However, some directors may like to lean into using particular lenses for a trademark look, for example the Safdies have often used long, telephoto lenses on their films, which helps elevate the tense, voyeuristic tone. 

While in other cases the cinematographer may bring a look to the table based on their lens selection, such as Emmanuel Lubezki’s work, which is known for using extremely wide angle lenses close up to characters. He’s used this technique in different films, working for different directors.     

Blocking, or how actors are placed or moved within a scene, is a visual component that is also entirely determined by the director in most cases. They will work with the actors and walk through the scene, while the cinematographer watches and thinks about camera placement. 

Occasionally DPs may provide suggestions to the director if they think that certain movements or positionings may not work visually - but more often than not they will try to work with whatever blocking the director puts forth.

Another part of the process which is mainly controlled by the director is the production and costume design - which is done in collaboration with the art director and costume designer. When pitching a film or commercial, a director’s treatment will often include direction about the kinds of locations, colour palettes and costume which they envision.

However, some directors may also be open to collaboration with the cinematographer, particularly when it comes to crafting a colour palette.

The palette can also be influenced by lighting. This is a factor controlled almost entirely by cinematographers, and is probably the biggest stylistic part of the look that they bring to the table.

The easiest way to see this is to look at the work of directors, who have worked with different cinematographers on different projects.

These are all night scenes in films by the same director: Luca Guadagnino. Two of them were shot by cinematographer Yorick Le Saux and feature toppy lighting, a darker exposure and a more muted, darker palette.

The other two were shot by cinematographer Sayombhu Mukdeeprom and feature a more vibrant, earthy palette, a brighter, side key light and hanging practical bulbs in the background.

Or how about these films from Quentin Tarantino. Two were shot by Andrzej Sekuła and are lit with hard light from cooler HMIs through windows. These are cut in the background to have different areas of hard light and shadow.

While the other two were lit by cinematographer Robert Richardson, which have more warmth in the skin tones, and are cooler in the shadows. Both use his table spotlight technique: where he fires a hard light rigged in the ceiling into the table, which then bounces a softer warmer light onto the actor’s faces. 

Again, same director, but subtly different looks from different DPs.

However, occasionally directors will communicate a specific lighting style across multiple films to the different DPs that they work with. For example, Terrence Malick’s exclusive use of natural light and emphasis on filming in golden hour.

The choice of medium is one that is probably equally contributed to by directors and cinematographers. By this I mean the choice of whether to shoot digitally or on film, in large format or Super35, with spherical or anamorphic lenses.

These overarching decisions about medium are usually made by the DP and director based on their artistic and practical merits. The further technical nuances of that choice, such as which large format camera to shoot on, or which anamorphic lens to use will then almost always be made by the cinematographer.

Choosing the visual language of how shots are juxtaposed and paced in the edit is almost 100% done by the director and editor. The only input a DP may have in this regard is when they provide guidance about shooting a scene in a very specific way during production - such as using a long take, or shooting with very limited coverage - which leaves the director minimal cutting options in the edit.

Once the final cut enters the grade in post production, on average I’d say the director has slightly more control than the DP. But, not always. Some DPs like to expose and shoot digitally on set with a specially built LUT. This LUT is later used as the basis of the look in the grade. 

Some cinematographers also push to always be present in the grade, as how the footage is shaped in post production hugely contributes to how a film looks.

A good example of this is how the Coen brothers work with two different cinematographers: Roger Deakins and Bruno Delbonnel. 

Whether working digitally with a LUT, or with film in the DI, Deakins tends to favour a more saturated, vibrant, contrasty, look with warmer skin tones and deeper, darker shadows.

While Delbonnel is known for crafting a specific look in post with his film negative that is lower in saturation, cooler in both the highlights and the shadows, and quite often introduces heavy layers of diffusion on top of the image to give it more of a dreamy look.

CONCLUSION

Ultimately, the creation of the images is a balancing act which is dependent on the input of multiple collaborators - from the director to the DP to the production designer. 

Directors tend towards providing more conceptual guidance about how a movie looks, while cinematographers are generally more about taking those ideas and visually executing them by working with technical crew and equipment.

A DP working for a good director, shooting a good story, will find that their work looks better for it. And as a director, you want someone who will help you to enhance and photographically bring your vision to life.

Regardless of who does what, the most important thing is to find great collaborators and be open to at least hearing what ideas they bring to the table.

Gray Kotze

5 Techniques For Framing Cinematic Shots

Filmmakers compose and visually present information within a frame. Let’s go over five different techniques which may help you frame with more of a cinematic eye and tell stories using only images.

INTRODUCTION

Filmmakers compose and visually present each shot within a frame. Frames can be of wide expanses, close up details, symmetrically balanced or even off kilter.

It’s also probably the easiest cinematography skill to practise, as all you need is to be able to snap away on a camera - including the one on the back of your phone. But what is it that makes a good frame?

In this video, let’s go over five different techniques which may help you frame with more of a cinematic eye and tell stories using only images. 



1 - USING THE BACKGROUND 

What we choose to include or exclude from each shot is a deliberate choice that any image maker should be aware of. 

Most shots, whether in cinematography or photography, can be broken down into two components: the subject which the eye is most drawn to and the background, which the subject is placed against.

When filmmakers run through, or block, a scene with actors, one of the factors that they use to decide on the placement of the camera, and therefore the frame, is what background they want to place the subject against. 

The background does a few things. First and foremost it’s a way of conveying information within a shot. An isolated, tighter shot of a character against a white wall background includes limited information for the audience. While shooting a character in front of a wider, carefully dressed set with windows reveals several layers of information. This person is sitting in a shabby dressing room, so perhaps he’s a performer. 

The highrise building outside suggests that it’s in a city. He’s interacting with another character, but because he is in sharp focus, the camera is suggesting that the man sitting is the main subject in the scene.    

For more untraditional, atmospheric filmmakers, who let stories play out slowly without much exposition, how each frame is presented is especially important, as it is how the audience gleans as much information about the characters and their environment as they can.

A background can either be flat or have depth. This depends on the distance between the subject of the shot in the foreground and the objects in the background.

Normally, shots which have more depth to them are considered a bit more cinematic - as they create more separation between the foreground and the background and therefore a greater feeling of dimensionality, and more of an illusion of reality.

Like this shot which places a wall of highrises far off in the distance, rendering the looming city with a sense of scope while at the same time isolating the character.

This is also why documentary filmmakers often try to pull their interview subjects away from walls or backgrounds, both to render them more out of focus and to create more depth in the frame.

 

2 - LENS SELECTION

Each frame is not only affected by the direction in which the camera is pointed, but also by the focal length of the lens that is chosen.

The focal length is the degree of magnification that a lens has and is denoted in millimetres. An easy way to start thinking about focal lengths is by breaking them into three camps: wide angle lenses, medium lenses and telephoto lenses.

There aren’t any official millimetre categories when it comes to grouping focal lengths, but for Super 35 with spherical lenses I generally think of wide angle lenses as being somewhere between 16mm and 25mm, medium focal lengths as around 35mm to 65mm, and telephoto lenses as approximately 75mm or longer.
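
These groupings correspond to measurable angles of view. As a rough sketch, using the standard angle-of-view formula, 2·arctan(w / 2f), and assuming a Super 35 frame width of roughly 24.9mm:

```python
import math

SENSOR_WIDTH_MM = 24.9   # approximate Super 35 frame width

def horizontal_aov(focal_length_mm: float) -> float:
    """Standard angle-of-view formula: 2 * arctan(width / (2 * focal length))."""
    return math.degrees(2 * math.atan(SENSOR_WIDTH_MM / (2 * focal_length_mm)))

for f in (16, 25, 35, 65, 75):   # the rough category boundaries above
    print(f"{f}mm: {horizontal_aov(f):.0f} degrees horizontal field of view")
# a 16mm wide angle sees roughly four times the angle of a 75mm telephoto
```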

Not only do wide, medium and telephoto lenses provide different levels of magnification, but they also change how the background of a frame is rendered.

The wider the focal length, the more the frame will be distorted and stretched and therefore the more background you will see. Whereas the longer the focal length, the more the frame will be compressed and the less background you will see.

Therefore when framing a close up shot of a subject it’s important to consider whether you want to use a super wide angle lens, with the camera physically closer to the actor, that displays more information in the background. 

Or, whether you want to frame using a telephoto lens, with the camera further away from the actor, and show less of the background with a shallow depth of field.


3 - FRAMING GUIDELINES

Although there is technically no right and wrong when it comes to framing, there are a few aesthetic and cinematic conventions or guidelines which have been widely adopted in filmmaking over the years.

One of the foundational framing guidelines is called the rule of thirds. This principle suggests dividing the frame into nine segments made up of two evenly spaced vertical lines and two evenly spaced horizontal lines.

You then place the strongest visual elements in the frame, like the subject, along these lines or at their intersections.

Probably the easiest example to show this is by framing the horizon. Usually cinematographers will either frame a landscape so that the sky portion occupies the top third of the frame and the earth portion occupies the bottom two thirds, or they will flip it and place the earth portion on the bottom third and the sky on the top two thirds. 
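To make the geometry concrete, here’s a small sketch of where those lines and intersections fall for a given frame size - purely illustrative:

```python
def rule_of_thirds_points(width: int, height: int):
    """Return the four intersections of the rule-of-thirds grid.

    Two evenly spaced vertical lines (at 1/3 and 2/3 of the width) and
    two horizontal lines (at 1/3 and 2/3 of the height) divide the frame
    into nine segments; strong elements sit on the lines or intersections.
    """
    xs = (width / 3, 2 * width / 3)
    ys = (height / 3, 2 * height / 3)
    return [(x, y) for x in xs for y in ys]

# For a 1920x1080 frame the four strong points land at:
print(rule_of_thirds_points(1920, 1080))
# [(640.0, 360.0), (640.0, 720.0), (1280.0, 360.0), (1280.0, 720.0)]
```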

Another convention is the idea of leading lines. This is where objects in a frame are lined up compositionally so that they create an invisible path which guides the audience’s gaze towards a specific part of the frame.

These lines can be created in symmetrical frames by finding elements that lead to a centralised point, like a doorway where a character is standing.  

Filmmakers can also create a frame within a frame by composing the subject within a shape, like a mirror, a door or a window to create a more formal perspective.  


4 - SHOT SIZE & ANGLE

One of the most important decisions when it comes to framing is how wide or tight you want your shot to be.

As we hinted at earlier, wider shots are great at establishing the environment around characters and familiarising the audience with the geography of the film’s world. 

While tighter shots, where the subject occupies a large area within the frame, can be used to punch in and highlight details: an important object in the story, or the emotion on someone’s face that the audience can better read and empathise with.

I’ve made a whole video on shot sizes before, which I’ll link below, but I think the biggest takeaway from it is the idea that, in life, our proximity to a person defines our emotional relationship to them.

Therefore, the tighter we frame a shot on a character, the more intimate it feels, while wider compositions feel more emotionally neutral and observational.

At the same time, the angle at which we point the camera at a subject also has a large effect on how frames can be perceived by audiences. 

Most shots in filmmaking are taken from a neutral, or medium angle, where the camera is positioned roughly at the eye level height of a character without any significant vertical tilt.

This approximates the viewer’s own eye level and creates a level of empathy and identification with characters. It also conveys a sense of normalcy and realism as it’s not visually jarring.

Low angles, where the camera is positioned at a height below the character's eye line and angled upward, create more of an uneven emotional connection, and are often used to make characters feel more powerful: physically larger, dominant and imposing.

While high angles, shot from a tall position with the camera tilted down, tend to have the inverse effect of creating a sense of vulnerability, diminished size or weakness.

    

5 - BALANCE

Shots that are thought of as cinematic generally have a degree of balance to them. However, this balance can also be deliberately thrown off and subverted for effect.

A balanced frame is one where no single element carries so much visual weight that it throws off other areas of the image.

One way to think about this is in terms of negative space, empty areas in the frame without significant visual elements, and positive space, parts of the frame that draw the eye towards a focal point.

Filmmakers can create a symmetrical balance by centrally framing a subject and then equally weighting areas of negative space against the middle area of positive space.

Or they can frame shots with asymmetrical balance by placing the main subject in the image off-centre and then weighting the other side of the image with elements of negative space.

Other visual aspects like colour or areas of light and shadow can also be layered to either achieve symmetrical or asymmetrical balance within a shot.

When it comes to framing a dialogue scene between two characters, a common technique is to use a tik-tok or shot-reverse-shot pattern: each shot is taken from the same side of the 180 degree line, in a way that may place the characters on opposite sides of the frame.

This introduces another two framing concepts: breathing room and headroom. 

Breathing room is the amount of negative space between the subject and the edge of the frame. Traditionally this space is left open in front of characters to give a sense of normalcy - unless filmmakers want to deliberately introduce a bit more uncertainty or tension by leaving characters with limited or no breathing room.

Headroom is the amount of space above a subject's head. This can either be traditionally framed so that there is some negative space above the character, or the subjects can be given a haircut, so that they have no headroom and the top of their scalp is framed out - which may make the shot feel a bit tighter, more intimate and even tense.

CONCLUSION

There’s no magic tool that will allow you to create perfectly cinematic frames. Probably because there’s not really such a thing as a perfectly cinematic frame. Some movies may need uglier, off-kilter shots, while others may benefit from a more precise, symmetrical form.

It always comes down to forming a perspective on a story that you then translate into a look. Every brain will do this differently and interpret how the world is framed in different ways. But hopefully the next time you are practising snapping photos or composing your next shot, you will have some of these techniques in the back of your mind that you can use to manipulate how you want your images to look. 


Cinematography Style: Shabier Kirchner

This video will unpack Shabier Kirchner's work as a cinematographer a bit further by going over how he got his start in the industry, looking at some of his thoughts and philosophies on filmmaking and breaking down some of the gear he’s used to create images.

INTRODUCTION

After getting his first big breakthrough working with director Steve McQueen, Shabier Kirchner has emerged as a prominent, self-taught cinematographer in the world of festival and indie films. 

His photography mixes expressive but not heavy-handed lighting, different formats, big close ups, handheld movement and naturalism to create an often dense, grainy, filmic look that evokes unconscious feelings from the audience.

This video will unpack his work as a cinematographer a bit further by going over how he got his start in the industry, looking at some of his thoughts and philosophies on filmmaking and breaking down some of the gear he’s used to create images.  

BACKGROUND

“Images were always something that I was surrounded by. I was just immediately attracted to drawing and eventually photography as a way of expressing how I was feeling. In a way that I couldn’t really do with words or I couldn’t do with writing.”

Although the Antiguan born cinematographer struggled a bit in school, he developed an early love of photography. This was amplified by his dad who also loved photography, so much so that he had his own darkroom.

Here, Kirchner got to experiment with film and learn the basics of exposure and development. As he got older he began shooting a lot of what he was surrounded by, such as surfing and skateboarding. He slowly built a reel, which he would later use to apply for a Master’s in cinematography at the NFTS film school in London.

After making it to the final round of selection, he wasn’t accepted. On his way back from a job, he landed in New York, where he managed to work as a trainee on a movie. The DP on that movie, Thomas Scott Stanton, then gave him the amazing opportunity to shoot 2nd Unit cinematography.

After that he settled in New York, working on commercials, music videos, short films and indie projects for the next eight years.

One day he got an unexpected call from Sean Bobbitt, Steve McQueen’s regular cinematographer. Since Bobbitt sometimes worked with the NFTS, Kirchner assumed the call was in regard to his earlier application to the film school; however, it was something far better.

Bobbitt was looking to introduce a new cinematographer to Steve McQueen, as he would be unavailable to shoot an upcoming series called Small Axe. This sparked the next big opportunity in Kirchner’s career.

PHILOSOPHY

By breaking down the choices that cinematographers make, my videos may make it seem like cinematography is a very analytical process. But often it’s not at all. Many DPs internalise their years of practice and formal and technical training, then use that to unconsciously make decisions which dictate the visual language of a film based on what feels best. 

“Like, everything about this project I felt was done on a very unconscious level. It’s something that now looking back on it I feel that there is a lot of value to operating on your impulses and operating on your emotions. Things that you feel that you may not be able to quite put into words.”

This doesn’t mean that he doesn’t prepare. Depending on the project he may put together a collection of images from photographers that he likes, or conduct research through conversations with the relevant people or communities that the story takes place in. While at the same time shaping a perspective based on his own experiences. 

And, of course, he may put together a shotlist. For example, during prep on the first episode of Small Axe, he compiled comprehensive lists of shots for covering scenes - with as many as three different alternatives per shot.

The director, McQueen, dismissed this approach, preferring to work off conversations about the story and characters, which ultimately led them to finding the right visual approach for each episode in the series.

Kirchner also drew from the wall full of period reference photos that the production designer had gathered. This gave everyone a sense of direction of the look, which also fed into his spirit for open collaboration with the crew.

“I want everybody to have read the material. I want everybody to feel what it is that we’re trying to achieve. That, you know, everybody had agency. I think that’s a really important thing. And when you feel that happening throughout a whole crew, the reverberation of that is, like, intoxicating.”

This collaborative environment that he encourages also extends to some of the gear decisions that are made by his technical crew. 

GEAR

Fostering an environment on set where everyone, including the technical crew, is on the same page about the look helped when it came to selecting gear and designing lighting setups.

“I wouldn’t call myself the most technical of people and I’m, like, totally OK with that because I have so much trust in people like Ian and his crew. To go and be able to describe a feeling or describe an aesthetic or a quality of light and have someone like Ian take that and interpret it in a way that is achievable is really great. Here’s a photograph, you know, Eggleston took at night and this is the light and it looks like we’re underwater. What do you think?”

This led to a member of the lighting team proposing to the gaffer that they use ETC Source Four fixtures to create different pools of overhead light. These spotlights, often used in stage lighting, produce crisp beams of light that can be spotted or flooded, meaning the spread of the light can be controlled, and their output dimmed.

They were also easy enough to rig, as top down lamps, from the highest windows of the street facing apartments.

They were all gelled blue-green to mimic the silvery mercury vapour street lights of that era, creating multiple, controllable, top down pools of bluish light reminiscent of Kirchner’s reference photo.

When lighting, he often uses contrasting colour temperatures and fixtures, to create different pops of colour across the frame.

For example, in this interior he used super thin LED Lightmats which could be velcroed to the ceiling, diffused with an off-colour fabric and gelled with leaf-green, steel-green or yellow in different areas to break up the modern, perfect feel of LED light.

This overhead ambience lifted the light levels of the entire space, which was further accentuated by practical tungsten wall sconces to create a warm look. This warm interior light was offset by the cooler Source Four street lights that were rigged outside. 

Even for more traditional day interior scenes, which are often lit through windows with stronger, daylight balanced HMIs, he may add little pops of tungsten practicals in the background to contrast the cooler daylight feel with a homely warmth. 

“I have so much love for celluloid. I just think that there is something very special to it. The way it treats skin. But I also think that the process in which we work with film, as well. There’s a lot of value in that. To be able to think, like, in an economical way and not just sort of spin the camera and roll and roll and roll. To, sort of, really trust what you’re doing as well.”

When it comes to choosing a medium, he does love the look of shooting on film, but will also choose digital cameras like the Alexa Mini or a Sony Venice, depending on his practical needs or the needs of the story.

A great example is the five part series Small Axe, where each episode was shot on a different medium. He used the cleaner, wider perspective of the large format digital Sony Venice for one episode, and the digital Super 35 Alexa Mini for another, for its ability to roll long takes of up to 45 minutes.

Grainier 16mm film was used to bring out a 1970s aesthetic, the textural, widescreen look of 2-perf 35mm to frame wider aspect ratio shots of a community, and the more stabilised, structured, taller aspect ratio of 3-perf 35mm for another episode.

Each choice of format brought a different look that better suited the story of each episode.

When shooting on film he used 500T stock from Kodak: 5219 for 35mm and 7219 for 16mm. This stock has a beautiful, pronounced textural grain to it, and - being rated at 500 ASA - is fast enough to use practically for both day and night scenes. He’s even push processed this film at times to create even more grain.

Kirchner shoots this tungsten balanced film without using any correction filters - even when shooting in daylight - preferring to correct the colour temperature in the grade, rather than in camera.

Like his choice of formats, how he chooses lenses is also dependent on the kind of look he is after for each story. For example, he’s used the Cooke Speed Panchro 2s for their soft vintage roll off and warmth, the PVintage range from Panavision for their smooth, fast aperture, romantic look, and the Zeiss Master Primes for their modern, cooler, sharper rendering of detail, which helped capture 16mm with a higher fidelity look.

Although the type of camera movement he uses does depend on the director and the story they’re telling, his camera motion often has a rougher, handmade feeling to it. 

Whether through his regular use of handheld, or even by choosing not to stabilise bumps by using software in post production. 

Instead, embracing the little imperfections that come from a human operated crane movement in a sweeping shot across a courtroom. 

CONCLUSION

“I took some wild chances on things that I didn’t really believe that I could do but I just did it anyway and I failed terribly. But if I could go back again and do it all again I’d do it the exact same way because failing is success. I’ve learnt the most from things that I didn’t succeed at 100%.” 


Grip Rigs For Cinematic Camera Movement (Part 2)

To move a cinema camera in different ways requires different types of mechanical rigs. In this video let’s go over some of the common, interesting and even unusual rigs that are used in the film industry to create motion.

INTRODUCTION

There are many reasons to move the camera in filmmaking. It can be used to reveal more of a space and establish the geography of a scene. It can elevate action in fight sequences. It can evoke an emotion or a tone. Or even provide an unusual perspective on a scene.

To move a cinema camera in different ways requires different types of mechanical rigs. In this video let’s go over some of the common, interesting and even unusual rigs that are used in the film industry to create motion.   

BOLT

The Bolt is a specialised robotic arm rig, which is designed to move the camera at extremely high speeds, extremely precisely. It is built by Mark Roberts Motion Control and is the go-to robotic arm for industry level film work.

So, how does it work? This cinebot has a 6-axis robotic arm - which means it has 6 different points where the arm can swivel, rotate, pan, tilt and roll the camera. This arm is attached to a heavy base which is designed to slide along a track - which comes in 3 metre length pieces - giving it an additional lateral movement axis.

This total of 7-axes of movement means that it can move the camera in very complex ways, almost anywhere within a confined area. What makes the Bolt special is that it comes with software called Flair that is used to program each move that it makes, frame by frame. 

Once a move is programmed it can be saved and repeated as many times as necessary in frame perfect passes. In other words it can perform the exact same motion multiple times, so that each move records exactly the same image, even when broken down frame for frame.

This allows filmmakers to record multiple plate shots of the same take - where they can record different details in different parts of the frame multiple times, then layer different sections of each plate on top of each other in post production.

For example, this is a shot from a commercial that I camera assisted on a few years ago. The Bolt could be used to record two passes. One plate shot of the boy drinking orange juice, and another plate with a dog being cued to jump by an animal wrangler. 

In post, the animal wrangler could be cropped out and the motion of the dog jumping overlaid on top of the shot of the boy, so that it looked like it was recorded in a single take. This is made easy by the Bolt’s frame perfect, repeatable, programmed camera moves.

The Bolt is often combined with a high frame rate camera, like a Phantom, to shoot slow motion because the Bolt can move at extremely high speeds. When shooting slow motion, everything, including camera motion, gets slowed down. This means that to shoot extreme slow mo and still get a normal tracking movement, the camera needs to move at a much faster speed than normal.
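To make that speed requirement concrete: the rig has to move faster by the slow motion factor, which is the capture frame rate divided by the playback frame rate. A back-of-the-envelope sketch, with purely illustrative numbers:

```python
def required_rig_speed(onscreen_speed_m_s: float, capture_fps: float,
                       playback_fps: float = 24) -> float:
    """Speed the rig must physically move at so that, once the footage
    is slowed down, the move still reads at the intended speed."""
    slowdown_factor = capture_fps / playback_fps
    return onscreen_speed_m_s * slowdown_factor

# A move meant to read as 0.5 m/s on screen, shot at 240 fps for 24 fps
# playback, is slowed 10x - so the arm must actually travel at 5 m/s:
print(required_rig_speed(0.5, 240))  # 5.0
```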

It can also be used to get super fast camera motion when shooting with the camera at a normal frame rate.

It’s actually a bit scary how fast this heavy chunk of metal can move. That’s why the Bolt operators will usually either cordon off the area that the arm moves in or give a stern warning to cast and crew not to go anywhere near the arm, unless the operators give permission. Because if this thing were to hit anything at a high speed it’d be super dangerous if not fatal. 

For this reason, camera assistants will usually strip the camera of its monitor, mattebox, eyepiece and any additional weight that could offset the balance, upset smooth movement, or fly off while the arm moves and stops at extreme speeds.

Another use case for the Bolt is to program it to do very specific, macro moves. Using the Flair software and a special focus motor, the focus distance can also be programmed for each frame - since pulling focus at these extreme speeds manually is very difficult, if not impossible. 

This means it can repeat moves in macro shots, get multiple plates, all while maintaining perfect preprogrammed focus.

Although you can do incredible things with the Bolt, it’s usually reserved for specialised, pre-planned shots only, as it’s both an expensive toy to rent and time consuming to move around and program.

 

TOWERCAM

Another piece of equipment which is designed for a very niche type of camera movement is the Towercam. This is a telescoping camera column which is designed to get completely vertical, booming camera motion. It is remote controlled by an operator near the base of the rig.

Unlike a Technocrane, which is more of an angled telescoping arm, the Towercam is an arm that moves completely vertically and can either be rigged from the ground or rigged from above and telescope up and down.

Although the hydraulic arm of a dolly can also be used for vertical up and down moves, its range is much more limited - to around 1 metre of vertical boom movement. There are different versions of the Towercam, but the XL can extend the height of the camera to almost 10 metres.

This is a great tool for getting large, symmetrical, up and down moves - which is why Robert Yeoman often uses it when shooting with Wes Anderson, who loves himself some symmetry: a dolly for horizontal tracking moves and a Towercam for vertical ones.

But, it can also be rigged with a remote head, which allows an operator on the ground to pan and tilt the camera while it moves vertically - which is great for a shot like tracking an actor walking up a flight of spiralling stairs.

It can also be used for doing fast vertical moves, capturing live events, nature documentaries, or any other application where straight, vertical motion is required.       


3-AXIS GIMBAL

Let’s move on to a piece of gear that most people may know: a 3-axis gimbal. This is a stabilisation device that is used to keep the camera steady while capturing fluid motion. 

3-Axis refers to the three motorised points on the rig that counteract any unwanted camera shake. These axes control the tilt, sometimes called pitch, which is the vertical up and down motion; the pan, sometimes called yaw, which is the horizontal left and right motion; and the roll, which rotates the camera around the lens axis, tipping the horizon from side to side.

By stabilising the camera across these three points, these motors minimise shakes and vibrations, and give the camera movement a smooth, floating, fluid feeling - very similar to what a Steadicam does. 

Probably the two most popular gimbals on the market for industry level film production are currently the DJI Ronin 2 and the Freefly Movi Pro. 

There are many reasons these stabilising devices have soared in popularity in recent years. For one, they make it very quick to set up and adjust a shot with stable movement - much quicker than setting up and levelling a dolly and tracks. 

For another, their relatively light weight makes it easy to shoot in hard to access locations, where bringing other heavier gear like a crane or a dolly isn’t possible.

They are also flexible in their applications. For example, they can double up as a stabilised head - which can be operated remotely with camera wheels or a joystick - or they can be operated in a handheld configuration using the natural drift of the motors. 

Whereas in the past it may have been necessary to hire a Steadicam, a Steadicam operator, a stabilised Libra head and a Libra head technician, now productions can get away with hiring one gimbal technician with a gimbal - which can be switched between different rigs, including a drone.

Their flexibility also extends to how they can be operated. For example, shooting from a dolly on a track locks you into one lateral line that you can shoot from, while shooting with a gimbal is like shooting with a stabilised handheld rig. It can go anywhere your feet go, which makes it easy to adjust to the movements of actors changing position during a take. 

However, there are a few reasons why some DPs don’t like using them much. 

The camera needs to be balanced so that no undue strain is applied to one of the motors. Sometimes this is easier said than done, especially when using a heavier camera build or longer lenses. Any time a lens is changed it needs to be rebalanced. If a certain weight is exceeded it may not be possible to balance it at all.

Even when accompanied by a gimbal technician, these machines can run into technical issues. In my experience, working with a Steadicam and working with a gimbal is like night and day. A Steadicam is simply engineered and almost never has technical problems, whereas gimbals almost always need to be tweaked by assistants to deal with glitches that pop up.

Also, unless the gimbal is being operated by a second operator with wheels or a joystick, it can be difficult to get precise framing, as the motors create a natural drift of movement as you move or stop, which can be hard to predict.

A final advantage of gimbals is that they can sometimes be combined with other grip rigs for a special move. For example, when I assisted on this show, the key grip came up with an overhead sliding rail system with a bungee cord that the gimbal could be attached to. This took some of the strain of operating the heavy camera away and made it quick and easy to shoot fight scenes.

Or there are special shots like this, which combine a gimbal with the final rig we’ll talk about: a Spydercam.

SPYDERCAM

A Spydercam is a cable suspended rig that allows for smooth, controlled aerial movement. 

Setting one up begins by rigging several high tensile strength cables which need to be securely anchored above the shooting area so that the cables don’t touch the ground. 

A camera rig with a stabilised remote head is then suspended from those cables, and its horizontal and vertical movement is controlled by motorised winches that are connected to the suspension cables.

This way the physical position of the overhead camera can be changed by an operator, called the pilot, using software while at the same time another camera operator can pan, tilt or roll the camera to change the frame that it sees.

The Spydercam was regularly used in films before the widespread use of commercial drones, to get controlled, smooth aerial footage within confined spaces, which couldn’t be shot by using a helicopter.

For example, aerial plate shots which swing or fall through a highrise city, into which they could later drop shots of characters.

Even after drones became popular production tools, there are still applications for the Spydercam - most notably for indoor scenes or scenes shot in a studio that require controlled aerial camera moves.

For example, the shot mentioned before used a Spydercam with a specially designed release. This allowed a stabilised gimbal to float from one floor to the next, which the camera operator could then grab on the second floor and begin shooting with in a handheld configuration.

Another reason filmmakers may want to use a Spydercam is for a shot which starts close to actors. Having a drone start this close to the ground, objects or actors would cause visible wind on screen and would also be a bit dangerous. Instead, a precisely controllable Spydercam can be used to start on a nice, tight shot and then be brought all the way up to a massive aerial wide by winching it up the cables.


How To Shoot Action: John Wick 4

Let’s break down how cameras, lights and grips were used to elevate action scenes in John Wick: Chapter 4.

INTRODUCTION

The first thing that I thought about while watching the fight scenes in the new John Wick movie, was another fight scene in Oldboy. And it’s not only because of the great action choreography.

Both movies use cinematography in a very deliberate way to enhance these fight scenes - through the use of wide shot sizes, vividly saturated lighting and a smoothly moving camera.

So, let’s break down how cameras, lights and grips were used to elevate action scenes in John Wick: Chapter 4.


APPROACHES TO SHOOTING ACTION

Watching this movie got me thinking about the different ways that fight scenes can be presented by filmmakers. I think a large part of how we as an audience interpret or get involved in these scenes comes down to the kind of visual style in the cinematography.

There are basically two directions that filmmakers can go in when shooting action: one is more realistic, immersive and first-person, the other approach is more elevated, observational and third-person.

John Wick very much falls into the second camp. The fight scenes are larger than life and something of a spectacle. Somewhere in between semi-reality and a comic book world. 

This look is in a large part created by three aspects of the cinematography: the shot sizes used, the lighting and the camera movement. Lots of slower cutting wide shots are used. The lighting is hard and vividly colourful. And the camera moves are smooth.

Compare this to a fight scene in The Bourne Identity. There are lots of quick cutting close ups. The lighting is softer, more motivated and naturalistic. And the camera moves are rough and handheld.  

To me, this way of shooting feels much more like the audience is being forced into the more realistic perspective of the main character. 

While this way of shooting feels like the audience is being invited to sit back and enjoy a heightened, action packed spectacle.

Of course, this isn’t a hard and fast rule that has to apply to each and every fight scene in a movie. For example, Oldboy has one long-take fight scene that feels almost like you are watching a video game from a wide, 3rd person perspective. 

And, it has another fight scene where the perspective shifts to first person, includes POV shots, and covers the scene with close ups, quicker cuts and a rougher camera movement.


SHOT SIZE

Part of what makes the fight scenes in John Wick impressive from a technical perspective is that they often play out in wider shot sizes, with slower cutting. This relies on the actors performing the stunts at a high level, rather than getting stunt doubles to do the action and using lots of quick cutting to break each stunt down shot by shot.

Cinematographer Dan Laustsen enhanced this feeling of width through his camera selection.

“I think that is the goal or the key. To shoot it wide and beautiful. As wide as we can.” - Dan Laustsen

The previous John Wick movies had been shot on Super 35 cameras with Master Anamorphic lenses. One way to pull off an even wider look is to shoot on a camera with a larger sensor - the Arri Alexa LF. 

The bigger the sensor, the wider the field of view it is able to capture with the same focal length. However, they ran into a snag. Previously you could either shoot with a large format camera and spherical lenses, then crop to widescreen in post, or you could shoot with a Super 35 size sensor and anamorphic lenses that natively create a widescreen aspect ratio.

To overcome this issue, they turned to Arri, who had recently developed a set of anamorphic lenses which were able to cover large format sensors. These lenses are called the Arri Large Format Anamorphics, or ALFAs. 

This gave them the combination of a large format wide field of view and the anamorphic look - all in one. These were used to shoot 95% of the movie.
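To put rough numbers on why a bigger sensor reads wider: the horizontal angle of view follows from the sensor width and the focal length. A quick sketch - the sensor widths here are approximate values assumed for illustration:

```python
import math

def horizontal_fov_degrees(sensor_width_mm: float, focal_mm: float) -> float:
    """Horizontal angle of view for a given sensor width and focal
    length, using the standard thin-lens approximation."""
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_mm)))

SUPER_35_WIDTH_MM = 24.9  # approximate Super 35 active width
ALEXA_LF_WIDTH_MM = 36.7  # approximate Alexa LF open gate width

# The same 40mm lens sees noticeably wider on the larger sensor:
print(horizontal_fov_degrees(SUPER_35_WIDTH_MM, 40))  # ~34.6 degrees
print(horizontal_fov_degrees(ALEXA_LF_WIDTH_MM, 40))  # ~49.3 degrees
```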

Since the widest ALFA lens is a 32mm, the remaining 5% was mainly made up of super wide angle shots using spherical Signature Prime lenses - probably with a 12mm. You can tell which are the spherical lens shots in the movie because of their almost fish eye look and round bokeh, while the ALFAs have oval bokeh. 

“We also shoot wide so Keanu could do what Keanu had to do. And then getting into the close ups for whatever reason super fast because Keanu is running out of speed, of course, because he’s not a stunt guy.” - Dan Laustsen

So, they mainly used wide focal length lenses to capture the stunt scenes in two or three full takes of wide shots. Then used long focal length lenses mainly for close ups during dialogue scenes, not really using many focal lengths in between.

  

LIGHTING

Another large part of what elevates the John Wick movies from reality, and places them in an exaggerated, comic book like world is due to the lighting.

Laustsen is known for lighting with exaggerated colours, often mixing steel blue, cyan, or green tones, with warmer light. In fact it was partly his use of colour in Crimson Peak that won him the job to shoot the franchise with the director.

“I think he loved Crimson Peak because he loved the colours. We talked a lot about colour. He really likes colour. You know, he loved this heavy world.”  - Dan Laustsen

This coloured lighting, which lifted the story out of the normal, realistic world, was created in two ways: firstly with practical lighting, and secondly with film lights.

Practical lights are sources that are included in the frame of the shot. These were normally built into the sets themselves using different coloured strips of LEDs. Or, in this movie, they were created by rigging hundreds of RGB Astera Titan Tubes across the set, placed inside fluorescent housings.

These fixtures could then be wirelessly controlled through a DMX board to tweak their brightness or colour.

Often these tubes would be set to a colour that contrasted with the colour of the key light. For example if the key was set to a cyan-green, then the practical tubes could be set to an orangey-red. This not only creates a vivid, exaggerated, colourful palette, but it also separates the background from the foreground.

These practicals also served an extremely important practical purpose - they could be seen in the shot. This is especially necessary for the kind of super wide shot sizes they were shooting, where the camera saw everything. Like this scene, where they rigged these practicals next to the canal for an incredible 2km. 

When you’re shooting in such an enormous space at night, there is little to no place to hide film lights anywhere near the actors without them being seen.

“You know, that is my goal with lighting all the time. I want to have the light as far away as possible. Of course if you do a close up of something I have light close to the camera but normally all my light is far, far, far away.” - Dan Laustsen

The further you place a light source from an actor, the less effect that light will have. The only way to counteract this is with stronger sources that compensate for the light lost over distance. This means either hiring higher output sources or increasing the number of sources - which, either way, is an expensive endeavour.

The wider you want your night exterior shots to be, the more difficult and expensive it becomes to light.
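This falloff follows the inverse square law: illuminance drops with the square of the distance, so doubling a lamp’s throw costs four times the output to hold the same exposure. A minimal sketch, with all distances assumed for illustration:

```python
def relative_output_needed(old_distance_m: float, new_distance_m: float) -> float:
    """How much more output is needed to keep the same exposure on the
    subject after moving a source further away (inverse square law)."""
    return (new_distance_m / old_distance_m) ** 2

# Pushing a lamp back from 5m to 20m from the actor costs roughly
# 16x the output - or 16x the number of units - for the same exposure:
print(relative_output_needed(5, 20))  # 16.0
```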

They also shot basically everything at night - even the day interiors and a scene that was meant to take place at dawn. This is because it was the only way to control the level of ambient sunlight and maintain the kind of strong, hard, consistent light for long periods of time without competing with sunlight - which would have overpowered the film lights.

His team either lit the scenes with very high output, hard sources, like T12 or T24 tungsten units, or they used loads - and by loads I mean around 85 units - of lower output RGB LED Creamsource Vortex 8s. These gave an even spread of RGB light and were also water resistant - which was very necessary for certain scenes with lots of water.


CAMERA MOVEMENT

The camera movement generally tracks the action of the choreography with slow, fluid, smooth movement. 

The technical team pulled out almost every piece of gear they could to do this in a variety of different situations: from a drone, to a Spidercam, to a Technocrane. For example, this long take, top-down God’s eye view shot meant rigging a line with a Spidercam, that could move the camera precisely above the large set. 

However, the two main tools that they used to move the camera for the fight sequences were a Steadicam and a dolly - with A-cam on dolly and B-cam on a Steadicam AR.

The AR allows operators to quickly change from high mode to low mode, while keeping the camera completely stabilised. 

Shooting from a Steadicam meant that they could get smooth, flowing movement that followed the action, but which also allowed the operator to be flexible and adjust their movements to the movements of the actors. 

Because the fight scenes were not easy on the actors, they didn’t want to have to do more takes than were necessary. Therefore, the whole movie was shot at quite a deep stop of T/4, which gave the focus puller a little bit of leeway for the difficult, fast moving action scenes.

Because of the speed of the scenes, the 1st AC also often chose to pull focus by judging distance by eye, since it’s easier to pre-empt where the actors - and therefore the focus - need to be when you are watching their movements as opposed to watching a monitor.
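To see roughly how much leeway a deeper stop buys, here’s a sketch using the standard hyperfocal formulas - it treats the T-stop as an f-stop and assumes a circle of confusion, so the figures are ballpark only:

```python
def focus_limits_mm(focal_mm: float, f_stop: float, subject_mm: float,
                    coc_mm: float = 0.025) -> tuple:
    """Near and far limits of acceptable focus, from the standard
    hyperfocal-distance formulas (all values in millimetres)."""
    hyperfocal = focal_mm**2 / (f_stop * coc_mm) + focal_mm
    near = hyperfocal * subject_mm / (hyperfocal + subject_mm - focal_mm)
    far = hyperfocal * subject_mm / (hyperfocal - (subject_mm - focal_mm))
    return near, far

# A 32mm lens on an actor 3m away: stopping down from 2 to 4 roughly
# doubles the focus puller's margin for error.
for stop in (2, 4):
    near, far = focus_limits_mm(32, stop, 3000)
    print(stop, round((far - near) / 1000, 2), "m of depth of field")
# 2 -> ~0.89m, 4 -> ~1.89m
```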

CONCLUSION

John Wick 4 is a great example of how action and the world of the story can be made to feel more elevated due to the cinematic choices made by the filmmakers.

 All three elements of wide shots, vibrant, colourful lighting and fluid movement combined to make the action scenes in John Wick feel like an entertaining, extravagant spectacle. 


The Most Popular Cinema Lenses (Part 6): Zeiss, DZOFilm, Leica, Angénieux

In part six of this series, I’ll go over four more popular cinema lenses that are used in the high end film industry, break down the images they produce, their practical design and what kinds of filmmaking they are useful for.

INTRODUCTION

The choice of which cinema lens to select for your shoot is a pretty important one. Each lens varies not only in a practical and ergonomic sense, but also in the kind of look that they produce. 

So, in part six of this series, I’ll go over four more popular cinema lenses that are used in the high end film industry, break down the images they produce, their practical design and what kinds of filmmaking they are useful for.



DZOFILM VESPID RETRO

Starting with something a bit unusual, the Vespid Retros are a limited edition set of seven prime lenses. They are modelled on the same optical and mechanical construction as the original Vespid prime set that preceded them.

They come in a pretty standard set of primes, ranging from 16mm to 125mm focal lengths with a stop of T/2.1, except for the 16mm, which has a T/2.8 aperture.

The Retros cover a 46mm image circle, which means they can cover all of the recent Vista Vision, Full Frame and LF digital cinema sensors without vignetting.
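The coverage check itself is simple geometry: a lens covers a sensor whenever its image circle is at least as large as the sensor's diagonal. A quick sketch, using approximate Alexa Mini LF sensor dimensions:

```python
import math

def lens_covers_sensor(image_circle_mm: float, sensor_w_mm: float,
                       sensor_h_mm: float) -> bool:
    """True if the image circle is at least the sensor diagonal;
    otherwise the corners of the frame will vignette."""
    return image_circle_mm >= math.hypot(sensor_w_mm, sensor_h_mm)

# An Alexa Mini LF sensor is roughly 36.7 x 25.5mm (diagonal ~44.7mm),
# so the Retros' 46mm image circle clears it:
print(lens_covers_sensor(46, 36.7, 25.5))  # True
```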

Despite this coverage, they come in a very compact, lightweight form factor, which makes them a good choice for rigs that need balancing, such as gimbals. The set has a standardised 80mm front diameter across the range of primes, which means the mattebox doesn’t need to be re-rigged when changing lenses. All the lenses except the 16mm also feature a 77mm filter thread that can be used to attach screw-in filters.

They ship with a PL mount, which can also be swapped out for a Canon EF mount. And, just like the original Vespid primes, they come with all the modern features that make them quick and easy for camera assistants to work with.

So what makes them different from the original Vespids? Well they come with a bit of a party trick. The glass is finished with a golden coating. This coating produces warm, amber flares that haze up the lens when you point it at a direct light source. They resolve fine details, but also offer a subtle softness. All of this combines into what they call a ‘retro’ or ‘vintage’ look. 

This warm, dreamy look is good for more experimental applications, like shooting dream or flashback sequences, or for capturing films where you want to play with light and texture.

For example, I recently rented these primes out on a music video shoot, which I lit with lots of backlight in order to get maximum flares for a more dreamy, cosmic musical performance.

And, well, they’re silver, which looks kind of cool. 




ANGENIEUX OPTIMO 15-40MM & 28-76MM

Now, let’s move away from primes and take a look at a pair of popular zoom lenses.

The Angenieux Optimo 15-40mm and 28-76mm are a pair of lightweight zooms with a 2.7x zoom ratio. This means that the longest focal length is 2.7 times longer than the widest focal length in the zoom range.
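The arithmetic is simple enough to sanity check:

```python
def zoom_ratio(widest_mm: float, longest_mm: float) -> float:
    """Zoom ratio: longest focal length divided by the widest."""
    return longest_mm / widest_mm

print(round(zoom_ratio(15, 40), 1))  # 2.7
print(round(zoom_ratio(28, 76), 1))  # 2.7
```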

The reason I pair these two zooms together is that they are constructed to an identical weight and size, and are often rented as a pair of interchangeable zooms that cover from wide to medium focal lengths, while maintaining a high degree of optical quality and an aperture of T/2.6.

With an image circle of around 31mm, these lenses are designed to cover ever so slightly wider than Super 35 size sensors, but won’t cover the full width of large format sensors, like the Alexa Mini LF.

At 1.92kg these zooms are lightweight enough to be used on a Steadicam, a gimbal or handheld on the shoulder. The zoom and focus movements happen internally, which means the lenses have a constant volume.

In other words the lens barrel doesn’t extend and twist as it might with some vintage lenses. This means a balanced, stabilised rig like a gimbal will remain balanced even if the lens is ‘zoomed’ during a take.

In terms of the look, they have a good degree of contrast and no vintage softness or aberrations. Basically it’s a good, sharp, clean zoom. The Optimo range retains homogeneity of colour, contrast and resolution across the range. This makes it easy to use multiple zooms in the range on the same shoot, without worrying about optical differences in quality or colour.

On the wide 15mm end the lens also has minimal distortion.

As mentioned, these lenses are perfect for anyone who needs a wide angle zoom on a Steadicam and gimbal, to make quick and easy adjustments to focal length or zoom the lens in a shot. 

They are also used by solo operators in documentary situations - for example, I recently used these lenses in a handheld build with an Amira to shoot a doccie-style commercial. Zooms also save time swapping out lenses, and are often carried on long form jobs to accompany a set of primes.    





ZEISS STANDARD SPEED

The Zeiss Standard Speeds, sometimes also called the Zeiss T/2.1s, are a set of lightweight primes. They were originally manufactured somewhere between the late 60s and mid 70s by Carl Zeiss for Arri to be paired with their Arriflex cameras.

They originally came in five focal lengths: 16mm, 24mm, 32mm, 50mm and 85mm. However, later Mk II versions of these lenses in the 1980s added additional focal lengths.

The Mk II versions of this lens are the ones most commonly found at rental houses and feature a similar build and coating to the faster T/1.3 Zeiss Super Speeds. For this reason they are often used by rental houses to ‘build out’ or add extra focal lengths to a Super Speed lens set, especially on the longer end with the 100mm and 135mm focal lengths - but also for DPs who like focal lengths in between the traditional ones, like a 40mm.

A mechanical difference between the Super Speeds and the Standard Speeds is that the actual glass inside the housing of the T/2.1s is pretty tiny - around the size of another vintage prime set from the same era, the Cooke Speed Panchros - while the glass used for the Super Speeds takes up the entire housing.

Regardless, the T/2.1s are still able to cover Super 35 size sensors without vignetting. Because of their small glass elements, these lenses are very light, yet still solidly constructed and easily usable by camera assistants - as long as they have been well maintained and calibrated by the rental house. 

What I love about these lenses is the look. It is a very comparable look to the Super Speeds as, from what I can gather, the T/2.1 Mk I and Mk II sets both have the same T* multicoating on the glass as the Super Speeds.

Because of the older optical design of these lenses they have some vintage characteristics. For example, when shot with a wide open aperture they have lower contrast than modern lenses, like the Cooke S4s or Master Primes. This gives the images more of a washed out feel.

These old lenses have a simple, symmetrical optical design which produces beautiful, slightly swirling bokeh. When I’ve shot with them I’ve found the Standard Speeds, especially the 40mm, to have really beautiful, creamy, slightly off-kilter, oval bokeh - which I actually prefer to the Super Speeds. Despite their age, these primes are still quite sharp when you hit the focus sweet spot.

So, if you’re looking for compact cinema prime lenses that don’t have to have a super fast aperture, with a vintage, slightly low contrast look, and very pretty bokeh, then the Standard Speeds are an often overlooked choice which are worth considering.

   




LEICA SUMMICRON-C

Moving from the past back to the present, let’s take a look at a modern set of spherical primes - the Leica Summicron-Cs. 

In the same way that the Zeiss Standard Speeds are a slower aperture version of the Zeiss Super Speeds, the Summicron-Cs are a slower aperture version of the much more expensive Summilux-Cs. However, at T/2 they are still fast enough for most shoots. 

They come in a set of 11 focal lengths from a 15mm up to a 135mm. 

With a 36mm image circle the Summicrons cover an enlarged Super 35 size sensor, but won’t cover most full frame or large format cameras without vignetting.

One of the big advantages of the Summicrons is their build. For modern cinema lenses with a fast stop they are very compact and lightweight, but still have a long focus throw. This means they have lots of well spaced distance markings that 1st ACs can use to more accurately program focus distances into a handset and pull focus.

The 18mm to 100mm lenses are also all the same physical length: exactly 101mm. This small, standardised sizing makes them perfect for rigs that need balancing, like drones, gimbals or Steadicams.

The images that these lenses create are pretty typical of sharp, modern cinema glass. They have authentic colour rendition, good contrast, even wide open, and render fine details with crisp, sharp edges. This look is consistent across the entire set.

Therefore these lenses are well suited to productions shooting on a Super 35 camera, that want a reliable, versatile set of standardised, compact lenses that produce a crisp, consistent, modern look with a fast aperture. 

And for those that need all of the above but want to shoot in very low light conditions, or are looking for just a tad shallower depth of field - the T/1.4 Summilux-Cs are also a viable choice.


FPV Drones Are Changing Filmmaking

How do FPV drones work and what makes them so special? Let’s find out.

INTRODUCTION

First, there were helicopters. Then there were drones. And now, there are FPV drones.

These little, flying, cinema cams are now able to pull off shots that could have only been dreamt of decades ago, or even a couple years ago. But how do these FPV drones work and what makes them so special? Let’s find out.

WHAT IS AN FPV DRONE?

One of the main differences between a regular drone built for filmmaking and an FPV drone is in the name. FPV stands for First-Person View. This means the drone provides a real-time video feed from its onboard camera to a display - usually in the form of goggles worn by the pilot.

This allows the pilot to see a front-on perspective of exactly what the drone sees, as if they were sitting inside the drone and flying it. This immersive experience provides a sense of flight and enables more precise control and navigation of the drone.

FPV drones were primarily designed for sport, where pilots race each other to see who can manoeuvre their drone the most nimbly and quickly.

Because of this they are usually way smaller than the big production drones, like the Freefly Alta 8 or Alta X, which are designed to carry heavy payload cinema packages on a gimbal like a Movi Pro or DJI Ronin.

An FPV drone’s smaller frame and lighter weight make it far more agile and fast, able to fit through small gaps, all with enormous acceleration.

 

FPV DRONE SETUP

Because cinema FPV drones are a bit of a niche product, they are often custom built by their pilots. However, it’s also possible to buy some that come assembled straight out of the box.

Let’s break down what a cinema FPV drone may look like. 

The frame is what the camera sits on and is made up of a body, which supports the camera and battery, and arms, which hold the motors that allow it to fly.

Usually frames are constructed from carbon fibre as it is a lightweight material that is also durable. When it comes to building a rig, the name of the game is to keep weight to a minimum.

An example of a frame that may be used for a cinema FPV drone setup is the Hevi Frame from Shendrones. This is a coaxial octocopter, meaning it has four arms, each carrying a pair of stacked motors that turn two rotors - eight motors in total.

To this frame an FPV camera will be attached at the front of the drone, along with a transmitter that sends the video signal from the FPV camera, via antennas, to a headset that the pilot wears, such as DJI FPV goggles.

Inside this headset the pilot can also set frame lines that mimic what the cinema camera is recording, which are usually cropped in from the wider FPV camera view.

This FPV camera is built to be as lightweight as possible and is only used for monitoring and transmission purposes so that the pilot can see where they are flying. It’s not used for recording.

For the actual video recording you’ll need a cinema grade camera, which is as lightweight as possible. A Red Komodo is currently the most popular. 

This is due to the combination of its light, tiny body with its ability to shoot in 6K resolution - which gives room for image shake to be stabilised in post - its RAW codec with a high dynamic range, and its global shutter - which reduces skew and other motion artefacts when filming objects moving at a high speed.

When using the Komodo like a crash cam, to shoot in dangerous situations where there’s a possibility that the camera may get damaged, it’s useful that it has no buffering in writing data to the card. 

In other words, the camera writes the footage to the CFast card immediately as it shoots, without a delay. So, even if the camera crashes and you are able to pull the card out, all the data should be preserved up to the moment of the crash.

The Komodo will usually be paired with a wide angle, compact, lightweight cinema lens, such as a Laowa Zero-D Cine 9mm T/2.9 or 15mm T/2.1. If shooting in bright conditions, an ND filter can be screwed into the thread of the cinema lens.

Because no focus motor is used, the focus distance on the lens will be set at a fixed distance, normally at, or just off of, infinity. 

A LiPo battery will be strapped to the bottom of the rig. The longer you want to be able to fly the drone, the bigger the battery has to be. So, it comes down to a tradeoff between a lighter, more manoeuvrable drone and a heavier, less manoeuvrable one that flies for longer.

A 4,500mAh battery may give the pilot somewhere between three and five minutes of flight time, depending on how aggressively they fly.
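A very rough rule of thumb is that flight time is battery capacity divided by average current draw - the draw figure below is purely an assumption for illustration:

```python
def flight_time_minutes(capacity_mah: float, avg_draw_amps: float) -> float:
    """Ballpark flight time from pack capacity and average current draw.

    Real packs shouldn't be run flat, and draw varies wildly with how
    aggressively the pilot flies, so treat this as an upper bound."""
    return capacity_mah / 1000 / avg_draw_amps * 60

# A 4,500mAh pack at an assumed average draw of 60A:
print(round(flight_time_minutes(4500, 60), 1))  # ~4.5 minutes
```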


HOW ARE FPV DRONES CHANGING FILMMAKING?

Aerial shots have two noticeable attributes: one, the perspective they provide, and two, the kind of camera movement they achieve.

Let’s start with perspective. Aerial shots were first done from helicopters, which provided filmmakers with a bird’s eye view perspective that could give context to a landscape and provide a sense of scope. For this reason, helicopter shots were often used as a way of establishing a location before cutting into a scene at that location.

Because of the safety precautions necessary when flying a large vehicle, like a helicopter, this usually limited filmmakers to shooting these establishing shots as wides, far from any building or from the ground.

When drones started taking over, they usually adopted this same, wide, bird’s eye view perspective.

Even though some stunt helicopter pilots are incredibly skilled and can do amazing manoeuvres, their size is still a limiting factor.

FPV drones are different. Because of their much smaller size, and increased manoeuvrability, these drones can physically get much closer to objects or to the ground than ever before. Flying closer to objects with a wide angle lens at a high speed makes aerial establishing shots incredibly dynamic.

“You’d usually have wide interstitials of the location as you move from location to location. Ant and I are just bored by that convention and we wanted to do something more kinetic and more dynamic that would whip you from location to location and just keep the movie moving forward at a breakneck pace.” - Joe Russo, Director

One of these dynamic establishing shots even involved blending an FPV shot that wove through a hospital entrance, with a Steadicam shot that started the scene on the actors. Using a quick panning movement to hide an invisible cut. 

This kept the intensity of the pacing high, while at the same time clearly establishing the space that the scene is unfolding in.

This shot also shows off the second noticeable attribute of aerial footage - movement.

Although helicopters are able to pick up speed, and create dynamic footage, FPV drones are different in that they can make turns and accelerate far quicker, while also using their small size to fit through gaps that wouldn’t have been possible before.

Shooting with a wide lens, close to action has a very different feel to shooting action from far away on a telephoto lens. The physically closer you are to the action the more visceral, kinetic and immersive it is.  

“Drone work is really boring in movies and I wanted to hip it up. I wanted the energy because this is a very intense movie and it just keeps that motion going. And I’m kinda known for how I move my cameras.” - Michael Bay, Director  

So, being able to film aerial footage, in both interiors or exteriors, at high speeds, incredibly close to actors and objects, allows filmmakers to get highly dynamic footage, without using visual effects. This is all possible thanks to the combination of FPV drones and incredibly compact digital cinema cameras.

Although this technology unleashes incredible possibilities, it still has quite a few limitations. They mainly revolve around the weight issue. 

To be as lightweight as possible, these little drones normally can’t carry larger cine lenses, a mattebox, or 4x4 or 4x5.65 inch filters to control exposure or create a filter effect.

This also means that when shooting day exteriors, much of the exposure control will happen by stopping the lens down to a deep stop. A deep stop is also necessary because these drones usually can’t take the weight of a focus motor, nor afford the power to run one - so focus has to be held by depth of field.

This means that wide lenses with a deep stop need to be used. Therefore shooting objects close with a shallow depth of field isn’t possible.

Filmmakers are also limited to very small flight times before they need to change the battery. This makes it impractical to roll for long takes. 

Also, unlike remote heads on helicopters, or larger drones that use a gimbal system, the camera cannot be panned, tilted or rolled by an operator. Instead the pilot is the camera operator and needs to position the front of the drone precisely where they want to shoot.

Finally, there’s also the risk factor. Since a lot of these manoeuvres are incredibly fast it can be dangerous, not only for actors, but for the camera. In fact I remember hearing a story a few years ago about a shoot that was using the first Red Komodo in South Africa on an FPV drone for a commercial. The pilot was skimming it over the ocean at a high speed until, suddenly, it wasn’t just above the water, but under it. And that was the end of the first Komodo in the country.

CONCLUSION

Cinematographers often talk about film equipment, like lights, cameras and grip rigs as tools, which they can choose from to chisel each story into the correct shape. An FPV drone is another tool now available to filmmakers, which allows them to get action packed, dynamic aerial movement like never before.


How Cinematographers Use Hard Vs. Soft Light

In this video I’ll explain what hard and soft light is and how each can be used in different applications to impact the visual storytelling of a film. Then I’ll go over some film equipment and techniques that are commonly used to alter the quality of the light.

INTRODUCTION

Cinematography at its core is all about being able to manipulate and control lighting to tell a story. Light can be cool or warm, have a high output or a low output and, importantly, be either soft or hard.

In this video I’ll explain what hard and soft light is and how each can be used in different applications to impact the visual storytelling of a film. Then I’ll go over some film equipment and techniques that are commonly used to alter the quality of the light.

HARD VS SOFT LIGHT

Before we get into talking about how different qualities of light can be used and are created, we need to know what the difference is between hard and soft light.

Hard light refers to a type of illumination that creates distinct and sharp shadows with defined edges. In other words, the area where the shadow meets the light has a very small, if any, gradient. 

Hard light is usually achieved by using a direct light source, such as a bare light bulb or direct sunlight, that casts a focused beam of light onto the subject. 

Soft light on the other hand creates diffused and gentle shadows.

So, unlike hard light, which has little to no gradient where shadow and light meet, soft light has a gradual transition between light and dark areas. Soft light is achieved by using a more diffused or indirect light source, such as sunlight that passes through a sheer curtain, which spreads light evenly over the subject.

Although I’m talking about the quality of light in quite defined terms, it’s important to remember that the softness or hardness of light occurs on a spectrum. 

Usually cinematographers will use a combination of both hard and soft light either in different scenarios or even within the same shot. How light is used is dependent on the context of the location and story that is being told.

HOW IS SOFT LIGHT USED?

Generally, it’s more difficult for cinematographers to create softer light - which we’ll get to a bit later. So what are some of the reasons and situations in which they might want to do that?

Well, firstly, using softer light is seen as a more flattering way of lighting actors. Because of its gentler gradient of shadows and decreased harshness, it can help to smooth out skin texture and highlights in a more subtle and appealing way. 

If you compare this to a harder light source, you’ll see that harsh shadows and strong direct light accentuate textures, wrinkles and skin imperfections, whereas softer light smooths out faces in a gentler way.

For this reason, soft light has historically been used for beauty commercials as a way to smooth out the skin of the on screen talent. Back in the days of the Golden Age of Hollywood, it was also common to use soft, flattering light on the lead actress, while using a harder light with a more rugged feel for the lead actor.

Another, more modern reason that cinematographers use soft light now is because it can feel more organic, natural and realistic. Of course natural sunlight is not always soft. If you’re shooting outside, in direct sun, the source will be very hard.

However, if you turn off all the lights in a room, and observe how the natural light from the sun hits a space, there’s a good chance the light will be soft. 

This is because, unless it comes directly through a window, sunlight usually bounces and reflects off different surfaces within a room - and, as we’ll find later, bounced light is soft.

Also, if there are clearly defined shadows, such as on a wall, it is clear that artificial light, such as a placed film light, is being used and it’s easy to tell which direction it’s coming from. If instead the shadows are more gentle, it may give a feeling that the light is natural and coming from a motivated source other than a film light.

Finally, soft light also creates a tonal atmosphere or a feeling. Because it’s literally softer and lighter, it can be used to create a sense of calmness, lightness or tranquillity in scenes that have a more peaceful, comedic, romantic or introspective tone.


HOW IS HARD LIGHT USED?

Hard light can be used to create a more dramatic and higher contrast look. Because it creates less of a gradient in shadows, hard sources of light can be good for stories that need to be played out in lower key lighting with lots of shadow and contrast.
This can be particularly effective in creating a sense of tension or danger in scenes with a lot of action or suspense, or to make characters seem more imposing or dangerous. As we mentioned, it is less flattering on the skin, so can be used to portray characters as more rugged and less perfect.

In an exterior, hard light can also be used to accentuate the strength of the natural elements, by making the sun’s rays feel hot and strong. 

A harder backlight can also be placed behind characters in a scene to separate them from the background so that they stand out. 

Because it is easier to create contrast with hard light it’s also a good tool for cinematographers that like to illuminate different pockets of light and shadow within the same frame. 

These pockets of shadow can be used either to disguise details or the identity of a character within a shot, or be used to create a more stylised visual interest. 

Hard light even has associations with certain genres, such as film noir, which has extensively used it to elevate stories from realism and evoke a darker, ominous mood by creating carefully shaped, textured shadows - such as the classic office blind shadow.  

EQUIPMENT FOR SOFT LIGHT

There are two main techniques that can be used to transform hard light to soft light: diffusion and bounce.

To diffuse light, a type of translucent material needs to be placed between the light source and the subject the light will hit.

There are many different types of diffusion, but perhaps the most basic are diffusion gels - rolls of white polyester plastic that can be cut to size and clipped directly onto the light, or onto a diffusion frame in front of it.

A popular diffusion gel is 216 White Diffusion from Lee Filters. This greatly spreads and softens the light source, but also takes away a stop and a half of light. The same is true of any diffusion: while softening the light source, it also reduces the amount of light that hits the subject. The more you soften light, the dimmer it will be.

Therefore, DPs that want very soft light may need to use high output fixtures in order to have a bright enough light to expose properly. 

There are many other types of diffusion gels available which spread and soften the light in different ways, such as a Frost, Brushed Silk, Opal or Hampshire Frost. Some gels like a white diffusion gel also come in half, quarter and eighth strengths if DPs need diffusion with less softness that lets through more light.   

Another type of diffusion is to use a textile rather than a plastic gel. This is a cloth material that can be tied onto frames of various sizes such as 8x8, 12x12 or 20x20. 

In my experience there are three main fabrics which are used for diffusion. Silk is a very sheer, natural diffusion fabric and comes in different strength gradients. It offers a light to medium diffusion of light.

Grid cloth is a popular, waterproof fabric that comes in various strengths, such as a 1/2, 1 or 2. Each number refers to how many stops of light it will reduce, so a grid cloth 1 will reduce the strength of your light by 50%.

Finally, there is muslin. This is a much thicker textile which comes in either white, called bleached muslin, or a warmer cream, called unbleached muslin. Because of its thickness it takes away a lot of light when used as diffusion, which means it requires very strong, high output film lights to be used.
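As a quick sanity check on all of those numbers: each stop of loss halves the light, so the fraction that makes it through is 0.5 raised to the stop loss. A tiny sketch using the strengths mentioned above:

```python
# Each stop of loss halves the light that gets through the diffusion.
def transmission(stops_lost):
    return 0.5 ** stops_lost

print(f"216 White Diffusion (1.5 stops): {transmission(1.5):.0%} passes")  # 35%
print(f"Grid cloth 1 (1 stop):           {transmission(1):.0%} passes")   # 50%
print(f"Grid cloth 2 (2 stops):          {transmission(2):.0%} passes")   # 25%
```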

Other equipment, such as a softbox, uses this same idea of placing a diffusion substance in front of the light, but attaches it directly onto the front of the fixture.

Diffusion can be used in front of artificial fixtures, but it can also be placed between a subject and the biggest light source there is - the sun. Typically this will be done with one of the textiles mentioned on a metal frame such as a 12x12.

The other way to soften a light source is to bounce it. A light will be placed so that it faces away from the subject, but towards a surface that is used to bounce that light onto the subject. Like with diffusion, bouncing light can greatly diminish the strength of the source - so you need a higher output light to compensate. The further a bounce is from the subject the weaker the light will be, but the softer it will be.  
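That distance fall-off follows the inverse square law. Treating the bounce, very roughly, as a single source (a real bounce is a large area source, so this is only an approximation), intensity drops with the square of the distance:

```python
# Inverse square law: intensity falls with the square of the distance.
def relative_intensity(d_ref_m, d_new_m):
    return (d_ref_m / d_new_m) ** 2

# Moving a bounce from 2m to 4m away from the subject:
print(f"{relative_intensity(2, 4):.0%} of the light remains")  # 25% - a 2 stop loss
```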

Again, light is usually bounced off a textile, such as muslin. Other surfaces, such as a poly board or even the walls inside a house, can also be used as a bounce. Just remember that whatever colour the bounce is will affect the colour of the bounced light.

Finally, a book light takes this idea to the extreme, by first bouncing a light and then diffusing the light that comes from the bounce. This makes it super soft, but also means that very high output lights are needed to compensate for the large amount of light that is lost.  

EQUIPMENT FOR HARD LIGHT

Creating hard light is much simpler. The easiest way to get hard light is simply by positioning a direct source of light straight at the subject without diffusing or bouncing it. This could be direct sunlight or a direct film light. The smaller and more direct the source of light is relative to the subject, the harder and more contrasty it will appear.

Also, minimising the number of light sources is a good way of creating hard, high contrast light with more shadows. The more light sources that are added to a space, the more the light will be evened out.

A cheaper way of getting strong, hard light, without putting up big film lights, is to use a mirror. Mirrors can be used to directly angle a hard beam of light at a subject, without diffusing that light. 

You can also position practical sources which you see in the shot, such as a bare lightbulb. As long as these don’t have diffusion they will create a hard source of light. 


Editing Transitions Every Filmmaker Should Know

In this video I’ll go over a few types of edits, so that whether you’re a cinematographer, a director, an editor or an audience member, you’ll have a better idea of how and why films get assembled the way they do.

INTRODUCTION

If shooting a movie is like growing produce and gathering a bunch of different ingredients together, then editing is where those ingredients are organised, selected and prepared in a specific way to create a meal.

From simple cuts, to graphic matches, to whatever is happening here. But we’ll get to that later. 

There are many different editing techniques and types of transitions that can be used to put stories together. 

In this video I’ll go over a few types of edits, so that whether you’re a cinematographer, a director, an editor or an audience member, you’ll have a better idea of how and why films get assembled the way they do.  


CUT

Let’s start by looking at a basic cut and then examining some of the creative ways it can be used. 

A cut is simply the act of taking a portion of a video clip and joining it with another clip using editing software, resulting in a seamless transition from one shot to another.

Cuts are often used to create a sense of continuity or, alternatively, to establish a clear change in time or location. The speed at which a series of cuts happens can also unconsciously affect the way that the audience interprets the film: whether it feels more hurried and panicked, or more slow and realistic. 

In a dialogue scene, cuts can be used to switch between the person speaking and the person listening, maintaining the flow of the conversation. This is known as a shot reverse shot.


SHOT REVERSE SHOT

Often, cinematographers shoot these shots from the same side of an invisible line - following what is called the 180 degree rule - and use framing to balance each shot with its reverse shot.

So if character A appears on the left of frame for a close up and character B’s shoulder is on the right of frame, this will get switched for the reverse, where character B appears on the right of frame and character A’s shoulder is on the left of frame.

SHOT SIZES

Editors can also cut in or out of different sized shots. For example from a medium shot to a medium long shot. Cutting from tight to wide could be used as a way of giving more context or information to the audience. For example, here we cut wider as the character’s attention drifts to the bag on her lap.

Cutting from wide to tight focuses the information that the audience is given. For example, cutting from a wide to a tight close up shows which character is most important and allows us to more clearly read their reaction or how they are processing a moment.

INSERT

A variation of this cut is called an insert. This cut shows a close-up of a specific detail or object within a scene. It is usually a brief shot that is inserted into the sequence to provide a visual emphasis or to provide additional information to the audience. It’s a way for the editor to say, “this thing is important.”

All of the shot transitions we’ve discussed so far have been continuous, meaning they portray the scene as unfolding in real time, and try to uphold the suspension of disbelief.


JUMP CUT

The jump cut is a way of breaking continuity in a jarring and abrupt way.   

Jump cuts are created by removing a portion of the footage from the middle of a shot, causing the subject to appear to "jump" from one position to another within the same sized frame. This effect can be used to create a feeling of unease or tension, as it disrupts the viewer's sense of time and continuity. 

However, in other contexts, such as vlogs, where this cut is made as a practical way of quickly removing mistakes or filler content, the jump cut is more whimsical in tone and conveys the passing of time.  


CROSS CUTTING

To amp up tension and create a feeling of suspense, some filmmakers may choose to alternate between shots that take place in two or more different locations - that would otherwise be separate scenes unto themselves. 

This is called cross cutting. It is a way of showing different actions or events happening simultaneously or in close proximity to each other. A traditional example of this would be cutting between the perspectives of different characters in different locations during action sequences to provide a contrast between overlapping scenes while simultaneously injecting a faster energy into the cut. 

Anticipation would be reduced if each scene was allowed to play out in full, before cutting to the next scene. 

Or a more unconventional example is how it is used here to intercut between scenes with the same characters, happening at the same time, in different parallel universes.


MATCH ON ACTION

Not only is this transition an example of cross cutting, but it’s also what we call a match on action or graphic match cut.

This is used to create a visual connection between two shots that share a visual similarity in shape, colour, texture, or movement.

In this case, it cuts from a close up, to another close up, where the actor is in the same central position in the frame and occupies the same space. This is a technique that The Daniels did often in this movie.

These shots can also be a bit subtler and based more on a thematic connection or on movement. A famous example is in 2001: A Space Odyssey where we start on the thematic height of technology in a pre-human world - a bone used as a club - and cut forward thousands of years to a similarly shaped spaceship - the new height of technology.   

But in order to make this kind of cut in the edit, it usually needs to be planned for and shot during production. There are a few ways of doing this. You could use a screengrab from the first shot and use that to approximately line up the actor in the same position for the second shot. 

Or, you can use what is called mix and overlay. This is performed by the VT operator on the set using software such as QTake. They take a freeze frame from the first shot, reduce its opacity, and overlay it on top of a live image of what the camera is seeing on a monitor. The camera can then be lined up until it matches the framing of the first shot.
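Under the hood, that reduced-opacity overlay is just a per-pixel weighted average of the freeze frame and the live feed. Here’s a minimal sketch of the idea in Python with numpy - not how QTake actually implements it, just the underlying maths:

```python
import numpy as np

def mix_overlay(freeze, live, opacity=0.5):
    # Per-pixel weighted average; both frames must share the same shape.
    return (opacity * freeze + (1 - opacity) * live).astype(np.uint8)

# Two dummy 1080p RGB frames standing in for the freeze frame and live feed:
freeze = np.full((1080, 1920, 3), 200, dtype=np.uint8)
live = np.full((1080, 1920, 3), 50, dtype=np.uint8)
print(mix_overlay(freeze, live)[0, 0])  # [125 125 125]
```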

If you don’t have a VT operator and need to do an exact match, the old school way of doing it is to take a see through sheet of plastic, overlay it on a monitor and trace the outline of whatever needs to be matched with a pen. Then when you need to shoot the second shot, you can take this stencil and match the frame to the traced outline.

CONTRAST CUT

So far we’ve only talked about the visual side of making editing transitions, however sound can also play a major role in edits.

A contrast cut can be created by placing shots which are noticeably different next to each other, whether through differences in colour, tone, texture, shape, movement, or sound.   

Here, a series of cuts get louder and louder until a contrast cut is made to a far quieter shot. This supports the moment that the character goes from a constantly growing agitation and non-stop frenetic life, to a breakdown moment of reflection.

J & L-CUTS

Another way that sound can be used in transitions is by extending it, so that it overlaps the two shots.

When the audio from the next shot is heard before the video actually cuts to that shot, it’s called a J-cut, which forms a J shape on the timeline - such as in this example from Apocalypse Now, where the character starts to hear the building sound of a helicopter before we cut to it.

The opposite is called an L-cut, where the sound from the first shot extends into the next shot. Again, this time the choppy sound of helicopter blades bleeds into shots of a soldier alone in a room. 

Throughout this movie, scenes and shots bleed into each other, blending time and the events together in a dissociative way.


DISSOLVE

Rather than using normal cuts, the editor often used a series of dissolves to enhance this feeling. 

This is where one shot gradually fades out while the next shot gradually fades in, overlapping briefly in the process. This creates a blending effect where the two shots appear to merge or dissolve into each other.
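Mathematically, a dissolve is that same kind of weighted blend, with the mix ratio animated over time: the outgoing shot ramps from full to zero weight while the incoming shot does the opposite. A small sketch of the ramp, assuming an arbitrary 24 frame (one second) dissolve:

```python
# A dissolve ramps the blend between two shots over a fixed number of frames.
def dissolve_weight(frame, duration):
    # 0.0 = all outgoing shot, 1.0 = all incoming shot.
    return min(max(frame / duration, 0.0), 1.0)

for f in range(0, 25, 6):  # sample a 24 frame (one second) dissolve
    t = dissolve_weight(f, 24)
    print(f"frame {f:2d}: {1 - t:.2f} of shot A + {t:.2f} of shot B")
```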

Dissolves are often used to indicate the passage of time, to suggest a change in location or mood, or to create a dream-like or nostalgic effect.

This sequence dissolves three different shots over each other: a shot of helicopters, a shot of a soldier and a shot of a fan. The fan and the helicopter blades also provide a subtle graphic match. The effect is a dreamy one, that, at the same time, hints that these past memories of combat haunt the soldier.

Dissolves can also be used more sequentially, one after another, to convey a passing of time.

FADE TO BLACK

When a shot is dissolved against a black screen, this is called a fade, or a fade to black. This effect is often used to indicate the end of a scene, to create a sense of closure, or to suggest the passage of time - kind of like a chapter in a book.

In this case a fade to black is used to end a turning point in the screenplay. When the image cuts back in from a black screen, it is clear that a portion of time has passed and we are viewing the final epilogue. 

WIPE

Occasionally filmmakers will use a more stylised transition, like a wipe. 

This is where one shot is replaced by another shot in a distinctive manner that involves a visual element "wiping" across the screen, gradually revealing the new shot. The visual element can be a simple line, a geometric shape, or any other kind of graphic element. 

If we break this transition down frame by frame we see that a wipe is combined with a dissolve and a graphic match to magically transport a character from one location to another. If we play it back at full speed it creates the illusion that this happens in one shot.


HIDDEN CUT

This is a type of hidden cut, where the transition happens on a moment of movement to disguise that an edit took place. 

If we go all the way back to the example from the beginning of the video and break it down - we’ll see another hidden cut, or in fact six hidden cuts.

Each cut happens on a moment of fast movement which is a continuation of the movement from the prior shot. When these clips are organised and played back at full speed, each cut is magically disguised.


Cinematography Style: Bill Pope

In this episode of cinematography style I’ll go over how Bill Pope broke into the world of film, look at his philosophy and ideas on cinematography and go over some of the gear and equipment that he uses to execute his work.

INTRODUCTION

After starting out by shooting music videos in the 80s, Bill Pope progressed into being a highly experienced feature film DP responsible for photographing many high budget Hollywood movies.

At the core of his work, he focuses on collaboration and using the structure that directors provide to fill in the story’s tone visually.

In this episode of cinematography style I’ll go over how Bill Pope broke into the world of film, look at his philosophy and ideas on cinematography and go over some of the gear and equipment that he uses to execute his work. 


BACKGROUND

After finishing graduate school at NYU, Bill Pope didn’t stay in New York for long before deciding to make the move out to California, where far more work was available in the film industry.

“I just kept hammering at it. I’d just shoot for free. I went over to AFI and volunteered to shoot movies for free and I did. I shot many. You fall in with a group of people and I would always just shoot for free. And I worked for free for years. During the day I would be a PA and then I’d take time off to shoot whatever was necessary.”

While working as a camera assistant on a Roger Corman movie, he even used the camera and spare short ends - the unused film stock left over on a reel - to shoot a short film in his garage on weekends.

One of the networks he built during this time was with a friend whose production company made music videos with budgets between $2,000 and $5,000. Later, when MTV kicked off and they were looking for DPs with music videos on their reel, Pope’s name came up. 

During this time he spent about six to eight years shooting music videos for various well known artists, before director Sam Raimi persuaded Paramount to let Pope shoot his first feature: the superhero movie Darkman.

From there he became a regular features DP, who over the years worked for many other well known directors such as The Wachowskis, Edgar Wright, Jon Favreau and Joe Cornish.   


PHILOSOPHY

“I would say to a young cinematographer the most important thing you have is relationships. And the earlier you form them the better you’re going to be. You can have all the talent in the world but if you don’t know anybody you’re not going to go anywhere.”

Not only does building relationships with directors have the most obvious and most important effect of allowing you to get work, but it also has other downstream effects. Working for a range of different directors exposes cinematographers to many different situations and many different ways of working. It also allows you to pick up on various strengths and weaknesses that directors may have, which will strengthen your skillset as a DP.

“We don’t make the framework. We bring the atmosphere. We bring the thread. We bring the tinsel but they bring the tree.”

Directors will first approach Pope either with a meeting to talk about a project or by sending a screenplay, and then, a bit later, by sending through a shot list or a storyboard.

Although he has compiled shot lists for certain directors and certain projects, he usually prefers to be a sounding board for the ideas, shots or storyboards that the directors put forth, and then gives his unfiltered feedback to them.

“You’re just bouncing things back and forth. In a way, I’m the first person to see the movie. So, I’m the first audience member and I just say to directors, “When I start, look I can be really annoying, because I’m going to say what is in my head and I’m not going to filter it too much. I’m gonna be direct feedback to you because I feel like that’s my job.” 

This may involve him proposing anything visual, such as an alternative shot, tone, angle, lens, costume palette or art direction to the directors that he feels better conveys the story. These ideas can either be rejected, accepted, or through collaboration, morphed into a new idea.

This process happens both during prep, as well as on the day of shooting, where he needs to be adaptive and quick on his feet to make loads of quick decisions about how to best tell the story. 

This is especially necessary when working for a director such as Edgar Wright who likes to cut quickly to a variety of angles - which makes the shooting schedule very tight. 

Making good decisions, and making them quickly, is an important quality for a DP to have.

Using his analogy, the director brings the script and structure, or tree, and then they both decide what ornaments, like the tone, shots, or palette, they can add to it, to better accentuate it.   


GEAR

Since he started his career in the 80s, long before the first viable digital cinema cameras were released around the early 2010s, much of what he has shot was done using film. Although his recent projects have mainly been shot digitally on versions of the Alexa, he does occasionally still get opportunities to work photochemically.

“It’s just a discipline. You know the mag is going to last 10 minutes long. The slate comes in fast. Everyone is on their toes. It becomes sort of flaccid in the digital world. They’re on their phone, they’re like whatever. On film it’s a complete old fashioned discipline. It’s gotta be in a can. It’s gotta be split up. It’s gotta go to the lab. It’s gotta be returned. Everything’s got a deadline to it and a precision to it that people step up to.”

On top of film changing the set dynamics, he also values it for its look - which renders images with an elevated realism. On Baby Driver he made the decision to shoot the vast majority of the movie on a single film stock, the more sensitive Kodak 500T. He did this in order to preserve a similar continuity of grain across the entire movie. The more sensitive a film stock is to light the more prominent the grain will be - so a 50D stock will have finer grain than a 500T stock.

However, he did occasionally use the less sensitive Kodak 250D for certain car mount shots, where there wasn’t enough room to add a mattebox to the front of the lens, which, if shooting with a 500 speed stock, would be needed to hold ND filters to cut down on light.  

“Ordinarily I was shooting it all with 5219. Just so that I have enough speed to deal with polarizers and overcranking.”

Polariser filters are pieces of glass that only let through light waves oscillating in a certain orientation, while blocking the rest. Practically, this filter can be used either to introduce more contrast to darker tones, most commonly in the sky, or to reduce or manage reflections - for example, cutting down the reflections seen when shooting through a car window.

However, this filter also reduces the amount of light let into the camera by around one stop. Therefore, Pope chose a faster film stock that was more sensitive to light to counteract this.

He also mentions ‘overcranking’. This means shooting at a frame rate which is faster than the regular 24 frames per second most commonly used. If the frame rate is doubled to 48 frames per second, it creates slow motion, but it also reduces exposure by 1 stop. 

So, again, choosing a more sensitive stock helped counteract that.

This overcranking was taken to the extreme on The Matrix, which was also shot using sensitive 500T for interiors. Because many of the fight scenes had super slow motion shots in them, Pope had to light these sets so that the camera could shoot at up to 300 frames per second. This was done on a high speed Photosonics film camera. 

300 frames per second requires about 4 stops more light than regular 24 frames per second. This means that to keep his shooting stop on the lens at T/2.8, he had to light the spaces at least 4 stops brighter - bright enough to expose at T/11 at normal speed.
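You can check that arithmetic yourself: at a fixed shutter angle, exposure time scales inversely with frame rate, so the stop loss relative to 24fps is log2(fps / 24). A quick check of both figures mentioned in this article:

```python
from math import log2

def stop_loss_vs_24fps(fps):
    # At a fixed shutter angle, exposure halves every time the frame rate doubles.
    return log2(fps / 24)

print(f"48 fps:  {stop_loss_vs_24fps(48):.2f} stops")   # 1.00
print(f"300 fps: {stop_loss_vs_24fps(300):.2f} stops")  # 3.64 - about 4 stops
```

That also lines up with the lens maths: T/2.8 to T/11 is four whole stops (2.8, 4, 5.6, 8, 11).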

This is fine when shooting exteriors lit by strong sunshine, but is far more difficult when shooting the many interior sets or night scenes - which required many, high output film lights.

For the lobby scene which was shot with high frame rates, the idea was to add an ambient top light to the space and then key the scene with a side light. This sounds simple in principle, but getting to this super bright level of light, required doing this on a massive scale. 

His team rigged 1,000 tungsten par cans to the roof for ambience and about nine high output tungsten dino lights that side lit the scene through 12x12 grid cloths that softened the light.   

It also meant that any practical lights that could be seen in the shot had to be custom built using pyrex, and specially fireproofed, so that the extremely high output, hot sources of light didn’t start any fires.

While most shots in the Matrix were achieved on practical sets, there was also some visual trickery, such as the iconic bullet dodge shot. Rather than trying to physically move the camera around the actor, extremely quickly and shoot in extreme slow motion, Pope’s team instead created a rig that mounted many stills cameras placed around the actor in a green screen studio. 

They could then use an image from each camera sequentially, so that it gave the illusion that the camera was moving. This shot of the actor was then cleaned up and dropped into an environment, to achieve the famous circular slow motion shot. 

One of the metaphorical ornaments that was hung on The Wachowskis’ metaphorical tree was how they used colour. The real world in the film was devoid of sunlight, so it was always lit to a cool, blueish colour temperature, while the artificial, computer generated world of the Matrix had a sickly green tint to it, inspired by the colour of computer cursors of the time.

When working on these larger action based movies that require getting a lot of different shots, he’ll often shoot with multiple cameras to maximise the coverage he can get.

This means hiring crew, such as camera operators and dolly grips, who will technically collaborate with him to frame the images and influence the camera movement.

“Any director likes to have other filmmakers around them. You don’t want a piece of meat who isn’t going to give you an opinion. You want people who are smart and who are contributing all the time. I mean, I always consider operators and dolly grips to be part actor. They have to be able to interact with the actors themselves and feel the drama because the actors are never going to do it twice the same way. You have to feel it. You have to be one with them.”

The movies he shoots often use smooth dolly movement to tell stories by doing things like pulling out from an important object to reveal context, or very slowly pushing in with the camera to elevate an emotional scene or experience the character is undergoing.

Each camera move is another little ornament for the tree.

He’s used both anamorphic and spherical lenses, but has a tendency to shoot grander action stories in the anamorphic format, often on Panavision anamorphics such as the G-series, T-series or older C-series lenses.

When deciding where to place multiple cameras, it’s useful to have a viewfinder or pentafinder to more accurately find the frame before placing the heavy cinema cameras in place or laying dolly tracks. 

There are a few photos of him using what looks to be an interesting viewfinder from Kish Optics, which has a video tap inside that can send an image to a little monitor. This makes it easy for both him and the director to evaluate and find the right frame.


CONCLUSION

The diversity of the films that he has shot makes it tricky to pin Pope down to one particular photographic style. Rather, he works in a close collaborative relationship with each director in a different way, acting as a sounding board for their ideas and providing his own perspective on how best to elevate and execute each story visually, using his deep experience and knowledge of cinematography.


How James Cameron Shoots A Film At 3 Budget Levels

In this episode I’ll look at three movies shot at three increasing budget levels by James Cameron - Piranha II, The Terminator and Avatar: The Way Of Water - to uncover the similarities between them and how he built his career and style as a director.

INTRODUCTION

There are few directors who are as well known by mainstream audiences as James Cameron. After all, he’s directed some of the biggest Hollywood blockbusters of all time.

In this episode I’ll look at three movies shot at three increasing budget levels by James Cameron - Piranha II, The Terminator and Avatar: The Way Of Water - to uncover the similarities between them and how he built his career and style as a director.


PIRANHA II - $600,000

“I started working in special effects and I did a little art direction. In fact I got my first directing gig which was Piranha II as a result of Galaxy of Terror. I was doing second unit directing on that film. And they needed a cutaway to a dismembered arm lying on the ground with worms crawling around it. So I have them put a piece of zip-cord, strip it down and lay it underneath the arm and we make it wet so that it’ll conduct and there’s an electrician around behind the set. And the only cue I could think of was action - to tell him to throw the switch. 

What I didn’t know was that these two producers who were looking for a director for Piranha II had come in behind me and they were looking over the setup. And I said, ‘Alright, roll camera’ and they rolled, ‘And action!’ and the electrician behind plugged in and the worms all started to move I said ‘that’s good that’s a cut’. He unplugs. They didn’t see him. Worms stop moving. I turn to them and they say, ‘Come with us we’d like to talk to you about a film we’re making.’”

As the title suggests, Piranha II was the sequel to Piranha, which was basically a B-movie rip off of Jaws that replaced the shark with genetically modified flying Piranhas that could somehow survive out of water. The low budget sequel was taken on by an Italian producer, Ovidio G. Assonitis, known for making B-horror movies. 

James Cameron, who had built up a career as a production designer and special effects director, was initially brought onto the project to do the special effects, but after the first director was quickly fired, due to a creative disagreement with Assonitis, Cameron took over as the director of the movie.

Filming of the exteriors took place in Jamaica, with some interior scenes filmed on a soundstage in Rome.

Due to the low budget of $600,000, and an allegedly controlling Italian producer, almost the whole crew was made up of Italians, including the cinematographer, whose name was spelled incorrectly in the credits. This quickly made communication difficult for Cameron, as apparently most of the crew could not understand English.

As the story goes, Cameron was eventually fired from the shoot as director due to constant interference from, and disagreements with, the producer. After firing two different directors, the producer finished directing the rest of the B-movie himself. However, as Assonitis felt that the movie needed an anglicised director’s name in order to sell, Cameron’s name was kept in the credits.

Cameron was also allegedly not allowed to view rushes of the footage he shot, and was kept out of the editing process. Although he would later buy back the footage from the distributor so that he could recut, re-score and release the originally intended version for home video.

Although Piranha II will probably not go down in history as a cinematic masterpiece, the low budget horror movie still contains a few directorial stylistic threads that would continue throughout his career. 

One, his fascination with the ocean and setting films in or around water. 

Two, telling stories in a way that draws the audience into the magical world of cinema where realism is replaced by spectacle and plausibility is replaced with a sprinkle of cinematic magic.

Three, his fascination with using special effects and pushing the boundaries of whatever technology he has at his disposal.

And, finally, presenting larger than life stories with lots of action and set pieces, which are told through the eyes of a focused cast of main characters.


THE TERMINATOR - $6 Million

In the wake of the release of Piranha II, James Cameron fell ill and one night had a fever dream about a metallic, robotic torso. This sparked the idea for his next sci-fi slasher script, about a cyborg assassin sent back in time to kill someone. He called it The Terminator.

“The first film that I directed that I got fired off of was called Piranha II and I think it was about $600,000. And the second film was The Terminator and that was $6 million.”

Although it was a substantial jump from his first low budget flick, $6 million was still considered a fairly limited budget to tell the kind of action heavy, ambitious sci-fi movie he had written.

“It’s not really about machines from the future. It’s sort of about our relationship with technology. But it’s really kind of about our human potential for dehumanisation ourselves. You see cops as examples. And then, you know, the other idea not to trust technology and even not to trust the fabric of reality.”

Cameron employed Adam Greenberg as the cinematographer on the project, who worked with a relatively small camera and lighting package - without any expensive, elaborate gear setups.

Terminator was shot on an Eastman 250T film stock with an Arri 35 BL for dialogue scenes and an Arriflex 35 III for MOS, action or insert shots that didn’t need synchronised sound.

Since the script featured lots of night scenes, Greenberg chose a set of spherical prime lenses with a fast aperture that could let in lots of light - the Zeiss Super Speeds. Also, based on the classic 80s halation that you can see around the highlights, it’s likely that he used a diffusion filter, such as a Tiffen Low Con. 

When it comes to action, Cameron edits it tighter and more coherently by creating little pockets of quick cuts that group different shots of the same action or idea together, rather than just cutting back and forth between various unrelated shots.

“I think it’s a very analytical process. I remember literally having a discussion with the editor and saying, ‘Look, in this battle sequence every single time we’ve improved the battle we’ve done it putting like with like, by putting the attack with the attack, the retreat with the retreat and grouping things that way.”

The cars in the chase sequences were never allowed to drive faster than 40 miles per hour. So, Greenberg helped to create a faster sense of motion with light. He placed film lights with operators on moving vehicles next to the shooting vehicle. The operators could quickly pan and move the source around to create lighting and reflections that gave the illusion that the streetlights were passing by quicker than they actually were. 

Another analytical way of creating a menacing tone for the Terminator was to often shoot him from a low angle, while shooting Sarah Connor from a more comfortable, familiar, neutral angle.

Overall he also used a large amount of backlight - placing hard light sources behind the actors in almost every night scene, generally using softer sources for Sarah Connor and harder backlight for other characters. And don’t forget about the 80s hair light. This created a more silhouetted, shadowy look which both helped to hide some of the visual effects and created the moody noir feel that they were after.

Another way of masking special effects was through a combination of clever angles and editing. Cameron’s philosophy was that if you supply the audience with a shot A and a shot B, they will quickly fill in shot C using their own imagination. A good example of this is the famous eye extraction scene. By showing shot A, a profile shot of the blade lifted up to an eye, and shot B, blood dripping into water, the audience fills in the gruesome shot C - the cyborg cutting out its own eye - by themselves.

AVATAR: THE WAY OF WATER - $400 Million

After the huge financial success of Avatar in 2009, a sci-fi epic which pushed the technological boundaries of motion capture and CGI, Cameron promised a series of sequels.

It took over a decade of development and production to create the first in a series of follow ups - which were all filmed simultaneously.

Avatar combines live action photography with motion capture that is later used to create computer generated characters in computer generated environments. As I mentioned before, a trademark of Cameron’s work is how he pushes the technical limits of gear and technology. He has shown an interest in visual effects his whole career - so much so that he started his own VFX company, Digital Domain, in the 1990s.

Since Avatar uses an unusual process to make the movie, let’s break down the full workflow of how it was prepped, shot and then finished in post.

Cameron selected Russell Carpenter, his old cinematographer who worked on Titanic, to shoot the film. The photographic process started in 2018, when Carpenter oversaw the virtual lighting of the CGI sequences using a programme called Gazebo, developed by the visual effects house Weta FX. He pushed for lighting with different colour nuances for different virtual spaces. This virtual lighting was also valuable because he would have to match it in the next step of the process: shooting live action scenes with real human characters.

These scenes were filmed in 3-D, using a 3-D rig designed by one of Cameron’s companies, with two Sony Venice cameras and Fujinon zoom lenses. They had to frame for two different aspect ratios during shooting. The taller 1.85:1 was used for 3-D presentation, while the wider 2.39:1 ratio was used for regular 2-D screenings.

They also made the unusual decision to shoot at 48 frames per second, instead of the more regular 24, which decreased the amount of motion blur. To compensate for the one stop loss of light that comes from shooting at a higher frame rate, they shot using the Venice’s higher ISO base of 2,500.    
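The compensation works because ISO and frame rate pull on exposure stop for stop: doubling the ISO buys back the stop that doubling the frame rate costs. A tiny sketch of that trade - the ISO pairing here is purely illustrative, not Sony’s published figures:

```python
from math import log2

# Stops lost by doubling the frame rate vs stops gained by doubling the ISO.
stops_lost = log2(48 / 24)        # shooting 48fps instead of 24fps: -1 stop
stops_gained = log2(2500 / 1250)  # doubling ISO (illustrative values): +1 stop
print(stops_gained - stops_lost)  # 0.0 - exposure is back to even
```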

Once the live action portions were done they turned to capturing the virtual performances using motion capture technology.

“So, it’s a set without a set. Your coral reef is going to be some pieces of tubing which are screwed together to create contact points for the actors. So if they’re pulling themself through coral we don’t build the coral. We build that which they need to touch and then we warp reality to fit their hand contacts later.”

Motion capture doesn’t really use cameras in the traditional sense. Instead, actors are put in skintight suits that have reference markers, or sensors, on them. A few cameras, or sensors, are then placed around them to capture how the actor moves in 3-D space.

This data is then later used in post production to give a digital character the same movement as the actor on set.

For the underwater sequences with digital characters, Cameron played out scenes with his actors in mocap suits in a custom built pool in a studio. Actually shooting underwater gave a more realistic sense of motion but came with some challenges. They had to construct underwater housings for the mocap cameras, and shoot at a wavelength of light near ultraviolet which would better capture motion through water. The pool was lit by rigging 60 Skypanel S-60s over the water. They then placed little plastic beads on the water’s surface. This both diffused the light and minimised capturing reflections underwater - which would cause false data.

After all the live action and motion capture footage had been acquired they could then plug the footage back into the pre-designed lighting and virtual environments. From there an intensive post production team was used to build and finish the characters, movements and environments in the film.


How Cinema Cameras On Movies Are Built

This video is a template for how to put all the pieces of a cinema camera together, as well as a guide to what camera accessories are commonly used and rented on industry level jobs.

INTRODUCTION

In one of the early videos on my channel I created a guide for how cinema cameras are built and all the accessories that are usually attached to them on an industry level production.

Since that video has been removed, due to copyright issues, I thought I’d update that video and create a new guide that can act as a template for how to put all the pieces of a camera together, as well as a guide to what camera accessories are commonly used and rented on industry level jobs.

Before we begin, just a reminder that camera technology and gear gets updated often, and although the gear shown in this video may soon be outdated, the types of accessories shown are pretty standardised across the industry and should at least remain relevant in concept in the future.      


BASE PLATE

Before we begin adding accessories, we first need to be able to stabilise the camera so that it can be worked on. This means mounting the bottom of the camera to a base plate which can be fitted to a tripod head.

For a standard studio build, the camera mounts onto a sliding dovetail plate that sits on the tripod head. Attaching the camera to the dovetail requires a bridge plate, such as an Arri BP-8, which screws into the bottom of the camera.

Bridge plates are designed with two holes that can accept bars or rods which come in either 19mm or 15mm sizes. These rods are used to mount other accessories which we’ll get to later. 

The release on the side of the bridge plate is moved to lock and unlock the plate from sliding. Once unlocked, the camera is able to slide into the dovetail base. Sliding the camera is also a quick way of balancing the weight of the camera on the tripod head so that it is not too front heavy or too back heavy.     

Base plates and bridge plates on industry level cinema cameras are standardised to the same design and size so that the different cameras, from an Alexa to a Venice to a Red, can be securely mounted onto other rigs, such as a car mount, or a stabilised head, without having to change the mounting accessories on the camera.  


LENS

Next, we can put on a lens. The most common cinema mount found at the back of the lens is a PL, or positive lock, mount, although there are also PV mount lenses offered by Panavision, as well as large format LPL mounts. The principle for attaching a lens to the camera is the same.

For a camera to accept a lens the mount at the back of the lens needs to be compatible with the mount on the camera. 

Line up the notch on the lens with the locating pin on the mount, found at the 45 degree position, and gently insert the lens. Once the lens is in, turn the PL release clockwise to lock the lens in place. To remove a lens, hold it in place and then turn the release counter-clockwise until the lens is loose and can be taken out.


MATTEBOX

To the front of the lens we’ll attach a mattebox. This has two main purposes: to flag light from hitting the lens, which reduces flares, and to hold filters.

There are two types of matteboxes: clip-on matteboxes, which are usually smaller, take 4” or 4x5.6” filters and can be clamped straight onto the front of a lens, and rail mounted matteboxes, which are usually larger, take 6” filters, and need to be slid on and clamped to 19mm, or sometimes 15mm, rods.

For clip-on matteboxes it’s important to make sure that the back of the mattebox is the same size as the front diameter of the lens you will attach it to. For example, if you’re using Zeiss Super Speeds with an 80mm front diameter, then your mattebox back must also be 80mm wide.

To attach a matte box we loosen the release, push it onto the lens and then tighten or clamp the release until it holds securely onto the front of the barrel.


POWER

Before we go any further we’ll need power for both the camera and any accessories that need to be run. Cinema cameras typically have two methods of powering the camera, on-board power, where batteries are mounted onto the back of the camera, and block power, where cameras are tethered to a large battery with a cable.

On-board batteries are usually used for handheld camera builds, or other builds where the camera will move a lot and be mobile, while block batteries are often used for studio setups, such as when the camera is on a tripod or a dolly - which has a platform where the battery can rest.

Block power also enables on-board batteries to be replaced, or hot swapped, without powering down the camera.

In some cases, an on-board battery plate for V-mount, gold mount or B-mount batteries may be slid onto the 15mm rods at the back of the camera. These batteries don’t power the camera for as long as cabled block batteries do, but they allow the camera to move untethered.


MONITOR

Now that the camera has powered up and we can see an image, let's attach a monitor onto the camera that the DP or operator can use to assess exposure and frame. 

Using an articulating arm, such as a Noga arm, we’ll screw one side into the base of the monitor and the other side into a quarter inch mounting point on the camera, such as into the handle or the top plate.

There are two cables that need to be plugged in to get the monitor working. The first is a power cable, such as a D-Tap or a two pin, which we’ll plug into a port on the battery plate, into the battery itself or into a port on the camera. It’s also possible to power monitors with their own batteries. 

The second cable is an SDI which provides a video feed from the camera to the monitor. This is plugged from an SDI output on the camera into an SDI input on the monitor.  


TRANSMITTER

Occasionally, directors and focus pullers like to stand next to the monitor on camera, but more often than not they like to have a transmitted video feed to their own monitor.

To set this up we’ll attach a wireless transmitter, such as a Bolt 500, to the camera. Again, this requires two cables: a power cable and an SDI video feed.

This Bolt will then transmit a live video feed wirelessly from the camera to a receiver. The signal is then either received by a VT operator at their station, where it will be output to different monitors, or it can be directly attached and cabled to a director’s monitor, or paired with a sidekick receiver that goes into a focus puller’s monitor - so that they can see a live video feed of what the camera is recording while remaining physically separated from it.


FOLLOW FOCUS

Next we’ll need to be able to change the focus on the lens.

There are two types of gear we can use: a standard follow focus, that needs to be operated by standing next to the camera, or a wireless follow focus which is operated remotely using a handset.

The 1st AC will attach a wireless follow focus motor, such as one from Arri or Preston, by mounting it onto a rod. This could be top mounted from a short rod, or bottom mounted onto one of the 15mm or 19mm rods on the bridge plate. The teeth of the focus motor need to be aligned with the gears on the barrel of the lens.

You then need to plug a power cable from the camera to the motor. Sometimes an additional MDR box may need to be mounted on the camera, which the motor will be plugged into. The focus puller can then alter the focus of the lens remotely by turning the knob on the handset. 

FOCUS ASSIST

A modern tool that some 1st ACs use is a focus assist device, like a Cine Tape, which sends out an ultrasonic signal to measure the distance between the camera and a subject. This allows the 1st AC to more accurately judge and pull focus. 

This tool is especially useful when shooting on film, where focus needs to be judged more off distance than by looking at an image on a monitor. 

The focus assist is mounted using an articulating arm. It usually comes with a box that needs to be mounted onto the camera and plugged into a power source as well as the focus assist itself. This then wirelessly sends a distance readout to the box as well as to the focus puller’s handset.
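The principle behind an ultrasonic rangefinder like this is simple time-of-flight: time the echo, then halve the round trip. A back-of-the-envelope sketch - the speed of sound is the usual room temperature approximation, and none of this reflects the Cine Tape’s actual internals:

```python
SPEED_OF_SOUND_M_S = 343.0  # approximate speed of sound in air at ~20C

def distance_m(echo_round_trip_s):
    # The pulse travels to the subject and back, so halve the round trip.
    return SPEED_OF_SOUND_M_S * echo_round_trip_s / 2

# An echo that returns after 17.5 milliseconds:
print(f"{distance_m(0.0175):.2f} m")  # 3.00 m
```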

SOUND

Finally, sound equipment may be attached. This could either be a small box, such as a Tentacle, which is plugged into the timecode port and used to maintain the same timecode between the audio recorder and the camera, or a device which feeds the audio signal from a mixer into the camera through an input such as a 6-pin Lemo.
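Timecode itself is just a frame count dressed up as hours:minutes:seconds:frames, which is why two devices that agree on a start point and a frame rate stay in sync. A minimal sketch of the conversion, assuming non-drop-frame timecode at 24fps:

```python
# Convert non-drop-frame timecode to an absolute frame count and back (24fps).
FPS = 24

def tc_to_frames(tc):
    h, m, s, f = map(int, tc.split(":"))
    return ((h * 60 + m) * 60 + s) * FPS + f

def frames_to_tc(n):
    f, s = n % FPS, n // FPS
    return f"{s // 3600:02d}:{s % 3600 // 60:02d}:{s % 60:02d}:{f:02d}"

print(tc_to_frames("01:00:00:12"))  # 86412
print(frames_to_tc(86412))          # 01:00:00:12
```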


CONCLUSION

Cinema cameras are modular by design and therefore capable of being configured into an array of different shapes and sizes depending on what rigs will be used, or how the DP would like to operate the camera.

If you’re new to cinema cameras and don’t have much hands-on experience, the easiest way to practise assembling cameras is at a gear rental house. Reaching out to them to intern there or asking them if you could pop in and job shadow a bit will allow you to see various cameras in the flesh and get experience on how to assemble them.

It’ll also help you to better learn the gear and be able to put together a camera gear list of what you need to rent for a shoot - so that, hopefully, when you see a cinema camera on your next job, you’ll have some experience and knowledge of how to go about setting one up.


How Pro Filmmakers Gear Check Before Shooting

Let's look at how professional crew members gear check cinema cameras before shooting with them.

INTRODUCTION

If you’re new to filmmaking you may not know that prior to shooting, the technical crew always attends a gear check to carefully go through all of the equipment that will be used to shoot a film. 

In this video I’ll explain why this is done, then take you through the entire process of how a cinema camera is gear checked, so that you can do it the same way as the pros on your next shoot. 

WHY DO A GEAR CHECK?

While smaller corporate, documentary or solo shooter jobs may be done by cinematographers who own, rent and operate their own gear, most industry level jobs are shot with high end equipment which is rented from gear houses. 

For commercials or short form content, crew are usually booked and paid for a half day to do a gear check at the rental house a day prior to shooting, while long form crews on features or series may be booked for a few days, or even a week, of gear checks that require a greater degree of technical prep.

There are three main reasons why industry productions do gear checks.

The first and most important reason for a gear check is to ensure that all the equipment being rented is in proper working order and that the crew has all the necessary bits and pieces to make the gear package work. For example, you never want to just grab the gear the night before a shoot, only to find out as you’re setting up for the sunrise shot that you’re missing a cable that you need to power the focus motor. I’ve even seen, with my own eyes, an extra Alexa Mini LF that wasn’t gear checked brought straight onto a job, only to find out that it was missing a sliding dovetail plate and couldn’t be rigged onto a car mount to do a sunrise driving shot.

The second reason to perform a gear check is to prevent accepting faulty or broken gear from the rental house. It’s much easier to immediately flag an issue with gear as you are testing it, than to report faulty gear to the rental house on set and then go through the blame game of when exactly the gear broke and who is responsible for fixing or replacing it.

The final reason is a bit of a rarity, but occasionally, in the event of an insurance claim related to faulty equipment, such as dead pixels on the sensor that spoil the footage, or problems from a film stock, the camera tests that were shot during the gear check may be reviewed by the insurance company or assessor.


GEAR CHECK

So, now that we know why it is so important to do a gear check prior to shooting, let’s go over how industry level crew perform a gear check from start to finish. 

The easiest way to make sure that a cinema camera is working properly and that all the necessary accessories are there is to build the camera into the same configuration that it will be built into on set. 

The focus puller will assemble the camera and attach all of the accessories, such as a lens, a mattebox, a focus motor, a monitor and a transmitter. They will build the camera into a neat form factor and make sure that all of the necessary accessories are getting power and are functioning properly. 

Crew may need to ask the floor workers at the gear house for small bits and pieces to make their build work, such as the right mattebox back or reduction ring for a unique lens, extra backup cables, or gears in a different pitch for the follow focus.

The VT operator will test that they are receiving a video feed at their station, and check that the focus puller is receiving a transmitted signal to the Sidekick receiver that feeds their monitor.

If audio will be recorded on the job, then the sound recordist will also check that their gear is working and that there is a transmitted audio feed which they can send to the director or clients. They may also rig something like a Tentacle Sync unit onto the camera, which will keep the camera’s timecode in sync with their external recorder.

Over in another section of the gear house, the grips and electrical department will make sure their equipment is also in working order using the same idea of setting up, powering and testing the gear. 

Now that the camera has been built into its shooting configuration, it’s time to start programming or marking up the lenses. The specifics of this will change depending on the kind of wireless focus handset the focus puller has, although the concept is the same - it even applies to old manual follow focus units.

Each cinema lens has markings on the barrel which indicate at what distance the lens will render sharp focus. The 1st AC will now sit and transcribe these markings from the barrel of the lens to the focus ring on their wireless unit. 

Some lenses, such as the LDS Ultra Primes or the Cooke S8/i series, carry lens data contacts - Arri’s LDS or Cooke’s /i technology - that send metadata from the lens, including focus distance, to the camera. This means that when using these lenses with certain compatible focus handsets, they don’t need to be manually programmed, as the focus distance will automatically display on the handset.

However, for most vintage lenses or incompatible handsets, the focus puller will need to manually program them. This is done by first calibrating the lens, so that the focus motors find an end stop for infinity and for close focus. The focus puller will then systematically align each distance marking on the barrel with the centre indicator mark, starting, for example, at infinity.

This distance can then be programmed into the handset so that the handset knows that for this lens, at this position on the focus motor, the reading is at infinity. This same method can also be done by manually marking old focus rings with a pen.

Once each lens in the set has been marked up, the 1st AC is ready to shoot camera tests. A slate will be marked up with the name of the production, the date, the camera’s serial number, the lens’ focal length and serial number, as well as any other info that may be requested, such as the stop, white balance or focus distance.

The camera will be positioned in front of a focus chart. The 1st will measure the distance between the chart and the film plane of the camera and set the distance marking on the lens to that same measurement. They will check that the image is sharp - which it should be. If it’s not sharp then the lens may need to be sent off to the lens technician at the rental company for adjustment.

Before shooting, the DIT will dial in the correct specs, such as resolution and codec, and any other settings specified by the cinematographer - such as a LUT or frame lines for the aspect ratio. They will then put in the slate with the relevant info and shoot a short clip of a couple of seconds to record that the lens was indeed sharp. They will repeat this process for every lens that will be used on the production. These tests are also a good way of checking that each lens covers the full sensor and that there is no vignetting, especially at wider focal lengths.

Sometimes, jobs that require VFX work will also ask for lens grids or distortion charts to be shot. These are especially necessary for anamorphic lenses or wider focal lengths that have a lot of curvature to how the image is captured.

Distortion charts have a straight black and white checker block pattern that can be used to measure the degree of distortion a lens has in post production, and then straighten, or unwarp, the image to make 3D tracking easier.

When shooting these tests it’s important to line up the camera to the centre of the chart and make sure that the chart is straight on all axes. Like with the focus tests, a front slate should be used to denote which lens is being shot as even the same brand and focal length lens in a different set may have minor differences in distortion.

Once all the tests have been shot, the card can be handed over to the DIT who will copy over the footage onto the prepared hard drives and examine the integrity of the tests to make sure that there aren’t any issues.     

If there are specialised rigs on the job like a Steadicam, gimbal or remote head, then the camera needs to be built and tested for these setups. It’s important to ensure that the camera will be able to balance on these rigs. This might involve stripping the camera into a smaller form factor.

While all of this is going on, the 2nd AC will usually be working on labelling gear and organising it into cases. They will apply long strips of coloured gaffer tape onto the outside of the lens box and use a permanent marker to write down each focal length inside the box, its stop and its minimum focus distance. This makes it easy to identify the lenses you have and give quick information on the stop and MOD to the focus puller or cinematographer.

Long form jobs may involve a lot of labelling and organising of various other gear, such as filters, into the most manageable workflow possible. It’s a loader’s job to know, and be able to pull out, every piece of gear on the job at a moment’s notice.

Finally, the camera may get packed into what’s called a coffin, or a standby box, that allows the camera to easily and safely be packed away without deconstructing the whole build.

Gray Kotze

What A DIT Does On A Film Set

What exactly is a DIT? Let's find out.

INTRODUCTION

In the transition from shooting on film to working with digital cinema cameras, a new job appeared on movie sets - almost overnight. That job is the DIT or digital imaging technician. 

You may not have heard of this crew member, but they are now indispensable on any industry level set. So, what exactly is a DIT and what do they do that makes their job so important? Let’s find out.  


ROLE

The job of the DIT came about during the transition to shooting with digital cameras. Before then, exposed film stock would be passed on from the camera team to the film lab where it would be developed and later enter a post production workflow.

Shooting digitally suddenly meant that camera cards with footage on them needed to be transferred onto hard drives during shooting, and that digital footage needed to be prepped on set for post production. 

This is because each card that a digital camera records onto contains a finite amount of recording space. For example, when shooting in open gate ARRIRAW on an Alexa 35, each 1TB card will be able to roll for 28 minutes before running out of space. If you get 3 cards in a camera kit, this means that after an hour and a half of roll time you will run out of recording space, unless DITs transfer the digital files from the cards onto hard drives.
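
To put that maths in one place, here’s a minimal Python sketch of the card calculation. The data rate figure is my own rough assumption rather than an official spec, so treat it as a placeholder:

```python
# Back-of-envelope card maths. The ~600 MB/s figure is a rough
# assumption for open gate ARRIRAW on an Alexa 35 - real data rates
# vary with resolution and frame rate.
CARD_SIZE_MB = 1_000_000  # a 1TB card, in megabytes
DATA_RATE_MB_S = 600      # assumed data rate, megabytes per second

def minutes_per_card(card_mb: int = CARD_SIZE_MB, rate: int = DATA_RATE_MB_S) -> float:
    """Roll time in minutes before a single card fills up."""
    return card_mb / rate / 60

print(f"{minutes_per_card():.0f} minutes per card")        # ~28 minutes
print(f"{3 * minutes_per_card():.0f} minutes on 3 cards")  # ~83 minutes
```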

There may be a bit of a misconception out there that this is all a DIT does: dump footage onto hard drives and back it up. That is a better description of another crew member who may work under the DIT - the data wrangler. DITs on high end jobs do far more than just sit behind a computer in a van. Well, at least they should.

Overall, the DIT acts as a connector between the on-set production and capture of the footage and how that footage is set up for post production. 

They collaborate with the DP to correctly set up digital cameras and dial in the settings: such as the codec, resolution and frame rate. Some DPs also prefer that the DIT handles any setting changes that may happen during shooting, such as if they want to shoot at a high frame rate for a single shot, which means that the camera must drop down in resolution. 

Importantly, they also ensure, or oversee, the transfer of footage and metadata from camera cards onto hard drives. Once the footage has been secured, they are responsible for quality control: checking the integrity of the files by examining things like the sharpness of the focus, the exposure, the noise and any unusual colour tints or irregularities from filters, and checking for dead pixels or unwanted optical defects like strobing lights or ghosting.

They will establish a file structure on the hard drives and come up with a workflow for that data where necessary. Depending on the needs of the project they may be tasked with preparing the files for post production so that the offline editor receives the drive with files that are ready to work with. 

This may include doing transcodes or creating proxies from high resolution footage or syncing up the video with the sound clips on a timeline.

At the same time, DITs on big jobs often work on-set with the cinematographer to provide advice or manage how the images are captured, by ensuring that they are correctly exposed.

As such, DITs need to be versed both in the on-set camera side of capturing images, as well as in the post-production computer side of managing camera data and creating a workflow. 


AVERAGE DAY ON SET

Before production begins, the DIT must attend the gear check. This is where the camera that will be used on the production is set up at the gear rental company and tested by the crew to make sure all is in working order.

The DIT will program into the camera the capture settings they received from the cinematographer, such as the resolution, RAW or ProRes recording, or the correct de-squeeze factor if using anamorphic lenses.

They will also add and overlay the frame lines for the final aspect ratio or ratios of the project, and, at the request of the DP may add a custom LUT - so that the image may be viewed with a specific look.

Once the camera is built, the focus puller will shoot a test roll of a camera chart on each lens that will be used on the job. The DIT will then transfer these tests to the drive, and carefully examine them to make sure the focus on each lens is sharp, that there are no unwanted aberrations and that the sensor is clean and free of any dead pixels.

They usually also shoot a black - where the camera rolls with the turret covering the mount - and a white - where a piece of paper is passed through the frame and recorded. This is a good way of checking the integrity of the sensor.

At the beginning of every shoot day, the first thing that they will do is to take the turret off the camera and physically check the sensor with their eyes to make sure there is no dust on it. If there is dirt on the sensor which may have been collected in between lens changes then it is the job of the DIT to very carefully dust it or clean it with a sensor swab.

A card will be loaded into the camera, they will double check that all the settings are correct and the camera is then ready for shooting.

On big jobs with multiple cameras, the DIT may be called upon by the cinematographer to help maintain a consistent exposure across all of the cameras. This usually involves getting a transmitted video image from each camera and using exposure tools such as a waveform and false colour to find an exposure that is consistent. They will then get the camera assistants to adjust the aperture or filtration on each camera until all exposures are uniform. Or they may use a remote iris control to adjust the aperture.

Once enough footage has been shot, they will call on the 2nd AC to reload the camera. The AC will hand them the card with the footage on it, and they will head to the DIT van, where they, or their data wrangler, will immediately start transferring and backing up the footage to hard drives using software such as Silverstack - which uses checksum verification to make sure that all the files from the source get copied over to the correct destination on the drive without any missing or duplicated files.
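
Silverstack’s internals are its own, but the core idea of checksum verification is simple: hash the source file, copy it, hash the copy, and only trust the transfer if the two hashes match. Here’s a minimal Python sketch of that concept - the file paths are hypothetical, and a real offload tool does far more, like copying to several drives at once and generating reports:

```python
import hashlib
import shutil
from pathlib import Path

def file_hash(path: Path, algo: str = "md5", chunk: int = 8 * 1024 * 1024) -> str:
    """Hash a file in chunks so large camera files don't fill up RAM."""
    h = hashlib.new(algo)
    with open(path, "rb") as f:
        while block := f.read(chunk):
            h.update(block)
    return h.hexdigest()

def verified_copy(src: Path, dst: Path) -> bool:
    """Copy src to dst, then confirm both files hash identically."""
    source_sum = file_hash(src)
    dst.parent.mkdir(parents=True, exist_ok=True)
    shutil.copy2(src, dst)
    return file_hash(dst) == source_sum

# Hypothetical paths. A real offload copies every clip on the card to
# at least two destination drives before the card is cleared.
if verified_copy(Path("A001/A001C001.mxf"), Path("/Volumes/RAID/A001/A001C001.mxf")):
    print("copy verified - safe to move on to the next clip")
```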

During this process they will check the clips on a monitor to make sure there are no optical or camera related issues with the files. If, for example, one shot was out of focus on every take, the DIT will quickly go to set and inform the focus puller or DP that there was an issue. They will then determine whether it is possible to re-shoot the shot.

Then, depending on what was requested from the person or company doing post production, they will prepare the drive for the edit. This usually means transcoding the footage to a lower resolution and codec that the offline editor can work with, making sure that the file names are the same and will correctly align again for the online edit.

Usually DPs will ask them to add a LUT to the proxy files. Some cinematographers will also ask the DIT to colour correct each clip that is transcoded so that they can be viewed in the edit with a look that will be as similar as possible to the final grade.

Once a card has been transferred and backed up onto at least two drives, the DIT will then return the card to the 2nd AC. Any card that they return means that the footage on it has been secured and it is safe to erase or format the card and start recording a new roll of footage over it. 


TIPS

The worst fear of any DIT is that footage will get lost. It’s also probably the easiest way to get fired and never hired again.

That’s why good DITs have a system that they always stick to, which ensures that no data is ever lost. This may mean getting the loader to label and seal each exposed card with a roll number. The DIT can then put this sticker next to the card reader while the card is inserted and the footage is copied over, then take the label off the card and its case once the footage has been copied and backed up.

That way, even if you are very busy, you’ll always know what footage needs to be copied and what footage has already been copied.

Some DITs also use software to do this, such as Parashoot. This software will detect when all the files from a card have been copied over and backed up. Once this card is handed back to the 2nd AC and loaded into the camera, a message will pop up on the screen asking if you’d like to erase the card. If this message pops up then the loader knows that the card has been backed up and is ready to format.

Another important skill for a DIT is to effectively manage the flow of cards. This may mean that they call for the camera to be reloaded early, before it is completely filled up, so that they can start copying and backing up the footage. If they let too many cards fill up without copying them over, then they may get into a situation where the production runs out of available cards to shoot on - another easy way of getting fired.

Another trick most DITs do, is to reload the camera after the first shot on day one is completed. They can then double check that there are no technical issues with the footage, before an entire card of shots gets filled up with unusable takes.     

Being a DIT means that you need to always stay abreast of the newest technology. Most DITs have their own van, or cart, that they custom build into a mini post production studio with a computer, monitor, RAID and a UPS solution. 

They always need to ensure that they have the necessary gear to deal with whatever camera is on the job. For example, if doing a shoot with two Alexa 65 cameras, you will need to have a lightning fast computer and hard drives to keep up with copying and processing the enormous data files. 

Being able to calculate how much storage you need is another important skill for DITs to have. Hard drives are expensive, so you don’t want to overestimate your storage needs and get production to purchase lots of storage which isn’t used, but at the same time you never want to be in a position where you run out of space on the drives.

There’s a good tool on the Arri website where you can plug in the camera and settings you are using and it’ll spit out how much recording time you’ll get per card. This may be a good starting point for making this calculation. 
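
As a rough sketch of that back-of-envelope calculation, here’s the idea in Python. Every number below is a placeholder assumption to be swapped out for the real figures on your job:

```python
# Rough storage estimate for a shoot. All numbers are placeholder
# assumptions - swap in the real per-minute data rate from the Arri
# tool for your camera and settings.
GB_PER_MINUTE = 36     # assumed data rate for the chosen format
MINUTES_PER_DAY = 90   # expected roll time per shoot day
SHOOT_DAYS = 5
COPIES = 2             # every clip should live on at least two drives

total_tb = GB_PER_MINUTE * MINUTES_PER_DAY * SHOOT_DAYS * COPIES / 1000
print(f"Plan for roughly {total_tb:.1f} TB of drive space")  # ~32.4 TB
```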

Gray Kotze

How Cinematographers Set Their Exposure

Exposure is determined by six factors: the sensor’s sensitivity or ISO, the shutter speed, the lens’ aperture, the frame rate, ND filters and of course the overall amount of light in the scene that is being photographed. Let's take a look at how cinematographers expose.

WHAT IS EXPOSURE?

You’ve no doubt seen how some movies look darker than others. Comedies may be bright and colourful, while dramatic crime films may be dark and moody. This is, in large part, determined by how cinematographers set their exposure: the level of brightness across a frame.

Although it may seem simple on the surface, being able to expose properly is one of the most crucial skills that every cinematographer needs.

Exposure is determined by six factors: the sensor’s sensitivity or ISO, the shutter speed, the lens’ aperture, the frame rate, ND filters and of course the overall amount of light in the scene that is being photographed. Before we talk about how to get to the correct exposure it’s important to be aware of a few terms. 

The amount of light present in an image will almost never be completely even across the whole frame. Part of the job of the cinematographer is to use lighting to create different pockets of light and shadow in an image - called contrast. Sometimes that contrast might be light, other times it may be strong. 

The difference between the brightest and darkest part of the image is called the contrast ratio. Traditionally in cinematography this refers to the difference in illumination between the brightness of the key light and brightness of the fill.
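
Because each stop represents a doubling of light, the contrast ratio can be worked out directly from the number of stops between the key and the fill. A quick illustrative sketch:

```python
# Each stop is a doubling of light, so a key light metered n stops
# above the fill gives a contrast ratio of 2**n : 1.
def contrast_ratio(stops_between_key_and_fill: float) -> float:
    return 2 ** stops_between_key_and_fill

print(contrast_ratio(1))  # 2.0 -> a soft 2:1 ratio, low contrast
print(contrast_ratio(3))  # 8.0 -> a strong 8:1 ratio, moody contrast
```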

Similarly, a camera also has a threshold for how much of the difference between light and dark it can capture before the information turns to pure white or pure black. This is called dynamic range.

Modern digital cinema cameras usually have around 15 to 17 stops of dynamic range, while consumer video cameras may have less than 14 stops. This is one of the reasons that it may be more difficult to achieve a balanced exposure on cheaper cameras. 

If the strength of the contrast ratio exceeds the dynamic range of the camera, then the image will either blow out - and turn to pure white - or it will get crushed - and turn to pure black. 

Usually cinematographers like to light scenes so that the total exposure falls within the dynamic range of the camera. In other words, they like to create images in which the highlights do not blow out and the shadows are not completely crushed - in a similar way to how the human eye perceives brightness in the real world.

HOW TO FIND THE RIGHT EXPOSURE?

So, we know what exposure is, but how do we find the right exposure using a camera?

Cinematographers, or sometimes the DIT, regularly use different tools to determine the best exposure - which we’ll get to a bit later. But good exposure on digital cameras can also be achieved just by referring to the image on the monitor.

The thing is, what is considered a ‘good’ exposure is a bit subjective. For example, The Batman is lit and exposed at a very low level of illumination, while a comedy like Palm Springs exposes at a much brighter level. I’d say the lower level of exposure helps to accentuate the moody, emotionally dark tone of The Batman, while the brighter exposure of the comedy is, quite literally, lighter in tone.

It doesn’t necessarily mean an image is bad, or incorrectly exposed, if a small part of the frame is blown out or crushed to black - as long as its exposure suits the tone of the story and the environment being photographed.

Although exposure is subjective, most cinematographers still try to avoid one thing - blowing out highlights to pure white. There are always exceptions to this rule and situations where this might be impossible to avoid, but for the most part a good rule of thumb with exposure is to try to set it so that any large, bright part of the frame maintains detail.

This is called exposing for, or protecting, the highlights.

Cinematographers using this technique will first dial down the exposure to the point where the brightest highlights still preserve colour information. This may mean that other areas of the image are now too dark. 

To counteract this, DPs may then add light - using artificial sources - to lift up the brightness of the shadows until there is detail present. 

They may also use lighting to bring down the levels of any highlights to an acceptable level, for example using a dimmer on a practical lamp, or adding an ND gel or a net to a window. 

That’s why the best way to control the contrast ratio in an image is usually by lighting.

Sometimes cinematographers may be forced to blow out bright highlights that they can’t control which take up a small portion of the image, such as windows, flames or practical fixtures. 

As a side note, film stocks tend to deal with overexposure of highlights better than digital cameras do. Film gives more of a soft halation, a gradual fall off and renders highlights in a creamier colour, as opposed to the uglier, pure white of digital. 

However, there are ways to counteract blown digital highlights, both in the grade, as well as by trying to introduce a softer highlight rolloff using diffusion filters.

EXPOSURE TOOLS

Although exposing by just looking at the monitor is popular, it also comes with a few downsides. The first of which is that, unless you’re working with a specially calibrated monitor, every screen produces a different level of brightness and represents colours and exposure values in different ways. This could trick you into exposing incorrectly. 

Light conditions on set may also affect how you see exposure. Even if you’re working with an older monitor like a TV Logic 056, which has lovely, accurate colours, your exposure may still be off as the monitor has a very dim picture brightness in outdoor situations.

To avoid making exposure mistakes, cinematographers either use physical tools, like a light meter, or digital software tools built into the camera such as a waveform or false colour - which is the most common exposure tool on high end cinema cameras from Arri and Red.

When activated, false colour displays the image on a monitor in a specific colour spectrum - ranging from a warm red to a cool purple or dark blue, with green sitting somewhere in the middle. Each colour represents an IRE value: a measure of brightness on a scale of 0, being complete black, to 100, being complete white.

Cinematographers can use false colour to check that their highlights aren’t red and blowing out, and that their shadows aren’t purple and getting crushed to black. A common recommendation is that, for well exposed skin tones, you should expose so that the skin reads green or light grey on the false colour scale.

However, as I mentioned before, some cinematographers may want to disregard this and expose darker in the dark grey to blue range to achieve a moodier feel. 
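
To make the idea concrete, here’s a small sketch of false colour as a lookup from IRE values to colour bands. The thresholds below are invented purely for illustration - Arri, Red and other manufacturers each publish their own exact scales:

```python
# Illustrative false colour bands, mapping an IRE value to a colour.
# These thresholds are made up for demonstration only - real cameras
# each use their own published scales.
FALSE_COLOUR_BANDS = [
    (0, 3, "purple - crushed to black"),
    (3, 42, "blue/grey - shadows and low midtones"),
    (42, 56, "green - roughly where average skin tones sit"),
    (56, 97, "grey/yellow - bright midtones and highlights"),
    (97, 101, "red - blowing out to pure white"),
]

def false_colour(ire: float) -> str:
    """Return the false colour band for a brightness value in IRE."""
    for low, high, label in FALSE_COLOUR_BANDS:
        if low <= ire < high:
            return label
    raise ValueError("IRE values run from 0 to 100")

print(false_colour(50))  # green - roughly where average skin tones sit
print(false_colour(99))  # red - blowing out to pure white
```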

Another useful exposure tool to understand is the waveform. This is made up of a vertical axis that expresses exposure in IRE values from 0 to 100 and a horizontal axis that represents the width of the frame.

For example, if there is one bright highlight in the centre of the image, it will be represented by a spike in IRE value in the centre of the waveform.

Like with false colour, a waveform can also be consulted to see whether areas of the image are exceeding 100 IRE and blowing out, or sitting on 0 IRE and getting crushed.
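
Conceptually, a luma waveform is just a column-by-column histogram of brightness. Here’s a minimal sketch of that idea in Python using NumPy - a simplified illustration, not how any camera or monitor actually implements it:

```python
import numpy as np

def luma_waveform(frame: np.ndarray, bins: int = 101) -> np.ndarray:
    """Build a simple luma waveform from a grayscale frame.

    frame: a 2D array of 8-bit pixel values (0-255).
    Returns a (bins, width) array where each column holds a histogram
    of that image column's brightness, expressed in IRE (0-100).
    """
    ire = frame.astype(float) / 255.0 * 100.0
    width = ire.shape[1]
    wave = np.zeros((bins, width))
    for x in range(width):
        hist, _ = np.histogram(ire[:, x], bins=bins, range=(0.0, 100.0))
        wave[:, x] = hist
    return wave

# A single bright highlight in the centre of a dark frame shows up
# as a spike of high IRE values in the centre of the waveform.
frame = np.full((100, 100), 30, dtype=np.uint8)  # dark grey frame
frame[40:60, 45:55] = 250                        # bright central patch
print(luma_waveform(frame).shape)                # (101, 100)
```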

Usually, when consulting a monitor for exposure, it’s better to expose with a LUT applied rather than looking at the log image. Because a log image is so flat, it’s easy to under or overexpose it while still seeing detail in the log. Later, in post production, you may find that once a LUT or grade is applied, the footage is under or overexposed and doesn’t have enough detail to work with.

CONTROLLING EXPOSURE

As I mentioned before, there are six exposure variables. Each of these variables can be manipulated to arrive at an acceptable exposure, however, some of these variables also have other effects on the image. 

For example, the frame rate changes the speed at which motion is captured, the shutter angle changes how blurry that motion is, the ISO affects how grainy or noisy the image is, and the aperture affects to what extent the background is out of focus.

For this reason, many cinematographers maintain the same or a similar frame rate, shutter, aperture and ISO throughout a project and change the other two variables - the amount of light on a set and the strength of the ND filter - to control exposure.

ND or neutral density filters are made up of glass which reduces the amount of light that hits the sensor without changing the colour characteristics of that light. 

When lighting a scene it’s important to ensure that there is enough light to properly expose the image. Rather have too much light than too little light. You can always cut down the amount of light by using ND filters, but if a space is too dark you can’t get a brighter exposure without increasing the amount of grain or noise by lifting the camera’s sensitivity.

The best way to go about controlling your exposure with a cinema camera is to first ensure your ISO, shutter and frame rate are correctly dialled in. Then stop down the aperture on the lens until you arrive at an exposure which you like. This may leave the lens at a stop of, for example, T/8.

Next, you need to determine what stop or aperture you would like to shoot at. For example, maybe you’d like to shoot at T/2 to get a shallow depth of field.

You then need to count the difference in stops between T/8 and T/2 - one, two, three, four. This means you need to add four stops of ND to get to a shooting aperture of T/2.

Each stop of ND filter is measured in increments of 0.3. So four stops of ND would be ND 1.2. This means you can add an ND 1.2 either with internal ND filters or with external glass ND filters in a mattebox and set the lens to T/2. 
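
If you’d rather not count stops on your fingers, the arithmetic generalises neatly: the number of stops between two T-stops is two times the log base 2 of their ratio, and ND density is graded at 0.3 per stop. A quick sketch of that maths:

```python
from math import log2

def stops_between(current_t: float, desired_t: float) -> float:
    """Stops of light gained by opening up from current_t to desired_t."""
    return 2 * log2(current_t / desired_t)

def nd_density(stops: float) -> float:
    """ND filters are graded at 0.3 of density per stop of light cut."""
    return round(0.3 * stops, 2)

stops = stops_between(8, 2)  # 4.0 stops between T/8 and T/2
print(nd_density(stops))     # 1.2 -> add an ND 1.2 filter
```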

You now have the correct exposure, while also having your desired aperture of T/2 that produces a beautiful, shallow depth of field.

Gray Kotze

Cinematography Style: Polly Morgan

Polly Morgan’s work as a cinematographer lies at the intersection between naturalism and traditionalism. Her work is fueled by capturing an expression of human experiences while at the same time often embracing a more traditionally cinematic style of working. Let's unpack her work further in this edition of Cinematography Style.

INTRODUCTION

“Stories that really excite me are the stories that are really representative of the human experience. The pain and the suffering or the joy and the love and all of these things that we all go through as people.”

Polly Morgan’s work as a cinematographer lies at the intersection between naturalism and traditionalism. Her work is fueled by capturing an expression of human experiences while at the same time often embracing a more traditionally cinematic style of working. 

This video will unpack this further by going over how she got her start in the industry, looking at her philosophy on capturing images and breaking down some of the gear which she has used to do so.  

BACKGROUND

“As a young girl I was always just a massive fan of movies. When I was 13 I was lucky enough to experience a film crew come onto our farm house and using it as base camp. You know, they let me look through the camera and it was really then that my love of cinematography was born.”

To turn this early dream into a reality took some time and a lot of work. After getting a Bachelor of Arts in broadcasting where she got hands-on experience with operating cameras, she spent about 10 years working in film as a crew member.

One of her early jobs in the industry was working as a production assistant for RSA, Ridley Scott’s commercial production company. After putting in time as a PA she worked as a camera trainee and then moved up to working as a loader. As an assistant she worked on some big studio films, such as Inception, and worked for many well known DPs such as Haris Zambarloukos, Wally Pfister and Dan Mindel.

This experience allowed her to study at the AFI Conservatory, after which she was able to launch into a career as a cinematographer - first shooting independent movies, before progressively shooting larger studio films.

PHILOSOPHY

When she starts a new project she’ll work closely with the director in pre-production to determine the emotional arc of the story, and then find a visual camera language that is representative of that story and the characters in it.

Her goal, along with the director she is collaborating with, is to craft a look that embodies a feeling. A look that supports the story so well that it becomes invisible to the audience, to the point that they are unaware that their emotions and impressions of the story are being manipulated by the camerawork.

“The one thing about cinematography is you don’t want the viewer to be aware of it. You don’t want someone to watch a film and say, ‘uh, the film was terrible but wow that cinematography.’ Because my job is to serve the story. My job is not to make beautiful imagery. If you’re telling a story how do you want the viewer to feel?” 

The look and feeling that she creates may differ depending on the subject matter and genre she is working on. Suspenseful horror may involve creeping camera movement and a darkly exposed image, whereas a lighthearted comedy will have a brighter colour palette, lighter exposure and a more stable camera.

Perhaps her ability to alternate between different techniques and working styles comes from her diverse filmography. 

“I think I’ve been incredibly fortunate in that I managed to shoot commercials, TV, feature films and documentaries. It means that you’re working with a variety of budgets and you’re working with a variety of crew size and a variety of time.”

Although she is capable of changing the look of the cinematography to suit the story, I feel like at the core of her look she balances naturalistic feeling techniques - such as using natural, soft light and handheld movement - with more traditional, conventional, classical techniques - like using longer focal lengths for close ups, smooth cinematic movement or using a blue-ish moonlight and haze look for night exteriors. 

In this way she goes between using newer school techniques and more traditional techniques to tell stories.

So much of the role of the DP involves focusing on the technical side of creating images, but an equally important part, which she focuses on, is how she manages people: getting the most from her crew and creating the right on-set environment.

“It takes a crew of people to make a film. It’s not just being an artist. You also have to be a manager and you have to be a leader. It’s just a juggle of, like, how to get the best out of people. How to be, sort of like, a collaborator and how to be a mediator. If people see the people that they are working for get stressed out or lose their cool, I think then it really creates an atmosphere which is not a conducive one to do good work.” 

GEAR

Since I keep mentioning how she contrasts naturalistic techniques with more traditional ones - let me break this down a bit by defining some examples of how, and with what gear, she does this. 

Let’s look at her lighting, starting with the naturalistic side. Ambient light, which doesn’t come from artificial, man-made fixtures, tends to be quite a soft source of illumination. Natural sunlight often bounces off various surfaces, like walls, grass, or even the moon, and because bounced light is less direct, it is softer. Therefore she often uses soft, diffused or bounced sources of light to create a naturalistic feel.

“A more modern approach to lighting has been softer light. Now that we have more sensitive cameras and we don’t have to worry like they did 100 years ago on very slow film stocks, we don’t have to light with very strong, hard light.”

An example of her using a more sensitive camera can be found in 6 Balloons, which she shot on the Panasonic Varicam 35 - one of the earliest high end cinema cameras to feature a dual base ISO that went up to 5,000.

Having such a sensitive camera meant that she could use minimal lighting for night scenes on the street to achieve a more realistic look that embraced the real world light sources as they passed. For the car driving scenes, which were shot on a process trailer, she accentuated the natural light by rigging RGB Arri Skypanels on the trailer outside the car to create the look of passing stop lights.

For other films she’s also leaned heavily into a soft, diffused, more neutral and pretty ambient light look - especially for day exteriors. In other stories she’s flipped this, lighting scenes with a more traditional, old school convention: blue, backlit moonlight with haze.

On A Quiet Place Part II, she shot on the far less sensitive Kodak 500T film stock, which she rated at 400 ISO. Shooting at 400 ISO, compared to 5,000, meant that she needed much more light to expose the image properly. She used a combination of large HMI units and the high output LED Skypanel 360, which she brought to a colour temperature of 4,400K. Shot on the 3,200K tungsten-balanced stock, this produced a cool, blue look.

In a traditional, old Hollywood style, she backlit the actors with these hard sources, so that the lights were facing towards the camera. This subtly lifts the background and leaves characters in silhouette, illuminated by an edge highlight. She then added haze, which accentuates the light sources and raises the exposure of the midtones in the background when the smoke is hit by light.

This technique of creating a cool moonlight is quite a conventional way of lighting night exteriors. Even on other movies which she’s shot digitally, such as The Woman King, she’s repeated this technique. 

So her lighting of night scenes is one way that she has contrasted more naturalistic photography with a more conventional approach. Another is how she uses camera movement. The films that she shoots often have a rougher, on the shoulder, handheld movement which tracks the characters with a more documentary-esque feeling of naturalism.

She contrasts this with smooth, fluid, camera moves done on dollies, Steadicam or a Technocrane which are more conventionally cinematic. 

When it comes to choosing a camera, she’s opted for both film and digital acquisition. She’s used film cameras, such as Panavision’s Millennium XL2, with Kodak 250D and 500T stocks. However, she mainly uses digital cameras - some examples include Panavision’s DXL2, the Alexa 65, Mini and Mini LF, and Panasonic’s Varicam 35.

Likewise she’s used both anamorphic lenses, like the T-Series and Cooke Xtal Express, and spherical lenses, like the Panavision Primo 70s and the Zeiss Super Speeds.

She’s often drawn to Panavision glass and even shot Where The Crawdads Sing on unreleased prototype large format spherical lenses made by Panavision, which had imperfections, focus falloff and a softness reminiscent of anamorphic glass - again evoking a more textured, organic, naturalistic feeling.

At the same time, she often uses longer focal length lenses for close ups - another more traditional technique.
