Gray Kotze

5 Reasons Why Zoom Lenses Are Better Than Primes


INTRODUCTION

As we saw in a previous video, there are many reasons why in some photographic situations prime lenses are a better choice than zoom lenses. The fixed focal length or magnification of prime lenses provides a more considered perspective when choosing a focal length, better overall optical quality, a larger aperture, a smaller size and better close focusing capabilities.

In this video let’s flip things in favour of our variable focal length friends by unpacking five reasons why zoom lenses are better than primes.  

 

1 - ZOOM MOVEMENT

The first reason for choosing a zoom lens goes without saying - it allows you to do a zoom move in a shot. Most shots in cinema and other kinds of film content are captured at a fixed level of magnification and do not zoom during the shot.

I think this is in part due to the traditional language of cinema, built by a legacy of older films photographed before usable cinema zooms were widely manufactured, when prime lenses were the de facto choice.

However, during the 1970s and 1980s using in-camera zooms to push into a shot or pull out wider without moving the camera gained more popularity amongst filmmakers. 

There are many stylistic motivations behind using zoom movement. It can be used to slowly pull out and reveal more information in a shot until we see the full scope of the landscape. It can be used as a slightly kitsch, crash zoom - where the camera rapidly punches in to reveal a character, to emphasise a certain line, or land a comedic punchline.

Because of their flexibility and ease of use, which we’ll come to later, zooms have also been widely used when shooting documentaries - particularly fly on the wall type doccies. In some films this type of zoom movement is extrapolated from these documentary conventions in order to lend a visual style of realism associated with the documentary look, or even to mock this look for comedic emphasis.

The list of reasons to zoom within a shot goes on, and each has a different stylistic or emotional impact depending on the context in which it is used. It should be noted though that most filmmakers are careful not to overuse zooms, as they can easily become tired, distracting and cliched unless they form part of an overall considered visual style.

2 - PRECISE FRAMING

Of course zooming within a shot requires a zoom lens, but what about those films that don’t use any in-shot zooms and still decide to shoot on zoom lenses?

Another reason cinematographers may use a zoom is because they make it easy to precisely frame a shot. 

When you shoot with a prime lens’ fixed focal length on the camera and want to change the width of the frame, you need to physically move the camera. This is easy when you are shooting handheld with a little mirrorless camera.

But when you are using a hefty cinema rig, on a head and a set of legs that is so heavy that it requires a grip team each time you move and level the camera, using zooms becomes more appealing. 

With primes you may need to slightly reposition a frame by moving the camera forward six inches, realise this is too far forward, and then have to move the camera back again three inches until that light stand is just out of frame. With a zoom lens you can just change the focal length from 50mm to 45mm without moving the camera or tripod.
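To put rough numbers on that example, to keep a subject the same size in frame, the camera-to-subject distance scales in proportion to the focal length. This is only a sketch of the geometry (a thin-lens approximation that ignores the perspective difference between zooming and dollying); the 3m distance is a made-up figure for illustration:

```python
def equivalent_distance(distance_m, current_focal_mm, new_focal_mm):
    """Distance that keeps the subject the same size in frame after a
    focal length change. Subject size is roughly proportional to
    focal length / distance, so distance scales with focal length."""
    return distance_m * (new_focal_mm / current_focal_mm)

# Getting the 45mm framing back on a 50mm prime while keeping the subject
# the same size means dollying back from 3m to about 3.33m - roughly the
# "move the camera back a foot" dance that the zoom lets you skip.
new_dist = equivalent_distance(3.0, 45, 50)  # ≈ 3.33m
```

In other words, a 5mm change of focal length at that distance stands in for a third of a metre of camera movement - which is exactly the repositioning a heavy rig makes painful.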

A great example of this happens on most car shoots. I’ve worked as a camera assistant on loads of car commercials and about 99% of the time when using a Russian Arm to shoot moving vehicles, DPs choose a zoom lens over a prime lens. 

It’s far easier and more practical to use a wireless motor to adjust the zoom on the barrel of the lens to find the correct frame from inside the Russian Arm vehicle, than it is to get the driver of the vehicle to keep repositioning the car a couple of metres on every run until the frame is perfect.

It is also easier to find the correct camera position without trial and error when using wider primes: either with a Panafinder, a viewfinder app, or just based on the experience of the DP. But when you use longer focal lengths, like a 135mm or 180mm prime lens, the lens compression makes it far more difficult to find the correct frame without needing to move the camera.

There are also fewer prime focal length options at the longer end - which we’ll talk about later. Therefore, for telephoto shots, zooms are regularly used for their ability to punch in or out until the correct frame is found.

3 - FASTER SETUPS

With increased precision in framing shots comes a faster setup time. On a film set, time very much equals money. The quicker you can set up and shoot each shot, the less crew overtime, gear rental days and location fees you have to pay.

When you’re working on a tight budget without the possibility of extended overtime or extra pickup days, taking longer to set up shots means the director gets to film fewer takes, with less time to craft the performances of actors or set up choreographed action.

Using zoom lenses speeds up production in a few ways. For one, if you shoot everything with a single zoom lens, it means less time spent changing lenses, swapping out matte boxes and recalibrating focus motors.

As we mentioned previously, it also means that grip teams don’t need to reposition heavy, time-consuming rigs, like laid dolly track. If the track was laid a little too far forward, the operator can just zoom a little wider on the lens to find the frame, rather than starting from square one and re-laying the track.

Another practical example is when using a crane or a Technocrane. If you use a 35mm prime lens on the camera, balance it on a remote head, perfectly position the arm and then realise that the lens is not wide enough and you need a 24mm focal length instead, the grip team needs to bring down the arm, the camera team needs to switch out the lens, the Libra head technician needs to rebalance the head with the weight of the new lens, and finally the grip team then brings the crane back into the correct position. All this could take 10 minutes or more. 

If instead the DP used a zoom lens with a wireless motor on the zoom ring, this change would take less than 10 seconds.

10 minutes may not sound like a lot, but if this keeps happening throughout the day it can quickly add up to an hour or two of wasted shooting time - which is expensive, means less footage will be shot, and therefore gives the director fewer precious takes to work with in the edit.


4 - FOCAL LENGTH OPTIONS

A prime lens set usually covers a fair number of different focal lengths on the wide end, but when it comes to telephoto options beyond about 100mm the selection is usually very limited.

For this reason, DPs who like shooting with long focal lengths that compress the backgrounds in shots often hire a zoom. For example, the Arri Alura can punch in all the way to 250mm, while the longest focal length available in a set of modern cinema primes such as the Arri Master Primes is 150mm.

So for cinematographers who want to use long, telephoto lenses, zooms are usually a better option. 

Many zooms also offer a greater overall range of focal lengths. For example, an Angenieux 12:1 zoom covers everything from a wide 24mm field of view up to an extended, compressed 290mm telephoto.
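As a rough sanity check on that range, both the zoom ratio and the field of view at each end can be worked out with a little trigonometry. The sketch below is a minimal Python example, assuming a Super 35 sensor width of roughly 24.9mm (the exact figure varies from camera to camera):

```python
import math

# Assumed sensor width: Super 35 is roughly 24.9mm wide (varies by camera).
SENSOR_WIDTH_MM = 24.9

def horizontal_aov(focal_mm, sensor_width_mm=SENSOR_WIDTH_MM):
    """Horizontal angle of view in degrees for a given focal length."""
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_mm)))

zoom_ratio = 290 / 24            # ≈ 12.1, hence a "12:1" zoom
wide_aov = horizontal_aov(24)    # ≈ 55 degrees - a wide field of view
tele_aov = horizontal_aov(290)   # ≈ 5 degrees - a narrow, compressed view
```

So the same lens can swing between roughly a 55-degree and a 5-degree horizontal field of view, which is what makes a single zoom such a flexible substitute for a case full of primes.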

For shoots that are in remote areas or in locations which cannot be accessed by a camera truck, carrying around a full set of spherical primes in three or four different lens cases is far more logistically challenging for the crew than just putting a single zoom lens on the camera and walking it in to set. 

This makes zooms far more flexible and practical when compared to primes, especially sets of older vintage prime lenses, such as the Zeiss Super Speeds that only come in 6 focal length options from 18mm to 85mm.    


5 - BUDGET

The final reason may seem a little counterintuitive because when you compare the price of a single prime lens with that of a single zoom lens, the zoom lens will almost always be more expensive. 

However, prime lenses are almost never bought or rented as individual units. They come in sets - such as a set of 6 lenses, or even 16 lenses.

When the rental or purchase price of a full set of primes is tallied up, it is almost always more than that of a comparable single zoom lens covering the same range of focal lengths.

Therefore, when the budget of a shoot is a bit tight, it may come down to either pleading with the rental house to break up a lens set into a very small selection of two or three primes that cover a limited range of focal lengths, or hiring a single zoom that you can use to cover every field of view that is required for the shoot.

In this regard, a zoom lens is far more realistic and practical.    


How We Made A New Yorker Short Documentary: With Jessie Zinn


INTRODUCTION

“That gimbal was terrible. And it was so heavy.” “Didn’t it overheat a couple of times? Cause it was also in the middle of summer and we were shooting in the northern suburbs where it gets up to like 40 degrees which is like in Fahrenheit in the 100s. And we were, like, sweating, and the gimbal was making a noise.”

If you’re watching this you may know me as the disembodied voice behind this channel, who edits these video essays made up of diagrams, shoddy photoshop work and footage from contemporary cinema. But what you may not know is that I also work as a cinematographer in the real world. 

So, I thought I’d mix up the format a bit and chat to a real world director, Jessie Zinn, who I’ve shot some films for. One of those films that we shot during the height of the COVID lockdown, a short documentary called Drummies, was recently acquired by the New Yorker and is now available to view for free on their site - which I’ll link for you to check out.

Our chat will break down the process of making that short documentary - from coming up with the initial concept, hiring gear, cutting it together, to finally selling the finished film. 

Also, if you want to see the full, extended podcast with Jessie, and also donate to keeping the channel going, it’ll be available on the Master tier on Patreon.

CONCEPT

Before the cameras and the lights and the calling of action, every film begins as an idea.

09:32 “Ideas come through very unorthodox channels and different backends and ways of finding out about subjects and topics. I think it’s definitely worth noting that I always make films or I’m always interested in making films for me. And that’s not to say that I don’t have an audience in mind because of course I’m always very aware of who this is for ultimately; who is going to watch it. But I always approach a subject and have interest in making a film based on a subject that I’m just personally really, really interested in or feel a sense of passion towards in some sort of capacity.”

In the case of this film it was born out of seeing a photographic series by Alice Mann on the drummies or drum majorettes of Cape Town.

“Drummies is about a team of drum majorettes in Cape Town. It’s sort of like an intersection between cheerleading and marching band processions. It’s had this really interesting political history in South Africa because it was also one of the first racially integrated sports in South Africa during apartheid. And post-apartheid it’s become almost this underground cult world amongst young girls in schools. In particular in public schools. It’s both a sport where it brings a sense of community and family to them but also provides potential possibilities for upward social mobility.”

PRE-PRODUCTION

With a concept in place, Jessie then went about identifying and getting access to the potential characters that would be in the film and who we would focus on during the shooting process. 

“So I actually cast for Drummies and I did that remotely because I was still in the States. So I asked their coach to send through Whatsapp videos of the girls - basically auditioning.”

From there she cut down her ‘cast’ to four or five characters who we would do the majority of our filming with. In the final edit this was later cut down to three characters.

Before bringing in any cameras, she did audio only interviews of her cast using a Zoom recorder and a lapel mic.

“People have their guard up when you first meet them. But in others, often with children, the first encounter is often sort of the least filtered. And so, I knew beforehand that there would probably be some audio soundbites and material that I could gather from those pre-interviews which maybe wouldn’t have been possible with having a whole camera setup around. Actually in the final film some of that audio and voice over is from those pre-interviews because some of it was such great material.”   

When Jessie contacted me to shoot the project she passed on snippets of this material to me, so that when we met to chat about the film I already had an idea of the kind of characters that we would be photographing. 

She also put together a script and a treatment which I could read through. As a cinematographer, it’s always fantastic to get this kind of thorough vision early on, as it facilitates the discussions we have about finding a look, or overarching style, for the film.

“It’s always deliberately decided beforehand, at least with the short docs that we work on. You know, I’ll sit down with you often at a coffee shop somewhere and I’ll be like, ‘Here’s a couple of reference films and reference images.’ Then you’ll look at them and then you’ll say, ‘OK. I think this is what we can do based on these references and based on the real people.’”

In the real world, our characters were sitting around during this hot, summer vacation, unable to do much because of the covid restrictions. This led to us discussing the idea that the footage should feel dreamy, as if they were suspended in time - which is also a line that came from one of the interviews.  

To visually represent this feeling we decided to shoot a lot of the non-dialogue scenes in slow motion with a heavy, worn, often malfunctioning Ronin gimbal that we managed to borrow from another Cape Town documentary filmmaker.

“This was something that we discussed beforehand. That we wanted there to be a dreamy aesthetic and in terms of the actual movement using a handheld, rough aesthetic wouldn’t have achieved that. You definitely don’t want to limit the dreams that you have in terms of aesthetics for your film but you also do need to be very practical about it and I think that’s what we often get right. We sit down and say, ‘these are the things we would like’ and then ‘this is the version of these things that is actually achievable.’”

PRODUCTION

“It’s all about prep - literally. You know you’ve done your job well when you get onto set you can stand back and do very little. If you’re having to do a lot on set then you know you haven’t done a great job - basically.” 

With all of Jessie’s prep, creative vision and our discussions about the film’s look coming together, I then, as the cinematographer, need to come up with a list of gear that we’ll use to bring these ideas to the screen. 

When it comes to this, one of the biggest limitations is dictated by budget. Doccies are generally made with pretty limited funding, much of which is saved for post production finishing of the film. So for these kinds of projects I usually put together a gear list with two options: one, the kind of best case scenario with a full selection of the gear I’d like to rent, and two, a more stripped down list which is a bit lower cost.  

A little bit of back and forth with the gear house may ensue until we come up with the best gear package that meets our budget.

For this film it meant shooting on a Sony FS7, with my four Nikon AIS prime lenses - a 24mm, 35mm, 50mm and 85mm - although most of the film was shot on the two wider lenses. To add to the dreamy look that Jessie and I talked about, I shot everything with a ¼ Black Pro Mist filter, which gave the image a nice, diffused feeling.

As I mentioned we managed to get a free Ronin for the shoot - which was the old, original Ronin that was a bit tired but we managed to make it work. I also used a Sachtler Ace tripod, which some may see as being too lightweight for a camera like the FS7.

But, it’s small, easy to carry and makes finding a frame and levelling it off that much quicker. And speed in documentaries is often more important than fractionally more buttery smooth pans and tilts off a larger tripod head. 

Although it hopefully does not look like it, every single interior shot in the film was lit with a combination of artificial film light sources and my most important tool: negative fill.

The lighting package I carried around consisted of a budget-friendly Dracast LED bi-colour fresnel, two 8x8 blackout textiles and a couple of C-stands to rig them with. Rigging the blackout to C-stands or hastily tying them to window frames is precisely how not to rig - but since I was working alone and had very limited time to set up each scene I had to make do.

When you’re working with a limited budget, the easiest and most cost effective way to control light isn’t by adding light, but by removing it. 

As an example let's break down the changing room scene in the film.

“With that scene which would lead into the final performance because there was this theme throughout the film of being able to achieve dreams but also being held back from achieving those dreams. Because of COVID they weren’t actually able to perform and compete in all of these games and so, the changing room is of course tethered to reality. They have to get changed before any sort of performance. But they also weren’t performing at the time because of COVID restrictions and so I thought that that sort of worked.”

For that reason, I wanted to push this sequence visually about as far as we could into a dreamy state, while still maintaining a link to the real world. We took this orange, yellow palette of the changing room that we were presented with, amped up the warmth in the lighting to the max and pumped a bunch of smoke into the room to create an extra layer of diffusion.

With the help of an assistant I blacked out all the windows and doors which were letting through sunlight - except for a single window behind the characters. Outside this window I put up our LED fresnel, warmed it all the way up and I think even added a ½ CTO gel so that the colour temperature of the light went from tungsten to orange. 

Again, doing most of the heavy lifting by removing light and then carefully placing a single backlight to create contrast and a more amplified visual world.

During the shoot, a technique which Jessie and myself often employed was to cordon off and light a specific space, almost like a set, then place the characters within that space and let them converse or act as they would naturally. That way you maintain naturalistic conversation and action but are able to also better sculpt the cinematography into the form which best suits the film. 

POST-PRODUCTION

Once production wrapped, Jessie went about editing the film herself.

“I also often edit my films and so that is a big part of crafting a documentary. You’re not finding the story in the edit but you’re definitely chipping away at the basic model that you’ve planned. So when I’m on set I’m also shooting or directing with the edit in mind.”

“Drummies was a good exercise in learning how to trust my instinct. Basically the very first assembly that I laid down on the timeline which I had to deliver to the programme I think it was like three days after we’d shot the film. Which is insane, again. And so it was like a fever dream of staying up into the night to get this assembly done. And after that obviously I did many different versions and different edits where things changed and the structure changed. But when I looked at the final film it was actually almost identical in terms of structure to the very first assembly that I’d put down, like months ago, which also was the same as the script and the treatment.”

So again we come back to this idea of prep and how having a refined vision for the film going into the shooting process, even in a more unpredictable medium like documentary, is so valuable for a director to have.

After the completion of the music composition, sound mix and final grade - which was based on the reference of another vibrant, colourful and slightly dreamy film, The Florida Project - Jessie was left with a completed film. But what comes after you have the final project?

“A couple of years ago people would say that the be all and end all are film festivals and that determines the success of your film. But I don’t think that’s the case anymore because there are incredible online avenues for streaming services where you can put your film out there and it can get tons of views and potentially gain an audience that is much wider and larger than a film festival.”

“So, Drummies did go to a few festivals but it was valuable because that’s how the film got distribution in the end. Both POV and The New Yorker showed interest because they’d seen it at a festival called Aspen Shorts Fest and they both reached out to me by email and said we’re interested in seeing the film, we’d like to have a look. And both of them basically came to me with offers within a few days and so that was the first film where I’d received pretty standard almost classical distribution interest in the film as far as broadcasters and sort of news channels are concerned. Whereas my two previous films got Vimeo Staff Picks which is, I'd say, a little bit more unorthodox and more current than those avenues.”

“People often think that if your film doesn’t get into an A-list festival then it’s the end of your film. Which is just so not true. Because the festivals that Drummies played at are, I would say, probably B-type festivals. And those festivals got way more distributors interested than some of the A-list festivals that I’d heard about. And so, never underestimate the space that your film is screening in is a valuable lesson that I learnt.”


5 Reasons Why Prime Lenses Are Better Than Zooms


INTRODUCTION

If you’re into cinematography, photography or capturing images at all you’re probably aware that there are two types of camera lenses - prime lenses and zoom lenses. 

Primes have a fixed focal length, which is measured in millimetres. This means that when you put this lens on a camera, the angle of view of what it sees - how wide it is, or how much the image is magnified - is fixed and cannot be changed.

Zooms have a range of variable focal lengths. So by moving the zoom ring on the barrel of a lens you can change how wide an image is - in some cases all the way from a very wide angle to a close up telephoto shot.

As is the case with all film gear, there are a number of reasons for and against using prime lenses. In this video I’ll go over five reasons that make primes superior to zooms - and follow it up with a later video from the other side of the argument about why zoom lenses are better than primes. So if you like this content, consider hitting that subscribe button so that you can view the follow up video. Now, let’s get into it.

 

1 - CONSIDERED PERSPECTIVE

We can think of the width of a frame, or a focal length, as offering a perspective on the world. 

This close up, shot with a wide focal length, sees a lot of background and places us, the audience, within the same world as that of the character. This close up, shot with a longer focal length, isolates the character more from the background, blurs it, and compresses, rather than distorts the features of their face.

The great thing about a prime lens’ fixed focal length, is that it also fixes the perspective or feeling of an image. When you choose what prime to put on the camera you are therefore forced into making a decision about perspective. 

This isn’t to say that you can't do the same with a zoom, but when you work with a variable focal length lens it’s far easier to just plonk down the camera at a random distance from the subject and then zoom in or out until you get to the shot size that you want.

If you’re using a prime, you need to first decide on the focal length you want and then physically move the camera into the correct position. As they say in photography, your legs become the zoom. This is especially useful as a teaching device for those learning about lens selection and camera placement.

So, prime lenses force you to think more about the focal length you choose, which may elevate the visual telling of the story by making it a deliberate decision, rather than an incidental one.

2 - OPTICAL QUALITY

The practical reasons behind choosing a lens are important, but so too is the look that the lens produces. Due to their design, prime lenses are considered to possess a higher quality optical look than most equivalent zooms. This is mainly because the construction of primes is much simpler and the design more straightforward than that of zooms.

Inside a lens you’ll find different pieces of curved glass. Light passes through this glass to produce an image. Because prime lenses only need to be built for a single focal length, they can use fewer of these glass elements - and the glass elements inside the lens don’t have to move in order to zoom.

Fewer glass elements mean fewer surfaces for light to pass through, which usually means sharper images. Also, prime lenses only need to be corrected for optical aberrations like distortion and chromatic aberration at a single focal length. Zooms need to do this across multiple focal lengths, which is trickier to do.

Therefore, your average prime lens will be sharper, with less distortion - or bending of the image - and less colour fringing between dark and light areas.

I should add as a caveat that modern, high-end cinema zooms are constructed to a degree of optical quality that is comparable to many prime lenses, but you pay a pretty penny for that level of cutting edge engineering. When you’re looking at zooms and primes in a comparable price range, primes usually have the winning optical edge.

3 - APERTURE

A lens’ aperture is the size of the round opening inside the lens that lets in light. A large opening, which has a lower T or F stop number, like T1.3, means that more light is let in, while a smaller opening, with a stop such as T2.8, lets in less light.
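To make the relationship concrete: the light a lens transmits is proportional to one over the T-stop squared, and each full stop is a doubling of light. A minimal Python sketch of that arithmetic, using the T1.3 and T2.8 figures above as the example:

```python
import math

def stops_between(slower_t, faster_t):
    """How many stops more light the faster (lower) T-stop lets in.

    Transmission is proportional to 1 / T^2, and one stop = one doubling.
    """
    return math.log2((slower_t / faster_t) ** 2)

light_ratio = (2.8 / 1.3) ** 2        # ≈ 4.6x more light at T1.3 than T2.8
stop_gain = stops_between(2.8, 1.3)   # ≈ 2.2 stops
```

In other words, a T1.3 prime hands you a bit more than two extra stops over a T2.8 zoom, which can be the difference between exposing a night scene and not.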

Once again, because of the extra glass and more complex design required to build zoom lenses, primes tend to have a faster stop. 

When it comes to cinema glass, each extra stop of light that a lens can let in is precious and demands a higher price tag. Shooting with a wide aperture comes with a few advantages. It means you can expose an image in dark, lower light conditions. It allows you to create more bokeh - the out of focus area that separates the subject from the background and is generally considered ‘cinematic’. 

This allows you to also be more deliberate about what is in and out of focus and is a way of guiding the audience's gaze to a certain part of the frame. So, for those cinematographers or photographers that want fast lenses, primes are the way to go.      

4 - SIZE

If you’re working in wide open spaces, with a dolly that holds a heavy cinema camera, then the size of the lens is less of a concern. But the reality is that more often than not that’s not the case and having a physically smaller lens attached to the camera makes things much easier.

By now we know that zooming requires extra glass and extra glass requires a larger housing. This means zooms are heavier, longer and wider than primes. 

Cinema rigs that need to be precariously balanced - like drones, a gimbal or a Steadicam - are often unable to take longer cinema zooms because of their excess weight and length. Some of the bigger zooms are so large that they can’t be effectively operated as a handheld camera on the shoulder and are limited to only being viable for use on a dolly or a tripod.

So, if you need to work in a confined space, want to move the camera on a lightweight rig, or shoot with a handheld camera then a big zoom lens may not be for you.   

 

5 - FOCUS

The fifth and final reason to favour a prime over a zoom lens is the zoom’s focusing limitations. Every lens has a range of distances that can be rendered in sharp focus and a range that will fall out of focus.

Cinema and photo lenses can focus on objects all the way out to infinity, but they are limited as to how near they can focus on an object. This limit is called close focus or MOD - minimum object distance. Generally, zooms have inferior close focusing capabilities.

For example, the closest that an Angenieux 24-290mm zoom can focus, set at a 50mm focal length, is 4’. A Zeiss Master Prime 50mm lens, on the other hand, can render objects in focus from as close as 20” away.

So, if you want to shoot a close up of a face or a detail, you can get much tighter with a prime lens than with a zoom lens before you lose focus.

Additionally, when it comes to many zooms built for photography, because the glass shifts when you zoom in, the focus also changes. So if a person is in focus at 70mm and you then zoom into them to 200mm without touching the focus ring, the shot may go out of focus and be soft.

Most cinema zooms are able to remedy this and maintain sharpness across the entire focal length range; however, for simpler, pin sharp focus that doesn’t shift, and a superior MOD, primes beat zooms.


The Most Popular Cinema Lenses (Part 5): Zeiss, Cooke, Panavision, JDC

In this episode, let's take a look at four more cinema lenses which are popular in the film industry.

INTRODUCTION

As a cinematographer, picking the right gear that you need to use on a shoot is, of course, extremely important. One of the most important bits of gear to put on your rental list are cinema lenses.

They are chosen on the basis of two factors: one, the kind of look that they produce and two, their practical and ergonomic advantages or disadvantages. The look and aesthetic feel for each project may be different and the kind of shooting environment and setups needed will also be different depending on the situation and story.  

So to better equip you, and I guess because I’m a bit of a lens nerd, I’m going to look at three more cinema lenses that are popularly used in the high end film industry, breaking down their look and their practical design. And, at the end of the video take a look at one lens contraption that’s a bit of an oddity but a pretty interesting piece of kit, so stick around.


JDC XTAL XPRESS

First up, we have the JDC Xtal Express lenses, an interesting set of vintage anamorphic primes made for 35mm.

It’s a bit tricky to verify, but based on what people have told me, this set of lenses is mainly made up of spherical glass from vintage Cooke S2 and Cooke S3 elements, combined with Japanese anamorphic elements - hence why they are sometimes called the Cooke Xtal Express. This glass was then packaged and housed as a lens by Joe Dunton Cameras - JDC.

The lenses were later bought by Panavision, the major rental house, who now supply them from their rental inventory.

Although perhaps not as well known as some of the bigger producers of anamorphic glass, this set is still fairly popular amongst DPs - particularly, based on my anecdotal experience as a 2nd AC, European and UK cinematographers.

On the surface they are a bit of an odd looking bunch. Different focal lengths come in different lengths, different weights, with different sized front diameters and different stops. Although they are fairly well constructed, they were made a while ago and do need to be handled with care as they are not as robust as the housings on other modern lenses, such as the newer Cooke Anamorphics, which we’ll get to later. This may mean turning down the strength of the torque on a wireless focus motor and being more gentle when pulling focus.

Practically, their longer build, especially on longer focal lengths like the 100mm, may make them a bit front heavy on cameras and therefore a bit tricky to balance on a gimbal or Steadicam without adding extra weights to the body to balance it out. 

While they are long, they aren’t particularly heavy, so are still good lenses for applications such as handheld camera movement.

Their different front diameter sizes also means that camera assistants need to switch out the back of the mattebox to different sizes, or use custom made reduction rings depending on the lens.

But, what they lack in practicality they more than make up for with their interesting production of images. They produce what I’d call quite a heavily vintage anamorphic look. This means they have quite a bit of distortion at the edges of the frame, especially at wider focal lengths, which will lead to straight lines, such as door frames, bending.

They also have heavy falloff around the edges of the images. This means that characters will appear sharp in the centre of the frame but, when placed on the edges of an image may seem out of focus even though they are not, due to the lens’ anamorphic falloff and softness.

Wide open they have a beautiful, soft, dreamy look with oval bokeh and lovely flares - perfect for those with a taste for vintage anamorphics.     


COOKE ANAMORPHIC/i S35

To compare the Xtals to a more modern anamorphic option, let’s take a look at the Cooke Anamorphic/i lenses, which are made to cover a 35mm sensor.

Whereas the Xtals use older Cooke S2 glass, the newer Anamorphic/i lenses are made with glass that has modern coatings. So while they are sharper across the frame, they still maintain some of those lovely anamorphic image characteristics when shot wide open, such as blooming, horizontal flares, oval bokeh and a more gradual anamorphic falloff around the edges.

On a scale from soft, vintage glass, like the Xtals, to super sharp, super clean modern glass, like the Zeiss Master Anamorphics, I’d put the Anamorphic/i lenses somewhere in the middle.

They come in a large range of 10 focal lengths, which is a lot for a set of anamorphic primes - anamorphic sets typically have fewer focal lengths than spherical ones. The range includes everything from a wide 28mm, to a macro 65mm for close up work without needing diopter filters, and a telephoto 300mm.

They cut together well and carry the same, consistent colour reproduction as other modern lenses in the Cooke spherical range, such as the S4s and the S5s. This has been dubbed ‘the Cooke look’ - which refers to their overall warmth, high contrast, natural colour reproduction and their sharp but smooth, pleasing rendering of details.

This look is paired with the modern construction of a super solid, durable lens housing with accurate, well spaced distance and iris markings and a smooth focus action when you turn the focus gear. This, along with their more standardised sizes across the range, makes them a pleasure for camera assistants to work with. 

A final useful feature comes from the /i part of their name. This refers to a contact point on the mount of the lens that connects to a matching contact point on the PL mount of most high end cameras. It’s used to send metadata from the lens - such as its focal length, focus distance and stop - to the camera, where it is recorded for each frame. This is useful both because it negates the need for focus pullers to programme distance measurements into some wireless follow focus systems, and because it provides information that is useful for visual effects work in post production.

ARRI / ZEISS ULTRA PRIMES

Next, let’s turn to spherical lenses and take a look at a very popular product in the Zeiss lineup - the Ultra Primes. If we look at the timeline of cinema lens releases from Zeiss, the Ultra Primes were developed after the Super Speeds and before the Master Primes.

This is fitting as I think both technically and in terms of their look they also occupy the middle ground between those two spherical prime lens sets. The Super Speeds have a subtle vintage quality to them, the Master Primes are supremely sharp, clean and modern, and the look of the Ultra Primes sits somewhere in between them.

Their look is very similar to that of the Master Primes and the two sets can easily be cut together. Their colour reproduction is accurate but neutral, or slightly cooler than something like a Cooke. They are sharp, and flare ever so slightly more than Master Primes - which flare very little. The choice of whether to shoot on Master Primes or Ultra Primes therefore often comes down to their technical designs.

In terms of their build they are lightweight and compact - a little chunkier than the Super Speeds, but smaller than the Master Primes. With a stop of T/1.9 they are very fast, produce pleasing, circular bokeh and are good for low light conditions, though they are slightly slower than the T/1.3 Super Speeds and Master Primes.

They come in 15 different focal lengths, ranging all the way from an 8mm extreme wide angle to a 180mm telephoto. This means they have the widest focal range coverage of pretty much any prime lens set available. This gives them an edge over the Master Primes for cinematographers who are very specific about their focal lengths and like to cover every in-between prime, such as having a 28mm focal length option between a 24mm and a 32mm. 

Like the relatively modern Cooke lenses we discussed, the Ultra Primes also offer a high quality, durable build with good markings that make them quick and easy to work with.  

PANAVISION FRAZIER LENS SYSTEM

Finally let’s take a look at something a bit more unusual. This quote unquote ‘lens’ might not be strictly popular, but I guess you could say it’s popular for one specific application. It’s not even a lens per se, but rather a lens system that can be attached to a camera for specialty macro shots where the lens needs to squeeze into a space and be incredibly close to the subject, sometimes at unusual angles.

Often referred to as a snorkel or a periscope, the Frazier Lens System is attached to a cinema camera via a PL mount. At the end of its long extended barrel filled with glass and mirrors, there’s a second lens mount which is designed to take the capture lens - which is usually a small, manual iris, stills lens - such as a Nikon AIS.

So why would you shoot with this contraption? It has a swivel tip that allows you to tilt, pan or rotate what the camera sees without having to rotate the camera itself. Because of its thin design it also means a lens can be squeezed into very tight spaces close to what it is photographing. This makes it a popular way to capture and move the camera around miniature sets.

A drawback is that it has an aperture of T/7.1, which means a lot of light is needed to achieve exposure. Because of its build, however, it is easier to light macro shots without the camera blocking the light source. And since you usually want to shoot extreme macro shots at as deep a stop as possible to make focusing easier, the T/7.1 aperture isn’t actually a big deal.

It has an iris and a focus gear on the barrel near the PL mount so that assistants are able to pull focus - while the focus on the taking lens is set at a distance and left there.

To better explain a real world application here’s a BTS photo I snapped on a commercial shoot I was working on as a 2nd AC. This system was used to get the lens within about an inch of our subject's eye to achieve an extreme macro shot and still have space to get in studio light. We used the Frazier system on an Alexa Mini with a Nikon AIS taking lens.

Gray Kotze

What We Don't See In Movies Is More Important Than What We Do

Since cinema is not a 360 degree medium - well, not yet anyway - and our perspective is not omnipresent, the choice of what is included and what is excluded from the screen is a choice that is left up to the filmmakers.

Let’s take a look at some examples of off screen action from well known films to uncover why this is done and why sometimes what we don’t see in movies is more important than what we do.

INTRODUCTION

What is it about this scene that makes it feel scary? It takes place in the middle of the day without any frightening, dark lighting. It’s set in a suburban kitchen, not a haunted house. Kitchen chairs aren’t all that terrifying. 

What makes it scary isn’t because of what we do see, it’s because of what we don’t see.

Since cinema is not a 360 degree medium, well, not yet anyway, and our perspective is not omnipresent, the choice of what is included and what is excluded from the screen is a choice that is left up to the filmmakers. 

It’s their job to create the perspective that the audience is presented with. So, let’s take a look at some examples of off screen action from well known films to uncover why this is done and why sometimes what we don’t see in movies is more important than what we do.

WRITING

The biggest decision that determines what we see in movies is made during the process of writing the script.

Just as the screenwriter wields control over what scenes they choose to include, they also control by extension what is not seen. Like in life, the world of fictional stories has an extensive timeline. It’s the job of the screenwriter to pick the moments from this imaginary timeline that best add to the telling of the story.  

Quentin Tarantino’s first film, Reservoir Dogs is a classic example of not allowing us to see the full story timeline. 

We open on a scene at a restaurant with a group of men conversing. The scene ends with them stepping out to perform a diamond heist. Cut to credits. Instead of the next scene opening on them performing the robbery, which would happen chronologically, we cut to chaos. One of the characters is critically injured. The robbery already happened.

Skipping over what would usually be the most important set piece in a heist movie is of course a very deliberate writing choice. Tarantino chooses not to show us that part of the story timeline. This sets up the tension and the key question in the entire film - what happened during the heist and why did it go wrong?  

The clichéd mistake that first time filmmakers make in their debut short film is not understanding that what you don’t show is as important as what you do. They’ll open on a character waking up, showering, getting in the car, driving to a restaurant and then having a conversation.

Something writers should always ask themselves is ‘what does this add’ and ‘is there a better way to present this information to the audience’? Does a character waking up add some crucial characterisation? Does it add to the core story or any themes? 

Or, can we characterise them in other scenes, in other ways - by combining multiple facets of information into scenes and suggesting events rather than showing them?

By withholding extraneous scenes, filmmakers can actually be more deliberate about how they choose to deliver a story on screen. 

SOUND

While the foundation of the story can be altered by what is left out of the script, the filmmaking decisions of what to exclude in scenes - which are made during the process of filmmaking or in the edit - can also shape how we emotionally interpret them.

Off screen sound can be used to supplement off screen images.

Rather than seeing the body as it drops down the well with a loud, revealing ‘clang’, we cut to reaction shots of the characters and empty shots of the location. Without needing to see the action, we can hear it and therefore tell how deep the well is. Seeing the characters react creates more tension than if we were to cut to the primary source of the sound and action.

Then, we begin to hear another off screen sound. The ominous drumming from an unknown, approaching threat. Again, this use of off screen action combined with off screen sound creates a tense mood and tone that would be diminished were we to merely cut to the source of the sound. 

Not seeing the source of the approaching threat creates more suspense than if we were to show it.

This kind of sound is what we call diegetic sound - sound that comes from within the world of the film. Non-diegetic sound - which comes from the world of the filmmakers, such as music - is another way of influencing the mood or tone in a movie.

TONES

So we know that music and off screen sound in movies can create a tone or mood, but we can also do this with the camera by choosing what images it captures and, more importantly, by choosing what we don’t see.

Off screen action can be used to build up tension and suspense, but it can also be used to draw out different emotional responses from an audience, such as comedy.  

Edgar Wright has used this gag in a few different films. There’s something funny and unexpected about a random arm reaching into frame with an object. If he were to shoot this wider so that we see the person who the reaching arm belongs to it wouldn’t work as well comedically. 

It would just be a regular action. He also uses this technique of shooting objects entering frame in extreme close ups during his snappy montage edits to advance the story forward in time. In this way framing is used so that the action of the object reaching in is unseen, unexpected, unpredictable and amusing.   

Information can also be withheld in darker situations for a different effect. In No Country For Old Men, we cut from a conversation about the nature of choice and chance, and a woman's impending death, straight to a shot of the hitman leaving the house, checking his shoes for blood. 

There’s something bone chilling, ambiguous and deeply affecting about this scene that comes from brushing over the assumed act of violence that we don’t see and rather sitting in quiet with Chigurh. Not seeing what happens is a more contemplative way of communicating violence and the themes that the movie deals with. 

One of the genres that plays with off screen action the most is horror. I’ve used the analogy before of horror being like pulling off a plaster. It can be done with a quick yank, like a jump scare, or it can be pulled off slowly, with a gradual, prolonged pain.

By filming scenes from the protagonist’s point of view and delaying putting the threat on screen for as long as possible, the tension and terror is cranked up to ten. What we don’t see is far more psychologically terrifying than what we do.

FINANCIAL LIMITATIONS

Coming back to the example of Reservoir Dogs, another reason why filmmakers may want to skip over showing certain things on screen may be influenced by financial limitations.

Certain scenes are more expensive to shoot than others. It’s pricey to effectively shoot action, car chases, crowds, or most kinds of traditional set pieces - whether that’s a massive car pile up, like in Red Rocket, or a bank heist, like in Reservoir Dogs. Excluding set pieces that need access to specific locations and contain lots of action is sometimes logistically necessary.

In these cases it’s either on the screenwriter or on the director to find creative ways to communicate that these moments in the story took place while not being able to show it. 

This might be through presenting it off screen with sound from a news broadcast discussing the aftermath of the event in the background or by skipping the event in the story timeline and making how it is discussed by characters the source of the drama and suspense.


PERSPECTIVE 

Finally, a movie like Rashomon uses what we’ve discussed as the entire premise of the film. 

The movie is built around a narrator who conveys four different accounts of the same incident. Each is presented by a different witness, with different details, to question the nature of truth and how it is reported. 

The entire premise of Rashomon relies on not showing the actual events as they unfold from an objective point of view. Eyewitnesses account for what is not seen to comment on how everyone has their own bias and subjective view of how life unfolds.

What we see and what we don’t see in a movie is defined by the filmmaker. By not showing certain information and by showing other information they are creating a perspective on how the story is told. 

What is shown and what is not are inextricably linked. Both are conscious choices. So the next time you’re writing, shooting, editing or watching, remember that what you don’t see is at least as important as what you do.

Gray Kotze

Cinematography Style: Linus Sandgren

In this episode I’ll dive into Linus Sandgren's philosophical approach to shooting movies and then take a look at some examples of the gear he uses to execute his photography.

INTRODUCTION

Being a cinematographer is a collaborative process. They need to work with technical crew, the director and even indirectly with the screenwriter - due to how they visually interpret the script. 

Linus Sandgren is a DP who focuses on these collaborations, especially the director, to find a mutually agreed upon look for each movie that takes the tonal and thematic aspects of the screenplay into account.

In this episode I’ll dive into his philosophical approach to shooting movies and then take a look at some examples of the gear he uses to execute his photography.   

PHILOSOPHY

Creating images in a crafted way isn’t just a matter of arriving at a location, pointing a fancy camera at the action and hoping for the best. Some of the most important and influential work that cinematographers do actually happens before then. 

In initial meetings before shooting they will be presented with a script or with a concept for the shoot. In order to execute the shoot they first need to come up with ideas for a look or visual style for the film. There are two facets to this: uncovering the tone and themes present in the screenplay, and incorporating the tone that the director envisions for the movie. 

“You need to have a great relationship with a director. I think, like, normally I try to figure out how the director works and what he likes and I like to try to adapt to that director’s style of working. My experience with directors is they also need us to have a partner to communicate and collaborate to create the visual storytelling.”

This doesn’t mean that Sandgren sees directors as having fixed styles. More often than not the tone and feeling that needs to be communicated visually will be determined by the content of the story. 

“It was sort of a 180 from La La Land to do First Man. His style of doing La La Land was very much whimsical, a musical. So, in First Man he wanted it to be very realistic and emotionally immersive.”

Just looking at the way the camera moves differently in these two films communicates two different feelings: one is smooth, theatrical, co-ordinated and flowing, while the other is rougher, more intimate and realistic. But, we’ll get more into how he practically executed those tones later on.

So, while the director’s style of working may influence the visual approach, so does the screenplay.

In pre-production Sandgren therefore asks himself two questions. One, how do you take a script and come up with a tone or themes that are pertinent to the story? Two, how do you then take those concepts and apply them practically to create the images. 

On No Time to Die he chatted with director Cary Fukunaga during prep to come up with these answers.

“Another thing that Cary then discussed with me a lot was the sort of expectations that we have on a Bond film is that it should be entertaining, it should be a sort of joyride for the audience that we go through a film and feel like it’s sort of one, long, big adventure. Then we start to discuss, sort of, themes and keywords which to me is really important as a cinematographer. I feel like I always start with the most simple words between me and the director. Like, for example it could be loss, or grief, or death or things that matter for the story.”

With these keywords decided upon, he can map emotional tones or themes onto the screenplay or story timeline and then break down how he can achieve this effect visually in each scene.

For example, for the emotionally jarring opening scene it was important that the images were not beautiful, postcard landscape snow images, but rather that they felt a bit more ominous, eerie, dangerous and isolated. This was done by pushing the images towards a monochromatic blue rather than crisp whites under a beautiful sunny blue sky with warm backlight. It’s a good example of how cinematography and a look can subtly influence the emotional impact of a story.

He is also deliberate about how he hires crew. In the same way that the gear will impose its own visual feeling on a film, so too does the crew that you hire. Every person will set up a shot differently and come with their own technical opinions, creative suggestions and style of physically operating the camera.

“Sometimes I love to operate. So I put an operator off to not work on a film because I like to operate. Like on First Man I actually operated A Camera but on La La Land I had operators. It depends on how I want to tell the story and when I feel it’s important to be closer to the actual making of the image. And sometimes it’s better to have someone else do it.”

First Man used a lot of handheld work and immersive operation of the camera. Sometimes with handheld operation it’s easier to find the right movement to tell personal stories when the camera is on your own shoulder. 

GEAR

With an understanding of the kind of tone and feeling that he needs to impart in the images, he can now go about choosing the correct gear to execute that tone.

One aspect of this is choosing the camera and lenses for a project. For example, on Don’t Look Up, he pitched the idea that the look should be rooted in the political thriller genre and that the comedy should come from the script and actors, rather than shooting it as a brighter comedy.

So, to ratchet up the tension of a thriller he used camera movement, mainly working off a dolly, and also a stylistic trademark of the director Adam McKay - zooms.

“When things sort of hit the fan and got even more suspenseful in dialogue scenes we could dolly to create tension with the dolly moves. But also we went a lot to longer zooms. For practical reasons we didn’t go to anamorphic zooms, we went for spherical zooms.”

Due to the extra glass needed in anamorphic lenses, they usually let in less light, while spherical zooms have a faster stop, meaning they let in more light and can be used on more dimly lit sets. This is particularly true at the long focal lengths they were shooting at - such as 300mm.

He selected Fujinon Premier Cabrio zooms like the 25-300mm, which has a stop of T/ 3.85 at the long end of the zoom. 

Since they were shooting on film which has a slower EI than the native EI of most digital cameras, they needed faster lenses to be able to shoot in low light scenarios.

Sandgren also rented an Aaton Penelope which he always had on standby with a 40mm spherical Macro Kowa lens. He used this setup for moments where the characters felt really nervous or tense by pushing in inches from their eyes to visually communicate a feeling of internal, existential terror that the characters were going through in that moment.

While he has used the digital Alexa on some of his commercial work, he likes using 35mm film on features when presented with the option. 

Some other 35mm film cameras that he’s used include: the Arriflex 235, the 435, the Arricam LT, and Panaflex XL2 from Panavision. 

These have been paired with lenses like: Hawk V-Lite 1.3x Anamorphics, Canon K35s, Atlas Orion Anamorphics, the Camtec Modded Vintage version of the Zeiss Ultra Primes, and anamorphic glass from Panavision such as the C, E and G-Series. He alternates between anamorphic and spherical lenses for both aesthetic as well as practical reasons.

He has used a variety of film stocks, including Fujifilm Eterna Vivid stocks before they were discontinued, but now selects from the Kodak motion picture stocks which are still available for use. 

He doesn’t stick to a single stock and often chooses a range of stocks for different scenarios that include: fine grain daylight stocks for bright exteriors, like 50D, more sensitive daylight stocks, like 250D, and high sensitivity tungsten stocks, like 500T, for interiors and night scenes. 

His preference for film is also due to the different gauges or formats that it comes in. He can select different formats  to communicate different tones or differentiate time periods or locations in a movie. 

For example, on First Man he used lower gauge, higher grain 16mm film for scenes inside the spacecraft to create a more realistic, documentary-like 60s and 70s feel that was more raw and authentic. He then switched to 35mm, with a more normal grain and increased contrast, when the story moved to the NASA industrial world and for scenes inside his home. To create an extreme contrast between the spaceship interiors and the more surrealistic moments of actually being on the moon, he turned to the super crisp, large format 70mm IMAX.

Constructing a realistic feeling sun required coming up with a strong, single light source. He had Attitude Specialty Lighting, who make 100K softsuns, construct two custom, high output 200K softsuns to provide a large area source that mimicked hard sunlight.

He often uses strong single directional sources of light, such as HMIs or tungsten units that he’ll gel to get to a daylight colour temperature. He then fills in little pockets of light and creates colour contrast in interiors by using practical sources built in, or placed on the set, that you can see in the shot.

But, his approach to lighting and the gear involved changes depending on each story. This could mean using a wall of LED screens with projected NASA archival footage to create in-camera space backgrounds with more realistic light and reflections. 

Or he may go in the opposite direction of realism, into a world where the lighting is determined by emotion and classical Hollywood tropes. This was done in La La Land by setting up an overhead blanket light at 4000K, hitting the walls with colourful practical fixtures and then rigging a tungsten spotlight to a dimmer which he could bring up or dim down to isolate characters in a nostalgic moment of focus.  

These evoke different tones, one which is concerned with realism and the other plays into visualising abstract emotion.  

CONCLUSION

These examples go to show that Sandgren’s style cannot be limited to a singular look, technique or gear preference. Each project requires a different feeling and each director he works with has different sensibilities and interpretations of the script. 

It’s his job to find a collaborative balance where the tone and themes in each scene are established, and then use that as a jumping off point to create visuals which represent a feeling.

Gray Kotze

Frame Rate In Cinematography Explained

Let’s unpack what frame rate is and how it is practically done with a camera and editing software, look at some common frame rates that are standardised in different places around the world, and with that knowledge, take a look at some creative ways that frame rate can be used to tell a story.

INTRODUCTION

Most of cinema unfolds in real time to replicate a version of reality. But sometimes reality isn't the best way to tell stories. What if we wanted to suspend two characters in time while the world around them flies by, draw out an iconic, awe-inspiring moment of action, or invoke an anxiety-inducing, choppy rapidity?

In filmmaking we can use the tool of frame rate, frames per second, or FPS, to control how motion is captured and presented on the screen. 

So, let’s unpack this further by first explaining what frame rate is and how it is practically done with a camera and editing software, look at some common frame rates that are standardised in different places around the world, and with that knowledge, take a look at some creative ways that frame rate can be used to tell a story.  

WHAT IS FRAME RATE?

When you shoot a digital video clip it may appear as one single entity, but to understand frame rate it’s important to realise that one clip is actually a representation of loads of individual pictures, or frames.

These pictures give the illusion of motion - hence motion picture. It works the same way as leafing through still images in a flipbook. If you do it fast enough it’ll look like the little picture is moving.

Before digital, a film cinema camera was essentially a big box that took individual photos very quickly - at rates such as 24 pictures in a single second. Instead of a page with a picture on it, each frame was captured as a single image onto a roll of film.

When people talk about frame rate they usually only talk about one thing: how many frames per second a camera is able to capture. However this is only half of it. To understand frame rate properly you need to know that motion can actually be manipulated in two ways. 

One, the capture frame rate refers to how many individual pictures or frames a camera can capture in one second.

Two, the project or playback frame rate refers to the speed at which the captured frames are played back, and is set in editing software.

Changing these two variables in different ways can create normal motion, slow motion and fast motion. 

Let’s say a camera’s capture frame rate is set to 24 frames per second. A clip is shot and brought into editing software, where the playback frame rate is also set to 24 frames per second. This results in on-screen motion which appears normal to the human eye.

This time, the camera is set to 48 frames per second, while the project playback frame rate stays at 24 frames per second. When you hit the spacebar and the clip plays, because the capture frame rate is double the project frame rate, the motion will appear twice as slow as normal - also known as slow motion.

Finally, let’s set the camera to capture action at 12 frames per second while maintaining a 24 frames per second playback frame rate. This time, the capture frame rate is half the project frame rate, so the motion will appear twice as fast as normal. This is fast motion.
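The three examples above all follow one simple ratio, which can be sketched as a tiny calculation (the function name and "speed factor" framing are mine, just for illustration - this isn't part of any camera or editing software API):

```python
def motion_speed_factor(capture_fps: float, playback_fps: float) -> float:
    """How fast on-screen motion appears relative to real life.

    1.0 means normal motion, values below 1.0 mean slow motion,
    and values above 1.0 mean fast motion.
    """
    return playback_fps / capture_fps

# Captured at 24 fps, played back at 24 fps: normal motion
print(motion_speed_factor(24, 24))  # 1.0
# Captured at 48 fps, played back at 24 fps: half speed (slow motion)
print(motion_speed_factor(48, 24))  # 0.5
# Captured at 12 fps, played back at 24 fps: double speed (fast motion)
print(motion_speed_factor(12, 24))  # 2.0
```

The same ratio also tells you how a clip's duration changes: a 10 second take shot at 48 fps and conformed to a 24 fps timeline contains 480 frames, so it plays back for 20 seconds.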

Usually, 24 frames per second is set as the base frame rate for a project, and then the capture frame rate is manipulated to change how we perceive motion, but not always. 

Different frame rate bases also exist. But before I dive into what some of the most common of these basic frame rates are, I’d like to thank today’s sponsor that made this video possible, Squarespace.   

COMMON FRAME RATES

There are two common base frame rates that are used for the majority of film and television productions. 

The first, 24p is based on the standard film exposure rate of 24 frames per second that delivers a look that the human eye has become accustomed to. 23.976 is also commonly used as a progressive scan format based on NTSC television. Regions where NTSC is used include North America, parts of South and Central America and Japan. So productions shot for these countries usually use a 24 or 23.976 frame rate as a project base.

The other common frame rate is 25p, which is used for PAL television. PAL regions cover most of the rest of the world, where 25 frames per second are used as the project frame rate. Although these two bases are most common, there are other alternatives, mainly for broadcast TV production, like 50 or 60p.

There have even been odd experiments, like the Hobbit series of movies, which were shot and displayed at 48fps - a choice that, contrary to Peter Jackson’s expectations, got quite a bit of pushback.

Projecting at this higher frame rate means there is less of the standard motion blur from movement that we as an audience have become used to. Everything is more crisp and slick, which in my mind makes productions shot at higher frame rates feel artificial, because our eyes are used to perceiving movement in a more natural way, with a kind of motion blur.

For these higher frame rate bases remember that we also need to change our capture rate to compensate. So for normal motion on the Hobbit, the camera shot at 48 frames per second. 

To get motion that is twice as slow as real life, they therefore had to boost the capture frame rate to double the playback frame rate - 96 frames per second.

While those are some of the most common frame rate bases, let’s go over some popular capture frame rates. As I have mentioned, a common approach to getting slow motion is to double the base rate. So, 48FPS for NTSC and 50FPS for PAL.

Then there are also frame rates such as 33, which sits somewhere between standard slow motion and regular speed and gives the feeling of a subtle suspension of time.

Once we get into the higher frame rates they are usually rounded up to numbers such as 100, 120 or 200. 

Then you get extreme slow motion for specialist shots where time almost feels like it’s standing still. This is usually done on a Phantom camera, at frame rates such as 1,000.  

CREATIVE USES

Let’s take a look at three examples where frame rate is used in a creative way. You can’t really talk about frame rate and motion without bringing up the work of Wong Kar-Wai. 

One technique that appears in many of his films is the use of step printing. 

To get this effect with a film camera, they shot at a low frame rate - in this case eight frames per second. Instead of printing each frame once and getting a sped up effect, Wong printed each frame three times to fill in the gaps and arrive at 24 frames in a second.

Instead of getting a new picture in every frame of playback, we get three of the same frames, then a new image for three frames, and this pattern continues.
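The step-printing pattern described above - each captured frame duplicated to fill out the playback rate - can be sketched like this (an illustrative snippet; the function name is my own):

```python
def step_print(frames, repeat=3):
    """Duplicate each captured frame `repeat` times,
    mimicking optical step printing in a lab."""
    printed = []
    for frame in frames:
        printed.extend([frame] * repeat)
    return printed

# One second of action captured at 8 frames per second...
captured = [f"frame_{i}" for i in range(8)]

# ...step printed 3x fills a full 24fps second, with each image
# held for three playback frames - the source of the stutter.
printed = step_print(captured, repeat=3)
print(len(printed))  # 24
```

Holding each image for three frames is what produces the smeared, stuttering motion, rather than the clean fast motion you would get from playing the 8fps capture straight at 24fps.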

In different contexts it may have different effects, but Wong and his DP Christopher Doyle often used this trick for action scenes with lots of movement. The stutter and motion blur it gives adds a freneticism and wildness to the scenes.

Another creative way to capture action sequences is with slow motion. On 300 this was done for many sequences, to a melodramatic extent. Director Zack Snyder earmarked specific shots that he wanted his DP Larry Fong to capture in slow mo. This was mainly done at varying frame rates on the Arri 435ES, up to about 100FPS, and on the PhotoSonics ER for really high frame rates up to 360.

Using slow motion in this way for action sequences suspends a moment in time and allows the audience to fully appreciate and be awed by the movements of the character that at real speed would be over in a flash.

This slowed down speed also feeds into the focus that the characters themselves have in that moment and shows how keenly they are perceiving the fight unfold. In this case it also further removes the story from the world of realism and places it in more of a comic book like world where the laws of time and gravity are broken. 

Finally, coming back to the king of frame rates, let's take a look at this shot from Chungking Express. There are a few things going on here.

First, because people in the foreground are rapidly rushing by in a blur, we can establish that this was shot at a low frame rate, such as eight frames per second and projected at a base of 24 frames - not step printed like before. This speeds up the motion. 

Second, the two main characters are moving as if in slow motion. This was simply done by getting them to move extremely slowly in real life. So, even though the film is sped up, the characters remain suspended in time. To me, this has the psychological effect of isolating those two characters and showing how detached they are from the rest of the world. They are so alone that time operates differently on them and leaves everyone else a faceless blur. 

It also elevates the relational and romantic bond between them - as if no one else exists in the world.  

Read More
Gray Kotze Gray Kotze

How Taika Waititi Shoots A Film At 3 Budget Levels

In this video I’ll take a look at just three feature films that he has directed at three increasing budget levels to analyse the techniques that he uses to make them.

INTRODUCTION

If there’s one word that sums up Taika Waititi’s approach to directing it’s tone. His movies are entertaining, uplifting and lean into an unforced comedic tone with a large focus on the writing, casting and performances of the actors.

Waititi’s career in film has involved years of work in commercials, music videos and TV series. In this video I’ll take a look at just three feature films that he has directed, at three increasing budget levels, to analyse the techniques that he uses to make them.

HUNT FOR THE WILDERPEOPLE - $2.5 MILLION

His love for comedy began early when he formed a duo with Jemaine Clement - who he’d later work with on other projects. He also started making short films. One of them, Two Cars, One Night, earned him an Academy Award nomination.

Around this time he read Wild Pork and Watercress and decided he would try to write a screenplay adaptation of the book.

“I wrote the first draft of this in 2005. I hadn’t made any other features before then and I found it really difficult adapting the book because I’d never adapted anything and I  thought you needed to be super true to the material. Basically lift everything from the book and put it into a movie. I put that to the side to concentrate on some other stuff and went off and made three other features. Then coming back to the material I realised, ‘Oh you don’t have to do that at all, you can just do whatever you want’. You put it through your filter, you know.”  

This idea of putting a screenplay or an idea through his own filter is a consistent feature of his work: whether he’s writing his own original idea, working with a screenwriting collaborator or bringing a massive blockbuster script to the screen. But we’ll get to that later. 

He takes a screenplay and applies his filter for comedy and adventure to arrive at an end product which has his recognisable authorship. This filter comes from a combination of the writing process, his approach to directing actors, and how he and his creative team visually tell the story.  

“I chose the tone that I wanted as well. I decided I was going to make a comedy that was like an adventure film. I sort of chose stylistically and tonally what I wanted to do and then took the parts of the book that I felt would work in the film I wanted to make and then made up the rest.”

With the final screenplay in place and a budget of approximately $2.5 million, half of which came from the New Zealand Film Commission, he moved to the next step in the process - which is of particular importance to a director who has a large appreciation for performance - casting the actors.

This involved casting and directing a child actor to play the role of Ricky Baker. Directing children can be a challenge. Acting, of course, takes years of practice in manipulating your emotions in a controlled way. 

The level of control and consistency required is difficult for most children. However, if you find the right child who is able to lock into the character, their performance may have a purity to it that surpasses their adult counterparts, as it is more natural and less constructed.

“What the trick is when you are auditioning, you search for the kid that resembles the character the most in personality. So, you never try and get a kid to pretend they are someone else. You choose the Ricky Bakers of the world and find the one that is closest to what you want in the film. And then all they have to do is remember the lines.”

With the cast in place and enough funding to shoot for a brief 25 days, Waititi brought Australian cinematographer Lachlan Milne onto the project to shoot the film. 

They decided on a single camera approach for most of the movie and rented an Arri Alexa XT with, based on some behind the scenes pictures, what looks like Cooke S4s and an Angenieux 12:1 zoom. 

For the car chase scene, which they shot over a couple of days, they used five different cameras to get enough coverage on the relatively low budget: three Alexa XTs which shot the on-the-ground footage, and two Red Epics with the Angenieux 24-290mm mounted on a Shotover on a helicopter.

To prepare, the DP used a DSLR camera to shoot different angles of a model car which could then be cut into a sort of animatic or storyboard so that they had a list of the shots they needed to get on the day.

Since most scenes take place outdoors, lighting continuity was always going to be tricky. Milne always tried to orientate day exteriors so that the actors were backlit by the sun. 

He also leaned into a natural sunlight look and didn’t use any diffusion scrims over the actors to soften the light. He didn’t want perfectly soft light that would be too pretty. 

Also, placing scrims overhead limits the movement of the actors and how wide the shot can be. The frame needs to be fairly fixed otherwise the legs of the stands will start getting into the shot.

The director wanted to draw on the visual style of films from the mid 80s, such as films by Peter Weir, which didn’t have visual effects and didn’t use fancy gear like Technocranes to move the camera. Therefore they used the 24-290mm zoom to punch into shots rather than using the more expensive, impractical and slicker camera motion.

Something about the slow zooms also effectively built up tension in scenes and, when combined with other wider shots, helped land some of the comedic gags. Another way he accentuates comedy is with music and sound, and by lingering on wider shots rather than cutting too quickly.

Overall, he used the relatively low two and a half million dollar budget to produce a bigger looking movie which mainly had contained scenes with one large chase scene set piece, with a large focus on casting and performances, almost no CG work, and an experienced crew which moved quickly with a single camera to pull off the entire movie on a tight five week schedule.   

JOJO RABBIT - $14 MILLION

“There was no real pitching process for this. So I didn’t go to studios and say ‘Hey, this is my idea for a film’. I realised early on it’s a really hard film to pitch. No one really wants to hear a pitch like this, so I’m going to write a script that’s really good and I’m going to let that be the pitch.”

A screenplay looking at World War Two through the eyes of a young boy in the Hitler Youth, where an imaginary friend version of Hitler plays a supporting role, is certainly a bit of an odd pitch. 

But, after sending the completed script around, Searchlight took an interest in the project and agreed to make the film on one condition, that Waititi play Hitler. Like Hunt for the Wilderpeople, Jojo Rabbit was also an adaptation from a novel, Caging Skies.

His screenplay and vision for the film took a different approach to how most World War Two films are presented and once again drew from his own tonal sensibilities towards comedic entertainment that is uplifting.

“We can’t get complacent and keep making the same style, the same tonal style of film: it’s drama, it’s depressing…everything is desaturated and browns and greys. Crazy idea, we can also maybe create something that is colourful and bright and has humour in it. I knew the tone really early on.”

With a budget of $14 million from Fox Searchlight and TSG Entertainment they tried to find a base for production that would give them the locations they needed and the most bang for their buck.

Initially, the plan was to shoot in Germany. However, since German labour laws meant that child actors could only work for around three hours per day, and the movie was filled with child actors, this would have almost doubled the number of shooting days they needed.

Eventually they decided on the Czech Republic, which had buildings that came ready made to look like they belonged in the World War Two era, a reliable film industry, and labour laws which allowed them to schedule the shoot into 40 days of filming.

From the budget, $800,000 was given to the art department, which may sound like a lot, but is actually very low for purchasing all the army equipment and creating the sets for a period film. So, having town locations which were already almost good to go helped create the period world on the low budget.

Mihai Mălaimare Jr. was brought on board as the cinematographer on the film. Prior to shooting, the director and the DP collaborated to devise the format that was right for the project.

“We were both really attracted to 1.33, but the audience is not as used to that aspect ratio anymore. We were trying to work out how it would work for us framing wise and realising how much more top and bottom it would reveal in that aspect ratio. That was the only thing that made us try the 1.85:1. One thing that Taika really responded to and I wanted to try for so long was anamorphic 1.85.” - Mihai Mălaimare Jr.

To get this squarer aspect ratio with anamorphic lenses he used an unusual technique. Hawk 1.3x anamorphic lenses are designed to be shot with a 16:9 size sensor and get a 2.40:1 aspect ratio. However, if you shoot these lenses with a 4:3 sensor size, de-squeeze them 1.3x and then crop just a tad you can get a 1.85:1 aspect ratio that maintains an anamorphic look. Shooting the 1.3x V-Lites on a 4:3 sensor on an Alexa XT gave him the best of both worlds: the squarer aspect ratio along with anamorphic falloff, without needing to do much cropping.
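The arithmetic behind this trick can be sketched out (an illustrative calculation; the exact crop will vary with the camera's recorded resolution):

```python
# Alexa XT 4:3 sensor with a Hawk V-Lite 1.3x anamorphic squeeze.
sensor_ar = 4 / 3            # ~1.333:1 native aspect ratio
squeeze = 1.3                # horizontal squeeze factor of the lens

# De-squeezing stretches the image horizontally by the squeeze factor.
desqueezed = sensor_ar * squeeze
print(round(desqueezed, 3))  # 1.733 - already close to 1.85:1

# A slight top-and-bottom crop reaches the delivery ratio.
target = 1.85
height_kept = desqueezed / target
print(round(height_kept, 3))  # 0.937 - roughly 94% of the height survives
```

Compare this with the conventional route: the same 1.3x lens on a 16:9 (~1.78:1) sensor de-squeezes to roughly 2.31:1, near the 2.40:1 widescreen standard, which is why shooting it on a 4:3 sensor is the unusual choice here.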

To portray a brighter version of reality, through the eyes of a child, they used a bright colour palette with lots of vibrant greens, blues, yellows and of course reds. They also used more whimsical slow motion and central, front-on, symmetrical compositions, which placed characters in the middle of the frame and used natural framing devices on the set such as doors, picture frames, tables or tiling for balance.

Much of the tonal balance was adjusted in the edit. Whereas some directors may despise test screenings - showing a cut of the film to an audience prior to release - Waititi likes to use them in order to gauge the effectiveness of the pacing of different versions of the edit.

“It was more the tonal balance. So I test my films all the time with audiences. So you get feedback. What do you think of this? Were you bored here? Were you overstimulated here? Was it too funny here? Was it too sad here? And then just finding a balance.”

Jojo Rabbit was produced on a higher $14 million budget that accommodated for more shoot days, a war set piece, lots of extras, some star performers, and period correct production design.


THOR: RAGNAROK - $180 MILLION

“I have a theory that there are periods when the economy is suffering and people don’t have a lot of money to spend, they don’t want to go and see films about how tough life is for people. I think the reason that a lot of those dramatic films are not doing well is because people want an escape, which is why a lot of the superhero films are doing really well.”

This movie involved a step up from a fairly regular budget to what I guess you could best call a Marvel budget.

To get the job Waititi pitched his idea of the film, which involved creating a ‘sizzle reel’ - basically a montage that he cut to Immigrant Song by Led Zeppelin using footage from other films. The studio were also enthused by his idea to bring a vitality to the movie and his trademark brand of humour to the characters.

Working on an MCU movie means that the director basically has whatever technical resources they can dream of, as much time as they need (in this case 85 shoot days, or a full two years when including pre and post production), and can use the massive budget to hire pretty much whatever actors they want.

However it also means that most of the look of the film will be constructed after shooting with CGI and to a large degree will be controlled by the studio. You know, the desaturated, CG-laden feeling all Marvel movies have.

What falls on the director therefore is not so much creating the aesthetic style, but rather managing the project and creating the overall tone by using performances and storytelling.

To wring more of a comedically authentic tone from the script, he worked with the actors to achieve a more natural delivery of lines.

“The thing about a lot of studio films and Hollywood films especially is that when you hear a joke in these films you get the feeling that the joke was written about a year before they shot it and then a couple of people in the board room were like, ‘and then he’s gonna say….’ and they’re like ‘that’s gonna be amazing when we shoot that in a year.’”

Instead they worked with a script that had suggested dialogue and jokes. Once he and the actors were on set, they could then work with that material until they found what delivery worked naturally, without being so tightly constrained to the original shooting script.

The film was shot by Javier Aguirresarobe, who has a long career working on a range of both low and high budget movies. About 95% of the shoot was done with bluescreens. 

This meant that the DP lit the actors as a way to suggest to the post production supervisor where, with what intensity, quality and colour temperature they imagine the light to be. CGI is then used to construct the rest of the world and the light that is in it. Motion capture suits were used to capture the movement of computer generated characters.

The film was largely shot on the large format Alexa 65 with Arri Prime 65 and Zeiss Vintage 765 lenses. The Phantom Flex 4K was also used for shots which needed slow motion.

Thor: Ragnarok therefore used its enormous budget to hire a cast of famous actors, fund a very lengthy 85 shooting days with all the gear they could imagine, loads of action scenes, and effectively pay for two years of production time that included expensive CGI work in almost every shot in the entire movie, all the while Waititi maintained his grasp of a perfect comedic, adventure tone.

Read More
Gray Kotze Gray Kotze

5 Steps To Shooting A Documentary

Let's unpack five steps that you can take whenever you get the opportunity to work on a new documentary project.

INTRODUCTION

The world of documentary is one that is fundamentally different from other forms of planned fiction filmmaking, like music videos, commercials and features. What differentiates these disciplines is that fiction work is pre-planned and structured ahead of shooting, while documentaries rely on a broader plan with inevitably less precision.

This means that documentary cinematographers need to always be on their toes and be quick to adapt to unexpected situations as they unfold. Having said that, this doesn’t mean that you should just go in with a camera and rely purely on luck and instinct. There are some clearly defined ways that we as filmmakers can use to shoot more consistent, stronger content.

So, I thought I’d use this video to unpack five steps that I take whenever I get an opportunity to work on a new documentary project.    

1- IDENTIFY THE CONTENT

Gone are the days where most documentaries meant setting up a few sit down interviews which would then get cut with archival footage. 

Today documentary, or documenting a version of reality, comes in many forms - from commercial, branded content that uses touches of non-fiction, to purely observational filmmaking, re-creations of events, nature documentaries, traditional talking head documentaries and everything in between.

It may seem obvious, but as a cinematographer, the first thing you need to do when starting a new project is clearly define the form of the film and identify the types of scenarios that you will be filming.

The reason this is so important is twofold: it’ll help you to identify the gear you need to bring along and will help you to nail down a visual style. But we’ll go over those two points separately.

When you’re dealing with real life situations, planning and having a clear vision for what you need to get will make it far easier to execute on the day. Half of making documentaries comes from producing and putting yourself in the right situation to capture whatever the action is.

Most of these decisions come from the director. In documentary work, the director may also be the cinematographer. If there is a dedicated cinematographer then knowing the form of the project and the kind of footage needed is still crucial. 

For example, you may go into a shoot knowing that you need an interview with the main character that should be prioritised, some B-roll footage of the location and one vérité scene with another character. 

If the schedule of the main character changes and they suddenly aren’t available to do an interview in the morning anymore, then you know that the first priority is rescheduling the main interview, the second priority is finding time with a secondary character or waiting for a potential scene to present itself, and third, the B-roll can be gathered throughout the day in the gaps in the schedule.

Making a list that prioritises footage that is a must have, footage that would be nice to have and footage that would be a bonus to get is useful going into the shoot. It’s always easier to improvise and get unexpected, magical moments when you already have a solid base or plan to work from that tells the core of the story. 

Once you’ve put the edges of the puzzle in place, it’s much easier to then fill in the rest.     

2 - GEAR

As I mentioned, selecting the gear needed for a project will be determined by the kind of scenarios that need to be captured. 

For example, a verite documentary may be captured by a single handheld camera, with a single lens, which also records sound, operated by one person. While larger budget true crime documentaries with re-creation scenes may have an entire crew, complete with a cinema camera package, a lighting package, and a dolly.

Whatever gear is needed on a documentary shoot there is always one certainty: you need to be able to work fast. For that reason, you need to have a high degree of familiarity with the camera you are shooting on. If you need to quickly capture a moment in slow motion can you find the setting within a few seconds? Or if the light suddenly changes and you need to compensate for overexposure can you quickly adjust the ND filter? 

This is why going into a shoot I’d recommend configuring the camera in such a way that you are able to make changes as quickly as possible. This may be through user buttons, through having a variable ND filter on the front of the lens, or by having a zoom that you can use to quickly punch in or out to a specific shot size. When you’re capturing real life, you don’t ever want to miss a crucial moment if it can be avoided.

Having less gear also speeds things up. It means less to set up, carry around and to pack away. There’s a sweet spot between having the tools that you need and not having too much stuff to lug around.  

Although there are loads of different approaches to selecting gear, let me go over what is a fairly typical setup.

Starting with the camera, a popular choice is something like a Sony FS7, a Canon C300 or something newer like the FX6. These cameras have great codecs that produce high quality images with a relatively small file size - which you need on documentary projects where you often need to shoot a lot of footage. They also come with XLR audio inputs to feed sound directly into the camera and have user buttons and internal ND filters for quick operation.

When it comes to lenses, I personally prefer working with primes, but zooms are probably more popular as they allow you to quickly readjust shot sizes. Something like a 24-70mm is a pretty standard choice. Depending on the content it’s usually useful to also carry a long zoom like a 70-200mm.

I like to carry screw-on filters with me, such as a variable ND and maybe a diffusion filter or a diopter filter, depending on the look.   

Then you want a lightweight tripod with a fluid head that is smooth to operate, but light enough to carry around all day and to quickly set up. Many people now like to shoot with a gimbal too.

I also like to carry around a little lighting bag and a stand. This can be used for an on-the-fly interview, bringing up the exposure in a dark space or lighting observational scenes so that they are more ‘cinematic’. 

I exclusively choose LEDs that are both dimmable and are bi-colour. This means you can easily change their colour temperature and the intensity of the light with the turn of a knob. Again, speed is key.

3 - VISUAL STYLE

Whether you are conscious of it or not, every decision that a cinematographer makes while shooting contributes to some kind of visual style. Even the act of just picking up a camera quickly and pressing record to capture a moment creates a visual style with a loose, handheld, verite look.

This visual style may affect the audience in a subtly different way than if the same scene was shot locked off on a tripod, or shot with lots of movement on a gliding gimbal.

There are a million different directions to go in. Maybe you decide on a specific type of framing for the interviews, maybe the entire film is handheld, maybe you only use natural light, maybe you use artificial light to enhance reality, maybe you use a drone to give context to the space, maybe you suspend time by using slow motion, or shoot with a diffusion filter to make the images more dreamy. These are all decisions that influence a film’s visual style.

Therefore the next step in documentary cinematography, before arriving on set, is coming up with an idea for an overarching visual style that supports the film. This style could be rationally decided upon based on thought or based on what feels right.

This step also needs to be considered with the first step of identifying the kind of content you are shooting. You need to find a style that is balanced with what you can realistically achieve. For example if you’re shooting a fast paced fly on the wall documentary it might not be possible to shoot everything from a tripod with perfect lighting.   

Usually, I find I have a stronger connection to films that have some kind of visual cohesion and an artistic vision that stretches across the entire doccie.

Of course since we are shooting in unpredictable situations, with less control over the environment, it’ll almost never be possible to get exactly what we want visually.

But, going in with a plan or an idea of the look, or finding the look as you begin shooting, will almost always result in stronger images than if you go into shooting with no vision or ideas at all and just get whatever you can get without giving any thought to how the images look and the feeling they will convey.

4 - SOUND

Next, let’s talk about something that is sometimes loathed by cinematographers, but which is as important, if not more important, than the image: sound.

Some documentaries may have the resources and the need to hire a dedicated sound person, but often in the field of documentary the job of recording sound may fall on the cinematographer.

Therefore it’s important to at least know the basics of how to record sound. There are two ways this is done: with lav mics that are clipped onto the subject, which feed a signal wirelessly to a receiver plugged into the camera or a recorder that captures the sound; or with a boom mic that can either be mounted on-board the camera or used by a boom operator on a boom pole. For more on this, I made another video on boom operators.

The main point to be aware of as a cinematographer, is that getting good sound may involve compromise. For example, you may want to shoot a beautiful wide shot of a scene, or an interview, but if you are shooting in a noisy, uncontrolled environment you may be forced to scrap that plan and shoot everything in a close up so that you can get the boom mic nice and close to the subject.

It may be frustrating to sacrifice the better shot for the sake of sound, trust me I hate it, but what I always tell myself is that it’s better to get a worse shot that has usable sound, than to get a beautiful shot that has terrible sound. 

If you get a beautiful shot but the sound isn’t usable it’ll just end up on the cutting room floor anyway, never seen by anyone besides the editor.

Of course this is dependent on how necessary the sound is, but as a general rule if you’re working with an on-board mic and there is crucial dialogue - prioritise getting usable sound over getting a beautiful image.  

5 - COVERAGE

The final step to shooting a documentary is, well, the actual act of shooting it. Understanding coverage - the angles, shot sizes and way in which a scene is shot - is an invaluable skill in documentaries.

While in fiction filmmaking you can shot list, storyboard or consider the coverage of a scene between setups as you shoot it, when you are working in unexpected situations that will only take place once, you have to make these decisions in real time.

It’s a difficult thing to give broad advice on as different scenes can unfold in different ways, but let’s go over some basic ideas for capturing an average vérité scene.

I find it’s useful to edit scenes in your head as you are shooting them. For an average dialogue scene you know you’ll need a few things. One, you’ll need a wider shot that introduces the audience to the space of the location so that they can orient themselves and understand the context. Two, you’ll need a shot of whoever is talking, specifically the main character or characters that you are focusing on. Three, you’ll need to get reaction shots of whoever isn’t talking, so that the editor can use these to shorten a scene. 

For example, there may be one sentence at the beginning which is great, then they waffle for a bit, then they have another three sentences which are great. If you have a reaction shot, then you can start on the person saying the first sentence, cut to the reaction shot while the dialogue carries over into the final three sentences, then cut back to the person speaking. This naturally smooths things over and ‘hides’ a cut.

If you only have shots of whoever is talking, then the editor will have no option but to either select one section of dialogue, for example the final three sentences, or to jump cut - which can be abrasive.

Also remember that the size of a shot affects how an image is interpreted. So for more personal moments you want to try and get as close as you can. However, you also need to take into consideration that your proximity to a person will affect how they act.

If you meet someone for the first time and get right up in their face with a camera immediately, they will be put off and likely won’t open up to you emotionally.

That’s why I usually like to start shooting scenes wider and then begin to move closer as the subject becomes more comfortable with my presence and the conversation starts to heat up.

Covering a scene in a documentary situation comes with experience. It’s like an improvisational dance that needs to balance getting shots that will cut together, making the subject feel natural and at ease and anticipating the right shot size for the right moment.   

Although this just touches the surface, if you want to cut a basic vérité dialogue scene together and make an editor happy, then make sure you get, at a minimum: a shot that establishes the space, a shot of the person talking and a reaction shot of whoever isn’t talking.

Gray Kotze

Cinematography Style: Janusz Kamiński

In this episode of Cinematography Style I’ll unpack Kaminski’s philosophy on filmmaking that uses visual metaphors to express stories, and give examples of the kinds of gear and technical tricks he’s used as a cinematographer to create images.

INTRODUCTION

With a career producing images that has spanned decades, it can be tricky to pin down exactly what defines the work of Janusz Kaminski. However, it’s difficult to deny that a large part of his filmography is the product of his extensive collaborations with iconic director Steven Spielberg.

This raises the question, how do you separate the creative input of the director and the cinematographer? Is it even possible to do so?

In this episode of Cinematography Style I’ll unpack Kaminski’s philosophy on filmmaking that uses visual metaphors to express stories, and give examples of the kinds of gear and technical tricks he’s used as a cinematographer to create images.   

BACKGROUND

During a period of political turbulence in the early 1980s, the Polish cinematographer moved to the United States where he attended university. He decided to take up cinematography and went to film school at the AFI.

He got his first professional job in the industry as a dolly grip on a commercial. The camera operator quickly told him this wasn’t for him. Next, he worked as a camera assistant, where he was again told he also wasn’t any good. He then started working in lighting which kicked off his career.

During this time he worked on lower budget productions with fellow up-and-coming cinematographers such as Phedon Papamichael and Wally Pfister. He also began working as a cinematographer in his own right.

“I was here for 13 years and I shot 6, 7 movies. So I was experienced I just didn’t have that little push. I shot a little movie directed by Diane Keaton. Steven liked the work, called my agent, we met and he offered me to do a television movie for his company and after that he offered me Schindler’s List.”

This collaboration proved to be a lasting one. Over the years they have shot 19 other films together and counting. 

Other than Spielberg he’s also shot feature films in many different genres for other directors such as: Stephen Sommers, Cameron Crowe, Judd Apatow and David Dobkin.  

PHILOSOPHY

Coming back to the question of how you separate Kaminski’s input from Spielberg’s: in their vast collection of films together, a lot of the overarching visual decision making does come from the director’s side.

Prior to their work together, Spielberg was known for the creative way in which he positioned and moved the camera in order to tell stories. In that way, I think a great deal of the perspective of what the audience sees in the frame comes from him.

For some movies, such as West Side Story, Spielberg uses extensive storyboards to pre-plan the coverage in a very specific way. Other movies, like Schindler’s List, had surprisingly little planning and were shot more spontaneously, without any shot lists or storyboards.

In that situation, Kaminski used a portable tape recorder to dictate notes about lighting, problems or gear he might need, to bring order to his thoughts and successfully execute the photography as they went.

In terms of the overall look and lighting of Spielberg’s early films, they all followed a similar template: a traditionally beautiful Hollywood aesthetic, with haze that accentuated an ever-present glowing backlight and gave the actors an angelic, rim-lit halo outline.

The other cinematographers he worked with were intent on servicing this traditional aesthetic.

When Kaminski came on board to shoot Schindler’s List he deconstructed the Hollywood, family-friendly beauty that audiences had come to expect from Spielberg’s work.

“I think the idea of de-glamorising the images, strangely, I’m always interested in that. I didn’t want that classical Hollywood light. I wanted more naturalistic looking. We all want to take chances, because it’s not this comfortable life we’ve chosen where we just make movies and we work with movie stars. We express ourselves artistically through our work and we want to take chances.”

Throughout their collaborations, Kaminski was able to find a middle ground that balanced Spielberg’s desire for a traditionally beautiful look with his own appreciation for de-glamorised images that could be considered beautiful in a different way.

Another ever-present idea in his work is his use of visual metaphors - where the camerawork represents a particular idea, or leans into a visual perspective that represents the location or time period being captured in the story.

“I think each story has its own representation. You have to allow the audience to immediately identify where they are. So if you’re not using some very strong metaphors you will lose the audience. So the first explosion is very yellow, then we go to France and it’s more blue-ish, you go to Italy it’s very warm and fuzzy, France it’s very warm and fuzzy. So using those visual cliches that we as the people identify with specific countries.”

He doesn’t only create these visual metaphors with colour. On Munich he used zooms to capture the photographic vocabulary of the 1970s when those lenses were popular. 

Or in Saving Private Ryan he mimicked the kind of manic, handheld, on the ground style that the real combat cameramen of the time would have been forced to use.

Or in Catch Me If You Can, he differentiated the time periods by giving the 1960s scenes a warm, romantic glow and the 1970s scenes a slightly bluer, flatter look.

These visual languages and cues subtly change depending on the movie. They back up each film by using the images to support the story in a way that hopefully goes unnoticed by the audience on the surface, but feeds into how they interpret the movie in an unconscious way.

GEAR

“I look at cameras as a sewing machine. When you talk to the wardrobe designer you don’t ask her what kind of sewing machine do you use, because it’s just a sewing machine. It doesn’t really matter. The equipment, all that stuff is not. What you do with it is essential.” 

Some cinematographers like to be consistent with their gear selection to carry their visual trademark across the projects that they work on. Kaminski isn’t like that.

Throughout his career he has pulled a variety of optical effects from his big bag of tricks. Sometimes this involves using filters, sometimes photochemical manipulation, other times unique grip rigs or unconventional camera settings.

So, let’s go through a few examples of some gear he has used, starting with his camera package.

He flips between shooting with Panavision cameras and lenses in the US and using Arri cameras when working in Europe. He’s alternated between shooting Super 35 with spherical lenses and in the anamorphic format. 

Spherical lenses are more practical as they are faster, have better close focus and are smaller, which makes them better suited for shooting in compact spaces such as car interiors. Examples of some of these lenses that he has used include Cooke S4s, Panavision Primos and Zeiss Standard and Super Speeds.  

He usually shoots close ups at a more romantic 50mm focal length or longer to flatter the face, but on Schindler’s List chose to shoot them with a wider 29mm lens that lent itself to realism.

He’s used anamorphic lenses for their classical Hollywood look, with beautiful flares that are impossible to otherwise recreate. Some examples are the C-Series and more modern T-Series from Panavision. 

He has used digital cinema cameras occasionally but almost exclusively shoots features on 35mm film - including his recent work. His choice of film stocks has been extremely varied. 

On Schindler’s List he mainly shot on Eastman Double-X 5222 black and white. For specific sequences that required parts of the frame to be colourised, such as the famous shot of the girl in the red dress, he pulled an interesting photochemical trick by recording on Eastman EXR 500T 5296 colour negative film stock and then printing the film onto a special panchromatic high-con stock which is sensitive to all colours and used primarily for titles. 

This gave them the look they wanted that best matched the rest of the black and white footage and didn’t contain the blue tint that came with removing the colour from the colour negative in the regular way.   

To get a flatter image for the 1970s scenes in Catch Me If You Can he used Kodak 320T stock in combination with low-con and fog filters to purposefully make the images a bit uglier, more neutral and drab. This coincided with the main character’s fall from grace as he came to terms with the real life consequences of his actions.

Or on Saving Private Ryan, he settled on Eastman’s 200T film stock, which he pushed by one stop and used a film development process called ENR which both desaturated the stock and sharpened up the look of textures, giving the details in the image a grittiness.

When it comes to lighting, he acknowledges that some gaffers are more technical while others are more conceptual. Due to the large scope of the kinds of sets he lights, it’s more practical for him to describe the lighting he wants in general terms - such as no backlight, or this source needs to feel warm - rather than describing and placing loads of specific units around a set.

“The scope is way too large. You can’t demand every light be placed on set according to your desires, so you have a gaffer who is knowledgeable. On the shooting day or the day before you talk about the specifics of each scene or you adjust the lighting. Or you do the lighting with the gaffer on the given day right after the rehearsal. Surround yourself with the best people so you can work less and I want to work as little as possible.”

Spielberg likes to move the camera in a fluid, expansive way, with rigs such as a Technocrane, that reveals large portions of the location. This adds to his challenge of lighting as it’s far easier to light in a single direction with a 15 degree camera angle than it is to cover 270 degrees of the set.

For other films, such as Saving Private Ryan, a lot of handheld moves were done to introduce a feeling of realism that placed the viewer right down on the shoulder of the operator, in the middle of the action.

To inject even more intensity into an already shaky image he used Clairmont Camera’s Image Shaker. This is a device that mounts onto the front bars of the camera and vibrates at a controlled level, with vertical and horizontal vibration settings that could mimic the effect of the explosions happening around the soldiers.

CONCLUSION

Kaminski uses whatever technical trick he can think of to create visual metaphors that push the story forward, whether that’s done photochemically, with a filter or by physically shaking up the image.

In the end, the technical solution or piece of equipment itself is less important than the cinematic effect that it produces.

Spielberg and Kaminski’s filmmaking is an intertwined creative partnership which has combined Spielberg’s traditionally cinematic visual direction with Kaminski’s focus on visual metaphors. Sometimes this means perfect golden backlight, but other times a feeling of realism that is far more ugly and true to life is what is required.

Gray Kotze

Alexa 35 Reaction: Arri's First New Sensor In 12 Years

My first reaction to details about the Alexa 35 prior to the release of the camera.

We’ve been hearing rumours that Arri has been developing a new Super 35 4K camera for years…Well, it seems it’s finally time. A brochure for the new Alexa 35 has leaked that outlines all the features of this new camera.

If you follow the channel you’ll know that I don’t really react to news stories, but rather focus on discussing a more general overview of filmmaking topics. However, since I think this new Alexa 35 has the potential to take over the high end cinema camera industry in a similar way that the original Alexa Mini did all those years ago, I’m going to run through and react to some of the key features of this new camera.

BACKGROUND

Before I start, I should probably mention that Arri’s approach to camera development and releasing new cameras is a bit different to some other brands. Brands like Red, for example, are known for putting out cameras as soon as they can and then sorting out any bugs or issues that arise in early testing.

Arri is far more conservative and precise about their releases. They don’t release new gear very often - the Alexa 35 represents Arri’s first new sensor in 12 years. So, when they do choose to unveil a new piece of gear to the public you can rest assured that it has been thoroughly tested and will live up to the specs they quote.

SUPER 35 4.6K

Arri’s cameras are each developed to serve a specific section of the cinema market, defined by sensor size, specs or the physical size of the camera. For example, the Alexa Mini was developed as a Super 35 camera small enough to be used on a gimbal, while the Alexa 65 was developed to provide a 65mm digital sensor size.

The Alexa 35 was developed to be an update of the Alexa Mini, with a Super 35 sensor, a small form factor and the crucial update of recording higher resolutions. Apart from its effect on the images, a big reason this increase in resolution was made was to meet the 4K requirements needed to film Netflix Originals. Previously this was only possible with their cameras that had larger sensors like the Mini LF and was unavailable in the Super 35 format.

As I’ve said in a previous video, Super 35 sensors have a different look and field of view than large format cameras. And since Super 35 has been the standard format throughout cinema history, it also offers the largest range of cinema lenses to choose from.

SPECS

So let's run through some key specs. Like their other new cameras, the Alexa 35 can record in ProRes or ARRIRAW. It tops out at 4.6K in Open Gate and can record up to 75 frames per second onto the larger 2TB Codex drives, which goes down to 35 frames on the 1 TB drive.

In regular 4K, 16:9 mode, this frame rate is pushed up to 120 in ARRIRAW. This is a nice upgrade from the Mini LF and will cover most slow motion needs on set, before needing to change to a dedicated slow motion camera like a Phantom. 

An impressive feature of this new sensor is that Arri has found an extra one and a half stops of dynamic range in the highlights and another stop in the shadows. This brings the total exposure latitude of the camera to 17 stops. 

They also claim that the highlights have a naturalistic, film-like roll off to them. To me, how a cinema camera handles the highlights is one of the most important factors in creating a pleasing filmstock-like look. It’s something that the previous ALEV 3 sensor did well, which I’m sure will continue or be improved upon by this new iteration.

As many DPs tend to push a more naturalistic lighting style these days, I think the increased dynamic range that they claim will help control the light in more radical exteriors and make sure there is detail in the highlights from hot windows in interiors.

More manufacturers these days, such as Sony, have been moving to a dual ISO model that has a standard ISO for regular use and a boosted native ISO for low light situations. 

It seems Arri hasn’t gone quite this far but has made a move in the direction of improving the low light performance of the camera with what they are calling an ‘Enhanced Sensitivity Mode’. This can be activated when the EI is set between 2,560 and 6,400. They claim this creates a low noise image in low light and is targeted at filmmakers who want to use available light during night shoots. 

When it comes to colour, Arri has developed a new workflow called Reveal colour science, which they claim is a simpler workflow for ARRIRAW post production and leads to higher quality images with accurate life-like colour. They also claim that the Alexa 35 footage will be able to be cut with their existing line of Alexa cameras. While I assume the colour will therefore be fairly similar to the existing Arri look, this is going to be something that will need to be seen once footage starts getting released.

TEXTURES

So, I mentioned a new feature of the Alexa 35 that I’m excited about: what they are calling Arri Textures. When digital cameras were originally introduced, the common way of working with them was to record as flat a log image as possible, which left more room for the image to be manipulated in post production - creating a look, adding artificial film grain, adjusting saturation, these kinds of things.

I think as cinematographers have gotten more used to the digital workflow there has been a bit of a push to go back to the ways of old where the decisions that cinematographers made on set determined the look of the negative. 

Some do this by creating a custom LUT before production, which is then added to the transcoded files that are edited with, so that a ‘look’ for the footage is established early on, rather than found later when it’s handed over to a colourist at the end of the job.

With that said, Arri Textures is a sort of in-camera setting that defines the amount and character of the grain in the image, as well as the contrast in the detail, or sharpness.

So, cinematographers now have the ability to change the way the camera records an image, much like they would back in the day by selecting different film stocks. I think this is a great idea as a tool as it puts control back into the hands of cinematographers and allows them to make these decisions on set, rather than having to fight for their look in the grade. 

ERGONOMICS

With all of these new features and higher resolution comes a need for more power. With that in mind, the Alexa 35 will be a completely 24V powered camera, unlike prior cameras, which could run off 12V batteries like V-locks as well as 24V power.

This will be done with their new system of B-mount batteries. I haven’t personally worked with these batteries yet, but one plus I foresee, apart from them providing a higher level of consistent power, is that they can be used by camera operators who operate with their hand on the back of the battery.

This has become a popular way to operate, particularly with a rig like an Easyrig. I always found older gold mount or V-mount batteries had a tendency to lose power and shut down the camera from time to time as the contacts shifted when operated. This should no longer be a problem with the B-mount. 

In terms of its form factor, I think this new Alexa is a great size, around the same size as the Mini LF - a little larger than the original Mini but small enough to be used for handheld and gimbal work. 

The pictures show the addition of a little menu on the operator’s side of the camera, with quick access to basic settings like frames per second, shutter, EI, ND and white balance. It kind of reminds me of old Arri film cameras that came with a little setting display screen on the operator side.

The main reason I think this will be useful is for when the camera needs to be stripped down, for Steadicam, gimbal or drone, and loses its viewfinder which has the main menu access. On the old cameras if you needed to change settings, you’d have to awkwardly plug in the eyepiece, and wait for it to power up before you could do so, or do it through the Arri app on a phone which can be buggy. This new menu should save time in those scenarios.

Other than that they’ve added some extra user buttons, which reminds me of the Amira a bit and perhaps is intended for quicker use in documentary situations. The new camera comes with a bunch of re-designed components, with the intention of making it a small but versatile camera that can be built into light or studio setups.

Finally, one criticism I have is that like the Mini LF, the Alexa 35 only has 3 different stops of internal ND, a 0.6, 1.2 and 1.8. I’m surprised they didn’t try to add more stops to compete with Sony’s Venice that has 8 different stops of internal ND filters from 0.3 to 2.4. I know cinematographers who like shooting on the Venice almost entirely for the ease and speed that having all the internal NDs you could need provides.

Gray Kotze

What A Steadicam Operator Does On Set: Crew Breakdown

In this Crew Breakdown video, let’s take a look at the Steadicam Operator and go over what their role is, what their average day on set looks like, and a couple tips that they use to be the best in their field.

INTRODUCTION

A long time ago, in a world far before low cost gimbals were a thing, there were only a handful of options when it came to moving cameras with a cinematic stability.

You could put a camera on a dolly. You could put a camera on a crane. Both are great options, but what if you wanted to do this shot? How do you chase a character over uneven ground, through twists and turns, at a low angle, for an extended, stabilised take?

The answer was with a piece of stabilising equipment invented by Garrett Brown, called the Steadicam, that could attach a camera to an operator, giving filmmakers the mobility of a handheld camera combined with a cinematic stability.

This created the new crew position on a film set of Steadicam Operator. So, in this Crew Breakdown video, let’s go over what their role is, what their average day on set looks like, and a couple tips that they use to be the best in their field.   

ROLE

“I liked handheld. I did not like the way it looked - then or now. And so what I needed was a way to disconnect the camera from the person.” - Garrett Brown, Steadicam Inventor

Before going over what the role of the Steadicam operator is, let’s take a basic look at how a Steadicam works. 

A Steadicam is basically a perfectly balanced, weighted gimbal attached to the camera operator’s body that isolates the camera from the operator’s movement. This allows the camera to be moved around precisely with smooth, stabilised motion.

It can be broken down into three basic sections: the vest, the arm and the sled. The sled includes a flat top stage which the camera sits on, and a post which connects it to the bottom section that holds a monitor mount and a battery base.

The top stage with camera and the bottom stage with the monitor and the batteries are positioned so the weight of the camera is counterbalanced and even. Like balancing a sword on a finger.

Having two ends which are perfectly balanced both adds weight, and therefore more stability to the rig, and puts the centre of gravity exactly at the operator’s grip, so that they can use their hand to adjust how the camera moves with delicate adjustments. 

This hefty weight is supported by a gimbal attached to the post, which attaches to an arm, which then attaches to a vest worn by the operator. The rig’s substantial weight, perfect balance and gimbal allows the operator to manoeuvre the camera around with a floating stability using the motion of their body and deft touches with their grip. 

A Steadicam is therefore a great option to move a camera through tight spaces, over uneven terrain, or do flowing, 360 degrees of movement around actors in long takes. 

It’s generally seen as providing more organic motion, and as allowing precise hard stops better than 3-axis gimbals, which have a drifting motion to them before they come to a resting stop.

The role of the Steadicam operator is an interesting one, as it requires both a deep technical knowledge and proficiency as well as a creative flair and theoretical knowledge on how to move the camera and frame shots to tell a story.

Sometimes, but not always, a Steadicam op will also work as the primary camera operator (or the B-camera operator), operating shots from a tripod head, wheels and performing any Steadicam shots that are required.

Their job includes helping to build and balance the camera on the Steadicam rig, discussing a shot with the DP and director and then executing it - often adjusting between takes until the perfect take is in the can.          

AVERAGE DAY ON SET

Before the shoot begins, the Steadicam operator will show up to the gear check at the rental house where the camera team tests and assembles the gear. As different jobs will use different cameras and lenses, which come in different weights and sizes, it’s crucial that the camera is properly built and balanced during testing.

Nothing would be worse than building the camera on the day, without a gear check, only to realise that the lens is too front heavy to balance on the Steadicam. 

On the day of shooting, the operator will grab a shooting schedule or communicate with the 1st AD to determine what Steadicam shots need to be done and therefore when the camera needs to be built for Steadicam. Sometimes most of the day can be spent doing Steadicam, but usually it will only be reserved for a few shots, in different scenes or setups, spread throughout the day.

If there is a particularly tricky shot, or a choreographed long take that has been pre-planned, the operator may meet with the DP during pre-production, prior to shooting, and walk through the shot to work out how best to pull it off.

When it comes time for Steadicam, the first thing to be done is to build the camera. This is done by the 1st AC or focus puller who will strip the camera of excess weight, configure the necessary accessories, such as the transmitter or focus motors in the same place as they did during the gear check and attach the Steadicam’s sliding base plate to the bottom of the camera.

It is then handed off to the operator who will slide the camera onto the top stage and test it to make sure it is properly balanced on the gimbal. They’ll then throw on the vest, go up with the camera and run through a rehearsal or a rough blocking with the director, actors and DP to work out the movement. 

When they’re ready they’ll go for a take. The director and DP will watch a feed of the image transmitted to a monitor and give feedback on things like the speed of the motion or the framing, or suggest a new movement.

The camera team will often hand a wireless iris control to the DP, that they can then use to change the aperture on the lens remotely if there are any changes in light.

Between takes when the camera isn’t needed, the operator will take the weight of the Steadicam off by placing it on a stand.

This is the core of their job. However, since the requirements of different shots can vary hugely depending on the situation, each shot may offer a different challenge when it comes to operating. Sometimes this may be the physical challenge of operating a heavy setup, other times it may be a matter of synchronising the timing of the movement with the actor and focus puller or the shot itself may require particularly nimble operation. 

The Steadicam operator has to be able to coolly and calmly adapt to each situation, providing the creative team with the kind of shot they imagine under the pressure of a time limit.

TIPS

To become a Steadicam Operator you can’t just show up on set and learn as you go. The reason it is such a niche profession is that it takes lots of training, knowledge, practice and experience to be hired for high end film jobs. 

It’s also expensive. 

Typically, Steadicam operators buy their own Steadicam, which is a pricey piece of gear, attend Steadicam workshops where they are trained to operate it, and are then able to rent their expertise and their rig out to productions.

In recent years Arri also introduced the Trinity, which is similar to a Steadicam but adds a 3-axis camera stabiliser that allows the camera to move on the roll axis, plus self-balancing features which allow the camera to be moved from low mode to high mode during a shot and the post to be extended for extra reach.

With a traditional Steadicam, operators need to decide before a shot begins whether to shoot in the more common high-mode, or if the camera needs to be close to the ground with the post flipped around and used in low-mode.

Another option sometimes used is to hard mount the arm of the Steadicam on a moving vehicle. The operator then sits next to the rig to operate the camera without having to hold the full weight of it. 

An early example of this was worked out by Garrett Brown on The Shining for the famous hallway tracking shots. They hard mounted the Steadicam arm to a wheelchair which could then be pushed through the hotel corridors in either high mode, or inches from the ground in low mode.   

Since a Steadicam rig with a cinema camera is extremely heavy, operators try to minimise the amount of time that they carry the rig in order to save their stamina for shooting. Any time the camera isn’t going for a take they’ll use a stand to rest the rig, or have a grip standing close by so that they can hand the post off to them as soon as cut is called. 

Communicating with the AD to make sure that the camera only goes up at the last possible moment, and isn’t waiting there for ages while make-up does final checks and the director stands in to give notes, is another good way of minimising time holding the rig. 

Because the camera is balanced so finely, big gusts of wind can shake it and cause turbulence. Therefore it’s good to make sure the grip department is carrying a ‘wind block’. This is a sheet of mesh material attached to a frame, held by grips between the source of the wind and the camera in order to minimise turbulence.

Another crew member the Steadicam operator needs to communicate with is the focus puller. Since on an average shoot day the camera will usually need to alternate between studio builds and Steadicam builds, the 1st AC and the Steadicam operator should come up with the easiest possible method of changing between these configurations, to save the production the most time. Because, on a film set more than anywhere else, time is money.

Gray Kotze

How The French New Wave Changed Filmmaking Forever

Out of all of the film movements I’d say one of the most influential of them was the French New Wave. In this video I’ll outline four things from this film movement that are still present in how movies are made and thought about today, which were responsible for altering the course of filmmaking forever.

INTRO

“He immediately talked about, kind of, the French New Wave portrait of youth.” - Greta Gerwig

“The beginning of Jules and Jim, the first three or four minutes influence the style of Goodfellas and Casino and Wolf of Wall Street and so many.” - Martin Scorsese

“Godard was so influential to me at the beginning of my aesthetic as a director, of, like, wanting to be a director.” - Quentin Tarantino

Throughout the decades, there have been many defining film movements in cinema. Some have had a longer lasting impact than others. Out of all of them I’d say one of the most influential of these movements was the French New Wave, which took place from the late 50s to the late 60s. Its impact can still be seen to this day.

During this time various directors emerged who made films that could broadly be classified by their similar philosophy and approach towards experimentation and style. 

Many of these directors began their careers as film critics and cinephiles who wrote for the magazine Cahiers du Cinéma where they rejected mainstream cinema and came up with a sort of film manifesto that encouraged experimentation and innovation. 

In this video I’ll outline four things from this film movement that are still present in how movies are made and thought about today, which were responsible for altering the course of filmmaking forever.

AUTEUR THEORY

“An Inquisition-like regime ruled over French cinema. Everything was compartmentalised. This movie was made as a reaction against everything that wasn’t done. It was almost pathological or systematic. ‘A wide-angle lens isn’t used for a close up? Then let’s do it.’ ‘A handheld camera isn’t used for tracking shots? Then let’s do it.’” - Jean-Luc Godard

In 1954 director François Truffaut wrote an article for Cahiers du Cinéma called ‘A Certain Tendency of the French Cinema’, wherein he described his dissatisfaction with the safe, traditional and unimaginative way literary works were being adapted and filmed. 

Up until then movies were largely credited to the actors who starred in them, or to the studios and producers involved in their funding and creation.

Instead, the cinema of the French New Wave put forward the idea that the real ‘author’ or ‘auteur’ of a movie should be the director. They should be the primary creative driving force behind each project by creating a visual style or aesthetic specific to them. The themes, tone and overall feeling of their films should also be consistent and identifiable across their body of work. 

If you could glance at a film and immediately tell who the director behind it was - that was a sign it was created by an auteur. 

A film by Quentin Tarantino will have ensemble casts, non-linear storylines, chapter divides, mixed genre conventions and pay homage to the history of cinema.

A film by Wes Anderson will have fast-paced comedy, childhood loss, symmetrical compositions, consistent colour palettes and highly stylised art direction. 

This idea was revolutionary as it encouraged directors to tell stories through their own distinctive voice, rather than acting as craftsmen that followed the same rules and chiselled out each film the same way for a studio.

All it takes is watching a few trailers or the credits in a film to tell that auteur theory is still alive and well. Many movies use the name of the director as a selling point, even more so than the actors in some cases.

If we turn to short form filmmaking, a huge number of directors of commercials or music videos get hired by clients and agencies because they want their film told in a specific style associated with that director. 

You hire The Blaze to direct if you want a character-focused, wildly energetic, passionate, personal journey told with a fluidity of movement. You hire Romain Gavras to direct if you want a carefully coordinated, composed, concept-driven set piece.

But this French New Wave idea of the director as an auteur is just the first thing that had an undeniable impact on how cinema today is created.  

  

LOW BUDGET

“I really like Bande à part. In particular it really kinda grabbed me. But one of the things that really grabbed me was that I felt I almost could have done that. I could’ve attached a camera to the back of a convertible and drive around Venice Boulevard if I wanted to.” - Quentin Tarantino

In their financially riskier pursuit to break free from the constraints of the traditional mould of French cinema and create their own inventive styles as auteurs, many French New Wave directors had to work with low budgets.

This was also influenced by the financial constraints of post-World War Two France.  

Rather than seeing it as a disadvantage, a lot of the movies that came out of this period used their lack of resources to break conventional rules and form their own style - which we’ll get into more a bit later.

They took some cues from the Italian Neorealist movement that preceded it, which cut costs by shooting on location and working with non-professional actors in rural areas.

Likewise, many French New Wave films worked on location, with a bare bones approach to lighting and homemade, DIY camera rigs. This allowed them to work quickly, unencumbered by large crews and introduced a more on-the-ground aesthetic to the filmmaking. 

This further democratised filmmaking and made it more accessible than ever before. It showed that big studios were not always needed to produce great cinema.

This democratisation of filmmaking expanded further throughout the years, until it exploded even more with the introduction of low budget digital cinema cameras. 

There’s a reason that many low budget indie films today still use French New Wave films from this period as a primary reference and inspiration for, not only what is possible to achieve with limited resources, but also the kind of look and style that comes with it.  

VISUAL STYLE

“All these films had been very different of what had been French cinema. What was in common was to use a lot of natural light, sometimes use non actors, natural sets, a sort of speed in the inspiration and the work. That is what was in common.” - Agnes Varda 

What emerged from this rejection of cinematic tradition in a low budget environment was a burst of films that broke existing filmmaking ‘rules’ and had a vigorously experimental style.

Part of this was informed by a documentary-esque approach to cinematography that freed the actors up to move and improvise. Like documentaries, these films were largely shot at real locations, relied mostly on natural light (which allowed them to shoot 360 degrees in a space), used a reactive, handheld camera and sometimes employed non-professional actors who were encouraged to improvise dialogue, blocking and actions.

All this went against the more formal conventions that were previously expected of traditional studio films that were shot in studio sets, off a rigid dolly, with perfect, artificial lighting and precise blocking of a pre-approved screenplay.

In this way the French New Wave paved a path that made it OK for future filmmakers to work in a rougher, more naturalistic style and broke down the very notion that cinematography needs to conform to specific rules.


EXPERIMENTATION

“I think a lot of it has to do with the relentlessness of the voice over and the rapid speech and also the pace of the music under it. It feels like there’s a sense of freedom. Anything could happen at any moment… Narrative is completely fractured I think.” - Martin Scorsese

French New Wave directors saw exciting possibilities for using film as a medium - more like painters or novelists did - which could not only be used to tell stories but also to translate their thoughts or ideas by experimenting with form and style. 

Much of this was done in the edit.

Whereas older films may have used a traditional, linear story, various scenes and exposition to unpack characters, films like Jules and Jim used voice over, fast paced music and snappy editing to immediately introduce characters and their relationships in a more fractured way that compressed time into a montage.

Directors like Godard broke down the medium even further into a self-conscious, postmodern vision by having characters literally break the fourth wall and talk directly into the camera, face to face with the audience.

Instead of attempting to suspend disbelief, Godard made his audience very aware that what they were watching was something constructed by an artist. 

Breathless also went against a universal rule of cinema and used jump cuts, a technique which cuts forward in time within the same shot, without changing the angle or shot size. The effect is an abrasive ‘jump’ forward in time. 

This technique influenced future filmmakers by tearing down the idea that the rules of cinema should be strictly followed. This postmodernism that was pushed by the French New Wave has now seeped into every kind of contemporary visual art - including how many YouTube videos are now edited.


Do Cinematographers Like Lens Flares? Textured vs Clean Images Explained

When it comes to the question of whether clean or textured images should be favoured, cinematographers are generally split into two different camps.

INTRO

“I can’t stand flares. I find any artefact that is on the surface of the image a distraction for me. The audience or I’m then aware that I’m looking at something that is being recorded with a camera.” - Roger Deakins, Cinematographer

“If the light shone in the lens and flared the lens that was considered a mistake. I feel particularly involved in making mistakes feel acceptable by using them. Not by mistakes or anything but by endeavour.” - Conrad Hall, Cinematographer

When it comes to the question of whether clean or textured images should be favoured, cinematographers are generally split into two different camps. Some see their goal as being to create the most pristine, cinematically perfect visuals possible, while others like to degrade the image and break it down with light and camera tricks.

Before we discuss the pros and cons of clean and textured images, we need to understand some of the techniques used by cinematographers that affect the quality of how an image is captured. Then I’ll get into the case that can be made for clean images and the case that can be made for textured images and see which side of the fence you land on in the debate.   

 

WHAT MAKES AN IMAGE CLEAN OR TEXTURED

When cinematographers talk about shooting something that looks clean, they are referring to an image which has the subject in sharp focus and is devoid of any excess optical aberrations, video noise, grain or softening of the highlights or bright parts of the frame. Some cinematographers, however, like to introduce different kinds of texture by deliberately ‘messing it up’. 

The most easily identifiable optical imperfection is the lens flare. This happens when hard light directly enters the open glass section at the front of a lens and bounces around inside the barrel, off the different pieces of glass, which are called elements. 

So to get a lens flare, cinematographers use a backlight, placed directly behind a subject or at an angle, that shines straight into the lens. A common way of doing this is to use the sun as a backlight and point the camera directly at it.

In the past, flares were often seen as undesirable so a few tools were introduced to get rid of them. To prevent a flare you need to block the path of any hard light that hits the lens directly. A mattebox is used not only to hold filters but also to block or flag light from hitting the front element. A top flap and sides can be added to a mattebox to cut light, as can a hard matte - which clips inside the mattebox and comes in different sizes which can be swapped out depending on how wide the lens is.

If a shot is stationary and the camera doesn’t move, the lighting team can also erect a black flag on a stand to cut light from reaching the lens.  

On the other hand, a trick some use to artificially introduce a flare when there isn’t a strong backlight is to take a torch or a small sourcy light like a dedo and hit the lens with it from just out of shot.   

Different kinds of lenses produce different kinds of flares, which are determined by the shape of their glass elements, the number of blades that make up the aperture at the back of the lens and the way in which the glass is coated. Standard, spherical lenses have curved, circular elements that produce round flares that expand or contract as the light source changes its angle. 

Anamorphic lenses are made up of regular spherical glass with added cylindrical elements that horizontally squeeze a wider field of view into the frame. The image is then de-squeezed to get a widescreen aspect ratio. 

Because of this, anamorphic lenses produce a horizontal flare that streaks across the frame. The Panavision C-Series of anamorphic lenses are famous for producing a blue anamorphic lens streak which is associated with many high end Hollywood films. 
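Since the squeeze is just a multiplication of the aspect ratio, the de-squeeze arithmetic can be sketched in a few lines. This is a minimal illustration in Python, using common example figures (a 4:3 capture area and 2x/1.33x squeeze factors) rather than the specs of any particular lens:

```python
# De-squeezing an anamorphic image: the lens optically compresses a wider
# horizontal field of view into the frame, so un-squeezing in post
# multiplies the captured aspect ratio by the squeeze factor.

def desqueezed_aspect(sensor_aspect: float, squeeze: float) -> float:
    """Delivered aspect ratio after de-squeeze (assuming no cropping)."""
    return sensor_aspect * squeeze

# A 4:3 capture area through a classic 2x anamorphic yields
# (4/3) * 2 ≈ 2.67:1, which is then cropped to the 2.39:1 standard.
print(round(desqueezed_aspect(4 / 3, 2.0), 2))    # 2.67
# A 16:9 sensor with a milder 1.33x squeeze lands near 2.36:1.
print(round(desqueezed_aspect(16 / 9, 1.33), 2))  # 2.36
```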

The glass elements inside a lens have different types of coatings. Modern coatings are used to decrease artefacts and limit flooding the image with a haze when the lens flares.

As technology has improved, these coatings have gotten progressively better at this, so more modern lenses produce a ‘cleaner’ image. One way that cinematographers who like optical texture get around this is to use vintage lenses, whose older coatings don’t suppress flares as much and let the highlights bloom into a subtle, angelic haze. You can even get uncoated lenses for those who really want to push that vintage look.

Another option to soften up an image a bit is to use diffusion filters. These are pieces of glass that are placed inside a mattebox and create various softening effects, such as decreasing the sharpness of the image, making the highlights bloom and softening skin tones. 

Some examples of these filters include Black Pro-Mists, Glimmer Glass, Pearlescents, Black Satins, Soft FX filters - the list goes on. They come in different strengths, with lower values, such as an eighth providing a subtle softness and higher values providing a heavy diffusion.

Some cinematographers even go more extreme by using their finger to deliberately smudge or dirty up the front of a filter. 

A final way of introducing texture to an image is with grain. This can be done either by shooting on a more sensitive film stock, like 500ASA and push processing it, by increasing the ISO or EI on the camera, or by adding a film grain effect during the colour grade in post production. 

THE CASE FOR TEXTURED IMAGES

“What lenses? Should it be sharp? Should it have flaws? Should it have interesting flares? I always try to be open to everything.” - Linus Sandgren, Cinematographer

Now that I’ve listed all the ways that an image can be messed up by cinematographers, let’s go over some reasons why anyone would actually want to do this in the first place.

Up until about the 1960s or 1970s, the idea of intentionally degrading how an image was captured wasn’t really prevalent. However, movements like the French New Wave and New Hollywood rebelled against capturing a perfect, polished representation of each story and intentionally embraced imperfections like flares. 

Producing optical mistakes from a more on the ground camera created an authenticity and grittiness to the images in a similar way that many documentaries did.

In different contexts, optical aberrations, like lens flares, have been used to introduce different tonal or atmospheric ideas. For example, Conrad Hall went against the Hollywood conventions of the time and embraced flares on Cool Hand Luke to create a sense of heat from the sun and inject a physical warmth into the image that reflected the setting of the story.

Some filmmakers like deliberately using lower gauge film such as 16mm or even 8mm to produce a noisy, textured image. Often this is perceived as feeling more organic and a good fit for rougher, handheld films.

Textured images with a shallow depth of field also feel a bit dreamier, and can therefore be a good tool for representing more experimental moments in a story or to portray a moment that happened in the past as a memory.

Since the digital revolution, many DPs have taken to using diffusion filters and vintage lenses on modern digital cinema cameras - to balance out the image so that it doesn’t feel overly sharp. 

Degrading the image of the Alexa by shooting at a higher EI, like 1,600, shooting on lenses from the 1970s, or using a ⅛ or a ¼ Black Pro-Mist filter are all ways of putting back into the image the organic texture that shooting on film naturally provided.

THE CASE FOR CLEAN IMAGES

“Digital cameras were able to give us a beautiful, very clean, immersive image that we were very keen on… It almost translates 100% what you are feeling when you are in the location.” - Emmanuel Lubezki, Cinematographer

On the flipside, some DPs seek a supremely clean look that pairs sharp, modern glass with high resolution digital cameras.

One reason for this is that clean images better transport the audience directly into the real world, and present images in the same way that our eyes naturally see things. Clean images are regularly paired with a vision that needs to feel realistic.  

These cinematographers see any excess grain or aberrations as a distraction that pulls an audience out of a story and makes them aware that what they are seeing isn’t reality and is rather a visual construction.

When light flares across a lens it’s an indication that the image was captured by a camera and may disrupt the illusion of reality.

Sometimes filmmakers also want to lean into a clean, sharp, digital look for the story. It’s like choosing to observe the world directly, in sharp focus, rather than through a hazy, fogged up window.


Cinematography Style: Ari Wegner

Ari Wegner's cinematography isn't tied down to one particular look, and is rather based on a careful and deeply thought out visual style that uses informed creative decisions to present a look that is tailor made for each individual story or script.

INTRODUCTION

“I think that’s the question for any film. How do you get the energy of the script or the idea into it visually? Every film is different and every scene is different but if you know what your aspiration is to do that then you can think of some ideas of how to achieve that.”

In this series I’ve talked before about how some cinematographers like to create a look that is fairly consistent across much of their work, while others distance themselves from one style and mould the form of their cinematography depending on the script or director that they are working with.

Ari Wegner very much falls into the latter category. The films that she shoots are never tied down to one particular look, and are rather based on a careful and deeply thought out visual style that uses informed creative decisions to present a look that is tailor made for each individual story or script.

In this video I’ll unpack this further by diving into the philosophy behind her photography and showing some of the gear that she uses to execute those ideas.      

BACKGROUND

Growing up in Melbourne with parents who were both artistically inclined filled her with an appreciation for the arts and creative thought from an early age. 

Her desire to work in film was sparked by her media teacher exposing her to short films, notably one by Jane Campion. She then changed her focus from photography to cinematography.

After graduating from film school she spent years shooting local independent films and documentaries, before breaking out by photographing Lady Macbeth, which screened at numerous festivals.

Some of the directors she’s worked with include: Janicza Bravo, Jane Campion and Justin Kurzel.

PHILOSOPHY

Something which is key to formulating the carefully considered look of each film is the time that she takes to prepare. Usually about four to six weeks are scheduled for pre-production on a movie, and a cinematographer may only be present for a fraction of that. However, on Jane Campion’s The Power of the Dog, an entire year was dedicated to pre-production, with Wegner present right from the get go.

“When Jane first called me to ask if I’d be interested, one of the things that she had as a condition was that she wanted someone to start straight away. I often feel like there’s not enough time to prepare or that there’s a rush. Once everyone’s in the same place the decision can feel very pressured. So, Jane wanted this film to feel relaxed and not have to be rushed into any decision.”

This luxuriously long prep allowed them to take their time and gradually and deliberately develop the look of the film. 

Together they scouted potential shooting locations in New Zealand, learnt about the land and foliage in the area, calculated what time of year would be best to capture the kind of dry-brown look they wanted and found a mountain range that felt mythical yet humble enough to believably be on a family ranch.

Campion and Wegner also set aside four to six weeks just to fully storyboard the script. They spent time drawing out the shots that they imagined, then took those ideas and drawings to the physical set that was being constructed and walked through them to finalise the storyboard. 

All this consideration and prep resulted in a cinematic style that was down to earth, observational, natural and unadorned. 

Part of this came from the colour palette, which roots the look of a lot of her films: whether it’s the vivid, saturated colours in Zola, the warm nights and cool days in The True History Of The Kelly Gang, or the muted, bone-dry natural palette of The Power of the Dog.

Part of the job she takes on is getting different scripts, stories and ideas and applying creative thought to them in collaboration with the director in order to create a visual style that is suitable for each project.

“We started having some big conversations about colour. Myself, the production designer, the costume designer, about how we were going to unify the whole world. Making a film set in Montana in the 1920s in New Zealand. We decided a strict colour palette would be a good part of the way to do that.”

GEAR

Like her approach to the colour palette of each film, Wegner’s lighting and selection of camera gear is based on coming up with solid cinematic ideas that represent the story.

To convey the shift in mood of the character that happens in Zola, she used a subtle shift in lighting and camera gear selection. As the relationship between the two characters shifted from an infatuated, trusting, light, fun one to a distrustful, darker, unhappy one, Wegner subtly introduced more shadows, colour clashes, contrast and darkness into the image. 

She did this with low key lighting and also by shooting the initial scenes using a low-con filter which subtly faded out the blacks and made the highlights blossom into the mid tones.

To dial in specific colour temperatures and hues into the lighting, she used various LED fixtures, such as: Arri SkyPanels, Astera Tubes and Litemats. She likes using LED lighting as well as practical fixtures that can be seen in the frame - whether that’s from a natural source like a fire or an artificial source like a lamp.

Sometimes she takes the edge off the quality of artificial or natural light by placing diffusion gels in front of the light source to soften the shadows that appear on a face.

Another example of how she uses light to represent an idea can be seen in The Power of the Dog. Interiors are often well lit and act as a safe haven for characters; in this film, however, she introduced large amounts of shadow inside the farmhouse to make the space feel more ominous and unsafe, pushing hard light through windows and backlighting characters so that they appeared in silhouette.

She has shot on 16mm film before, on Zola, where the director requested it to give the film a magical quality that accentuated its highly unusual, almost dreamlike story. She used Kodak 250D for the sunny exteriors and the more sensitive 500T 7219 for interiors and night scenes.

From looking at some behind the scenes photos, it seems that she uses a digital stills camera to meter for film, taking pictures with the same exposure settings as the 16mm cinema camera until she finds an exposure level that she is happy with.
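The matching that this method relies on is standard exposure arithmetic: a film camera’s exposure time per frame is set by its frame rate and shutter angle. Here is a quick sketch of that conversion in Python (the 24 fps and 180-degree figures are the common defaults, not her documented settings):

```python
# A rotary-shutter film camera exposes each frame for:
#   exposure time = (shutter angle / 360) / frames per second

def cine_shutter_seconds(fps: float, shutter_angle: float) -> float:
    """Exposure time per frame for a rotary-shutter film camera."""
    return (shutter_angle / 360.0) / fps

# 24 fps with a standard 180-degree shutter exposes each frame for 1/48 s,
# so a stills camera set near 1/50 s at the same stop and ISO will give a
# near-identical exposure reading.
t = cine_shutter_seconds(24, 180)
print(f"1/{round(1 / t)} s")  # prints "1/48 s"
```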

Usually she favours shooting digitally on variations of the Arri Alexa, such as the Mini, the XT or the Mini LF. She’ll often pair these cameras with moderate, vintage glass that breaks down the digital sharpness with a very slight optical softness.

Some examples of lenses she’s used include: Zeiss Super Speeds, Bausch & Lomb Super Baltars, Panavision Auto Panatar 1.3x Anamorphics and zoom lenses from Angenieux and the Arri Alura range.

On The Power of the Dog she used zooms to shoot some of the wide shots at long focal lengths. This compressed the background and made it feel like it was closer to the characters. 

Once again, I’d describe the gear that she selects as being deliberately chosen on an individual basis for each story: sometimes spherical glass, sometimes anamorphic glass, sometimes film, sometimes digital, sometimes a Tiffen 812 warming filter or a low-con in front of the lens, and sometimes shooting clean.   

Rather than creating and shooting with a custom LUT like many other DPs do nowadays, Wegner likes to work and expose the camera using Arri’s standard Rec 709 LUT as a reference. This is a basic, natural look that is applied to the flat log or RAW footage. She’ll then create the final look for the film along with the director during the colour grade.      

“I love the idea of LUTs and then every time I try to make one it kind of scares me. Usually I shoot REC 709 a lot. Knowing at the end of the day with the dailies colourist you can take it to a nice place for the editors. With REC 709 I know it very well. It’s not pretty but it’s reliable.” 

CONCLUSION

Wegner’s methodology always uses the story as a jumping off point. 

After much consideration she then comes up with a visual look that will support the kind of ideas that are present in the story. 

Whether that’s using central framing in Lady Macbeth to show how trapped her character is, using her knowledge of lighting to introduce both a physical and a psychological darkness into a space, or reducing the contrast of the shadows in an image with a filter to brighten the mood.

Each film she shoots offers up an opportunity to do something new. 

Whatever photographic tool she chooses, it’s always deliberately thought out, planned and then executed to create visuals that subtly represent ideas and emotions that form her point of view on the movie.


The Advantages Of Low Budget Filmmaking: Red Rocket

Director Sean Baker is a prime example of a filmmaker who has directed many features, has been to all the festivals, got all the acclaim and yet chose to reside and make films in the low budget range. So let’s look at his latest movie, Red Rocket, through this lens and examine what some of the advantages are to producing low budget indie films.

Making it to the top and directing the biggest films in the world may be the ultimate goal for some, but there are definitely pros to working in the more realistic, lower budget film world of indie films.

Director Sean Baker is a prime example of a filmmaker who has directed many features, has been to all the festivals, got all the acclaim and yet chose to reside and make films in the low budget range. So let’s look at his latest movie, Red Rocket, through this lens and examine what some of the advantages are to producing low budget indie films.

“That’s the big thing that I’m always conflicted about. I’m like, should I make my life easier and make a film for a studio or go to a series. It’s definitely tempting because of monetary purposes but it’s more than that. I want to tell personal stories. Films take a long time, you put all of your energy, all of your heart into them, so why not make the movie you want to make.” - Sean Baker

It’s important to remember that what allows films to be made in the first place is financing. Like with any investment, when individuals, or in this case a production company, put money down, they need to balance how much money is allocated against the risk of making that money back.

If a studio is throwing hundreds of millions of dollars into a movie, they need to keep their risk tolerance low by ticking off a list of items that have been proven to increase ticket sales and make money back. 

For example, choosing a story about a superhero with an existing fanbase, casting celebrity actors and doing test screenings of different edits to make sure that the movie is understood by and entertains the largest possible group of people, are known money printers and ways of decreasing the film’s risk of financially failing.

If instead you want to make a movie about a niche subculture, without any A-list celebrity actors and be given complete creative freedom, you need to accept that it’s a higher risk project for investors and will therefore be allocated a low budget, of say one point two million dollars, which is a much lower financial target to make back.

Red Rocket falls into the latter category. It looks at a niche character archetype: an adult film star who uses and recruits women into that world. 

He cast the film through a combination of street casting, a theatre actor, first time actors and a lead found through his Vine page, and even gave one role to his producer slash continuity supervisor slash costume designer.

Operating at a low budget level gave Baker the freedom to make the kind of film that he wanted to - which was true to his vision.

Some may see it as a disadvantage, but I think another advantage that low budgets have is that they allow you to work with a small crew.

“A four person camera crew pulled off those images. You had Drew Daniels, you had a 1st AC, a 2nd AC and a gaffer/grip. Then you had a one man sound team and then you had my sister who is the production designer on the film. The other four were just producers wearing many many hats” - Sean Baker

This crew of just ten people was able to move quickly and efficiently, pulling off the feature on a tiny 23-day shooting schedule. While having more money, more crew and more gear grants filmmakers more control, it also takes far longer to set up shots and move to new locations.

Drew Daniels shot the movie on 16mm film on Sean Baker’s own Arri SR3, which was paired with two interesting 1.44x Auto Panatar lenses from Panavision: a 16mm and a 50mm. 

These anamorphic lenses allowed them to shoot in a widescreen aspect ratio on 16mm and created a unique look that combined a Hollywood anamorphic grandeur with the more grainy, organic, low-fi look of 16mm Kodak film.
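As a rough sanity check of why that combination yields widescreen (my own back-of-the-envelope figures, not production specs): a Super 16 frame is roughly 1.66:1, and multiplying by the 1.44x squeeze lands almost exactly on the 2.39:1 scope standard.

```python
# De-squeezing multiplies the native aspect ratio by the squeeze factor.
super16_aspect = 1.66  # approximate native Super 16 frame ratio
squeeze = 1.44         # Panavision 1.44x Auto Panatar

print(round(super16_aspect * squeeze, 2))  # 2.39 - scope widescreen
```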

Daniels’ camera crew was super bare bones, with a first AC to pull focus, a second AC to load the film and do the boards, and one gaffer slash grip to set up lights, rigs and any camera moves they needed. Having a smaller crew also created more of a family atmosphere that put the performers, especially first time actors, more at ease.

“My incredible actors go in front of a camera and they’ll try something and if it doesn’t work, who cares, it doesn’t work. Let’s go for an alt take and try something else. Getting everybody in that place where everybody is comfortable and feels safe. Red Rocket was perfect for that because it was a small ten person crew, tiny, we were a pod, we were very isolated. It just allowed for that. I think we just embraced that spontaneity. We were saying there’s improv in front of the camera so why can’t there be improv behind the camera.” - Sean Baker

Working with a small, flexible camera crew frees filmmakers up and allows them to go with the flow far more than on high budget productions where everything tends to be shot listed, storyboarded, pre-visualised, pre-lit and then executed as per the plan on the day of shooting.

As a general rule, the larger the setups are, the more budget is required, which in turn locks filmmakers into pre-planning. Some like working this way, but other filmmakers enjoy working with fewer restraints, where it’s possible to add scenes, improvise dialogue or change shots based on unexpected happy accidents.

Many of the exterior scenes in the film were also shot without location permits in the small town. Having a low budget and a small production footprint allowed them to get away with it.

On Red Rocket, Baker encouraged his actors to improvise and explore at times. To find dialogue, blocking or actions that felt more natural and aligned with what their characters would do.

The best low budget films are usually those that lean into their limitations and create a tone and look that plays to their strengths. There’s a certain charm to films shot in a more intimate, on the ground, run and gun style, which adds a touch of realism and puts the audience in the characters’ shoes. 

Filmmakers also have to think more carefully about what they can and can’t include in the story. If there’s a big car crash in the story and you don’t have the financial resources to shoot it, it forces you into communicating that information in creative ways.

More time, more money and more gear gives you more control, but sometimes accepting a lack of control can actually improve a film.

“I realised I couldn’t overcome it. So, there was an acceptance during pre-production that I was going to have to accept all of these freakin limitations. We realised we do not have the money and we do not have the time to throw at problems, so instead of tackling those problems we’ll pivot and go in another direction. So every day even though there was a problem every three hours there was also a miracle every three hours.” - Sean Baker

An example of this happened during the proposal scene. They knew that the local train passed through once a day and therefore only had one chance to get it in the background of the shot and only 20 minutes to set it up. 

As they rolled the camera, the conductor started blowing his horn at a moment that perfectly complemented the dialogue, resulting in a magically chaotic scene that couldn’t have been scripted.

Red Rocket shows that low budget filmmaking has many advantages. It allows the director more freedom to make the kind of film they want to make while working with a small, efficient crew that offers possibilities for improvisation, run and gun flexibility and finding magic moments.


Ultimately, it’s also a case study in how important it is to set up a lifestyle for yourself as a filmmaker which is both financially viable and creatively rewarding. 

Baker balances the financial side of making low budget films by also working on commercials on the side. Directing just one commercial a year is lucrative and finances his lifestyle and expenses while working on the lower budget films that he has complete creative control over.

Balancing the kind of creative and financial life that you want is something that everyone in the creative industry has to contend with. It’s down to each individual as to where they fall on the scale and how they plan their career to try and land in the space that they want to.

Gray Kotze

Why Most Movies Are Shot On Arri Cameras

There is one particular brand of digital cinema camera that is far and away the most selected when it comes to high end productions. Let's take a look at why cinematographers choose to shoot on the Arri Alexa.

INTRODUCTION

“I think digital cameras…they’re all tools. It depends on the project. You choose a different camera like you used to choose a different film stock.” - Roger Deakins, Cinematographer

You hear cinematographers claim all the time that a camera is just a tool, one of many that can be selected from their cinematic toolbox. However, if we look at the statistics, there is one particular brand of digital cinema camera that is far and away the most selected.

From the 2022 Best Cinematography Oscar nominees, four out of five productions used Arri digital cameras. Out of the Best Picture nominees that number was six out of ten. 

You may think that this is just coincidence and we need a larger data sample size. Well then, from the 2021 Best Cinematography nominations four out of five used Arri. And the 2021 Best Picture nominees? Five out of eight.

If you keep going back, a clear pattern emerges. Most films these days are shot on Arri digital cameras. So, based on my own experience of working in the industry with these cinema cameras, I’ll explain the four main reasons why most cinematographers on high-end productions choose to shoot on the Arri Alexa.

HISTORY

“The Alexa is my digital camera of choice. It has been since it came out.” - Ben Davis, Cinematographer

Someone might say it’s as simple as Arri cameras producing the best looking image. But there’s more to it than that.

To understand why Arri’s digital cameras are so popular we need to understand how the movie industry operated before digital when all productions were shot on film.

Due to the prohibitively high cost of film gear and cameras, production companies rented equipment on a daily or weekly basis rather than buying it. Two dominant manufacturers emerged to produce this niche rental equipment: Panavision and Arri. A key difference between them is that Arri sells their equipment to third party rental houses or individuals, while Panavision exclusively rents the gear they produce.

Each constructed their own camera system that had some differences, such as Panavision cameras using a PV lens mount and Arri cameras using a PL mount. However these cameras were all built around a standardised way of working that accepted most third party gear accessories, such as using 19mm rods to mount a mattebox. This meant that crew with different gear accessories could jump from a Panavision to an Arri system on different jobs without needing separate kits.

When digital began taking over from film, Panavision and Arri needed to come up with a digital alternative to their film cameras that could be interchangeable with existing lenses and gear accessories. 

Over the years, many working cinematographers had built relationships with these companies and had a track record of exclusively using their gear. So when film changed over to digital they naturally were drawn to what these two companies had to offer. 

Panavision produced the Genesis, which had a 35mm sized sensor, using some of Sony’s digital imaging technology. After early operational issues were fixed and the Genesis began seeing some initial use, it was quickly overshadowed by the release of Arri’s competing camera, the Alexa.

The quality of the Alexa’s image, its usability, its simple ProRes direct-to-edit workflow, and the fact that it could be bought outright by individuals and companies around the world meant that the Alexa took off, leaving the Genesis in the dust.

COMPATIBILITY


“It was kind of scary for me because…until then all my movies had been on film…Of course for me it was no doubt that if I was going digital it was going to be Alexa…I knew the Alexa would be the camera…that looked more the way I used to work with film.” - Natasha Braier, Cinematographer

When it came to this transition from film to digital cameras, Arri tried to make this leap as smooth as possible.

The Alexa was designed to be compatible with existing lenses and film equipment. Importantly, the user experience was also designed around the way that film was shot. The camera had a recommended native EI, like a film stock, and a simple menu screen based on the same few settings available on film cameras, like shutter angle.

The menu systems of competitors like the Red One were a bit more convoluted and had more requirements, such as needing to do ‘black shading’ to recalibrate the black balance of the camera.

This meant the Red’s menu was more technical, like a computer, which I think appealed less to many experienced cinematographers who were used to working on film cameras that had limited settings. The Arri menu was a far easier transition.

Initially the Red also had a more complicated RAW workflow than the Arri’s ProRes one.

Over time, Arri added more Alexa cameras to their line up featuring different body sizes and formats all based on the Alev sensor. This meant that cinematographers could choose between mini cameras, large format cameras, studio cameras, or even 65mm cameras and maintain the same Alexa look and compatibility in whatever format they needed. 

Arri accessories, such as their wireless follow focus, are also compatible with their cameras. It’s easier for camera assistants to work with both an Arri camera and Arri accessories. Kind of like having a MacBook and iPhone from Apple, rather than a MacBook and an Android phone.

Using Arri accessories on a Red is of course possible, but it limits some features such as changing settings or playing back takes remotely from the focus handset, and requires additional elements such as an R/S cable to run the camera.   

Overall Arri’s simplicity and compatibility won out amongst cinematographers making the jump from shooting on film to shooting digitally.

LOOK

“I think the Alexa at the moments is the best camera out there…I thought that the image quality just in terms of its resolution and just that tiny little bit of movement from the pixels moving or whatever. The Alexa just has a little bit of life to it and I think if you go too far the image becomes lifeless. So I like that bit of texture it has.” - Roger Deakins, Cinematographer

We now get into probably the biggest reason most cinematographers love the Alexa: its look. 

The Alexa is favoured for producing excellent, flattering skin tones, colour that feels filmic and resolving detail in a way that balances a high resolution with an organic texture.

This is due to two factors: the camera’s sensor and its image processing ability.

The Alev CMOS Bayer sensor that is found in the Alexa has a high number of photosites that balances image sharpness with a high dynamic range and low noise. It has a low pass filter that blocks artefacts and an IR and UV filter which avoids strange colour effects but leaves enough of the red spectrum intact to deliver pleasing skin tones. 

The image processing of the Alexa was designed by Arri colour scientists who had developed their ARRISCAN and ARRILASER film scanning technology and were able to render colour in a very natural way. 

While companies like Red pursued high resolutions, Arri took their time and focused largely on colour science - which to many cinematographers was, and still is, more important since most projects still get finished at a 2K resolution.

RELIABILITY

“I also bought the very first Alexa Classic you know when it came out and I go you know I’ll be fine if it’s useable for three years and it ended up being a functioning tool and I still use it…the longevity of these products has been amazing.” - Phedon Papamichael, Cinematographer

Finally, the durability and reliability of the Alexa is incredibly renowned across the industry.

As cinema cameras are designed to be rented out and used regularly over many years in the extreme outdoor conditions that movies are shot in, this is an important factor.

While most high end cinema cameras now have a high degree of reliability, during the early development of digital cinema cameras there were many horror stories of cameras breaking down. The Red One had a reputation for being temperamental and overheating, while the Alexa was a solid workhorse with incredible reliability.

Although these reliability issues with competing cameras have since been smoothed over, in the early days I think this made some people nervous to shoot on a Red and gave their cameras a bit of a stigma, as time on a film set is extremely valuable and waiting for a camera to cool down before you can reboot it wasn’t very appealing.

I’ve worked with Alexas that are many years old on beaches, in deserts, in extreme temperatures and never encountered any issues. Arri’s track record of robustness, reliability and the longevity of their cameras remains to this day.   

CONCLUSION

Overall I’d say these four factors: Arri’s historical legacy in the film industry, the compatibility and ease of use of their products, the all important look, and their reputation for reliability, are what has made them the default choice for most cinematographers working today.

So much so that, as a camera assistant, when you work on any camera other than an Alexa it’s seen as an exception to the norm. I don’t see this trend changing any time soon, especially once they release their much anticipated Super 35 4K camera.

Gray Kotze

Why Some Shots In Movies Feel Different

Ever notice how some movies totally wrap you up in the world of a character to the point of it being claustrophobic and uncomfortable, while other movies make you feel more like you’re just observing events in their lives unfold in a more detached way? Much of this tone and feeling is a result of the filmmaker’s choice of shot sizes.

INTRODUCTION

Ever notice how some movies totally wrap you up in the world of a character to the point of it being claustrophobic and uncomfortable, while other movies make you feel more like you’re just observing events in their lives unfold in a more detached way?

Much of this tone and feeling is a result of the filmmaker’s choice of shot sizes. 

To better understand the impact that different shots have on an audience I’ll first go over the basic shot sizes that are used by filmmakers and then dive into the effect that different types of shots, and how they are edited together, have on an audience.   

SHOT SIZES

Before we get into their psychology we need to understand the basics. Shot size refers to the camera’s field of view and the width of the frame relative to how a character is placed in it and how much space they occupy. 

Different shot sizes can be achieved by physically moving the camera closer or further away from the subject, or by using different focal lengths. The lower the focal length number the wider the field of view is.

So, let’s familiarise ourselves with the vocabulary that filmmakers use to refer to the width of a frame. This vocabulary helps crew members to quickly communicate their vision and is based on conventions which have been established over many years.

Starting at the widest end of the spectrum we have the extreme long shot or extreme wide shot. This is where the subject or character is totally visible and only takes up a tiny fraction of the total frame. These shots are used to convey the vastness and scope of the location or landscape of the story.

Due to this they are often used as establishing shots - the first shot that begins a scene and sets the context and broader space that the scene takes place in.

Moving in a bit we come to the long shot or wide shot. As in an extreme wide, the subject is shown from head to toe; however, the character now occupies more space in the frame. They, rather than the landscape, are now the main focus of the shot.

Wides are also commonly used as establishing shots and to show the full body actions of a character.

Next up, we push in further on the character into what is called a medium shot. This is where the bottom of the frame starts from above the waist and the top of the frame includes their head. Because we are closer to the subject we can now read their facial expression and performance more clearly, yet we are also wide enough to capture some of their upper body language and actions.    

There are also a few variations of this shot that go by different names. A medium long, three quarter or cowboy shot is slightly wider than a medium shot, usually starting from the knee or thigh region. The cowboy shot derived its name from its regular use in western films, where the slightly wider frame allowed the audience to see both the actors’ faces and the guns slung around their waists.

The close up tightly frames the face of the character so that they take up almost all the space in the shot. 

The bottom of the frame usually sits just below the chin at around the shoulder level and the top of the head is either included, or framed out - which is sometimes called giving the actor a haircut. There are many different degrees of width that a close up can be shot in, depending on how intimate the filmmaker needs the shot to feel.

Finally we can push in even closer to an extreme close up. This is a shot that is so tight that all we see is a detail or single feature of the face, such as the eyes. Extreme close ups can also be used to photograph objects that hold value to the story, such as text. 

An extreme close up also goes by the name ‘Italian shot’, due to its regular use by Sergio Leone in many of his Italian Western films.

THE EFFECTS OF DIFFERENT SHOTS

When you pick up a camera and decide to shoot something, the shot size that you choose will first and foremost be determined by what you choose to include in the frame and what you choose to leave out.

When someone asks you to take a photo of them on a phone, do you get right up in their face and take a close up, or back far away to an extreme wide shot? You probably wouldn’t do either. Because the information that you need to include is them and perhaps some of the background. Most people would take a photo with a frame somewhere in between those two extremes.

Whereas for action scenes, we tend to stick to wider shot sizes for the simple reason that we need to be able to see the overall action in order to know what is going on. And if we cut to a close up of a specific detail, chances are the filmmaker wants you to notice that piece of information. 

So, information is the one key effect that choosing a shot has. The other important consideration is the emotion or feeling that comes from framing a shot in a different way.

While there aren’t any definitive rules set in stone that apply to every single film ever made when it comes to interpreting emotion from a shot size, I think a broad rule can be generally applied.   

In life, our proximity to a person defines our emotional relationship to them. The closer we are to someone the more intimate our connection to them is, and the further apart we are the more observational and emotionally distant we are to them.      

Being face to face with a partner has a different emotional feeling than watching the actions of someone you don’t know from across the room. The same principle can be applied to shot sizes. 

The more of the frame a character takes up, the more intimate and personal our connection to them feels. So having a close up of a character means the audience will unconsciously feel a greater sense of connection towards the character in that moment. It’s as if the filmmaker is saying, ‘Make sure you notice this detail or emotion. It’s very important.’ 

Viewing an entire movie shot in wide shots, on the other hand, will distance the audience emotionally from the character and their actions, making it feel like we are passively watching them rather than being transported directly into their head and thoughts.

Now the reality is that most films are shot with, and include, a variety of shot sizes. This is so that different pieces of performances can be chopped together continuously and without jump cuts. Having different shot sizes to work with also allows the editor to control the pacing and emotional arc of the cut by cutting to different shot sizes that come with different emotional connotations.

For example, a textbook scene will start with a wide establishing shot of the location, then cut to a wide of the characters in the scene. As we get to know those characters we’ll cut in closer to a medium and go back and forth on mediums as the characters exchange general dialogue. Then, as what the characters are saying, and how they are saying it, becomes more important and intimate, the editor will start to cut to close ups.

By cutting from wider shots to progressively tighter shots, the scene is able to begin by establishing the information and space of the location, and then slowly shift the audience’s perception from a more general observation of characters to building up a more personal connection with them as we get to know them.

While this is the general rule, shot sizes can be manipulated in other ways for effect. For example, Son of Saul uses close ups of the lead character for almost the entire film. This emotionally puts you in the shoes of that character and makes the space of the film more claustrophobic and confusing as we never cut wide enough to establish the space.

An opposite approach can be found in a film like Memoria, where we stay wide for most of the film. This presents the actions to us as something to be observed from afar in a more objective way.

Editors can also play against convention by flipping the idea of starting wide and cutting in closer. 

The opening to The Deathly Hallows does this by starting on a bold opening statement. We cut from an extreme close up, to a close up, to a medium shot to an extreme wide. This creates an emotional arc that moves from extremely intimate to more detached, and controls the flow of information, providing context and establishing more of the world each time we cut wider.    

CONCLUSION

In shot sizes, filmmakers hold a very valuable tool in their hands.

Like a puppet master they can use the size of a shot to manipulate what the audience does or doesn’t know and, perhaps more importantly, to manipulate the very emotions of the audience and the relationship they have with the characters on screen.

Gray Kotze

6 Basic Camera Settings You Need To Know

Let’s examine 6 of the most important camera variables or settings that can be changed: EI, shutter angle, aperture, ND filters, white balance and frame rate.

INTRODUCTION

For anyone who wants to take photography or cinematography more seriously, the first step is to distance yourself from the dreaded ‘auto’ setting on a camera, move the knob over to ‘manual’ and start to understand the basic camera variables or settings that change the way that an image is captured.

Professional image makers choose to manually manipulate these settings to maintain complete control over how an image looks and not leave those creative decisions down to the whims of automatic camera software.

In this video I’ll examine 6 of the most important camera variables that can be changed. These settings affect the image in different ways and can be placed into three separate categories: exposure settings, colour settings and motion settings. These 6 variables have both technical value that can be used to control how an image looks, and also have creative value that changes the effect, mood or feeling of an image.      

  

EXPOSURE SETTINGS

Exposure refers to how dark or light an image is. This is determined by the amount of light that goes through a camera's lens and hits the sensor of the camera - where the image is recorded.

A dark image with too little light is underexposed, a bright image with too much light is overexposed and an image with enough light is evenly exposed. A camera has four variables that can be changed to alter exposure: EI, shutter angle, aperture and neutral density filters.

Let’s start with the EI, or exposure index setting - a good base setting to start with. This can be referred to using different metrics such as ISO, ASA, gain or EI depending on the camera, but the concept is the same. It’s a measurement of a film or camera sensor’s sensitivity to light. 

The lower the value the less sensitive it is to light and the darker an image will be. Raising this value means the sensor is more sensitive to light and the brightness of an image will increase.

Most professional digital cinema cameras have what is called a ‘base’ or ‘native’ EI setting where the sensor performs best and has the most dynamic range and lowest noise. For example the Alexa sensor has a native EI of 800.

While EI can be changed on digital cameras, film’s speed or sensitivity to light is set at a fixed level, such as 50 ASA, and cannot be altered without changing to a different film stock.
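The relationship between EI and brightness is logarithmic: each doubling of the EI value adds one stop of light, and each halving takes one away. A rough sketch in Python (the function name is my own, purely illustrative):

```python
import math

def ei_stop_difference(ei_from: float, ei_to: float) -> float:
    """Stops of brightness gained by changing from one EI to another.

    Each doubling of the EI value makes the image one stop brighter;
    each halving makes it one stop darker.
    """
    return math.log2(ei_to / ei_from)

# Moving from the Alexa's native EI of 800 up to 1600 gains one stop.
print(ei_stop_difference(800, 1600))  # 1.0
# Dropping from 800 down to 200 loses two stops.
print(ei_stop_difference(800, 200))   # -2.0
```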

The next exposure setting we can manipulate is the shutter angle or shutter speed.

A shutter is a solid layer in front of the sensor that opens and closes rapidly. When it opens it lets in light, when it closes it blocks light. The longer the shutter is open for the more light it lets in and the brighter an image is, while the shorter the shutter remains open, the less light it lets in and the darker an image is. 

Cinema cameras use shutter angle, a measurement shown in degrees. A large shutter angle means that more degrees of the circular shutter are open and more light is let in, while a smaller shutter angle, with a smaller opening, lets in less light.

Consumer or still photography cameras use shutter speed, which shows this metric as a fraction, such as 1/50th of a second - a measurement of how long the shutter is open for. A smaller fraction, such as 1/250th of a second, means that the shutter is open for a shorter time and less light will be let in, whereas a larger fraction, such as 1/25th of a second, means the shutter is open for longer and lets in more light - resulting in a brighter image.
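The two notations describe the same thing and can be converted: at a given frame rate, the exposure time is the fraction of the shutter’s full 360-degree rotation that is open. A small sketch (the function name is illustrative):

```python
def shutter_exposure_time(shutter_angle_deg: float, fps: float) -> float:
    """Exposure time in seconds for a rotary shutter at a given frame rate."""
    return (shutter_angle_deg / 360.0) / fps

# The classic 180-degree shutter at 24 fps exposes each frame for 1/48 s,
# which is why 1/50 is the go-to shutter speed on stills-style cameras.
t = shutter_exposure_time(180, 24)
print(f"1/{1 / t:.0f} of a second")  # 1/48 of a second
```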

With these two settings done, we now move to the lens where we can set the aperture, iris or stop.

This is the size of the hole at the back of the lens that allows light to pass through it. Iris blades can either be expanded to open up the hole and let in more light, or contracted to make the hole that light passes through smaller. 

On cinema lenses this is done manually by adjusting the barrel of the lens and on modern digital stills cameras it is usually adjusted via a button or scroll wheel on the camera which changes the iris of the lens internally. The aperture is either measured as a T-stop on cinema lenses or as an F-stop on stills lenses.

Whatever measurement is used, the lower the stop number the more light will be let through and the brighter an image will be. So a lens with a stop of T/2 has a large aperture opening and will let in much more light, while a lens with a stop of T/8 has a smaller opening and will let in less light.
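This is because the light a lens transmits is proportional to 1/T², so each doubling of the T-number costs two full stops. A quick Python sketch (the function name is my own):

```python
import math

def stops_between(t_open: float, t_closed: float) -> float:
    """Stops of light lost when stopping down from one T-stop to another.

    Transmitted light is proportional to 1 / T^2, so doubling the
    T-number (e.g. T/2 to T/4) costs two full stops.
    """
    return 2 * math.log2(t_closed / t_open)

print(stops_between(2.0, 8.0))  # 4.0 -> T/8 lets in four stops less than T/2
```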

These three settings, ISO, shutter speed and aperture are foundational to exposing footage and are called the ‘exposure triangle’. 

In photography these three settings are regularly adjusted individually to find the right exposure. In cinematography, however, more often than not these settings are set up front and only tweaked for their photographic effect.

For example in cinema, usually the ISO will be set to its native level, such as 800, the shutter will be set to 180 degrees or 1/50th of a second to ensure motion or movement feels ‘normal’, then the stop of the lens will be set depending on how much of the background the cinematographer wants in focus.

Opening up the aperture to a low number like T/1.3 means a shallow depth of field with much of the image out of focus, whereas stopping down to about T/8 will mean more of the image is in focus.

So if this is the case then how else do cinematographers adjust the brightness of an image? 

They do it by manipulating the strength of the lighting and with the fourth exposure variable: neutral density, or ND, filters. These are pieces of darkened glass, placed in front of the sensor or lens, that decrease the amount of light let in without affecting the colour or characteristics of the image.

In film, a number is assigned to a filter to show how many stops of light it blocks. Each stop is represented by 0.3. So ND 0.3 means 1 stop of light is blocked and ND 0.9 takes away 3 stops of light.
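That arithmetic can be sketched as follows - divide the ND value by 0.3 to get stops, and halve the light once per stop (function names are my own):

```python
def nd_stops(nd_value: float) -> float:
    """Stops of light an ND filter blocks (each 0.3 of ND equals one stop)."""
    return nd_value / 0.3

def nd_transmission(nd_value: float) -> float:
    """Fraction of light the filter lets through (half per stop)."""
    return 2 ** -nd_stops(nd_value)

print(f"{nd_stops(0.9):.0f} stops")          # 3 stops
print(f"{nd_transmission(0.9) * 100:.1f}%")  # 12.5% of the light gets through
```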

Many modern cinema cameras have ND filters built into the camera which can be adjusted internally via a setting. ND filters can also be used as physical glass filters that are mounted onto the front of the lens using a tray in a mattebox, or with a screw in filter on stills lenses.          

COLOUR SETTINGS

Now that we know the 4 variable settings that we can use to adjust the brightness of an image in camera, let's look at another very important setting related to colour - white balance.

White balance, or colour temperature, is measured in Kelvin and changes how warm or cool an image looks.

The two most common white balance settings are 3,200K (or tungsten) and 5,600K (or daylight). This is because when you set the camera’s white balance to 3,200K and light an actor with a warmer, tungsten light the colour will appear neutral - not overly cool or warm.

Likewise when you set the camera to 5,600K and shoot with a cooler daylight fixture or outside in natural sunlight the image will also appear neutral.

This means that the lower you set the Kelvin value of the white balance the cooler an image will appear. So if you shoot outside in natural sunlight and set the camera to 3,200K then the image will be blue. Inversely if you shoot in tungsten light with a colour temperature of 5,600K then the image will be warm.

As well as having these two preset colour temperatures, most modern cameras also allow you to pick from a range of colour temperatures on the Kelvin scale and even have an auto white balance setting which automatically picks a Kelvin value to give the image a neutral colour balance.

It should also be noted that like with EI, when shooting on film the colour temperature is fixed to either daylight or tungsten and cannot be changed without using a different film stock.

MOTION SETTINGS

Finally, let's take a look at a camera setting that only applies to moving images - frame rate. To understand what frame rate is we need to think of film not as a video clip, but rather as a series of individual images.

When shooting on film, 24 still pictures are captured every second. Each of these pictures is called a frame. To create the illusion of a moving image these pictures are then projected back at a speed of 24 frames per second. You can think of it kind of like leafing through still images in a flip book at a speed of 24 pages every second.

Therefore, recording a frame rate of 24, or 25, frames per second with a camera produces the illusion of motion at a speed which is the same as that which we experience in real life.

Frame rate can also be used to exaggerate motion for effect by keeping the same playback ‘base’ frame rate of 24 frames per second and adjusting the frame rate setting that the camera captures.

For example, if we want slow motion, we can set the camera to record 48 frames per second and then play it back at 24 frames per second. This results in twice as many frames and therefore motion that appears at half the speed of real life.

Something important to note is that frame rate also affects exposure. Doubling the frame rate - for example from 24 to 48 frames per second - means that the camera loses a stop of light and the image will therefore be darker.
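Both effects - the playback speed and the light lost - fall out of simple ratios, sketched here with illustrative function names:

```python
import math

def playback_speed(capture_fps: float, base_fps: float = 24) -> float:
    """How fast the footage plays relative to real life (0.5 = half speed)."""
    return base_fps / capture_fps

def stops_lost(capture_fps: float, base_fps: float = 24) -> float:
    """Stops of light lost at a constant shutter angle when raising the
    frame rate: exposure time per frame scales with 1/fps, so doubling
    the frame rate costs one stop."""
    return math.log2(capture_fps / base_fps)

print(playback_speed(48))  # 0.5 -> half-speed slow motion
print(stops_lost(48))      # 1.0 -> one stop darker
```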

CONCLUSION

So, there we go: EI, shutter angle, aperture, ND, white balance and frame rate -  six camera variables that every photographer or cinematographer needs to know.

If this all seems like too much technical information, the easiest way to practically get this information in your head is to find a digital camera and start experimenting with settings by shooting.

The more you practice with a camera, the more all of this information will start to become second nature. Until you get to a point where you can manipulate all of these settings unconsciously to capture that imaginative image that you see in your head.  

Gray Kotze

How Paul Thomas Anderson Shoots A Film At 3 Budget Levels

Let's take a look at three films made at three different budget levels from director Paul Thomas Anderson in order to get a sense of the trajectory of his career, his approach to filmmaking and how some of his methods of production have both remained the same and slowly shifted throughout his career.

INTRODUCTION

Compelling, flawed characters. Ensemble casts. Masterfully chaotic stories rooted in universal themes. Visual innovation. Technical competence. An overall strong vision and auteur-like control. These are some of the characteristics that, in my mind, make Paul Thomas Anderson one of the best, if not the best, directors of the last 20 or so years.

Before we get started I think it is important to note that usually in this series I tend to feature directors who have undergone a greater change in the level of budget that they work with. 

With the exception of his early work, Anderson has mainly stuck to producing work around the $25 to $40 million range and has never ventured into the realm of studio blockbusters. Nevertheless, let’s take a look at three projects which have been made at increasing budget levels: Hard Eight, Boogie Nights and Licorice Pizza.

In doing this I hope to give a sense of the trajectory of his career, his approach to filmmaking and how some of his methods of production have both remained the same and slowly shifted throughout his career.

HARD EIGHT

“I was way too young to be given the keys to the car I think. I wrote it because I had to because it just came out.” - Paul Thomas Anderson

Anderson’s interest in making films began in his childhood in the San Fernando Valley and continued throughout his teenage years. He would write, direct and then film his ideas for shorts with his father’s Betamax video camera. He attended Santa Monica College but quickly became disillusioned with film school when he felt his ideas and experimentation were discouraged and filmmaking was turned into homework or a chore. 

Instead he started working as a production assistant on sets in LA and managed to cobble together $10,000 from a combination of money for college, gambling winnings and his girlfriend’s credit card to produce the short film Cigarettes & Coffee. 

He managed to cast Philip Baker Hall, an actor he greatly admired due to his work on Secret Honor - a film made by one of his greatest influences, director Robert Altman. 

“Yeah it was based on stuff. I’d been working in Reno. I’d spent some time up in Reno and I was coming off experiences there of watching old guys. I loved this actor named Philip Baker Hall, still love him and I heard his voice as the character. I just started writing and that’s what came out.” - Paul Thomas Anderson

He would continue this writing process throughout his career. Many of the films he would write were based on life experiences he had and set in locations that he would frequent or had lived in. 

He took these experiences and places and created narrative arcs and settings with them. At the same time he often filled in the characters based on actors that he wanted to work with and wrote the roles with certain actors in mind.

Cigarettes & Coffee did very well and was accepted into the Sundance short film programme. When Anderson decided to turn it into a feature length film, he was also accepted into the Sundance feature programme to develop it.

It was there that he secured funding for the feature version, titled Sydney, through Rysher Entertainment all while in his early 20s. He was so young that some crew members on the set initially mistook him for a production assistant instead of the director.

“You know I just bluffed my way through directing. You gotta understand that at that time probably based on the success of Pulp Fiction and a couple other small independent films there was a lot of cash floating around from these cable companies. So if you could make a movie for under $2 million they could kinda sell it off piece by piece with just enough genre elements and a couple cast names and you could just go make your movie.” - Paul Thomas Anderson

So, with an independently funded $2 million low budget he set out to make the film by squeezing the production window into a narrow 28 days. 

He cast the film by scooping up some well-known actors on the tight budget and shot it all on location.

The tight schedule meant they had loads to shoot, particularly the casino scenes, which had to be squeezed into night shoots running from 10pm to around 8am the next morning.

To get through so much in the small window, it helped that Anderson always had a very clear visual idea of how he wanted to capture the film, so no time was wasted on extraneous shots or deliberation.

To execute the cinematography he hired Robert Elswit, who at the time was probably the biggest name crew member on the project. They quickly found that they complemented each other and had similar visual sensibilities.

“Paul doesn’t need a lot of help in certain areas. I understand his taste, maybe so it’s never a surprise. I can anticipate what he wants to do for the most part. He always has a visual style before he starts. Always. I mean it changes to some extent but it’s completely thought out. Nobody is more prepared. Nobody has really thought through pictorial style as completely as Paul.” - Robert Elswit, Cinematographer

This style included explorative camera movement - often done on a Steadicam - and slow dolly tracking. Elswit lit with moderate contrast ratios, exposed the actors well and used hard light in a naturalistic way.

Even though both Anderson and Elswit loved the look of anamorphic lenses, the producers ruled it out for budgetary reasons. As a compromise they shot Super 35 Kodak film stock with Panavision cameras and lenses in a 2.39:1 aspect ratio - an aspect ratio Anderson would often use in his later films.

Rather than giving loads of direction to performances, or ‘manipulation’ as he called it, he tried to instil in the actors the feeling of what he wanted before production and cast all the parts exactly as he wanted them. As the cliche goes - most of acting is about casting.

When it came to editing he worked with a similar methodology: he doesn’t like cutting everything up too much, and instead prefers to keep performances intact and pull from limited takes.

When he submitted his first two and a half hour cut of the film, friction between him and the production company ignited over final cut. Rysher Entertainment cut it down, changed the music, titles and even the name of the film to Hard Eight. 

As a final compromise, the company said they would be happy to release Anderson’s version of the film if he came up with the funds to finish it himself. So he used all the money from a recent deal he had signed for his next film, Boogie Nights, to finance post production and cut it the way he wanted - and agreed to give up his original title, Sydney.

Paul Thomas Anderson used the modest budget to secure a solid cast of big name actors to draw in an audience, produced the relatively small-scope story over a limited 28 day production window, saved money on production design and by shooting Super 35 with an experienced DP, and eventually won the creative battle for final cut.

BOOGIE NIGHTS

“I went into my next situation thinking that the lesson I learned was to be paranoid, protective and don’t trust anyone. Fortunately I got to work with a great studio and a guy named Mike De Luca who was able to see what I’d gone through and said ‘No, no. Trust me and put your faith in me.’” - Paul Thomas Anderson

Let’s backtrack a bit. Anderson first came up with the idea for his second film Boogie Nights when he was 18. He wrote and directed ‘The Dirk Diggler Story’, a 30 minute mockumentary about the golden age of porn.

“After I made the short film I wrote as a sort of full length documentary taking a kind of Spinal Tap approach, you know. But by the time I’d finished that, that format had kind of been worn out and done many times. I just kinda figured the way to do this is to go nuts and just make it straight narrative. I eventually had a shooting script of 186 pages.” - Paul Thomas Anderson

The eventual script looked at the rise and fall of a character in the 1970s porn scene and examined the idea of addiction, ego, surrogate families and communities. 

Like Hard Eight, he wrote the script with certain actors in mind - including performers he had worked with on Hard Eight. He set it in the San Fernando Valley, the area where he had grown up and which he knew well. It featured an ensemble cast, inspired by the work of Robert Altman.

“Casting and writing are kind of the same thing. Because I write parts for actors that are my friends or actors that I don’t know that I really want to work with.” - Paul Thomas Anderson

With buzz building around the script while Anderson was still in post production on his first film, New Line Cinema came on board to produce with a $15 million budget and promised a more hands-off approach.

As it was a lengthy script, larger in scope and requiring the casting of many well known actors, even the increased budget was a bit stretched. After their positive experience on Hard Eight, Elswit was again hired as his DP.

Elswit remarked that from the first location scout Anderson would outline the kind of shots he wanted. That detailed level of specificity helped them to save time and money, since it was a lengthy, ambitious film for its budget level.

This vision was also important when coordinating and communicating some of the complex long takes Anderson had in mind. The most famous is probably the opening shot of the film, where a Steadicam operator started on a ride-on crane, which boomed down, gave the operator a chance to step off, and then tracked characters into an interior - introducing the audience to the space and world of the film in the first three minutes.

“These long, complicated tracking shots are really fun to do. I think the actors love them. Movie acting is sort of so pieced up and chopped up. Very rarely is action called and 3 or four minutes later their scene happens. It’s just kind of fun for them to really act something through and let it breathe. Let it happen.” - Paul Thomas Anderson

Due to the arduous nature of the shot, the main Steadicam operator, Andy Shuttleworth, had a backup Steadicam operator, as this one shot was scheduled over an entire night.

Elswit lit the exterior scene with two strong, hard backlights and some smaller units meant to mimic street lights, metered at a stop of T2.8. Inside the club his team rigged the lowest budget 70s-style disco lights they could find overhead, to keep all film lights out of the shot. The interior was brighter, at T4.

To maintain an even exposure across the different lighting levels, Elswit used a wireless iris motor to slowly adjust the aperture remotely, going from T2.8 outside to T4 as the camera moved inside.
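That T2.8-to-T4 ramp works out to almost exactly one stop, since light transmission scales with 1/T². A minimal sketch of the arithmetic - the function is my illustration, not anything from the production:

```python
import math

def stop_difference(t_open: float, t_closed: float) -> float:
    """Stops of light lost when stopping down from t_open to t_closed.
    Transmission scales with 1 / T^2, so each full stop multiplies
    the T-number by sqrt(2) (~1.4)."""
    return 2 * math.log2(t_closed / t_open)

print(round(stop_difference(2.8, 4.0), 2))  # 1.03 -> roughly one stop
```

In other words, the remote iris pull compensated for the club interior being about one stop brighter than the street exterior.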

This time they had the budget to shoot with anamorphic lenses. They used Panavision C-series and shot pretty much the entire film with three lenses: a 40mm and a 50mm for wider frames and a 75mm for close ups.

Anderson disliked heavy film grain so they shot on Eastman 100T 35mm film stock - the slowest practical speed stock they could find.

Like on Hard Eight, Elswit liked to observe the natural light and then augment it with additional fixtures. To do this he would take stills on slide film during location recces - a stock with limited dynamic range that therefore clearly showed what the natural light was doing.

He’d then come in and accentuate the natural light by, for example, using large tungsten lights through windows for day interiors to mimic sunlight. 

Overall, the budget went on a large ensemble cast, re-creating the film’s 1970s period settings, and a longer production schedule for a lengthy script whose many scenes were shot innovatively with more extensive technical gear.

LICORICE PIZZA

After a career of producing almost non-stop critically acclaimed work Anderson turned to the 70s and, again, the San Fernando Valley for his next idea.

“I had a story that wasn’t exactly mine but that paralleled mine. My relationship with Gary Goetzman, who I don’t know if many of you know is a producer. Gary worked in the Valley. He was a child actor. When that didn’t really work out he started a waterbed business. The stories he told was an opportunity to enter into a world that I remember very well.” - Paul Thomas Anderson

Again, his script pulled from his own experiences, in a setting he was familiar with, with dialogue and characters written for specific actors, or first time actors, that he had in mind.

Having worked with Philip Seymour Hoffman many times in the past, Anderson turned to Hoffman’s son Cooper to play the lead role. Although it was his first film, he gave a more realistic, understated performance than Anderson saw in the other castings.

This was paired with an on-screen chemistry with another first-time actor, Alana Haim, for whom Anderson had shot music videos in the past.

The long screenplay with its many scenes meant he needed a budget of around $40 million - which was supplied by MGM.

In the build-up to the film Anderson decided to shoot lots of tests - partly to find a look for the film and partly to see if his two leads had enough on-screen chemistry for the movie to work. This was a luxury the higher budget afforded him, compared to Hard Eight, which had to be shot in 28 days.

During these tests they also looked at different lenses and pushing and pulling different film stocks until they settled on the look they were happy with.

Having parted ways with Elswit after many films together, Anderson developed an unusual way of working without a dedicated cinematographer.

He and key technical crew members - gaffer Michael Bauman, camera operator Colin Anderson and key grip Jeff Kunkel - put their skills into a giant pot and shot projects without having a director of photography as a department head.

This worked due to the director’s technical prowess and his track record with an experienced team of collaborators. They first worked this way on Phantom Thread, which they shot in the UK, and repeated it on various music videos.

However, when it came to shooting in the US they needed to have an officially credited DP due to union requirements. So, Anderson and Bauman shared the official credit of cinematographer while they continued working in the same collaborative style as before, with Anderson providing a visual direction and his key crew offering their input and technical execution. 

“Because we’re shooting in California you’re kind of required to have someone listed as the cinematographer versus when we were overseas…The workflow is a very collaborative environment. He and I kinda worked very closely with Colin Anderson who is the camera operator. You know, they’ll formulate a shot, the two of them will, and he and I will have done preliminary scouting and talk about the lighting and then on the day we’ll execute what the plan is.”  - Michael Bauman, Gaffer & Cinematographer

Like on many of his films, they mainly shot on vintage C-series anamorphic lenses from the 70s. After doing extensive tests they chose a set of Cs that included three different 50mm lenses, each with its own characteristics, which they picked between depending on the situation or shot.

Anderson has always been passionate not only about shooting on film but also about screening the dailies - the raw footage - projected on 35mm.

“One of the things that we do is that we do film dailies. We watch dailies while we are shooting. On set we have a space that we work out of that we can project film. It’s me, it’s the camera department, the core team of the camera department basically department heads would come in and out. We use that process to figure out takes we’re going to use.”  - Andy Jurgensen, Editor

Overall, Licorice Pizza’s larger budget offered the director more time and resources to fine-tune his vision through extensive camera tests, location scouting and tests with actors before stepping onto set. The result was a final film that utilised extensive shots with vast period-correct backgrounds, a curated selection of technical gear, stunts, and an ensemble cast that included some big name performers.

CONCLUSION

Certain aspects of Paul Thomas Anderson’s way of working as a director have remained consistent throughout his career: writing scripts based on his personal experiences, with ensemble casts ratcheted up by chaotic action; shooting on 35mm film, often with anamorphic lenses; working with a small, consistent crew; focusing largely on casting and then letting actors do their thing; and creating innovative visual languages based on camera movement.

However, the more established he has become, the more he has also been able to take his time to create the films, with more extended production schedules and more time for testing and finding the look before production begins. 

After the departure of Elswit, his methodology has also shifted away from the traditional route of working with a credited cinematographer, to a collaborative working style where he leans on the expertise of his crew department heads.

Despite these changes, his films always have a recognisable tone and style that ties them together, regardless of the genre, script or subject matter.
