Gray Kotze

The Last Colour Negative Motion Picture Film In The World: Kodak Vision 3


INTRODUCTION

Shooting on film, in both photography and in the world of cinema, has seen a bit of a resurgence in recent times. After the release of capable, high end digital cinema cameras - like the Arri Alexa in 2010 - many may have thought that the era of shooting movies photochemically was done and dusted. However, over a decade later, motion picture film still exists.

But, unlike in photography where there are still quite a few different films to choose from, in the world of motion picture film only one commercially mass-produced category of colour negative film remains. From one company. Kodak Vision 3.

So, let’s use this video to examine the last remaining range of film stocks, go over how to choose the right film, how to identify each stock’s specifications based on the label of their film can, and talk about the visual characteristics that contribute to the ‘shot on Kodak’ look.

CHOOSING THE RIGHT FILM

When cinematographers shoot on film there are three basic criteria that will inform what film stock they choose to shoot on: the gauge, the speed and the colour balance.

First, you need to decide what gauge or size of film you will shoot on. This may be determined on the basis of budget, or due to a stylistic choice based on the look of the format.

The four standardised film sizes to choose from are: 8mm, 16mm, 35mm and 65mm.

The narrower the film, the less of it you use and the cheaper it is - but the image will have less sharpness and clarity, and more grain. The wider the film, the more of it you use and the more expensive it gets - but the image will have higher fidelity and finer grain.

Next, you’ll need to decide on what film speed you want to shoot at. This is a measurement of how sensitive the film is to light and is comparable to EI or ISO on a digital camera.

Basically, the more light you’re shooting in, the lower the film speed needs to be. So bright, sunny exteriors can be shot on a 50 speed film, while dark interiors need to be shot on a 500 speed film.

Finally, films come in two colour balances: daylight and tungsten. This refers to the colour temperature of the light source that they are designed to be shot in. So when shooting under natural sunlight or with film lights like HMIs that have a colour temperature of approximately 5,500K it’s recommended to use a daylight stock. When shooting with warmer tungsten light sources, a tungsten balanced film should be used to get the correct colour balance.

As a side note, it is still possible to shoot a tungsten film, like 500T, in cooler sunlight.

Kodak recommends using a warm 85 filter and exposing the film at 320 instead of 500. However, some cinematographers, like Sayombhu Mukdeeprom, prefer to shoot tungsten stocks in daylight without an 85 filter and then warm up the processed and scanned images to the correct colour balance in the colour grade.


HOW TO READ A FILM LABEL

Within the Kodak Vision 3 range there are four remaining film stocks in production. Two daylight balanced stocks - 50D and 250D - and two tungsten stocks - 200T and 500T.

One of the best ways to further unpack the technical side of what makes up a film is to look at the information on the label that comes with every can.

The biggest and boldest text on the label identifies what kind of film it is. This is broken into two parts. 50 refers to the film speed or EI that it should be metered at. So cinematographers shooting a 50 EI film will set the ISO on their light meter to 50 to achieve an even or ‘box speed’ exposure of the image.

‘D’ refers to daylight. So this is a daylight balanced film.

The second part, 5203, is a code to identify what type of film it is. Every motion picture film has a different series of numbers that is used to identify it. So 35mm Kodak Vision 3 50D is 5203. 8622 is 16mm Fujifilm Super-F 64D. 7219 is 16mm Kodak Vision 3 500T.
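As a side note on keeping track of these codes: each stock’s four-digit number also encodes the gauge - Kodak’s 35mm stocks begin with 52 and their 16mm stocks with 72. A minimal sketch of a lookup using only the codes mentioned above (the function name is my own):

```python
# Emulsion codes mentioned above; for Kodak, codes starting with 52
# are 35mm stocks and codes starting with 72 are 16mm stocks.
STOCK_CODES = {
    "5203": "35mm Kodak Vision 3 50D",
    "7219": "16mm Kodak Vision 3 500T",
    "8622": "16mm Fujifilm Super-F 64D",
}

def identify_stock(code: str) -> str:
    """Look up a film stock by the four-digit code printed on the can."""
    return STOCK_CODES.get(code, "unknown stock")

print(identify_stock("5203"))  # 35mm Kodak Vision 3 50D
```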

It’s crucial that all cans of exposed film are labelled with this code when sent to the film lab, so that the stock can be identified and processed correctly.

This brings us to the next piece of text: ‘Develop ECN-2’. This refers to how the film needs to be developed. ECN-2 development is basically the same process of passing the film through a series of chemical baths as C-41 - which is used to process colour negative film in photography.

However, it also includes an extra step where the remjet layer on the Vision 3 film is removed. Remjet is used to minimise the halation of highlights and decrease static from the film quickly passing through the camera at 24 frames per second.

Next, we have a table that indicates how the film should be exposed in different lighting conditions. Under daylight no extra filters are required and the film can be exposed with an EI or ISO of 50.

When shooting with a 3,200K tungsten light source Kodak recommends using a cooling 80A filter - which changes the light from 3,200K to 5,500K or daylight. Adding this filter lets through less light, so in this situation Kodak recommends exposing the film with an EI of 12.
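Both of these adjusted EIs fall out of simple arithmetic: every stop of light a filter absorbs halves the effective film speed. A quick sketch, assuming the standard published light loss of roughly 2/3 stop for an 85 filter and 2 stops for an 80A (the function name is mine):

```python
def filtered_ei(base_ei: float, stops_lost: float) -> float:
    """Effective EI after a filter that absorbs `stops_lost` stops of light."""
    return base_ei / (2 ** stops_lost)

# 85 filter on 500T: ~2/3 stop lost, which Kodak publishes as EI 320.
print(round(filtered_ei(500, 2 / 3)))  # ~315

# 80A filter on 50D: 2 stops lost, published as EI 12.
print(filtered_ei(50, 2))  # 12.5, rounded down to 12
```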

This 35 means that the film comes in a 35mm gauge width. The numbers alongside it refer to the kind of perforations the film has along its edges.

And, the final important number refers to how many feet of film the roll contains. 

When shooting on 35mm the most common roll length is 400ft -  which is used for lighter camera builds. But 1000ft rolls can also be used in larger studio magazines that allow filmmakers to roll the camera for longer before needing to reload.   
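Those roll lengths map directly onto running time: standard 4-perf 35mm has 16 frames per foot, so at 24 frames per second the camera consumes 90 feet per minute. A minimal sketch of the arithmetic (the function name is mine):

```python
FRAMES_PER_FOOT_35MM_4PERF = 16  # standard 4-perf 35mm

def runtime_minutes(feet: float, fps: float = 24) -> float:
    """Running time of a 4-perf 35mm roll at a given frame rate."""
    return feet * FRAMES_PER_FOOT_35MM_4PERF / fps / 60

print(round(runtime_minutes(400), 1))   # ~4.4 minutes per 400ft roll
print(round(runtime_minutes(1000), 1))  # ~11.1 minutes per 1000ft roll
```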


KODAK VISION 3 CHARACTERISTICS

There’s a good reason why many DPs who shoot on digital cinema cameras still try to create a Kodak ‘look’ for footage using a LUT or in the colour grade. 

Whether it’s the result of the long legacy of shooting movies on film, or whether it’s just that filmic colour is actually more visually appealing, the film look remains sought after. However, it’s important to remember that the look of film has changed over the years due to the methods used by manufacturers.

For example, many iconic Hollywood films from the 70s that were shot with the famous 5254 have a more neutral, crushed, grainy look than modern Vision 3.

Also, keep in mind that modern productions shot on film are all scanned and then graded in post. So the colour in the final file may be different depending on how much the colourist remained true to, or strayed from, the original colour in the negative.     

Kodak film has always been considered rich, with good contrast and warmer than Fujifilm - which has more pronounced blues and greens.

As it’s the most modern, the Vision 3 range is the cleanest looking motion picture film stock produced. The most sensitive of the bunch, 500T, has very low grain - even when push processed.

For this reason, filmmakers who seek a deliberately high grain, textured image these days regularly opt to shoot on 16mm, rather than the lower-grain 35mm.

The colour produced is on the warmer side - which helps to create beautiful, rich looking skin tones that are more saturated than Kodak’s older Vision 2 stock.  

Vision 3 film also has a big dynamic range of approximately 14 stops - which is more than older films. This means that when it’s scanned and converted to a digital file, the colourist is able to do more with it, such as using power windows to recover highlights from overexposed areas.

“As a colourist my job is to try to build a good contrast level and keep the detail in the lowlights. I find that the 5219 stock was designed so that I can have that contrast and the detail as well without having to do anything extra like power windows to pull the detail out.” - Mike Sowa, Colourist

What I especially love about the film is how it renders the highlights with a subtly blooming halation effect and how it renders detail in a way that is not overly sharp. 

With modern post production colour, it’s possible to come very close to replicating this look with digital footage. You can get almost identical colours, and you can add scans of film grain on top of the image. But, to me, what is still not achievable in post is the organic way that film renders fine detail.

CONCLUSION

So that brings us to the end of this video. As always, a final thanks to all of the kind Patrons who keep the channel going with their support and receive these videos early and free of ads. Otherwise, until next time, thanks for watching and goodbye.

Gray Kotze

How Jordan Peele Shoots A Film At 3 Budget Levels


INTRODUCTION

Jordan Peele is a director who possesses a true love of genre - especially the horror genre. His films have used genre as a structural framework, which are filled in with satirical stories that explore wider themes, ideas and issues in society, told through the eyes of his protagonists.

Telling stories in such a bold, direct manner - one that at times challenges and pokes at the audience’s own insecurities and deep-seated fears - has sometimes meant that his films have received polarised reactions.

In this video I’ll explore the work of Jordan Peele by looking at three films that he has directed at three increasing budget levels: the low budget Get Out, the medium budget Us, and the high budget Nope to unpack the methodology behind his filmmaking and his career.  


GET OUT - $4.5 MILLION

From his background in sketch comedy, Peele transitioned to another genre for his debut feature film. 

 “I think horror and comedy are very similar. Just in one you’re trying to get a laugh and in one you’re trying to get a scare.” - Jordan Peele

Both genres rely on careful pacing, writing, reveals and filmmaking gags that are used to evoke an emotional response from the audience. He also brought his appreciation for direct satire and social commentary from sketches into the horror screenplay.

In fact, some of the films that inspired him were adaptations of Ira Levin novels, like The Stepford Wives and Rosemary’s Baby - built around the horror genre and underpinned with a satirical commentary on society.

“Those movies were both extremely inspiring because what they did within the thriller genre was this very delicate tightrope walk. Every step into ‘weird town’ that those movies make, there’s an equal effort to justify why the character doesn’t run screaming. That sort of dance between showing something sort of weird and over the top and then showing how easily it can be placed with how weird reality is. That’s the technique I brought to Get Out.” - Jordan Peele

Justifying the actions of the characters so that the audience does not question the decisions that they make is particularly important in the horror genre or any genre that incorporates elements of the supernatural into a story.

Slowly backing the characters up into a corner until they have no escape is what creates the necessary suspenseful environment. 

He pitched the script to Blumhouse Productions - who have a track record of producing low budget horror films, under the $6 million mark, and then securing the wide studio releases that catapult them to financial success thanks to horror’s broad commercial audience.

It was through Blumhouse that he was connected with DP Toby Oliver who had previously shot other films for the production company.  

“It began as the fun of a horror story. In the middle of the process it turned into something more important. The power of story is that it’s one of the few ways that we can really feel empathy and encourage empathy. When you have a protagonist, the whole trick is to bring the audience into the protagonist’s eyes.” - Jordan Peele

Peele puts us in the character’s shoes through the way that he structures almost all of his stories around a central protagonist. He also uses the placement of the camera, how it moves and the overall cinematography to make us see the world from the point of view of the main character.

Oliver lit most of the film in a natural way, presenting the world to the audience in the same way that the protagonist would see it.

“My pitch to him was that I thought the movie should have really quite a naturalistic feel. Not too crazy with the sort of horror conventions in terms of the way it looks. Maybe not until the very end of the movie where we go towards that territory a little bit more. With the more stylised lighting and camera angles.” - Toby Oliver 

Instead, the camera often tracked with the movement of the protagonist or stayed still when he was still. 

They also shot some POV shots, as if the camera literally was capturing what the character was seeing, or used over the shoulder shots that angled the frame to be a close approximation of the actor’s point of view.

This framing technique, combined with a widescreen aspect ratio, also stacks the image so that there are different planes within the frame.

“What I love to do as a DP is to have story elements in the foreground, midground and background. When you’re looking through the frame there’s depth that’s telling you something more about the characters and story as you look through it.” - Toby Oliver 

One of the challenges that came with the film’s low budget was an incredibly tight 23 day shooting schedule. To counter this they did a lot of planning about how the film would be covered before production started: Peele drew up storyboards for any complicated scenes, and they walked through scenes in the house location taking still photos of each shot they needed, which Oliver then converted into a written shot list.

They shot Get Out using two Alexa Minis in 3.2K ProRes to speed up the coverage of scenes, using Angenieux Optimo Zoom lenses, instead of primes, which also helped with the quick framing and set up time that was needed.

Overall, Get Out was crafted in its writing as a contained, satirical horror film, shot with limited locations, fairly uncomplicated, considered cinematography through the eyes of its protagonist, and pulled off on the low budget by shooting on a compressed schedule with pre-planned lighting diagrams and shot lists.   

US - $20 MILLION

“It really is a movie that was made with a fairly clear social statement in mind, but it's also a movie that I think is best when it's personalised. It’s a movie about the duality of mankind and it’s a movie about our fear of invasion, of the other, of the outsider and the revelation that we are our own worst enemy.” - Jordan Peele

Building on the massive financial success of Get Out, Peele’s follow up film took on a larger scope story that demanded an increased budget. Again, Blumhouse Productions came on board to produce, this time with an estimated budget of $20 million.

Like Get Out, Us was also written as a genre film, this time leaning more into the slasher sub-genre of horror. 

“I think what people are going to find in Us is that, much like in Get Out, I’m inspired by many different subgenres of horror. I really tried to make something that incorporates what I love about those and sort of steps into its own, new subgenre.” - Jordan Peele

This time Peele hired Michael Gioulakis to shoot the project, a cinematographer who’d worked in the horror and thriller genre for directors such as M. Night Shyamalan and David Robert Mitchell.

One of the early challenges that they faced in pre-production was a scheduling one. Because they had four leads, who each had a doppelganger in the movie, and changing between shots with those doppelgangers required hours of hair and make-up work, they needed to precisely plan each shot.  

“Because you could never shoot a scene like you normally would where you shoot this side and then shoot the other side, we ended up actually boarding pretty much the whole movie. Which helped us to isolate who would be seen in which shot in which scene and then we could move that around and structure our day accordingly with costume and make up changes.” - Michael Gioulakis

The actors would arrive on set and block the scene as both versions of the character. When shooting, they then used a variety of doubles and stand-ins, who would take up one of the doppelgangers’ positions so that the actor had an eyeline to play to. They would shoot the scene from one character’s perspective and then usually come back the next day and do the other side of the scene.

For some wider two shots they left the camera in the same position and shot one pass with one character, one empty plate without the characters, and one pass with the actor in the other character’s make-up. Or they would shoot the scene with a double and do a face replacement in post production.

Not only was continuity very important for this, but also the lighting setups had to remain consistent between shots.

“I kind of like the idea of heightened realism in lighting. Like a raw, naturalistic look, just a little bit of a slightly surreal look to the light.” - Michael Gioulakis

A great example of this idea can be seen in the opening sequence inside the hall of mirrors where he used soft, low level LED LightBlade 1Ks with a full grid Chimera diffusion to cast a cyan light to give a more surreal feeling to what should be darkness.

Like in all of his work, Peele’s cinematographers often play with the contrast between warm and cool light and the connotations that warm light during the day is comforting and safe and bluer light at night is colder, more scary and dangerous. 

This isn’t always the case, but generally in his films, Peele balances moments of comforting characterisation during the day with moments of darker terror at night.

One of the trickier sequences involved shooting on a lake at night. Instead of going the usual route of mimicking moonlight, the DP created a nondescript tungsten key source, punctuated by some lights off in the background to break up the darkness.

His gaffer put a 150-foot condor on either side of the lake, with three 24-light Dinos on each condor to key the scene. They then put up a row of 1Ks and sodium vapour lights as practicals in the background.

The film was shot with an Alexa and Master Primes - on the 27mm and 32mm for about 90% of the film. He exposed everything using a single LUT with no colour cast at the low end, which rendered darker skin tones more neutrally.

In the end, Us was shot over a longer production schedule that accommodated double-shooting scenes with the leads, stunt scenes, bigger set design builds, and digital post production work by Industrial Light & Magic.


NOPE - $68 MILLION

“First and foremost I wanted to make a UFO horror film. Of course, it’s like where is the iconic, black, UFO film. Whenever I feel like there’s my favourite movie out there that hasn’t been made, that’s the void I’m trying to fill with my films. It’s like trying to make the film that I wish someone would make for me.” - Jordan Peele

For his next, larger budget endeavour he turned to the UFO subgenre with a screenplay that was larger in scope than his previous films, due to its large action and stunt set pieces and increased visual effects work. 

Even though it was a bigger movie, the way in which he told and structured the story is comparable to his other work in a few ways. One - it was written as a genre film, based on horror with offshoots of other subgenres. Two - it was told over a compressed time period using relatively few locations. Three - it featured a small lead cast and told the story directly through the eyes of his protagonist.

With a larger story and a larger budget came the decision, from esteemed cinematographer Hoyte van Hoytema, to shoot the film in a larger format.

“So I talked to Hoyte. Obviously scope was a big thing and I wanted to push myself and I asked him, ‘How would you capture an actual UFO? What camera would you use?’ And that’s what we should use in the movie. Both in the movie and in the meta way. And he said the Imax camera.” - Jordan Peele

So the decision was made that to create an immersive, otherworldly, large scope cinema experience they would shoot on a combination of 15-perf, large format IMAX on Hasselblad lenses and 5-perf 65mm with Panavision Sphero 65 glass.

They stuck to IMAX as much as they could, but had to use Panavision’s System 65 for any intimate dialogue scenes, because the IMAX camera’s film transport is so noisy that recording clean sync sound is impossible.

They shot the daytime scenes on 65mm Kodak 250D and dark interiors and night scenes on Kodak 500T. They also used Kodak 50D to capture the aerial footage. He developed the film at its box speed without pushing or pulling it to ensure they achieved maximum colour depth and contrast ranges without any exaggerated film grain.

The most challenging scene for any cinematographer to light is a night exterior in a location which doesn’t have any practical lights to motivate lighting from. 

Unlike the night exteriors in Us, which were keyed with tungsten units from an imagined practical source, van Hoytema chose to instead try to simulate the look of moonlight. There are two ways that this is conventionally done. 

The first is shooting day for night, where the scene is captured during the day under direct sunlight which is made to look like moonlight using a special camera LUT.

The second way is to shoot at night and use a large, high output source rigged up in the air to illuminate a part of the exterior set. However the larger the area that requires light, the more difficult this becomes. 

Van Hoytema came up with an innovative third method that he had previously used to photograph the large exterior lunar sequences on Ad Astra.

He used a decommissioned 3D rig that allowed two cameras to be mounted and customised it so that both cameras were perfectly aligned and shot the same image. 

He then attached a customised Arri Alexa 65 with an infrared sensor, which captured skies shot in daylight as dark. A Panavision 65 camera was mounted to capture the same image in full colour.

In that way they shot two images during the day that they could combine, using the digital infrared footage from the Alexa 65 to produce dark looking skies and pull the colour from the film negative of the Panavision 65.

This gave the night sequences a filmic colour combined with a background which looked like it was lit with moonlight and allowed the audience to ‘see in the dark’.

“Shooting on Imax brings its whole own set of challenges to the set. So for somebody that hasn’t shot on Imax you definitely bump yourself out of your comfort zone. By doing tests it became very evident, very early, that the advantages by far outweighed the disadvantages or the nuisances.” - Hoyte van Hoytema

While maintaining many of the story and filmmaking principles from his prior films, Nope was pulled off on a much larger budget that allowed them to shoot in the more expensive large format, with more money dedicated to post production, stunts and large action sequences that the bigger scope script required. 


CONCLUSION

Jordan Peele’s filmic sensibilities - valuing genre, and stories that carry broader social commentary, told with a limited cast, in limited locations, through the sympathetic eyes of a central protagonist - have remained the same throughout his career as a writer and director.

What has changed is the scope of the stories he tells. Each new film he’s made has seen increasingly bigger set pieces, more complex action scenes and larger set builds which are captured by more expensive filmmaking techniques.

This increase in scope is what has influenced each bump up in budget - all the way from his beginnings as a low budget horror filmmaker to directing a massive, Hollywood blockbuster.

Gray Kotze

Why Top Gun: Maverick Action Scenes Feel Unbelievably Real


INTRODUCTION

The runaway financial success of Top Gun: Maverick that makes it, at the time of this video, the sixth highest grossing movie in US box office history - coming out ahead of even an Avengers movie - can be boiled down to numerous factors. 

It was built on existing intellectual property and boosted by the success of the original Top Gun. It starred Tom Cruise. It pulled at the sentimental strings of a huge audience that missed the big Hollywood blockbusters of old, while still revitalising the form with something fresh. It was directed with a deft handling of emotion. And - what we’ll talk about in this video - it was executed with amazingly filmed aerial action sequences that kept audiences right on the edge of their seats.

IN-CAMERA VS VFX

But what is it that differentiates these moments of action from many of the other blockbuster set pieces we’ve become used to? I’d pinpoint it to an effective use of ‘in-camera’ photography. In other words, using practical effects more than visual effects.

“I think when you see the film you really feel what it’s like to be a Top Gun pilot. You can’t fake that.” - Joseph Kosinski

Much of the appeal of what makes up a blockbuster comes from the sequences which feel ‘larger than life’ and offer a spectacle. Whether that means large choreographed dance routines, car chases, bank heists or displays of superpowers.

Every scene like this requires a filmmaking solution beyond the realms of just shooting two actors talking. 

At one end we have practical or in-camera effects. This is where real-world filmmaking, engineering solutions and optical trickery are mixed - such as shooting miniatures or using forced perspective.

At the other end we have CGI, where computer software is used to manipulate and create those images.

Almost every big budget movie nowadays, including Top Gun: Maverick, uses a combination of both practical photography and computer-generated imagery. However some films, like Maverick, prioritise in-camera effects in order to achieve shots with a greater tie to reality.

“You can’t fake the G-forces, you can’t fake the vibrations, you can’t fake what it looks like to be in one of these fighter jets. We wanted to capture every bit of that and shooting it for real allowed us to do that.” - Joseph Kosinski

Once director Joseph Kosinski and cinematographer Claudio Miranda had the shooting script in their hands they had to start making decisions about how they would translate the words on the page into awe inspiring aerial action set pieces.

Shooting aerial sequences is a large practical challenge. 

First, they broke the aerial shots that they needed into three types of shots: one, on the ground shots, two, air to air shots, and three, on board shots. 

1 - ON THE GROUND

To execute the many aerial sequences in the movie they turned to David Nowell, a camera operator and specialist aerial director of photography who had worked on the original Top Gun film. 

“If you analyse the first Top Gun about 75% of all the aerials we actually did from the mountain top because you can get stuff on a 1,000mm lens that you just can’t quite get when you’re filming air to air. And I brought that forward to Joe Kosinski, saying, ‘You have to do this on this movie. This is the difference it makes.’ And so, we did. We spent almost a week on the new Top Gun just on the mountain top getting all the different shots that they needed.” - David Nowell

Cinematographer Claudio Miranda selected the Sony Venice as the best camera for this shoot - for reasons we’ll get to later. This digital footage was warmed up a lot, given deep shadows and had artificial 35mm film grain added to it in the grade to give the footage a similar feeling to the original - with its warm, bronzed skin tones.

To further enhance the original Top Gun look, Miranda consulted with Jeffrey Kimball, the cinematographer on the 1986 film, who passed on information about the graduated filters that he shot with.

Grads, or graduated ND filters, have ND that is strongest at the top and falls off towards the bottom, either with a soft or a hard transition. Usually grads are used to shoot landscapes or skies. When the darker ND part of the filter is placed over the sky it produces a more dramatic, tinted look.

To capture all the angles these scenes needed, a massive camera package was used: six cameras for the on-board action, up to four mounted to the plane’s exterior at a time, another camera for the air-to-air shooting, and a few more for the ground-to-air unit.

Like the original they decided to shoot on spherical lenses and crop to a 2.39:1 aspect ratio. This was due to spherical lenses having better close focus abilities and being smaller in size than anamorphic lenses, which allowed them to be placed in tight plane interiors. 

To get shots of the planes from the ground, a camera unit was equipped with a Fujinon Premier 24-180mm and a 75-400mm zoom. They also carried two long Canon still lenses that were rehoused for cinema use: a 150-600mm zoom and a 1,000mm lens.

When this wasn’t long enough they used a doubler from IBE Optics. This 2x extender attaches to the back of the lens via a PL mount and doubles the focal length range. So a 75-400mm zoom effectively becomes a 150-800mm lens.
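The extender arithmetic is straightforward: the focal length scales by the extender’s factor, and the maximum aperture slows correspondingly - a 2x doubler costs two stops of light. A small sketch (the function name is mine):

```python
import math

def with_extender(focal_mm: float, factor: float = 2.0) -> tuple[float, float]:
    """Effective focal length and stops of light lost with a rear extender."""
    stops_lost = 2 * math.log2(factor)  # illumination falls with the square of the factor
    return focal_mm * factor, stops_lost

# A 2x doubler turns the 75-400mm zoom into an effective 150-800mm,
# at the cost of two stops of exposure.
print(with_extender(75))   # (150.0, 2.0)
print(with_extender(400))  # (800.0, 2.0)
```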

Tracking fast moving objects so far away is very difficult, so the operators ended up using modified rifle scopes mounted on top of the camera to help them sight the planes. 

The on-the-ground scenes captured an F-14 Tomcat, which was re-skinned - made to look like an F-18 - with digital effects. This is a great example of the kind of intersection between practical photography and digital effects that I talked about earlier.

2 - AIR TO AIR

Although very useful, cameras on the ground can only pan and tilt - they are unable to physically move with the aircraft. For dynamic, in-the-air motion and a raised point of view, the camera team shot air-to-air footage.

This required shooting with a CineJet - an agile Aero L-39 Albatros jet that has a Shotover F1 stabilised head custom built onto the nose, which houses the camera.

The camera can be operated while an experienced pilot adjusts the jet’s position relative to the planes being filmed.

Since the Shotover is primarily designed to be used from a slower moving helicopter, and on Maverick they were shooting a fast moving Boeing F/A-18F Super Hornet fighter jet, they needed to come up with a technical solution.

“The one big change for Top Gun is that the Shotover systems that we’ve used for years…was never fast enough to go any faster than what a helicopter would do. But then Shotover…they updated the motors that would take the high torque needed to pan and tilt while flying 350 knots, that’s close to 400 miles per hour.”  - David Nowell   

For certain sequences that required a shot looking back on the aircraft, they used an Embraer Phenom 300 camera jet that had both front and back mounted Shotovers.

The Venice that was mounted on the Shotover was paired with a Fujinon zoom, either a 20-120mm or an 85-300mm. Some helicopter work was also done with the larger Shotover K1, which had an extended case that could house Fujinon’s larger 25-300mm zoom.

3 - ON BOARD

Arguably the most engaging and jaw dropping footage in the film comes from the cameras that are hard mounted onto the plane itself.

There are two ways that this kind of footage can be shot. The most common technique involves placing actors in a stationary car, spaceship, plane or whatever kind of moving vehicle it is, on a studio soundstage. 

Outside the windows of said vehicle the technical crew will place a large bluescreen, greenscreen or nowadays, a section of LED wall. The actors then pretend the vehicle is moving, do their scene and the crew may give the vehicle a shake to simulate movement. 

In post production this bluescreen outside the windows is replaced with either footage of the background space they want to put the vehicle in, such as highway footage, or with an artificial, computer generated background.

The two main reasons for shooting this way are that, one, it is usually cheaper and, two, it offers a far greater degree of control. For example, it allows the actors to easily repeat the scene, the director can monitor their live performances and talk to them between takes, the sound person can get clean dialogue and the DP can carefully light the scene so that it is exposed to their liking.

Instead of taking this more conventional approach, Top Gun’s creative team made the radical decision to shoot this footage practically - in real life. 

To prepare, the actors underwent three months of training, designed by Tom Cruise, so that they could withstand the extreme forces that would play out on them during filming.

Along with the difficulties involved in the actors giving complex emotional performances while flying at extremely high speeds, rigging the on board cameras to capture these performances was no easy feat.

The main reason that Miranda went with the Sony Venice was its Rialto system. This effectively allows the camera to be broken in two: one small sliver containing the sensor and the lens, and another containing the rest of the camera body and the required battery power. The two units are tethered by a cable.

1st AC Dan Ming, along with a team of engineers, came up with a plan to mount six cameras inside the F18.

They custom machined plates that could be screwed into the plane that the cameras were mounted to. Three Venice bodies and a fourth Venice sensor block were mounted in front of the actors in the back seat of the jet. These were tethered to a body block and battery rack that they placed near the front seat where the real pilot was. 

Two additional sensor blocks were also rigged on either side of the actor to get over the shoulder shots. Again, they were tethered to body blocks at the front of the plane.

As I mentioned, fitting that many cameras into such a tight space meant that the lenses needed to be spherical, have good close focus and be as low profile as possible. Miranda went with a combination of compact 10-15mm Voigtländer Heliar wide-angle primes and Zeiss Loxia primes.

Earlier I mentioned that this method of hard mounting the cameras came with a lack of control. This is perhaps best seen by the fact that once the plane took off, not only were the actors responsible for their own performances but they even had to trigger the camera to roll and stop when they were up in the air.  

“Ultimately when they’re up there it’s up to them to turn the camera on and play the scene. I mean, the biggest challenge is not being there to give feedback. So you’re putting a lot of responsibility and trust in our cast. So, that was a unique way of directing the film for those particular scenes but it’s the only way to capture what we were able to get.” - Joseph Kosinski

Filming in this way meant that they’d do a run, come back and sometimes find out that parts of the footage weren’t usable because of the lighting, the actor’s eyeline being in the wrong place, or even because an actor didn’t properly trigger the camera to record.

However, the footage that did work looked incredible and gave a feeling of being in a real cockpit - complete with all the vibrations, natural variations in light, and realistic adrenaline-filled performances from the actors. These images wouldn’t have been the same had they shot these scenes in a studio.

Four cameras were also hard mounted directly onto the exterior of the jet. Again they used the Rialto system with wide angle Voigtländer primes. Another advantage of using the Venice is that it has a wide selection of internal ND filters. 

This meant that they didn’t need to attach a mattebox with external NDs to decrease the exposure, which would have made the camera’s profile too big for the interior shots and would probably have been impossible to do safely on the exterior cameras due to the jet’s extremely high speeds.

CONCLUSION

Top Gun: Maverick brings us back to an era of filmmaking where real effects are used to tell stories and the CGI that is used is done subtly and largely goes unnoticed by the audience. 

For years now, by and large, I’ve been left unmoved by most action in films. The overabundance of CGI effects triggers something in my brain that tells me that what I’m watching isn’t real, which makes the action feel less exciting.

By putting us in an environment where each and every manoeuvre is physical, real and visceral, the film makes the stakes feel real. This leads to a genuine emotional connection and immersion in the story.

There’s a reason why you often hear some auteurs sing the praises of in-camera effects and disparage the overuse of CGI. Maverick uses the best of both worlds. The crew executed most of the action with bold, practical photography, which was safe and innovative. 

Subtle digital effects were then brought in later when necessary to make up for those shots which were practically impossible. 

I can only hope that Hollywood executives take this away as one of the reasons for the film’s financial success and encourage these kinds of filmmaking decisions going forward. There’s always a time and a place for great VFX in cinema, but sometimes shooting things practically is the best way to go.


Cinematography Style: Matthew Libatique

In this edition of Cinematography Style, we’ll unpack Matthew Libatique’s cinematography.

INTRODUCTION

“There’s an abundance of ways to shoot a film. In this world because we have so many people who are trying to make films, being original is also really difficult. You really have to go with a kind of abandon when you’re trying to create something special.”

From gritty, low budget movies to the biggest, blockbuster superhero flicks in the world, Matthew Libatique’s cinematography has covered them all. Directors are drawn to his appetite for creative risk taking, his bold, subjective, in your face close ups combined with his deep experience and on set knowledge that has covered a broad array of technical shooting setups.

In this edition of Cinematography Style, we’ll unpack Matthew Libatique’s photography by unveiling some of his philosophical thoughts and insights about working as a DP, as well as breaking down some of the gear and technical setups he has used to achieve his vision. 


BACKGROUND

“I started becoming interested in films because of the camera. In undergraduate school I saw Do The Right Thing. It was like a mind explosion of possibility. It was the first time I ever saw a film that made it feel like it was possible for a person like me to make films.”

Even though Libatique majored in sociology and communication during his undergraduate studies, he was still strongly drawn to the camera. This led him to enrol in an MFA in cinematography at the AFI. It was there that he met a director who would become one of his most important and long-running collaborators: Darren Aronofsky.

He shot Aronofsky’s early short film Protozoa, and when Aronofsky came to make his debut feature, Pi, Libatique got the call to shoot it.

“The director gives you a motivation, an idea, a concept. And then you can build off of that. And the more they give you the more you can do.”

After the success of Aronofsky’s early films, Libatique began working as a feature film DP with other A-list directors, like Spike Lee, Jon Favreau and Bradley Cooper.

PHILOSOPHY

“When I was becoming interested in filmmaking in undergrad I didn’t study film. It was in sociology and communications. The one camera they had was an Arri S and it had a variable speed motor on it. The variable speed motor was set to reverse. So when I got the footage back I had double exposed everything. And I looked at it and it was a complete and utter f— up on my part. But then I was sort of inspired by the mistake. I always look back on that moment and I’ve kinda made a career on those mistakes working out.”

I’d say that Libatique’s appetite for visual risk taking, which includes embracing what may be seen as ‘mistakes’ or ‘happy accidents’, is a large part of what informs his photography.

What I mean by visual risk taking is that the films he shoots often carry a visual language which doesn’t conform to mainstream Hollywood cinematic conventions - such as steady, flowing camera moves, neutral colour palettes and restrained contrast levels with highlights that don’t blow out and turn to pure white.

At times, his camera movement and lighting deliberately distorts and challenges what is seen as a perfect, clean image, by finding beauty in imperfections.

For example, his debut film Pi was shot on highly sensitive black and white reversal film, which has a far more limited exposure latitude than traditional colour negative film. What this means visually is that there is a tiny range between the brightest and darkest parts of the image: areas of overexposure are quick to blow out, while shadowy areas of underexposure are quick to get crushed to pure black.

This resulted in an extremely high contrast black and white film, the complete opposite of Hollywood’s more traditionally accepted colour images that have gently graduated areas of light and shadow.

Another example of visual risk taking is using body mount rigs on Aronofsky movies like Requiem For A Dream where he strapped the camera directly onto actors for a highly subjective, actor focused point of view.

Even in his recent, big budget work on a superhero movie like Venom, he often directed light straight into anamorphic lenses, deliberately producing excessive horizontal flares that dirtied up the image.

Often these stylistic ideas will come from the director, especially when working with one who is more hands on about the cinematography, like Aronofsky. But other times, visual ideas evolve from a combination of discussions and real world tests prior to shooting.

When prepping for A Star Is Born, Libatique brought a camera into director and actor Bradley Cooper’s house while he was working on the music to shoot some camera tests with him. A lot of ideas came out of this test that informed the language of the film. This included a red light that Bradley Cooper had in his kitchen, which inspired the use of magenta stage lighting for many of the performances in the film.

A final stylistic philosophy which I’d attribute to Libatique is his continual awareness of the point of view of the camera and whether the placement of the camera takes on a subjective or an objective perspective. 

In many of his films, particularly in his work with Aronofsky, he’s known for using a highly subjective camera that is one with the subject or character of the film. He does this by shooting them almost front on in big close ups that are tight and isolate the character in the frame. 

This is also paired with a handheld camera that he operates himself. By shooting with the characters in a reactive way, as if he’s tethered to them, he makes the shots more character focused and subjective.

This isn’t to say that he always does this. Some other stories he’s shot in a wider, more detached, objective style. But whatever the movie he’s always acutely aware of where he places the camera and the effect that it has on the audience.


GEAR

Earlier I mentioned that he shot Pi on black and white reversal film, 16mm Eastman Tri-X 200 and Plus-X 50 to be precise. Unlike modern digital cinema cameras that have something like 17 stops of dynamic range, this reversal film that he shot on only had about 3 stops of range between the darkest shadows and brightest highlights.

This required his metering of exposure to be very precise. If he let the highlights get 4 stops brighter than the shadows, they would blow out to white and lose all information. One way he kept his exposure this precise was with reflective metering.
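To put those stop counts in perspective, each stop represents a doubling of light, so the contrast a format can hold grows as a power of two (a standard exposure relationship, applied here to the 3 and 17 stop figures above):

```latex
\text{contrast ratio} = 2^{\text{stops}}
\qquad 2^{3} = 8\!:\!1 \;\text{(reversal film)}
\qquad 2^{17} = 131{,}072\!:\!1 \;\text{(modern digital)}
```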

“The thing that has really stuck with me throughout my career is the impact of reflective lighting.”

There are two approaches to metering or judging how much light there is. One is called incident metering. This is where the light meter is placed directly between the source of the light and the subject - such as in front of an actor’s face - facing the light, to determine how much light is directly hitting them.

Another way to meter light - which Libatique uses - is reflective metering. Instead of facing the meter towards the light, he faces it towards the subject. This way the light meter measures the amount of light that hits the subject and bounces back - hence reflective metering.

“I’ve been using a reflective meter my entire career until this digital revolution. And even so I use a waveform that gives me a reflective reading of somebody’s skin tone because that’s the only way that I know how to expose anything.”

He mixes up his choice of format, camera and lenses a lot depending on the story and practical needs. For example, he shot some of Aronofsky’s work in 16mm, using Fuji Eterna film on some projects and Kodak Vision stock on others.

Much of the rest of his work prior to digital cinematography taking over was shot on 35mm - again alternating between Fujifilm and Kodak stocks for different projects. 

Since digital has taken over he mainly uses different versions of the Arri Alexa - especially the Alexa Mini - but does occasionally use Red cameras.

He even famously used a Canon 7D DSLR with a 24mm L series lens to shoot the subway scenes in Black Swan, which he shot at 1,600 ISO at a deep stop of T8 ½. He did it in a documentary style, even pulling his own focus on the barrel of the lens. His colourist Charlie Hertzfeld later manipulated the 7D footage, especially the highlights, until it could be cut with the rest of the grainy 16mm footage.

His selection of lenses is as varied as his selection of cameras. He switches between using spherical and anamorphic lenses. Some examples of lenses he’s used include Panavision Ultra Speeds, Cooke Anamorphics, Zeiss Ultra Primes, Panavision Primos and Cooke S4s. 

On A Star Is Born, he carried two different anamorphic lens sets - the more modern, cleaner Cooke Anamorphics, and the super vintage Kowas - and switched between them depending on the feeling he wanted.

He used the Kowas, with their excessive flaring, hazing and optical imperfections, for more subjective close up moments on stage. Then for the more objective off-stage work he switched to the cleaner Cookes.

Overall most of the lighting in his films does tend to gravitate more towards the naturalistic side. But, within that, he introduces subtle changes depending on the nature and tone of the story.

For the more comedic Ruby Sparks a lot of his lighting, although naturalistic, was very soft and diffused on the actors’ faces. Straight Outta Compton, which tips a bit more into a tense, dramatic tone, had harder shadows, less diffusion and an overall lower exposure while still feeling naturalistic.

So while his lighting is always motivated by reality, the texture, quality, direction and colour of it changes depending on how he wants the image to feel.

Since the rise of LED lighting, he often uses fixtures like LiteGear LiteMats, Astera Tubes and of course Arri SkyPanels. When he can, he likes running them through a control board so that he can precisely set levels and sometimes even make subtle changes as the camera roams around a space.

Although he has used every kind of rig to move the camera, from a MOVI to a Steadicam to a Dolly, he is partial to operating the camera handheld on the shoulder. I think in some contexts this can be seen as one of those creative risks that we talked about earlier. 

For example, even on the large budget, traditional blockbuster - Iron Man - which you would expect to only have perfectly smooth dolly, crane and Steadicam motion - he threw the camera on his shoulder and gave us some of those on the ground, handheld close ups which he does so well.          


CONCLUSION

Although he uses a lot of different tools to capture his images, he doesn’t do so haphazardly. Being a good cinematographer is more than just knowing every piece of gear available. It’s about knowing how you can use that gear to produce a tangible effect.

Sometimes that effect should be a bit more subtle, but certain stories call for boldness.

His images may take large creative risks that go against conventions and expectations, but those risks are only taken when they are justified by the story. 


5 Reasons Why Zoom Lenses Are Better Than Primes

In this video let’s flip things in favour of our variable focal length friends by unpacking five reasons why zoom lenses are better than primes.

INTRODUCTION

As we saw in a previous video, there are many reasons why in some photographic situations prime lenses are a better choice than zoom lenses. The fixed focal length or magnification of prime lenses provide: a more considered perspective when choosing a focal length, better overall optical quality, a larger aperture, a smaller size and better close focusing capabilities.

In this video let’s flip things in favour of our variable focal length friends by unpacking five reasons why zoom lenses are better than primes.  

 

1 - ZOOM MOVEMENT

The first reason for choosing a zoom lens goes without saying - it allows you to do a zoom move within a shot. Most shots in cinema and other kinds of film content are shot with a fixed level of magnification and do not zoom during the shot.

I think this is in part due to the traditional language of cinema, built by a legacy of older films which were photographed with prime lenses before usable cinema zooms were widely manufactured, when primes were the de facto choice.

However, during the 1970s and 1980s using in-camera zooms to push into a shot or pull out wider without moving the camera gained more popularity amongst filmmakers. 

There are many stylistic motivations behind using zoom movement. It can be used to slowly pull out and reveal more information in a shot until we see the full scope of the landscape. It can be used as a slightly kitsch, crash zoom - where the camera rapidly punches in to reveal a character, to emphasise a certain line, or land a comedic punchline.

Because of their flexibility and ease of use, which we’ll come to later, zooms have also been widely used when shooting documentaries - particularly fly on the wall type doccies. In some films this type of zoom movement is extrapolated from these documentary conventions in order to lend a visual style of realism associated with the documentary look, or even to mock this look for comedic emphasis.

The list of reasons to zoom within a shot goes on, and each has a different stylistic or emotional impact depending on the context in which it is used. It should be noted though that most filmmakers are careful not to overuse zooms, as they can easily become a bit tired, distracting and cliched, unless they form part of an overall considered visual style.

2 - PRECISE FRAMING

Of course pushing in with a lens requires a zoom, but what about those films that don’t use any in shot zooms but still decide to shoot on zoom lenses?

Another reason cinematographers may use a zoom is because they make it easy to precisely frame a shot. 

When you shoot with a prime lens’ fixed focal length on the camera and you want to change the width of the frame you need to physically move the position of the camera. This is easy when you are shooting handheld with a little mirrorless camera. 

But when you are using a hefty cinema rig, on a head and a set of legs that is so heavy that it requires a grip team each time you move and level the camera, using zooms becomes more appealing. 

With primes you may need to slightly reposition a frame by moving the camera forward six inches, realise this is too far forward, and then have to move the camera back three inches until that light stand is just out of frame. With a zoom lens you can just change the focal length from 50mm to 45mm without moving the camera or tripod.

A great example of this happens on most car shoots. I’ve worked as a camera assistant on loads of car commercials and about 99% of the time when using a Russian Arm to shoot moving vehicles, DPs choose a zoom lens over a prime lens. 

It’s far easier and more practical to use a wireless motor to adjust the zoom on the barrel of the lens to find the correct frame from inside the Russian Arm vehicle, than it is to get the driver of the vehicle to keep repositioning the car a couple of metres on every run until the frame is perfect.

When using wider primes it is also relatively easy to find the correct position for the camera before setting it down: either with a pentafinder, a viewfinder app, or just based on the experience of the DP. But when you use longer focal lengths, like a 135mm or 180mm prime, the lens compression makes it far more difficult to find the correct frame without physically moving the camera.

There are also fewer prime focal length options at the longer end - which we’ll talk about later. Therefore for telephoto shots, zooms are regularly used for their ability to punch in or out until the correct frame is found.

3 - FASTER SETUPS

With increased precision in framing shots comes faster setup time. On a film set, time very much equals money. The quicker you can set up and shoot each shot, the less you have to pay in crew overtime, gear rental days and location fees.

When you’re working on a tight budget without the possibility of extended overtime or extra pickup days, taking longer to set up shots means the director can film fewer takes, with less time to craft the performances of actors or set up choreographed action.

Using zoom lenses speeds up production in a few ways. For one, if you shoot everything with a single zoom lens, it means less time spent changing lenses, swapping out matte boxes and recalibrating focus motors.

As we mentioned previously, it also means that grip teams don’t need to reposition heavy, time-consuming rigs, like laying dolly track. If the track was laid a little too far forward, the operator can just zoom a little wider to find the frame, rather than starting from square one and re-laying the track.

Another practical example is when using a crane or a Technocrane. If you use a 35mm prime lens on the camera, balance it on a remote head, perfectly position the arm and then realise that the lens is not wide enough and you need a 24mm focal length instead, the grip team needs to bring down the arm, the camera team needs to switch out the lens, the Libra head technician needs to rebalance the head with the weight of the new lens, and finally the grip team then brings the crane back into the correct position. All this could take 10 minutes or more. 

If instead the DP used a zoom lens with a wireless focus motor on the zoom, this change would take less than 10 seconds.

10 minutes may not sound like a lot, but if this keeps happening throughout the day it can quickly add up to an hour or two of wasted shooting time - which is expensive, means less footage will be shot, and therefore gives the director fewer precious takes to work with in the edit.


4 - FOCAL LENGTH OPTIONS

A prime lens set usually covers a fair amount of different focal lengths on the wide end, but, when it comes to telephoto options beyond about 100mm their selection is usually very limited.

For this reason, DPs who like shooting with long focal lengths that compress the backgrounds of shots often hire a zoom. For example, the Arri Alura can punch in all the way to 250mm, while the longest focal length available in a set of modern cinema primes, such as the Arri Master Primes, is 150mm.

So for cinematographers who want to use long, telephoto lenses, zooms are usually a better option. 

Many zooms also offer a greater overall range of focal lengths. For example, an Angenieux 12:1 zoom covers everything from a wide 24mm field of view all the way up to a compressed 290mm one.
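That 12:1 figure is simply the ratio of the longest focal length to the shortest:

```latex
\text{zoom ratio} = \frac{f_{\text{long}}}{f_{\text{short}}}
= \frac{290\,\mathrm{mm}}{24\,\mathrm{mm}} \approx 12\!:\!1
```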

For shoots that are in remote areas or in locations which cannot be accessed by a camera truck, carrying around a full set of spherical primes in three or four different lens cases is far more logistically challenging for the crew than just putting a single zoom lens on the camera and walking it in to set. 

This makes zooms far more flexible and practical when compared to primes, especially sets of older vintage prime lenses, such as the Zeiss Super Speeds that only come in 6 focal length options from 18mm to 85mm.    


5 - BUDGET

The final reason may seem a little counterintuitive because when you compare the price of a single prime lens with that of a single zoom lens, the zoom lens will almost always be more expensive. 

However, prime lenses are almost never bought or rented as individual units. They come in sets: such as a set of 6 lenses, or a set of 16 lenses. 

When the rental price or buying price of a full set of primes is tallied up it is almost always more than that of a comparable, single zoom lens that covers the same amount of focal lengths.

Therefore, when the budget of a shoot is a bit tight, it may come down to either pleading with the rental house to break up a lens set into a very small selection of two or three primes that cover a limited range of focal lengths, or hiring a single zoom that you can use to cover every field of view that is required for the shoot.

In this regard, a zoom lens is far more realistic and practical.    


How We Made A New Yorker Short Documentary: With Jessie Zinn

Highlights from my chat with director Jessie Zinn about the film Drummies. We discuss the process of making a short documentary - from coming up with the initial concept, hiring gear, cutting it together, to finally distributing the finished film.

INTRODUCTION

“That gimbal was terrible. And it was so heavy.” “Didn’t it overheat a couple of times? Cause it was also in the middle of summer and we were shooting in the northern suburbs where it gets up to like 40 degrees which is like in Fahrenheit in the 100s. And we were, like, sweating, and the gimbal was making a noise.”

If you’re watching this you may know me as the disembodied voice behind this channel, who edits these video essays made up of diagrams, shoddy photoshop work and footage from contemporary cinema. But what you may not know is that I also work as a cinematographer in the real world. 

So, I thought I’d mix up the format a bit and chat to a real world director, Jessie Zinn, who I’ve shot some films for. One of those films that we shot during the height of the COVID lockdown, a short documentary called Drummies, was recently acquired by the New Yorker and is now available to view for free on their site - which I’ll link for you to check out.

Our chat will break down the process of making that short documentary - from coming up with the initial concept, hiring gear, cutting it together, to finally selling the finished film. 

Also, if you want to see the full, extended podcast with Jessie, and also donate to keeping the channel going, it’ll be available on the Master tier on Patreon.

CONCEPT

Before the cameras and the lights and the calling of action, every film begins as an idea.

“Ideas come through very unorthodox channels and different backends and ways of finding out about subjects and topics. I think it’s definitely worth noting that I always make films or I’m always interested in making films for me. And that’s not to say that I don’t have an audience in mind because of course I’m always very aware of who this is for ultimately; who is going to watch it. But I always approach a subject and have interest in making a film based on a subject that I’m just personally really, really interested in or feel a sense of passion towards in some sort of capacity.”

In the case of this film it was born out of seeing a photographic series by Alice Mann on the drummies or drum majorettes of Cape Town.

“Drummies is about a team of drum majorettes in Cape Town. It’s sort of like an intersection between cheerleading and marching band processions. It’s had this really interesting political history in South Africa because it was also one of the first racially integrated sports in South Africa during apartheid. And post-apartheid it’s become almost this underground cult world amongst young girls in schools. In particular in public schools. It’s both a sport where it brings a sense of community and family to them but also provides potential possibilities for upward social mobility.”

PRE-PRODUCTION

With a concept in place, Jessie then went about identifying and getting access to the potential characters that would be in the film and who we would focus on during the shooting process. 

“So I actually cast for Drummies and I did that remotely because I was still in the States. So I asked their coach to send through Whatsapp videos of the girls - basically auditioning.”

From there she cut down her ‘cast’ to four or five characters who we would do the majority of our filming with. In the final edit this was later cut down to three characters.

Before bringing in any cameras, she did audio only interviews of her cast using a Zoom recorder and a lapel mic.

“People have their guard up when you first meet them. But in others, often with children, the first encounter is often sort of the least filtered. And so, I knew beforehand that there would probably be some audio soundbites and material that I could gather from those pre-interviews which maybe wouldn’t have been possible with having a whole camera setup around. Actually in the final film some of that audio and voice over is from those pre-interviews because some of it was such great material.”   

When Jessie contacted me to shoot the project she passed on snippets of this material to me, so that when we met to chat about the film I already had an idea of the kind of characters that we would be photographing. 

She also put together a script and a treatment which I could read through. As a cinematographer, it’s always fantastic to get this kind of thorough vision early on, as it facilitates the discussions we have about finding a look, or overarching style, for the film.

“It’s always deliberately decided beforehand, at least with the short docs that we work on. You know, I’ll sit down with you often at a coffee shop somewhere and I’ll be like, ‘Here’s a couple of reference films and reference images.’ Then you’ll look at them and then you’ll say, ‘OK. I think this is what we can do based on these references and based on the real people.’”

In the real world, our characters were sitting around during this hot, summer vacation, unable to do much because of the covid restrictions. This led to us discussing the idea that the footage should feel dreamy, as if they were suspended in time - which is also a line that came from one of the interviews.  

To visually represent this feeling we decided to shoot a lot of the non-dialogue scenes in slow motion with a heavy, worn, often malfunctioning Ronin gimbal that we managed to borrow from another Cape Town documentary filmmaker.

“This was something that we discussed beforehand. That we wanted there to be a dreamy aesthetic and in terms of the actual movement using a handheld, rough aesthetic wouldn’t have achieved that. You definitely don’t want to limit the dreams that you have in terms of aesthetics for your film but you also do need to be very practical about it and I think that’s what we often get right. We sit down and say, ‘these are the things we would like’ and then ‘this is the version of these things that is actually achievable.’”

PRODUCTION

“It’s all about prep - literally. You know you’ve done your job well when you get onto set you can stand back and do very little. If you’re having to do a lot on set then you know you haven’t done a great job - basically.” 

With all of Jessie’s prep, creative vision and our discussions about the film’s look coming together, I then, as the cinematographer, needed to come up with a list of gear that we’d use to bring these ideas to the screen.

When it comes to this, one of the biggest limitations is dictated by budget. Doccies are generally made with pretty limited funding, much of which is saved for post production finishing of the film. So for these kinds of projects I usually put together a gear list with two options: one, the kind of best case scenario with a full selection of the gear I’d like to rent, and two, a more stripped down list which is a bit lower cost.  

A little bit of back and forth with the gear house may ensue until we come up with the best gear package that meets our budget.

For this film it meant shooting on a Sony FS7 with my four Nikon AIS prime lenses - a 24mm, 35mm, 50mm and 85mm - although most of the film was shot on the two wider lenses. To add to the dreamy look that Jessie and I talked about, I shot everything with a ¼ Black Pro Mist filter, which gave the image a nice, diffused feeling.

As I mentioned we managed to get a free Ronin for the shoot - which was the old, original Ronin that was a bit tired but we managed to make it work. I also used a Sachtler Ace tripod, which some may see as being too lightweight for a camera like the FS7.

But, it’s small, easy to carry and makes finding a frame and levelling it off that much quicker. And speed in documentaries is often more important than fractionally more buttery smooth pans and tilts off a larger tripod head. 

Although it hopefully does not look like it, every single interior shot in the film was lit with a combination of artificial film light sources and my most important tool: negative fill.

The lighting package I carried around consisted of a budget-friendly Dracast LED bi-colour fresnel, two 8x8 blackout textiles and a couple of C-stands to rig them with. Rigging the blackout to C-stands or hastily tying them to window frames is precisely how not to rig - but since I was working alone and had very limited time to set up each scene I had to make do.

When you’re working with a limited budget, the easiest and most cost effective way to control light isn’t by adding light, but by removing it. 

As an example let's break down the changing room scene in the film.

“With that scene which would lead into the final performance because there was this theme throughout the film of being able to achieve dreams but also being held back from achieving those dreams. Because of COVID they weren’t actually able to perform and compete in all of these games and so, the changing room is of course tethered to reality. They have to get changed before any sort of performance. But they also weren’t performing at the time because of COVID restrictions and so I thought that that sort of worked.”

For that reason, I wanted to push this sequence visually about as far as we could into a dreamy state, while still maintaining a link to the real world. We took this orange, yellow palette of the changing room that we were presented with, amped up the warmth in the lighting to the max and pumped a bunch of smoke into the room to create an extra layer of diffusion.

With the help of an assistant I blacked out all the windows and doors which were letting through sunlight - except for a single window behind the characters. Outside this window I put up our LED fresnel, warmed it all the way up and I think even added a ½ CTO gel so that the colour temperature of the light went from tungsten to orange. 

Again, doing most of the heavy lifting by removing light and then carefully placing a single backlight to create contrast and a more amplified visual world.

During the shoot, a technique which Jessie and myself often employed was to cordon off and light a specific space, almost like a set, then place the characters within that space and let them converse or act as they would naturally. That way you maintain naturalistic conversation and action but are able to also better sculpt the cinematography into the form which best suits the film. 

POST-PRODUCTION

Once production wrapped, Jessie went about editing the film herself.

“I also often edit my films and so that is a big part of crafting a documentary. You’re not finding the story in the edit but you’re definitely chipping away at the basic model that you’ve planned. So when I’m on set I’m also shooting or directing with the edit in mind.”

“Drummies was a good exercise in learning how to trust my instinct. Basically the very first assembly that I laid down on the timeline which I had to deliver to the programme I think it was like three days after we’d shot the film. Which is insane, again. And so it was like a fever dream of staying up into the night to get this assembly done. And after that obviously I did many different versions and different edits where things changed and the structure changed. But when I looked at the final film it was actually almost identical in terms of structure to the very first assembly that I’d put down, like months ago, which also was the same as the script and the treatment.”

So again we come back to this idea of prep and how having a refined vision for the film going into the shooting process, even in a more unpredictable medium like documentary, is so valuable for a director to have.

After the completion of the music composition, sound mix and the final grade - which was based on the reference of another vibrant, colourful and slightly dreamy film, The Florida Project - Jessie was left with a completed film. But what comes after you have the final product?

“A couple of years ago people would say that the be all and end all are film festivals and that determines the success of your film. But I don’t think that’s the case anymore because there are incredible online avenues for streaming services where you can put your film out there and it can get tons of views and potentially gain an audience that is much wider and larger than a film festival.”

“So, Drummies did go to a few festivals but it was valuable because that’s how the film got distribution in the end. Both POV and The New Yorker showed interest because they’d seen it at a festival called Aspen Shorts Fest and they both reached out to me by email and said we’re interested in seeing the film, we’d like to have a look. And both of them basically came to me with offers within a few days and so that was the first film where I’d received pretty standard almost classical distribution interest in the film as far as broadcasters and sort of news channels are concerned. Whereas my two previous films got Vimeo Staff Picks which is, I'd say, a little bit more unorthodox and more current than those avenues.”

“People often think that if your film doesn’t get into an A-list festival then it’s the end of your film. Which is just so not true. Because the festivals that Drummies played at are, I would say, probably B type festivals. And those festivals got way more distributors interested than some of the A list festivals that I’d heard about. And so, never underestimate the space that your film is screening in is a valuable lesson that I learnt.” 

Read More
Gray Kotze Gray Kotze

5 Reasons Why Prime Lenses Are Better Than Zooms

As is the case with all film gear, there are a number of reasons for and against using prime lenses versus selecting zoom lenses. In this video I’ll go over five reasons that make primes superior to zooms - and follow it up with a later video from the other side of the argument about why zoom lenses are better than primes.

INTRODUCTION

If you’re into cinematography, photography or capturing images at all you’re probably aware that there are two types of camera lenses - prime lenses and zoom lenses. 

Primes have a fixed focal length, which is measured in millimetres. This means that when you put the lens on a camera, the angle of view of what it sees - how wide it is, or how much the image is magnified - is fixed and cannot be changed.

Zooms have a range of variable focal lengths. So by moving the zoom ring on the barrel of a lens you can change how wide an image is - in some cases all the way from a very wide angle to a close up telephoto shot.
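As a rough illustration of how focal length maps to angle of view, here’s a small sketch. The trigonometry is standard, but the Super 35-style sensor width of roughly 24.9mm is my assumption for the example, not a figure from this article:

```python
import math

# Horizontal angle of view for a simple rectilinear lens:
# AOV = 2 * arctan(sensor_width / (2 * focal_length))
def angle_of_view(focal_length_mm, sensor_width_mm=24.9):
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_length_mm)))

# Longer focal lengths see a narrower slice of the world
for f in [24, 35, 50, 85]:
    print(f"{f}mm lens: {angle_of_view(f):.1f} degree angle of view")
```

Roughly speaking, doubling the focal length halves the angle of view, which is part of why prime sets tend to be spaced in near-doubling steps.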

As is the case with all film gear, there are a number of reasons for and against using prime lenses. In this video I’ll go over five reasons that make primes superior to zooms - and follow it up with a later video from the other side of the argument about why zoom lenses are better than primes. So if you like this content, consider hitting that subscribe button so that you can view the follow up video. Now, let’s get into it.

 

1 - CONSIDERED PERSPECTIVE

We can think of the width of a frame, or a focal length, as offering a perspective on the world. 

This close up, shot with a wide focal length, sees a lot of background and places us, the audience, within the same world as that of the character. This close up, shot with a longer focal length, isolates the character more from the background, blurs it, and compresses, rather than distorts the features of their face.

The great thing about a prime lens’ fixed focal length, is that it also fixes the perspective or feeling of an image. When you choose what prime to put on the camera you are therefore forced into making a decision about perspective. 

This isn’t to say that you can't do the same with a zoom, but when you work with a variable focal length lens it’s far easier to just plonk down the camera at a random distance from the subject and then zoom in or out until you get to the shot size that you want.

If you’re using a prime, you need to first decide on the focal length you want and are then forced to physically move the camera into the correct position. As they say in photography, it makes your legs become the zoom. This is especially useful as a teaching device for those learning about lens selection and camera placement. 

So, prime lenses force you to think more about the focal length that you choose, which may elevate the visual telling of the story by making it a deliberate decision, rather than an incidental one.

2 - OPTICAL QUALITY

The practical reasons behind choosing a lens are important, but so too is the look that the lens produces. Due to their design, prime lenses are considered to possess a higher quality optical look than most equivalent zooms. This is mainly because the construction of primes is much simpler and the design more straightforward than that of zooms.

Inside a lens you’ll find different pieces of curved glass. Light passes through this glass to produce an image. Because prime lenses only need to be built for a single focal length they can use fewer of these glass elements - and the glass elements inside the lens don’t have to move in order to zoom. 

Less glass means fewer air-to-glass surfaces to scatter and reflect light, which usually means sharper images. Also, prime lenses only need to be corrected for optical aberrations like distortion and chromatic aberration at a single focal length. Zooms need to do this across a range of focal lengths, which is trickier to do. 

Therefore, your average prime lens will be sharper with less distortion, or bending of the image, and more minimal colour fringing between dark and light areas. 

I should add as a caveat that modern, high-end cinema zooms are constructed to a degree of optical quality that is comparable to many prime lenses, but you pay a pretty penny for that level of cutting edge engineering. When you’re looking at zooms and primes in a comparable price range, primes usually have the winning optical edge.

3 - APERTURE

A lens’ aperture is the size of the adjustable round opening inside the lens that lets in light. A large opening, which has a lower T or F stop number, like T/1.3, means that more light is let in, while a smaller opening, with a stop such as T/2.8, lets in less light.
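The gap between two stop numbers can be worked out with a little arithmetic: the light a lens gathers scales with the inverse square of the stop number, and each full stop is a doubling of light. A small sketch of that rule, using the T/1.3 and T/2.8 values quoted here:

```python
import math

# Light gathered scales with the inverse square of the T-stop,
# and each full stop represents a doubling of light.
def stops_between(fast_stop, slow_stop):
    return 2 * math.log2(slow_stop / fast_stop)

def light_ratio(fast_stop, slow_stop):
    return (slow_stop / fast_stop) ** 2

print(f"T/1.3 vs T/2.8: {stops_between(1.3, 2.8):.1f} stops")
print(f"T/1.3 gathers {light_ratio(1.3, 2.8):.1f}x the light of T/2.8")
```

That works out to roughly 2.2 stops, or about 4.6 times as much light at T/1.3 - which is why every extra stop of speed commands such a premium.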

Once again, because of the extra glass and more complex design required to build zoom lenses, primes tend to have a faster stop. 

When it comes to cinema glass, each extra stop of light that a lens can let in is precious and demands a higher price tag. Shooting with a wide aperture comes with a few advantages. It means you can expose an image in dark, lower light conditions. It allows you to create more bokeh - the out of focus area that separates the subject from the background and is generally considered ‘cinematic’. 

This allows you to also be more deliberate about what is in and out of focus and is a way of guiding the audience's gaze to a certain part of the frame. So, for those cinematographers or photographers that want fast lenses, primes are the way to go.      

4 - SIZE

If you’re working in wide open spaces, with a dolly that holds a heavy cinema camera, then the size of the lens is less of a concern. But the reality is that more often than not that’s not the case and having a physically smaller lens attached to the camera makes things much easier.

By now we know that zooming requires extra glass and extra glass requires a larger housing. This means zooms are heavier, longer and wider than primes. 

Cinema rigs that need to be precariously balanced - like a drone, a gimbal or a Steadicam - are often unable to take longer cinema zooms because of their excess weight and length. Some of the bigger zooms are so large that they can’t be effectively operated as a handheld camera on the shoulder and are limited to only being viable for use on a dolly or a tripod.

So, if you need to work in a confined space, want to move the camera on a lightweight rig, or shoot with a handheld camera then a big zoom lens may not be for you.   

 

5 - FOCUS

The fifth and final reason not to use a zoom lens is because of its focusing limitations. Every lens has a range of distances that can be rendered in sharp focus and a range that will fall out of focus. 

Cinema and photo lenses can focus on objects far away up to infinity, but they are limited as to how near they can focus on an object. This limit is called close focus or MOD - minimum object distance. Generally, zooms have inferior close focusing capabilities. 

For example, the closest that an Angenieux 24-290mm zoom can focus when set to a 50mm focal length is 4’. A Zeiss Master Prime 50mm, on the other hand, can render objects in focus as close as 20” away. 

So, if you want to shoot a close up of a face or a detail, you can get much tighter with a prime lens than with a zoom lens before you lose focus.
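To put those two published close-focus figures in the same units (a trivial sketch using the 4’ and 20” MODs quoted above):

```python
# Convert the quoted minimum object distances to a common unit (inches)
def to_inches(feet=0, inches=0):
    return feet * 12 + inches

zoom_mod = to_inches(feet=4)      # Angenieux 24-290mm at 50mm: 4' = 48"
prime_mod = to_inches(inches=20)  # Zeiss Master Prime 50mm: 20"

print(f"The prime focuses {zoom_mod / prime_mod:.1f}x closer than the zoom")
```

So at the same 50mm focal length, the prime can get 2.4 times closer to the subject before it falls out of focus.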

Additionally, when it comes to many zooms built for photography, because the glass shifts when you zoom in, the focus also changes. So if a person is in focus at 70mm and you then zoom into them to 200mm without touching the focus ring, the shot may go out of focus and be soft.

Most cinema zooms are able to remedy this and maintain sharpness across the entire focal length range. However, for simpler, pin-sharp focus that doesn’t shift, and a superior MOD - primes beat zooms.

Read More
Gray Kotze Gray Kotze

The Most Popular Cinema Lenses (Part 5): Zeiss, Cooke, Panavision, JDC

In this episode, let's take a look at four more cinema lenses which are popular in the film industry.

INTRODUCTION

As a cinematographer, picking the right gear that you need to use on a shoot is, of course, extremely important. One of the most important bits of gear to put on your rental list are cinema lenses.

They are chosen on the basis of two factors: one, the kind of look that they produce and two, their practical and ergonomic advantages or disadvantages. The look and aesthetic feel for each project may be different and the kind of shooting environment and setups needed will also be different depending on the situation and story.  

So to better equip you, and I guess because I’m a bit of a lens nerd, I’m going to look at three more cinema lenses that are popularly used in the high end film industry, breaking down their look and their practical design. And, at the end of the video take a look at one lens contraption that’s a bit of an oddity but a pretty interesting piece of kit, so stick around.


JDC XTAL XPRESS

First up, we have the JDC Xtal Express lenses, an interesting set of vintage anamorphic primes made for 35mm.

It’s a bit tricky to verify, but based on what people have told me this set of lenses is mainly made up of spherical glass from vintage Cooke S2 and Cooke S3 elements, combined with Japanese anamorphic elements - hence they are sometimes called the Cooke Xtal Express. This glass was then packaged and housed as a lens by Joe Dunton Cameras - JDC. 

They were later bought out by Panavision, the major rental house, which now supplies them as part of its rental inventory. 

Although perhaps not as well known as some of the bigger producers of anamorphic glass, this set of lenses is still fairly popular amongst DPs - based on my anecdotal experience as a 2nd AC, particularly amongst European and UK cinematographers.

On the surface they are a bit of an odd looking bunch. Different focal lengths come in different lengths, different weights, with different sized front diameters and different stops. Although they are fairly well constructed, they were made a while ago and do need to be handled with care as they are not as robust as the housings on other modern lenses, such as the newer Cooke Anamorphics, which we’ll get to later. This may mean turning down the strength of the torque on a wireless focus motor and being more gentle when pulling focus.

Practically, their longer build, especially on longer focal lengths like the 100mm, may make them a bit front heavy on cameras and therefore a bit tricky to balance on a gimbal or Steadicam without adding extra weights to the body to balance it out. 

While they are long, they aren’t particularly heavy, so are still good lenses for applications such as handheld camera movement.

Their different front diameter sizes also means that camera assistants need to switch out the back of the mattebox to different sizes, or use custom made reduction rings depending on the lens.

But, what they lack in practicality they more than make up for with their interesting production of images. They produce what I’d call quite a heavily vintage anamorphic look. This means they have quite a bit of distortion at the edges of the frame, especially at wider focal lengths, which will lead to straight lines, such as door frames, bending.

They also have heavy falloff around the edges of the images. This means that characters will appear sharp in the centre of the frame but, when placed on the edges of an image may seem out of focus even though they are not, due to the lens’ anamorphic falloff and softness.

Wide open they have a beautiful, soft, dreamy look with oval bokeh and lovely flares - perfect for those with a taste for vintage anamorphics.     


COOKE ANAMORPHIC/i S35

To compare the Xtals to a more modern anamorphic option, let’s take a look at the Cooke Anamorphic/i lenses, which are made to cover a 35mm sensor. 

Whereas the Xtals use older Cooke S2 glass, the newer Anamorphic/i lenses are made with glass that has modern coatings. So while they are sharper across the frame, they still maintain some of those lovely anamorphic image characteristics when shot wide open, such as blooming, horizontal flares, oval bokeh and a more gradual anamorphic falloff around the edges. 

On a scale from soft, vintage glass, like the Xtals, to super sharp, super clean modern glass, like the Zeiss Master Anamorphics, I’d put the Anamorphic/i lenses somewhere in the middle.

They come in a large range of 10 focal lengths, which is a lot for a set of anamorphic primes, which typically have fewer focal lengths than spherical lens sets. This includes everything from a wide 28mm, to a macro 65mm for close up work without needing diopter filters, and a telephoto 300mm.

They cut together well and carry the same, consistent colour reproduction as other modern lenses in the Cooke spherical range, such as the S4s and the S5s. This has been dubbed ‘the Cooke look’ - which refers to their overall warmth, high contrast, natural colour reproduction and their sharp but smooth, pleasing rendering of details.

This look is paired with the modern construction of a super solid, durable lens housing with accurate, well spaced distance and iris markings and a smooth focus action when you turn the focus gear. This, along with their more standardised sizes across the range, makes them a pleasure for camera assistants to work with. 

A final useful feature comes from the i part of their name. This refers to a contact point situated on the mount of the lens that connects to a contact point on the PL mount of most high end cameras. It’s used to send metadata from the lens - such as its focal length, focus distance and stop - to the camera, where it is recorded for each frame. This is useful both because it negates the need for focus pullers to programme distance measurements into some wireless follow focus systems, and because it provides information that is valuable for special effects work in post production. 

ARRI / ZEISS ULTRA PRIMES

Next, let’s turn to spherical lenses and take a look at a very popular product in the Zeiss lineup - the Ultra Primes. If we look at the timeline of cinema lens releases from Zeiss, the Ultra Primes were developed after the Super Speeds and before the Master Primes.

This is fitting as I think both technically and in terms of their look they also occupy the middle ground between those two spherical prime lens sets. The Super Speeds have a subtle vintage quality to them, the Master Primes are supremely sharp, clean and modern, and the look of the Ultra Primes sits somewhere in between them.

Their look is very similar to that of Master Primes and the two sets can easily be cut together. Their colour reproduction is accurate but neutral, or slightly cooler than something like a Cooke, they are sharp and flare ever so slightly more than Master Primes - which flare very little. The choice of whether to shoot on Master Primes or Ultra Primes often comes down to their technical designs.

In terms of their build they are lightweight and compact - a little bit chunkier than the Super Speeds, but smaller than the Master Primes. With a stop of T/1.9 they are very fast, produce considerable circular bokeh and are good for low light conditions, though they are slightly slower than the T/1.3 Super Speeds and Master Primes.

They come in 15 different focal lengths, ranging all the way from an 8mm extreme wide angle to a 180mm telephoto. This means they have the widest focal range coverage of pretty much any prime lens set available. This gives them an edge over the Master Primes for cinematographers who are very specific about their focal lengths and like to cover every in-between prime, such as having a 28mm focal length option between a 24mm and a 32mm. 

Like the relatively modern Cooke lenses we discussed, the Ultra Primes also offer a high quality, durable build with good markings that make them quick and easy to work with.  

PANAVISION FRAZIER LENS SYSTEM

Finally let’s take a look at something a bit more unusual. This quote unquote ‘lens’ might not be strictly popular, but I guess you could say it’s popular for achieving a specific application. It’s not even a lens per se, but rather a lens system that can be attached to a camera for specialty macro shots where the lens needs to squeeze into a space and be incredibly close to the subject, sometimes at unusual angles.

Often referred to as a snorkel or a periscope, the Frazier Lens System is attached to a cinema camera via a PL mount. At the end of its long extended barrel filled with glass and mirrors, there’s a second lens mount which is designed to take the capture lens - which is usually a small, manual iris, stills lens - such as a Nikon AIS.

So why would you shoot with this contraption? It has a swivel tip that allows you to tilt, pan or rotate what the camera sees without having to rotate the camera itself. Because of its thin design it also means a lens can be squeezed into very tight spaces close to what it is photographing. This makes it a popular way to capture and move the camera around miniature sets.

A drawback is that it has an aperture of T/7.1. This means that a lot of light is needed to achieve exposure. Because of its build, however, it is easier to light macro shots without the camera blocking the light source. Since you usually want to shoot extreme macro shots at as deep a stop as possible to make it easier to focus, this T/7.1 aperture isn’t actually a big deal.

They have an iris and a focus gear on the barrel near where the PL mount is so that assistants are able to manipulate focus - while the focus on the taking lens is set at a distance and left there.

To better explain a real world application here’s a BTS photo I snapped on a commercial shoot I was working on as a 2nd AC. This system was used to get the lens within about an inch of our subject's eye to achieve an extreme macro shot and still have space to get in studio light. We used the Frazier system on an Alexa Mini with a Nikon AIS taking lens.

Read More
Gray Kotze Gray Kotze

What We Don't See In Movies Is More Important Than What We Do

Since cinema is not a 360 degree medium - well, not yet anyway - and our perspective is not omnipresent, the choice of what is included and what is excluded from the screen is a choice that is left up to the filmmakers.

Let’s take a look at some examples of off screen action from well known films to uncover why this is done and why sometimes what we don’t see in movies is more important than what we do.

INTRODUCTION

What is it about this scene that makes it feel scary? It takes place in the middle of the day without any frightening, dark lighting. It’s set in a suburban kitchen, not a haunted house. Kitchen chairs aren’t all that terrifying. 

What makes it scary isn’t because of what we do see, it’s because of what we don’t see.

Since cinema is not a 360 degree medium, well, not yet anyway, and our perspective is not omnipresent, the choice of what is included and what is excluded from the screen is a choice that is left up to the filmmakers. 

It’s their job to create the perspective that the audience is presented with. So, let’s take a look at some examples of off screen action from well known films to uncover why this is done and why sometimes what we don’t see in movies is more important than what we do.

WRITING

The biggest decision that determines what we see in movies is made during the process of writing the script.

Just as the screenwriter wields control over what scenes they choose to include, they also control by extension what is not seen. Like in life, the world of fictional stories has an extensive timeline. It’s the job of the screenwriter to pick the moments from this imaginary timeline that best add to the telling of the story.  

Quentin Tarantino’s first film, Reservoir Dogs is a classic example of not allowing us to see the full story timeline. 

We open on a scene at a restaurant with a group of men conversing. The scene ends with them stepping out to perform a diamond heist. Cut to credits. Instead of the next scene opening on them performing the robbery, which would happen chronologically, we cut to chaos. One of the characters is critically injured. The robbery already happened.

Skipping over what would usually be the most important set piece in a heist movie is of course a very deliberate writing choice. Tarantino chooses not to show us that part of the story timeline. This sets up the tension and the key question in the entire film - what happened during the heist and why did it go wrong?  

The cliched mistake that first time filmmakers make in their debut short film is not understanding that what you don’t show is as important as what you do. They’ll open on a character waking up, showering, getting in the car, driving to a restaurant and then having a conversation. 

Something writers should always ask themselves is ‘what does this add’ and ‘is there a better way to present this information to the audience’? Does a character waking up add some crucial characterisation? Does it add to the core story or any themes? 

Or, can we characterise them in other scenes, in other ways - by combining multiple facets of information into scenes and suggesting events rather than showing them?   

By withholding extraneous scenes, filmmakers can actually be more deliberate about how they choose to deliver a story on screen. 

SOUND

While the foundation of the story can be altered by what is left out of the script, the filmmaking decisions of what to exclude in scenes - which are made during the process of filmmaking or in the edit - can also shape how we emotionally interpret them.

Off screen sound can be used to supplement off screen images.

Rather than seeing the body as it drops down the well with a loud, revealing ‘clang’, we cut to reaction shots of the characters and empty shots of the location. Without needing to see the action, we can hear it and therefore tell how deep the well is. Seeing the characters react creates more tension than if we were to cut to the primary source of the sound and action.

Then, we begin to hear another off screen sound. The ominous drumming from an unknown, approaching threat. Again, this use of off screen action combined with off screen sound creates a tense mood and tone that would be diminished were we to merely cut to the source of the sound. 

Not seeing the source of the approaching threat creates more suspense than if we were to show it.

This kind of sound is what we call diegetic sound - sound that comes from the setting of the world of the film. Non-diegetic sound - which comes from the world of the filmmakers, such as music - is another way of influencing the mood or tone in a movie. 

TONES

So we know that music and off screen sound in movies can create a tone or mood, but we can also do this with the camera by choosing what images it captures and, more importantly, by choosing what we don’t see.

Off screen action can be used to build up tension and suspense, but it can also be used to draw out different emotional responses from an audience, such as comedy.  

Edgar Wright has used this gag in a few different films. There’s something funny and unexpected about a random arm reaching into frame with an object. If he were to shoot this wider so that we see the person who the reaching arm belongs to it wouldn’t work as well comedically. 

It would just be a regular action. He also uses this technique of shooting objects entering frame in extreme close ups during his snappy montage edits to advance the story forward in time. In this way framing is used so that the action of the object reaching in is unseen, unexpected, unpredictable and amusing.   

Information can also be withheld in darker situations for a different effect. In No Country For Old Men, we cut from a conversation about the nature of choice and chance, and a woman's impending death, straight to a shot of the hitman leaving the house, checking his shoes for blood. 

There’s something bone chilling, ambiguous and deeply affecting about this scene that comes from brushing over the assumed act of violence that we don’t see and rather sitting in quiet with Chigurh. Not seeing what happens is a more contemplative way of communicating violence and the themes that the movie deals with. 

One of the genres that plays with off screen action the most is horror. I’ve used the analogy before of horror being like pulling off a plaster. It can be done with a quick yank, like a jump scare, or it can be pulled off slowly, with gradual, prolonged pain.

By filming scenes from the protagonist’s point of view and delaying putting the threat on screen for as long as possible, the tension and terror is cranked up to ten. What we don’t see is far more psychologically terrifying than what we do.

FINANCIAL LIMITATIONS

Coming back to the example of Reservoir Dogs, another reason why filmmakers may want to skip over showing certain things on screen may be influenced by financial limitations.

Certain scenes are more expensive than others to shoot. It’s pricey to effectively shoot action, car chases, crowds, or most kinds of traditional set pieces - whether that’s a massive car pile-up, like in Red Rocket, or a bank heist, like in Reservoir Dogs. Excluding set pieces that need access to specific locations and contain lots of action is sometimes logistically necessary.

In these cases it’s either on the screenwriter or on the director to find creative ways to communicate that these moments in the story took place while not being able to show it. 

This might be through presenting it off screen with sound from a news broadcast discussing the aftermath of the event in the background or by skipping the event in the story timeline and making how it is discussed by characters the source of the drama and suspense.


PERSPECTIVE 

Finally, a movie like Rashomon uses what we’ve discussed as the entire premise of the film. 

The movie is built around a narrator who conveys four different accounts of the same incident. Each is presented by a different witness, with different details, to question the nature of truth and how it is reported. 

The entire premise of Rashomon relies on not showing the actual events as they unfold from an objective point of view. Eyewitnesses account for what is not seen to comment on how everyone has their own bias and subjective view of how life unfolds.

What we see and what we don’t see in a movie is defined by the filmmaker. By not showing certain information and by showing other information they are creating a perspective on how the story is told. 

What is shown and what is not are inextricably linked. Both are conscious choices. So the next time you’re writing, shooting, editing or watching, remember that what you don’t see is at least as important as what you do.

Gray Kotze

Cinematography Style: Linus Sandgren

INTRODUCTION

Cinematography is a collaborative process. A cinematographer needs to work with the technical crew, the director and even, indirectly, with the screenwriter - due to how they visually interpret the script.

Linus Sandgren is a DP who focuses on these collaborations, especially with the director, to find a mutually agreed upon look for each movie that takes the tonal and thematic aspects of the screenplay into account.

In this episode I’ll dive into his philosophical approach to shooting movies and then take a look at some examples of the gear he uses to execute his photography.   

PHILOSOPHY

Creating images in a crafted way isn’t just a matter of arriving at a location, pointing a fancy camera at the action and hoping for the best. Some of the most important and influential work that cinematographers do actually happens before then. 

In initial meetings before shooting they will be presented with a script or with a concept for the shoot. In order to execute the shoot they first need to come up with ideas for a look or visual style for the film. There are two facets to this: uncovering the tone and themes present in the screenplay, and incorporating the tone that the director envisions for the movie. 

“You need to have a great relationship with a director. I think, like, normally I try to figure out how the director works and what he likes and I like to try to adapt to that director’s style of working. My experience with directors is they also need us to have a partner to communicate and collaborate to create the visual storytelling.”

This doesn’t mean that Sandgren sees directors as having fixed styles. More often than not the tone and feeling that needs to be communicated visually will be determined by the content of the story. 

“It was sort of a 180 from La La Land to do First Man. His style of doing La La Land was very much whimsical, a musical. So, in First Man he wanted it to be very realistic and emotionally immersive.”

Just looking at the way the camera moves differently in these two films communicates two different feelings: one is smooth, theatrical, co-ordinated and flowing, while the other is rougher, more intimate and realistic. But, we’ll get more into how he practically executed those tones later on.

So, while the director’s style of working may influence the visual approach, so does the screenplay.

In pre-production Sandgren therefore asks himself two questions. One, how do you take a script and come up with a tone or themes that are pertinent to the story? Two, how do you then take those concepts and apply them practically to create the images?

On No Time to Die he chatted with director Cary Fukunaga during prep to come up with these answers.

“Another thing that Cary then discussed with me a lot was the sort of expectations that we have on a Bond film is that it should be entertaining, it should be a sort of joyride for the audience - that we go through a film and feel like it’s sort of one, long, big adventure. Then we start to discuss, sort of, themes and keywords which to me is really important as a cinematographer. I feel like I always start with the most simple words between me and the director. Like, for example it could be loss, or grief, or death or things that matter for the story.”

With these keywords decided upon, he can put down emotional tones or themes on a screenplay or story timeline and then break down how he can achieve this effect in each scene visually.

For example, for the emotionally jarring opening scene it was important that the images were not beautiful, postcard landscape snow shots, but rather that they felt more ominous, eerie, dangerous and isolated. He achieved this by pushing the images towards a monochromatic blue, rather than crisp whites under a beautiful sunny blue sky with warm backlight. This is a good example of how cinematography and a look can be used to subtly influence the emotional impact of a story.

He is also deliberate about how he hires crew. In the same way that the gear will impose its own visual feeling on a film, so too does the crew that you hire. Every person will set up a shot differently and come with their own technical opinions, creative suggestions and style of physically operating the camera.

“Sometimes I love to operate. So I put an operator off to not work on a film because I like to operate. Like on First Man I actually operated A Camera but on La La Land I had operators. It depends on how I want to tell the story and when I feel it’s important to be closer to the actual making of the image. And sometimes it’s better to have someone else do it.”

First Man used a lot of handheld work and immersive operation of the camera. Sometimes with handheld operation it’s easier to find the right movement to tell personal stories when the camera is on your own shoulder. 

GEAR

With an understanding of the kind of tone and feeling that he needs to impart in the images, he can now go about choosing the correct gear to execute that tone.

One aspect of this is choosing the camera and lenses for a project. For example, on Don’t Look Up, he pitched the idea that the look should be rooted in the political thriller genre, and that the comedy should come from the script and actors, rather than shooting it as a brighter comedy.

So, to ratchet up the tension of a thriller he used camera movement - mainly working off a dolly - as well as a stylistic trademark of director Adam McKay: zooms.

“When things sort of hit the fan and got even more suspenseful in dialogue scenes we could dolly to create tension with the dolly moves. But also we went a lot to longer zooms. For practical reasons we didn’t go to anamorphic zooms, we went for spherical zooms.”

Due to the extra glass needed in anamorphic lenses, they usually let in less light, while spherical zooms have a faster stop, which means they let in more light and can be used on more dimly lit sets. This is particularly true for the long focal lengths they were shooting at - such as 300mm.

He selected Fujinon Premier Cabrio zooms like the 25-300mm, which has a stop of T3.85 at the long end of the zoom.

Since they were shooting on film which has a slower EI than the native EI of most digital cameras, they needed faster lenses to be able to shoot in low light scenarios.
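That trade-off between stops can be quantified: light transmission scales with 1/T², so the gain from a faster stop can be expressed in stops of light. A minimal sketch - the T5.6 comparison figure below is a hypothetical anamorphic zoom for illustration, not a lens from the film:

```python
import math

def stop_difference(t_slow: float, t_fast: float) -> float:
    """Stops of light gained by the faster (lower) T-stop.

    Light transmission scales with 1/T^2, and one stop is a doubling
    of light, so: stops = log2(t_slow^2 / t_fast^2)
                        = 2 * log2(t_slow / t_fast)
    """
    return 2 * math.log2(t_slow / t_fast)

# A hypothetical T5.6 anamorphic zoom vs the Cabrio's T3.85 long end:
print(round(stop_difference(5.6, 3.85), 2))  # ≈ 1.08 stops more light

# Sanity check: T4 vs T2.8 is roughly one full stop apart
print(round(stop_difference(4.0, 2.8), 2))   # ≈ 1.03
```

Roughly a one stop advantage doubles the light reaching the film - significant when shooting a slower-EI stock on a dimly lit set.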

Sandgren also rented an Aaton Penelope which he always had on standby with a 40mm spherical Macro Kowa lens. He used this setup for moments where the characters felt really nervous or tense by pushing in inches from their eyes to visually communicate a feeling of internal, existential terror that the characters were going through in that moment.

While he has used the digital Alexa on some of his commercial work, he likes using 35mm film on features when presented with the option. 

Some other 35mm film cameras that he’s used include: the Arriflex 235, the 435, the Arricam LT, and Panaflex XL2 from Panavision. 

These have been paired with lenses like: Hawk V-Lite 1.3x Anamorphics, Canon K35s, Atlas Orion Anamorphics, the Camtec Modded Vintage version of the Zeiss Ultra Primes, and anamorphic glass from Panavision such as the C, E and G-Series. He alternates between anamorphic and spherical lenses for both aesthetic as well as practical reasons.

He has used a variety of film stocks, including Fujifilm Eterna Vivid stocks before they were discontinued, but now selects from the Kodak motion picture stocks which are still available for use. 

He doesn’t stick to a single stock and often chooses a range of stocks for different scenarios that include: fine grain daylight stocks for bright exteriors, like 50D, more sensitive daylight stocks, like 250D, and high sensitivity tungsten stocks, like 500T, for interiors and night scenes. 

His preference for film is also due to the different gauges or formats that it comes in. He can select different formats to communicate different tones or to differentiate time periods or locations in a movie.

For example, on First Man he used smaller gauge, higher grain 16mm film for scenes inside the spacecraft to create a more realistic, documentary-like 60s and 70s feel that was more raw and authentic. He then switched to 35mm, with more normal grain and increased contrast, when the story moved to the NASA industrial world and for scenes inside the family home. To create an extreme contrast between the spaceship interiors and the more surrealistic moments of actually being on the moon, he turned to the super crisp, large format 70mm IMAX.

Constructing a realistic feeling sun required coming up with a strong, single light source. He got Attitude Specialty Lighting who make 100K softsuns to construct two custom, high output 200K softsuns to provide a large area source that mimicked hard sunlight.

He often uses strong single directional sources of light, such as HMIs or tungsten units that he’ll gel to get to a daylight colour temperature. He then fills in little pockets of light and creates colour contrast in interiors by using practical sources built in, or placed on the set, that you can see in the shot.

But, his approach to lighting and the gear involved changes depending on each story. This could mean using a wall of LED screens with projected NASA archival footage to create in-camera space backgrounds with more realistic light and reflections. 

Or he may go in the opposite direction of realism, into a world where the lighting is determined by emotion and classical Hollywood tropes. This was done in La La Land by setting up an overhead blanket light at 4000K, hitting the walls with colourful practical fixtures and then rigging a tungsten spotlight to a dimmer which he could bring up or dim down to isolate characters in a nostalgic moment of focus.  

These evoke different tones: one is concerned with realism, while the other plays into visualising abstract emotion.

CONCLUSION

These examples go to show that Sandgren’s style cannot be limited to a singular look, technique or gear preference. Each project requires a different feeling and each director he works with has different sensibilities and interpretations of the script. 

It’s his job to find a collaborative balance where the tone and themes in each scene are established, and then use that as a jumping off point to create visuals which represent a feeling.

Gray Kotze

Frame Rate In Cinematography Explained

INTRODUCTION

Most of cinema unfolds in real time to replicate a version of reality. But sometimes reality isn't the best way to tell stories. What if we wanted to suspend two characters in time while the world around them flies by, draw out an iconic, awe-inspiring moment of action, or invoke an anxiety-inducing, choppy rapidity?

In filmmaking we can use the tool of frame rate, frames per second, or FPS, to control how motion is captured and presented on the screen. 

So, let’s unpack this further by first explaining what frame rate is and how it is practically done with a camera and editing software, look at some common frame rates that are standardised in different places around the world, and with that knowledge, take a look at some creative ways that frame rate can be used to tell a story.  

WHAT IS FRAME RATE?

When you shoot a digital video clip it may appear as one single entity, but to understand frame rate it’s important to realise that one clip is actually a representation of loads of individual pictures, or frames.

These pictures give the illusion of motion - hence motion picture. It works the same way as leafing through still images in a flipbook. If you do it fast enough it’ll look like the little picture is moving.

Before digital, a film cinema camera was essentially a big box that took individual photos very quickly - at rates such as 24 pictures in a single second. Instead of a page with a picture on it, each frame was captured as a single image onto a roll of film.

When people talk about frame rate they usually only talk about one thing: how many frames per second a camera is able to capture. However this is only half of it. To understand frame rate properly you need to know that motion can actually be manipulated in two ways. 

One, the capture frame rate refers to how many individual pictures or frames a camera captures in one second.

Two, the project or playback frame rate is set in editing software and determines at what speed the captured frames are played back.

Changing these two variables in different ways can create normal motion, slow motion and fast motion. 

Let’s say a camera’s capture frame rate is set to 24 frames per second. A clip is shot and brought into editing software, where the playback frame rate is also set to 24 frames per second. This results in on-screen motion which appears normal to the human eye.

This time, the camera is set to 48 frames per second, while the project playback frame rate stays at 24 frames per second. When you hit the spacebar and the clip is played, because the capture frame rate is double the project frame rate, the motion will appear at half its normal speed - also called slow motion.

Finally, let’s set the camera to capture action at 12 frames per second while maintaining a 24 frames per second playback frame rate. This time, the capture frame rate is half the project frame rate, and the motion will play back at twice its normal speed. This is fast motion.
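The three scenarios above boil down to a single ratio: on-screen speed is playback rate divided by capture rate. A minimal sketch (the function name is mine, for illustration):

```python
def perceived_speed(capture_fps: float, playback_fps: float = 24) -> float:
    """Ratio of on-screen motion speed to real-life speed.

    Each second of captured action is stretched over
    (capture_fps / playback_fps) seconds of playback, so:
    1.0 -> normal motion, below 1.0 -> slow motion, above 1.0 -> fast motion.
    """
    return playback_fps / capture_fps

# The three scenarios, all played back at 24fps:
print(perceived_speed(24))  # 1.0 -> normal motion
print(perceived_speed(48))  # 0.5 -> half speed (slow motion)
print(perceived_speed(12))  # 2.0 -> double speed (fast motion)
```

The same ratio explains every capture rate discussed later: 96fps against a 48fps base, or 8fps against a 24fps base, follow the identical arithmetic.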

Usually, 24 frames per second is set as the base frame rate for a project, and then the capture frame rate is manipulated to change how we perceive motion, but not always. 

Different frame rate bases also exist. So let’s dive into what some of the most common of these base frame rates are.

COMMON FRAME RATES

There are two common base frame rates that are used for the majority of film and television productions. 

The first, 24p, is based on the standard film exposure rate of 24 frames per second, which delivers a look that the human eye has become accustomed to. 23.976 is also commonly used as a progressive scan format based on NTSC television. Regions where NTSC is used include North America, parts of South and Central America, and Japan. So productions shot for these countries usually use a 24 or 23.976 frame rate as a project base.

The other common frame rate is 25p, which is used for PAL television. PAL regions cover most of the rest of the world, where 25 frames per second are used as the project frame rate. Although these two bases are most common, there are other alternatives, mainly for broadcast TV production, like 50 or 60p.

There have even been odd experiments, like the Hobbit series of movies, which were shot and displayed at 48fps - and which, contrary to Peter Jackson’s thinking, got quite a bit of pushback.

Projecting at this higher frame rate means there is less of the standard motion blur that we as an audience have become used to. Everything is more crisp and slick, which in my mind makes productions shot at higher frame rates feel artificial, because our eyes are used to perceiving movement in a more natural way, with a kind of motion blur.

For these higher frame rate bases remember that we also need to change our capture rate to compensate. So for normal motion on the Hobbit, the camera shot at 48 frames per second. 

To get motion at half the speed of real life, they therefore had to boost the capture frame rate to double the playback frame rate - 96 frames per second.

While those are some of the most common frame rate bases, let’s go over some popular capture frame rates. As I have mentioned, a common way to get slow motion is to double the base rate: 48fps for NTSC and 50fps for PAL.

Then there are also frame rates such as 33fps, which sits somewhere between standard slow motion and regular speed and gives the feeling of a subtle suspension of time.

Once we get into the higher frame rates they are usually rounded up to numbers such as 100, 120 or 200. 

Then you get extreme slow motion for specialist shots where time almost feels like it’s standing still. This is usually done on a Phantom camera, at frame rates such as 1,000.  

CREATIVE USES

Let’s take a look at three examples where frame rate is used in a creative way. You can’t really talk about frame rate and motion without bringing up the work of Wong Kar-Wai. 

One technique that appears in many of his films is the use of step printing. 

To get this effect with a film camera, they shot with a low frame rate, in this case eight frames per second. Instead of printing each frame one time and getting a sped up effect, he printed each frame three times to fill in the gaps and arrive at 24 frames in a second.

Instead of getting a new picture in every frame of playback, we get three of the same frames, then a new image for three frames, and this pattern continues.
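Step printing can be sketched as a simple list operation: each captured frame is repeated until the playback rate is filled. This is a hypothetical illustration of the pattern, not Wong Kar-Wai's actual lab process:

```python
def step_print(frames: list, capture_fps: int = 8, playback_fps: int = 24) -> list:
    """Repeat each captured frame so one second of capture fills one
    second of playback (e.g. 8fps printed 3x per frame -> 24fps)."""
    repeat = playback_fps // capture_fps  # 24 // 8 = 3 prints per frame
    return [frame for frame in frames for _ in range(repeat)]

# One second of footage captured at 8fps:
second = ["f1", "f2", "f3", "f4", "f5", "f6", "f7", "f8"]
printed = step_print(second)
print(len(printed))  # 24 frames -> plays for one second at 24fps
print(printed[:6])   # ['f1', 'f1', 'f1', 'f2', 'f2', 'f2']
```

Because each image is held for three playback frames, motion advances in visible steps - the stutter that defines the look.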

In different contexts it may have different effects, but Wong and his DP Christopher Doyle often used this trick for action scenes with lots of movement. The stutter and motion blur it gives adds a freneticism and wildness to the scenes.

Another creative way to capture action sequences is with slow motion. On 300 this was done for many sequences in the movie, to a melodramatic extent. Director Zack Snyder earmarked specific shots that he wanted his DP Larry Fong to capture in slow mo. This was mainly done at varying frame rates on the Arri 435ES, up to about 100fps, and on the PhotoSonics ER for really high frame rates, up to 360.

Using slow motion in this way for action sequences suspends a moment in time and allows the audience to fully appreciate and be awed by the movements of the character that at real speed would be over in a flash.

This slowed down speed also feeds into the focus that the characters themselves have in that moment and shows how keenly they are perceiving the fight unfold. In this case it also further removes the story from the world of realism and places it in more of a comic book like world where the laws of time and gravity are broken. 

Finally, coming back to the king of frame rates, let's take a look at this shot from Chungking Express. There are a few things going on here.

First, because people in the foreground are rapidly rushing by in a blur, we can establish that this was shot at a low frame rate, such as eight frames per second and projected at a base of 24 frames - not step printed like before. This speeds up the motion. 

Second, the two main characters are moving as if in slow motion. This was simply done by getting them to move extremely slowly in real life. So, even though the film is sped up, the characters remain suspended in time. To me, this has the psychological effect of isolating those two characters and showing how detached they are from the rest of the world. They are so alone that time operates differently on them and leaves everyone else a faceless blur. 

It also elevates the relational and romantic bond between them - as if no one else exists in the world.  

Gray Kotze

How Taika Waititi Shoots A Film At 3 Budget Levels

INTRODUCTION

If there’s one word that sums up Taika Waititi’s approach to directing it’s tone. His movies are entertaining, uplifting and lean into an unforced comedic tone with a large focus on the writing, casting and performances of the actors.

His career in film has involved years of work in commercials, music videos and TV series. In this video I’ll take a look at just three feature films that he has directed, at three increasing budget levels, to analyse the techniques that he uses to make them.

HUNT FOR THE WILDERPEOPLE - $2.5 MILLION

His love for comedy began early when he formed a duo with Jemaine Clement - who he’d later work with on other projects. He also started making short films. One of them, Two Cars, One Night, earned him an Academy Award nomination.

Around this time he read Wild Pork and Watercress and decided he would try to write a screenplay adaptation of the book.

“I wrote the first draft of this in 2005. I hadn’t made any other features before then and I found it really difficult adapting the book because I’d never adapted anything and I thought you needed to be super true to the material. Basically lift everything from the book and put it into a movie. I put that to the side to concentrate on some other stuff and went off and made three other features. Then coming back to the material I realised, ‘Oh you don’t have to do that at all, you can just do whatever you want’. You put it through your filter, you know.”

This idea of putting a screenplay or an idea through his own filter is a consistent feature of his work: whether he’s writing his own original idea, working with a screenwriting collaborator or bringing a massive blockbuster script to the screen. But we’ll get to that later. 

He takes a screenplay and applies his filter for comedy and adventure to arrive at an end product which has his recognisable authorship. This filter comes from a combination of the writing process, his approach to directing actors, and how he and his creative team visually tell the story.  

“I chose the tone that I wanted as well. I decided I was going to make a comedy that was like an adventure film. I sort of chose stylistically and tonally what I wanted to do and then took the parts of the book that I felt would work in the film I wanted to make and then made up the rest.”

With the final screenplay in place and a budget of approximately $2.5 million - half of which came from the New Zealand Film Commission - he moved to the next step in the process, one of particular importance to a director with a large appreciation for performance: casting the actors.

This involved casting and directing a child actor to play the role of Ricky Baker. Directing children can be a challenge. Acting, of course, takes years of practice in manipulating your emotions in a controlled way. 

The level of control and consistency required is difficult for most children. However, if you find the right child who is able to lock into the character, their performance may have a purity to it that surpasses their adult counterparts, as it is more natural and less constructed.

“What the trick is when you are auditioning you search for the kid that resembles the character the most in personality. So, you never try and get a kid to pretend they are someone else. You choose the Ricky Bakers of the world and find the one that is closest to what you want in the film. And then all they have to do is remember the lines.”

With the cast in place and enough funding to shoot for a brief 25 days, Waititi brought Australian cinematographer Lachlan Milne onto the project to shoot the film. 

They decided on a single camera approach for most of the movie and rented an Arri Alexa XT with, based on some behind the scenes pictures, what looks like Cooke S4s and an Angenieux 12:1 zoom. 

For the car chase scene, which they shot over a couple of days, they used five different cameras to get enough coverage on the relatively low budget: three Alexa XTs, which shot the on-the-ground footage, and two Red Epics, with the Angenieux 24-290mm mounted on a Shotover on a helicopter.

To prepare, the DP used a DSLR camera to shoot different angles of a model car which could then be cut into a sort of animatic or storyboard so that they had a list of the shots they needed to get on the day.

Since most scenes take place outdoors, lighting continuity was always going to be tricky. Milne always tried to orientate day exteriors so that the actors were backlit by the sun. 

He also leaned into a natural sunlight look and didn’t use any diffusion scrims over the actors to soften the light. He didn’t want perfectly soft light that would be too pretty. 

Also, placing scrims overhead limits the movement of the actors and how wide the shot can be. The frame needs to be fairly fixed otherwise the legs of the stands will start getting into the shot.

The director wanted to draw on the visual style of films from the mid 80s, such as films by Peter Weir, which didn’t have visual effects and didn’t use fancy gear like Technocranes to move the camera. Therefore they used the 24-290mm zoom to punch into shots rather than using the more expensive, impractical and slicker camera motion.

Something about the slow zooms also effectively built up tension in scenes and, when combined with other wider shots, helped land some of the comedic gags. Another way he accentuates comedy is with the music and sound, and by lingering on wider shots rather than cutting too quickly.

Overall, he used the relatively low two and a half million dollar budget to produce a bigger looking movie which mainly had contained scenes with one large chase scene set piece, with a large focus on casting and performances, almost no CG work, and an experienced crew which moved quickly with a single camera to pull off the entire movie on a tight five week schedule.   

JOJO RABBIT - $14 MILLION

“There was no real pitching process for this. So I didn’t go to studios and say ‘Hey, this is my idea for a film’. I realised early on it’s a really hard film to pitch. No one really wants to hear a pitch like this, so I’m going to write a script that’s really good and I’m going to let that be the pitch.”

A screenplay looking at World War Two through the eyes of a young boy in the Hitler Youth, where an imaginary friend version of Hitler plays a supporting role, is certainly a bit of an odd pitch. 

But, after sending the completed script around, Searchlight took an interest in the project and agreed to make the film on one condition, that Waititi play Hitler. Like Hunt for the Wilderpeople, Jojo Rabbit was also an adaptation from a novel, Caging Skies.

His screenplay and vision for the film took a different approach to how most World War Two films are presented and once again drew from his own tonal sensibilities towards comedic entertainment that is uplifting.

“We can’t get complacent and keep making the same style, the same tonal style of film: it’s drama, it’s depressing…everything is desaturated and browns and greys. Crazy idea, we can also maybe create something that is colourful and bright and has humour in it. I knew the tone really early on.”

With a budget of $14 million from Fox Searchlight and TSG Entertainment they tried to find a base for production that would give them the locations they needed and the most bang for their buck.

Initially, the plan was to shoot in Germany. However, since German law meant that child actors could only work for around three hours per day, and the movie was filled with child actors, this would have almost doubled the number of shooting days they needed.

Eventually they decided on the Czech Republic, which had buildings that came ready made to look like they belonged in the World War Two era, a reliable film industry, and labour laws which allowed them to schedule the shoot into 40 days of filming.

Of the budget, $800,000 was given to the art department, which may sound like a lot, but is actually very little to purchase all the army equipment and create the sets for a period film. So, having town locations which were already almost good to go helped create the period world on the low budget.

Mihai Mălaimare Jr. was brought on board as the cinematographer on the film. Prior to shooting, the director and the DP collaborated to devise the format that was right for the project.

“We were both really attracted to 1.33, but the audience is not as used to that aspect ratio anymore. We were trying to work out how it would work for us framing wise and realising how much more top and bottom it would reveal in that aspect ratio. That was the only thing that made us try the 1.85:1. One thing that Taika really responded to and I wanted to try for so long was anamorphic 1.85.” - Mihai Mălaimare Jr.

To get this squarer aspect ratio with anamorphic lenses he used an unusual technique. Hawk 1.3x anamorphic lenses are designed to be shot on a 16:9 sensor to get a 2.40:1 aspect ratio. However, if you shoot these lenses with a 4:3 sensor, de-squeeze them 1.3x and then crop just a tad, you can get a 1.85:1 aspect ratio that maintains an anamorphic look. Shooting the 1.3x V-Lites on a 4:3 sensor on an Alexa XT gave him the best of both worlds: the squarer aspect ratio along with anamorphic falloff, without needing to do much cropping.
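The arithmetic behind this trick is easy to check: the de-squeezed aspect ratio is just the sensor's native ratio multiplied by the anamorphic squeeze factor. Here's a quick sketch in Python (the function name is mine, for illustration):

```python
# De-squeezed aspect ratio = (sensor width / sensor height) * squeeze factor
def desqueezed_ratio(sensor_w, sensor_h, squeeze):
    return (sensor_w / sensor_h) * squeeze

# 1.3x anamorphic on a 16:9 sensor: ~2.31:1, a slight crop away from 2.40:1
print(round(desqueezed_ratio(16, 9, 1.3), 2))  # 2.31

# The same 1.3x glass on a 4:3 sensor: ~1.73:1, a small crop away from 1.85:1
print(round(desqueezed_ratio(4, 3, 1.3), 2))   # 1.73
```

This is why the 4:3 route only needs a tiny crop to land on 1.85:1, while keeping the anamorphic character of the glass.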

To portray a brighter version of reality, through the eyes of a child, they used a bright colour palette with lots of vibrant greens, blues, yellows and of course reds. They also used more whimsical slow motion and central, front-on, symmetrical compositions, which placed characters in the middle of the frame and used natural framing devices on the set such as doors, picture frames, tables or tiling for balance.

Much of the tonal balance was adjusted in the edit. Whereas some directors may despise test screenings - showing a cut of the film to an audience prior to release - Waititi likes to use them in order to gauge the effectiveness of the pacing of different versions of the edit.

“It was more the tonal balance. So I test my films all the time with audiences. So you get feedback. What do you think of this? Were you bored here? Were you overstimulated here? Was it too funny here? Was it too sad here? And then just finding a balance.”

Jojo Rabbit was produced on a higher $14 million budget that accommodated more shoot days, a war set piece, lots of extras, some star performers, and period-correct production design.


THOR: RAGNAROK - $180 MILLION

“I have a theory that there are periods when the economy is suffering and people don’t have a lot of money to spend, they don’t want to go and see films about how tough life is for people. I think the reason that a lot of those dramatic films are not doing well is because people want an escape, which is why a lot of the superhero films are doing really well.”

This movie involved a step up from a fairly regular budget to what I guess you could best call a Marvel budget.

To get the job Waititi pitched his idea of the film, which involved creating a ‘sizzle reel’ - basically a montage that he cut to Immigrant Song by Led Zeppelin using footage from other films. The studio were also enthused by his idea to bring a vitality to the movie and his trademark brand of humour to the characters.

Working on an MCU movie means that the director basically has whatever technical resources they can dream of, as much time as they need (in this case 85 shoot days, or a full two years when including pre and post production) and can use the massive budget to hire pretty much whatever actors they want.

However it also means that most of the look of the film will be constructed after shooting with CGI and to a large degree will be controlled by the studio. You know, the desaturated, CGI-laden feeling all Marvel movies have.

What falls on the director therefore is not so much creating the aesthetic style, but rather managing the project and creating the overall tone by using performances and storytelling.

To wring more of a comedically authentic tone from the script, he worked with the actors to achieve a more natural delivery of lines.

“The thing about a lot of studio films and Hollywood films especially is that when you hear a joke in these films you get the feeling that the joke was written about a year before they shot it and then a couple of people in the board room were like, ‘and then he’s gonna say….’ and they’re like ‘that’s gonna be amazing when we shoot that in a year.”

Instead they worked with a script that had suggested dialogue and jokes. Once he and the actors were on set they could then work with that material until they found what delivery worked naturally, without being so tightly constrained to the original shooting script.

The film was shot by Javier Aguirresarobe, who has a long career working on a range of both low and high budget movies. About 95% of the shoot was done with bluescreens. 

This meant that the DP lit the actors in a way that suggested to the post production supervisor where they imagined the light to be, and with what intensity, quality and colour temperature. CGI was then used to construct the rest of the world and the light in it. Motion capture suits were used to capture the movement of computer generated characters.

The film was largely shot on the large format Alexa 65 with Arri Prime 65 and Zeiss Vintage 765 lenses. The Phantom Flex 4K was also used for shots which needed slow motion.

Thor: Ragnarok therefore used its enormous budget to hire a cast of famous actors, fund a very lengthy 85 shooting days with all the gear they could imagine, stage loads of action scenes, and effectively pay for two years of production time that included expensive CGI work in almost every shot in the movie, all while Waititi maintained his grasp of a perfect comedic, adventure tone.

Gray Kotze

5 Steps To Shooting A Documentary

Let's unpack five steps that you can take whenever you get the opportunity to work on a new documentary project.

INTRODUCTION

The world of documentary is fundamentally different from other, more planned forms of filmmaking, like music videos, commercials and fiction features. What differentiates these disciplines is that the latter are pre-planned and structured ahead of shooting, while documentaries rely on a broader plan with inevitably less precision.

This means that documentary cinematographers need to always be on their toes and be quick to adapt to unexpected situations as they unfold. Having said that, this doesn’t mean that you should just go in with a camera and rely purely on luck and instinct. There are some clearly defined steps that we as filmmakers can take to shoot more consistent, stronger content.

So, I thought I’d use this video to unpack five steps that I take whenever I get an opportunity to work on a new documentary project.    

1- IDENTIFY THE CONTENT

Gone are the days when most documentaries meant setting up a few sit down interviews which would then get cut with archival footage.

Today documentary, or documenting a version of reality, comes in many forms - from commercial, branded content that uses touches of non-fiction, to purely observational filmmaking, re-creations of events, nature documentaries, traditional talking head documentaries and everything in between.

It may seem obvious, but as a cinematographer, the first thing you need to do when starting a new project is clearly define the form of the film and identify the types of scenarios that you will be filming.

The reason this is so important is twofold: it’ll help you to identify the gear you need to bring along and will help you to nail down a visual style. But we’ll go over those two points separately.

When you’re dealing with real life situations, planning and having a clear vision for what you need to get will make it far easier to execute on the day. Half of making documentaries comes from producing and putting yourself in the right situation to capture whatever the action is.

Most of these decisions come from the director. In documentary work, the director may also be the cinematographer. If there is a dedicated cinematographer then knowing the form of the project and the kind of footage needed is still crucial. 

For example, you may go into a shoot knowing that you need an interview with the main character that should be prioritised, some B-roll footage of the location and one vérité scene with another character. 

If the schedule of the main character changes and they suddenly aren’t available to do an interview in the morning anymore, then you know that the first priority is rescheduling the main interview, the second priority is finding time with a secondary character or looking for a potential scene to present itself, and third, the B-roll can be gathered throughout the day in the gaps of the schedule.

Making a list that prioritises footage that is a must have, footage that would be nice to have and footage that would be a bonus to get is useful going into the shoot. It’s always easier to improvise and get unexpected, magical moments when you already have a solid base or plan to work from that tells the core of the story. 

Once you’ve put the edges of the puzzle in place, it’s much easier to then fill in the rest.     

2 - GEAR

As I mentioned, selecting the gear needed for a project will be determined by the kind of scenarios that need to be captured. 

For example, a verite documentary may be captured by a single handheld camera, with a single lens, which also records sound, operated by one person. While larger budget true crime documentaries with re-creation scenes may have an entire crew, complete with a cinema camera package, a lighting package, and a dolly.

Whatever gear is needed on a documentary shoot there is always one certainty: you need to be able to work fast. For that reason, you need to have a high degree of familiarity with the camera you are shooting on. If you need to quickly capture a moment in slow motion can you find the setting within a few seconds? Or if the light suddenly changes and you need to compensate for overexposure can you quickly adjust the ND filter? 

This is why, going into a shoot, I’d recommend configuring the camera in such a way that you are able to make changes as quickly as possible. This may be through user buttons, through having a variable ND filter on the front of the lens, or by having a zoom that you can use to quickly punch in or out to a specific shot size. When you’re capturing real life, you don’t ever want to miss a crucial moment if it can be avoided.

Having less gear also speeds things up. It means less to set up, carry around and to pack away. There’s a sweet spot between having the tools that you need and not having too much stuff to lug around.  

Although there are loads of different approaches to selecting gear, let me go over what is a fairly typical setup.

Starting with the camera, a popular choice is something like a Sony FS7, a Canon C300 or something newer like the FX6. These cameras have great codecs that produce high quality images with a relatively small file size - which you need on documentary projects where you often need to shoot a lot of footage. They also come with XLR audio inputs to feed sound directly into the camera and have user buttons and internal ND filters for quick operation.

When it comes to lenses, I personally prefer working with primes, but zooms are probably more popular as they allow you to quickly readjust shot sizes. Something like a 24-70mm is a pretty standard choice. Depending on the content it’s usually useful to also carry a long zoom like a 70-200mm.

I like to carry screw-on filters with me, such as a variable ND and maybe a diffusion filter or a diopter filter, depending on the look.   

Then you want a lightweight tripod with a fluid head that is smooth to operate, but light enough to carry around all day and to quickly set up. Many people now like to shoot with a gimbal too.

I also like to carry around a little lighting bag and a stand. This can be used for an on-the-fly interview, bringing up the exposure in a dark space or lighting observational scenes so that they are more ‘cinematic’. 

I exclusively choose LEDs that are both dimmable and are bi-colour. This means you can easily change their colour temperature and the intensity of the light with the turn of a knob. Again, speed is key.

3 - VISUAL STYLE

Whether you are conscious of it or not, every decision that a cinematographer makes while shooting contributes to some kind of visual style. Even the act of just picking up a camera quickly and pressing record to capture a moment creates a visual style with a loose, handheld, verite look.

This visual style may affect the audience in a subtly different way than if the same scene was shot locked off on a tripod, or shot with lots of movement on a gliding gimbal.

There are a million different directions to go in. Maybe you decide on a specific type of framing for the interviews, maybe the entire film is handheld, maybe you only use natural light, maybe you use artificial light to enhance reality, maybe you use a drone to give context to the space, maybe you suspend time by using slow motion, or shoot with a diffusion filter to make the images more dreamy. These are all decisions that influence a film’s visual style.

Therefore the next step in documentary cinematography, before arriving on set, is coming up with an idea for an overarching visual style that supports the film. This style could be rationally decided upon based on thought or based on what feels right.

This step also needs to be considered with the first step of identifying the kind of content you are shooting. You need to find a style that is balanced with what you can realistically achieve. For example if you’re shooting a fast paced fly on the wall documentary it might not be possible to shoot everything from a tripod with perfect lighting.   

Usually, I find I have a stronger connection to films that have some kind of visual cohesion and an artistic vision that stretches across the entire doccie.

Of course since we are shooting in unpredictable situations, with less control over the environment, it’ll almost never be possible to get exactly what we want visually.

But, going in with a plan or an idea of the look, or finding the look as you begin shooting, will almost always result in stronger images than if you go into shooting with no vision or ideas at all and just get whatever you can get without giving any thought to how the images look and the feeling they will convey.

4 - SOUND

Next, let’s talk about something that is sometimes loathed by cinematographers, but which is as important, if not more important, than the image: sound.

Some documentaries may have the resources and the need to hire a dedicated sound person, but often in the field of documentary the job of recording sound may fall on the cinematographer.

Therefore it’s important to at least know the basics of how to record sound. There are two ways this is done. With lav mics that are clipped onto the subject, which then feed a signal wirelessly to a receiver that is plugged into the camera or a recorder that captures the sound. Or with a boom mic that can either be mounted on-board the camera, or used by a boom operator on a boom pole. For more on this I made another video on boom operators.

The main point to be aware of as a cinematographer, is that getting good sound may involve compromise. For example, you may want to shoot a beautiful wide shot of a scene, or an interview, but if you are shooting in a noisy, uncontrolled environment you may be forced to scrap that plan and shoot everything in a close up so that you can get the boom mic nice and close to the subject.

It may be frustrating to sacrifice the better shot for the sake of sound, trust me I hate it, but what I always tell myself is that it’s better to get a worse shot that has usable sound, than to get a beautiful shot that has terrible sound. 

If you get a beautiful shot but the sound isn’t usable it’ll just end up on the cutting room floor anyway, never seen by anyone besides the editor.

Of course this is dependent on how necessary the sound is, but as a general rule if you’re working with an on-board mic and there is crucial dialogue - prioritise getting usable sound over getting a beautiful image.  

5 - COVERAGE

The final step to shooting a documentary is, well, the actual act of shooting it. Understanding coverage, which refers to the angles, shot sizes and way in which a scene is shot, is an invaluable skill in documentaries.

While in fiction filmmaking you can shot list, storyboard or consider the coverage of a scene between setups as you shoot it, when you are working in unexpected situations that will only take place once, you have to make these decisions in real time.

It’s a difficult thing to give broad advice on as different scenes can unfold in different ways, but let’s go over some basic ideas for capturing an average vérité scene.

I find it’s useful to edit scenes in your head as you are shooting them. For an average dialogue scene you know you’ll need a few things. One, you’ll need a wider shot that introduces the audience to the space of the location so that they can orient themselves and understand the context. Two, you’ll need a shot of whoever is talking, specifically the main character or characters that you are focusing on. Three, you’ll need to get reaction shots of whoever isn’t talking, so that the editor can use these to shorten a scene. 

For example, there may be one sentence at the beginning which is great, then they waffle for a bit, then they have another three sentences which are great. If you have a reaction shot, then you can start on the person saying the first sentence, cut to the reaction shot while you keep going with the dialogue from the final three sentences, then cut back to the person saying the dialogue. This naturally smooths things over and ‘hides’ a cut.

If you only have shots of whoever is talking, then the editor will have no option but to either select one section of dialogue, for example the final three sentences, or to jump cut - which can be abrasive.

Also remember that the size of a shot affects how an image is interpreted. So for more personal moments you want to try and get as close as you can. However, you also need to take into consideration that your proximity to a person will affect how they act.

 If you meet someone for the first time and get right up in their face with a camera immediately they will be put off and likely won’t open up to you emotionally. 

That’s why I usually like to start shooting scenes wider and then begin to move closer to them as they become more comfortable with your presence and the conversation starts to heat up.

Covering a scene in a documentary situation comes with experience. It’s like an improvisational dance that needs to balance getting shots that will cut together, making the subject feel natural and at ease and anticipating the right shot size for the right moment.   

Although this just touches the surface, if you want to cut a basic, verite dialogue scene together and make an editor happy, then make sure you get, at a minimum, a shot that establishes the space, a shot of the person talking and a reaction shot of people who are not talking. 

Gray Kotze

Cinematography Style: Janusz Kamiński

In this episode of Cinematography Style I’ll unpack Kaminski’s philosophy on filmmaking that uses visual metaphors to express stories, and give examples of the kinds of gear and technical tricks he’s used as a cinematographer to create images.

INTRODUCTION

With a career in producing images that has spanned decades, it can be tricky to pin down exactly what defines the work of Janusz Kaminski. However, it’s difficult to deny that a large part of his filmography is owed to his extensive collaborations with iconic director Steven Spielberg.

This raises the question, how do you separate the creative input of the director and the cinematographer? Is it even possible to do so?

In this episode of Cinematography Style I’ll unpack Kaminski’s philosophy on filmmaking that uses visual metaphors to express stories, and give examples of the kinds of gear and technical tricks he’s used as a cinematographer to create images.   

BACKGROUND

During a period of political turbulence in the early 1980s, the Polish cinematographer moved to the United States where he attended university. He decided to take up cinematography and went to film school at the AFI.

He got his first professional job in the industry as a dolly grip on a commercial. The camera operator quickly told him this wasn’t for him. Next, he worked as a camera assistant, where he was again told he wasn’t any good. He then started working in lighting, which kicked off his career.

During this time he worked on lower budget productions with fellow up and coming cinematographers such as Phedon Papamichael and Wally Pfister. He also began working as a cinematographer in his own right.

“I was here for 13 years and I shot 6, 7 movies. So I was experienced I just didn’t have that little push. I shot a little movie directed by Diane Keaton. Steven liked the work, called my agent, we met and he offered me to do a television movie for his company and after that he offered me Schindler’s List.”

This collaboration proved to be a lasting one. Over the years they have shot 19 other films together and counting. 

Other than Spielberg he’s also shot feature films in many different genres for other directors such as: Stephen Sommers, Cameron Crowe, Judd Apatow and David Dobkin.  

PHILOSOPHY

Coming back to the question of how you separate Kaminski’s input from Spielberg’s: in their vast collection of films together, a lot of the overarching visual decision making does come from the director’s side.

Prior to their work together, Spielberg was known for the creative way in which he positioned and moved the camera in order to tell stories. In that way, I think a great deal of the perspective of what the audience sees in the frame comes from him.

For some movies, such as West Side Story, Spielberg uses extensive storyboards to pre-plan the coverage in a very specific way, while other movies, like Schindler’s List, had surprisingly little planning and were more spontaneous, without any shot lists or storyboards.

In those situations, Kaminski used a portable tape recorder to dictate notes about lighting, problems or gear he might need, to bring order to his thoughts and successfully execute the photography as they went.

In terms of the overall look and lighting of Spielberg’s early films, they all followed a similar template that was grounded by a traditionally beautiful Hollywood aesthetic with haze that accentuated an ever present glowing backlight that gave the actors an angelic, rim light halo outline. 

The other cinematographers he worked with were intent on servicing this traditional aesthetic.

When Kaminski came on board to shoot Schindler’s List he deconstructed the Hollywood, family-friendly beauty that audiences had come to expect from Spielberg’s work.

“I think the idea of de-glamorising the images, strangely, I’m always interested in that. I didn’t want that classical Hollywood light. I wanted more naturalistic looking. We all want to take chances, because it’s not this comfortable life we’ve chosen where we just make movies and we work with movie stars. We express ourselves artistically through our work and we want to take chances.”

Throughout their collaborations, Kaminski was able to find a middle ground that balanced Spielberg’s desire for a traditionally beautiful look with his own appreciation for de-glamorised images that could be considered beautiful in a different way.

Another ever present idea in his work is his use of visual metaphors - where the camerawork represents a particular idea or leans into a visual perspective that represents the location or time period that is being captured in the story.

“I think each story has its own representation. You have to allow the audience to immediately identify where they are. So if you’re not using some very strong metaphors you will lose the audience. So the first explosion is very yellow, then we go to France and it’s more blue-ish, you go to Italy it’s very warm and fuzzy, France it’s very warm and fuzzy. So using those visual cliches that we as the people identify with specific countries.”

He doesn’t only create these visual metaphors with colour. On Munich he used zooms to capture the photographic vocabulary of the 1970s when those lenses were popular. 

Or in Saving Private Ryan he mimicked the kind of manic, handheld, on the ground style that the real combat cameramen of the time would have been forced to use.

Or in Catch Me If You Can, he differentiated the time periods by giving the 60s scenes a warm, romantic glow and the 1970s scenes a slightly bluer, flatter look.

These visual languages and cues subtly change depending on the movie. They back up each film by using the images to support the story in a way that hopefully goes unnoticed by the audience on the surface, but feeds into how they interpret the movie in an unconscious way.

GEAR

“I look at cameras as a sewing machine. When you talk to the wardrobe designer you don’t ask her what kind of sewing machine do you use, because it’s just a sewing machine. It doesn’t really matter. The equipment, all that stuff is not. What you do with it is essential.” 

Some cinematographers like to be consistent with their gear selection to carry their visual trademark across the respective projects that they work on. Kaminski isn’t like that.

Throughout his career he has pulled a variety of optical effects from his big bag of tricks. Sometimes this involves using filters, sometimes photochemical manipulation, other times unique grip rigs or playing with unconventional camera settings.

So, let’s go through a few examples of some gear he has used, starting with his camera package.

He flips between shooting with Panavision cameras and lenses in the US and using Arri cameras when working in Europe. He’s alternated between shooting Super 35 with spherical lenses and in the anamorphic format. 

Spherical lenses are more practical as they are faster, have better close focus and are smaller, which makes them better suited for shooting in compact spaces such as car interiors. Examples of some of these lenses that he has used include Cooke S4s, Panavision Primos and Zeiss Standard and Super Speeds.  

He usually shoots close ups at around a more romantic 50mm focal length or longer to flatter the face, but on Schindler’s List chose to shoot them with a wider 29mm field of view that lent itself to realism.

He’s used anamorphic lenses for their classical Hollywood look, with beautiful flares that are impossible to otherwise recreate. Some examples are the C-Series and more modern T-Series from Panavision. 

He has used digital cinema cameras occasionally but almost exclusively shoots features on 35mm film - including his recent work. His choice of film stocks has been extremely varied. 

On Schindler’s List he mainly shot on Eastman Double-X 5222 black and white. For specific sequences that required parts of the frame to be colourised, such as the famous shot of the girl in the red dress, he pulled an interesting photochemical trick by recording on Eastman EXR 500T 5296 colour negative film stock and then printing the film onto a special panchromatic high-con stock which is sensitive to all colours and used primarily for titles. 

This gave them the look they wanted that best matched the rest of the black and white footage and didn’t contain the blue tint that came with removing the colour from the colour negative in the regular way.   

To get a flatter image for the 1970s scenes in Catch Me If You Can he used Kodak 320T stock in combination with low-con and fog filters to purposefully make the images a bit uglier, more neutral and drab. This coincided with the main character’s fall from grace as he came to terms with the real life consequences of his actions.

Or on Saving Private Ryan, he settled on Eastman’s 200T film stock, which he pushed by one stop and used a film development process called ENR which both desaturated the stock and sharpened up the look of textures, giving the details in the image a grittiness.

When it comes to lighting, he acknowledges that some gaffers are more technical while others are more conceptual. Due to the large scope of the kind of sets he lights, it’s more practical for him to describe the lighting he wants in general terms, such as no backlight, or that a source needs to feel warm, rather than describing and placing loads of specific units around a set.

“The scope is way too large. You can’t demand every light be placed on set according to your desires, so you have a gaffer who is knowledgeable. On the shooting day or the day before you talk about the specifics of each scene or you adjust the lighting. Or you do the lighting with the gaffer on the given day right after the rehearsal. Surround yourself with the best people so you can work less and I want to work as little as possible.”

Spielberg likes to move the camera in a fluid, expansive way, with rigs such as a Technocrane, that reveals large portions of the location. This adds to his challenge of lighting as it’s far easier to light in a single direction with a 15 degree camera angle than it is to cover 270 degrees of the set.

Although for other films such as Saving Private Ryan a lot of handheld moves were done to introduce a feeling of realism that placed the viewer right down on the shoulder of the operator, in the middle of the action.

To inject even more intensity into an already shaky image he used Clairmont Camera’s Image Shaker. This is a device which can be mounted onto the front bars of the camera and vibrates at a controlled level with vertical and horizontal vibration settings which could mimic the effect of the explosions happening around the soldiers. 

CONCLUSION

Kaminski uses whatever technical trick he can think of to create visual metaphors that push the story forward, whether that’s done photochemically, with a filter or by physically shaking up the image.

In the end, the technical solution or piece of equipment itself is less important than the cinematic effect that it produces.

Spielberg and Kaminski’s filmmaking is an intertwined creative partnership which has combined Spielberg’s traditionally cinematic visual direction with Kaminski’s focus on visual metaphors. Sometimes this means perfect golden backlight, but other times a feeling of realism that is far more ugly and true to life is what is required.

Gray Kotze

Alexa 35 Reaction: Arri's First New Sensor In 12 Years

My first reaction to details about the Alexa 35 prior to the release of the camera.

We’ve been hearing rumours that Arri has been developing a new Super 35 4K camera for years…Well, it seems it’s finally time. A brochure for the new Alexa 35 has leaked that outlines all the features of this new camera.

If you follow the channel you’ll know that I don’t really react to news stories but rather focus on discussing a more general overview of filmmaking topics. However, since I think this new Alexa 35 has the potential to take over the high end cinema camera industry in a similar way that the original Alexa Mini did all those years ago, I’m going to run through and react to some of the key features of this new camera.

BACKGROUND

Before I start, I should probably mention that Arri’s approach to camera development and releasing new cameras is a bit different to some other brands. Brands like Red, for example, are known for putting out cameras as soon as they can and then sorting out any bugs or issues that arise in early testing.

Arri is far more conservative and precise about their releases. They don’t release new gear very often. The Alexa 35 represents Arri's first new sensor that they have developed in 12 years. So, when they do choose to unveil a new piece of gear to the public you can rest assured it has been thoroughly tested and carries a reputation that it will live up to all the specs that they mention.

SUPER 35 4.6K

Arri’s cameras are all developed to serve a specific section of the cinema market, defined by sensor size, specs or the physical size of the camera. For example, the Alexa Mini was developed as a Super 35 camera which was small enough to be used on a gimbal. Or the Alexa 65 was developed to provide a 65mm digital sensor size.

The Alexa 35 was developed as an update of the Alexa Mini: a Super 35 sensor, a small form factor and the crucial addition of higher recording resolutions. Apart from its effect on the images, a big reason for the resolution increase was to meet the 4K capture requirement for Netflix Originals. Previously this was only possible with Arri's larger-sensor cameras, like the Mini LF, and was unavailable in the Super 35 format.

As I’ve said in a previous video, Super 35 sensors have a different look and field of view than large format cameras. Since Super 35 has been the standard format throughout cinema history, it also offers the largest range of cinema lenses to choose from.

SPECS

So let's run through some key specs. Like their other new cameras, the Alexa 35 can record in ProRes or ARRIRAW. It tops out at 4.6K in Open Gate and can record up to 75 frames per second onto the larger 2TB Codex drives, which goes down to 35 frames on the 1 TB drive.

In regular 4K, 16:9 mode, this frame rate is pushed up to 120 in ARRIRAW. This is a nice upgrade from the Mini LF and will cover most slow motion needs on set, before needing to change to a dedicated slow motion camera like a Phantom. 

An impressive feature of this new sensor is that Arri has found an extra one and a half stops of dynamic range in the highlights and another stop in the shadows. This brings the total exposure latitude of the camera to 17 stops. 

They also claim that the highlights have a naturalistic, film-like roll off to them. To me, how a cinema camera handles the highlights is one of the most important factors in creating a pleasing filmstock-like look. It’s something that the previous ALEV 3 sensor did well, which I’m sure will continue or be improved upon by this new iteration.

As many DPs tend to push a more naturalistic lighting style these days, I think the increased dynamic range they claim will help control light in contrasty exteriors and hold detail in the highlights of hot windows on interiors.

More manufacturers these days, such as Sony, have been moving to a dual native ISO model that has a standard ISO for regular use and a boosted ISO for low light situations.

It seems Arri hasn’t gone quite this far but has made a move in the direction of improving the low light performance of the camera with what they are calling an ‘Enhanced Sensitivity Mode’. This can be activated when the EI is set between 2,560 and 6,400. They claim this creates a low noise image in low light and is targeted at filmmakers who want to use available light during night shoots. 

When it comes to colour, Arri has developed a new workflow called Reveal colour science, which they claim is a simpler workflow for ARRIRAW post production and leads to higher quality images with accurate, life-like colour. They also claim that Alexa 35 footage can be intercut with footage from their existing line of Alexa cameras. While I assume the colour will therefore be fairly similar to the existing Arri look, this is something that will need to be seen once footage starts getting released.

TEXTURES

Now for a new feature of the Alexa 35 that I’m excited about: what they are calling Arri Textures. When digital cameras were originally introduced, the common way of working with them was to record as flat a log image as possible, which left more room for the image to be manipulated in post production - creating a look, adding artificial film grain, adjusting saturation, these kinds of things.

I think as cinematographers have gotten more used to the digital workflow there has been a bit of a push to go back to the ways of old where the decisions that cinematographers made on set determined the look of the negative. 

Some do this by creating a custom LUT before production, which is then applied to the transcoded files used in the edit, so that a ‘look’ for the footage is established early on, rather than found later when it’s handed over to a colourist at the end of the job.

With that said, Arri Textures is a set of in-camera settings that define the amount and character of the grain in the image, as well as the contrast in the detail, or sharpness.

So, cinematographers now have the ability to change the way the camera records an image, much like they would back in the day by selecting different film stocks. I think this is a great idea as a tool as it puts control back into the hands of cinematographers and allows them to make these decisions on set, rather than having to fight for their look in the grade. 

ERGONOMICS

All of these new features and the higher resolution demand more power. With that in mind, the Alexa 35 is a completely 24V powered camera - unlike prior cameras, which could run off 12V batteries like V-locks as well as 24V power.

This will be done with their new system of B-Mount batteries. I haven’t personally worked with these batteries yet, but one plus I foresee, apart from them providing more consistent power, is that they can be used by camera operators who operate with their hand on the back of the battery.

This has become a popular way to operate, particularly with a rig like an Easyrig. I always found older gold mount or V-mount batteries had a tendency to lose power and shut down the camera from time to time as the contacts shifted when operated. This should no longer be a problem with the B-mount. 

In terms of its form factor, I think this new Alexa is a great size, around the same size as the Mini LF - a little larger than the original Mini but small enough to be used for handheld and gimbal work. 

The pictures show the addition of a little menu on the operator’s side of the camera, with quick access to basic settings like frames per second, shutter, EI, ND and white balance. It kind of reminds me of old Arri film cameras that came with a little setting display screen on the operator side.

The main reason I think this will be useful is for when the camera needs to be stripped down, for Steadicam, gimbal or drone, and loses its viewfinder which has the main menu access. On the old cameras if you needed to change settings, you’d have to awkwardly plug in the eyepiece, and wait for it to power up before you could do so, or do it through the Arri app on a phone which can be buggy. This new menu should save time in those scenarios.

Other than that, they’ve added some extra user buttons, which reminds me of the Amira a bit and is perhaps intended for quicker use in documentary situations. The new camera comes with a bunch of re-designed components, with the intention of making it a small but versatile camera that can be built into light or studio setups.

Finally, one criticism I have is that, like the Mini LF, the Alexa 35 only has three internal ND strengths: 0.6, 1.2 and 1.8. I’m surprised they didn’t try to add more to compete with Sony’s Venice, which has eight internal ND strengths from 0.3 to 2.4. I know cinematographers who like shooting on the Venice almost entirely for the ease and speed of having all the internal NDs you could need.
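
Those ND numbers are optical densities, not stops: every 0.3 of density halves the light, which is one stop. As a quick illustrative sketch of that log maths (nothing here comes from Arri or Sony documentation), the conversion looks like this:

```python
import math

def nd_to_stops(density: float) -> float:
    """Convert neutral density (log10 of light attenuation) to stops.

    Each 0.3 of ND halves the transmitted light, i.e. one stop,
    because log10(2) is roughly 0.301.
    """
    return density / math.log10(2)

# The three internal NDs mentioned above work out to about 2, 4 and 6 stops
for density in (0.6, 1.2, 1.8):
    print(f"ND {density} ~= {nd_to_stops(density):.1f} stops")

# Venice-style 0.3 increments from 0.3 to 2.4 give one-stop steps, 1 through 8
assert round(nd_to_stops(0.3)) == 1
assert round(nd_to_stops(2.4)) == 8
```

So a 0.6 / 1.2 / 1.8 set jumps in two-stop increments, while a 0.3-stepped set lets you dial exposure one stop at a time.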


What A Steadicam Operator Does On Set: Crew Breakdown

In this Crew Breakdown video, let’s take a look at the Steadicam Operator and go over what their role is, what their average day on set looks like, and a couple tips that they use to be the best in their field.

INTRODUCTION

A long time ago, in a world far before low cost gimbals were a thing, there were only a handful of options when it came to moving cameras with a cinematic stability.

You could put a camera on a dolly. You could put a camera on a crane. Which are both great options, but what about if you wanted to do this shot? How do you chase a character over uneven ground, through twists and turns, at a low angle for an extended, stabilised take? 

The answer was with a piece of stabilising equipment invented by Garrett Brown, called the Steadicam, that could attach a camera to an operator, giving filmmakers the mobility of a handheld camera combined with a cinematic stability.

This created the new crew position on a film set of Steadicam Operator. So, in this Crew Breakdown video, let’s go over what their role is, what their average day on set looks like, and a couple tips that they use to be the best in their field.   

ROLE

“I liked handheld. I did not like the way it looked - then or now. And so what I needed was a way to disconnect the camera from the person.” - Garrett Brown, Steadicam Inventor

Before going over what the role of the Steadicam operator is, let’s take a basic look at how a Steadicam works. 

A Steadicam is basically a perfectly balanced, weighted gimbal attached to the camera operator’s body that isolates the camera from the operator’s movement. This allows the camera to be moved around precisely with smooth, stabilised motion.

It can be broken down into three basic sections: the vest, the arm and the sled. The sled includes a flat top stage which the camera sits on, and a post which connects it to the bottom section, holding a monitor mount and a battery base.

The top stage with camera and the bottom stage with the monitor and the batteries are positioned so the weight of the camera is counterbalanced and even. Like balancing a sword on a finger.

Having two ends which are perfectly balanced both adds weight, and therefore more stability to the rig, and puts the centre of gravity exactly at the operator’s grip, so that they can use their hand to adjust how the camera moves with delicate adjustments. 

This hefty weight is supported by a gimbal attached to the post, which attaches to an arm, which then attaches to a vest worn by the operator. The rig’s substantial weight, perfect balance and gimbal allows the operator to manoeuvre the camera around with a floating stability using the motion of their body and deft touches with their grip. 
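
That counterbalancing is essentially a moment balance around the gimbal: the camera's weight times its distance above the grip must equal the bottom stage's weight times its distance below it. A toy sketch with entirely made-up masses and distances (none of these figures describe a real rig):

```python
def bottom_distance(camera_kg: float, camera_dist_m: float,
                    bottom_kg: float) -> float:
    """Distance the bottom stage must sit below the gimbal so the
    moments about the gimbal cancel: m_top * d_top = m_bottom * d_bottom."""
    return camera_kg * camera_dist_m / bottom_kg

# Hypothetical rig: an 8 kg camera 0.25 m above the gimbal,
# balanced by 5 kg of monitor and batteries below it
d = bottom_distance(8.0, 0.25, 5.0)
print(f"bottom stage sits {d:.2f} m below the gimbal")  # 0.40 m
```

This is also why a heavier camera forces the operator to either add bottom weight or extend the post: the moments have to cancel for the centre of gravity to land at the grip.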

A Steadicam is therefore a great option for moving a camera through tight spaces, over uneven terrain, or flowing 360 degrees around actors in long takes.

It’s generally seen as providing more organic motion and the ability to do hard stops with precision better than 3-axis gimbals - which have a drifting motion to them before they come to a resting stop.

The role of the Steadicam operator is an interesting one, as it requires both a deep technical knowledge and proficiency as well as a creative flair and theoretical knowledge on how to move the camera and frame shots to tell a story.

Sometimes, but not always, a Steadicam op will also work as the primary camera operator (or the B-camera operator), operating shots from a tripod head, wheels and performing any Steadicam shots that are required.

Their job includes helping to build and balance the camera on the Steadicam rig, discussing a shot with the DP and director and then executing it - often adjusting between takes until the perfect take is in the can.          

AVERAGE DAY ON SET

Before the shoot begins, the Steadicam operator will show up to the gear check at the rental house where the camera team tests and assembles the gear. As different jobs will use different cameras and lenses, which come in different weights and sizes, it’s crucial that the camera is properly built and balanced during testing.

Nothing would be worse than building the camera on the day, without a gear check, only to realise that the lens is too front heavy to balance on the Steadicam. 

On the day of shooting, the operator will grab a shooting schedule or communicate with the 1st AD to determine what Steadicam shots need to be done and therefore when the camera needs to be built for Steadicam. Sometimes most of the day can be spent doing Steadicam, but usually it will only be reserved for a few shots, in different scenes or setups, spread throughout the day.

If there is a particularly tricky shot, or a choreographed long take that has been pre-planned, the operator may meet with the DP during pre-production, prior to shooting, and walk through the shot to work out how best to pull it off.

When it comes time for Steadicam, the first thing to be done is to build the camera. This is done by the 1st AC or focus puller who will strip the camera of excess weight, configure the necessary accessories, such as the transmitter or focus motors in the same place as they did during the gear check and attach the Steadicam’s sliding base plate to the bottom of the camera.

It is then handed off to the operator who will slide the camera onto the top stage and test it to make sure it is properly balanced on the gimbal. They’ll then throw on the vest, go up with the camera and run through a rehearsal or a rough blocking with the director, actors and DP to work out the movement. 

When they’re ready they’ll go for a take. The director and DP will watch a feed of the image transmitted to a monitor and give feedback on things like the speed of the motion or the framing, or suggest a new movement.

The camera team will often hand a wireless iris control to the DP, which they can use to change the aperture on the lens remotely if there are any changes in light.

Between takes when the camera isn’t needed, the operator will take the weight of the Steadicam off by placing it on a stand.

This is the core of their job. However, since the requirements of different shots can vary hugely depending on the situation, each shot may offer a different challenge when it comes to operating. Sometimes this may be the physical challenge of operating a heavy setup, other times it may be a matter of synchronising the timing of the movement with the actor and focus puller or the shot itself may require particularly nimble operation. 

The Steadicam operator has to be able to cooly and calmly adapt to each situation to provide the creative team with the kind of shot that they imagine under the pressures of a time limit.

TIPS

To become a Steadicam Operator you can’t just show up on set and learn as you go. The reason it is such a niche profession is that it takes lots of training, knowledge, practice and experience to be hired for high end film jobs. 

It’s also expensive. 

Typically, Steadicam operators buy their own Steadicam, which is a pricey piece of gear, attend Steadicam workshops where they are trained to operate it, and are then able to rent their expertise and their rig out to productions.

In recent years Arri also introduced the Trinity, which is similar to a Steadicam but adds a 3-axis stabilised head that allows the camera to move on the roll axis, plus self-balancing features which allow the camera to be moved from low mode to high mode during a shot and the post to be extended for extra reach.

With a traditional Steadicam, operators need to decide before a shot begins whether to shoot in the more common high-mode, or if the camera needs to be close to the ground with the post flipped around and used in low-mode.

Another option sometimes used is to hard mount the arm of the Steadicam on a moving vehicle. The operator then sits next to the rig to operate the camera without having to hold the full weight of it. 

An early example of this was worked out by Garrett Brown on The Shining for the famous hallway tracking shots. They hard mounted the Steadicam arm to a wheelchair which could then be pushed through the hotel corridors in either high mode, or inches from the ground in low mode.   

Since a Steadicam rig with a cinema camera is extremely heavy, operators try to minimise the amount of time that they carry the rig in order to save their stamina for shooting. Any time the camera isn’t going for a take they’ll use a stand to rest the rig, or have a grip standing close by so that they can hand the post off to them as soon as cut is called. 

Communicating with the AD to make sure that the camera only goes up at the last possible moment, and isn’t waiting there for ages while make-up does final checks and the director stands in to give notes, is another good way of minimising time holding the rig. 

Since the camera is set to balance perfectly, if there are big gusts of wind the camera can be shaken and experience turbulence. Therefore it’s good to make sure the grip department is carrying a ‘wind block’. This is a sheet of mesh material attached to a frame that is held by grips between the source of the wind and the camera in order to minimise turbulence.

Another crew member the Steadicam operator needs to communicate with is the focus puller. Since on an average shoot day the camera will usually need to alternate between studio builds and Steadicam builds, the 1st AC and the Steadicam operator should come up with the easiest possible method of changing between these configurations, to save the production the most time. Because, on a film set more than anywhere else, time is money.


How The French New Wave Changed Filmmaking Forever

Out of all of the film movements I’d say one of the most influential of them was the French New Wave. In this video I’ll outline four things from this film movement that are still present in how movies are made and thought about today, which were responsible for altering the course of filmmaking forever.

INTRO

“He immediately talked about, kind of, the French New Wave portrait of youth.” - Greta Gerwig

“The beginning of Jules and Jim, the first three or four minutes influence the style of Goodfellas and Casino and Wolf of Wall Street and so many.” - Martin Scorsese

“Godard was so influential to me at the beginning of my aesthetic as a director, of, like, wanting to be a director.” - Quentin Tarantino

Throughout the decades, there have been many defining film movements in cinema. Some have had a longer lasting impact than others. Out of all of them I’d say one of the most influential of these movements was the French New Wave, which took place from the late 50s to the late 60s. Its impact can still be seen to this day.

During this time various directors emerged who made films that could broadly be classified by their similar philosophy and approach towards experimentation and style. 

Many of these directors began their careers as film critics and cinephiles who wrote for the magazine Cahiers du Cinéma where they rejected mainstream cinema and came up with a sort of film manifesto that encouraged experimentation and innovation. 

In this video I’ll outline four things from this film movement that are still present in how movies are made and thought about today, which were responsible for altering the course of filmmaking forever.

AUTEUR THEORY

“An Inquisition-like regime ruled over French cinema. Everything was compartmentalised. This movie was made as a reaction against everything that wasn’t done. It was almost pathological or systematic. ‘A wide-angle lens isn’t used for a close up? Then let’s do it.’ ‘A handheld camera isn’t used for tracking shots? Then let’s do it.’” - Jean Luc-Godard

In 1954, director François Truffaut wrote an article for Cahiers du Cinéma called ‘A Certain Tendency of the French Cinema’, in which he described his dissatisfaction with the safe, traditional and unimaginative way literary works were being adapted and filmed.

Up until then movies were largely credited to the actors who starred in them, or to the studios and producers involved in their funding and creation.

Instead, the cinema of the French New Wave put forward the idea that the real ‘author’ or ‘auteur’ of a movie should be the director. They should be the primary creative driving force behind each project by creating a visual style or aesthetic specific to them. Their themes, tone, or overall feeling from their films should also be consistent and identifiable across their overall body of work. 

If you could glance at a film and immediately tell who the director behind it was - that was a sign it was created by an auteur. 

A film by Quentin Tarantino will have ensemble casts, non-linear storylines, chapter divides, mixed genre conventions and pay homage to the history of cinema.

A film by Wes Anderson will have fast-paced comedy, childhood loss, symmetrical compositions, consistent colour palettes and highly stylised art direction. 

This idea was revolutionary as it encouraged directors to tell stories through their own distinctive voice, rather than acting as craftsmen that followed the same rules and chiselled out each film the same way for a studio.

All it takes is watching a few trailers or the credits in a film to tell that auteur theory is still alive and well. Many movies use the name of the director as a selling point, even more so than the actors in some cases.

If we turn to short form filmmaking, a huge number of directors of commercials or music videos get hired by clients and agencies because they want their film told in a specific style associated with that director. 

You hire The Blaze to direct if you want a character-focused, wildly energetic, passionate, personal journey told with a fluidity of movement. You hire Romain Gavras to direct if you want a carefully coordinated, composed, concept-driven set piece.

But this French New Wave idea of the director as an auteur is just the first thing that had an undeniable impact on how cinema today is created.

LOW BUDGET

“I really like Band Apart. In particular it really kinda grabbed me. But one of the things that really grabbed me was that I felt I almost could have done that. I could’ve attached a camera to the back of a convertible and drive around Venice boulevard if I wanted to.” - Quentin Tarantino

In their financially riskier pursuit to break free from the traditional mould of French cinema and create their own inventive styles as auteurs, many French New Wave directors had to work with low budgets.

This was also influenced by the financial constraints of post-World War Two France.

Rather than seeing it as a disadvantage, a lot of the movies that came out of this period used their lack of resources to break conventional rules and form their own style - which we’ll get into more a bit later.

They took some cues from the Italian Neorealist movement that preceded it, which cut costs by shooting on location and working with non-professional actors in rural areas.

Likewise, many French New Wave films worked on location, with a bare bones approach to lighting and homemade, DIY camera rigs. This allowed them to work quickly, unencumbered by large crews and introduced a more on-the-ground aesthetic to the filmmaking. 

This further democratised filmmaking and made it more accessible than ever before. It showed that big studios were not always needed to produce great cinema.

This democratisation of filmmaking expanded further throughout the years, until it exploded even more with the introduction of low budget digital cinema cameras. 

There’s a reason that many low budget indie films today still use French New Wave films from this period as a primary reference and inspiration for, not only what is possible to achieve with limited resources, but also the kind of look and style that comes with it.  

VISUAL STYLE

“All these films had been very different of what had been French cinema. What was in common was to use a lot of natural light, sometimes use non actors, natural sets, a sort of speed in the inspiration and the work. That is what was in common.” - Agnes Varda 

What emerged from this rejection of cinematic tradition in a low budget environment was a burst of films that broke existing filmmaking ‘rules’ and had a vigorously experimental style.

Part of this was informed by a documentary-esque approach to cinematography that freed the actors up to move and improvise. Like documentaries, these films were largely shot at real locations, relied mostly on natural light (which allowed them to shoot 360 degrees in a space), used a reactive, handheld camera and sometimes employed non-professional actors who would improvise dialogue, blocking and actions.

All this went against the more formal conventions that were previously expected of traditional studio films that were shot in studio sets, off a rigid dolly, with perfect, artificial lighting and precise blocking of a pre-approved screenplay.

In this way the French New Wave paved a path that made it OK for future filmmakers to work in a rougher, more naturalistic style and broke down the very notion that cinematography needs to conform to specific rules.


EXPERIMENTATION

“I think a lot of it has to do with the relentlessness of the voice over and the rapid speech and also the pace of the music under it. It feels like there’s a sense of freedom. Anything could happen at any moment…Narrative is completely fractured I think.” - Martin Scorsese

French New Wave directors saw exciting possibilities for using film as a medium - more like painters or novelists did - which could not only be used to tell stories but also to translate their thoughts or ideas by experimenting with form and style. 

Much of this was done in the edit.

Whereas older films may have used a traditional, linear story, various scenes and exposition to unpack characters, films like Jules and Jim used voice over, fast paced music and snappy editing to immediately introduce characters and their relationships in a more fractured way that compressed time into a montage.

Directors like Godard broke down the medium even more into a self conscious, post modern vision by having characters literally break the fourth wall and talk directly into the camera, face to face with the audience.

Instead of attempting to suspend disbelief, Godard made his audience very aware that what they were watching was something constructed by an artist. 

Breathless also went against a universal rule of cinema and used jump cuts, a technique which cuts forward in time using the same shot, without changing the angle or shot size. The effect is an abrasive ‘jump’ forward in time. 

This technique influenced future filmmakers by tearing down the idea that the rules of cinema should be strictly followed. This post modernism that was pushed by the French New Wave has now seeped into every kind of contemporary visual art - including how many YouTube videos are now edited.


Do Cinematographers Like Lens Flares? Textured vs Clean Images Explained

When it comes to the question of whether clean or textured images should be favoured, cinematographers are generally split into two different camps.

INTRO

“I can’t stand flares. I find any artefact that is on the surface of the image a distraction for me. The audience or I’m then aware that I’m looking at something that is being recorded with a camera.” - Roger Deakins, Cinematographer

“If the light shone in the lens and flared the lens that was considered a mistake. I feel particularly involved in making mistakes feel acceptable by using them. Not by mistakes or anything but by endeavour.” - Conrad Hall, Cinematographer

When it comes to the question of whether clean or textured images should be favoured, cinematographers are generally split into two different camps. Some see their goal as being to create the most pristine, cinematically perfect visuals possible, while others like to degrade the image and break it down with light and camera tricks.

Before we discuss the pros and cons of clean and textured images, we need to understand some of the techniques cinematographers use that affect how an image is captured. Then I’ll make the case for clean images and the case for textured images - and you can see which side of the fence you land on in the debate.

WHAT MAKES AN IMAGE CLEAN OR TEXTURED

When cinematographers talk about shooting something that looks clean, they are referring to an image which has the subject in sharp focus and is devoid of any excess optical aberrations, video noise, grain or softening of the highlights or bright parts of the frame. Some cinematographers, however, like to introduce different kinds of textures by deliberately ‘messing it up’.

The most easily identifiable optical imperfection is the lens flare. This happens when hard light directly enters the open glass section at the front of a lens and bounces around inside the barrel, reflecting off the different pieces of glass, which are called elements.

So to get a lens flare, cinematographers use a backlight placed directly behind a subject, or at an angle, so that it shines straight into the lens. A common way of doing this is to use the sun as a backlight and point the camera directly at it.

In the past, flares were often seen as undesirable so a few tools were introduced to get rid of them. To prevent a flare you need to block the path of any hard light that hits the lens directly. A mattebox is used not only to hold filters but also to block or flag light from hitting the front element. A top flap and sides can be added to a mattebox to cut light, as can a hard matte - which clips inside the mattebox and comes in different sizes which can be swapped out depending on how wide the lens is.

If a shot is stationary and the camera doesn’t move, the lighting team can also erect a black flag on a stand to cut light from reaching the lens.  

On the other hand, a trick some use to artificially introduce a flare when there isn’t a strong backlight is to take a torch or a small sourcy light like a dedo and hit the lens with it from just out of shot.   

Different kinds of lenses produce different kinds of flares, which are determined by the shape of their glass elements, the number of blades that make up the aperture at the back of the lens and the way in which the glass is coated. Standard, spherical lenses have curved, circular elements that produce round flares that expand or contract as the light source changes its angle. 

Anamorphic lenses are made up of regular spherical glass with added cylindrical elements that squeeze the image horizontally onto the film or sensor. The image is then de-squeezed in post to get a widescreen aspect ratio.

Because of this, anamorphic lenses produce a horizontal flare that streaks across the frame. The Panavision C-Series of anamorphic lenses are famous for producing a blue anamorphic lens streak which is associated with many high end Hollywood films. 
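
The de-squeeze arithmetic is simple: the captured frame's aspect ratio is multiplied by the lens's squeeze factor. As a rough sketch (the sensor dimensions are generic, not from any particular camera), a 2x anamorphic like the C-Series over a 4:3 capture area gives about 2.66:1, which is typically cropped slightly to the 2.39:1 delivery standard:

```python
def desqueezed_ratio(width: float, height: float, squeeze: float) -> float:
    """Aspect ratio after horizontally stretching the captured frame
    by the anamorphic squeeze factor."""
    return (width * squeeze) / height

# 2x anamorphic on a 4:3 capture area
ratio = desqueezed_ratio(4, 3, 2.0)
print(f"{ratio:.2f}:1")  # prints "2.67:1"
```

The same arithmetic explains why lighter squeezes (1.3x, 1.5x, 1.8x) exist for sensors that are natively wider than 4:3.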

The glass elements inside a lens have different types of coatings. Modern coatings are used to decrease artefacts and limit flooding the image with a haze when the lens flares.

As technology has improved, these coatings have gotten progressively better at this, and therefore more modern lenses produce a ‘cleaner’ image. One way cinematographers who like optical texture get around this is to use vintage lenses, whose older coatings don’t suppress flares as much and let highlights bloom or take on a subtle, angelic haze. You can even get uncoated lenses for those who really want to push that vintage look.

Another option to soften up an image a bit is to use diffusion filters. These are pieces of glass that are placed inside a mattebox and create various softening effects, such as decreasing the sharpness of the image, making the highlights bloom and softening skin tones. 

Some examples of these filters include Black Pro-Mists, Glimmer Glass, Pearlescents, Black Satins, Soft FX filters - the list goes on. They come in different strengths, with lower values, such as an eighth providing a subtle softness and higher values providing a heavy diffusion.

Some cinematographers even go more extreme by using their finger to deliberately smudge or dirty up the front of a filter. 

A final way of introducing texture to an image is with grain. This can be done either by shooting on a more sensitive film stock, like 500ASA and push processing it, by increasing the ISO or EI on the camera, or by adding a film grain effect during the colour grade in post production. 
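
To illustrate the post-production route in the roughest possible terms, grain can be faked by perturbing each pixel with random noise, with the noise strength standing in for how fast (and therefore grainy) the simulated stock is. This is only a toy, standard-library sketch with made-up values; real grain tools model grain size, structure and colour response far more carefully:

```python
import random

def add_grain(pixels, strength, seed=0):
    """Return a copy of an 8-bit greyscale pixel list with Gaussian
    noise added, clamped back into the 0-255 range. `strength` is the
    noise standard deviation - a stand-in for the stock's speed."""
    rng = random.Random(seed)  # seeded so the 'grain' is repeatable
    out = []
    for p in pixels:
        noisy = p + rng.gauss(0, strength)
        out.append(max(0, min(255, round(noisy))))
    return out

flat_grey = [128] * 8
print(add_grain(flat_grey, strength=12))  # subtle texture
print(add_grain(flat_grey, strength=30))  # heavy, pushed-stock-like texture
```

The same idea scales up to real images: apply the noise per pixel (per channel for colour) and let the strength follow the look you are after.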

THE CASE FOR TEXTURED IMAGES

“What lenses? Should it be sharp? Should it have flaws? Should it have interesting flares? I always try to be open to everything.” - Linus Sandgren, Cinematographer

Now that I’ve listed all the ways that an image can be messed up by cinematographers, let’s go over some reasons why anyone would actually want to do this in the first place.

Up until about the 1960s or 1970s, the idea of intentionally degrading how an image was captured wasn’t really prevalent. However, movements like the French New Wave and New Hollywood rebelled against capturing a polished, perfect representation of each story, intentionally using artefacts like flares to break from that polish.

Producing optical ‘mistakes’ with a more on-the-ground camera lent the images an authenticity and grittiness, in a similar way to many documentaries.

In different contexts, optical aberrations, like lens flares, have been used to introduce different tonal or atmospheric ideas. For example, Conrad Hall went against the Hollywood conventions of the time and embraced flares on Cool Hand Luke to create a sense of heat from the sun and inject a physical warmth into the image that reflected the setting of the story.

Some filmmakers like deliberately using lower gauge film such as 16mm or even 8mm to produce a noisy, textured image. Often this is perceived as feeling more organic and a good fit for rougher, handheld films.

Textured images with a shallow depth of field also feel a bit dreamier, and can therefore be a good tool for representing more experimental moments in a story or to portray a moment that happened in the past as a memory.

Since the digital revolution, many DPs have taken to using diffusion filters and vintage lenses on modern digital cinema cameras - to balance out the image so that it doesn’t feel overly sharp. 

Degrading the image of the Alexa by shooting at a higher EI, like 1,600, shooting on lenses from the 1970s, or using an ⅛ or a ¼ Black Pro-Mist filter, are all ways of bringing back into the image the organic texture that came naturally when shooting on film.

THE CASE FOR CLEAN IMAGES

“Digital cameras were able to give us a beautiful, very clean, immersive image that we were very keen on… It almost translates 100% what you are feeling when you are in the location.” - Emmanuel Lubezki, Cinematographer

On the flipside, some DPs seek a supremely clean look that pairs sharp, modern glass with high resolution digital cameras.

One reason for this is that clean images better transport the audience directly into the real world, presenting things much as our eyes naturally see them. Clean images are regularly paired with stories that need to feel realistic.

These cinematographers see any excess grain or aberrations as a distraction that pulls an audience out of a story and makes them aware that what they are seeing isn’t reality and is rather a visual construction.

When light flares across a lens it’s an indication that the image was captured by a camera and may disrupt the illusion of reality.

Sometimes filmmakers also want to lean into a clean, sharp, digital look for the story. It’s like choosing to observe the world directly, in sharp focus, rather than through a hazy, fogged up window.


Cinematography Style: Ari Wegner

Ari Wegner's cinematography isn't tied down to one particular look, and is rather based on a careful and deeply thought out visual style that uses informed creative decisions to present a look that is tailor made for each individual story or script.

INTRODUCTION

“I think that’s the question for any film. How do you get the energy of the script or the idea into it visually? Every film is different and every scene is different but if you know what your aspiration is to do that then you can think of some ideas of how to achieve that.”

In this series I’ve talked before about how some cinematographers like to create a look that is fairly consistent across much of their work, while others distance themselves from one style and mould the form of their cinematography depending on the script or director that they are working with.

Ari Wegner very much falls into the latter category. The films that she shoots are never tied down to one particular look, and are rather based on a careful and deeply thought out visual style that uses informed creative decisions to present a look that is tailor made for each individual story or script.

In this video I’ll unpack this further by diving into the philosophy behind her photography and showing some of the gear that she uses to execute those ideas.      

BACKGROUND

Growing up in Melbourne with parents who were both artistically inclined filled her with an appreciation for the arts and creative thought from an early age.

Her desire to work in film was sparked by her media teacher exposing her to short films, notably one by Jane Campion. She then changed her focus from photography to cinematography.

After graduating from film school she spent years shooting local independent films and documentaries, before breaking out by photographing Lady Macbeth, which screened at numerous festivals.

Some of the directors she’s worked with include: Janicza Bravo, Jane Campion and Justin Kurzel.

PHILOSOPHY

Something which is key to formulating the carefully considered look of each film is the time that she takes to prepare. Usually about four to six weeks are scheduled for pre-production on a movie - and a cinematographer may only be present for a fraction of that. However, on Jane Campion’s The Power of the Dog, an entire year was dedicated to pre-production, with Wegner present right from the get go.

“When Jane first called me to ask if I’d be interested, one of the things that she had as a condition was that she wanted someone to start straight away. I often feel like there’s not enough time to prepare or that there’s a rush. Once everyone’s in the same place the decision can feel very pressured. So, Jane wanted this film to feel relaxed and not have to be rushed into any decision.”

This luxuriously long prep allowed them to take their time and gradually and deliberately develop the look of the film. 

Together they scouted potential shooting locations in New Zealand, learnt about the land and foliage in the area, calculated what time of year would be best to capture the kind of dry-brown look they wanted and found a mountain range that felt mythical yet humble enough to believably be on a family ranch.

Campion and Wegner also set aside four to six weeks just to fully storyboard the script. They spent time drawing out the shots that they imagined, then took those ideas and drawings to the physical set that was being constructed and walked through them to finalise the storyboard. 

All this consideration and prep resulted in a cinematic style that was down to earth, observational, natural and unadorned. 

Part of this came from the colour palette - which roots the look of a lot of her films. Whether it’s the vivid, saturated colours in Zola, the warm nights and cool days in The True History Of The Kelly Gang, or the muted, bone-dry natural palette of The Power of the Dog.

Part of the job she takes on is getting different scripts, stories and ideas and applying creative thought to them in collaboration with the director in order to create a visual style that is suitable for each project.

“We started having some big conversations about colour. Myself, the production designer, the costume designer, about how we were going to unify the whole world. Making a film set in Montana in the 1920s in New Zealand. We decided a strict colour palette would be a good part of the way to do that.”

GEAR

Like her approach to the colour palette of each film, Wegner’s lighting and selection of camera gear is based on coming up with solid cinematic ideas that represent the story.

To convey the shift in mood of the character that happens in Zola, she used a subtle shift in lighting and camera gear selection. As the relationship between the two characters shifted from an infatuated, trusting, light, fun one to a distrustful, darker, unhappy one, Wegner subtly introduced more shadows, colour clashes, contrast and darkness into the image. 

She did this with low key lighting and also by shooting the initial scenes using a low-con filter which subtly faded out the blacks and made the highlights blossom into the mid tones.

To dial in specific colour temperatures and hues into the lighting, she used various LED fixtures, such as: Arri SkyPanels, Astera Tubes and Litemats. She likes using LED lighting as well as practical fixtures that can be seen in the frame - whether that’s from a natural source like a fire or an artificial source like a lamp.

Sometimes she takes the edge off the quality of artificial or natural light by placing diffusion gels in front of the light source to soften the shadows that appear on a face.

Another example of how she uses light to represent an idea can be seen in The Power of the Dog. Often interiors are well lit and act as a safe haven for characters, however in this film she introduced large amounts of shadow inside the farmhouse to make the space more ominous and unsafe, by pushing hard light through windows and backlighting characters so that they appeared in silhouette.

She shot on 16mm film on Zola at the director’s request, to give it a magical quality that accentuated the highly unusual, almost dreamlike story. She used Kodak 250D for the sunny exteriors and the more sensitive 500T 7219 for interiors and night scenes.

From looking at some behind the scenes photos, it seems that she uses a digital stills camera to meter for the film, taking a picture with the same exposure settings as the 16mm cinema camera and adjusting until she finds an exposure level that she is happy with.
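One plausible way such matching could work, sketched with illustrative numbers (this is an assumption about the technique, not a documented account of her workflow): a film camera’s exposure time follows from its frame rate and shutter angle, so a stills camera set to the nearest equivalent shutter speed, with the same aperture and the stock’s rated ISO, should land close to the cine exposure.

```python
def cine_shutter_speed(fps, shutter_angle_deg=180.0):
    """Exposure time of a rotary shutter: t = (angle / 360) / fps."""
    return (shutter_angle_deg / 360.0) / fps

# 24 fps with a 180-degree shutter exposes each frame for 1/48 s,
# so a stills camera set near 1/50 s approximates the cine exposure.
t = cine_shutter_speed(24)
print(round(1 / t))
```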

Usually she favours shooting digitally on variations of the Arri Alexa, such as the Mini, the XT or the Mini LF. She’ll often pair these cameras with moderate, vintage glass that breaks down the digital sharpness with a very slight optical softness.

Some examples of lenses she’s used include: Zeiss Super Speeds, Bausch & Lomb Super Baltars, Panavision Auto Panatar 1.3x anamorphics and zoom lenses from Angenieux or the Arri/Fujinon Alura range.

On The Power of the Dog she used zooms to shoot some of the wide shots at long focal lengths. This compressed the background and made it feel like it was closer to the characters. 

Once again, I’d describe the gear that she selects as deliberately chosen on an individual basis for each story: sometimes spherical glass, sometimes anamorphic glass, sometimes film, sometimes digital, sometimes a Tiffen 812 warming filter or a low-con in front of the lens and sometimes shooting it clean.

Rather than creating and shooting with a custom LUT like many other DPs do nowadays, Wegner likes to work and expose the camera using Arri’s standard Rec 709 LUT as a reference. This is a basic, natural look that is applied to the flat log or RAW footage. She’ll then create the final look for the film along with the director during the colour grade.      

“I love the idea of LUTs and then every time I try to make one it kind of scares me. Usually I shoot REC 709 a lot. Knowing at the end of the day with the dailies colourist you can take it to a nice place for the editors. With REC 709 I know it very well. It’s not pretty but it’s reliable.” 

CONCLUSION

Wegner’s methodology always uses the story as a jumping off point. 

After much consideration she then comes up with a visual look that will support the kind of ideas that are present in the story. 

Whether that’s using central framing in Lady Macbeth to show how trapped her character is, using her knowledge of lighting to introduce both a physical and a psychological darkness into a space, or reducing the contrast of the shadows in an image with a filter to brighten the mood.

Each film she shoots offers up an opportunity to do something new. 

Whatever photographic tool she chooses, it’s always deliberately thought out, planned and then executed to create visuals that subtly represent ideas and emotions that form her point of view on the movie.


The Advantages Of Low Budget Filmmaking: Red Rocket

Director Sean Baker is a prime example of a filmmaker who has directed many features, has been to all the festivals, got all the acclaim and yet chose to reside and make films in the low budget range. So let’s look at his latest movie, Red Rocket, through this lens and examine what some of the advantages are to producing low budget indie films.

Making it to the top and directing the biggest films in the world may be the ultimate goal for some, but there are definitely pros to working in the more realistic, lower budget film world of indie films.

Director Sean Baker is a prime example of a filmmaker who has directed many features, has been to all the festivals, got all the acclaim and yet chose to reside and make films in the low budget range. So let’s look at his latest movie, Red Rocket, through this lens and examine what some of the advantages are to producing low budget indie films.

“That’s the big thing that I’m always conflicted about. I’m like, should I make my life easier and make a film for a studio or go to a series. It’s definitely tempting because of monetary purposes but it’s more than that. I want to tell personal stories. Films take a long time, you put all of your energy, all of your heart into them, so why not make the movie you want to make.” - Sean Baker

It’s important to remember that what allows films to be made in the first place is financing. Like with any investment, when individuals - or in this case a production company - put money down, they need to balance how much is allocated against the risk of making that money back.

If a studio is throwing hundreds of millions of dollars into a movie, they need to keep their risk tolerance low by ticking off a list of items that have been proven to increase ticket sales and make money back. 

For example, choosing a story about a superhero with an existing fanbase, casting celebrity actors and doing test screenings of different edits to make sure that the movie is understood by and entertains the largest possible group of people, are known money printers and ways of decreasing the film’s risk of financially failing.

If instead you want to make a movie about a niche subculture, without any A-list celebrity actors, and be given complete creative freedom, you need to accept that it’s a higher risk project for investors and will therefore be allocated a low budget of, say, one point two million dollars - a much lower financial target to make back.

Red Rocket falls into the latter category: it looks at a niche character archetype, an adult film star who uses and recruits women into that world.

He cast the film through a combination of street casting, a theatre actor, first time actors, finding his lead through his Vine page and even giving one role to his producer slash continuity supervisor slash costume designer.

Operating at a low budget level gave Baker the freedom to make the kind of film that he wanted to - which was true to his vision.

Some may see it as a disadvantage, but I think another advantage that low budgets have is that they allow you to work with a small crew.

“A four person camera crew pulled off those images. You had Drew Daniels, you had a 1st AC, a 2nd AC and a gaffer/grip. Then you had a one man sound team and then you had my sister who is the production designer on the film. The other four were just producers wearing many many hats” - Sean Baker

This crew of just ten people was able to move quickly and efficiently, pulling off the feature on a tiny 23 day shooting schedule. While having more money, more crew and more gear grants filmmakers more control, it also takes far longer to set up shots and move to new locations.

Drew Daniels shot the movie on 16mm film on Sean Baker’s own Arri SR3, which was paired with two interesting 1.44x Auto Panatar lenses from Panavision: a 16mm and a 50mm.

These anamorphic lenses allowed them to shoot in a widescreen aspect ratio on 16mm and created a unique look that combined a Hollywood anamorphic grandeur with the more grainy, organic, low-fi look of 16mm Kodak film.
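The maths behind that pairing can be sketched quickly. A rough back-of-the-envelope check, assuming the standard Super 16 gate of roughly 12.52 mm x 7.41 mm:

```python
# Approximate Super 16 gate dimensions in millimetres.
gate_w, gate_h = 12.52, 7.41
native = gate_w / gate_h        # spherical aspect ratio, ~1.69:1
desqueezed = native * 1.44      # with a 1.44x anamorphic squeeze
# The 1.44x squeeze stretches Super 16's ~1.69:1 frame out to
# roughly 2.43:1 - in the neighbourhood of 2.39:1 widescreen.
print(round(native, 2), round(desqueezed, 2))
```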

Daniels’ camera crew was super bare bones, with a first AC to pull focus, a second AC to load the film and do the boards, and one gaffer slash grip to set up lights, rigs and any camera moves they needed. Having a smaller crew also created more of a family atmosphere that put the performers, especially first time actors, more at ease.

“My incredible actors go in front of a camera and they’ll try something and if it doesn’t work, who cares, it doesn’t work. Let’s go for an alt take and try something else. Getting everybody in that place where everybody is comfortable and feels safe. Red Rocket was perfect for that because it was a small ten person crew, tiny, we were a pod, we were very isolated. It just allowed for that. I think we just embraced that spontaneity. We were saying there’s improv in front of the camera so why can’t there be improv behind the camera.” - Sean Baker

Working with a small, flexible camera crew frees filmmakers up and allows them to go with the flow far more than on high budget productions where everything tends to be shot listed, storyboarded, pre-visualised, pre-lit and then executed as per the plan on the day of shooting.

As a general rule, the larger the setups are, the more budget is required, which in turn locks filmmakers into pre-planning. Some like working this way, but other filmmakers enjoy working with fewer constraints, where it’s possible to add scenes, improvise dialogue or change shots based on unexpected happy accidents.

Many of the exterior scenes in the film were also shot without location permits in the small town. Having a low budget and a small production footprint allowed them to get away with it.

On Red Rocket, Baker encouraged his actors to improvise and explore at times. To find dialogue, blocking or actions that felt more natural and aligned with what their characters would do.

The best low budget films are usually those that lean into their limitations and create a tone and look that plays to their strengths. There’s a certain charm to films shot in a more intimate, on the ground, run and gun style, which adds a touch of realism and puts the audience in the characters’ shoes.

Filmmakers also have to be more careful in thinking about what they can and can’t include in the story. If there’s a big car crash in the story and you don’t have the financial resources to shoot it, you’re forced to communicate that information in creative ways.

More time, more money and more gear gives you more control, but sometimes accepting a lack of control can actually improve a film.

“I realised I couldn’t overcome it. So, there was an acceptance during pre-production that I was going to have to accept all of these freakin limitations. We realised we do not have the money and we do not have the time to throw at problems, so instead of tackling those problems we’ll pivot and go in another direction. So every day even though there was a problem every three hours there was also a miracle every three hours.” - Sean Baker

An example of this happened during the proposal scene. They knew that the local train passed through once a day and therefore only had one chance to get it in the background of the shot and only 20 minutes to set it up. 

As they rolled the camera, the conductor started blowing his horn at a moment that perfectly complemented the dialogue, resulting in a magically chaotic scene that couldn’t have been scripted.

Red Rocket shows that low budget filmmaking has many advantages. It allows the director more freedom to make the kind of film they want to make while working with a small, efficient crew that offers possibilities for improvisation, run and gun flexibility and finding magic moments.


Ultimately, it’s also a case study in how important it is to set up a lifestyle for yourself as a filmmaker which is both financially viable and creatively rewarding. 

Baker balances the financial side of making low budget films by also working on commercials on the side. Directing just one commercial a year is lucrative and finances his lifestyle and expenses while working on the lower budget films that he has complete creative control over.

Balancing the kind of creative and financial life that you want is something that everyone in the creative industry has to contend with. It’s down to each individual as to where they fall on the scale and how they plan their career to try and land in the space that they want to.
