Gray Kotze

How Ryan Coogler Shoots A Film At 3 Budget Levels

Ryan Coogler’s career has seen a progression all the way from making a low budget indie movie that became a festival smash, Fruitvale Station, to directing one of the biggest blockbuster Marvel films in the world - Black Panther. Let's take a deeper look at his career as a filmmaker.

INTRODUCTION

“Making a film is something that involves a lot of technicalities, you know. And it is hard work. And it is something that’s systematic to me that I’ve learned. Going up against time and money you know you never have enough of either one when making a film and I hear it’s still like that on films of higher budgets.”  - Ryan Coogler

Ryan Coogler’s career has seen a progression all the way from shooting a low budget indie movie that became a festival smash, to directing one of the biggest blockbuster Marvel films in the world. 

As you may have seen in this series, directing films at different budget levels has an inevitable impact on how movies are made. Despite this, Coogler’s work is all bound by characters and stories told through an empathetic lens, made by a close-knit team of collaborators, thoroughly researched and, to a large extent, shaped by how he writes his screenplays.

So, let’s take a look at the low budget Fruitvale Station, the mid budget Creed and the high budget Black Panther to see how his approach to making movies has evolved over time and identify some commonalities that are present throughout his career. 

FRUITVALE STATION - $900,000

“The biggest thing that I walked away with from film school is just a lot of my colleagues. You know, like, I met the composer that’s done all of my films at film school. One of my editors has worked with me the whole time. The community was the most valuable thing that it gave.” - Ryan Coogler

Coogler first became interested in screenwriting during a college creative writing course, where his teacher recognised the visual quality of his writing and suggested he try writing screenplays.

After undergrad he got into film school at USC, where he made a number of short films that did well at festivals alongside collaborators he would continue to work with throughout his career.

During his studies he mentioned to a friend who was studying law that he was interested in making a film about the Oscar Grant case. Later, when his friend began working on the case, he introduced Coogler to the Grant family, who gave him the rights to tell the story.

After the festival success of his shorts, and after getting the script for Fruitvale Station into the Sundance Screenwriters Lab in 2012, Forest Whitaker came on board to produce through his production company Significant Productions, and a production budget of $900,000 was secured to make the film.

“Being in the Bay area at the time and being the same age as the guys who were involved and seeing myself and a lot of my friends in them. Then shortly after seeing it become politicised. Seeing his humanity get split in two different directions. And the fact that he was a human being whose life was lost kinda got glossed over. You know, my outlet, my artistic outlet is filmmaking, so, oftentimes, when I see things or think about things I think of them in that format.” - Ryan Coogler

He compiled as much research as he could, from legal documents as well as conversations with the friends and family of Oscar Grant. His goal was to treat the story with sensitivity, humanising the characters on screen through an intimate, personal portrait of their relationships.

He let the story play out in a world that was as true to life as he could make it. To build this visual world grounded in realism, he turned to a close collaborator, cinematographer Rachel Morrison.

“Fruitvale’s a really interesting scenario for a cinematographer because you have tons of films that are based on true stories but very few that are based on true stories that happened three years ago in locations that still exist exactly as they were. So it’s not this interpretation of something. It was really important to be really, really authentic.” -  Rachel Morrison, Cinematographer

She pushed this idea of authenticity visually by shooting on a grainier, more organic Super 16mm film, using a handheld, exploratory single camera and by keeping the lighting as naturalistic, motivated and as true to life as possible.

The smaller gauge meant that they shot on wider focal lengths to cover the same field of view, and therefore had a deeper depth of field than they would have had shooting 35mm.
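The format-to-focus relationship can be sketched with the standard thin-lens depth of field approximation. The focal lengths, stop, subject distance and circle of confusion values below are illustrative, not figures from the production: for a matched field of view, Super 16 uses roughly half the focal length of Super 35 but is judged against a circle of confusion roughly half the size, which nets out to about twice the total depth of field.

```python
def dof_total(focal_mm, f_stop, subject_m, coc_mm):
    """Approximate total depth of field in metres using
    DoF ~= 2 * u^2 * N * c / f^2, which holds when the subject
    distance u is well short of the hyperfocal distance."""
    u_mm = subject_m * 1000
    dof_mm = 2 * u_mm**2 * f_stop * coc_mm / focal_mm**2
    return dof_mm / 1000

# Matched field of view: ~12mm on Super 16 vs ~24mm on Super 35,
# both at T/2 with a subject 3m away (illustrative numbers).
s16 = dof_total(12, 2.0, 3.0, 0.0125)   # CoC ~0.0125mm for 16mm
s35 = dof_total(24, 2.0, 3.0, 0.025)    # CoC ~0.025mm for 35mm
print(s16 / s35)  # Super 16 ends up with roughly twice the depth of field
```

The halved circle of confusion hurts the smaller format, but the squared focal length term dominates, which is why 16mm reads deeper at the same stop.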

Morrison shot the film on an Arriflex 416 with Zeiss Ultra 16 lenses which were donated to the production by Arri. The Ultra 16s are probably the sharpest Super 16 primes on the market and since there were going to be a lot of low light scenes, Morrison wanted lenses that would be as sharp as possible when shot wide open at T/1.3 on the lower fidelity 16mm Kodak 500T film.

An idea that the cinematographer discussed with Coogler was finding a middle ground between a deconstructed documentary realism and a fictional, elevated cinematic gravitas - where visual tweaks were made to elevate the story.

An example of this was how they used the colour grade in post production as a low budget way of changing the colour balance of the lights in the real shooting location. 

“In the BART station the fluorescent lights up above, actually they’re warm light - which is sort of a yellow, warm feeling to them. And it’s this terrible, terrible event and for me I always, the second I saw them I’m like, ‘Well we’re going to time it cool right?’ And then we got into this dialogue about will it still feel like the BART station if we time it to feel a little cooler instead of being warm. That was the dialogue which was so interesting. Most films I think are much less beat for beat interpretations of things.” -  Rachel Morrison, Cinematographer

By shooting with no large production design builds, staying quick, flexible and handheld, and using much of the ambient light that was naturally present at the real locations, Coogler pulled off the shoot with his tight-knit crew on a budget of $900,000.

CREED - $35 Million

“I’m a big Rocky fan. You know, I kinda inherited my love for those movies through the love for my father. So I knew the movies really well. I knew the world really well. I kinda came up with this idea where a young guy interacts with basically my dad’s hero at a time in his life where he’s ageing and dealing with his own mortality in a way that we’ve never seen him deal with it before. It’s really about me and my dad. As much as I could bring of my own, you know, partial inexperience. Really, my dad, my relationship with him, as a heartbeat for the creative tensions there.”  - Ryan Coogler

Contrary to what some may think, the idea for Creed was not conceived by a studio. Instead, Coogler had been toying with the concept for a Rocky spin-off with his USC friend Aaron Covington, just as a fun spec script, while he was working on Fruitvale Station.

At the Sundance Screenwriters Lab for his first film he secured an agent, who asked whether he had any ideas for projects beyond Fruitvale. After hearing his concept for Creed, the agent set up a meeting where Coogler pitched the idea to Sylvester Stallone - the original writer and, of course, lead actor of the Rocky franchise.

After securing Stallone’s buy-in to get a script written, MGM paid for him and Covington to write the screenplay. Appreciating the fresh perspective on the story and his character, Stallone gradually agreed to come on board, until MGM greenlit the project with a production budget of approximately $35 million.

In Stallone, Coogler found a new collaborator to work with.

“He’s also a producer on the film. He was comfortable taking a backseat here which is a big thing. He had written all the scripts, every single last one, for these types of films. What really made him excited was seeing this from the millennial perspective. You know, we’re talking about a 68 year old dude who is showing up in the cold everyday. And shows up for a day where we’re shooting four pages and he’s got 10 pages of character work that he did the night before. It was amazing and it was energising.” - Ryan Coogler

One example of Coogler’s openness to collaboration with his cast and crew: rather than throwing water in Michael B Jordan’s character’s face to wake him up, as it was written in the screenplay, Stallone proposed that his character play old records instead, as that’s what he felt his character would do. They went with the suggestion and it made the cut.

To create a visual language for the film which was a little bit ‘slicker’ than Fruitvale Station, but which was still grounded in a similar photographic feeling of realism, he turned to cinematographer Maryse Alberti - whose naturalistic work on The Wrestler he admired.

Rather than something more stylised like Raging Bull, they decided on a camera language which was more realistic and which at the same time paid subtle homage to the original Rocky film with its famous early use of the Steadicam - but didn’t copy the look.

“We looked back more at what not to do. Do you like this colour? Do you like this? No? Well, me neither. And taking the good things like the iconic thing of the steps and things like that. But, yeah, he was reinventing.” - Maryse Alberti, Cinematographer 

One way that they reinvented the film visually was by making the final boxing match look more like a modern TV broadcast of a real fight: shooting at a deeper stop of T/5.6 and using a higher-key lighting style.

They did this by building the ring on a soundstage and surrounding it with a 600-foot greenscreen wall, which could then be replaced with a composited crowd using visual effects. Alberti’s team lit the ring by rigging square truss above the space, from which they suspended 120 tungsten par can lights focused at different distances to provide even overhead lighting.

Because it’s tiring for actors to repeat these choreographed boxing scenes many times in a row and maintain the same level of energy, they shot with multiple cameras to get better coverage - in a slightly similar visual style to how big fights might be shot for TV.
This scene was covered with one camera on a technocrane for telescoping movement and wider shots, one camera on a Steadicam which could roam around the ring, and two handheld cameras capturing on-the-ground reactions.

They made the decision to shoot digitally on the Arri Alexa XT in 2.8K Arriraw with spherical Cooke S4 primes and some wide angle Angenieux Optimo zooms. She also used the Alexa Mini on a Movi for scenes which needed nimble camera moves through tight spaces, such as when the camera tracked from outside the ring, through the ropes and into the ring - which they achieved by passing the Movi off by hand to a new operator.

Alberti chose the S4s because they flattered skin tones, rendering them softly, which counteracted the digital grain they planned to add in post - grain that roughens up skin a little.

Creed was therefore pulled off on a much larger production budget of around $35 million: one that leaned on established franchise characters while invigorating the series with a new perspective, and that used a larger crew to run more gear, build sets, shoot more time-consuming action sequences and incorporate more visual effects work in post.


BLACK PANTHER - $200 Million

“The biggest difference actually wasn’t in the restrictions. It was actually, like, in the lack of restrictions. When I was making my first film, dealing with not a lot of money you have a lot of limitations and it helps you actually move faster because you can’t do just anything. Maybe sometimes there’s only one place you can put the camera. You can only be in this location for two hours and then you have to go. It makes it easier. When you can do anything and that’s kinda what happens with a film like this. That’s what I found made it a lot harder.” - Ryan Coogler

A lack of limitations means that more preparation time can be put into the project during pre-production. As with all his movies, Coogler’s role as a director began with him being involved in writing the script. 

An extensive period of prep time was necessary for this Marvel blockbuster undertaking which involved far more scope, intricate scenes and visual effects than any of his prior work had.

This required input from multiple heads of departments. For this he brought together many of his prior collaborators who’d worked with him since Fruitvale Station, and some even since his student short films days. This included editor Michael P. Shawver, production designer Hannah Beachler, composer Ludwig Göransson and cinematographer Rachel Morrison. 

The director and cinematographer had many discussions with Marvel’s VP of physical production and with visual effects supervisor Geoffrey Baumann, who oversaw a team of 16 different visual effects vendors working on the film.

Practically, this prep work involved doing things like creating a previs - a rough animated version of the entire cut of the film - and mapping out how they would cover a scene using a 3D printed scale model of a set for a casino scene they would be constructing.

One of the moves that they came up with for this fight scene was a shot where the camera transitioned between different characters on the set’s two floors by flying through the air. They rigged a Movi gimbal onto a cable rig, which lifted the camera to the second floor. From there, another operator could grab the camera off the line and begin operating it.   

While they were working on building this set, Morrison drew up a detailed lighting plan which involved rigging multiple overhead 8x8 and 4x4 blanket lights from SourceMaker, using higher output LRX Scorpion tungsten units to backlight and then using Varilite VL1000s which could be remotely swivelled to hit specific spots with targeted light. All of these fixtures were effectively ‘built into’ the set and rigged to a DMX board so that the levels could be adjusted quickly on the day of shooting.

Coogler turned his attention to detail for each character by focusing on things such as their costumes, which in the Casino scene were designed to take on the Pan African flag colours of green, red and black.

Since seeing all the costumes, even in the backgrounds of shots, was a priority for the director, Morrison needed to shoot at a deeper stop. This meant that rather than shooting on a large format Alexa 65, Morrison chose to shoot open gate on a smaller sensor Alexa XT - which would yield a slightly deeper focus than a large format camera - with Panavision Primo spherical lenses set to a stop between T/2.8 and T/4.

Coogler shot Black Panther with a larger-scope story that involved more actors, more preparation, visual effects, action sequences, stunts, bigger set builds and even larger camera, lighting and grip setups.

However, he maintained his fingerprints on the project by co-writing the screenplay, using real world research to give each character depth, working with the same close-knit pool of collaborators, creating a deliberate visual style that was true to the tone he wanted, and carefully telling the story through an empathetic lens that does justice to his characters.

Gray Kotze

What A VT Operator Does On Set: Crew Breakdown

In this Crew Breakdown video, let’s take a look at the VT operator and go over what their role is, what their average day on set looks like, and a couple of tips that they use to be the best in their field.

INTRODUCTION

The VT operator is one of the least talked about crew positions in film production, whether that’s on YouTube or the internet in general. They are responsible for orchestrating the live transmission and playback of video and sound via production monitors. It’s a role which is a necessity for any industry level commercial or feature film shoot and one that every technical film crew member should understand.

So I’ll use this video to try and fill in this information gap based on my observations from working as a crew member in the camera department by first unpacking the role of the VT operator, going over what an average day on set for them might look like and finally giving a couple tips which I picked up from watching experienced VT ops work.

ROLE

The process of shooting a movie involves capturing multiple takes of shots until the director is happy that they have a shot which will work in the edit. This means they need to be sure of both the technical aspects of the shot, such as the framing, camera movement and focus as well as the content of the shot such as the performances of the actors and the blocking. 

Since getting the perfect shot can be a bit of an intricate dance, filmmakers need a tool which they can use to monitor these live takes and evaluate them. This is where VT comes in. 

The video tape operator, also called video assist, playback operator, or VT, is responsible for setting up video monitors that have a live feed from the production camera or multiple cameras and then recording any takes that are done as a video file so that they can be played back after each take for the client, director or creative heads of departments to evaluate.

VT came about before digital cameras, when productions were shot on film. Since film needs to be developed at a laboratory before it can be viewed - which of course takes quite a while - film cameras couldn’t play back footage that had been shot on set.

Therefore, the solution was to record each take from a tiny, low res ‘video camera’ inside the film camera called a video tap. The signal from the video tap was recorded onto tape with a device such as a clamshell. This tape could then be fast forwarded or rewound to play back a low res video version of each take that the film camera recorded.

Since digital technology took over and footage is now recorded to cards rather than film, the role of the VT operator has evolved but is still based on the same principle of providing a live image on a monitor and being able to quickly playback video of takes.         

There will usually be a few different monitors, reserved for different people on a film set. 

This can be done by sending a video signal either through a wired connection to different monitors, or by using a wireless transmitter that can send a signal out to multiple receivers which are plugged into monitors.

The focus puller will usually get a feed directly from the camera with a sidekick receiver. The VT operator will then transmit or wire a feed to their station and run it through software on a computer such as QTake - which is the industry standard. They’ll then distribute this feed from the software to other monitors which may include what we call a video village - a tent with production monitors that display feeds from all the working cameras that are usually reserved for the creative agency, clients, the director and sometimes the producers.

Nowadays there’ll usually also be a wireless, portable director’s monitor on the set, which is either put on a stand or handheld by the director as they move around and give direction to various departments and actors.

The cinematographer usually operates and exposes using a 5 or 7 inch monitor which is mounted directly onto the camera, but sometimes will request a feed to a specific colour calibrated monitor such as a Flanders Scientific screen that can be used to more accurately judge the exposure and colour of an image. Kind of like a modern light meter.

Although there’s a bit of an overlap between the 1st AC and the VT op when it comes to who is responsible for monitoring, usually the on camera monitor and the focus monitor feed is set up by the 1st AC, while the director’s feed and any other external monitoring lies with VT.


AVERAGE DAY ON SET

The kind of gear that VT needs to run will be determined beforehand depending on the kind of setups that are needed. For example, the gear for tracking vehicle scenes will be different to the kind of gear that is needed for a standard interior scene. 

Therefore the first step is to plan for the kind of video transmission required, taking into account things like transmission range and how many monitors will be needed.

There are two, or actually now three, ways to send a video signal from a camera to an external monitor. 

The first is what we call hardwiring. This is where a cable, usually an SDI cable, is plugged from a video out port on one side to a video in port on the monitor. The upside to this method is that the quality of the feed will usually be very solid. The only way to interrupt a hardwired feed is if the cable gets damaged. 

The downside however is that if the camera needs to move then the cable will often get in the way and need to be wrangled by someone to avoid getting tangled up or becoming a tripping hazard.

The second method, wireless transmission, doesn’t require tethering the camera with the cable and is therefore the most popular. It involves attaching a transmitter, such as a Teradek, to the camera and plugging it into the camera’s SDI out port. This sends a live video signal of what the camera is recording through a wireless radio frequency to a receiver. 

VT ops usually build their own custom mobile video trolleys that they mount the receiver to. The receiver then feeds into some kind of distribution converter or switcher, which in turn feeds into a laptop or computer running macOS. This feed goes into the QTake software, where it can be controlled. The signal is then sent out of the video trolley through a hardwired line, Wi-Fi or wireless transmission to a monitor.

The third, fairly new, way that video can now be transmitted is through a live stream using the internet. This was mainly done during Covid shoots and is now used for tracking vehicle work where the car will drive out of the range of the wireless transmitters. 

With this technique, a video feed is sent to a modem with a SIM card and antennas which uploads the live video signal to the cloud and creates a streaming link. This live feed can then be viewed by anyone with access to the internet anywhere in the world - which is why it was used for remote shoots.

So, depending on the needs of the shoot the video assist will evaluate and provide a technical solution that aligns with the production’s budget.

Once they have a live video signal up and running to the required monitors they will wait until shooting begins. They will trigger video to record for both rehearsals and any takes that the camera rolls on. After the camera cuts, they will call out ‘playback’ and then loop the video footage of the last take that was shot on the monitors. 

Using QTake software they will also label each take, giving it the same scene, shot and take name that is on the clapperboard. This is a way of archiving what has been shot and makes it easier to relocate previously shot takes - which is especially useful when a director needs to quickly track down a specific take from a scene that may have been shot weeks ago.
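As a rough illustration of that archiving logic - the labels, function names and data structure here are hypothetical sketches, not QTake’s actual interface - labelling each take with its slate information makes a later lookup trivial:

```python
def slate_label(scene, shot, take):
    """Build a slate-style label, e.g. scene '32', shot 'B', take 2 -> '32B-2'."""
    return f"{scene}{shot}-{take}"

archive = {}

def log_take(scene, shot, take, marked=False):
    """Record a take under its slate label; 'marked' flags a director favourite."""
    archive[slate_label(scene, shot, take)] = {"marked": marked}

log_take("32", "B", 1)
log_take("32", "B", 2, marked=True)

# Weeks later: quickly relocate the take(s) the director marked as good.
good_takes = [label for label, info in archive.items() if info["marked"]]
print(good_takes)  # ['32B-2']
```

Because every take carries the same name as the clapperboard, the video archive and the camera department’s footage stay cross-referenced without any extra bookkeeping.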

VT will also collaborate with the sound department to provide audio that is synced up with the video footage. If you’ve ever seen a photo of a director on set wearing headphones, they are listening to an audio signal captured by the sound recordist and transmitted in sync with the video feed on the monitor.

           

TIPS

Earlier I mentioned that it’s commonplace for video assistants to label and archive each take. They may also take this one step further by marking specific takes. As they’ll usually sit near the director’s monitor, if they hear the director remark that a take was good they’ll be sure to mark that specific take. The director may also directly ask VT to mark a take.

This often happens during the shooting of commercials, which involve a back and forth discussion between the director, the agency and the client - who need to approve each shot before the production moves on. So, if, say, the director thinks they got it on take four, they may ask VT to mark that take. If there is time they’ll then do a couple of extra takes for safety. Once those extra takes are in the bag, the director will ask VT to loop the marked take on the client’s monitor, then go over to confirm that they are happy and approve the take, and the production can move on.

On some shoots, the video assist may be asked to perform a mix and overlay. This can be done using QTake software and involves overlaying video or images on top of a take. For example, some commercials may need to incorporate specific text or a company’s logo on a shot. VT can overlay and then position this logo so that the cinematographer and director can find a frame that complements and accounts for this.

Or, there may be a series of planned match cuts that the director wants to do. VT can then find the shot that they want to match cut with, overlay it on top of the live feed and mix down the opacity of the other take. They can then position the frame for an optimal match cut. 

Most software these days is able to auto trigger video to record. So when the cinema camera starts recording it will trigger the video device to record at the same moment and likewise it’ll cut when the cinema camera cuts. However, occasionally when working with some setups - such as some film cameras - the video may not auto trigger and it’ll be up to VT to manually start recording video once they hear the call of ‘roll camera’.

Gray Kotze

How Casey Neistat Changed Vlogging Forever

One reason that I, along with the rest of the online world, am drawn to Casey Neistat is because of the filmmaking in his videos. Although they may appear rough and handmade, if you look a bit closer you’ll quickly see that his films are backed up by an array of innovative filmmaking techniques that he uses to present stories as a creative, experienced documentary filmmaker.

INTRODUCTION

It may come as a bit of a surprise from a part time YouTuber, but I actually don’t watch many YouTube videos - well not now anyway. But there was a time when I was living in Japan around 2015 or 2016 where I’d watch every single release from one particular YouTuber every single day. Those videos were Casey Neistat’s daily vlogs. 

There were a few reasons that I, along with the rest of the online world, was drawn to Casey Neistat. For one, he’s a super charismatic and entertaining person on camera with strong opinions. For another, the non-stop freneticism of his life, and the amazing situations that he puts himself in, was incredible to see documented. This, combined with an honest, pretty intimate view of his life and daily uploads, created a super close ‘relationship’ with millions of online followers.

But there was something else that immediately drew me to his videos: the filmmaking. Although they may appear rough and handmade, if you look at his videos a bit closer you’ll quickly see that they are backed up by an array of innovative filmmaking techniques that he uses to present stories as a creative, experienced documentary filmmaker.

So let’s break down his approach, vlogging gear, some of the cinematic techniques that he uses and explain why they are the very backbone of what made Casey Neistat’s vlogs so groundbreaking.

STORY STORY STORY

You can have all the tricks in the book, but they mean nothing if they aren’t used to create some kind of narrative arc. So before we look at his specific filmmaking techniques let’s unpack how he constructs story in his films. Even his more freeform personal docs that document his day, still have a story arc to them. 

He’ll sometimes start with a cold open, cut to a hint at what will happen (the setup), then show him doing something (the content), and finally he’ll wrap it up (the wrap up).

Within the broader story of a video there’ll also be these little mini arcs that follow the same formula.

This is the same introduction, body and conclusion structure that academic writers use, the same setup, action and climax formula that screenwriters use, and the same way that oral storytellers present fables. It’s a formula that, for whatever reason, resonates with humans.

Of course, as an experienced filmmaker he also mixes things up chronologically. But the way that he structures each day, video essay, or even long take interview using this kind of formula is foundational: it creates structure out of the chaos that is life and acts as a hook that the audience can relate to.

He also uses titles, physical chapter markers, visual gimmicks (the tape measure plane) and handmade stop motion animation to introduce locations, give context or explain concepts that reinforce the structure of the story - in the same way that documentary filmmakers do.


FILMMAKING TECHNIQUES

Although what Casey mainly does in his videos has been called vlogging, his videos are really personal documentaries. And, as with most personal documentaries, the content is prioritised over getting technically perfect shots. This means that some moments may be out of focus, overexposed, lit using the screen of a phone or include moments of him picking up a tripod.

Part of the appeal of his style is that he uses real filmmaking techniques but then deconstructs them a bit and leaves them rough around the edges, including moments of imperfection.

So, what are some of the practical cinematography and editing techniques that he uses to enhance his storytelling visually? One technique he uses a lot is the whip or swish pan.

For most of his techniques he shoots with the edit in mind. He quickly pans the camera off to the side to end a shot. Then in a later clip he’ll repeat this same movement as before and cut between these to get a seamless transition between locations. 

If you break it down frame by frame you can see that he cuts the two shots so that one ends and the next begins on a blurred movement. Because this happens so fast, the eye isn’t quick enough to see exactly where the cut happens, and the two shots can be stitched together without it being noticeable.

This technique has been used quite often in cinema, usually when filmmakers want a way to join two long shots which need to be filmed in separate takes. For the smoothest transition possible it helps to make this cut on the most minimal frame possible, such as a dark, blank wall, rather than on a complex shot with actors.

Another editing technique he and a ton of other YouTubers use is the jump cut. This is where a cut is made that joins two shots which have the same, or similar, framing. Usually this means doing one take using a single clip and chopping out sections in the middle of it to exclude mistakes, fumbled lines of speech, or to just include the best bits of a take. 

In more formal filmmaking this is usually avoided by shooting multiple angles and shot sizes of scenes and cutting between these different angles to smooth over any mistakes. However some movies, such as the French New Wave film Breathless, have also used this technique to deliberately break down the established forms of filmmaking. It renders a more ‘breaking the fourth wall’, ‘hand made’ feeling which fits the tone of Casey’s vlogs.

He also uses jump cuts to playfully push the story forward in time. By shooting a single take shot from a locked off, still perspective, he can move himself around into different parts of the frame and then in the edit, remove all of that excess footage and quickly cut between him in different positions. This makes him whimsically bounce around the frame and gives the feeling that time is passing.  

Or he’ll sometimes combine this with a match cut where he uses an edit to transition between two frames that have similar compositional or subject traits - another technique found in cinema.


While he uses jump cuts to shorten and speed up his videos, he’s also done the exact opposite for certain videos to achieve a different effect. In some videos he has used long takes - where he lets an extended portion of a clip play without cutting. These tend to really suck the viewer into the moment and work well for heartfelt monologues - as long as those monologues don’t have any distractions or lapses in performance. 

Like all of these techniques the long take has also been used in many films, often for moments where the filmmaker is trying to pull the audience into the world of the film and the performances on screen as much as possible without breaking the illusion with excessive cutting.

Another well worn technique he uses is the timelapse. This is where footage is captured at a lower frame rate and then played back at a higher frame rate in editing software. This ramps up the motion of the footage, speeding it up.

This technique is often used by filmmakers as a visual mechanism to illustrate the passing of time. It’s particularly useful in vlogs because they often display a lot of action over a short period of time such as a day or even a few hours. Timelapses can be placed in between two shots to visually show the passing of time and that these two moments are not continuous.
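The arithmetic behind this is simple enough to sketch. This is my own illustrative helper, not anything from Casey's actual workflow: the speed-up factor is just the interval between captured frames multiplied by the playback frame rate.

```python
def timelapse_speedup(capture_interval_s, playback_fps=24.0):
    """Factor by which a timelapse plays back faster than real time.

    capture_interval_s: seconds between captured frames (e.g. 2.0 means
                        one frame every two seconds)
    playback_fps:       frame rate of the edited timeline
    """
    return capture_interval_s * playback_fps

# One frame every 2 seconds, played back at 24 fps: each second of
# playback consumes 24 frames, i.e. 48 seconds of real time.
print(timelapse_speedup(2.0, 24.0))  # 48.0
```

So a few hours of shooting can be compressed into seconds of screen time just by choosing a longer capture interval.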

Casey shoots his vlogs with a variety of different shots from a variety of perspectives. He shoots wide establishing shots, usually in the form of overhead aerial footage, to establish the space that he is in. He shoots POV, or point of view, shots where he’ll point the camera in front of him to capture an image that mimics his own perspective on what he is seeing.

Like in most documentaries he’ll grab observational footage of people, who sometimes engage with him behind the camera, or who sometimes appear natural and unaware of its presence.

He’ll also sometimes set up a frame on a tripod, record a bit of the environment and then enter the frame and start talking in an interview style. And of course he shoots the classic vlogging shot - a self portrait with a wide angle lens capturing himself as he talks directly to the audience through the camera - which he’ll handhold.

A large part of what photographically makes his vlogs so dynamic comes from the camera movement and framing. Casey is great at finding interesting angles and frames on the fly. He’ll mix the perspective between high and low angles or use framing devices such as this shot, where he places the camera inside a phone booth, to create a natural frame for himself while at the same time showcasing the dynamic environment of the background. 

The camera moves largely come from him physically moving his body in different ways, whether that be on his famous Boosted board, a bicycle, a surfboard, or just walking.

Part of what makes the way he moves the camera so effective is the variety. Contrasting fast-motion shots, locked off shots, high angles and low angles creates a feeling that the story, through the cinematography and editing, is constantly being propelled forward.

VLOGGING GEAR        

So, how does he do this? Well, the answer is with quite a few different tools and cinematic toys. The cameras and gear that he’s used have changed quite a bit over the years, but let’s go over the current setup he uses.

Most of his gear is, I guess, what you’d call consumer or prosumer, because its relatively reasonable price point makes it accessible to the general public. As I mentioned before, getting the shot is more important in his vlogs than ‘perfect cinematography’. Function rules.

He shoots aerials with a DJI Mavic 2 Pro drone, which comes in a tiny form factor that fits in a backpack but still resolves great images, putting it at the top of the consumer drone market.

He’s recently taken to shooting super fisheye POV and vlogging shots with the Insta360 X3 that he attaches to a pretty ridiculous selfie stick. And for most action or water sequences he uses a GoPro. At the moment the Hero 9.

So those are some of the more niche cameras that he uses. Now let’s take a look at his main vlogging camera setup.

For years he stayed in the Canon ecosystem, using the 6D as his main camera with either a 16-35mm or a 10-18mm wide angle zoom lens. However, he’s now moved to Sony and shoots his videos with a 4K workflow.

His main camera is the A7S III. It’s light, shoots in 4K, has slow mo capabilities, can shoot in super low light conditions, and importantly has a swivel screen so that he can see what he’s recording when he shoots in selfie mode. This is paired with his go-to lens - the Sony 12-24mm f/2.8. A large part of his look comes from using super wide angle lenses up close, which distorts the edges of the frame a bit and maximises how much background we see in a shot.

Shooting at a wider focal length also minimises the amount of visible camera shake there will be when shooting handheld.

He attaches this setup to a GorillaPod, a lightweight, mouldable support which can act as a selfie stick or be quickly positioned in tight, small spaces as a tripod. He also carries a lightweight Manfrotto Element Traveller tripod, which is small, portable and can be used for higher elevation tripod shots.

Finally, he’ll mount a lightweight Rode VideoMic Pro+ shotgun mic on top of the camera to capture ambient sound or his voice when he talks directly to camera.

CONCLUSION

I guess the answer to the question ‘What makes Casey Neistat’s videos so groundbreaking?’ is that he effectively took a bunch of established filmmaking techniques and his own experience in documentary filmmaking and applied them to tell stories in a more deconstructed YouTube vlog format.

Although his videos appear super improvised, rough and chaotic - and to an extent they probably are - they are also carefully and thoughtfully shot, crafted and assembled with a high degree of filmmaking know-how - which wasn’t really the norm before Casey. 

While a vlogger’s personality and the situations they put themselves in are of course a large part of the appeal, Casey’s vlogs changed the game by also applying a level of filmmaking that elevated the vlog genre as a whole. 


Cinematography Style: Charlotte Bruus Christensen

Let’s look into Charlotte Bruus Christensen's philosophical approach to shooting movies and then take a look at some examples of the gear she uses to execute her cinematography.

INTRODUCTION

The visual language of cinema is to a large extent determined by the context of the story. Some moments need to be slow and creeping, some moments need to feel hot and pressured, while at other times it should feel organic and natural. Charlotte Bruus Christensen’s work can be characterised by an overall classically Hollywood, cinematic, filmic, widescreen look, mixed with naturalism, which then uses the context of the story as the basis for applying the correct psychological perspective.

In this video I’ll take a closer look at the Danish cinematographer’s work, by unpacking some of her philosophical thoughts on the medium and then go over some of the gear that she uses to physically bring stories to the big screen.  


PHILOSOPHY

“It’s interesting how you hit those different genres. It adds to the way that you think about, you know, lighting a scene or moving the camera. I think it just gives you, a sort of, another way in technically and also style wise to how you approach a story. It gives you sort of a framework and then you think there are those rules but then you break them.”

From horror films like A Quiet Place to period dramas like The Banker and psychological mystery films like The Girl On The Train, her photography has covered a range of different genres. When coming up with a look for a film she’ll use the visual associations of each genre as a kind of general jumping off point, but will then narrow down the look and sometimes go against expectations as things progress.

The process of preparing for each film shifts. For example, when working on Fences, originally written as a play, with director Denzel Washington, a lot of the early focus went to working with the actors and nailing down the feeling of how each scene would be performed using rehearsals. Whereas when working with another director and actor, John Krasinski, they would go over older films as references in the build up and then be much more flexible and reactive with how each scene was filmed once they arrived on set.

“For A Quiet Place, John Krasinski, the director and actor, both of us were like there’s something about Jaws. I know it’s not a sort of direct, like you may not spot that in there, but the ways they were sort of lining up a three shot and this whole thing of in Jaws you don’t see the shark until very late. There’s things that inspired us. I think also it’s a very educational process that we all sort of constantly do. When you make a movie you educate yourself further and further and further.”

She uses these films and shots as references in a way that takes into account their tone, feeling and type of storytelling - rather than directly borrowing from their look. For example, using a classically slow, steady, reactive, quietly moving camera to build a feeling of tension in scenes. And then letting the horror come from how the performances are captured and how the actors react to the off screen threat.

This feeds into another cinematic technique that she uses, where a psychological approach to story is taken through the imagery. She tends to shoot scenes grounded in a similar widescreen, classical, filmic base look but then tweaks things like framing, camera movement and lighting depending on the idea or effect she’s after.

For example, the buildings and places in The Banker were almost as important to the story as the characters were. So to better present the spaces she shot many scenes from a lower angle with a steady frame that more fully displayed the height of the architecture in the background.

While a film like The Hunt pulled more subtly from the Dogme 95 stylistic guidelines, shooting naturalistically on location and using a lot of handheld camera movement to present intimate, personal close ups of authentic performances.

So, although both these examples were bound by a similar warm, film-esque look with shallow depth, real locations and natural lighting - the subtle variations in her cinematic techniques differentiate how audiences may psychologically interpret these two films - while also maintaining her own perspective. She uses these little variations in different contexts to enhance the psychological feeling that she wants the audience to have.

“And then also a whole sort of psychological thing of how you make people nervous, you know. If they’re in court this thing of sort of shining light into their face and over expose them to make them feel so small and in the spotlight and sweaty and heat and all these sort of things you would do to make people break.” 

These effects come from discussions with the director, combined with her own point of view on how they want the images to feel. Getting the most out of collaborations with the director and serving their vision usually means helping get the best performances out of actors.

“The most important thing I think I really value and try very hard to create freedom for a director and the cast while also producing a cinematic image.”

This is a balance that most cinematographers have to strike: getting the best image that they can while being flexible enough to compromise with the actors and people in front of the lens.

Sometimes this may mean changing a pre-planned lighting setup and adapting that on the fly when actors and directors come up with new ideas for blocking on the day. Or it may mean quickly having to re-frame to capture an actor that isn’t tied down to hitting a specific mark on the set. 

More often than not this process takes the form of an organic back and forth discussion with the creative heads of departments. This is why it’s so important to be able to collaborate and compromise on a film set, to tie the ideas that everyone brings to the table into the best possible iteration of the story.

GEAR

I mentioned earlier that most of Christensen’s cinematography has quite a consistent, warm, classical, filmic look to it. I’d pin this down to two gear selections which she regularly makes. 

The first is her use of anamorphic lenses. Although she has shot in the Super 35 format with vintage spherical lenses like the Cooke Speed Panchros, the majority of her feature film work has used anamorphic lenses. Particularly the C-Series set of anamorphics from Panavision, which is sometimes supplemented by other more modern Panavision anamorphics like the T or G-Series. 

These lenses create a native widescreen aspect ratio and render images with a natural smoothness and warmth to them that has long been seen as a trademark of traditional Hollywood cinematography. 

The second fairly consistent gear selection she makes is to shoot on film. Of course this isn’t always possible from a production standpoint or necessarily the right creative choice for all films, but she has shot a large portion of her work photochemically on all the variations of Kodak Vision 3 colour negative film.

When she does shoot digitally she tends towards the more filmic sensor in Arri cameras, like the old Alexa Plus or the Mini. The choice to shoot photochemically is in part an aesthetic one, but it’s also one that is determined by the style of working that she’s after.

“The way you light the film, the way you work with film. You know, you’re on set. You look towards the scene. You don’t disappear into a video village and try things out. You look, you light, you use your light metre and you shoot. I think that for us there was a nice feel to that. And then, you know, obviously the very soft, cinematic look where we could really use the anamorphic lenses, you know, with the emulsion.”

Depending on the needs of each project or scene she’ll select different speed stocks. For the interior scenes on Fences she used the more sensitive 500T which allowed her to expose the darker skin tones of the actors at T/5.6 in the dim spaces while still having enough latitude to preserve the brighter information outside the windows without it blowing out. Whereas this interior scene from The Banker was shot on the less sensitive 50D stock. This finer grain film stock, along with her lighting, evoked the 1950s, Hitchcockian period look that she was after.
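The gap between those two stocks can be put in numbers. Film speed behaves like ISO, where each doubling is one stop of sensitivity; the helper below is my own illustration, not anything from her workflow.

```python
import math

def stop_difference(iso_a, iso_b):
    """Exposure difference in stops between two film speeds.

    Each doubling of ISO/EI equals one stop of sensitivity.
    """
    return math.log2(iso_a / iso_b)

# 500T is a little over 3 stops faster than 50D, which is why it suits
# dim interiors while 50D needs far more light (or bigger units).
print(round(stop_difference(500, 50), 2))  # 3.32
```

That three-and-a-bit stop difference is why the 50D scene needed a hard 18K HMI where the 500T interiors could get by with window light and smaller supplements.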

To enhance this look, she lit the actor with a hard light - an 18K HMI. The light beam was positioned and cut so that it hit the forehead and created a rim light highlight ping on the skin, which is reminiscent of older films from the period which used hard light sources in a similar way.

I think Christensen’s overall approach to lighting was influenced early on by her work on films by Dogme 95 directors like Thomas Vinterberg. This filmmaking movement came with various rules that included limiting the excessive use of artificial lighting.

Her lighting tends towards a naturalistic look, where the sources of light, even when they are artificial, are motivated by real sources of ambient light. Therefore, coming back to those interior scenes from Fences, she takes the quality of the sunlight coming through the windows and supplements its direction and quality by using daylight balanced HMI units.

Then to balance out the look so that the actors do not appear too much in shadowy silhouette, she adds fill light using Arri SkyPanels - which imitate and lift the natural sunlight that comes from outside and bounces back, more softly, off the walls.

Most of her lighting uses this similar approach of supplementing the existing sources of light that are naturally present at the location, whether that’s in the form of sunlight, street lights at night, or artificial light from practical lamps inside a home.

Just as she subtly tweaks her lighting in different ways that play to story, time period or some kind of motivated idea, the way in which she moves the camera is also an important feature of her work.

“If you’ve been busy with the camera, if it’s been handheld, or you’ve been running with the camera and you cut then to a still image then it’s like, ‘Oh my God. Something is going to happen.’ It was very minimalistic in a way. You move the camera a little bit or you cut from a running shot to still. These kind of very simple, minimalistic tools were very powerful.” 

How the camera moves is often talked about, but what is discussed less often by cinematographers is the kind of movement that is present in two different shots which are cut next to each other. Something Christensen likes to think about is how to contrast two forms of camera movement - like a rapid dolly move to a slow creeping push on a dolly - for a more abrasive emotional effect. This contrast is especially effective when it’s set against the rest of the movie that is shot with subtle, slow, barely noticeable camera moves.

She uses a lot of these slow, steady, traditionally cinematic moves in her work, which are done with a dolly and track. Sometimes to get to lower angles she’ll ‘break the neck of the dolly’ and shoot from low mode.

Another consistent feature in her work is the use of a handheld camera. This is especially present in her early work with Dogme 95 directors, as shooting with a handheld camera was another of their aesthetic rules, but she’s also continued to use this technique, particularly for more intimate close ups, throughout various other movies shot in the US.        


CONCLUSION

“I love going in and seeing the whole team and everything is going off what you planned to do. And I come on set in the morning and go, ‘Really? Can I enter this and go in and say something?’ I always get excited about just the physics of the stuff and the people and something mechanical that I love about this.”


The Pros And Cons Of Shallow Depth Of Field

Let's dive into what depth of field is, the factors and settings that can change it and then go over some of the pros for shooting with a shallow depth of field, as well as go over some of the reasons why shallow focus may actually be undesirable.

INTRODUCTION

Ever noticed how some shots in movies have a blurry background, while in others everything is pin sharp across the entire frame? This is due to the depth of field of an image and is more often than not a conscious choice that is made by filmmakers. 

Shots with a super soft, out of focus background have what we call a shallow depth of field. While those that have large areas of the image in focus have a deep depth of field.

Let’s break this down a bit more as we dive into what depth of field actually is, the factors and settings that can change it, and then go over some of the pros for shooting with a shallow depth of field, as well as go over some of the reasons why shallow focus may actually be undesirable. 


WHAT IS DEPTH OF FIELD?

Depth of field is a measurement of the distance between the nearest point that a lens renders in sharp focus and the furthest object that is sharp. 

For example, one could shoot a close up shot of a character on a telephoto lens where the nearest point of focus is their eyes and the furthest point of focus is their ears. In this example, the distance between these two points, the depth of field, is a measly 3 inches. This is what we’d call shallow focus.

In another example, a camera may shoot a long shot on a wide angle lens where everything from the foreground to the horizon is in sharp focus. In this example the distance between those points is so far that we just call it infinity. This is what we call deep focus. 

Based on those examples, we can establish that there are a few different variables that change how much depth of field an image has. In fact there are three variables: the focal length, the distance to the in-focus subject and the aperture of the lens.

Shots captured with a telephoto lens that has a long focal length - such as a 290mm zoom - have a much shallower depth of field than shots that use a wide angle lens - such as an 18mm lens - which will create a deeper depth of field. So one way to create a background with more blur is to choose a longer focal length.

The second variable for depth of field is determined by where the focus distance is set. The nearer to the camera that the subject is and the tighter the shot, the shallower the depth of field will become. This explains why when you shoot an extreme wide shot that focuses on the horizon most of the frame will be sharp.

Finally, the third variable that filmmakers can use to change the depth of field is the aperture, or stop, of the lens. The wider open the iris inside a lens is, the lower its T-stop will be and the shallower the depth of field it will produce.

One reason why fast lenses such as T/1.3 cinema lenses are desirable is that cinematographers can shoot them wide open to create a background full of soft bokeh.

When a long focal length lens, a subject close to the camera and a fast aperture are all combined - much to the horror and disgust of the focus puller - the depth of field that a camera captures will be very shallow.

Inversely a wide focal length lens, a subject far away and a closed down stop will mean that the depth of field will be very deep and the focus puller can relax.

There’s also a fourth variable, the sensor size, which doesn’t directly affect the image's depth of field but does affect it indirectly. Shooting on cameras with a larger sensor size produces images that have a wider field of view. To compensate for this extra width, cinematographers will either shoot on longer focal length lenses to produce a comparable field of view, or are forced to physically move the camera closer to maintain a similar frame. 
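That focal length compensation scales linearly with sensor width. The sketch below is my own illustration: the sensor widths used (roughly 24.9 mm for Super 35 and 36.7 mm for a large format sensor) are example figures, not from the article.

```python
def matching_focal(focal_mm, from_width_mm, to_width_mm):
    """Focal length on a second sensor that gives roughly the same
    horizontal field of view as focal_mm does on the first sensor.

    Matching field of view scales with sensor width, so a larger
    sensor needs a proportionally longer lens for the same frame.
    """
    return focal_mm * to_width_mm / from_width_mm

# A 35mm lens on a ~24.9mm wide Super 35 sensor frames roughly like
# a ~51.6mm lens on a ~36.7mm large format sensor.
print(round(matching_focal(35, 24.9, 36.7), 1))
```

The longer matched lens is precisely why large format cameras are associated with shallower focus at an equivalent framing.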

As we now know, those two actions, using a longer focal length and focusing on a subject closer to the camera will both make the depth of field shallower. 
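All three variables can be tied together with the standard thin-lens depth of field approximation. Here's a minimal sketch of that textbook formula; the function is my own, and the 0.025 mm circle of confusion is an assumed Super 35 value.

```python
def depth_of_field(focal_mm, stop, focus_m, coc_mm=0.025):
    """Approximate near limit, far limit and total depth of field (metres).

    focal_mm: lens focal length in mm
    stop:     f-number (a T-stop is close enough for this estimate)
    focus_m:  distance to the in-focus subject in metres
    coc_mm:   circle of confusion; 0.025 mm is a common Super 35 value
    """
    s = focus_m * 1000.0  # work in millimetres
    hyperfocal = focal_mm ** 2 / (stop * coc_mm) + focal_mm
    near = s * (hyperfocal - focal_mm) / (hyperfocal + s - 2 * focal_mm)
    if s >= hyperfocal:
        far = float("inf")  # acceptably sharp all the way to the horizon
    else:
        far = s * (hyperfocal - focal_mm) / (hyperfocal - s)
    return near / 1000.0, far / 1000.0, (far - near) / 1000.0

# A 135mm lens at T2 focused at 2m yields only a couple of centimetres
# of depth; an 18mm lens at the same stop and distance yields over a metre.
```

Playing with the three arguments reproduces the relationships described above: longer lens, closer subject or wider stop each shrink the in-focus zone.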

PROS OF SHALLOW DEPTH OF FIELD

The biggest cliche about images with a blurry background is that they look ‘cinematic’. The idea of a ‘cinematic’ shot can’t only be tied down to a specific image characteristic. I mean, obviously there have been plenty of gorgeously shot pieces of cinema that don’t use a shallow depth of field. 

However, sometimes cliches have an inkling of truth to them. To understand the link between images with a shallow depth of field and cinema, we need to go back to the days before digital cinema cameras. 

In the early days of video, most cameras had small sensors, wider lenses and slower apertures, while movies captured on 35mm film used a larger film plane and longer, faster lenses.

So the ability to capture images using a shallow depth of field was technologically limited to filmmakers that shot for the big screen, while deeper focus had associations with the less highly regarded video format.

Although this has now changed, with advances in digital technology making it easy for even entry level cameras or smartphones to simulate a shallow depth of field, I’d argue that there’s still an unconscious mental association that persists between a shallow depth of field and ‘cinematic’ movies in the mind of the audience.

With that out of the way, I’d say that the single greatest practical use of shooting with a shallow depth of field is that it allows filmmakers to control what they want the audience to see and ‘focus’ their attention on.

The smaller the depth of field, the less information in a frame will be in focus and the more power the focus puller has to show where the audience should direct their gaze. 

It makes it possible to more easily isolate a certain character or detail in a frame. The more you isolate a character from the background, the more they stand out and become the central point of the story. A shallow depth of field therefore empowers filmmakers to visually tell stories from a more subjective viewpoint. 

Depending on the context, a shallow depth of field can also be used for other, more creative, applications. Because a super shallow, drifting focus makes images feel dreamy, it can be used as a tool to chronologically differentiate certain scenes from others - such as using it as part of a different visual language for flashback scenes.

Shots that drift in and out of focus may also be used as a deliberate technique to emulate a less controlled environment and make certain sequences like action feel faster, more panicked and more dynamic.

From a purely technical point of view, shooting a lens wide open also lets in more light and makes it easier to achieve exposure in darker shooting environments. This also means that smaller lighting setups will be needed for scenes in darker spaces, like night exteriors - where shooting at a deep stop is rarely practically possible. 

Another technical point is that cinematographers choose certain lenses over others because of their visual characteristics and how they render an image. The wider the aperture and the shallower the focus, the more pronounced these characteristics, such as their bokeh and focus falloff, become. 

It’s almost seen as a bit of a waste to shoot a beautiful, vintage set of lenses at a deep stop. As you close down to around T/8 or T/11 most lenses will become increasingly sharp across the frame and will be more difficult to differentiate from each other. So for those who want to add a level of soft texture to the images, shooting at a faster stop is preferred.

CONS OF SHALLOW DEPTH OF FIELD

While shooting with a shallow depth of field is wildly popular in the film industry, there are also some reasons and situations where it may not be desirable.

I mentioned before that shallow focus can be used to tell stories by guiding the audience’s gaze towards a specific part of the frame, but inversely a deeper focus can also be used to tell a story in a different way.

Shooting a film with a deep stop, where more of the frame is in sharp focus, allows the audience to peruse the environment and pick out information from it themselves - rather than having those details spoon fed to them with shallow focus by a filmmaker. In this way a deeper focus presents stories in a way that is subtly more objective.

Another persuasive case for a deeper depth of field is that it allows you to see more of the set and environment that the character is in. I remember a focus puller that I used to work with who would voice his surprise, especially at younger DPs, who would always shoot every shot with the aperture wide open and make the background as blurry as possible. 

Why travel all the way to a beautiful location, or spend loads of money constructing an incredible set, only for the audience to not see any of it because the background is so out of focus?

Deeper focus shots that show the location are a useful tool for helping the audience place exactly where the character is in their minds.

Moving on to the practical side, and being fully transparent, that focus puller may have advocated for a deeper depth of field because it makes their job of keeping the subject sharp much easier. The shallower the depth of field, the less margin for error focus pullers have when pulling focus, and the harder it becomes to maintain a high ratio of usable takes.

This is why, if there is a particularly challenging focus pull, the DP may chat to the 1st and stop down the lens a couple of stops to help achieve more accurate focus. If you’re short on shooting time, sometimes it’s better to sacrifice a smidge of buttery smooth bokeh in order to maximise the number of takes with usable focus. Rather have four usable takes for the director to work with in the edit than a single in-focus take with a shallower depth of field.

Another case where a deeper depth of field may be preferred is when shooting a two shot. As the name suggests this is a single shot with two people in the frame. Sometimes these two characters may be placed at different distances apart from the camera. When shooting with a shallow depth of field, this may mean that only one of the people can be rendered in sharp focus, because the depth of field doesn’t extend far enough to the second character.

A solution to this is to shoot with a deeper depth of field and get the focus puller to do what is called splitting the focus. This is where the lens is stopped down and focused to a distance in between the two characters - so that the depth of field extends nearer to get the first person in focus, and further to get the back person in focus at the same time.
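A common rule of thumb puts the split point at the harmonic mean of the two subject distances rather than the midpoint, because defocus blur grows roughly with the difference in reciprocal distance. The helper below is my own illustration of that rule of thumb, not a formula quoted from the article.

```python
def split_focus_distance(near_m, far_m):
    """Focus distance that defocuses two subjects roughly equally.

    Blur scales roughly with the difference in reciprocal distance,
    so the harmonic mean - slightly closer than the midpoint - is
    the classic split point.
    """
    return 2.0 * near_m * far_m / (near_m + far_m)

# Subjects at 2m and 4m: split focus at about 2.67m, not at the 3m midpoint.
print(round(split_focus_distance(2.0, 4.0), 2))  # 2.67
```

The focus puller then stops down until the depth of field around that split point reaches both actors.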

Before, I mentioned that shooting wide open accentuates the interesting optical qualities of lenses; however, for certain films the look may be more suited to sharper images. The more a lens is stopped down, the deeper the depth of field becomes and the crisper and more accurately resolved the image will be.

This is particularly useful when shooting with certain old, wide angle anamorphic lenses such as the Cooke Xtal Express set. The wide focal lengths in this set have a huge amount of focus falloff when shot wide open with the ‘sweet spot’ of the lens only rendering sharp focus in the very centre of the frame. 

So to minimise actors looking soft on the edges of a shot and to sharpen up the lens to an acceptable level, some DPs prefer to shoot these lenses with a deeper focus at a stop such as T/5.6 or T/8. 


How Virtual Studio Sets Are Changing The Way Movies Are Made

A recent advance in filmmaking technology is taking place in the field of film sets. This is being altered by improvements in LED wall technology combined with gaming engines. Let's take a look at how we’re quickly heading towards a point where the idea of shooting big budget shows and movies in real world locations is becoming less and less popular.

INTRODUCTION

Filmmaking is a creative discipline which is constantly changing and being driven forward by changes in technology. 

Whether that’s the change from black and white film to colour, the introduction of anamorphic lenses that led to a widescreen aspect ratio, or the creation of digital cinema cameras and the advances in CGI and post production software which allowed filmmakers to artificially create shots that wouldn’t have been possible before.

Advances in technology have an undeniable influence on filmmaking. 

One of those recent advances which I’ll look at in this video is the way in which the film set, the space in which a movie is physically shot, is being altered by improvements in LED wall technology combined with gaming engines. And how we’re quickly heading towards a point where the idea of shooting big budget shows and movies in real world locations is becoming less and less popular.  


WHY NOT SHOOT IN REAL LOCATIONS?

If you’ve never been on a film set and don’t know much about how movies are made it may surprise you to find out that on many productions the environments that the characters are filmed in are not actually locations in the real world.

There are two types of filming environments that can be set up, real world places - which is called shooting on location -  and fake environments that are artificially created to mimic a space - which is called shooting in a studio.

You may wonder what the point of shooting in a studio is when the real world has no end of beautiful, easily accessible locations. It boils down to a few reasons.

The first considerations are time and money. Even though it’s costly to rent studio space and build a set from scratch, sometimes this is still a cheaper option than shooting on a real location.

For example, some scripts may require multiple scenes shot in a diverse range of interiors. It may be cheaper and easier to build one tent set, one interrogation room set, one office set and one prison cell set next to each other in a studio which the crew can quickly bounce around between, rather than doing multiple hour location moves, or even inter-country moves, between each real world location.

Another more obvious reason to shoot on artificial sets is because it may be impossible, or at least very difficult, to access certain locations in real life. Trying to gain access to shoot in the Oval Office probably isn’t going to go very well.

Thirdly, shooting in a studio gives filmmakers a far higher degree of creative and practical control. When you set lights in a studio they will provide a consistent level of illumination for as long as necessary. When you’re in a real world location the sun will move throughout the day and the quality of the ambient light will constantly change. 

When shooting outside in real locations it might rain, there may be clouds or there may be full sun. You’ll constantly have to adapt your plans and scheduling depending on weather forecasts and what kind of look you’re after. This isn’t an issue when shooting inside a soundstage where you can create your own permanent artificial sun.

Finally, shooting in a studio is sometimes necessary to achieve certain shots with specific gear. For example, doing a telescoping movement through an interior on a Technocrane, or getting a high bird’s eye view perspective, may only be possible in a studio where that gear can practically be brought into the space and where set walls can be moved around, or the set ceiling removed, to accommodate the gigantic rig.         



HISTORY OF ‘VIRTUAL’ SET TECHNOLOGY

“Every step that we take in the film business is incremental. Digital didn’t just appear on the scene. It had been precursured with Genesis’ and DVs. It didn’t appear all of a sudden. It feels like it sometimes that the adoption of digital happened overnight. But it actually didn’t.” - Greig Fraser ACS, ASC, Cinematographer 

When you compare movies from the 30s and 40s with contemporary films it’s much easier to see which sets are not real in the older films. This background is clearly not real, but what about this one? It may look like a real interior location but this background is actually created by a giant LED screen.

To better understand this cutting edge soundstage of the future it’s best to start at the beginning and go through a brief historical overview of quote unquote ‘virtual set backgrounds’.

One of the earliest ways of creating fake backgrounds in movies was with matte paintings or painted backdrops. This is where an artist was employed to physically paint a landscape or set background onto a sheet of glass. The painting would incorporate as much of an illusion of depth as possible on a 2-D surface.

Actors, foreground set design and props were then filmed and placed in front of these painted backdrops to trick the audience into thinking they were at a real location.

To address the inflexibility, lack of photorealism and lack of camera movement, the next technological step forward used the same idea but replaced the painted backdrop with film projection.

Rear projection, as it was called, used a large screen surface with a film projector mounted behind it that could project photorealistic backgrounds that had been pre-filmed at a real location. This also meant that moving backgrounds could now be projected to give the illusion of motion.

Although this was later improved upon with front projection, it still didn’t always sell these backgrounds as 100% reality.    

Moving forward in time to digital effects, the next technological breakthrough came from chroma key compositing. Again, this used a similar principle as before, but instead of painting or projecting a background image that could be captured in camera, this time a consistently coloured blue, or green screen backdrop was used. 

Green and blue are the most commonly used background colours for chroma keying as they are uniform, distinct and differ significantly from the hues that are present in human skin and most other human environments. 

Using software, this specific green or blue channel of colour can be keyed out and removed from the shot. A secondary shot can then be layered behind this foreground layer in post production, replacing the background with whatever footage the filmmakers would like and creating the illusion of depth.
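The keying step described above can be sketched with a simple per-pixel test. This is a minimal illustration only - the function name and threshold are my own, and real keyers produce soft alpha mattes with spill suppression and edge blending rather than a hard binary mask:

```python
import numpy as np

def chroma_key(fg, bg, threshold=40):
    """Naive green-screen composite (illustrative only).

    fg, bg: HxWx3 uint8 RGB frames of the same shape.
    A pixel counts as 'screen' when its green channel exceeds
    both red and blue by more than `threshold`; those pixels
    are swapped for the background plate.
    """
    f = fg.astype(np.int16)  # avoid uint8 wrap-around when subtracting
    r, g, b = f[..., 0], f[..., 1], f[..., 2]
    mask = (g - r > threshold) & (g - b > threshold)
    out = fg.copy()
    out[mask] = bg[mask]
    return out
```

The hard on/off mask is exactly why green screens struggle with reflective or semi-transparent subjects - the screen colour bleeds into pixels that the keyer must keep.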

Although this technique has been widely used to create artificial set backgrounds for years, it’s still not perfect. One of the main challenges of shooting with a chroma key is that it does not provide realistic lighting, like a real life ‘background’ in a shot would.

“’Cause there’s always the problem. You know, you’re flying above a planet like Earth. If you do a barrel roll how do you suitably light that all around? You’re not going to do a real barrel roll. So trying to solve that problem led us to creative volume.” - Greig Fraser ACS, ASC, Cinematographer



LED VOLUME WALL

Creative volume, or volume lighting, is a way of describing the latest innovation in virtual background technology.

“The stage of the future is a series of lights on the walls. It’s walls that are made of light emitting devices.” - Greig Fraser ACS, ASC, Cinematographer 

This is a gigantic LED wall, and sometimes also a ceiling, which can display and playback photo-realistic video or stills using Epic Games’ Unreal gaming engine -  kind of like a massive TV. This system can also use camera positional data to change how the background moves. So when the camera moves, the background can move accordingly, creating parallax and an almost perfect visual illusion.

“There’s another shot on that same ice pathway on the ice planet where the camera was booming up. And in camera it’s perfect. There’s one long walkway disappearing. Obviously there was a practical walkway and then the digital wall. And so the digital walkway, as the camera’s booming up, had to change its relationship so that the perspective from the camera was the same.”  - Barry Idoine, Cinematographer 

This enables most shots to be done completely in camera without much post production tweaking necessary.  This wall also solves the lack of interactive lighting problem that’s encountered when using a green or blue screen.

Greig Fraser used this system, which they called The Volume, to shoot large portions of The Mandalorian in studio. Having no green screen meant that there were no green light tinges to the set, or green reflections on the actor’s metallic suit.

The Volume is a 20 foot high, 270 degree wall with a circumference of 180 feet, complete with a ceiling. This newest iteration of the technology featured LED pixels which were only 2.84mm apart from each other - close enough for it to produce photorealistic backgrounds. 

This allows crews to use the gaming engine to map 3D virtual sets as a background using the same technique as early matte paintings or rear projection, but with the added bonus of realistic parallax movement that mimics the camera movement, and interactive lighting that provides naturalistic shadows, illumination and reflections.
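The parallax the wall has to reproduce follows from basic pinhole-camera geometry: a point's image shifts by roughly f·t/z when the camera translates sideways, so near objects move far more than distant ones. A rough sketch with made-up numbers:

```python
def parallax_shift(focal_mm, camera_shift_mm, depth_mm):
    """Approximate sideways image shift, on the sensor in mm,
    of a point at a given depth when the camera translates.

    Pinhole model: shift = f * t / z. Near points (small z) move
    much more than far ones - the cue the tracked LED background
    has to fake for the illusion to hold.
    """
    return focal_mm * camera_shift_mm / depth_mm

# 35mm lens, camera dollies 100mm sideways:
near = parallax_shift(35, 100, 2_000)    # prop 2m from camera
far = parallax_shift(35, 100, 50_000)    # backdrop 50m away
print(near, far)  # 1.75mm vs 0.07mm of shift on the sensor
```

This is why the wall needs live camera positional data: the rendered background must be re-projected every frame so distant virtual elements shift less than the real foreground set.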

These backgrounds are created by using a series of digital photographs taken on a camera like a Canon 5D which can then be stitched together to create one stretched out background that covers the 270 degree wall.

To change between locations in different cities, or even different planets, the production design crew just needs to swap out the foreground art elements, like the floor and any props near the characters. 

The correct background will then be set on the LED wall, any lighting tweaks will be adjusted, the actors called in, and then they’re good to go. This allowed them to change between an average of two different locations in a shooting day.

“Instead of blue, green screen, we can now see the environments and actually see them as live comps. For all intents and purposes. We’ll actually be able to go inside a car on stage and for the actors and the photography to look like you’re actually driving.” - Lawrence Sher, ASC, Cinematographer

One of the big advantages of working like this is that cinematographers can use this LED screen to control the ‘weather’ however they want. If they want to shoot the same sunset for 12 hours at a time they can do so. If it needs to be cloudy, or sunny that can be accomplished by switching out the background and adjusting the light.

One limitation that shooting in this way still has is that the actors need to be about 15 to 20 feet away from the LED wall in order to create enough separation between the actors and background for the image to look realistic.

Apart from this one downside, this new technology of creative volume is a massive step forward in virtual set technology, which allows filmmakers a new degree of studio control and an ability for cinematographers to capture the images that they want in camera without leaving it up to post production.

Also remember this technology is still in its infancy. As it continues to get used on more shows in the future, such as the upcoming Netflix period production 1899, it will continue to improve, costs will slowly reduce and it will become more user friendly and faster for crews to work with.

We’re rapidly approaching the stage where filmmakers will be able to shoot scenes relatively easily in whatever photorealistic environments they imagine - without even needing a ton of post production manipulation. 

As always technology pushes filmmaking forward, and will hopefully bring the industry back to the sweet spot of capturing films as much in camera as is possible. 


Getting Kodak To Bring A Film Back From The Dead: Kodak Ektachrome

Now that the much beloved Kodak Ektachrome is back on the market after being discontinued, let’s take a closer look at how exactly the film was resurrected, break down what makes Ektachrome different to other existing Kodak films, and look at how 35mm motion picture Ektachrome was brought back by special request to shoot the second season of Euphoria.

INTRODUCTION

It’s 2013. The digital camera has been introduced and you can now capture images with the click of a button. It soars in popularity while film sales plummet. 

In a move to cut costs Kodak begins discontinuing its more niche films. Finally, all the variants of the legendary Kodak Ektachrome for both stills and motion picture got the chop. Cut to 2017. 

“Kodak is proud to announce the return of one of the most iconic film stocks of all time: Kodak Ektachrome.”

Now that the much beloved Kodak Ektachrome is back on the market, let’s take a closer look at how exactly the film was resurrected, break down what makes Ektachrome different to other existing Kodak films, and look at a film industry use case by going over why Ektachrome was used to shoot the second season of one of the most popular contemporary TV shows.


HOW EKTACHROME WAS RESURRECTED

Kodak ceased manufacturing Ektachrome 64T and Ektachrome 100 Plus in 2009. The rest of the line up quickly followed, until 2013 when all Ektachrome products were scrapped.

After seeing a bit of an uptick in the sales of film - especially in photography - Kodak made the move to bring the emulsion back. However it was no easy task. Manufacturing film on an industrial scale requires significant investment. 

You can think of making a filmstock as being kind of like baking a cake. First you need to assemble all of the ingredients. 

This is where Kodak hit the first snag. Because the film had been discontinued, it was difficult to find suppliers for the necessary ingredients - or chemicals - to make it.

Ektachrome is a complex film that requires about 80 different chemical components. Eventually they managed to source or manufacture all the necessary ingredients and could begin producing and testing the new film.

This starts with using a cellulose triacetate base - a plasticky substance - which is then coated with multiple different layers of chemicals. These chemicals are mixed in different containers in the dark and applied to the support roll until it is coated. It is then cooled, dried and ready for shooting, where it will be exposed to light for the first time.

Initially Kodak rolled out the film so that it could be shot in 35mm by still photographers, in Super 8mm cartridges and in 16mm. However, 35mm motion picture Ektachrome wasn’t made available. Well, not yet anyway. But we’ll come to that later.

Once the Ektachrome film has been shot it can then be developed in an E-6 chemical process where the image emerges and is set so that it can be viewed and worked with under light.

This development process starts by passing the film through a chemical bath in the same way as colour negative film is in C-41 processing. But, because it is a reversal or slide film, it also has an extra step with a reversal developer that turns it into a positive. 

But, you may wonder, what exactly is reversal film?


WHAT IS EKTACHROME

In a previous video I went over Kodak’s Vision 3 colour negative film, the most popular stock for motion pictures. When this film is shot and then developed it produces a negative where the colours and areas of highlights and shadows are inverted. This negative is scanned and then digitally converted to a positive image so that the image is flipped back to normal.

Kodak Ektachrome works differently. It’s a reversal film which is different to a negative film. 

This means that when it is shot and developed in the E-6 process that I mentioned before it produces a positive image on the film. So the image can immediately be viewed by just projecting light through it and when it is scanned you get a positive image without needing to do any conversions.

If this is the case then why is negative film more commonly used than reversal film?

One reason is that reversal films have a much smaller dynamic range than negative stocks do. A modern colour negative stock like Kodak’s Vision 3 range is capable of capturing detail in an image with up to around 14 stops of dynamic range between the deepest shadow and the brightest highlight.

So it can see details in extremely dark shadowy areas metered at f/1.4 without going to pure black, while also maintaining details in super bright areas of the image up to f/180 without blowing out to pure white.

Ektachrome on the other hand has a dynamic range of only about 4 or 5 stops. So if it is set to capture details in shadows at f/1.4, the highlights will start to blow out at only f/5.6.
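As a sanity check on those f-numbers: the gap between two apertures in stops is twice the base-2 log of their ratio, since the f-number scales with the square root of the light admitted. A quick sketch (the helper name is my own):

```python
import math

def stops_between(n1, n2):
    """Exposure stops between two f-numbers. Each stop halves
    the light, and the f-number scales with the square root of
    the light ratio, so stops = 2 * log2(n2 / n1)."""
    return 2 * math.log2(n2 / n1)

# Vision 3 negative: shadow detail at f/1.4, highlights up to f/180
print(round(stops_between(1.4, 180)))  # 14 stops

# Ektachrome reversal: f/1.4 shadows, highlights blown past f/5.6
print(round(stops_between(1.4, 5.6)))  # 4 stops
```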

This means that cinematographers need to be far more careful with exposure and that scenes need to be lit much flatter, with a small difference between the bright and dark parts of an image. 

If you compare the sensitometric curves of Kodak 500T, a colour negative film, and Ektachrome, it shows that detail can be captured in far more stops of light for colour negative than for the reversal film. 

Ok, enough technical talk. What kind of look does Ektachrome produce?

Firstly it is a high grain stock, so it has more texture and therefore less perceived resolution than Vision 3 stocks. I’d describe Ektachrome colour as ‘hyper real’. It basically captures colour with far greater vibrance, richness, contrast and saturation than real life. The shadows have a lot of blue in them and the blacks are very dense. 

The highlights are quick to blow out to white but have a filmic roll off to them which is pleasing to the eye.

In a nutshell, reversal films are much more grainy, punchy, contrasty, vibrant and saturated than negative films - which makes the look more extreme and elevated. 

These qualities, along with its limited exposure range and slow speed of 100 ISO, make it more of a niche film - hence its prior discontinuation. However, if you want that punchy, exaggerated colour then Ektachrome is hard to beat.

     

USE CASE

“In the second season we use Ektachrome, which I’m pretty excited about. We asked Kodak to manufacture 35mm Ektachrome which was I think discontinued in 2003 or something. And now they’re doing it again and it’s exciting. Sometimes we’re shooting, like, night exteriors on this stock which is a 100 ISO stock. It’s a little bit of a nightmare but when you get it back it’s rewarding.” - Marcell Rév, Cinematographer

Because of its niche usage, Ektachrome was only being manufactured for the smaller gauge Super 8mm and 16mm formats, as well as in 35mm for small still photography rolls.

The Euphoria creative team pushed to use the stock to capture large portions of season two of the show. After talking to Kodak and putting in a large order they were able to get them to manufacture Ektachrome 100D in 400 foot 35mm motion picture rolls.

There were two main reasons that they pushed to shoot on film. One, they originally wanted to shoot the entire show on film. And two, the switch from digital to film creatively reflected the visual evolution of the characters and the show through a desire to break out of realism and a move into presenting it in a larger than life way that echoed how the characters saw themselves as the star of their own movie.

The elevated colour, contrast and grain of Ektachrome was the perfect choice for this emotional expression of a more sensory experience.

“To be honest we always wanted to shoot it on film - the first season too. We were just not really allowed to. We’re changing a little bit of visual direction and I think film was a choice for season two.” - Marcell Rév, Cinematographer

After the success of the first season, HBO relented to their creative demands and allowed them to capture the second season photochemically.

Instead of processing the film with the recommended E-6 process that would render the film as a positive, Marcell Rév instead got the lab to cross process the reversal film so that it came back as a negative instead of a positive. 

Deliberately messing with its intended processing meant that the colours were even more saturated and contrasty and that the negative came back with a green tint to it once it was scanned. 

He then got his colourist Tom Poole at Company 3 to do very specific colour correction that removed the green tint, but maintained the strange, over saturated colour.

For some darker night scenes where he needed a more sensitive stock he also used Vision 3 500T negative film. He underexposed it and pushed it a stop to try and increase the grain and then got his colourist to grade it to closer match the Ektachrome footage. The reversal and negative footage didn’t exactly match, but they at least occupied a similar visual world. 

The biggest challenge that he had to face by using two different film stocks was that they required two vastly different approaches to lighting. Due to the different dynamic ranges of the two films, that I mentioned earlier, the negative film could be lit with far more contrasty lighting conditions, while any scenes shot with Ektachrome had to be shot with lighting that was much flatter, with lots more fill light, in order to create a similar look.

They also required vastly different amounts of light. Getting exposure at 500 ISO needs far less light than getting exposure at 100 ISO. 

So any night or interior scenes shot on Ektachrome meant that increased power generation and higher output light sources were required.
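The light penalty of the slower stock is easy to quantify: the stop difference between two film speeds is the base-2 log of their ratio, since each doubling of ISO halves the light required. A short sketch (helper name is my own):

```python
import math

def iso_stops(iso_a, iso_b):
    """Exposure stops separating two film speeds: doubling the
    ISO halves the light needed, so the gap is log2(ratio)."""
    return math.log2(iso_b / iso_a)

gap = iso_stops(100, 500)   # Ektachrome 100D vs Vision 3 500T
print(round(gap, 2))        # 2.32 stops
print(round(2 ** gap, 1))   # 5.0 - the 100 ISO stock needs 5x the light
```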

All of this work resulted in images which had strange colour casts, were grainy, amped up, saturated and pretty magical.       


Cinematography Style: Adam Newport-Berra

In this episode I’ll dive into Adam Newport-Berra's philosophical approach to shooting movies and then take a look at some examples of the gear he uses to execute his photography.

INTRODUCTION

Adam Newport-Berra’s work as a cinematographer has been spread across every category of filmmaking: music videos, documentaries, narrative features, shorts, TV and commercials.

Although he changes his approach to photography depending on the tone that the director wants, there are some common threads across projects that he’s shot.

He leans heavily on prep work before he gets to set, he focuses on selecting crew with the right energy and he surrounds himself with like minded collaborators. 

I’ll use this video to further unpack Adam Newport-Berra’s cinematography by going over some of his thoughts and philosophies behind creating images and then dive into some more practical examples and look at some of the gear setups he has used to pull them off. 

PHILOSOPHY

“I think the most inspiring person, as far as people I look up to, is Robby Müller:  Dutch cinematographer. How he was able to carry his sort of touch and his vision but make it fresh every time with every director. I want to be hired for my working style, the way I collaborate with the director, how I’m able to bring out their vision. And I think Robby was always able to do that.”

Adam Newport-Berra’s cinematic journey began when he studied film at NYU. It was there that he started building relationships with his peers and shooting shorts. When it came to crewing up he often ended up behind the camera due to his prior experience and familiarity with cameras, which came from shooting skating films with his friends when he was young.

As his career evolved and he moved into the HOD position of cinematographer on bigger budget projects, choosing crew and getting the most out of them became a priority. When employing someone, whether on a crew or in any business, the skills that they have are of course important. 

However, when you get to hiring a certain level of personnel, you can begin to assume that all the applicants with enough experience will be competent at their job. But, something that he sees as equally, if not more, important is the energy and kind of personalities that they bring onto set.

“I had to spend a lot of time interviewing people and, you know, making calls and reference calls to figure out my crew. Because the energy of the crew is very important for me. Creating the right environment is just crucial. I expect everyone in my crew to have ideas. Down to the loader, the grip. Everybody. Like, everyone’s going to bring something to it. The more you arm them with ideas and your vision the more you set them up for success the more you’re going to get out of them.” 

One way that he gets everyone on the same page is due to the prep work that he does before shooting begins. Early in the process, once he gets the script and is beginning discussions with the director he’ll compile a mood board or look book based on any ideas he comes up with. 

This includes a compilation of visual references, such as frame grabs from films, or pictures from still photographers.
He uses this in the initial exploration phase to see what ideas resonate with the director. 

“I try to take my first impressions from the script and just put together sort of a collage of images that come to mind. I’m pretty transparent with the director that this isn’t how I think they should shoot their film, but rather the sort of things that bubble to the surface as I was reading. And it’s a great way to sort of take the temperature of a working relationship to see what they respond to.”

After feedback and discussions with the director they’ll start to get a better picture of the tone and look of the film that they’re after. They can then dive a bit deeper and go over specifics, such as what kind of lighting is nice, what framing works and which colour wardrobe is best for each scene.

As this gets more and more refined he’ll compile these ideas into a sort of ‘shooting bible’ complete with more specific notes, the sides and location stills or references. This assigns a PDF page to each scene that they need to shoot. So if they’re shooting scene four, his crew can look at page four of the cinematography bible and see the ideas, technical notes and tone that he has in mind for the scene.

He uses this as a tool to catalogue specific ideas about scenes so that they don’t get lost, which can happen on a shoot that is weeks or even months long which constantly bounces around with a new scene from a different part of the script each day.   

Although of course light can be artificially added and shaped with filmmaking tools, another important way that light can be controlled is by scheduling scenes that take place on location for specific times during the day when the natural light is best.
This is another part of his pre-production workflow and takes place before shooting begins. 

On The Last Black Man In San Francisco he managed to get a key to the house they would be shooting in and did much of his pre-production work at a desk inside the space. As the sun moved outside the building he could observe the time of day where it cast the most interesting shadows and gave the most shape to the space. 

He could then note the time and try to schedule the interiors so that they were shot within a specific time window.

In this way the preparation that he puts into discussions with directors, the formation of visual ideas and the more practical scheduling, before even having a camera in his hand, is a large, unseen part of his cinematography.    

GEAR

As I mentioned, beautiful natural light can be achieved through careful scheduling, but a bigger part of his job is how he works with artificial, film light sources.

He often uses a lot of coloured lighting in his work which is tinted to colour temperatures other than just the standard 5,500K daylight and 3,200K tungsten light.

To get different hues he places gels in front of large fixtures, such as warm sodium vapour gels, and uses RGB LED lighting fixtures which allow a specific colour temperature and tint to be dialled in.

“I’m not technically that proficient with gels. I kind of just look at stuff and figure out what I like and then tell the gaffer that’s what I like and they remember it. I’m totally game for using LED RGB colour. It’s just difficult because I love the quality of fresnel lighting that comes from tungsten lights and I feel like LED lights aren’t quite to that quality level yet.” 

A technique he has used is to create different colour pockets of light in different parts of the set. For example using some fixtures that are warmer and then keying through windows with something cooler. Having different areas with different colours creates more separation in the frame and a greater sense of dimensionality between different planes of the image. 

He often uses practical lights that we see in the shot, like warm, tungsten lamps, or overhead off-green fluorescents.

An advantage of using RGB lights over gelling is that it allows you to change the colour of a light during a shot. For example if you watch We Cry Together you can clearly see the Skypanel LEDs that he bounces through the windows change their colour temperature from a cool blue at the beginning, to more of a warmer purple, into pink and finally into a warm orange. 

I’d hazard a guess that he rigged all of these Skypanels to a DMX board that his gaffer could control remotely during the long take shot. I’d also hazard a guess that the motivation for shifting the colour of the lights during the shot is meant to mimic the shifting of emotions that happens between the characters during their argument.

Another show that he worked on that featured a combination of coloured lighting and cued lighting effects was Euphoria, where he shot the last two episodes of the first season.

“I was a bit intimidated and daunted by taking on such a huge production so late in the game. You know, by the time I came in they’d been shooting for like eight months. I wanted to bring my own vision and my own style and my own inspiration to it but I also had to honour what had already been done. I was worried that that was going to be difficult.”

This balance between bringing in new ideas and staying true to the established look is a challenge for DPs that are employed to shoot later episodes in a series. 

Usually the cinematographer that is hired to shoot the pilot episode of a series, in collaboration with the director and showrunner, crafts the basis of the look for the season and the show. This cinematographer will establish how certain locations are lit, and once lit, they tend to stick to this lighting for the remainder of the series.

For example, Adam Newport-Berra shot the first episode of The Bear and established the camera zoom language, the use of practicals, different pockets of coloured lighting and the saturated, contrasty look that he’s known for. This remained in later episodes.

Coming back to Euphoria, he stuck to the lighting plans that had already been established in earlier episodes by Marcell Rév, but was allowed to light some newly introduced locations and advocated for using a mixed medium approach for certain scenes that were shot with 16mm film.

He’s shot on both 16mm and 35mm film and in different digital formats. He largely sticks to Arri cameras for digital capture, such as the Alexa Mini, the Alexa 65 and the Amira.

A lot of his work is done with spherical prime lenses that have a fairly clean look to them, such as the Master Primes. His selection of focal lengths is varied, but he has been known to use wide angle lenses for close ups of characters. 

Going through his work, a consistent visual feature that spans a few different projects is his use of slow push ins with zoom lenses. This BTS photo shows him working with what looks to be an old Angenieux 10:1 25-250mm that is strapped up to a Micro Force.

A Preston Micro Force is a zoom control device which is linked up to a motor - and works in a similar way as a wireless follow focus. It’s powered by a cable which is plugged into a battery port and another zoom cable links up to a motor that slots into the zoom gear on the lens. The Micro Force is rigged onto an arm on the head of a tripod with a bracket called an Oppenheimer. 

This allows the operator to use the Micro Force as a handle to pan and tilt the camera, while also allowing them to trigger a zoom in or out button with their finger. How hard the button is pressed determines the speed at which the motor will zoom. The motor torque can also be adjusted to make the zoom faster or slower. 

He uses this device often to smoothly push into or pull out of shots, particularly establishing shots, or to pick out a detail in a wide frame by pushing in to it.

Another type of shot which he’s used in quite a few different projects is the long take, where the camera floats around either on a Steadicam, like in We Cry Together, handheld on the shoulder for the short film Run/On, or presumably with some kind of tracking vehicle like a Russian Arm with the camera mounted on a stabilised remote head for Baby Keem’s music video.

CONCLUSION

“My advice to everyone I think is to just keep making things and not be too precious. That’s something that we all struggle with is this idea that, like, whatever we make has to be the perfect calling card for who we are and what we believe in and what we want to be making. I’m always unsatisfied with where I’m at. I always want to be at the next level. And you’ll find that any DP is going to feel that way. You just need to keep creating things and surround yourself with people that have similar values and beliefs as you and will push you to create things.”


Choosing The Right Aspect Ratio For Your Film

There are a host of different standardised aspect ratios out there that filmmakers have used. From the almost square 1.19:1 to the extreme widescreen 2.76:1. In this video I'll go over a few of the most popular aspect ratios used in cinema and unpack some of the rationales for shooting in different ratios.

INTRODUCTION

Although we’ve become accustomed to watching online video content, like this video, in a 16:9 format, films can actually be shot and presented in all kinds of different shapes and sizes: or aspect ratios.

These rectangular form factors usually have a horizontal width to them that better mimics the point of view of the human eye.

With all of these different aspect ratios to choose from you may wonder which of them is the best to shoot your next film in. So, in this video I’ll try to aid that decision by first going over some popular aspect ratios that are used in cinema and then explain why using different aspect ratios makes footage feel different by using a few creative examples from some popular films. 


CINEMA ASPECT RATIOS

The dimensions of a 2-D video frame are measured in pixels, which can be simplified and factored down into a ratio. For example, this video clip is made up of 1920 horizontal pixels and 800 vertical pixels. When we divide 1920 by 800 we get 2.4, so we say that this widescreen frame has an aspect ratio of 2.40:1.
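That calculation is trivial to express in code. This quick Python sketch (not part of the article itself) just divides width by height:

```python
def aspect_ratio(width_px: int, height_px: int) -> float:
    """Return the aspect ratio of a frame as width divided by height."""
    return width_px / height_px

# The 1920 x 800 clip from the example above:
print(round(aspect_ratio(1920, 800), 2))   # 2.4

# A standard 16:9 online video frame:
print(round(aspect_ratio(1920, 1080), 2))  # 1.78
```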

Throughout the history and development of cinema, certain aspect ratios emerged which became standardised choices for capturing and projecting an image. Cinema started before the age of digital editing software, where it’s easy to crop video to any ratio desired.

Back then, the aspect ratio was determined by how cameras captured images onto film. For example, early silent films used the full horizontal width between the perforations of a 35mm film negative and the vertical space of 4 perforations. Each frame therefore occupied a picture area 4 perforations tall, also called a 4-perf pulldown. This gave a 4:3, or 1.33:1, aspect ratio.

Over the years, changes in camera technology created new aspect ratios. When sound was first introduced in cinema it had to be added as a thin strip on the negative and therefore cut into the horizontal space of the 1.33 frame, creating a squarer 1.19:1 aspect ratio.

This was later deemed by the public to be too square, so it was replaced by a wider, standardised 1.375:1 or what was called an Academy aspect ratio. This became the dominant aspect ratio for Hollywood films from the 40s and was popular until the introduction of widescreen.

Partly to address dwindling audience numbers and to differentiate the cinema experience from the 4:3 TV format, a more expansive 1.85:1 aspect ratio was introduced. 

This was shot in the same way, using 35mm film and spherical lenses, but this time the top and bottom of the image were effectively cropped until they arrived at a picture area that was closer to three perforations tall, rather than four.

To get an even more expansive, grander looking feel required the creation of a new format that used a different kind of lens: anamorphic.

This format still covered the full width of the 35mm frame and used four perforations of height - but it captured this space on the negative using anamorphic lenses, which squeezed a wider view horizontally so that it covered the full frame.

This squeezed image could later be de-squeezed back into an undistorted picture. De-squeezing created more horizontal length and produced an aspect ratio of 2.39:1, which is commonly rounded up and called 2.40.

There are some slightly more niche aspect ratios, like the 1.43:1 Imax format, 1.66:1, 2:1, and even the ultra wide 2.76:1 Ultra Panavision 70 ratio.

But far and away the two most common cinema aspect ratios today are 1.85:1 and 2.40:1. Most cinema releases that are digitally projected are done so using a DCP, which is basically a hard drive with the digital file on it. The two most common aspect ratios for a DCP are either scope - which has a 2.40 aspect ratio - or flat - at 1.85:1. So, getting to any niche aspect ratio requires cropping the video file within one of those two formats.
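To make that cropping idea concrete, here is a minimal Python sketch (an illustration, not part of the article) that finds the largest centred crop of a container frame matching a target ratio. It assumes the standard 2K DCP container sizes of 2048×858 for scope and 1998×1080 for flat:

```python
def crop_to_ratio(container_w: int, container_h: int, target_ratio: float):
    """Largest centred crop of a container frame that matches target_ratio."""
    if target_ratio >= container_w / container_h:
        # Target is wider than the container: keep full width, trim height.
        return container_w, round(container_w / target_ratio)
    # Target is narrower than the container: keep full height, trim width.
    return round(container_h * target_ratio), container_h

# Fitting the ultra wide 2.76:1 inside a 2K scope (2.39:1) container:
print(crop_to_ratio(2048, 858, 2.76))    # (2048, 742)

# Fitting 4:3 (1.33:1) inside a 2K flat (1.85:1) container:
print(crop_to_ratio(1998, 1080, 4 / 3))  # (1440, 1080)
```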

THE EFFECT OF DIFFERENT ASPECT RATIOS

Now that we know some of the available aspect ratios, it’s time to talk about what makes these formats different from each other. Like picking a lens or lighting a scene, the aspect ratio that filmmakers choose for their project is also a creative choice.

The reasons for this choice range from more concrete, practical reasons, to more difficult to define, emotional impulses. And, although there are nuances to choosing certain niche ratios over others, the decision of what aspect ratio to shoot in basically just boils down to a choice of how wide or how narrow you want the frame to be.

“My intent was to have the entire movie play in the gigantic 1.43 aspect ratio on a giant Imax screen. Compositionally I really started falling in love with that concept because superheroes as figures tend to be less horizontal. A lot of the work we’re doing is trying to restore the full frame.” - Zack Snyder, Director

One component of this choice is the historical ties that come with certain formats, in the same way that shooting with a handheld camera has ties to the documentary genre and therefore to realism.

Generally, the narrower the aspect ratio is, the more old world it feels, and the wider it is, the more modern it feels. This tie to period is a product of the history we discussed earlier, where Hollywood silent films were shot in narrower ratios like 1.33:1.

So a movie like The Artist uses 4:3 as a pretty obvious visual cue to mimic the aspect ratio of the silent movie era in which its story is set.

“He was talking like this to say, what about doing a silent movie. I thought he was like maybe doing a silent movie in the modern way but he said, ‘I want to do a movie in the 20s: black and white and silent and square format.’” - Guillaume Schiffman, Cinematographer

Particular aspect ratios can also be chosen to represent footage taken with a particular recording device, whether in a mockumentary style with timecode and frameline overlays like in District 9 or to emulate the point of view of whoever is recording from a smartphone like in Searching.

Some filmmakers have used different aspect ratios within the same movie. For example The Grand Budapest Hotel delineated the three different time periods in the film by shooting them in different aspect ratios. The 1930s scenes were shot in the Academy aspect ratio, the 1960s scenes were shot in an anamorphic ratio and the 1980s scenes used a widescreen ratio.

Other directors, like Christopher Nolan, have used different aspect ratios in their films because they like to use the native aspect ratios of whatever format they are using without cropping it to conform to a common ratio. Dunkirk cuts between some scenes that are shot in the taller Imax format and others shot in the longer 35mm anamorphic format.

Beyond these kinds of more overt rationales for shooting in a specific aspect ratio, we can also choose a frame size in order to exploit its cinematic advantages.

Extremely wide aspect ratios like 2.40 or even 2.76 contain more lateral space, which makes them a better format for layering information within a shot. Multiple planes can be used when framing to include characters and story elements in the foreground, middle ground and background.

“When you’re in Mini’s there are always two plays going on simultaneously. There’s the foreground play where the actors are closer to the lens and to the camera and they’re doing their things. But there’s always a background situation and the audience really has to keep track of the foreground play and the background play.” - Quentin Tarantino, Director 

Because a wider screen ratio takes in more background information, it’s better at contextualising the geography of a location. The extra width also lends itself to landscapes and expansive, epic sets, showing off the grandeur of a space.

For this reason it’s popularly used to capture big budget epics, with large set pieces and lots of moving parts that require a certain photographic width to include them.

On the other end, narrower aspect ratios are better for films that deliberately want to withhold context, or frame single characters from a subjective point of view. Son of Saul did this by framing with a narrow 1.33 frame that was glued to the protagonist.

How wide a frame is can also influence the emotions that audiences ascribe to the image. Super wide shots of deserts can make them seem vast and unending, while a tight shot of a character in an almost square frame can make a space feel psychologically much more contained and cut off.

“Obviously it’s unusual to have something in this day and age to have something in this aspect ratio. I mean, a confined space obviously felt like 1.33 so that’s what we’re gonna do. I don’t know where I even first heard of 1.19 but I heard about it and was like, maybe this is one opportunity where we can do this…It’s somewhere between 1.33 and 8x10 photography.” - Jarin Blaschke, Cinematographer 


CONCLUSION

Ultimately, choosing an aspect ratio is entirely dependent on the context of the film. Different ratios can invoke historical ties, present a specific recording point of view, differentiate between timelines, accentuate certain cinematic advantages or even create a kind of emotional impact.

Above all, whatever aspect ratio you choose, make sure that the choice is a considered one that leverages this element of filmmaking to further the telling of your story.

Gray Kotze

The Last Colour Negative Motion Picture Film In The World: Kodak Vision 3

Let’s use this video to examine the last remaining range of colour negative film stocks, go over how to choose the right film, how to identify each stock’s specifications based on the label of their film can, and talk about the visual characteristics that contribute to the ‘shot on Kodak’ look.

INTRODUCTION

Shooting on film, in both photography and in the world of cinema, has seen a bit of a resurgence in recent times. After the release of capable, high end digital cinema cameras - like the Arri Alexa in 2010 - many may have thought that the era of shooting movies photochemically was done and dusted. However, over a decade later, motion picture film still exists.

But, unlike in photography where there are still quite a few different films to choose from, in the world of motion picture film there is only one commercially mass produced category of colour negative film that remains. From one company. Kodak Vision 3.

So, let’s use this video to examine the last remaining range of film stocks, go over how to choose the right film, how to identify each stock’s specifications based on the label of their film can, and talk about the visual characteristics that contribute to the ‘shot on Kodak’ look.

CHOOSING THE RIGHT FILM

When cinematographers shoot on film there are three basic criteria that will inform what film stock they choose to shoot on: the gauge, the speed and the colour balance.

First, you need to decide what gauge or size of film you will shoot on. This may be determined on the basis of budget, or due to a stylistic choice based on the look of the format.

The four standardised film sizes to choose from are: 8mm, 16mm, 35mm and 65mm.

The narrower the film gauge, the less of it you need to use and the cheaper it will be, but the image will have less sharpness and clarity and more grain. The wider the gauge, the more film you will need and the more expensive it will be, but the image will have higher fidelity and finer grain.

Next, you’ll need to decide on what film speed you want to shoot at. This is a measurement of how sensitive the film is to light and is comparable to EI or ISO on a digital camera.

Basically, the more light you’re shooting in, the lower the film speed needs to be. So bright, sunny exteriors can be shot on a 50 speed film, while dark interiors need to be shot on a 500 speed film.

Finally, films come in two colour balances: daylight and tungsten. This refers to the colour temperature of the light source that they are designed to be shot in. So when shooting under natural sunlight or with film lights like HMIs that have a colour temperature of approximately 5,500K it’s recommended to use a daylight stock. When shooting with warmer tungsten light sources, a tungsten balanced film should be used to get the correct colour balance.

As a side note, it is still possible to shoot a tungsten film, like 500T, in cooler sunlight.

Kodak recommends using a warm 85 filter and exposing the film at 320 instead of 500. However, some cinematographers, like Sayombhu Mukdeeprom, prefer to shoot tungsten stocks in daylight without an 85 filter and then warm up the processed and scanned images to the correct colour balance in the colour grade.


HOW TO READ A FILM LABEL

Within the Kodak Vision 3 range there are 4 remaining film stocks in production. Two daylight balanced stocks - 50D and 250D - and two tungsten stocks - 200T and 500T.

One of the best ways to further unpack the technical side of what makes up a film is to look at the information on the label that comes with every can.

The biggest and boldest font is how we identify what kind of film it is. This is broken into two parts. ‘50’ refers to the film speed or EI that it should be metered at. So cinematographers shooting a 50 EI film will set the ISO on their light meter to 50 to achieve an even, or ‘box speed’, exposure of the image.

‘D’ refers to daylight. So this is a daylight balanced film.

The second part, 5203, is a code to identify what type of film it is. Every motion picture film has a different series of numbers that is used to identify it. So 35mm Kodak Vision 3 50D is 5203. 8622 is 16mm Fujifilm Super-F 64D. 7219 is 16mm Kodak Vision 3 500T.

It’s crucial that every can of exposed film is labelled with this code when sent to the film lab so that the stock can be identified and developed correctly.
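Because every stock has a unique code, a lab effectively treats the can label as a lookup key. As a toy illustration in Python (using only the codes mentioned above; a real lab database would cover every emulsion and gauge in circulation):

```python
# Minimal lookup of the stock codes mentioned in the text.
FILM_CODES = {
    "5203": "35mm Kodak Vision 3 50D",
    "7219": "16mm Kodak Vision 3 500T",
    "8622": "16mm Fujifilm Super-F 64D",
}

def identify_stock(code: str) -> str:
    """Return a human-readable stock name for a film can's code."""
    return FILM_CODES.get(code, "unknown stock - check the can label")

print(identify_stock("5203"))  # 35mm Kodak Vision 3 50D
```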

This brings us to the next text, develop ECN-2. This refers to how the film needs to be developed. ECN-2 development is basically the same process of passing the film through a series of chemical baths as C-41 - which is used to process colour negative film in photography.

However, it also includes an extra step where the remjet layer on the Vision 3 film is removed. Remjet is used to minimise the halation of highlights and decrease static from the film quickly passing through the camera at 24 frames per second.

Next, we have a table that indicates how the film should be exposed in different lighting conditions. Under daylight no extra filters are required and the film can be exposed with an EI or ISO of 50.

When shooting with a 3,200K tungsten light source Kodak recommends using a cooling 80A filter - which changes the light from 3,200K to 5,500K or daylight. Adding this filter lets through less light, so in this situation Kodak recommends exposing the film with an EI of 12.
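Both of Kodak’s filter recommendations follow the same arithmetic: every stop of light a filter absorbs halves the effective EI. A rough sketch, assuming an 85 filter costs about 2/3 of a stop and an 80A about 2 stops:

```python
def compensated_ei(base_ei: int, filter_loss_stops: float) -> int:
    """Effective EI after a filter absorbs filter_loss_stops of light."""
    return round(base_ei / 2 ** filter_loss_stops)

# An 85 filter loses roughly 2/3 of a stop: 500T drops to ~320.
print(compensated_ei(500, 2 / 3))  # 315 (Kodak rounds this to 320)

# An 80A filter loses roughly 2 stops: 50D drops to ~12.
print(compensated_ei(50, 2))       # 12
```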

This 35 means that the film comes in a 35mm gauge width. The numbers that follow refer to the type of perforations along the edges of the film.

And, the final important number refers to how many feet of film the roll contains. 

When shooting on 35mm the most common roll length is 400ft -  which is used for lighter camera builds. But 1000ft rolls can also be used in larger studio magazines that allow filmmakers to roll the camera for longer before needing to reload.   
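Roll length translates directly into running time: standard 4-perf 35mm packs 16 frames into each foot of film, so at 24 frames per second a roll’s duration is easy to estimate. A quick sketch (not from the article itself):

```python
FRAMES_PER_FOOT_35MM_4PERF = 16  # standard for 4-perf 35mm

def roll_runtime_minutes(feet: float, fps: float = 24.0) -> float:
    """Running time, in minutes, of a 4-perf 35mm roll at a given frame rate."""
    total_frames = feet * FRAMES_PER_FOOT_35MM_4PERF
    return total_frames / fps / 60

print(round(roll_runtime_minutes(400), 1))   # 4.4
print(round(roll_runtime_minutes(1000), 1))  # 11.1
```

So a 400ft magazine gives a little under four and a half minutes of continuous rolling, while a 1000ft studio magazine gives just over eleven.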


KODAK VISION 3 CHARACTERISTICS

There’s a good reason why many DPs who shoot on digital cinema cameras still try to create a Kodak ‘look’ for footage using a LUT or in the colour grade. 

Whether it’s the result of the long legacy of shooting movies on film, or whether it’s just that filmic colour is actually more visually appealing, the film look remains sought after. However, it’s important to remember that the look of film has changed over the years due to the methods used by manufacturers.

For example, many iconic Hollywood films from the 70s that were shot with the famous 5254 have a more neutral, crushed, grainy look than modern Vision 3.

Also, keep in mind that modern productions shot on film are all scanned and then graded in post. So the colour in the final file may be different depending on how much the colourist remained true to, or strayed from, the original colour in the negative.     

Kodak film has always been considered rich, with good contrast and warmer than Fujifilm - which has more pronounced blues and greens.

As it’s the most modern, the Vision 3 range is the cleanest looking motion picture film stock produced. The most sensitive of the bunch, 500T, has very low grain - even when push processed.

For this reason, filmmakers who seek a deliberately high grain, textured image these days regularly opt to shoot in 16mm, rather than the lower grain 35mm.

The colour produced is on the warmer side - which helps to create beautiful, rich looking skin tones that are more saturated than Kodak’s older Vision 2 stock.  

Vision 3 film also has a big dynamic range of approximately 14 stops - which is more than older films. This means that when it’s scanned and converted to a digital file, the colourist is able to do more with it, such as use power windows to recover highlights from over exposed areas.

“As a colourist my job is to try to build a good contrast level and keep the detail in the lowlights. I find that the 5219 stock was designed so that I can have that contrast and the detail as well without having to do anything extra like power windows to pull the detail out.” - Mike Sowa, Colourist

What I especially love about the film is how it renders the highlights with a subtly blooming halation effect and how it renders detail in a way that is not overly sharp. 

With modern post production colour it’s possible to come very close to replicating this look with digital footage. You can get almost identical colours, and you can add scans of film grain on top of the image. But, to me, what is still not achievable in post is the organic way that film renders detail.

CONCLUSION

So that brings us to the end of this video. As always, a final thanks to all of the kind Patrons who keep the channel going with their support and receive these videos early and free of ads. Otherwise, until next time, thanks for watching and goodbye.

Gray Kotze

How Jordan Peele Shoots A Film At 3 Budget Levels

Jordan Peele is a director who possesses a true love of genre - especially the horror genre. His films have used genre as a structural framework, which are filled in with satirical stories that explore wider themes, ideas and issues in society, told through the eyes of his protagonists. In this video I’ll explore the work of Jordan Peele by looking at three films that he has directed at three increasing budget levels: Get Out, Us, and Nope.

INTRODUCTION

Jordan Peele is a director who possesses a true love of genre - especially the horror genre. His films have used genre as a structural framework, which are filled in with satirical stories that explore wider themes, ideas and issues in society, told through the eyes of his protagonists.

Telling stories in such a bold, direct manner, which at times challenges and pokes at the audience’s own insecurities and deep-set fears, has sometimes meant that his films have received polarised reactions.

In this video I’ll explore the work of Jordan Peele by looking at three films that he has directed at three increasing budget levels: the low budget Get Out, the medium budget Us, and the high budget Nope to unpack the methodology behind his filmmaking and his career.  


GET OUT - $4.5 MILLION

From his background in sketch comedy, Peele transitioned to another genre for his debut feature film. 

 “I think horror and comedy are very similar. Just in one you’re trying to get a laugh and in one you’re trying to get a scare.” - Jordan Peele

Both genres rely on careful pacing, writing, reveals and filmmaking gags that are used to invoke an emotional response from the audience. He also brought his appreciation for direct satire and social commentary from sketches into the horror screenplay.

In fact, some of the films that inspired him were stories written by Ira Levin, like The Stepford Wives and Rosemary’s Baby - built around the horror genre and underpinned with a satirical commentary on society. 

“Those movies were both extremely inspiring because what they did within the thriller genre was this very delicate tightrope walk. Every step into ‘weird town’ that those movies make, there’s an equal effort to justify why the character doesn’t run screaming. That sort of dance between showing something sort of weird and over the top and then showing how easily it can be placed with how weird reality is. That’s the technique I brought to Get Out.” - Jordan Peele

Justifying the actions of the characters so that the audience does not question the decisions that they make is particularly important in the horror genre or any genre that incorporates elements of the supernatural into a story.

Slowly backing the characters up into a corner until they have no escape is what creates the necessary suspenseful environment. 

He pitched the script to Blumhouse Productions - a company with a track record of producing horror films under the $6 million mark, then securing wide studio releases that catapult them to financial success thanks to horror’s broad commercial audience.

It was through Blumhouse that he was connected with DP Toby Oliver who had previously shot other films for the production company.  

“It began as the fun of a horror story. In the middle of the process it turned into something more important. The power of story is that it’s one of the few ways that we can really feel empathy and encourage empathy. When you have a protagonist, the whole trick is to bring the audience into the protagonist’s eyes.” - Jordan Peele

Peele puts us in the character’s shoes through the way that he structures almost all of his stories around a central protagonist. He also uses the placement of the camera, how it moves and the overall cinematography to make us see the world from the point of view of the main character.

Oliver lit most of the film in a natural way, presenting the world to the audience in the same way that the protagonist would see it.

“My pitch to him was that I thought the movie should have really quite a naturalistic feel. Not too crazy with the sort of horror conventions in terms of the way it looks. Maybe not until the very end of the movie where we go towards that territory a little bit more. With the more stylised lighting and camera angles.” - Toby Oliver 

Instead, the camera often tracked with the movement of the protagonist or stayed still when he was still. 

They also shot some POV shots, as if the camera were literally capturing what the character was seeing, or used over the shoulder shots angled to be a close approximation of the actor’s point of view.

This framing technique, combined with a widescreen aspect ratio, also stacks the image so that there are different planes within the frame.

“What I love to do as a DP is to have story elements in the foreground, midground and background. When you’re looking through the frame there’s depth that’s telling you something more about the characters and story as you look through it.” - Toby Oliver 

One of the challenges that came with the film’s low budget was an incredibly tight 23 day shooting schedule. To counter this they did a lot of planning about how the film would be covered before production started. This included Peele drawing up storyboards for any complicated scenes, walking through the scenes in the house location, and taking still photos of each shot they needed to get, which Oliver then converted into a written shot list.

They shot Get Out using two Alexa Minis in 3.2K ProRes to speed up the coverage of scenes, using Angenieux Optimo Zoom lenses, instead of primes, which also helped with the quick framing and set up time that was needed.

Overall, Get Out was crafted in its writing as a contained, satirical horror film, shot with limited locations, fairly uncomplicated, considered cinematography through the eyes of its protagonist, and pulled off on the low budget by shooting on a compressed schedule with pre-planned lighting diagrams and shot lists.   

US - $20 MILLION

“It really is a movie that was made with a fairly clear social statement in mind, but it's also a movie that I think is best when it's personalised. It’s a movie about the duality of mankind and it’s a movie about our fear of invasion, of the other, of the outsider and the revelation that we are our own worst enemy.” - Jordan Peele

Building on the massive financial success of Get Out, Peele’s follow up film took on a larger scope story that demanded an increased budget. Again, Blumhouse Productions came on board to produce, this time with an estimated budget of $20 million.

Like Get Out, Us was also written as a genre film, this time leaning more into the slasher sub-genre of horror. 

“I think what people are going to find in Us is that, much like in Get Out, I’m inspired by many different subgenres of horror. I really tried to make something that incorporates what I love about those and sort of steps into its own, new subgenre.” - Jordan Peele

This time Peele hired Michael Gioulakis to shoot the project, a cinematographer who’d worked in the horror and thriller genre for directors such as M. Night Shyamalan and David Robert Mitchell.

One of the early challenges that they faced in pre-production was a scheduling one. Because each of the four leads also played a doppelganger in the movie, and switching between those characters required hours of hair and make-up work, they needed to precisely plan each shot.

“Because you could never shoot a scene like you normally would where you shoot this side and then shoot the other side, we ended up actually boarding pretty much the whole movie. Which helped us to isolate who would be seen in which shot in which scene and then we could move that around and structure our day accordingly with costume and make up changes.” - Michael Gioulakis

The actors would arrive on set and block the scene from both sides of the character. When shooting, they then used a variety of doubles and stand-ins, who would take up one of the doppelgangers’ positions so that the actor had an eyeline to play to. They would shoot the scene from one character’s perspective and then usually come back the next day and do the other side of the scene.

For some wider two shots they left the camera in the same position and shot three passes: one with one character, one empty plate without the characters, and one with the actor in the other character’s make up. Or they would shoot the scene with a double and do a face replacement in post production.

Not only was continuity very important for this, but also the lighting setups had to remain consistent between shots.

“I kind of like the idea of heightened realism in lighting. Like a raw, naturalistic look, just a little bit of a slightly surreal look to the light.” - Michael Gioulakis

A great example of this idea can be seen in the opening sequence inside the hall of mirrors where he used soft, low level LED LightBlade 1Ks with a full grid Chimera diffusion to cast a cyan light to give a more surreal feeling to what should be darkness.

Like in all of his work, Peele’s cinematographers often play with the contrast between warm and cool light and the connotations that warm light during the day is comforting and safe and bluer light at night is colder, more scary and dangerous. 

This isn’t always the case, but generally in his films, Peele paces moments of comforting characterisation during the day with moments of darker terror at night.

One of the trickier sequences involved shooting on a lake at night. Instead of going the usual route of mimicking moonlight, the DP created a nondescript tungsten key source, punctuated by some lights off in the background to break up the darkness.

His gaffer put a 150 foot condor on either side of the lake, with three 24-light dinos on each condor to key the scene. They then put up a row of 1Ks and sodium vapour lights as practicals in the background.

The film was shot with an Alexa and Master Primes - on the 27mm and 32mm for about 90% of the film. He exposed everything using a single LUT that had no colour cast at the low end, which rendered darker skin tones more neutrally.

In the end, Us was shot over a longer production schedule that accommodated double-shooting scenes with the leads, stunt scenes, bigger set design builds, and digital post production work by Industrial Light & Magic.


NOPE - $68 MILLION

“First and foremost I wanted to make a UFO horror film. Of course, it’s like where is the iconic, black, UFO film. Whenever I feel like there’s my favourite movie out there that hasn’t been made, that’s the void I’m trying to fill with my films. It’s like trying to make the film that I wish someone would make for me.” - Jordan Peele

For his next, larger budget endeavour he turned to the UFO subgenre with a screenplay that was larger in scope than his previous films, due to its large action and stunt set pieces and increased visual effects work. 

Even though it was a bigger movie, the way in which he told and structured the story is comparable to his other work in a few ways. One - it was written as a genre film, based on horror with offshoots of other subgenres. Two - it was told over a compressed time period using relatively few locations. Three - it featured a small lead cast and told the story directly through the eyes of his protagonist.

With a larger story and a larger budget came the decision, from esteemed high budget cinematographer Hoyte Van Hoytema, to shoot the film in a larger format.

“So I talked to Hoyte. Obviously scope was a big thing and I wanted to push myself and I asked him, ‘How would you capture an actual UFO? What camera would you use?’ And that’s what we should use in the movie. Both in the movie and in the meta way. And he said the Imax camera.” - Jordan Peele

So the decision was made that to create an immersive, otherworldly, large scope cinema experience they would shoot on a combination of 15-perf, large format IMAX on Hasselblad lenses and 5-perf 65mm with Panavision Sphero 65 glass.

They stuck to Imax as much as they could, but had to use Panavision’s System 65 for any intimate dialogue scenes, because the Imax camera’s noisy film-transport mechanics make recording clean sync sound impossible.

They shot the daytime scenes on 65mm Kodak 250D and dark interiors and night scenes on Kodak 500T. They also used Kodak 50D to capture the aerial footage. He developed the film at its box speed without pushing or pulling it to ensure they achieved maximum colour depth and contrast ranges without any exaggerated film grain.

The most challenging scene for any cinematographer to light is a night exterior in a location which doesn’t have any practical lights to motivate lighting from. 

Unlike the night exteriors in Us, which were keyed with tungsten units from an imagined practical source, van Hoytema chose to instead try to simulate the look of moonlight. There are two ways that this is conventionally done. 

The first is shooting day for night, where the scene is captured during the day under direct sunlight which is made to look like moonlight using a special camera LUT.

The second way is to shoot at night and use a large, high output source rigged up in the air to illuminate a part of the exterior set. However the larger the area that requires light, the more difficult this becomes. 

Van Hoytema came up with an innovative third method that he had previously used to photograph the large exterior lunar sequences on Ad Astra.

He used a decommissioned 3D rig that allowed two cameras to be mounted and customised it so that both cameras were perfectly aligned and shot the same image. 

He then attached a custom Arri Alexa 65, which had an infrared sensor that captured daylight skies as dark. A Panavision 65 film camera was mounted to capture the same image in full colour.

In that way they shot two images during the day that they could combine, using the digital infrared footage from the Alexa 65 to produce dark looking skies and pull the colour from the film negative of the Panavision 65.

This gave the night sequences a filmic colour combined with a background which looked like it was lit with moonlight and allowed the audience to ‘see in the dark’.

“Shooting on Imax brings its whole own set of challenges to the set. So for somebody that hasn’t shot on Imax you definitely bump yourself out of your comfort zone. By doing tests it became very evident, very early, that the advantages by far outweighed the disadvantages or the nuisances.” - Hoyte van Hoytema

While maintaining many of the story and filmmaking principles from his prior films, Nope was pulled off on a much larger budget that allowed them to shoot in the more expensive large format, with more money dedicated to post production, stunts and large action sequences that the bigger scope script required. 


CONCLUSION

Jordan Peele’s filmic sensibilities - a love of genre, stories that carry broader social commentary, told with a limited cast in limited locations through the sympathetic eyes of a central protagonist - have remained the same throughout his career as a writer and director.

What has changed is the scope of the stories he tells. Each new film he’s made has seen increasingly bigger set pieces, more complex action scenes and larger set builds which are captured by more expensive filmmaking techniques.

This increase in scope is what has influenced each bump up in budget - all the way from his beginnings as a low budget horror filmmaker to directing a massive, Hollywood blockbuster.

Gray Kotze

Why Top Gun: Maverick Action Scenes Feel Unbelievably Real

The runaway financial success of Top Gun: Maverick that makes it, at the time of this video, the sixth highest grossing movie in US box office history can be boiled down to numerous factors. This video will look at one of those factors: its aerial action scenes.

INTRODUCTION

The runaway financial success of Top Gun: Maverick - which makes it, at the time of this video, the sixth highest grossing movie in US box office history, coming out ahead of even an Avengers movie - can be boiled down to numerous factors.

It was built on existing copyright and boosted by the success of the original Top Gun. It starred Tom Cruise. It pulled at the sentimental strings of a huge audience that missed the big Hollywood blockbusters of old, while still revitalising the formula with something fresh. It was directed with a deft handling of emotion. And - what we’ll talk about in this video - it was executed with amazingly filmed aerial action sequences that kept audiences right on the edge of their seats.

IN-CAMERA VS VFX

But what is it that differentiates these moments of action from many of the other blockbuster set pieces that we’ve become used to? I’d pinpoint it to an effective use of ‘in-camera’ photography. In other words, using real effects more than visual effects.

“I think when you see the film you really feel what it’s like to be a Top Gun pilot. You can’t fake that.” - Joseph Kosinski

Much of the appeal of a blockbuster comes from the sequences which feel ‘larger than life’ and offer a spectacle - whether that means large choreographed dance routines, car chases, bank heists or displays of superpowers.

Every scene like this requires a filmmaking solution beyond the realms of just shooting two actors talking. 

On the one end we have practical or in-camera effects. This is where real world filmmaking, engineering solutions and optical trickery are mixed - such as shooting miniatures or using forced perspective.

At the other end we have CGI, where computer software is used to manipulate and create those images.

Almost every big budget movie nowadays, including Top Gun: Maverick, uses a combination of both practical photography and computer-generated imagery. However some films, like Maverick, prioritise in-camera effects in order to achieve shots with a greater tie to reality.

“You can’t fake the G-forces, you can’t fake the vibrations, you can’t fake what it looks like to be in one of these fighter jets. We wanted to capture every bit of that and shooting it for real allowed us to do that.” - Joseph Kosinski

Once director Joseph Kosinski and cinematographer Claudio Miranda had the shooting script in their hands they had to start making decisions about how they would translate the words on the page into awe inspiring aerial action set pieces.

Shooting aerial sequences is a large practical challenge. 

First, they broke the aerial work down into three types of shots: one, on the ground shots; two, air to air shots; and three, on board shots.

1 - ON THE GROUND

To execute the many aerial sequences in the movie they turned to David Nowell, a camera operator and specialist aerial director of photography who had worked on the original Top Gun film. 

“If you analyse the first Top Gun about 75% of all the aerials we actually did from the mountain top because you can get stuff on a 1,000mm lens that you just can’t quite get when you’re filming air to air. And I brought that forward to Joe Kosinski, saying, ‘You have to do this on this movie. This is the difference it makes.’ And so, we did. We spent almost a week on the new Top Gun just on the mountain top getting all the different shots that they needed.” - David Nowell

Cinematographer Claudio Miranda selected the Sony Venice as the best camera for this shoot - for reasons we’ll get to later. This digital footage was warmed up a lot, given deep shadows and had artificial 35mm film grain added to it in the grade to give the footage a similar feeling to the original - with its warm, bronzed skin tones.

To further enhance the original Top Gun look, Miranda consulted with Jeffrey Kimball, the cinematographer on the 1986 film, who passed on information about the graduated filters that he shot with.

Grads, or graduated ND filters, have a gradient of ND that is strongest at the top and falls off towards the bottom, with either a soft or a hard transition. Usually grads are used to shoot landscapes or skies: when the darker ND part of the filter is placed over the sky it produces a more dramatic, tinted look.
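For anyone curious about the numbers behind ND filters: their strength is usually quoted as optical density, and each 0.3 of density cuts roughly one stop of light. A quick sketch of that conversion (the ND values below are illustrative, not figures quoted for this shoot):

```python
import math

def nd_to_stops(density: float) -> float:
    """Convert ND optical density to stops of light reduction.

    Transmission = 10 ** -density, and one stop halves the light,
    so stops = density * log2(10) (~3.32 stops per 1.0 of density).
    """
    return density * math.log2(10)

for d in (0.3, 0.6, 0.9):
    print(f"ND{d}: ~{nd_to_stops(d):.1f} stops")  # ~1, ~2, ~3 stops
```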

Capturing all the angles that they needed for these scenes meant using a massive camera package: up to six cameras for the on-board action, up to four cameras mounted to the plane’s exterior at a time, another camera for the air-to-air work and a few more for the ground to air unit.

Like the original they decided to shoot on spherical lenses and crop to a 2.39:1 aspect ratio. This was due to spherical lenses having better close focus abilities and being smaller in size than anamorphic lenses, which allowed them to be placed in tight plane interiors. 

To get shots of the planes from the ground, a camera unit was equipped with a Fujinon Premier 24-180mm and a 75-400mm zoom. They also carried two long Canon still lenses that were rehoused for cinema use: a 150-600mm zoom and a 1,000mm lens.

When this wasn’t long enough they used a doubler from IBE Optics. This 2x extender attaches to the back of the lens via a PL mount and doubles the focal length range. So a 75-400mm zoom effectively becomes a 150-800mm lens.
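The extender arithmetic is straightforward multiplication. As a hedged aside, a 2x extender also typically costs about two stops of light - that is a general property of teleconverters, not a spec quoted in this article:

```python
import math

def with_extender(focal_mm: float, factor: float = 2.0):
    """Return (effective focal length, stops of light lost).

    A teleconverter multiplies focal length by `factor` while the
    physical aperture stays the same, so the f-number also scales
    by `factor` - a loss of 2 * log2(factor) stops.
    """
    return focal_mm * factor, 2 * math.log2(factor)

print(with_extender(75))   # (150.0, 2.0) - wide end of the 75-400mm zoom
print(with_extender(400))  # (800.0, 2.0) - long end becomes 800mm
```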

Tracking fast moving objects so far away is very difficult, so the operators ended up using modified rifle scopes mounted on top of the camera to help them sight the planes. 

The on the ground scenes captured an F-14 Tomcat, which was re-skinned - made to look like an F-18 - with digital effects. This is a great example of the intersection between practical photography and digital effects that I talked about earlier.

2 - AIR TO AIR

Although very useful, cameras on the ground can’t physically move to track with the aircraft beyond pans and tilts. For dynamic, in the air motion and a raised point of view, the camera team shot air to air footage.

This required shooting with a CineJet - an agile Aero L-39 Albatros jet with a Shotover F1 stabilised head custom built onto the nose to house the camera.

The camera could be operated remotely while an experienced pilot adjusted the jet’s position relative to the planes they were shooting.

Since the Shotover is primarily designed to be used from a slower moving helicopter, and on Maverick they were shooting a fast moving Boeing F/A-18F Super Hornet fighter jet, they needed to come up with a technical solution.

“The one big change for Top Gun is that the Shotover systems that we’ve used for years…was never fast enough to go any faster than what a helicopter would do. But then Shotover…they updated the motors that would take the high torque needed to pan and tilt while flying 350 knots, that’s close to 400 miles per hour.”  - David Nowell   
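Nowell’s conversion checks out: a knot is one nautical mile per hour, about 1.15 statute miles per hour.

```python
def knots_to_mph(knots: float) -> float:
    # 1 knot = 1.15078 statute miles per hour
    return knots * 1.15078

print(round(knots_to_mph(350), 1))  # ~402.8 mph - "close to 400 miles per hour"
```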

For certain sequences that required a shot looking back at aircraft, they used an Embraer Phenom 300 camera jet that had both front and back mounted Shotovers.

The Venice mounted on the Shotover was paired with a Fujinon zoom - either a 20-120mm or an 85-300mm. Some helicopter work was also done with the larger Shotover K1, whose extended case could house Fujinon’s larger 25-300mm zoom.

3 - ON BOARD

Arguably the most engaging and jaw dropping footage in the film comes from the cameras that are hard mounted onto the plane itself.

There are two ways that this kind of footage can be shot. The most common technique involves placing actors in a stationary car, spaceship, plane or whatever kind of moving vehicle it is, on a studio soundstage. 

Outside the windows of said vehicle the technical crew will place a large bluescreen, greenscreen or nowadays, a section of LED wall. The actors then pretend the vehicle is moving, do their scene and the crew may give the vehicle a shake to simulate movement. 

In post production this bluescreen outside the windows is replaced with either footage of the background space they want to put the vehicle in, such as highway footage, or with an artificial, computer generated background.

The two main reasons for shooting this way are that, one, it is usually cheaper and, two, it offers a far greater degree of control. For example, it allows the actors to easily repeat the scene, the director can monitor their performances live and talk to them between takes, the sound person can get clean dialogue and the DP can carefully light the scene so that it is exposed to their liking.

Instead of taking this more conventional approach, Top Gun’s creative team made the radical decision to shoot this footage practically - in real life. 

To prepare, the actors underwent three months of training, designed by Tom Cruise, so that they could withstand the extreme forces that would play out on them during filming.

Along with the difficulties involved in the actors giving complex emotional performances while flying at extremely high speeds, rigging the on board cameras to capture these performances was no easy feat.

The main reason Miranda went with the Sony Venice was its Rialto system. This effectively allows the camera to be split in two: one small sliver holds the sensor and the lens, while the other unit houses the rest of the camera body and the required battery power. The two units are tethered by a cable.

1st AC Dan Ming, along with a team of engineers, came up with a plan to mount six cameras inside the F18.

They custom machined plates that could be screwed into the plane that the cameras were mounted to. Three Venice bodies and a fourth Venice sensor block were mounted in front of the actors in the back seat of the jet. These were tethered to a body block and battery rack that they placed near the front seat where the real pilot was. 

Two additional sensor blocks were also rigged on either side of the actor to get over the shoulder shots. Again, they were tethered to body blocks at the front of the plane.

As I mentioned, fitting that many cameras into such a tight space meant that the lenses needed to be spherical, have good close focus and be as low profile as possible. Miranda went with a combination of compact 10-15mm Voigtländer Heliar wide-angle primes and Zeiss Loxia primes.

Earlier I mentioned that this method of hard mounting the cameras came with a lack of control. This is perhaps best illustrated by the fact that, once the plane took off, the actors were not only responsible for their own performances - they even had to trigger the camera to roll and cut while up in the air.

“Ultimately when they’re up there it’s up to them to turn the camera on and play the scene. I mean, the biggest challenge is not being there to give feedback. So you’re putting a lot of responsibility and trust in our cast. So, that was a unique way of directing the film for those particular scenes but it’s the only way to capture what we were able to get.” - Joseph Kosinski

Filming in this way meant that they’d do a run, come back and sometimes find that parts of the footage weren’t usable - because of the lighting, the actor’s eyeline being in the wrong place, or even because an actor didn’t properly trigger the camera to record.

However the footage that did work looked incredible and gave a feeling of being in a real cockpit - complete with all the vibrations, natural variations in light, and realistic adrenaline filled performances from the actors. These images wouldn’t have been the same had they shot these scenes in a studio. 

Four cameras were also hard mounted directly onto the exterior of the jet. Again they used the Rialto system with wide angle Voigtländer primes. Another advantage of using the Venice is that it has a wide selection of internal ND filters. 

This meant that they didn’t need to attach a mattebox with external NDs to cut the exposure, which would have made the camera’s profile too big for the interior shots and would probably have been impossible to rig safely on the exterior cameras due to the jet’s extremely high speeds.

CONCLUSION

Top Gun: Maverick brings us back to an era of filmmaking where real effects are used to tell stories and the CGI that is used is done subtly and largely goes unnoticed by the audience. 

For years now, by and large, I’ve been left unmoved watching most action in films. The overabundance of CGI triggers something in my brain that tells me that what I’m watching isn’t real, which makes the action feel less exciting.

By putting us in an environment where every manoeuvre is physical, real and visceral, the film makes the stakes feel genuine. This leads to true emotional connection and immersion in the story.

There’s a reason you often hear auteurs sing the praises of in-camera effects and disparage the overuse of CGI. Maverick uses the best of both worlds: the crew executed most of the action with bold practical photography that was both innovative and safe.

Subtle digital effects were then brought in later when necessary to make up for those shots which were practically impossible. 

I can only hope that Hollywood executives take this away as one of the reasons for the film’s financial success and encourage these kinds of filmmaking decisions going forward. There’s always a time and a place for great VFX in cinema, but sometimes shooting things practically is the best way to go.

Gray Kotze

Cinematography Style: Matthew Libatique

In this edition of Cinematography Style, we’ll unpack Matthew Libatique’s cinematography.

INTRODUCTION

“There’s an abundance of ways to shoot a film. In this world because we have so many people who are trying to make films, being original is also really difficult. You really have to go with a kind of abandon when you’re trying to create something special.”

From gritty, low budget movies to the biggest blockbuster superhero flicks in the world, Matthew Libatique’s cinematography has covered them all. Directors are drawn to his appetite for creative risk taking and his bold, subjective, in your face close ups, combined with deep experience and on set knowledge spanning a broad array of technical shooting setups.

In this edition of Cinematography Style, we’ll unpack Matthew Libatique’s photography by unveiling some of his philosophical thoughts and insights about working as a DP, as well as breaking down some of the gear and technical setups he has used to achieve his vision. 


BACKGROUND

“I started becoming interested in films because of the camera. In undergraduate school I saw Do The Right Thing. It was like a mind explosion of possibility. It was the first time I ever saw a film that made it feel like it was possible for a person like me to make films.”

Even though Libatique majored in sociology and communication during his undergraduate studies, he was still strongly drawn to the camera. This led him to enrol in an MFA in cinematography at the AFI. It was there that he met the director who would become one of his most important and longest running collaborators: Darren Aronofsky.

He shot Aronofsky’s early short film Protozoa, and when it came time to make Aronofsky’s debut feature, Pi, Libatique got the call to shoot it.

“The director gives you a motivation, an idea, a concept. And then you can build off of that. And the more they give you the more you can do.”

After the success of Aronofsky’s early films, Libatique began working as a feature film DP with other A-list directors, like: Spike Lee, Jon Favreau and Bradley Cooper.

PHILOSOPHY

“When I was becoming interested in filmmaking in undergrad I didn’t study film. It was in sociology and communications. The one camera they had was an Arri S and it had a variable speed motor on it. The variable speed motor was set to reverse. So when I got the footage back I had double exposed everything. And I looked at it and it was a complete and utter f— up on my part. But then I was sort of inspired by the mistake. I always look back on that moment and I’ve kinda made a career on those mistakes working out.”

I’d point to Libatique’s appetite for visual risk taking - which includes embracing what may be seen as ‘mistakes’ or ‘happy accidents’ - as a large part of what informs his photography.

What I mean by visual risk taking is that the films that he shoots often carry a visual language which doesn’t conform to what is seen as mainstream, Hollywood, cinematic conventions - such as steady, flowing camera moves, neutral colour palettes and more restrained contrast levels with highlights that don’t blow out and turn to pure white.

At times, his camera movement and lighting deliberately distorts and challenges what is seen as a perfect, clean image, by finding beauty in imperfections.

For example, his debut film Pi was shot on highly sensitive black and white reversal film, which has an exposure latitude far more limited than traditional colour negative. What this means visually is that there is a tiny range between the brightest and darkest parts of the image: areas of overexposure are quick to blow out, while shadowy areas of underexposure are quick to get crushed to pure black.

This resulted in an extremely high contrast black and white film, the complete opposite of Hollywood’s more traditionally accepted colour images that have gently graduated areas of light and shadow.

Another example of visual risk taking is using body mount rigs on Aronofsky movies like Requiem For A Dream where he strapped the camera directly onto actors for a highly subjective, actor focused point of view.

Even in his recent, high budget work on a superhero movie like Venom, he often directed light straight into anamorphic lenses, deliberately producing excessive horizontal flares that dirtied up the image.

Often these stylistic ideas come from the director, especially when working with one who is more hands on about the cinematography, like Aronofsky. Other times, visual ideas evolve from a combination of discussions and real world tests prior to shooting.

When prepping for A Star Is Born, Libatique brought a camera into director and actor Bradley Cooper’s house while he was working on the music to shoot some camera tests with him. A lot of ideas came out of this test that informed the language of the film. This included a red light that Bradley Cooper had in his kitchen, which inspired the use of magenta stage lighting for many of the performances in the film.

A final stylistic philosophy which I’d attribute to Libatique is his continual awareness of the point of view of the camera and whether the placement of the camera takes on a subjective or an objective perspective. 

In many of his films, particularly in his work with Aronofsky, he’s known for using a highly subjective camera that is one with the subject or character of the film. He does this by shooting them almost front on in big close ups that are tight and isolate the character in the frame. 

This is also paired with a handheld camera that he operates himself. Shooting with the characters in a reactive way, as if he’s tethered to them, makes the shots feel more character focused and subjective.

This isn’t to say that he always does this. Some other stories he’s shot in a wider, more detached, objective style. But whatever the movie he’s always acutely aware of where he places the camera and the effect that it has on the audience.


GEAR

Earlier I mentioned that he shot Pi on black and white reversal film, 16mm Eastman Tri-X 200 and Plus-X 50 to be precise. Unlike modern digital cinema cameras that have something like 17 stops of dynamic range, this reversal film that he shot on only had about 3 stops of range between the darkest shadows and brightest highlights.

This required his metering of exposure to be very precise. If he let the highlights be 4 stops brighter than the shadows then they would blow out to white and lose all information. One way he narrowed down the precision of his exposure was with reflective metering.
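Since each stop doubles the light, those latitude figures translate into dramatically different contrast ratios. A rough sketch of the arithmetic:

```python
def contrast_ratio(stops: int) -> int:
    """Brightness ratio between the brightest and darkest recordable
    values for a given number of stops of latitude (each stop = x2)."""
    return 2 ** stops

print(contrast_ratio(3))   # 8      -> roughly 8:1 for the reversal stock
print(contrast_ratio(17))  # 131072 -> ~131072:1 for a modern digital sensor
```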

“The thing that has really stuck with me throughout my career is the impact of reflective lighting.”

There are two approaches to metering, or judging how much light there is. One is incident metering, where the light meter is placed directly between the light source and the subject - such as in front of an actor’s face - facing the light, to determine how much light is falling on them.

The other way - which Libatique uses - is reflective metering. Instead of facing the meter towards the light, he faces it towards the subject. The meter then measures the amount of light hitting the subject and bouncing back - hence reflective metering.

“I’ve been using a reflective meter my entire career until this digital revolution. And even so I use a waveform that gives me a reflective reading of somebody’s skin tone because that’s the only way that I know how to expose anything.”

He mixes up his choice of format, camera and lenses a lot depending on the story and practical needs. For example, he’s shot some of Aronofsky’s work in 16mm, using Fuji Eterna stock on some projects and Kodak Vision on others.

Much of the rest of his work prior to digital cinematography taking over was shot on 35mm - again alternating between Fujifilm and Kodak stocks for different projects. 

Since digital has taken over he mainly uses different versions of the Arri Alexa - especially the Alexa Mini - but does occasionally use Red cameras.

He even famously used a Canon 7D DSLR with a 24mm L series lens to shoot the subway scenes in Black Swan, which he shot at 1,600 ISO at a deep stop of T8 ½. He did it in a documentary style, even pulling his own focus on the barrel of the lens. His colourist Charlie Hertzfeld later manipulated the 7D footage, especially the highlights, until it could be cut with the rest of the grainy 16mm footage.

His selection of lenses is as varied as his selection of cameras. He switches between using spherical and anamorphic lenses. Some examples of lenses he’s used include Panavision Ultra Speeds, Cooke Anamorphics, Zeiss Ultra Primes, Panavision Primos and Cooke S4s. 

On A Star Is Born, he carried two different anamorphic lens sets - the more modern, cleaner Cooke Anamorphics, and the super vintage Kowas - and switched between them depending on the feeling he wanted.

He used the Kowas, with their excessive flaring, hazing and optical imperfections for more subjective close up moments on stage. Then for the more objective off-stage work he switched to the cleaner Cookes.

Overall, most of the lighting in his films tends to gravitate towards the naturalistic side. But, within that, he introduces subtle changes depending on the nature and tone of the story.

For the more comedic Ruby Sparks, a lot of his lighting, although naturalistic, was very soft and diffused on the actors’ faces. Straight Outta Compton, which tips more into a tense, dramatic tone, had harder shadows, less diffusion and an overall lower exposure while still feeling naturalistic.

So while his lighting is always motivated by reality, the texture, quality, direction and colour of it change depending on how he wants the image to feel.

Since the rise in LED lighting, he often uses fixtures like LiteGear LiteMats, Astera Tubes and of course Arri Skypanels. When he can, he likes rigging them to a board so that he can precisely set levels and sometimes even make subtle changes as the camera roams around in a space.

Although he has used every kind of rig to move the camera, from a MOVI to a Steadicam to a Dolly, he is partial to operating the camera handheld on the shoulder. I think in some contexts this can be seen as one of those creative risks that we talked about earlier. 

For example, even on the large budget, traditional blockbuster - Iron Man - which you would expect to only have perfectly smooth dolly, crane and Steadicam motion - he threw the camera on his shoulder and gave us some of those on the ground, handheld close ups which he does so well.          


CONCLUSION

Although he uses a lot of different tools to capture his images, he doesn’t do so haphazardly. Being a good cinematographer is more than just knowing every piece of gear available. It’s about knowing how you can use that gear to produce a tangible effect.

Sometimes that effect should be a bit more subtle, but certain stories call for boldness.

His images may take large creative risks that go against conventions and expectations, but those risks are only taken when they are justified by the story. 

Gray Kotze

5 Reasons Why Zoom Lenses Are Better Than Primes

In this video let’s flip things in favour of our variable focal length friends by unpacking five reasons why zoom lenses are better than primes.

INTRODUCTION

As we saw in a previous video, there are many reasons why in some photographic situations prime lenses are a better choice than zoom lenses. The fixed focal length or magnification of prime lenses provide: a more considered perspective when choosing a focal length, better overall optical quality, a larger aperture, a smaller size and better close focusing capabilities.

In this video let’s flip things in favour of our variable focal length friends by unpacking five reasons why zoom lenses are better than primes.  

 

1 - ZOOM MOVEMENT

The first reason for choosing a zoom lens goes without saying - it allows you to do a zoom move within a shot. Most shots in cinema and other kinds of film content are captured at a fixed level of magnification and do not zoom during the shot.

I think this is in part due to the traditional language of cinema being built on a legacy of older films photographed with prime lenses - made before usable cinema zooms were widely manufactured, when primes were the de facto choice.

However, during the 1970s and 1980s using in-camera zooms to push into a shot or pull out wider without moving the camera gained more popularity amongst filmmakers. 

There are many stylistic motivations behind using zoom movement. It can be used to slowly pull out and reveal more information in a shot until we see the full scope of the landscape. It can be used as a slightly kitsch, crash zoom - where the camera rapidly punches in to reveal a character, to emphasise a certain line, or land a comedic punchline.

Because of their flexibility and ease of use, which we’ll come to later, zooms have also been widely used when shooting documentaries - particularly fly on the wall type doccies. In some films this type of zoom movement is extrapolated from these documentary conventions in order to lend a visual style of realism associated with the documentary look, or even to mock this look for comedic emphasis.

The list of reasons to zoom within a shot goes on, and each has a different stylistic or emotional impact depending on the context in which it is used. It should be noted, though, that most filmmakers are careful not to overuse zooms, as they can easily become a bit tired, distracting and clichéd unless they form part of an overall considered visual style.

2 - PRECISE FRAMING

Of course, zooming within a shot requires a zoom lens, but what about those films that don’t use any in-shot zooms yet still shoot on zoom lenses?

Another reason cinematographers may use a zoom is because they make it easy to precisely frame a shot. 

When you shoot with a prime lens’ fixed focal length on the camera and you want to change the width of the frame you need to physically move the position of the camera. This is easy when you are shooting handheld with a little mirrorless camera. 

But when you are using a hefty cinema rig, on a head and a set of legs that is so heavy that it requires a grip team each time you move and level the camera, using zooms becomes more appealing. 

With primes you may need to slightly reposition a frame by moving the camera forward six inches, realise this is too far, and then have to move the camera back three inches until that light stand is just out of frame. With a zoom lens you can simply change the focal length from 50mm to 45mm without moving the camera or tripod.

A great example of this happens on most car shoots. I’ve worked as a camera assistant on loads of car commercials and about 99% of the time when using a Russian Arm to shoot moving vehicles, DPs choose a zoom lens over a prime lens. 

It’s far easier and more practical to use a wireless motor to adjust the zoom on the barrel of the lens to find the correct frame from inside the Russian Arm vehicle, than it is to get the driver of the vehicle to keep repositioning the car a couple of metres on every run until the frame is perfect.

It is also easier to find the correct camera position without moving the camera when using wider primes - either with a pentafinder, a viewfinder app, or just the experience of the DP. But when you use longer focal lengths, like a 135mm or 180mm prime, the lens compression makes it far more difficult to find the correct frame without moving the camera.

There are also fewer prime focal length options at the longer end - which we’ll talk about later. Therefore, for telephoto shots, zooms are regularly used for their ability to punch in or out until the correct frame is found.

3 - FASTER SETUPS

With increased precision in framing shots comes a faster setup time. On a film set, time very much equals money: the quicker you can set up and shoot each shot, the less you pay in crew overtime, gear rental days and location fees.

When you’re working on a tight budget, without the possibility of extended overtime or extra pick up days, taking longer to set up shots means the director can film fewer takes, with less time to craft the actors’ performances or set up choreographed action.

Using zoom lenses speeds up production in a few ways. For one, if you shoot everything with a single zoom lens, it means less time spent changing lenses, swapping out matte boxes and recalibrating focus motors.

As we mentioned previously, it also means that grip teams don’t need to reposition heavy, time-consuming rigs like dolly track. If the track was laid a little too far forward, the operator can just zoom a little wider on the lens to find the frame, rather than starting from square one and re-laying the track.

Another practical example is when using a crane or a Technocrane. If you use a 35mm prime lens on the camera, balance it on a remote head, perfectly position the arm and then realise that the lens is not wide enough and you need a 24mm focal length instead, the grip team needs to bring down the arm, the camera team needs to switch out the lens, the Libra head technician needs to rebalance the head with the weight of the new lens, and finally the grip team then brings the crane back into the correct position. All this could take 10 minutes or more. 

If instead the DP used a zoom lens with a wireless motor on the zoom ring, this change would take less than 10 seconds.

10 minutes may not sound like a lot, but if this keeps happening throughout the day it can quickly add up to an hour or two of wasted shooting time - which is expensive, means less footage gets shot, and leaves the director with fewer precious takes to work with in the edit. 


4 - FOCAL LENGTH OPTIONS

A prime lens set usually covers a good range of focal lengths at the wide end, but when it comes to telephoto options beyond about 100mm the selection is usually very limited.

For this reason, DPs who like shooting with long focal lengths that compress the backgrounds in shots often hire a zoom. For example, the Arri Alura can punch in all the way to 250mm, while the longest focal length available in a set of modern cinema prime lenses such as the Arri Master Primes is 150mm.

So for cinematographers who want to use long, telephoto lenses, zooms are usually a better option. 

Many zooms also offer a greater overall range of focal lengths. For example, an Angenieux 12:1 zoom covers everything from a wide 24mm field of view to a compressed, telephoto 290mm.
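The “12:1” in that lens’ name is simply the ratio of its longest to its shortest focal length. A quick sanity check, using the 24mm and 290mm figures mentioned above:

```python
# Zoom ratio = longest focal length / shortest focal length.
# For a 24-290mm zoom, this works out to roughly 12:1 -
# hence the "12:1" designation.
short_end_mm = 24
long_end_mm = 290

ratio = long_end_mm / short_end_mm
print(f"Zoom ratio: {ratio:.1f}:1")  # -> Zoom ratio: 12.1:1
```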

For shoots that are in remote areas or in locations which cannot be accessed by a camera truck, carrying around a full set of spherical primes in three or four different lens cases is far more logistically challenging for the crew than just putting a single zoom lens on the camera and walking it in to set. 

This makes zooms far more flexible and practical when compared to primes, especially sets of older vintage primes, such as the Zeiss Super Speeds, which only come in six focal lengths from 18mm to 85mm.


5 - BUDGET

The final reason may seem a little counterintuitive because when you compare the price of a single prime lens with that of a single zoom lens, the zoom lens will almost always be more expensive. 

However, prime lenses are almost never bought or rented as individual units. They come in sets: such as a set of 6 lenses, or a set of 16 lenses. 

When the rental or purchase price of a full set of primes is tallied up, it is almost always more than that of a comparable single zoom lens that covers the same range of focal lengths.

Therefore, when the budget of a shoot is a bit tight, it may come down to either pleading with the rental house to break up a lens set into a very small selection of two or three primes that cover a limited range of focal lengths, or hiring a single zoom that you can use to cover every field of view that is required for the shoot.

In this regard, a zoom lens is far more realistic and practical.    


How We Made A New Yorker Short Documentary: With Jessie Zinn

Highlights from my chat with director Jessie Zinn about the film Drummies. We discuss the process of making a short documentary - from coming up with the initial concept, hiring gear, cutting it together, to finally distributing the finished film.

INTRODUCTION

“That gimbal was terrible. And it was so heavy.” “Didn’t it overheat a couple of times? Cause it was also in the middle of summer and we were shooting in the northern suburbs where it gets up to like 40 degrees which is like in Fahrenheit in the 100s. And we were, like, sweating, and the gimbal was making a noise.”

If you’re watching this you may know me as the disembodied voice behind this channel, who edits these video essays made up of diagrams, shoddy photoshop work and footage from contemporary cinema. But what you may not know is that I also work as a cinematographer in the real world. 

So, I thought I’d mix up the format a bit and chat to a real world director, Jessie Zinn, who I’ve shot some films for. One of those films that we shot during the height of the COVID lockdown, a short documentary called Drummies, was recently acquired by the New Yorker and is now available to view for free on their site - which I’ll link for you to check out.

Our chat will break down the process of making that short documentary - from coming up with the initial concept, hiring gear, cutting it together, to finally selling the finished film. 

Also, if you want to see the full, extended podcast with Jessie, and also donate to keeping the channel going, it’ll be available on the Master tier on Patreon.

CONCEPT

Before the cameras and the lights and the calling of action, every film begins as an idea.

“Ideas come through very unorthodox channels and different backends and ways of finding out about subjects and topics. I think it’s definitely worth noting that I always make films or I’m always interested in making films for me. And that’s not to say that I don’t have an audience in mind because of course I’m always very aware of who this is for ultimately; who is going to watch it. But I always approach a subject and have interest in making a film based on a subject that I’m just personally really, really interested in or feel a sense of passion towards in some sort of capacity.”

In the case of this film it was born out of seeing a photographic series by Alice Mann on the drummies or drum majorettes of Cape Town.

“Drummies is about a team of drum majorettes in Cape Town. It’s sort of like an intersection between cheerleading and marching band processions. It’s had this really interesting political history in South Africa because it was also one of the first racially integrated sports in South Africa during apartheid. And post-apartheid it’s become almost this underground cult world amongst young girls in schools. In particular in public schools. It’s both a sport where it brings a sense of community and family to them but also provides potential possibilities for upward social mobility.”

PRE-PRODUCTION

With a concept in place, Jessie then went about identifying and getting access to the potential characters that would be in the film and who we would focus on during the shooting process. 

“So I actually cast for Drummies and I did that remotely because I was still in the States. So I asked their coach to send through Whatsapp videos of the girls - basically auditioning.”

From there she cut down her ‘cast’ to four or five characters who we would do the majority of our filming with. In the final edit this was later cut down to three characters.

Before bringing in any cameras, she did audio-only interviews with her cast using a Zoom recorder and a lapel mic.

“People have their guard up when you first meet them. But in others, often with children, the first encounter is often sort of the least filtered. And so, I knew beforehand that there would probably be some audio soundbites and material that I could gather from those pre-interviews which maybe wouldn’t have been possible with having a whole camera setup around. Actually in the final film some of that audio and voice over is from those pre-interviews because some of it was such great material.”   

When Jessie contacted me to shoot the project she passed on snippets of this material to me, so that when we met to chat about the film I already had an idea of the kind of characters that we would be photographing. 

She also put together a script and a treatment which I could read through. As a cinematographer, it’s always fantastic to get this kind of thorough vision early on, as it facilitates the discussions we have about finding a look, or overarching style, for the film.

“It’s always deliberately decided beforehand, at least with the short docs that we work on. You know, I’ll sit down with you often at a coffee shop somewhere and I’ll be like, ‘Here’s a couple of reference films and reference images.’ Then you’ll look at them and then you’ll say, ‘OK. I think this is what we can do based on these references and based on the real people.’”

In the real world, our characters were sitting around during a hot summer vacation, unable to do much because of the COVID restrictions. This led to us discussing the idea that the footage should feel dreamy, as if they were suspended in time - which is also a line that came from one of the interviews.

To visually represent this feeling we decided to shoot a lot of the non-dialogue scenes in slow motion with a heavy, worn, often malfunctioning Ronin gimbal that we managed to borrow from another Cape Town documentary filmmaker.

“This was something that we discussed beforehand. That we wanted there to be a dreamy aesthetic and in terms of the actual movement using a handheld, rough aesthetic wouldn’t have achieved that. You definitely don’t want to limit the dreams that you have in terms of aesthetics for your film but you also do need to be very practical about it and I think that’s what we often get right. We sit down and say, ‘these are the things we would like’ and then ‘this is the version of these things that is actually achievable.’”

PRODUCTION

“It’s all about prep - literally. You know you’ve done your job well when you get onto set you can stand back and do very little. If you’re having to do a lot on set then you know you haven’t done a great job - basically.” 

With all of Jessie’s prep, creative vision and our discussions about the film’s look coming together, I then, as the cinematographer, need to come up with a list of gear that we’ll use to bring these ideas to the screen. 

When it comes to this, one of the biggest limitations is dictated by budget. Doccies are generally made with pretty limited funding, much of which is saved for post production finishing of the film. So for these kinds of projects I usually put together a gear list with two options: one, the kind of best case scenario with a full selection of the gear I’d like to rent, and two, a more stripped down list which is a bit lower cost.  

A little bit of back and forth with the gear house may ensue until we come up with the best gear package that meets our budget.

For this film it meant shooting on a Sony FS7 with my four Nikon AIS prime lenses - a 24mm, 35mm, 50mm and 85mm - although most of the film was shot on the two wider lenses. To add to the dreamy look that Jessie and I talked about, I shot everything with a ¼ Black Pro Mist filter, which gave the image a nice, diffused feeling.

As I mentioned, we managed to get a free Ronin for the shoot - the old, original Ronin, which was a bit tired, but we made it work. I also used a Sachtler Ace tripod, which some may see as too lightweight for a camera like the FS7.

But it’s small, easy to carry and makes finding a frame and levelling it off that much quicker. And in documentaries, speed is often more important than the fractionally more buttery smooth pans and tilts of a larger tripod head. 

Although it hopefully does not look like it, every single interior shot in the film was lit with a combination of artificial film light sources and my most important tool: negative fill.

The lighting package I carried around consisted of a budget-friendly Dracast LED bi-colour fresnel, two 8x8 blackout textiles and a couple of C-stands to rig them on. Rigging blackout to C-stands, or hastily tying it to window frames, is precisely how not to rig - but since I was working alone and had very limited time to set up each scene, I had to make do.

When you’re working with a limited budget, the easiest and most cost effective way to control light isn’t by adding light, but by removing it. 

As an example let's break down the changing room scene in the film.

“With that scene which would lead into the final performance because there was this theme throughout the film of being able to achieve dreams but also being held back from achieving those dreams. Because of COVID they weren’t actually able to perform and compete in all of these games and so, the changing room is of course tethered to reality. They have to get changed before any sort of performance. But they also weren’t performing at the time because of COVID restrictions and so I thought that that sort of worked.”

For that reason, I wanted to push this sequence visually about as far as we could into a dreamy state, while still maintaining a link to the real world. We took this orange, yellow palette of the changing room that we were presented with, amped up the warmth in the lighting to the max and pumped a bunch of smoke into the room to create an extra layer of diffusion.

With the help of an assistant I blacked out all the windows and doors which were letting through sunlight - except for a single window behind the characters. Outside this window I put up our LED fresnel, warmed it all the way up and I think even added a ½ CTO gel so that the colour temperature of the light went from tungsten to orange. 

Again, doing most of the heavy lifting by removing light and then carefully placing a single backlight to create contrast and a more amplified visual world.

During the shoot, a technique which Jessie and myself often employed was to cordon off and light a specific space, almost like a set, then place the characters within that space and let them converse or act as they would naturally. That way you maintain naturalistic conversation and action but are able to also better sculpt the cinematography into the form which best suits the film. 

POST-PRODUCTION

Once production wrapped, Jessie went about editing the film herself.

“I also often edit my films and so that is a big part of crafting a documentary. You’re not finding the story in the edit but you’re definitely chipping away at the basic model that you’ve planned. So when I’m on set I’m also shooting or directing with the edit in mind.”

“Drummies was a good exercise in learning how to trust my instinct. Basically the very first assembly that I laid down on the timeline which I had to deliver to the programme I think it was like three days after we’d shot the film. Which is insane, again. And so it was like a fever dream of staying up into the night to get this assembly done. And after that obviously I did many different versions and different edits where things changed and the structure changed. But when I looked at the final film it was actually almost identical in terms of structure to the very first assembly that I’d put down, like months ago, which also was the same as the script and the treatment.”

So again we come back to this idea of prep and how having a refined vision for the film going into the shooting process, even in a more unpredictable medium like documentary, is so valuable for a director to have.

After the completion of the music composition, the sound mix and the final grade - which was based on the reference of another vibrant, colourful and slightly dreamy film, The Florida Project - Jessie was left with a completed film. But what comes after you have the final product?

“A couple of years ago people would say that the be all and end all are film festivals and that determines the success of your film. But I don’t think that’s the case anymore because there are incredible online avenues for streaming services where you can put your film out there and it can get tons of views and potentially gain an audience that is much wider and larger than a film festival.”

“So, Drummies did go to a few festivals but it was valuable because that’s how the film got distribution in the end. Both POV and The New Yorker showed interest because they’d seen it at a festival called Aspen Shorts Fest and they both reached out to me by email and said we’re interested in seeing the film, we’d like to have a look. And both of them basically came to me with offers within a few days and so that was the first film where I’d received pretty standard, almost classical distribution interest in the film as far as broadcasters and sort of news channels are concerned. Whereas my two previous films got Vimeo Staff Picks which is, I’d say, a little bit more unorthodox and more current than those avenues.”

“People often think that if your film doesn’t get into an A-list festival then it’s the end of your film. Which is just so not true. Because the festivals that Drummies played at are, I would say, probably B type of festivals. And those festivals got way more distributors interested than some of the A list festivals that I’d heard about. And so, never underestimate the space that your film is screening in is a valuable lesson that I learnt.” 


5 Reasons Why Prime Lenses Are Better Than Zooms

As is the case with all film gear, there are a number of reasons for and against using prime lenses versus selecting zoom lenses. In this video I’ll go over five reasons that make primes superior to zooms - and follow it up with a later video from the other side of the argument about why zoom lenses are better than primes.

INTRODUCTION

If you’re into cinematography, photography or capturing images at all you’re probably aware that there are two types of camera lenses - prime lenses and zoom lenses. 

Primes have a fixed focal length, which is measured in millimetres. This means that when you put the lens on a camera, the angle of view of what it sees - how wide it is, or how much the image is magnified - is fixed and cannot be changed.

Zooms have a range of variable focal lengths. So by moving the zoom ring on the barrel of a lens you can change how wide an image is - in some cases all the way from a very wide angle to a close up telephoto shot.
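To put rough numbers on how focal length maps to angle of view, here’s a minimal Python sketch using the standard pinhole formula. The 24.9mm Super 35 sensor width is an assumption for the example; other sensor formats will give different angles:

```python
import math

def horizontal_aov(focal_length_mm, sensor_width_mm=24.9):
    """Horizontal angle of view in degrees for a given focal length.

    Standard formula: AOV = 2 * atan(sensor_width / (2 * focal_length)).
    The 24.9mm default is an assumed Super 35 sensor width,
    used purely for illustration.
    """
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_length_mm)))

# A wide 24mm sees roughly 55 degrees; a telephoto 290mm only about 5.
for f in (24, 50, 135, 290):
    print(f"{f}mm -> {horizontal_aov(f):.1f} degrees")
```

A prime locks you into one of these angles of view; a zoom sweeps through them continuously.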

As is the case with all film gear, there are a number of reasons for and against using prime lenses. In this video I’ll go over five reasons that make primes superior to zooms - and follow it up with a later video from the other side of the argument about why zoom lenses are better than primes. So if you like this content, consider hitting that subscribe button so that you can view the follow up video. Now, let’s get into it.

 

1 - CONSIDERED PERSPECTIVE

We can think of the width of a frame, or a focal length, as offering a perspective on the world. 

This close up, shot with a wide focal length, sees a lot of background and places us, the audience, within the same world as that of the character. This close up, shot with a longer focal length, isolates the character more from the background, blurs it, and compresses, rather than distorts the features of their face.

The great thing about a prime lens’ fixed focal length, is that it also fixes the perspective or feeling of an image. When you choose what prime to put on the camera you are therefore forced into making a decision about perspective. 

This isn’t to say that you can't do the same with a zoom, but when you work with a variable focal length lens it’s far easier to just plonk down the camera at a random distance from the subject and then zoom in or out until you get to the shot size that you want.

If you’re using a prime, you need to first decide on the focal length you want and then are forced to reposition the camera by moving it into the correct position. As they say in photography, it makes your legs become the zoom. This is especially useful as a teaching device for those learning about lens selection and camera placement. 

So, prime lenses force you to think more about the focal length that you choose, which may elevate the visual telling of the story by making it a deliberate decision, rather than an incidental one.   

2 - OPTICAL QUALITY

The practical reasons behind choosing a lens are important, but so too is the look that the lens produces. Due to their design, prime lenses are considered to possess a higher quality optical look than most equivalent zooms. This is mainly because the construction of primes is much simpler and the design more straightforward than that of zooms.

Inside a lens you’ll find different pieces of curved glass. Light passes through this glass to produce an image. Because prime lenses only need to be built for a single focal length, they can use fewer of these glass elements - and the elements inside the lens don’t have to move in order to zoom. 

Fewer glass elements mean fewer air-to-glass surfaces for light to reflect and scatter off, which usually means sharper images. Also, prime lenses only need to be corrected for optical aberrations, like distortion and chromatic aberration, at a single focal length. Zooms need to do this across multiple focal lengths, which is trickier to do. 

Therefore, your average prime lens will be sharper, with less distortion, or bending of the image, and less colour fringing between dark and light areas. 

I should add as a caveat that modern, high-end cinema zooms are constructed to a high degree of optical quality that is comparable to many prime lenses, but you pay a pretty penny for that level of cutting edge engineering. When you’re looking at zooms and primes in a comparable price range, primes usually have the winning, optical edge.        

3 - APERTURE

A lens’ aperture is the size of the round opening inside the lens that lets in light. A large opening, which has a lower T or F stop number, like T/1.3, lets in more light, while a smaller opening, with a stop such as T/2.8, lets in less light.

Once again, because of the extra glass and more complex design required to build zoom lenses, primes tend to have a faster stop. 

When it comes to cinema glass, each extra stop of light that a lens can let in is precious and demands a higher price tag. Shooting with a wide aperture comes with a few advantages. It means you can expose an image in dark, lower light conditions. It allows you to create more bokeh - the out of focus area that separates the subject from the background and is generally considered ‘cinematic’. 

This allows you to also be more deliberate about what is in and out of focus and is a way of guiding the audience's gaze to a certain part of the frame. So, for those cinematographers or photographers that want fast lenses, primes are the way to go.      
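To quantify the gap between a fast prime and a slower zoom: light transmission scales with the inverse square of the T-stop, and one full stop is a doubling of light. A small, illustrative Python sketch, using the T/1.3 and T/2.8 figures from the example stops above:

```python
import math

def stops_gained(t_slow, t_fast):
    """Full stops of extra light when moving from a slower to a faster T-stop.

    Transmission scales with 1 / T^2, and one stop is a doubling of light,
    so the stop difference is 2 * log2(t_slow / t_fast).
    """
    return 2 * math.log2(t_slow / t_fast)

# Comparing a T/1.3 prime with a T/2.8 zoom:
stops = stops_gained(2.8, 1.3)
light_ratio = (2.8 / 1.3) ** 2
print(f"~{stops:.1f} stops faster, roughly {light_ratio:.1f}x the light")
```

That difference of around two stops is why fast primes remain the go-to choice for low-light work.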

4 - SIZE

If you’re working in wide open spaces, with a dolly that holds a heavy cinema camera, then the size of the lens is less of a concern. But the reality is that more often than not that’s not the case and having a physically smaller lens attached to the camera makes things much easier.

By now we know that zooming requires extra glass and extra glass requires a larger housing. This means zooms are heavier, longer and wider than primes. 

Cinema rigs that need to be precariously balanced - like drones, a gimbal or a Steadicam - are often unable to take longer cinema zooms because of their excess weight and length. Some of the bigger zooms are so large that they can’t be effectively operated as a handheld camera on the shoulder and are limited to only being viable for use on a dolly or a tripod.

So, if you need to work in a confined space, want to move the camera on a lightweight rig, or shoot with a handheld camera then a big zoom lens may not be for you.   

 

5 - FOCUS

The fifth and final reason not to use a zoom lens is because of its focusing limitations. Every lens has a range of distances that can be rendered in sharp focus and a range that will fall out of focus. 

Cinema and photo lenses can focus on objects far away up to infinity, but they are limited as to how near they can focus on an object. This limit is called close focus or MOD - minimum object distance. Generally, zooms have inferior close focusing capabilities. 

For example, the closest that an Angenieux 24-290mm zoom, set at a 50mm focal length, can focus is 4 feet. A Zeiss Master Prime 50mm, on the other hand, can render objects in focus from as close as 20 inches away. 

So, if you want to shoot a close up of a face or a detail, you can get much tighter with a prime lens than with a zoom lens before you lose focus.

Additionally, when it comes to many zooms built for photography, because the glass shifts when you zoom in, the focus also changes. So if a person is in focus at 70mm and you then zoom into them to 200mm without touching the focus ring, the shot may go out of focus and be soft.

Most cinema zooms are parfocal, meaning they remedy this and maintain sharpness across the entire focal length range. However, for simple, pin-sharp focus that doesn’t shift, and a superior MOD, primes beat zooms.


The Most Popular Cinema Lenses (Part 5): Zeiss, Cooke, Panavision, JDC

In this episode, let's take a look at four more cinema lenses which are popular in the film industry.

INTRODUCTION

As a cinematographer, picking the right gear that you need to use on a shoot is, of course, extremely important. One of the most important bits of gear to put on your rental list are cinema lenses.

They are chosen on the basis of two factors: one, the kind of look that they produce and two, their practical and ergonomic advantages or disadvantages. The look and aesthetic feel for each project may be different and the kind of shooting environment and setups needed will also be different depending on the situation and story.  

So to better equip you - and, I guess, because I’m a bit of a lens nerd - I’m going to look at three more cinema lenses that are popularly used in the high end film industry, breaking down their look and their practical design. And at the end of the video I’ll take a look at one lens contraption that’s a bit of an oddity but a pretty interesting piece of kit, so stick around.


JDC XTAL XPRESS

First up, we have the JDC Xtal Express lenses, an interesting set of vintage anamorphic primes made for 35mm.

It’s a bit tricky to verify, but based on what people have told me, this set of lenses is mainly made up of spherical glass from vintage Cooke S2 and S3 lenses, combined with Japanese anamorphic elements - hence they are sometimes called the Cooke Xtal Express. This glass was then rehoused as a lens set by Joe Dunton Cameras - JDC. 

The lenses were later bought by the rental house Panavision, who now rent them out as part of their inventory. 

Although perhaps not as well known as some of the bigger producers of anamorphic glass, this set of lenses is still fairly popular amongst DPs - based on my anecdotal experience as a 2nd AC, particularly amongst European and UK cinematographers.

On the surface they are a bit of an odd looking bunch. Different focal lengths come in different lengths, different weights, with different sized front diameters and different stops. Although they are fairly well constructed, they were made a while ago and do need to be handled with care as they are not as robust as the housings on other modern lenses, such as the newer Cooke Anamorphics, which we’ll get to later. This may mean turning down the strength of the torque on a wireless focus motor and being more gentle when pulling focus.

Practically, their longer build, especially on longer focal lengths like the 100mm, may make them a bit front heavy on cameras and therefore a bit tricky to balance on a gimbal or Steadicam without adding extra weights to the body to balance it out. 

While they are long, they aren’t particularly heavy, so are still good lenses for applications such as handheld camera movement.

Their different front diameter sizes also means that camera assistants need to switch out the back of the mattebox to different sizes, or use custom made reduction rings depending on the lens.

But, what they lack in practicality they more than make up for with their interesting production of images. They produce what I’d call quite a heavily vintage anamorphic look. This means they have quite a bit of distortion at the edges of the frame, especially at wider focal lengths, which will lead to straight lines, such as door frames, bending.

They also have heavy falloff around the edges of the images. This means that characters will appear sharp in the centre of the frame but, when placed on the edges of an image may seem out of focus even though they are not, due to the lens’ anamorphic falloff and softness.

Wide open they have a beautiful, soft, dreamy look with oval bokeh and lovely flares - perfect for those with a taste for vintage anamorphics.     


COOKE ANAMORPHIC/i S35

To compare the Xtals to a more modern anamorphic option, let’s take a look at the Cooke Anamorphic/i lenses, which are made to cover a 35mm sensor. 

Whereas the Xtals use older Cooke S2 glass, the newer Anamorphic/i lenses are made with glass that has modern coatings. So while they are sharper across the frame, they still maintain some of those lovely anamorphic image characteristics when shot wide open, such as blooming, horizontal flares, oval bokeh and a more gradual anamorphic falloff around the edges. 

On a scale from soft, vintage glass, like the Xtals, to super sharp, super clean modern glass, like the Zeiss Master Anamorphics, I’d put the Anamorphic/i lenses somewhere in the middle.

They come in a large range of 10 focal lengths, which is a lot for a set of anamorphic primes - anamorphics typically have fewer focal length options than spherical lens sets. The range includes everything from a wide 28mm, to a macro 65mm for close-up work without needing diopter filters, to a telephoto 300mm.

They cut together well and carry the same, consistent colour reproduction as other modern lenses in the Cooke spherical range, such as the S4s and the S5s. This has been dubbed ‘the Cooke look’ - which refers to their overall warmth, high contrast, natural colour reproduction and their sharp but smooth, pleasing rendering of details.

This look is paired with the modern construction of a super solid, durable lens housing with accurate, well spaced distance and iris markings and a smooth focus action when you turn the focus gear. This, along with their more standardised sizes across the range, makes them a pleasure for camera assistants to work with. 

A final useful feature comes from the ‘i’ part of their name. This refers to a contact point situated on the mount of the lens that connects to a contact point on the PL mount of most high end cameras. It’s used to send metadata from the lens - such as its focal length, focus distance and stop - to the camera, where it is recorded for each frame. This is useful both because it negates the need for focus pullers to programme distance measurements into some wireless follow focus systems, and because it provides information that is valuable for post-production visual effects work. 

ARRI / ZEISS ULTRA PRIMES

Next, let’s turn to spherical lenses and take a look at a very popular product in the Zeiss lineup - the Ultra Primes. If we look at the timeline of cinema lens releases from Zeiss, the Ultra Primes were developed after the Super Speeds and before the Master Primes.

This is fitting as I think both technically and in terms of their look they also occupy the middle ground between those two spherical prime lens sets. The Super Speeds have a subtle vintage quality to them, the Master Primes are supremely sharp, clean and modern, and the look of the Ultra Primes sits somewhere in between them.

Their look is very similar to that of the Master Primes and the two sets can easily be cut together. Their colour reproduction is accurate but neutral, or slightly cooler than something like a Cooke. They are sharp, and they flare ever so slightly more than Master Primes - which flare very little. The choice of whether to shoot on Master Primes or Ultra Primes often comes down to their technical designs.

In terms of their build they are lightweight and compact: a little bit chunkier than the Super Speeds, but smaller than the Master Primes. With a stop of T/1.9 they are fast, produce considerable, circular bokeh and are good for low light conditions. They are, however, slightly slower than the T/1.3 Super Speeds and Master Primes.

They come in 15 different focal lengths, ranging all the way from an 8mm extreme wide angle to a 180mm telephoto. This means they have the widest focal range coverage of pretty much any prime lens set available. This gives them an edge over the Master Primes for cinematographers who are very specific about their focal lengths and like to cover every in-between prime, such as having a 28mm focal length option between a 24mm and a 32mm. 

Like the relatively modern Cooke lenses we discussed, the Ultra Primes also offer a high quality, durable build with good markings that make them quick and easy to work with.  

PANAVISION FRAZIER LENS SYSTEM

Finally let’s take a look at something a bit more unusual. This quote unquote ‘lens’ might not be popular in the mainstream sense, but you could say it’s popular for a specific application. It’s not even a lens per se, but rather a lens system that can be attached to a camera for specialty macro shots where the lens needs to squeeze into a space and be incredibly close to the subject, sometimes at unusual angles.

Often referred to as a snorkel or a periscope, the Frazier Lens System is attached to a cinema camera via a PL mount. At the end of its long extended barrel filled with glass and mirrors, there’s a second lens mount which is designed to take the capture lens - which is usually a small, manual iris, stills lens - such as a Nikon AIS.

So why would you shoot with this contraption? It has a swivel tip that allows you to tilt, pan or rotate what the camera sees without having to rotate the camera itself. Because of its thin design it also means a lens can be squeezed into very tight spaces close to what it is photographing. This makes it a popular way to capture and move the camera around miniature sets.

A drawback is that it has an aperture of T/7.1, which means a lot of light is needed to achieve exposure. Because of its build, however, it is easier to light macro shots without the camera blocking the light source. And since you usually want to shoot extreme macro shots at as deep a stop as possible to make focusing easier, the T/7.1 aperture isn’t actually a big deal.
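
As a rough aside, the amount of extra light a slow aperture demands can be worked out from the T-stop numbers themselves. Here’s a minimal Python sketch of that arithmetic - the function name is just illustrative, and the T/1.9 comparison point is an assumption (a fast prime, like an Ultra Prime), not something from the Frazier spec:

```python
import math

def stops_between(t_fast: float, t_slow: float) -> float:
    """Light loss, in stops, when moving from a fast T-stop to a slower one.

    Light transmitted is proportional to 1/T^2, and each stop halves the
    light, so the difference in stops is 2 * log2(t_slow / t_fast).
    """
    return 2 * math.log2(t_slow / t_fast)

# Roughly how much more light a T/7.1 system needs than a T/1.9 prime:
diff = stops_between(1.9, 7.1)
print(round(diff, 1))            # ~3.8 stops
print(round(2 ** diff))          # ~14x the light
```

In other words, closing down from T/1.9 to T/7.1 costs nearly four stops - around fourteen times the light - which is why these rigs tend to live on heavily lit studio setups.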

The system has an iris and a focus gear on the barrel near the PL mount so that assistants are able to manipulate focus, while the focus on the taking lens is set at a distance and left there.

To better explain a real world application here’s a BTS photo I snapped on a commercial shoot I was working on as a 2nd AC. This system was used to get the lens within about an inch of our subject's eye to achieve an extreme macro shot and still have space to get in studio light. We used the Frazier system on an Alexa Mini with a Nikon AIS taking lens.


What We Don't See In Movies Is More Important Than What We Do

INTRODUCTION

What is it about this scene that makes it feel scary? It takes place in the middle of the day without any frightening, dark lighting. It’s set in a suburban kitchen, not a haunted house. Kitchen chairs aren’t all that terrifying. 

What makes it scary isn’t what we do see - it’s what we don’t.

Since cinema is not a 360 degree medium, well, not yet anyway, and our perspective is not omnipresent, the choice of what is included and what is excluded from the screen is left up to the filmmakers.

It’s their job to create the perspective that the audience is presented with. So, let’s take a look at some examples of off screen action from well known films to uncover why this is done and why sometimes what we don’t see in movies is more important than what we do.

WRITING

The biggest decision that determines what we see in movies is made during the process of writing the script.

Just as the screenwriter wields control over what scenes they choose to include, they also control by extension what is not seen. Like in life, the world of fictional stories has an extensive timeline. It’s the job of the screenwriter to pick the moments from this imaginary timeline that best add to the telling of the story.  

Quentin Tarantino’s first film, Reservoir Dogs, is a classic example of not allowing us to see the full story timeline.

We open on a scene at a restaurant with a group of men conversing. The scene ends with them stepping out to perform a diamond heist. Cut to credits. Instead of the next scene opening on them performing the robbery, which would happen chronologically, we cut to chaos. One of the characters is critically injured. The robbery already happened.

Skipping over what would usually be the most important set piece in a heist movie is of course a very deliberate writing choice. Tarantino chooses not to show us that part of the story timeline. This sets up the tension and the key question in the entire film - what happened during the heist and why did it go wrong?  

The cliched mistake that first time filmmakers make in their debut short film is not understanding that what you don’t show is as important as what you do. They’ll open on a character waking up, showering, getting in the car, driving to a restaurant and then having a conversation. 

Something writers should always ask themselves is ‘what does this add’ and ‘is there a better way to present this information to the audience’? Does a character waking up add some crucial characterisation? Does it add to the core story or any themes? 

Or, can we characterise them in other scenes in other ways - by combining multiple facets of information into scenes and suggesting events rather than showing them?

By withholding extraneous scenes, filmmakers can actually be more deliberate about how they choose to deliver a story on screen. 

SOUND

While the foundation of the story can be altered by what is left out of the script, the filmmaking decisions of what to exclude in scenes - which are made during the process of filmmaking or in the edit - can also shape how we emotionally interpret them.

Off screen sound can be used to supplement off screen images.

Rather than seeing the body as it drops down the well with a loud, revealing ‘clang’, we cut to reaction shots of the characters and empty shots of the location. Without needing to see the action, we can hear it and therefore tell how deep the well is. Seeing the characters react creates more tension than if we were to cut to the primary source of the sound and action.

Then, we begin to hear another off screen sound. The ominous drumming from an unknown, approaching threat. Again, this use of off screen action combined with off screen sound creates a tense mood and tone that would be diminished were we to merely cut to the source of the sound. 

Not seeing the source of the approaching threat creates more suspense than if we were to show it.

This kind of sound is what we call diegetic sound - sound that comes from the setting of the world of the film. Non-diegetic sound - which comes from the world of the filmmakers, such as music - is another way of influencing the mood or tone in a movie.

TONES

So we know that music and off screen sound in movies can create a tone or mood, but we can also do this with the camera by choosing what images it captures and, more importantly, by choosing what we don’t see.

Off screen action can be used to build up tension and suspense, but it can also be used to draw out different emotional responses from an audience, such as comedy.  

Edgar Wright has used this gag in a few different films. There’s something funny and unexpected about a random arm reaching into frame with an object. If he were to shoot this wider so that we see the person who the reaching arm belongs to it wouldn’t work as well comedically. 

It would just be a regular action. He also uses this technique of shooting objects entering frame in extreme close ups during his snappy montage edits to advance the story forward in time. In this way framing is used so that the action of the object reaching in is unseen, unexpected, unpredictable and amusing.   

Information can also be withheld in darker situations for a different effect. In No Country For Old Men, we cut from a conversation about the nature of choice and chance, and a woman's impending death, straight to a shot of the hitman leaving the house, checking his shoes for blood. 

There’s something bone chilling, ambiguous and deeply affecting about this scene that comes from brushing over the assumed act of violence that we don’t see and rather sitting in quiet with Chigurh. Not seeing what happens is a more contemplative way of communicating violence and the themes that the movie deals with. 

One of the genres that plays with off screen action the most is horror. I’ve used the analogy before of horror being like pulling off a plaster. It can be done with a quick yank, like a jump scare, or it can be pulled off slowly, with a gradual, prolonged pain.

By filming scenes from the protagonist’s point of view and delaying putting the threat on screen for as long as possible, the tension and terror is cranked up to ten. What we don’t see is far more psychologically terrifying than what we do.

FINANCIAL LIMITATIONS

Coming back to the example of Reservoir Dogs, another reason why filmmakers may want to skip over showing certain things on screen may be influenced by financial limitations.

Certain scenes are more expensive than others to shoot. It’s pricey to effectively shoot action, car chases, crowds, or most kinds of traditional set pieces - whether that’s a massive car pile up, like in Red Rocket, or a bank heist, like in Reservoir Dogs. Excluding set pieces that need access to specific locations and contain lots of action is sometimes logistically necessary.

In these cases it’s either on the screenwriter or on the director to find creative ways to communicate that these moments in the story took place while not being able to show it. 

This might be through presenting it off screen with sound from a news broadcast discussing the aftermath of the event in the background or by skipping the event in the story timeline and making how it is discussed by characters the source of the drama and suspense.


PERSPECTIVE 

Finally, a movie like Rashomon uses what we’ve discussed as the entire premise of the film. 

The movie is built around a narrator who conveys four different accounts of the same incident. Each is presented by a different witness, with different details, to question the nature of truth and how it is reported. 

The entire premise of Rashomon relies on not showing the actual events as they unfold from an objective point of view. Eyewitnesses account for what is not seen to comment on how everyone has their own bias and subjective view of how life unfolds.

What we see and what we don’t see in a movie is defined by the filmmaker. By not showing certain information and by showing other information they are creating a perspective on how the story is told. 

What is shown and what is not are inextricably linked. Both are conscious choices. So the next time you’re writing, shooting, editing or watching, remember that what you don’t see is at least as important as what you do.


Cinematography Style: Linus Sandgren

INTRODUCTION

Being a cinematographer is a collaborative process. They need to work with technical crew, the director and even indirectly with the screenwriter - due to how they visually interpret the script. 

Linus Sandgren is a DP who focuses on these collaborations, especially with the director, to find a mutually agreed upon look for each movie that takes the tonal and thematic aspects of the screenplay into account.

In this episode I’ll dive into his philosophical approach to shooting movies and then take a look at some examples of the gear he uses to execute his photography.   

PHILOSOPHY

Creating images in a crafted way isn’t just a matter of arriving at a location, pointing a fancy camera at the action and hoping for the best. Some of the most important and influential work that cinematographers do actually happens before then. 

In initial meetings before shooting they will be presented with a script or with a concept for the shoot. In order to execute the shoot they first need to come up with ideas for a look or visual style for the film. There are two facets to this: uncovering the tone and themes present in the screenplay, and incorporating the tone that the director envisions for the movie. 

“You need to have a great relationship with a director. I think, like, normally I try to figure out how the director works and what he likes and I like to try to adapt to that director’s style of working. My experience with directors is they also need us to have a partner to communicate and collaborate to create the visual storytelling.”

This doesn’t mean that Sandgren sees directors as having fixed styles. More often than not the tone and feeling that needs to be communicated visually will be determined by the content of the story. 

“It was sort of a 180 from La La Land to do First Man. His style of doing La La Land was very much whimsical, a musical. So, in First Man he wanted it to be very realistic and emotionally immersive.”

Just looking at the way the camera moves differently in these two films communicates two different feelings: one is smooth, theatrical, co-ordinated and flowing, while the other is rougher, more intimate and realistic. But, we’ll get more into how he practically executed those tones later on.

So, while the director’s style of working may influence the visual approach, so does the screenplay.

In pre-production Sandgren therefore asks himself two questions. One, how do you take a script and come up with a tone or themes that are pertinent to the story? Two, how do you then take those concepts and apply them practically to create the images?

On No Time to Die he chatted with director Cary Fukunaga during prep to come up with these answers.

“Another thing that Cary then discussed with me a lot was the sort of expectations that we have on a Bond film is that it should be entertaining, it should be should be a sort of joyride for the audience that we go through a film and feel like it’s sort of one, long, big adventure. Then we start to discuss, sort of, themes and keywords which to me is really important as a cinematographer. I feel like I always start with the most simple words between me and the director. Like, for example it could be loss, or grief, or death or things that matter for the story.”

With these keywords decided upon, he can map emotional tones or themes onto the screenplay or story timeline and then break down how he can achieve this effect visually in each scene.

For example, for the emotionally jarring opening scene it was important that the images were not beautiful, postcard-like snow landscapes. Instead, they needed to feel more ominous, eerie, dangerous and isolated, which he achieved by pushing the images towards a monochromatic blue rather than crisp whites and a beautiful sunny blue sky with warm backlight. This is a good example of how cinematography and a look can be used to subtly influence the emotional impact of a story.

He is also deliberate about how he hires crew. In the same way that the gear will impose its own visual feeling on a film, so too does the crew that you hire. Every person will set up a shot differently and come with their own technical opinions, creative suggestions and style of physically operating the camera.

“Sometimes I love to operate. So I put an operator off to not work on a film because I like to operate. Like on First Man I actually operated A Camera but on La La Land I had operators. It depends on how I want to tell the story and when I feel it’s important to be closer to the actual making of the image. And sometimes it’s better to have someone else do it.”

First Man used a lot of handheld work and immersive operation of the camera. Sometimes with handheld operation it’s easier to find the right movement to tell personal stories when the camera is on your own shoulder. 

GEAR

With an understanding of the kind of tone and feeling that he needs to impart in the images, he can now go about choosing the correct gear to execute that tone.

One aspect of this is choosing the camera and lenses for a project. For example, on Don’t Look Up, he pitched the idea that the look should be rooted in the political thriller genre, with the comedy coming from the script and the actors, rather than shooting it as a brighter comedy.

So, to ratchet up the tension of a thriller he used camera movement, mainly working off a dolly, and also a stylistic trademark of the director Adam McKay - zooms.

“When things sort of hit the fan and got even more suspenseful in dialogue scenes we could dolly to create tension with the dolly moves. But also we went a lot to longer zooms. For practical reasons we didn’t go to anamorphic zooms, we went for spherical zooms.”

Due to the extra glass needed in anamorphic lenses, they usually let in less light. Spherical zooms have a faster stop, which means they let in more light and can be used on darker lit sets. This is particularly true for the long focal lengths they were shooting at - such as 300mm.

He selected Fujinon Premier Cabrio zooms like the 25-300mm, which has a stop of T/3.85 at the long end of the zoom.

Since they were shooting on film which has a slower EI than the native EI of most digital cameras, they needed faster lenses to be able to shoot in low light scenarios.

Sandgren also rented an Aaton Penelope which he always had on standby with a 40mm spherical Macro Kowa lens. He used this setup for moments where the characters felt really nervous or tense, pushing in to within inches of their eyes to visually communicate the internal, existential terror they were going through in that moment.

While he has used the digital Alexa on some of his commercial work, he likes using 35mm film on features when presented with the option. 

Some other 35mm film cameras that he’s used include: the Arriflex 235, the 435, the Arricam LT, and Panaflex XL2 from Panavision. 

These have been paired with lenses like: Hawk V-Lite 1.3x Anamorphics, Canon K35s, Atlas Orion Anamorphics, the Camtec Modded Vintage version of the Zeiss Ultra Primes, and anamorphic glass from Panavision such as the C, E and G-Series. He alternates between anamorphic and spherical lenses for both aesthetic as well as practical reasons.

He has used a variety of film stocks, including Fujifilm Eterna Vivid stocks before they were discontinued, but now selects from the Kodak motion picture stocks which are still available for use. 

He doesn’t stick to a single stock and often chooses a range of stocks for different scenarios that include: fine grain daylight stocks for bright exteriors, like 50D, more sensitive daylight stocks, like 250D, and high sensitivity tungsten stocks, like 500T, for interiors and night scenes. 

His preference for film is also due to the different gauges or formats that it comes in. He can select different formats to communicate different tones or differentiate time periods or locations in a movie.

For example, on First Man he used lower gauge, higher grain 16mm film for scenes inside the spacecraft to create a more realistic documentary-like 60s and 70s feel that was more raw and authentic. He then switched to 35mm, with a more normal grain and increased contrast, when the story moved to the NASA industrial world and for scenes inside his home. To create an extreme contrast between the spaceship interiors and the more surrealistic moments of actually being on the moon, he turned to the super crisp, large format 70mm Imax. 

Constructing a realistic feeling sun required a strong, single light source. He got Attitude Specialty Lighting, who make 100K softsuns, to construct two custom, high output 200K softsuns to provide a large area source that mimicked hard sunlight.

He often uses strong single directional sources of light, such as HMIs or tungsten units that he’ll gel to get to a daylight colour temperature. He then fills in little pockets of light and creates colour contrast in interiors by using practical sources built in, or placed on the set, that you can see in the shot.

But, his approach to lighting and the gear involved changes depending on each story. This could mean using a wall of LED screens with projected NASA archival footage to create in-camera space backgrounds with more realistic light and reflections. 

Or he may go in the opposite direction of realism, into a world where the lighting is determined by emotion and classical Hollywood tropes. This was done in La La Land by setting up an overhead blanket light at 4000K, hitting the walls with colourful practical fixtures and then rigging a tungsten spotlight to a dimmer which he could bring up or dim down to isolate characters in a nostalgic moment of focus.  

These evoke different tones: one concerned with realism, the other with visualising abstract emotion.

CONCLUSION

These examples go to show that Sandgren’s style cannot be limited to a singular look, technique or gear preference. Each project requires a different feeling and each director he works with has different sensibilities and interpretations of the script. 

It’s his job to find a collaborative balance where the tone and themes in each scene are established, and then use that as a jumping off point to create visuals which represent a feeling.


Frame Rate In Cinematography Explained

INTRODUCTION

Most of cinema unfolds in real time to replicate a version of reality. But sometimes reality isn't the best way to tell stories. What if we wanted to suspend two characters in time while the world around them flies by, draw out an iconic, awe-inspiring moment of action, or invoke an anxiety-inducing, choppy rapidity?

In filmmaking we can use the tool of frame rate, frames per second, or FPS, to control how motion is captured and presented on the screen. 

So, let’s unpack this further by first explaining what frame rate is and how it is practically done with a camera and editing software, look at some common frame rates that are standardised in different places around the world, and with that knowledge, take a look at some creative ways that frame rate can be used to tell a story.  

WHAT IS FRAME RATE?

When you shoot a digital video clip it may appear as one single entity, but to understand frame rate it’s important to realise that one clip is actually a representation of loads of individual pictures, or frames.

These pictures give the illusion of motion - hence motion picture. It works the same way as leafing through still images in a flipbook. If you do it fast enough it’ll look like the little picture is moving.

Before digital, a film cinema camera was essentially a big box that took individual photos very quickly, at rates such as 24 pictures in a single second. Instead of a page with a picture on it, each frame was captured as a single image onto a roll of film.

When people talk about frame rate they usually only talk about one thing: how many frames per second a camera is able to capture. However this is only half of it. To understand frame rate properly you need to know that motion can actually be manipulated in two ways. 

One, the capture frame rate refers to how many individual pictures or frames a camera can capture in one second.

Two, the project or playback frame rate, which is set in editing software, determines at what speed the captured frames are played back.

Changing these two variables in different ways can create normal motion, slow motion and fast motion. 

Let’s say a camera’s capture frame rate is set to 24 frames per second. A clip is shot and brought into editing software, where the playback frame rate is also set to 24 frames per second. This results in on-screen motion which appears normal to the human eye.

This time, the camera is set to 48 frames per second, while the project playback frame rate stays at 24 frames per second. When you hit the spacebar and the clip is played, because the capture frame rate is double the project frame rate, the motion will appear half as fast as normal - also called slow motion.

Finally, let’s set the camera to capture action at 12 frames per second while maintaining a 24 frames per second playback frame rate. This time, the capture frame rate is half the project frame rate, and the motion will play back twice as fast as normal. This is fast motion.
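
The relationship in the three scenarios above boils down to a single ratio, which can be sketched in a few lines of Python - a hypothetical helper for illustration, not any real camera or editing API:

```python
def perceived_speed(capture_fps: float, playback_fps: float) -> float:
    """How fast on-screen motion appears relative to real life.

    1.0 = normal motion, below 1.0 = slow motion, above 1.0 = fast motion.
    """
    return playback_fps / capture_fps

# The three scenarios described above, all on a 24fps playback base:
print(perceived_speed(24, 24))  # 1.0 -> normal motion
print(perceived_speed(48, 24))  # 0.5 -> half speed, i.e. slow motion
print(perceived_speed(12, 24))  # 2.0 -> double speed, i.e. fast motion
```

The same ratio explains higher frame rate bases too: capturing at 96fps for a 48fps playback base gives the same 0.5 factor as 48fps capture on a 24fps base.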

Usually, 24 frames per second is set as the base frame rate for a project, and then the capture frame rate is manipulated to change how we perceive motion, but not always. 

Different frame rate bases also exist. So, let’s dive into what some of the most common of these base frame rates are.

COMMON FRAME RATES

There are two common base frame rates that are used for the majority of film and television productions. 

The first, 24p, is based on the standard film exposure rate of 24 frames per second, which delivers a look that the human eye has become accustomed to. 23.976 is also commonly used as a progressive scan format based on NTSC television. Regions where NTSC is used include North America, parts of South and Central America, and Japan, so productions shot for these countries usually use a 24 or 23.976 frame rate as a project base.

The other common frame rate is 25p, which is used for PAL television. PAL regions cover most of the rest of the world, where 25 frames per second are used as the project frame rate. Although these two bases are most common, there are other alternatives, mainly for broadcast TV production, like 50 or 60p.

There have even been odd experiments like the Hobbit series of movies, which were shot and displayed at 48FPS - and which, contrary to Peter Jackson’s thinking, got quite a bit of pushback.

Projecting at this higher frame rate means there is less of the standard motion blur from movement that we as an audience have become used to. Everything is more crisp and slick, which in my mind makes productions shot at higher frame rates feel artificial, because our eyes are used to perceiving movement in a more natural way, with a kind of motion blur.

For these higher frame rate bases remember that we also need to change our capture rate to compensate. So for normal motion on the Hobbit, the camera shot at 48 frames per second. 

To get motion that is twice as slow as real life they therefore had to boost the capture frame rate to double the playback frame rate - 96 frames per second.

While those are some of the most common frame rate bases, let’s go over some popular capture frame rates. As I have mentioned, to get slow motion a common approach is to double the base rate: 48FPS for NTSC and 50FPS for PAL.

Then there are also frame rates such as 33FPS, which sits somewhere between standard slow motion and regular speed and gives the feeling of a subtle suspension of time.

Once we get into the higher frame rates they are usually rounded up to numbers such as 100, 120 or 200. 

Then you get extreme slow motion for specialist shots where time almost feels like it’s standing still. This is usually done on a Phantom camera, at frame rates such as 1,000.  

CREATIVE USES

Let’s take a look at three examples where frame rate is used in a creative way. You can’t really talk about frame rate and motion without bringing up the work of Wong Kar-Wai. 

One technique that appears in many of his films is the use of step printing. 

To get this effect with a film camera, they shot at a low frame rate - in this case eight frames per second. Instead of printing each frame once and getting a sped up effect, he printed each frame three times to fill in the gaps and arrive at 24 frames in a second.

Instead of getting a new picture in every frame of playback, we get three of the same frames, then a new image for three frames, and this pattern continues.
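
That repeating pattern can be illustrated with a tiny Python sketch - purely illustrative, with made-up frame labels:

```python
def step_print(frames, repeat=3):
    """Repeat each captured frame `repeat` times for playback.

    Mimics step printing: e.g. 8 captured frames, each printed 3 times,
    fill a 24-frame playback second.
    """
    return [copy for frame in frames for copy in [frame] * repeat]

captured = ["A", "B", "C"]  # three frames shot at the low capture rate
print(step_print(captured))
# ['A', 'A', 'A', 'B', 'B', 'B', 'C', 'C', 'C'] - 9 playback frames
```

Played back at 24FPS, a second of footage captured at 8FPS and step printed this way covers one second of real time, but the image only refreshes 8 times - hence the signature stutter.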

In different contexts it may have different effects, but Wong and his DP Christopher Doyle often used this trick for action scenes with lots of movement. The stutter and motion blur it gives adds a freneticism and wildness to the scenes.

Another creative way to capture action sequences is with slow motion. On 300 this was done for many sequences to a melodramatic extent. Director Zack Snyder earmarked specific shots that he wanted his DP Larry Fong to capture in slow mo. This was mainly done at varying frame rates up to about 100FPS on the Arri 435ES, and at really high frame rates up to 360FPS on the PhotoSonics ER.

Using slow motion in this way for action sequences suspends a moment in time and allows the audience to fully appreciate and be awed by the movements of the character that at real speed would be over in a flash.

This slowed down speed also feeds into the focus that the characters themselves have in that moment and shows how keenly they are perceiving the fight unfold. In this case it also further removes the story from the world of realism and places it in more of a comic book like world where the laws of time and gravity are broken. 

Finally, coming back to the king of frame rates, let's take a look at this shot from Chungking Express. There are a few things going on here.

First, because people in the foreground are rapidly rushing by in a blur, we can establish that this was shot at a low frame rate, such as eight frames per second and projected at a base of 24 frames - not step printed like before. This speeds up the motion. 

Second, the two main characters are moving as if in slow motion. This was simply done by getting them to move extremely slowly in real life. So, even though the film is sped up, the characters remain suspended in time. To me, this has the psychological effect of isolating those two characters and showing how detached they are from the rest of the world. They are so alone that time operates differently on them and leaves everyone else a faceless blur. 

It also elevates the relational and romantic bond between them - as if no one else exists in the world.  
