Gray Kotze

How Iñárritu Shoots A Film At 3 Budget Levels

In this video I’ll break down how Alejandro González Iñárritu directed his low budget debut feature Amores Perros, the mid budget Birdman, and the blockbuster-level The Revenant.

INTRODUCTION

Gritty, textural, real and raw is how I’d describe the look of Alejandro González Iñárritu’s highly acclaimed movies. From the breakneck success of his first independently budgeted film all the way up to high budget blockbusters, his movies are the product of a clear directorial voice that goes after telling difficult, risky stories - stories that at times interweave non-linear narratives and express each character’s psychological strife.

In this video I’ll break down how Iñárritu created his low budget debut feature Amores Perros, the mid budget Birdman, and the blockbuster-level The Revenant.


AMORES PERROS - $2 MILLION

Iñárritu’s success as a feature film director didn’t happen overnight. He began his career in a different field of entertainment, radio, which progressed into a producing job in TV, which in turn led to him founding his own production company, Zeta Films, where he produced commercials, short films and even a TV pilot.

Since Amores Perros was an incredibly ambitious, logistically and structurally challenging first feature, it helped that he carried some directing experience into it, along with established relationships with his crew.

“Most of the people that worked with me on this film, almost all of the head of departments from Rodrigo Prieto to Brigitte Broch the production designer, all this team, we had been working together for many years doing commercials - I have a production company. 

So, in a way, that complexity, it was a language that we had already established between us. So it was my first film but definitely was not my first time on a set.” - Alejandro González Iñárritu

The story, which he worked on with screenwriter Guillermo Arriaga, was constructed from three subplots - each featuring human characters’ different relationships with dogs - which were interwoven and connected by a common plot point.

This push to make such a tightly packed and difficult first feature on a relatively low budget was partly due to the difficulties involved in producing a movie in Mexico at the time.

“Your first film normally was at the mercy of the government and then you just show it to your friends. Because there was no money, nobody wanted to see any Mexican film at that time. 

There was an anxiety that runs that it was your only real opportunity to say something and to express yourself. So, I think it has to do with: you want to include everything you wanted to say.” - Alejandro González Iñárritu

Both the story and the way it was told stylistically through the cinematography leaned into the extreme.

Most of the film was shot with a handheld camera and wide angle lenses - which, combined with Prieto’s dynamic camera operating, injected a gritty, raw realism into the story. This technique of moving with characters on wide angle prime lenses - from 14mm to 40mm - is something he would continue to use in his later movies.

Shot on a wide angle lens.

This wide, warped intensity used to capture the more intense and chaotic characters was flipped when photographing the outcast figure Chivo. They instead shot him on much more telephoto focal lengths, using an Angenieux 25-250mm HR zoom lens - which had the effect of further isolating him from his environment.

Shot on a telephoto lens.

Another large part of the extreme, raw look of the movie was created by how the 35mm film was developed in the lab. Skipping the step of bleaching the negative during processing - a technique called bleach bypass - retains silver in the emulsion and creates a desaturated, higher contrast look with exaggerated, more pronounced film grain.

In other words, most of the vibrancy in the colour gets sucked out, except for a few colours like red which remain abrupt and punchy; the highlights get brighter and are more prone to blowing out, while the shadows are more easily crushed to pure black with little detail.

Iñárritu has stated that this bleach bypass look was a way of emulating and exaggerating the look of Mexico City, which is quite polluted with particles in the air that make things feel hazy and grey. It also added more contrast to the city light, which could otherwise look quite flat.

Iñárritu’s bold, risky vision, which combined an extreme, raw narrative with an extreme, raw look, went down very well at festivals, where it won the prestigious Cannes Critics’ Week prize. It inspired him to create a trilogy of like-minded films, which he called his Death Trilogy, went on to have success with audiences - making back around 10 times its low budget - and in doing so jump started his career as a director.



BIRDMAN - $16.5 MILLION

“I shot that film in 19 days - less than four weeks - and it was crazy.” - Alejandro González Iñárritu

Birdman came about during a free window in his schedule, while he waited for the correct winter season to shoot his next, much higher budget movie - The Revenant. The film’s $16.5 million budget was financed as a co-production between Fox Searchlight, who initially got the script but had a cap on what they could spend, and New Regency, who were producing Iñárritu’s other film, The Revenant.

There were some resonances in the script between lead actor Michael Keaton’s own career and that of the lead character - who was well known for being cast as a superhero and struggled to be taken seriously as a, quote unquote, ‘prestigious’ theatre actor.

Iñárritu pitched a radical idea: that the entire film should happen in a single, long take - or, more accurately, appear to be a single take by combining various individual takes and disguising the cuts.

This long take concept was partially based on the idea of interconnecting various characters - like he’d also done in his prior work - and capturing the intensity and energy of backstage, without giving the audience cutting points or moments to breathe. 

Unlike Amores Perros - which creates an extreme intensity through quick cutting and a large number of camera angles - Birdman took the other extreme, creating intensity by keeping the camera always in motion and never cutting.

This stylistic decision was also one that was formed out of necessity.

“The reason I think I got into this different kind of approach or design of making films was because of need. Sometimes the restrictions and limitations are the best - at least creatively. I didn’t have enough money. I didn’t have enough time.” - Alejandro González Iñárritu

The availability of the ensemble cast and the construction of the stages meant that the shooting schedule was limited to only 19 days. Choreographed long takes are incredibly difficult to shoot, as they require perfection not only from the actors but also from the crew and camera operators. However, accomplishing, for example, a 10 minute long take can knock a considerable number of script pages off the schedule in a small amount of time.

Taking this approach meant that the final film had to be fully designed in pre-production, before shooting, rather than discovered or re-constructed in the edit.

The ‘editing’ happened up front: both in making changes to the script, getting it down to 103 pages from its initial 125, and in the months leading up to the shoot, when cinematographer Emmanuel Lubezki and Iñárritu worked out the blocking with stand-ins, a camera and a mock set in a warehouse which they mapped out with textiles and c-stands.

The film was shot on Arri Alexa cameras - mainly the compact Alexa M - in either a Steadicam build for smoother tracking shots or handheld, operated by Lubezki. Like Amores Perros it was filmed on wide angle lenses, a combination of Master Primes and Summilux-Cs - both very clean, sharp sets of prime lenses.

To keep a naturalistic feel to the lighting and to practically be able to shoot 360 degrees on sets, Lubezki designed it around only using practical sources that could be seen within the shot - whether that was overhead stage lighting, bulbs on makeup mirrors or overhead fluorescents in a corridor.

This meant that colour temperatures were mixed and at times cast monochromatic hues over the image - which may have gone a bit against traditional expectations of maintaining skin tones, but gave the images a more naturalistic and real feel that is present in Iñárritu’s movies.

The digital Alexa allowed him to roll for long takes and expose at a very sensitive 1,200 ASA with the lenses opened up to a T2 aperture. This allowed them to shoot in low light environments while also preserving a big dynamic range between highlights and shadows. It helped when shooting a take that went out into a non-locked-off Times Square at night, where they had no control over the lighting or how it was balanced.

Although Birdman was shot in a vastly different style, it maintained the feeling of raw realism, chaotic energy and gritty intensity that interconnects different characters in the story - just as he had done in his debut feature.


THE REVENANT - $135 MILLION

Following Birdman, Iñárritu leaped into shooting his next, much higher budget feature that he had been prepping for many years: The Revenant - an action filled revenge Western set in 1823.

“I prepared that film in 2011. And I started scouting and storyboarding. And I was very excited about the experience to allow myself to go to the nature. And then I realised that there is no that kind of romantic thing of losing yourself in nature. No. It’s a war. You’re at war with nature to make it work - what you need.”  - Alejandro González Iñárritu

Again, Iñárritu proposed a radical approach to making the film. He wanted to shoot all the extremely isolated, natural spaces entirely on location, rather than shooting in a studio with bluescreens and locations created by visual effects. He also wanted to shoot the film chronologically.

To envelop audiences in the world and push the realism as far as possible, his DP, Lubezki, also pushed to shoot almost entirely using natural light.

Not only did shooting it for real produce a visual style that is unmatched in realism, but placing the actors in the real environment and shooting chronologically put them through real, raw, intense conditions that probably accentuated the level of realism in the performances.

“The conditions were massive. The physical scenes that he went through were extremely precise. Actually dangerous. Because if you do a bad move, the choreography with these kinds of stunts with such a speed and camera movements that are so precise, you put yourself at risk.” - Alejandro González Iñárritu

These many stunt sequences were made even more challenging as, in a similar style to Birdman, Iñárritu decided to design many of these sequences as long takes. Throughout the production they used one of three methods of moving the camera: a Steadicam for smoother tracking shots, Lubezki operating a handheld camera, or using a Technocrane for moves at speed, over difficult terrain or for booming overhead camera moves.  

A technical challenge that emerged from shooting long takes in uncontrolled natural light was how to balance exposure - without it getting blown out or too dark - when moving from a dark area to a light area, or vice versa.

Lubezki’s DIT was tasked with pulling the iris. This is where a motor is attached to the aperture ring on a lens and controlled wirelessly from a handheld device, which can be turned to change the aperture during a shot - either opening up to make the image brighter, or stopping down to make it darker. This has to be done carefully and gradually so that the changes in exposure aren’t noticeable or distracting.
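
As a rough illustration of why the pull is planned gradually - this is just a toy sketch in Python, not any real lens control system’s software - an iris pull can be thought of as a series of small, equal steps in exposure stops rather than one jump:

```python
import math

def iris_ramp(start_t, end_t, steps):
    """Plan an iris pull as a list of T-stops. Interpolating in exposure
    stops (a log scale) rather than in raw T-numbers keeps every step an
    equal-looking change in brightness."""
    start_ev = 2 * math.log2(start_t)   # exposure stops relative to T1
    end_ev = 2 * math.log2(end_t)
    return [2 ** ((start_ev + (end_ev - start_ev) * i / (steps - 1)) / 2)
            for i in range(steps)]

# e.g. walking from shade into sunlight: close down 2 stops, from T2.8 to
# T5.6, across 8 marks that can be hit gradually over the move.
print([round(t, 2) for t in iris_ramp(2.8, 5.6, 8)])
```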

After initially choosing to shoot the day scenes on 35mm film, to maximise dynamic range, Lubezki decided to switch to a purely digital workflow - again shooting on the Alexa Mini as the A cam, the XT for Steadicam and crane, and the Alexa 65 for vistas or moments where they wanted the greater width and resolution of the large sensor. 

Again, they also used wide angle Master Primes and Summilux-Cs - more specifically the 14mm Master Prime and occasionally a 12mm or 16mm. When on the larger field of view Alexa 65 he would often use a 24mm Prime 65 lens.

Like his other films, The Revenant preserved an intense, raw, chaotic feeling of realism - this time at a budget that could afford one of the biggest stars in the world, a very long production schedule and more complicated visual effects shots, in very challenging, slow shooting environments, with many complex action and stunt sequences.


5 Reasons You Should Shoot With A Gimbal

Let’s look at five reasons why filmmakers use gimbals in both videography and on high end productions alike.

INTRODUCTION

Gimbals are often associated more with videography or prosumer camera gear than they are with big budget movies. However, this shouldn’t be the case. For years, this method for stabilising cameras and operating them in a handheld configuration has been used on many industry level shows, commercials and movies.

So let’s use this video to look at five reasons why filmmakers use gimbals in both videography and on high end productions alike.  

1. MOVEMENT

For a long time in the early days of cinema it wasn’t possible to shoot with a handheld camera that could move with actors and could be operated by a single person.

Instead, filmmakers that wanted to move these heavy cameras needed to do so on a dolly - a platform which could be slid along a track using wheels. This trained the audience's eyes for decades to accept this smooth tracking movement as the cinematic default.

To this day, this language of smooth, flowing, stable camera movement has persisted and is often sought after by directors and cinematographers. Gimbals are able to achieve a similar movement, without needing tracks and a dolly, by using sensors that detect when a camera is off kilter and correcting that by evening it out with motors in real time.

These motors can control three axes of movement, hence why these devices are also called 3-axis gimbals. They can adjust and even out the up and down motion, known as tilt, the side to side motion, known as pan, or rotational motion known as roll.

Different gimbals can be set to different modes to control the axes of movement that you want. For example you could limit the motion to a pan follow mode, where the motors stabilise and lock the tilt and roll axes and only react and follow when the operator pans the camera horizontally.

Or you could enable pan and tilt follow, where only the roll axis is locked so that as the operator moves the gimbal horizontally or vertically, the gimbal will follow along with the movement of the operator. Gimbals can therefore be quite reactive to the handheld motions the operator makes, so are a useful tool in situations that require floating, smooth moves that need to track the motion of an actor or moving object.
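
As a toy illustration of those modes - this is not any real gimbal’s firmware or API, just the idea expressed as data - each mode simply decides which axes the motors hold steady and which follow the operator:

```python
# Hypothetical summary of the follow modes described above.
GIMBAL_MODES = {
    "lock":              {"pan": "stabilised", "tilt": "stabilised", "roll": "stabilised"},
    "pan follow":        {"pan": "follows operator", "tilt": "stabilised", "roll": "stabilised"},
    "pan & tilt follow": {"pan": "follows operator", "tilt": "follows operator", "roll": "stabilised"},
}

for mode, axes in GIMBAL_MODES.items():
    print(f"{mode}: {axes}")
```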

Because they’re operated handheld, the kind of movement you get from a gimbal will have more of a floating, drifting, stability to it with small, meandering deviations in moves which is caused by the manual operation of it, compared to something like a dolly, which is super stable, heavy, and tethered to a specific line of track that creates more precise, cleaner moves. Certain filmmakers may want this drifting feeling of motion that is attainable from a gimbal.  

2. UNEVEN TERRAIN

One advantage that a gimbal has over alternative grip rigs that also produce smooth camera movement, is that they can be more easily set up and operated over uneven terrain or in remote locations.

While it is possible to lay tracks on uneven outdoor locations - by first building a wooden platform to use as a smooth, level base - it is miles easier to operate the camera handheld on a gimbal and use your feet to move over uneven surfaces.

If venturing into very remote locations, it also means that all the production will need to carry is a gimbal camera build, some batteries and maybe a box of lenses - compared to a massive truck and a full grips package, which may not be able to make it up to certain mountain locations.

Filmmakers may also want dynamic movement that squeezes through tight spaces where larger cinema grip rigs would otherwise not be able to fit - like through car doors or inside tight interiors. Or they may need the camera to move up or down a slope, which could also include something like stairs, which dollies can’t do since they need a stable, level platform to lay tracks on.

3. TIME & MONEY SAVER

On top of these advantages around moving the camera, gimbals are also a great tool for productions as they have the potential to save time and money. Paying for a single gimbal operator, or even having DPs operate the gimbals themselves and getting the first ACs to build and balance them, can provide a good saving on the grips budget. 

On some shoots, you may be able to get away with using a gimbal for stable motion rather than having a dolly sitting in the truck for smooth moves on stable surfaces and a Technocrane waiting to be set up for moves across uneven terrain.

On top of the gear costs, you also save on crew costs, as choosing to use a dolly or a Technocrane will come with the costs of hiring a larger grip team to set up and run the gear.

While these savings may be less important on some jobs like high end international commercials that have the money to pay for whatever tool is deemed necessary - another type of saving that a gimbal provides that may still be valuable is time.

It’s almost always easier to walk through shots, make adjustments to positioning and do a run through on a handheld gimbal - without needing to get a team to lay tracks, or spend lots of time between setups building and positioning grip rigs.

And on these enormous jobs where the most expensive thing on set may be paying for a celebrity performance or locking off a pricey location - time, as the saying goes, is money. So gimbals may save money not only on the lower rental cost and fewer crew requirements - but also by speeding up the time taken to set up each moving shot.     

At this point, some of you may be saying, “Why not just use a Steadicam? It’s a similar cost and gives you a similar feeling of motion.” Well, gimbals actually come with one party trick that Steadicams don’t.


4. REMOTE HEAD

So, what is this extra capability that a gimbal has and a Steadicam doesn’t? Most large production gimbals, like the Movi Pro or Ronin 2, have motors that control three axes of movement along with controls - in the form of a joystick or wheels - that allow operators to wirelessly control how the camera tilts, pans and rolls. This makes a gimbal effectively a remote head.

This means that it can double up: used in a handheld gimbal configuration for some shots, then rebuilt as a remote head and attached to rigs such as a crane, a car arm or a dolly.

So instead of hiring both a Steadicam to do tracking shots on the ground, and a Libra head that is attached to a crane for an aerial shot - productions can get away with using only one gimbal. 

When gimbals are rigged as remote heads, a signal is basically transmitted from the wheels moved by the operator to the gimbal. There are three wheels, each representing pan, tilt and roll.

So if an operator wants to pan the camera left to right they can roll the side wheel forward. This will then send an instantaneous signal from the wheels to the head - which will pan the camera without the operator needing to be physically near the head. 

Gimbals can also be used for ‘transition shots’ that change between two different builds or operating methods during a single take. For example, this shot slides the camera, in remote head mode, up a story on a wire rig while being controlled wirelessly by an operator. It then gets unclipped from the rig and grabbed by an operator, who proceeds to follow the action with it like a handheld gimbal.


5. FLEXIBILITY

The final reason that many filmmakers opt to use a gimbal is due to the greater flexibility that it provides. This is a matter of taste and feeds into a preference for how directors or DPs like to structure their filming.

Some like to be more traditional, formal and deliberate and move the camera from some form of a solid base like a dolly. This provides a clear move from A to B which can be repeated multiple times, is predictable and relies on actors perfectly hitting their marks by following a prescribed movement based on the pre-established blocking by the director.

However, some other filmmakers like to work in a different way that is more open to experimentation, improvisation and embracing little magical moments that may be discovered.

This may be appropriate for directors who like working with non-professional actors, or in semi-documentary shooting environments - where the ability to change shots on the fly is very important.

It may also be good for directors who like to shoot in long takes - where they can work with actors and give direction as they go, putting the actors in a fully dressed shooting environment and then chasing after them, finding the best angles and moments of performance through shooting. 

Having the freedom of a gimbal is great for these longer takes, as the camera can be transported great distances, up stairs, through gaps and over most surfaces as long as the operator's arms can hold it. It also eliminates the need to place tracks - which have a chance of being seen in the shot if shooting in an expansive 360 degree style where we see everything in the location. 

Gimbals are great for scenarios where actors aren’t given an exact mark or blocking and are instead encouraged to move around and find the shot. Because their position is not locked onto a track, the camera is free to roam and explore - getting all the benefits that come from operating a handheld camera, while at the same time preserving some of that traditional cinematic stability that we discussed. 

CONCLUSION

As we’ve seen gimbals offer value not only to lower budget, more improvisational, documentary based shooting, but also to larger productions who seek stabilised motion in unusual spaces, or with dynamic moves, with a rig that saves on set up time, can double up as a remote head and offers the ability to transition between different styles of operating the camera. 

As high quality cinema cameras continue to get smaller and smaller and are more easily able to fit and balance on gimbals, this rig will continue to soar in popularity and be an increasingly useful tool for stable, cinematic movement. 


The Crop Factor Myth Explained

Let’s go over a more detailed explanation on what ‘crop factor’ is, how it works and a misconception about it.

INTRODUCTION

There’s an idea in photography that cameras with different sized sensors have what we call ‘crop factors’. A large format Alexa 65 has a crop factor of 0.56x compared to a Super 35 camera. A 90mm lens multiplied by 0.56 is roughly 50mm. Therefore, many people say that using a 50mm lens on the large format camera is going to look exactly the same as using a 90mm lens on the Super 35 camera.

The truth is that this isn’t exactly 100% correct - for quite an important reason. So, let’s go over a more detailed explanation on what ‘crop factor’ is, how it works and the big misconception about it.   

WHAT IS CROP FACTOR?

As photochemical film photography emerged and cinema cameras were created, there was a push to create film with a standardised size - that could be used across different cameras from different manufacturers and be developed by different laboratories around the world. That film had a total width of approximately 35mm and therefore was called 35mm.

When digital cinema cameras started getting manufactured, they replaced film with photosensitive sensors that stuck to the approximate size of film’s 35mm 4-perf capture area.   

However, along the way some other more niche formats emerged: from smaller 16mm film that was a cheaper alternative, to large format 65mm which maximised the resolution and quality of movies at a higher cost, to tiny ⅔” video chips from early camcorders to smaller micro four thirds photography sensors.

The issue is that when you put the same lens on two cameras with different sensor sizes, they will produce different fields of view - one image will look wider and the other tighter.

So, for prospective camera buyers or renters to get a sense of the field of view each camera would have, many manufacturers started to publish what they called a ‘crop factor’ to determine this. 

This means you take your lens’ focal length - for example a 35mm lens - and multiply it by the crop factor of the camera - such as 2x - to arrive at a new focal length number, 70mm. This means that on this smaller sensor your 35mm lens will have approximately the same field of view or magnification as a 70mm lens on a Super35 sensor.   

Since Super 35 sensors are considered the standard size, this has a crop factor of 1x. Camera sensors larger than 35mm would have a crop factor of less than 1x and sensors that are smaller than 35mm would have a crop factor of more than 1x.
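
To make the arithmetic concrete, here is a minimal Python sketch of both calculations. The function names and the sensor widths (roughly 24.9mm for a Super 35 frame and half that for a 2x crop sensor) are illustrative assumptions rather than manufacturer figures:

```python
import math

def equivalent_focal_length(focal_mm, crop_factor):
    """The crop factor arithmetic described above: multiply the lens focal
    length by the camera's crop factor to get its Super 35 equivalent."""
    return focal_mm * crop_factor

def horizontal_fov_degrees(focal_mm, sensor_width_mm):
    """Horizontal angle of view for a simple rectilinear lens focused far
    away: 2 * atan(sensor width / (2 * focal length))."""
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_mm)))

# The example above: a 35mm lens on a sensor with a 2x crop factor.
print(equivalent_focal_length(35, 2.0))    # 70.0mm Super 35 equivalent

# Same 35mm lens on two sensors of different widths.
print(horizontal_fov_degrees(35, 24.9))    # ~39 degrees - wider
print(horizontal_fov_degrees(35, 12.5))    # ~20 degrees - tighter, 'cropped in'
```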

THE CROP FACTOR MYTH

So where does the myth part come in? Well, the issue is that many people interpret crop factors as saying that shooting with a 70mm lens on a Super 35 sensor is exactly the same as shooting with a 35mm lens on a smaller sensor with a 2x crop.

What’s important to note is that while the level of magnification of the image may be the same, there are still a bunch of other characteristics that lenses have that will make images different depending on what focal length is chosen. 

So what we should say is that a 70mm lens on a Super 35 sensor has approximately the same field of view as a 35mm lens on a smaller sensor. We shouldn’t say that a 70mm lens on one camera is exactly the same as a 35mm lens on the other in every way - as different focal lengths come with other secondary characteristics beyond just their field of view.

Rather than thinking of different sensors as magnifying or zooming out on what we see, it’s better to think about it in different terms. If you put the same lens on two different cameras - one with a larger sensor and one with a smaller sensor - the way that light enters the lens and creates an image will be the same.

The only difference is that the camera with the smaller sensor has less surface area to capture the image with. This makes it feel like the image is ‘cropped in’ in comparison to the larger sensor which can capture more of the surface area and therefore produce an image which feels wider.

Calculating a crop factor and then changing the lens on the camera to a more telephoto one may make the width of the images match, but it will also change the very nature of the image by altering the depth of field, the compression and the distortion.


THE EFFECTS OF FOCAL LENGTHS

The smaller the sensor is, the more cropped in the image will be and therefore the wider the focal lengths you will need to use. Whereas the larger the sensor is, the wider the shot will appear, which means cinematographers will often choose longer, more telephoto lenses.

One of the secondary effects from using longer focal lengths is that it will create a shallower depth of field. This means that the area that is in focus will be much narrower on a telephoto lens, which means the background will be softer with more bokeh. 

This is why movies shot on cameras with large format sensors bigger than Super 35, like the Alexa 65 - which cinematographers pair with longer focal length lenses - will have a much shallower depth of field, with soft, out of focus backgrounds.

It is a misconception that larger sensors create this effect. In fact, it is the longer focal length lenses that do this.
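
Here’s a rough numerical illustration of that point - a sketch only. It uses the approximate thin-lens depth of field formula and assumed circle-of-confusion values (0.025mm for Super 35 and half that for a 2x crop sensor), so the exact figures are indicative rather than definitive:

```python
def approx_total_dof_mm(focal_mm, f_number, subject_dist_mm, coc_mm):
    """Rough total depth of field using the thin-lens approximation
    2 * u^2 * N * c / f^2 - valid when the subject is much closer than
    the hyperfocal distance."""
    return 2 * subject_dist_mm ** 2 * f_number * coc_mm / focal_mm ** 2

# Matched field of view at the same stop, subject 3m away:
# a 70mm lens on Super 35 (CoC ~0.025mm) versus a 35mm lens on a
# 2x crop sensor (CoC ~0.0125mm).
print(approx_total_dof_mm(70, 2.8, 3000, 0.025))    # ~257mm in focus
print(approx_total_dof_mm(35, 2.8, 3000, 0.0125))   # ~514mm - roughly twice as deep
```

In other words, at a matched field of view and f-stop, the longer lens on the larger format holds roughly half the depth in focus - which is the shallower look being described.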

Another effect that focal lengths have is on how compressed the image is. Wider focal lengths expand the background and make objects behind characters appear further away.

Telephoto lenses compress the background and have the effect of bringing different planes closer to the character.  

For this reason, cameras with smaller sensors that need to use wide lenses, may produce images that appear a bit ‘flatter’ without much depth, especially in wide shots. While large format cameras, with their longer lenses, compress the background to create a bit more of a layered perception of dimensionality.

Wider lenses also have a tendency to distort the image more. So, shooting a close up of an actor on a Super 35 camera with a wider focal length will expand their face and make their features unnaturally larger, while using a longer focal length on a large format camera with the same equivalent field of view will compress the faces of actors a bit more which many say is a bit more flattering. 


CROP FACTORS OF DIFFERENT SENSORS

Although modern digital cinema camera sensors come in many shapes and sizes, in general they conform to a few approximate dimensions. 

Some cameras come with the option to shoot a very small section of the sensor that is equivalent to 16mm film. This has an approximate crop factor of 2x compared to Super35. 

This little format will usually be paired with wider lenses designed for 16mm - such as the Ultra 16 Primes, which range from 6mm up to 50mm focal lengths and, with the crop factor applied, give fields of view equivalent to around 12mm-100mm on Super 35. As we discussed, that 6mm will produce an image with extremely limited bokeh and a deep depth of field that feels quite dimensionally flat.

Next we have Super35 sensors, which are usually considered standard, such as we find on an Alexa 35 or Red Helium. Each manufacturer produces sensors with subtly different dimensions - but most will be the approximate size of 4-perf 35mm film and produce the standardised field of view, where an 18-24mm focal length feels wide, a 35-50mm lens is about a medium, and anything longer, at around 85mm, starts to have a compressed, telephoto feel.

Anything bigger than Super35 size is usually considered to be ‘large format’. This includes ‘full frame’ sensors modelled on still cameras that are approximately 36x24mm. Some examples are the Arri Alexa Mini LF, the Sony Venice 6K or the Sony FX9.

These cameras will have a crop factor of somewhere around 0.67x, which bumps a wider perspective up to around 32mm, a medium feel to around 65mm and a telephoto lens to about 110mm.

65 format cameras like the Alexa 65 push this even further with their approximate 0.56x crop factor, which makes a 45mm lens a wide, a 90mm lens a medium and a 150mm a telephoto. As we discussed, shooting a wide field of view with a 45mm will produce much more compression, bokeh and dimensionality than using a 12mm lens on a 16mm camera - even though they’ll produce a similar field of view.
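
As a quick sketch of those conversions - using the approximate crop factors quoted above, so the outputs land in the same ballpark as, but won’t exactly match, the rounded numbers in the text - you can work out what native focal length on each format gives roughly the same feel as a given Super 35 lens:

```python
# Approximate crop factors quoted above, relative to Super 35 = 1x.
CROP_FACTORS = {"16mm window": 2.0, "Super 35": 1.0,
                "Full frame": 0.67, "65 format": 0.56}

# native focal length = Super 35 focal length / crop factor
for s35_focal, feel in [(24, "wide"), (50, "medium"), (85, "telephoto")]:
    for fmt, cf in CROP_FACTORS.items():
        print(f"{fmt}: ~{s35_focal / cf:.0f}mm feels {feel}")
```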

It’s important to note that these crop factor numbers are all relative to whichever sensor size is considered the ‘standard’. For example, in still photography a full frame sensor is usually considered normal, with a 1x crop factor, which means smaller APS-C sensors - which are roughly close to Super 35 - have around a 1.5x or 1.6x crop factor.

What is much more important than getting super technical about these crop factor numbers is understanding how larger or smaller sensor sizes affect the field of view and understanding all the secondary effects that using different focal lengths will have on the image.  


Why The Book Is Often Better Than The Movie

What are some of the reasons that make books difficult to adapt into movies?

WHY ADAPT MOVIES?

Cinema has a long history of transforming literary works into movies. This makes sense for a few reasons.

Firstly, a large proportion of audiences will already be aware of the story and characters. Therefore, it’s easier to market the movies and get the existing fanbase into seats without needing to sell a completely new concept, world or story to audiences with promotional materials. 

Secondly, some of the best, most inventive and iconic stories which build their own worlds were written as novels. So, there is great subject matter to choose from. However, adapting an existing story to the screen also comes with some baggage.


DIFFERENT IMAGINATIONS

 When reading novels, your brain uses the descriptors written by the author and your imagination fills in the image - kind of like how AI can create images from prompts.

Depending on what AI you use, or what prompt you give, you’ll get different variations of images and different interpretations. The same is true of humans.  Different directors and audience members will interpret texts differently, not only visually, but also thematically.

This is especially true in texts where the descriptions are a bit vague. Here’s an example. When I say Gollum from The Lord of the Rings, anyone who has seen the movies will immediately think the character looks like this.

However, illustrator Tove Jansson imagined and represented Gollum like this, based on Tolkien’s description of a ‘slimy creature’, ‘as dark as darkness’ with ‘big, round pale eyes in his thin face’.

This disparity in how the character was imagined made Tolkien add an extra adjective ‘small’ to the description in later editions. The point is, different people will imagine things differently.

This applies to the landscapes the stories take place against, how the characters look, what actors are cast, or how key props or objects are rendered. If these representations go against the mainstream audience imagination they may not be well received.

If directors manage to get past this first hurdle and present a visual world that is palatable to the majority of the audience and aligns with the mainstream imagination, they are faced with another hurdle.


EXPRESSING INTERNAL THOUGHTS

How do you express the internal monologues, omniscient narration and emotions of characters - something that is easily done in the literary form?

One technique that filmmakers have is to use voice over from either the character themself, or voice over from a narrator. However, in many contexts this technique can quickly get overused and disrupt the flow of the movie. Whereas novels can break down thoughts, emotions, and internal explanations at any point they wish, easily, through text.

Other methods that have been used to provide information and context to audiences include: dream sequences, flashbacks and one character telling another a story or explaining something on screen.  

These attempts to express internal thoughts about the plot in the form of dialogue can often come out as clunky exposition, which is another reason why translating the thoughts of characters in books to the screen is a challenge.

Good adaptations focus on the characters and allow the story to be told through their actions, rather than lining up plot points and then manipulating the characters to get to them.

SHOW DON’T TELL    

Two good rules to overcome exposition are: show rather than tell and delay giving expositional information for as long as possible. 

This is usually reliant on great performances from actors who can project their internal emotions and thoughts externally. Likewise, the language of the camera can also be used to express information. 

Take this scene from No Country For Old Men - which conveys a huge amount of information without any dialogue. It’s shot from the perspective of Chigurh, so we’re seeing things unfold at the same time he is - delaying revealing expositional information.

He opens the door with a cattle gun. There’s lots of information here: firstly, he’s clearly in a rural, farming area that would have such a tool; secondly, he can clearly adapt to his surroundings using whatever he finds; thirdly, he makes a noise and is confident enough in his violent ability that he doesn’t seem to care about people noticing him.

He finds unopened letters - we know that whoever he’s looking for has been gone a while. There’s an unmade bed, hangers from hastily packed clothes and an open window - we know the person has left in a hurry. He grabs some milk from the fridge and drinks it. The milk is still good so whoever he’s after couldn’t have left more than a couple days ago. 

The camera pushes in - getting inside his head. His thoughtful, slow, calculating, psychopathically calm behaviour as an intruder is deeply unsettling.

Then after this mountain of information has been revealed entirely visually - it’s confirmed through a later dialogue exchange with a woman.

This is how good adaptations of books reveal information - by leaning on cinema’s visual tools, and controlling the flow of information to the audience, rather than by overusing expositional dialogue. 



STORY STRUCTURE

Another structural difference between books and movies is their length and how they are designed to be consumed.

Novels are by their very nature intended to be read over an extended period of time, in different sittings. Authors can delve into extreme detail like describing the world, adding backstory, getting inside the heads of characters, and they can elongate plots.

Movies are designed to be consumed in a single one and a half to three hour sitting. This means the plot from adapted works often needs to get enormously condensed, simplified, restructured or reinvented to make sense within the more limited time frame.

This can be at odds with what fans want - who are used to the greater plot nuances and depth in the original work.

A solution to this has been to create a series of instalments - breaking the movies into multiple parts. This hasn’t always been successful.

Pacing an adaptation through the writing and editing needs to strike a balance between doing justice to the original story and plot, re-writing or removing excessive side storylines, and not overstretching the existing material.

The way that Peter Jackson ends The Return Of The King is a good example. After the ring is destroyed in the film’s climax and the characters return to the Shire, Jackson cuts out the entire ‘Scouring of the Shire’ storyline from the book - where the hobbits retake the Shire through another battle to end Saruman’s rule.

Adding this would have both extended the movie’s run time too much and gone against the classic three act structure of movies by introducing a second, inferior climax after the true climax of destroying the ring.

      

CREATING TONE

One of the most challenging parts of adapting an existing work to the screen is finding the correct tone that pays homage to the story’s intention: whether that’s creating a feeling of wonder, an uneasy suspense, or action.  

A number of filmmaking tools can be used to achieve this feeling: from the score to the set design to the lighting. An example of visually creating different tones can be seen in how cinematographer Andrew Lesnie, Peter Jackson and the rest of the crew created a unique look for each ‘realm’ or location - which also expressed an emotional tone.

The Shire is green, lush and characters are backlit with golden sunlight that is comforting, homely and natural.

Bree needed to feel a bit more aggressive with a sense of foreboding. So they pushed a yellow-green tint in the grade that made skin tones a bit more sickly and lit it with hard light sources with jagged shadows.

For the magical safe haven of Rivendell they pushed a comforting, autumnal warm look in post production, lit scenes with more diffused, softer lighting and less intense shadows, and introduced digital diffusion into the image that created a blooming, smudgy halation effect in the highlights - similar to what you’d get from a strong Pro-Mist diffusion filter.

This again contrasted heavily with the scenes in Mordor, which sucked all life and vibrancy from an almost monochromatically neutral palette, lit by constantly gloomy, cloudy light.

Each region carried its unique emotional tone not only through the visuals but also through the music.

CONCLUSION

Adapting fiction to the screen is beset by challenges: from bringing imagined imagery to life, to expressing the internal thoughts of characters, to restructuring and shortening the storyline, to creating an appropriate tone that aligns with the original source material.

Truly doing stories justice requires directors to have a clear vision, which they refine and structure with careful pre-production planning, unhindered by ulterior financial motives, which is then supported and executed by a superb cast and technical crew.


The 2 Ways To Shoot Car Scenes

There are two main ways of pulling off driving shots: with a process trailer, or with a poor man’s process trailer. Let’s break down how these two techniques work, the gear involved, and some reasons why filmmakers may choose one method over the other in different situations.

INTRODUCTION

A general rule in cinematography is the more variables a scene in a script has, the more difficult it is to film. 

Car scenes come with a lot of moving parts…literally. This presents some challenges. However, since over the years countless scenes have been written, set and filmed inside of moving cars, some standardised cinematographic methods have emerged to handle these situations.

In fact there are two main ways of pulling off these shots: one method is done practically with real locations and a rig called a process trailer, and the other way is achieved through a bit of filmmaking trickery and is called a ‘poor man’s’ process trailer.

So, let’s break down how these two techniques work, the gear involved, and some reasons why filmmakers may choose one method over the other in different situations.



PROCESS TRAILER

To be able to cut together a dialogue scene filmmakers need to shoot multiple angles of the scene being performed, multiple times. Therefore, it’s important that there is a high level of consistency among all of the different takes, so that when shots filmed at different times are placed next to each other there is an illusion that the scene is continuous and unfolding in real time.

This is why cars present a bit of a snag. Consistency over a long shooting period can be difficult when traffic is unpredictable, the background outside the window changes, the driver alters how fast or slow they are accelerating, and the lighting conditions morph as they drive past different areas that may cause shadows or different angles of light. 

Also, asking an actor to drive and perform dialogue at the same time can be a bit too much multitasking and diminish the performance, or even be dangerous as their attention to their driving will be compromised.

For this reason, car dialogue scenes shot while driving on roads are almost always done with a rig called a process trailer. Sometimes also called a low loader, this is a trailer with wheels and a platform big enough to fit a car on - which the actors sit inside - that can be towed by another vehicle. The car that is shown on camera is referred to as the ‘picture vehicle’.

Process trailers need to sit very low to the ground so as to give the illusion that the picture car is driving. If it is too raised then the perspective will be off.      

Most low loaders are designed with a front cab section that the driver sits in and a rear section behind the cab with mounting points for lights, a director’s monitor, space for essential crew to sit, gear to get stored, and generators to be mounted that can run power.

This front section then tows the process trailer which the picture car sits on. 

The actors sit inside the picture vehicle and the camera operator is placed on the trailer, usually outside the vehicle, and films the scene as the actors play out each take and pretend to drive. Meanwhile the actual driving is done from the front cab by a professional low loader driver.

Any camera operators, focus pullers or other essential crew that need to be on the trailer section have to be harnessed in for safety and contained by a barrier of mounted poles that grips build.

The route that will be driven by the low loader will be carefully planned ahead of production and will almost always involve getting permission and paying for permits from the local government. They will often insist that a police escort is used to drive in front of or near the process trailer and may even require that some roads need to be closed or blocked off during shooting for the safety of the public. 

To provide a consistent background and limit blocking off roads to a small area, the driving route will usually either be looped, or it will be a route that has a turning point at the end of it - which can be driven each time for multiple takes.

This turning point will have to be scouted in advance by the driver to ensure there is enough space to perform a u-turn with the elongated, low clearance vehicle.

There are a number of different shot options that can be used on a process trailer.

Often, cinematographers will shoot from a stabilised platform like a tripod or car mount which the grips can secure on the process trailer, outside the picture car. 

Common angles are shooting a two shot front on, through the windscreen, then punching in for close ups on each actor through their respective side windows.

The camera could also be rigged or shot handheld from inside the picture vehicle.

I’ve also witnessed some DPs who like to operate a handheld camera on an Easyrig, and position themselves just outside the open side windows - especially for car commercials.

If shooting through windows DPs will almost always use a rota pola filter. As the name suggests this polariser glass can be rotated by turning a wheel, to position the filter so that the polariser effect minimises the reflections from the glass. This allows the camera to see the actors inside the car without being blocked by reflections.

Some complex camera moves beyond locked off frames can also be achieved, such as this one - done by shooting off a Scorpio 10, a small telescoping crane arm, that is rigged to the process trailer and moved by a grip. It has a mini Libra head attached to it, which means the camera can be tilted, panned or rolled remotely by an operator using wheels.

In this case they achieved these tricky moves by removing the car’s side doors and shooting through the passenger side of the vehicle. They shot all the moves practically then later inserted the window’s glass and reflections with VFX.     

Cinematographers will often get their team to rig a fill light on the process trailer. Usually this is quite a soft light with some spread, such as Skypanels or an HMI with diffusion. This will lift the level of ambient light inside the vehicle which will be much darker than the bright levels of natural ambience outside, yet be soft and not too directional so as to mask that the shot is being lit at all. 

It’s also possible to use a car mount, where the camera is attached directly onto the car by grips through using a mounting mechanism that is usually secured through poles with suction cups that stick onto the car’s body, or with a rig called a hostess tray.

In this case, some actors may be able to do their own stunt driving without a process trailer for scenes without dialogue. It may also be possible to use a professional driver that stands in as a double for the actor, which requires shooting at an angle that doesn’t reveal too much of the driver’s identity. 

POOR MAN’S PROCESS TRAILER

As nice as it is to shoot driving scenes for real by using a process trailer, this method does come with a number of disadvantages. For this reason, filmmakers also came up with a second, artificial method for capturing these shots - which is referred to as the ‘poor man’s process trailer’.

There are a few different methods of doing this, but basically it involves placing the stationary car in a low light environment, like a studio, then using lighting, giving the car a shake, and creating a simulated background to give the illusion that the actors are in a moving car. 

There are four main ways that this can be done: with projection, with lighting, with a green screen, or with an LED volume - giant, bright walls of LED video panels.

Although there are different nuances to each of these methods, they are set up in a similar way. First, a background is placed behind the area where the shot will take place. So if the shot is a close up of a driver, then a projection surface will be set up behind the picture vehicle in the same directional line that the camera is pointing. 

A projector will then hit that surface with a pre-recorded clip shot out of a moving vehicle which plays out for at least the length of a full take. It’s, of course, important that the video loop in the background is shot at the same angle as the shot which you line up and is moving in the right direction.

The alternative to this is to replace the projection surface with an illuminated green or blue screen. Then in post production key the green and replace it with the video clip of the moving background. 

A higher budget version of these two methods is to use an LED volume - essentially gigantic LED video panels that output a much brighter level of light. Again, clips can be played back on these video walls, which can usually be controlled by software that can also do things like defocus and shift the perspective of the video.

Because of the increased levels of light output from these panels, you also get more realistic production of light, so any highlights in the video clip will produce brighter areas or reflections in the lighting, and any changes in colour will change the colour of the light that hits the subject. 

Once you’ve set your background, then it’s time for cinematographers to work on the lighting. How this is done will depend on whether it’s a day or night scene and is based on the discretion of the cinematographer. But, usually it will involve adding some kind of ambience to the scene, like a bounced source that softens and spreads the light while raising the general exposure levels. 

Then you’ll usually want to set up some moving light sources to simulate that the car is in motion. This can be done by loosening the lock off on a stand and swivelling a light around. Or, what I’ve found works quite well, is to get a few people to swing around some handheld LED tube sources. 

To get the feeling of camera motion to be realistic, I’ve also found that shooting with a handheld camera combined with getting people to randomly shake the picture vehicle tends to give a more accurate feeling of motion than shooting a locked off frame.


ADVANTAGES AND DISADVANTAGES OF THE PROCESS TRAILER

One of the biggest reasons why filmmakers choose to shoot car scenes in this artificial way rather than with a real process trailer is budget. Shooting with a full police escort and locking off roads can become incredibly pricey.

The cost of renting out a studio, or even shooting outside in a parking lot, is going to be far cheaper than renting a low loader and paying for road closures and permits. 

Having said that, if you want to shoot with a large volumetric LED wall in a big film studio, then it quickly moves away from being a ‘poor man’s’ method and will probably be a similar financial spend to shooting with a real process trailer.

So, taking budget out of the equation, what are some of the advantages and tradeoffs of each method? 

Basically, shooting with a process trailer offers a realistic look, with little vibrations and real world, interactive lighting, that is difficult to match, while shooting with a ‘poor man’s process trailer’ offers much more ease and convenience.

It’s much easier to record clean dialogue when working in a soundproof studio environment than when working out on real roads. 

It’s also significantly faster and easier to change between setups when working in a studio, than when working from a low loader - which often requires driving the vehicle to a stopping point, then getting a full grips team to re-rig the camera so that it is safe and secured.

It’s also far more convenient for the director. When shooting in a studio they can clearly watch every shot on a monitor, and stroll over and give notes or have conversations with actors between takes. Whereas shooting on a process trailer usually involves more staggered communication and direction over a radio.

Studio environments also give cinematographers far more visual control. For example, when shooting outdoors the sun position will change over time, the light may go in and out of clouds during takes changing its exposure and quality, you may get unwanted reflections off the glass or hit a bumpy section of road that moves the camera too much.

In a studio lights can be set to a consistent intensity and position, lens changes and camera moves are easy, as you work off a flat, unmoving surface - ensuring that take after take can be visually repeated in the same way.

Also any technical glitches are easy to fix right away, whereas if a focus motor slips or a monitor loses a transmitted video feed on a process trailer, you’ll have to bring the whole moving circus to a safe stop to fix the problem - which is time consuming.

A final disadvantage to working with a process trailer is that it forces you to face the camera towards the windscreen of the picture vehicle or shoot side on. 

Shooting from behind and looking out of the front windscreen requires renting a niche vehicle where the driver’s controls are placed behind the picture vehicle - which can be a lot of effort for a single shot, compared to just turning the car around inside a studio against a moving background.     

CONCLUSION

Despite the logistical challenges and inconveniences, some filmmakers with the budget still opt to shoot the real thing - prioritising the realistic visual nuances that come straight out of the box when working with a process trailer.


Cinematography Style: Néstor Almendros

Let's take a look at the influential work of cinematographer Néstor Almendros; specifically, his use of natural lighting, his taste for cinematic simplicity and focus on providing directors with his creative insights and knowledge of cinema.

INTRODUCTION

Although cinematographer Néstor Almendros did most of his work in France, often for groundbreaking French New Wave directors, like Éric Rohmer and François Truffaut, he is probably best known for his collaborations with Hollywood directors on some iconic movies in the 70s and 80s.

Directors were often drawn to his pioneering way of working with natural light, his taste for cinematic simplicity and a focus on providing them with his creative insights, beyond just his technical expertise.

In this episode let’s break down how his thoughts and philosophies on cinematography influenced his photographic style, and also take a look at some of the gear and techniques that he used to pull off some breathtaking images.    


PHILOSOPHY

Many of the filmmaking techniques that Almendros would later use on larger feature productions, were actually gleaned in his earliest explorations into cinematography.

Three of these concepts which appear as threads throughout his filmography are: his knowledge of movies, his focus on natural lighting and his push for cinematic simplicity.

He developed an early love for cinema - so much so that he eventually became a bit of a cinephile and started writing movie reviews. He cites this as one of the best educational resources for DPs, claiming that the technical side of photography can always be learnt, or executed by the film technicians and crew that you work with, but that having a solid foundation and understanding of what films came before, and of the current trends in cinema and photography, is incredibly important.

In his early years he wanted to be a director. This made having a perspective on the narrative a must. It’s the job of the cinematographer to be sensitive to the needs of the story and have the necessary cultural background to draw from. 

An example of a movie that he was influenced by was the early Italian neorealist film La Terra Trema - shot by cinematographer G. R. Aldo. He was blown away by how Aldo used naturalistic lighting in a way that was very different from the other much more stylised and overlit movies of the time - which blasted hard, frontal key light at actors. 

Instead of shooting on sets in film studios and shining hard, spot lit key light, fill light and backlight at actors as was the norm, many of these Italian neorealist films used available, naturalistic light in real locations, which may be hard with unflattering shadows under direct sun, a soft, gentle feel under cloudy conditions, or use the last remnants of dusk light remaining in the sky after sunset.

His appreciation for naturalistic light may also have been influenced by the fact that his filmmaking originated in shooting documentaries in Cuba - where he could only work with a camera and available light, as there was not enough budget to hire lights or a team of electricians.

To overcome the low levels of light inside some of the houses they would shoot in, they came up with the idea of using mirrors to bounce the sunlight from outside into the house through windows, then bouncing it off the ceiling.

This technique of softening light by bouncing it became important later, but so too did the function of mimicking the direction of the natural sunlight by angling it through windows and increasing its strength.

In other words, taking an existing source of natural light and strengthening it by artificial means. 

Almendros inspired a major transition in thinking about lighting. Rather than being bound to film school concepts like three-point lighting, he instead wanted the lighting in his films, even when using artificial film lights, to be motivated by what natural light sources - like the sun - do in real life.

He talks about this in his autobiography:

“When it comes to lighting, one of my basic principles is that the light sources must be justified. I believe that what is functional is beautiful, that functional light is beautiful light. I try to make sure that my light is logical rather than aesthetic.”

Another aspect to Almendros’ work is an appreciation for simplicity both in practical, technical terms as well as aesthetic terms. Again, this may perhaps have evolved from the beginning of his career in documentary and on low budget films, where he became accustomed to making do with a lack of resources.

A great example of his economical way of working was his first narrative feature, La Collectionneuse, which he shot for French New Wave director Éric Rohmer. Because of an extremely limited budget they were faced with a choice early on: either shoot in the less expensive 16mm or shoot extremely economically on 35mm.

They went with 35mm. 

A shooting ratio refers to how much footage was shot in relation to the length of the finished movie: so if 20 hours of footage was shot to complete a 2 hour movie then the production had a shooting ratio of 10:1. La Collectionneuse had an insanely low shooting ratio of 1.5:1. This meant that they shot only 1 take for most shots.
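
To make that arithmetic concrete, here's a tiny sketch in Python (the roughly 90 minute runtime used for La Collectionneuse below is my own approximation, not a figure from Almendros):

def shooting_ratio(footage_minutes: float, runtime_minutes: float) -> float:
    # Ratio of footage shot to the length of the finished film
    return footage_minutes / runtime_minutes

# The example from the text: 20 hours of footage for a 2 hour movie
print(shooting_ratio(20 * 60, 2 * 60))   # 10.0 -> a 10:1 ratio

# A 1.5:1 ratio on a roughly 90 minute film means only about
# 135 minutes of negative ever ran through the camera
print(1.5 * 90)                          # 135.0 minutes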

Working in such an economical way has a few advantages: it forces directors to think about exactly what they want and have a refined vision. As Almendros says: “the problem is that when there are many options there is a tendency to use them all.”

Even later on in his career when he was working with larger budgets on Hollywood productions, he always gravitated to finding the simplest method using the tools that would most easily produce an image with functional, realistic light that told the story.

For example, not using a big truck full of lights and a large team of electricians, if he could get a more authentic image from only using natural light.

Visually, many of the films he shot also have a certain stylistic simplicity to them. Although of course it depended on which director he worked with, he often shot quite deliberately composed, static frames without many dynamic camera moves. This was especially true when working with Rohmer who liked shooting stationary frames from a tripod head.

So, Almendros can be characterised by his cinematic knowledge, naturalistic lighting, and simplicity - but how did he translate that over with the techniques and gear that he used?

GEAR

Nowadays the default for most cinematographers is to base their placement of lights, and quality of illumination on the real life sources that exist in the location. 

As I mentioned before, that wasn't always the case. The prior standard was that actors should almost always be well illuminated in clear, strong pockets of light, usually placed in front of the talent, from above, and shined directly at them. This clearly illuminates the face without shadow, but it isn't what light does in real life.

Compare this shot of how actors used to be lit when placed next to a window, to how Almendros did it in Days Of Heaven. There are two big takeaways. 

Firstly, Almendros places the light source outside the window, shining in, mimicking the direction that the sun would in real life. While the other shot keys the actor with a high, frontal source of light - that doesn’t make sense in the real world.

Secondly, the quality of the light is different. Almendros uses a much more diffused light that is far softer with a natural, gentle transition from shadow to brightness. While the other example has a very clear, crisp shadow caused by very strong, undiffused artificial light.

A technique he often used to get this soft quality of light when shooting interiors was to bounce lights, often from outside a window, into the ceiling. This reflected the source around the room, decreasing the intensity of the light, but lifting the overall ambience in the room in a natural way.

He liked using strong sources with high output to create his artificial sunlight for interiors on location or in studio sets, such as tungsten minibrutes, old carbon arc lights, or, later on, HMIs.

Although he is known for his use of naturalistic soft light in movies like Days Of Heaven, he also did use hard light at times when it was functional and could be justified by a realistic source.

He also often favoured lighting with a single source - meaning one lighting fixture which pushed light in a singular direction. He often did this by using practical light fixtures - like lamp shades with tungsten bulbs - and not adding any extra fill light to lift the exposure levels in the space. 

In Days Of Heaven he even took this idea and transferred it to the oil lamp props, fitting them with electric quartz bulbs that shone through orange tinted glass and were wired, under the actors' shirts, to battery belts they could wear.

These innovative solutions led the way to what is nowadays easily done with battery powered LEDs.

He paired this warm practical light with another lighting technique he would master - exposing for very low levels of ambient dusk light in the sky. 

Days of Heaven is probably best known for using this dusk light known as “magic hour”, but it’s actually something that he’d been doing since his first feature.

This was especially difficult as for most of his career he worked with a Kodak film stock that had a very low ASA rating compared to today's standards. Kodak 5247 is a tungsten balanced film that was rated at only 125 EI - which is around 5 stops slower than a modern digital cinema camera like the Sony Venice 2 that can shoot at 3,200 EI.

To expose at these extremely low levels of natural light he would rate the 125 ASA film at 200 ASA on his light meter - two thirds of a stop underexposed. As it got darker he would then remove the 85 filter - which corrects tungsten balanced film for shooting in daylight, but also darkens the image by one stop.

Then as it got progressively darker he’d also change to lenses with a faster aperture that let in more light, ending wide open on a Panavision Super Speed T1.1 55mm - poor focus puller.

If they needed to push things even further into low light shooting he would sometimes even film at 12 or 18 frames per second and change the shutter from 1/50 to 1/16. In this case they’d also ask the actors to move more slowly than usual to mask the otherwise sped up feeling of motion you’d get - reaping the final moments of available natural light before everything became dark.
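
Since a stop is just a doubling or halving of light, each of those steps can be roughly tallied. A quick back-of-the-envelope sketch - the T2.8 starting aperture is my own assumed comparison point, not a documented figure:

from math import log2

# Rating 125 ASA film at 200 ASA underexposes by:
print(log2(200 / 125))        # ~0.68 -> roughly two thirds of a stop

# Opening up from an assumed T2.8 to the wide open T1.1 Super Speed gains:
print(2 * log2(2.8 / 1.1))    # ~2.7 stops of light

# Slowing the shutter from 1/50s to 1/16s per frame gains:
print(log2(50 / 16))          # ~1.6 stops of light

Add the one stop gained by pulling the 85 filter and, step by step, that's how they kept squeezing usable exposures out of fading dusk light.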

However, when he wasn’t working with the smallest amounts of available light, Almendros actually preferred to not shoot with a wide open aperture. He felt the best depth of field was slightly stopped down, so that the background wasn’t a complete blur and could still be made out, yet was slightly soft so as to isolate the characters from the frame and make them stand out.    

He also innovated other DIY tech that could be used to simulate naturalistic lighting. For example, he ignited flame jets attached to gas tanks, which could be easily handled and had a controllable flame. These could be brought near actors to naturalistically illuminate them for scenes involving fire, rather than using electric lights - which was standard practice before.

To achieve a wide shot of locusts flying off in-camera, without post production visual effects, Almendros again pulled a technique from his knowledge of cinema - this time a movie called The Good Earth. They suspended helicopters just above the shot and released seeds and peanut shells, then to get the effect of the insects taking off they got the actors to perform their actions in reverse. 

This was shot on an old Arriflex that could run film backwards, so that when the footage was played back the original action unfolded in reverse.

This innovation also extended into camera movement. Days Of Heaven was the first film to use the Panaglide - Panavision’s lightweight alternative to the Steadicam, which could be used to get sweeping, tracking shots with actors over uneven natural terrain.  

CONCLUSION

Much of what we take for granted in cinematography today - shooting in low light, using practical sources, and thinking about motivated natural lighting rather than three-point lighting - is built on innovations aided by Almendros' work.

He was able to use his knowledge of cinema to inform his taste and storytelling techniques, then pushed established technical boundaries and ways of thinking to make his cinematography extremely beautiful but also extremely influential.

Gray Kotze

What Makes Anamorphic Lenses Different?

Let's dive a bit deeper into how anamorphic lenses work and what makes them different from regular spherical glass.

INTRODUCTION

A set of lenses is one of those tools in a cinematographer's toolbox that can influence how stories are visually presented and how they are practically shot.

Anamorphic glass is different to normal spherical lenses in many ways, so much so that shooting anamorphic is considered a different format altogether.

But what is it that makes these lenses different? Let’s do a bit of a deep dive.



WHAT ARE ANAMORPHIC LENSES?

When light passes through an ordinary lens, it captures an image which is correctly proportioned and can be used straight away. Anamorphic lenses are a bit funky. Because of their differently shaped, oval, cylindrical glass elements, when light hits them it gets squeezed and produces an image which is compressed. 

Kind of like how fairground mirrors can squeeze reflections to make you look long and lanky.

This contorted format was invented for two reasons: to create a widescreen aspect ratio, while at the same time maximising the quality or detail in the image. To understand how this works we must quickly dive into the origins of how old 35mm film cameras worked.

35mm film runs vertically through a camera, which captures the full width of the negative at a height of four of these little perforations. This is all well and good when shooting in the old, tall academy aspect ratio, but what if you wanted to film and present images in a widescreen format?

Well, you could take that taller frame and chop off the top and bottom with a mask. But that meant that, when capturing or projecting, you would waste a lot of expensive film that would just be blacked out. And because the full height of the negative was cropped, the recording area was smaller - which decreased the clarity and quality of the recording and increased the amount of film grain you'd see.


SQUEEZE FACTOR

Anamorphic lenses fixed this by recording the full height and width of each four-perf frame, by squeezing the image to cover the entire negative. When screening the film in a cinema they then attached an extra anamorphic lens onto the projector which desqueezed the image - reverting it back to its correct proportions by stretching it out by the same amount that the anamorphic lens originally squeezed it.

The amount that the lenses compress the image is called the squeeze factor. This refers to the ratio of horizontal to vertical information captured by an anamorphic lens. So regular spherical lenses that capture normal looking images have a factor of 1x, where the horizontal and vertical information is captured equally. While anamorphic lenses usually have a squeeze factor of 2x. This means that twice as much horizontal information is squeezed into the image as vertical information.

Although a 2x factor is the norm for anamorphic glass, there are also some lenses with different squeeze factors out there - which we’ll get to a bit later.



DESQUEEZE

Now that all post production, and almost all cinema projection, happens digitally rather than with film - the method for desqueezing footage has also changed.

You can now import files shot with anamorphic lenses into editing software, and apply settings to desqueeze the footage digitally, for example by a factor of two, to make the images uncompressed in a native widescreen aspect ratio.



ASPECT RATIO

Although the exact aspect ratio, or width, of anamorphic capture and projection may change ever so slightly depending on a few factors, it will usually be either 2.35:1 or, nowadays, 2.39:1 - which is commonly rounded up and referred to as a 2.40:1 aspect ratio.
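
The arithmetic behind those numbers is straightforward: the delivered ratio is just the capture area's own aspect ratio multiplied by the squeeze factor, with small crops or masks taking it to the exact standard. A rough sketch, using approximate capture ratios rather than exact aperture dimensions:

def desqueezed_ratio(capture_ratio: float, squeeze: float) -> float:
    # Aspect ratio after the image is stretched back out horizontally
    return capture_ratio * squeeze

# 2x anamorphic on a roughly 1.2:1 4-perf 35mm anamorphic aperture:
print(round(desqueezed_ratio(1.2, 2.0), 2))    # ~2.4

# 2x anamorphic on a 4:3 digital sensor:
print(round(desqueezed_ratio(4 / 3, 2.0), 2))  # ~2.67, then cropped to ~2.39 for delivery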


SUPER 35

Earlier I mentioned that an alternative method to get to this widescreen aspect ratio is to shoot with spherical lenses with a 1x factor and crop off the top and bottom of the frame.

Although this method yields less surface recording area and slightly diminished quality and resolution of detail, it is still a very commonly used format over anamorphic for a number of reasons.

Because widescreen Super 35 records extra information on the top and bottom of the frame, this can be useful in post for things like CGI, stabilising the image with software, or cropping out unwanted things by reframing up or down.


SPHERICAL VS ANAMORPHIC CHARACTERISTICS

Spherical lenses are also usually sharper across the width of the frame, meaning that details on the edges of the shot that are in focus will remain sharp. Anamorphic lenses have a sweet spot in the middle of the frame that will be sharp when in focus, while the edges of the frame will resolve detail less sharply. 

Another difference between spherical and anamorphic lenses is how their bokeh is rendered, which is the out of focus area in an image. Because the glass elements in 1x spherical lenses are rounded they produce rounded balls of bokeh.

However, anamorphic lenses, with their cylindrical elements that squeeze the image, create bokeh which takes on more of an oval shape. This shape is also affected by where the cylindrical glass element is placed within the lens. 

Most true anamorphic lenses place the cylindrical element at the front of the lens, with regular circular elements behind it. These are called front anamorphics and produce that classic ovular distortion in the background.

There are also rear anamorphic lenses, which instead place the cylindrical element at the back of the lens, with the rest of the circular elements in front of it. This is often done to create 'Frankenstein' anamorphised zooms - taking an existing spherical zoom lens and adding a rear anamorphic element to the back of it.

This has the same effect of squeezing the image, however rear anamorphics often lose the oval bokeh shape, which becomes a bit more rounded or even, in some cases, rectangular.

Probably the most defining characteristic of anamorphics is their flare. When direct, hard light enters these lenses it produces a horizontal flare across the width of the image - which is usually quite pronounced.

Spherical flares tend to generally be a bit tamer and subtler and flare in a more circular way, rather than horizontally.


ANAMORPHIC IN A DIGITAL WORLD

Many digital cinema cameras use a sensor that approximately models the size of a Super 35 negative with a 4:3 ratio. This means that most existing anamorphic lenses, which were designed for four-perf film, are able to cover the width of these digital sensors without vignetting.

Like on film, these 2x anamorphic lenses could cover the full height and most of the width of the sensor, filling a greater overall surface area than shooting a cropped Super 35 image with spherical lenses.

However, not all digital sensors used such a tall 4:3 ratio. Some sensors were designed to be more of a 16:9 size. There are some anamorphic lenses with a different 1.3x squeeze factor, instead of the standard 2x squeeze, that cover these wider sensors and still produce a widescreen image with a 2.40 aspect ratio.     

In recent years, full frame and large format digital cameras have seen a surge in popularity. Because these sensors are significantly larger than Super 35, most 2x anamorphic lenses don't have glass elements wide enough to cover them - the sensor sees inside the lens barrel and the image vignettes.

Anamorphic lenses with various squeeze factors have been designed to cover these format sizes, from 1.3x to 1.8x.



LIMITATIONS

A limitation of shooting anamorphic on large format or full frame is that you have a smaller selection of lenses to choose from, and this glass is typically more expensive than comparable spherical options.

Spherical prime sets also usually come with far more focal lengths to choose from. For example a modern set of spherical primes like the Master Primes come in 16 focal lengths, whereas a modern set of anamorphic primes like the G-Series come in eight focal lengths.

This sometimes means that cinematographers like to pair a set of anamorphic primes with a longer zoom - which may either be an anamorphised rear zoom, like we mentioned before, or a front anamorphic zoom like the Panavision 70-200mm.

Another potential limitation of anamorphics, especially front anamorphic zooms, is that because of their more difficult design and increased number of glass elements, they usually have a slower stop than their spherical peers - which renders a tad less bokeh and makes it more difficult to shoot in very low light conditions.

They are also on average physically larger and heavier than spherical lenses - with the gigantic Primos being a great example of just how hefty a prime anamorphic lens can get. Although many lighter alternatives do also exist.

Close focus can also be an issue. The extra glass in anamorphic lenses means that the MOD, or the closest point that the lens can render an object in sharp focus, is usually not very near. 1x spherical glass is normally far better at this. So, if filmmakers want to shoot an extreme close up on an anamorphic lens, they will need to use an extra diopter filter which allows them to achieve more of a macro focus.

Visually, anamorphics produce more distortion, with the wider focal lengths, around 40mm and wider, bending the edges of the frame - which is especially noticeable when shooting something with a straight line like a door frame. 

WHY CHOOSE ANAMORPHIC LENSES?

So, broadly, spherical lenses offer a greater practical flexibility to cinematographers, while anamorphic lenses offer a specific look, in exchange for a few practical tradeoffs.

Overall, DPs who like a clean look to their footage, which is sharp across the frame and free of aberrations or distortion often like to go with spherical glass and crop to get a wide aspect ratio. Whereas those looking to add a touch more visual character to the footage to make it a little less perfect, which is often done to counteract the sharpness of modern high res digital cameras, may prefer the look of older anamorphic lenses.

Having said that, there are exceptions to this. Old, vintage spherical lenses exist which offer a lot of imperfections, as well as modern anamorphic lenses which are very sharp and clean. 

In the end, anamorphic lenses can give projects a look that has long been considered classically cinematic, with their oval bokeh, lateral flares, falloff and native widescreen ratio. However, this does come with a few practical tradeoffs which may need to be considered by filmmakers.

Gray Kotze

Mixing Film And Digital Footage: Killers Of The Flower Moon

Let's break down the cinematography - specifically the use of colour and LUTs - in Killers Of The Flower Moon.

INTRODUCTION

Colour is a tool that plays a crucial role in cinematography and can be manipulated to craft a bunch of different looks. 

Some of these looks can be pretty heavy handed, like the quote unquote 'Mexican filter' - which punches up the warmth every time the world of the story moves south of the US border.

Traffic (2000)

But other applications of colour, like in Killers Of The Flower Moon, are a bit more subtle, yet still have an unconscious effect on how the story is taken in by audiences.

The workflow behind this cinematography combines old school thinking with new school technology. This comes from the choice to shoot on both film and digital cameras in different situations, and by thinking about LUTs and digital colour correction in terms of old photochemical techniques.

So, let’s explain how these principles work by breaking down the cinematography in Killers Of The Flower Moon.

MIXING FORMATS

Killers Of The Flower Moon responds to the long discussed topic of film versus digital with ‘Why not both?’ This was a decision influenced by DP Rodrigo Prieto who needed to think not only about how this choice affected the look and feel of the movie, but also how it would practically influence the way they shot it.

“Part of the role of the cinematographer is to find ways so that the camera and the lighting really represent what they are feeling. What’s the energy of the scene? What should it feel like? A lot of the discussions were about that.” - Rodrigo Prieto, Cinematographer

For scenes that featured Osage characters or took place in that world, Prieto wanted to represent colour in a way that felt natural, which, to him, was best achieved by using a film negative with a film print.


COLOUR NEGATIVE FILM

The main film he used was Kodak's 35mm 250D 5207 stock, which covered scenes that took place during the day. Because 250D can be shot at a higher ASA rating than Kodak's other daylight stock, less light is needed to expose it - which makes it possible to shoot some interiors as well as exteriors on the stock.

He swapped this out for the less sensitive 50D stock during Osage ritual scenes, when he wanted shots to have a tighter, less noticeable film grain. He also sometimes used 500T when filming night interiors under tungsten light.

They ran this film through Arri's newest 35mm cameras, the Arricam ST and LT - with the LT version being better for more compact rigs like Steadicam or remote head shots, and the ST version being able to record at a higher 60FPS versus the 48 of the LT.

For one extreme slow mo sequence they also pulled out an old Arriflex 435 to shoot at 150FPS in the wide and a digital Phantom to shoot the tighter shot at 700FPS.  


BLACK AND WHITE FILM

A few select frames which needed to look like old archival footage were also shot on Kodak's 5222 black and white film stock. To push this newsreel look even further without using visual effects or post production work, they shot the film on Scorsese's own 1917 Bell & Howell 2709 camera.

It needed to be hand cranked to move the film past the gate, which meant the frame rate would change depending on how fast or slow the handle was turned. This gave the footage that unusual feeling of motion that is associated with the format.

So, apart from these exceptions, if 35mm felt like the right format for most of the movie then why did they also shoot some scenes with a digital cinema camera?


SONY VENICE

This mainly came down to a technical limitation of film. The most sensitive colour negative film stock that Kodak produces is rated at 500 ASA and is balanced for tungsten light. This pales in comparison to the low light capabilities of a camera like the Sony Venice 2, which has a high base ISO of 3,200 as part of its dual base ISO design.

Since certain scenes needed to be shot in the blue hour - the small window of light after the sun has set but before the cool, ambient light disappears altogether - they needed a camera that could shoot and expose with a very small amount of natural light.

Having almost three extra stops of sensitivity meant that the Venice could maximise the shooting window after sunset and continue to shoot scenes quite late as the natural ambience started to disappear.

This was especially useful as they were shooting on Panavision T-series anamorphic lenses that have a stop of T/2.3 - which is not a particularly fast lens that lets in a huge amount of light - unlike faster spherical glass such as the Master Primes with their T/1.3 aperture. 
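
As a rough back-of-the-envelope comparison of the two packages - treating stops purely as powers of two and ignoring how each medium actually handles underexposure:

from math import log2

# Sensitivity gain of the Venice's 3,200 base ISO over 500T film:
print(log2(3200 / 500))       # ~2.7 -> "almost three extra stops"

# Light given up by the T2.3 anamorphics versus T1.3 Master Primes:
print(2 * log2(2.3 / 1.3))    # ~1.6 stops slower

# Net: the Venice with the T-series still lands roughly a stop ahead
# of 500T film paired with the faster spherical glass
print(log2(3200 / 500) - 2 * log2(2.3 / 1.3))   # ~1.0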

So the Sony Venice was chosen instead of 35mm to shoot scenes that took place in low light such as dusk scenes where they only used natural ambient light or wide night exterior scenes. 


4 FILM LUTS

Although this traditional 35mm Kodak colour film was the basis of the look, it did deviate quite a bit from there to serve the story thanks to being combined with a more modern post production workflow.

Scorsese and Prieto did this by coming up with four different LUTs - which are pre-established looks that can be added on top of the footage to affect the colour, saturation and contrast. Kind of like adding a filter using consumer photo editing software. They used these different LUTs to support certain feelings at different points in the movie.

Instead of creating looks based on simple colour ideas in grading like warm versus cool, saturated vs desaturated or high vs low contrast, the way that they conceptualised and implemented these LUTs in the workflow used old school concepts based on how cinematographers used to work with selecting, processing and colour timing film - before digital cinematography.

The first of the four looks that they created was a ‘normal film’ LUT that precisely emulated Kodak 5219 colour negative film stock and produced natural filmic colour. 

This was used for early scenes with Osage characters and could be applied to the digital footage shot on the Venice or the 35mm film scans to achieve the same regular film development and print look.

The second LUT, mainly used to present the European characters and their world, tried to emulate the look of very early colour still photography processes such as Autochrome and Photochrom, which were developed around the turn of the 20th century.

Autochrome has more desaturated colour than modern colour negative film, with a brighter, more washed out violet hue in the skies caused by more magenta in the blues, desaturated greens and yellows and reds that take on an orange hue.

The third LUT - used in the latter half of the film once the story gets darker and bleak - was based on a photochemical development process created by Technicolor called ENR. This is fairly similar to bleach bypass development and creates less saturated colours across the image and increased contrast.

The final look emulated three-strip Technicolor which has a more vividly rich, saturated colour, with deeper, punchy reds and greens - and was used for more magical, experimental moments in the story or the separate coda scene which breaks away from the main narrative timeline.

Oh and one scene also used a day-for-night LUT to make footage that was shot under direct sunlight during the day appear to be night time footage.


POST WORKFLOW

These LUTs were created by senior colourist Yvan Lucas at Company3, in collaboration with Prieto, before production began. Once principal photography started, they would receive either RAW files from the Venice, or processed and scanned digital files from the developed 35mm film.

The dailies colourist would apply the appropriate LUT to the footage in Baselight grading software and then proceed to do minor colour correction where necessary using another old school way of working called ‘printer lights’. 

This copies the old method used to colour time 35mm footage from the lab, by only adjusting primary colour and exposure variables. This could be done through software by raising or lowering the offset which is the overall exposure of the image, increasing or decreasing the values for the red, green and blue colour channels, and increasing or decreasing the cyan, yellow and magenta values.

Basically, making broader adjustments to colour and exposure without using more nuanced correction or masking tools.
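
As an illustration only - this is not Company3's actual Baselight setup - a printer lights style correction can be thought of as a handful of global adjustments: an overall exposure offset plus trims on the red, green and blue channels, applied to the entire frame with no masks or secondaries. A minimal sketch, assuming the footage is loaded as a normalised RGB array:

import numpy as np

def printer_lights(image, offset_stops=0.0, rgb_trim=(0.0, 0.0, 0.0)):
    # Simplified primary correction: an overall exposure offset plus
    # per-channel trims, all expressed in stops and applied globally
    gains = 2.0 ** (offset_stops + np.array(rgb_trim))
    return np.clip(image * gains, 0.0, 1.0)

# e.g. lift the exposure a third of a stop and cool the image slightly
frame = np.random.rand(1080, 1920, 3)   # stand-in for a scanned frame
graded = printer_lights(frame, offset_stops=1/3, rgb_trim=(-0.1, 0.0, 0.1))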

Scorsese and his editor worked with these dailies while cutting. Then, right at the end of the process, once they had the final cut, they could come in and do more nuanced tweaks to the colour in the final grade - doing things like isolating secondary colours or adding power windows and masks.


LUT COMPARISON

The process of subtly affecting the colour will likely go unnoticed by 99% of the audience as the movie plays. Which is what you want - you don’t want it to be so overt and obvious that changes in colour become distracting.

However, if we compare these different LUTs side by side, especially the starting Kodak look and the final ENR look we can see that they are drastically different. 

The first, normal film LUT is applied when the story is more comfortable - rooted in the environment and the practices of the Osage characters. The colours feel natural, rich and real with nice contrast, strong, deep greens and a comfortable range between the deepest blacks and brightest highlights - which shows detail in the shadows.

As the story progresses and we delve into the moral darkness, desperation and corruption of the story and behaviour of the characters the colour and life starts getting, literally, drained away from the image.

Compare the Kodak look to the ENR look in this frame which is shot in a similar lighting and colour environment. All that deep, lovely natural colour in the greens and blues, and healthy magenta in the skin tones has been completely sucked out and replaced by warm, very desaturated tones. 

Likewise, the contrast has been increased in the ENR LUT so much that the sky is almost blowing out to a pure white and the parts of the image in the shadow have lost almost all detail - which overall feels harsher and less natural.

This circles back to Prieto’s initial commentary on cinematography. That the images filmmakers create should represent the energy and feeling of the characters in the scene as the story progresses.

Gray Kotze

How Oppenheimer Reinvented Imax

The process behind creating Oppenheimer.

INTRODUCTION

There’s a famous saying that ‘life imitates art’. 

I’d say this rings true for the process of creating Oppenheimer - where the story of a team’s struggle to push the bounds of science to create a new technology was mirrored by the film crew’s very own push towards creating new Imax camera tech to tell that very story.

This included advances in lighting, lensing as well as the creation of a new 65mm black and white film stock. So, let’s break down some of the technical advancements that influenced the visual storytelling in Oppenheimer.     

HOW IMAX WORKS

If you’ve seen this film in an Imax cinema it’s difficult to deny that Imax is a truly gorgeous format. What makes it different from shooting on other camera systems is the size of the negative that it captures images on. Basically Imax takes the image capture area of a traditional 35mm camera and supersizes it - flipping the 65mm film sideways to create a negative that is over eight times larger.

This gives Imax footage an unprecedented resolution of detail, low film grain, a wide field of view and makes it possible to shoot with an extremely shallow depth of field - that can give close ups and wide shots alike a very blurry background.
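
That 'over eight times larger' figure roughly checks out if you compare the two formats' approximate capture areas (the millimetre dimensions below are commonly quoted figures, not exact aperture specs):

# Approximate camera aperture sizes in millimetres
s35_4perf = 24.9 * 18.7      # ~466 mm^2 for a 4-perf 35mm frame
imax_15perf = 70.4 * 52.6    # ~3703 mm^2 for a 15-perf 65mm frame

print(imax_15perf / s35_4perf)   # ~8 times the recording area
# Measured against the smaller Academy sound aperture, the multiple is even higher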

However, pushing the bounds of photochemical image capture comes with a number of downsides.


WHY SHOOT ON IMAX

But, before we delve into these issues, why shoot what is effectively an intimate, dramatic portrait of a character on Imax in the first place? 

Well, director Christopher Nolan and cinematographer Hoyte Van Hoytema like to first begin by identifying the creative needs of the project, then start unpacking the technical solutions and innovations that they can come up with to make those artistic decisions a reality. 

In the words of Van Hoytema: "Large format photography gives clarity and places the audience in the reality you are creating for them." So, Imax was used as a creative solution to immerse the audience not only in the action sequences and setpieces, but to intimately envelop audiences in the face and performance of the protagonist, and get them caught up in his psychological state and emotions.

Using Imax to capture every nuance of detail on a face, rather than just for aerial landscapes and explosions. 

SYSTEM 65 VS IMAX

However, if you’re observant you may notice that in Oppenheimer, and in fact all of Nolan’s films which use Imax, different shots alternate between a widescreen aspect ratio - with a black letterbox on the top and bottom, and shots with a taller frame.

These two different aspect ratios reflect the two different film formats used to shoot the movie. 

All the widescreen shots were photographed with a 2.20:1 aspect ratio on a 5-perf Panavision System 65 camera, which runs 65mm film vertically through the gate. 

While the taller shots were done on a 15-perf Imax camera that runs 65mm film horizontally and uses the largest possible recording surface area. Either presenting the full 1.43:1 Imax negative, or, in most distribution cases, cropping off the top and bottom to get to a 1.90:1 ratio.

Some scenes even intercut between these two different ratios. Why is this? Why not just shoot everything in Imax? The answer is because of sound.   



SOUND ISSUES

Running so much film through the Imax camera so quickly means that it makes a huge noise. This gets picked up by the microphone that records the dialogue on set.  

So if you watch a few scenes, you'll notice that any moments with dialogue tend to be shot in a widescreen ratio on 65mm with the quieter 5-perf camera, while other moments without synchronised dialogue - which can be covered up by sound design, ADR, or even denoising audio in post - are shot on the noisy Imax camera.

Sometimes Nolan even cuts from reaction shots filmed on Imax to dialogue shots filmed on 65mm during the same scene. 




CUSTOM LENSES

Another technical snag they ran into stemmed from Nolan wanting to visually present Oppenheimer's abstract idea of what was happening on a subatomic level. 

Again, they started with the creative need - shooting practical microscopic visual effects - and from there came up with a technical solution that involved creating a new, custom prototype macro snorkel lens for Imax.

This was designed by lens aficionado Dan Sasaki at Panavision, who was tasked with custom engineering a waterproof, wide angle macro lens. He managed to produce a 24mm and a 35mm - with the 24mm lens being able to focus within a ridiculous 1mm from the lens's front element.

Part of the reason why these lenses didn’t exist before and why they were incredibly difficult to produce is because Imax glass needs to be very wide in order to avoid vignetting - seeing inside the barrel of the lens - which creates black circular edges around the image.

The solution to this is to make the diameter of the glass elements larger. However, because it was a probe lens there was a restriction on the width of the elements. Sasaki managed to solve this by adding an extra five glass elements inside the barrel of the lens.

Creating this new lens resulted in some spectacular practical effects: slow motion, 48FPS Imax macro shots of spinning beads on a wire rig and burning thermite, shot underwater in a tank and hit with a single, strong, hard light source.

For some extreme slow mo shots they also used a 35mm Arriflex 435 - which could shoot at a higher frame rate of 150FPS.

NEW FILM STOCK

Like his early film Memento, Oppenheimer uses black and white and colour as a way to delineate different timelines and perspectives within the narrative. Nolan went so far as to write whether each scene was in black and white or in colour in the screenplay.

Again the starting point was the creative decision to present Oppenheimer's perspective using colour, and the perspective of another character, Lewis Strauss, using black and white.

Once again this led them to a technical snag. Kodak had never cut and finished their Double-X 5222 emulsion in 65mm before. It had only been produced for 35mm and 16mm.

Double-X has lovely contrast and a strong, dancing grain in the midtones, which is much more pronounced than it would be if they had shot on colour Vision3 stock and converted it to black and white in post.

So, with some work, Kodak managed to manufacture their film stock in the larger gauge size that was needed for 5-perf as well as 15-perf Imax for the first time. 

It also meant that a change needed to be made to the Imax camera. Because monochrome negative has a different backing than colour negative’s remjet layer, it created various exposure artefacts and scratches due to the polished chrome pressure plates that came into contact with the film inside the Imax camera.

To fix this Imax manufactured a custom ‘black oxide’ vacuum pressure plate that eliminated the issue. 

Fotokem, the film lab that processed the black and white stock, also had to adapt. They spent three days shutting down their 65mm colour development equipment, cleaning it and converting it to the chemicals needed for black and white.

This turnaround meant that the production had to schedule around shooting all the black and white scenes - sometimes waiting up to a week before being able to view the processed rushes.

TWO VISUAL APPROACHES

On top of presenting the perspectives of the two characters in colour and black and white, Nolan and Van Hoytema also employed other cinematographic techniques to do this.

If you look at this scene and this scene you can clearly see through the use of shot size and angles whose perspective each takes on. They use close ups of the character of importance, followed up with over the shoulder shots, taken from their perspective to put the audience directly in their shoes.

On top of this, they more often used a handheld camera and wider angle 40mm or 50mm focal lengths to shoot the more uninhibited Oppenheimer, while using more static, composed shots on a longer 80mm lens to shoot the more calculated Strauss.

LIGHTING INNOVATIONS

The team’s technical innovations weren’t limited only to the camera, but also to the lighting.  

To save time and give them the ability to dial in specific colour and brightness values, van Hoytema’s gaffer rigged the lights with latency free transmitters and receivers that fed to a DMX board. 

This meant that the electrical team could rig the lights, then van Hoytema and his gaffer could be on set and adjust all of them by dimming, adjusting colour temperature or even RGB values, from the DMX board. Even if the light was rigged blocks away. 

This was useful for perfectly balancing and matching up the colour balance of the artificial light sources with the natural light that was coming from the same direction. 

Like all the other technical decisions, the lighting was also influenced by the story. Most of Oppenheimer’s scenes were lit with slightly softer light sources like LEDs or HMIs that were diffused, while the black and white sequences of Strauss were lit with much harder, direct light from old tungsten units. 

CONCLUSION

I think what all of these innovations show is that technical breakthroughs come as a result of needing to fulfil a creative choice - not the other way around.

Nolan and Van Hoytema first come up with the best visual ideas that serve the telling of the story, then from there treat it as an engineering exercise, assembling the best team that they can to innovate and invent the best technological solution. 

Gray Kotze

How Does A 3D Cinema Camera Work

Let’s go over how these cameras work, what extra film crew is needed to operate them and some methods that work best with this format.

INTRODUCTION TO 3D

Over the years 3D cinema has seen a few periods of resurgence and falling off. 

Although it's in a bit of a dip at the moment, it does still exist as a niche format with its own unique method of capturing native footage on specialist 3D cinema cameras.

So let’s go over how these cameras work, what extra film crew is needed to operate them and some methods that work best with this format. 



HOW 3D CAMERAS WORK

Unlike regular 2D cameras which capture and project a flat image against a flat surface, 3D cameras create the impression of footage having depth with different planes of dimensionality.

They do this by capturing images in the same way that human eyes see.

Light enters two eyes and is processed by the brain, allowing us to see the world with depth perception. In the same way, two cameras can be placed near each other at slightly different angles to capture two similar images.

When each image is viewed by a different eye, the brain is tricked into thinking that the 2D images have different planes of depth. 

Presenting slightly offset images to the right and left eye is also called stereoscopic display, and is the same concept used for other 3D depth perception such as VR.

In 3D cinemas different images are fed to different eyes by using 3D glasses. There are some different varieties, such as glasses that use different colour lenses that only take in projected images of a certain colour, or glasses that have different polarised lenses for each eye that allow each eye to see a different picture.

It is now also possible to create a 3D film without a 3D camera, by manipulating 2D footage with post production software that separates the images into different planes. This is the case for some movies that were shot in regular 2D and have been re-released in 3D. But, that's a topic for another day.




3D CAMERA BUILD

There are two types of 3D camera rigs that can be used to get the two cameras close to each other to shoot these slightly offset images. 

The first is called a parallel rig - where the cameras are placed horizontally alongside each other on a sliding plate. The second type of 3D camera configuration is called a beamsplitter. 

This is where two cameras are placed at a 90 degree angle, pointing into a sort of cube that houses a mirror angled at 45 degrees. It works like the one way mirror you see in police interrogation scenes: one camera captures the light reflected off the 45 degree surface, while the other, level camera sees straight through it - so the two record slightly offset views.

As the footage from both cameras will converge to be almost exactly the same, it’s important that the cameras, lenses and any accessories used are all identical. 

This means renting two sets of everything in the camera package. So shooting with one 3D camera rig will require 2 cameras, and 2 sets of lenses. Shooting with two 3D cameras will need 4 cameras and 4 sets of lenses.

This also applies to accessories like focus motors. Often the wireless follow focus will come as part of the specialised 3D camera package. If using a system such as a Preston with a zoom lens, this may mean using two MDR boxes and six motors: 2 motors to control the focus on each camera, 2 motors to control the zoom and 2 motors for the iris.

However, there are some focus solutions that exist that minimise all those extra cables and MDR boxes, such as the CMotion CVolution - which has a single motor box that can drive up to 8 different motors at a time, and run the focus from one wireless handset using multiple motors on multiple cameras.  

All of this gear, combined with aligning two cameras alongside each other, means that 3D cameras are very bulky, heavy and awkward to move around. For that reason there are some best practices for working with this camera system, and also a unique crew member that you don't have on a regular 2D shoot.



CREW ROLES

There are two important control settings which are unique to native 3D cameras and they are both controlled by an extra crew member called a stereographer. 

These two 3D settings are the interocular distance and the convergence.

The interocular distance refers to the space between the lenses on the two respective cameras. On a parallel rig this can be adjusted by sliding the cameras further apart or closer together on the horizontal axis, while on a beamsplitter it can be changed by adjusting the distance of the lenses from the mirror.

Usually this is controlled by the stereographer with an extra motor that can turn gears to adjust this distance between the cameras. 

The reason that this distance between the lenses may need to change for different shots is because, as a general rule, the further away the subject is from the camera, the greater the distance between the two cameras should be. While the closer you are to the subject the nearer the lenses have to be to each other.
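
One rough starting point sometimes quoted by stereographers is the '1/30 rule' - setting the interaxial distance to around one thirtieth of the distance to the nearest subject - though it's only a rule of thumb, and the stereographer will adjust from there shot by shot. A sketch of that heuristic:

def interaxial_rule_of_thumb(nearest_subject_m: float, divisor: float = 30.0) -> float:
    # Rough '1/30 rule' starting point for the distance between the two lenses
    return nearest_subject_m / divisor

print(interaxial_rule_of_thumb(1.5))    # close up ~1.5m away -> 0.05m (5cm)
print(interaxial_rule_of_thumb(60.0))   # wide shot ~60m away -> 2.0m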

So wider shots, such as 3D aerial footage, may need to be shot with a parallel rig where the cameras can be more easily separated, while a beamsplitter 3D camera may work better for a close up where the camera lenses can be brought very close together using the mirror as a divider.

The stereographer will monitor this setting as well as the convergence, which is the angle at which the cameras are adjusted to line up the screen plane so that the images overlap correctly and are properly aligned.

If using zoom lenses this also includes making sure that both lenses are set to exactly the same focal length, and that the stop, exposure level and other settings on each camera match correctly.

The job of the cinematographer on a 3D film may involve some added technical complications, but otherwise their framing, lighting and other aspects to the cinematography will remain mostly the same as when shooting a 2D film.

The camera operator will have to deal with the implications of a heavier, more cumbersome camera build, but otherwise will also retain their usual role, with an important difference.

They will usually need to frame their shots a little wider than they normally would, allowing for extra frame room on the edges of the shot. This is because the 3D images may need to be slightly cropped into during post production if they need to adjust the convergence of the images.

For this reason they need to keep important parts of the frame away from the edges of the shot - for example, fully framing the top of the actor's head without giving them too much of a 'haircut'.

Custom frame lines can also be added to the monitor, which crop in on the fully captured image, so that operators know the bounds of framing that they need to abide by.

VT, the department which handles transmitting a live video feed from the cameras to monitors and playing back that footage, is another job on set which may get a bit trickier.

For example, on Prometheus, this involved setting up a video village for a multicam shoot with two 3D monitors, and two 2D monitors that could be flipped between seeing the left eye and right eye camera on a single 3D camera rig.    



SHOOTING 3D

On top of these layers of complications, there are some other extra best practice rules that are often followed when shooting 3D. 

Cinematographers may be advised to light with a little more contrast between dark and light parts of the frame than they normally would. Creating different planes of light and shadow helps to better sell the illusion of depth from 3D, than if everything were lit to be flat - with a low level of contrast.

For purely practical reasons it may also be easier for DPs to work with zoom lenses rather than primes. This is because changing to a new prime lens focal length requires realigning cameras each time, while zooming in and out minimises the setup time it takes to correctly overlap the images.

Because you are using two cameras, rather than a single camera, it also means that each rig is significantly heavier and bulkier. This means that if you want to move the camera you will have to use rigs that can take a heavier payload - such as a remote head on a Technocrane.

Or if something like a Steadicam is required it will take careful technical preparation to make sure the rig can correctly balance.

The large size of these cameras also makes it difficult, if not impossible, to shoot in tight spaces such as car interiors, get very low angle shots, or place the camera in any situation that usually requires a small build.

When it comes to moving the camera, stability of motion is usually encouraged. Shooting handheld is advised against, as too much movement can be off putting and mess with the audience's sense of motion - which is why 3D shoots often rely on shots operated from a base, like a tripod or dolly, or shots with stable motion like Technocrane or Steadicam.


CONCLUSION

It's easy to see why 3D remains a niche format. Capturing native 3D images is quite a challenge both from the technical side and from a financial point of view. It adds layers of complication to the production workflow which can, in many ways, limit filmmaking choices.

However, in the right context 3D can work really well in recreating the feeling of being in a real world environment and increasing the audience’s engagement in the world of the story. While at other times it can feel a bit gimmicky, distracting, unnatural and actually pull the audience out of the movie.

Whether or not it will make a mainstream cinema comeback remains to be seen.

But, maybe these 3D rigs will be reimagined, reconfigured and applied to new visual projection experiences that use different but similar principles like the new Sphere or the world of VR. Only time will tell.

Gray Kotze

5 Reasons To Light Films With Colour

There are many reasons to use different coloured lighting when you shoot a scene. In this video we’ll go over 5 of them.

INTRODUCTION TO COLOUR

Since the dawn of three-strip film in the 1930s every cinematographer has been aware of the importance of using colour to create images. Different palettes can be constructed by using production design, introducing a look in the grade, or, as we’ll discuss today, with lighting.

There are many reasons to use different coloured lighting when you shoot a scene, so today we'll go over 5 of them - by looking at a few clips from popular movies as well as some footage I've lit using different hues.




1 - MOTIVATED LIGHTING

Every environment that we walk around in has different frequencies of light bouncing around which - to both our eyes and cameras alike - read as colour.

Therefore, one of the most common motivations for illuminating with a vibrant hue, is to replicate or supplement what we call practical sources - lights that are seen within the frame of the shot - such as a lamp.

Here I’ve set the colour temperature of the practical in the background to 3,200K, then to increase its intensity and direction I’ve added a film light rigged above as a backlight. I’ve set it to the same colour temperature as the practical - supplementing the warm light that already naturally exists in the frame.

A big trend now is for DPs to build practicals into sets or locations - which are usually RGB so that they can be adjusted to different hues to add accents of colour to the lighting. Those practicals could be Astera Titan tubes posing as fluorescent bulbs, RGB strips built into signage, or yellowy, warm kitchen lights.

As well as adding coloured practical sources that can be seen in shots, another idea linked to this is to create motivated colour lighting - where the light isn’t necessarily seen in the shot, but the direction, colour and quality of the illumination still feels relatively realistic and motivated - like that light could be there just outside the shot.

One way to do this when shooting an interior is to shine a light through a window - with the imaginary motivation for it being that there is some kind of street lamp outside that window.

Here I’ve used a hard source with a sodium vapour gel outside a window which gets cut into strips of light and shadow by shutters. I’ve then used a cooler fill inside to motivate the dusky blue natural light outside and lift the ambience a bit. I’ve rigged this light on a C-stand so it backlights her in the same direction as the natural light coming from the window. 

So, although we don’t see the exact source of light in the shot, it’s probably within the bounds of reality to imagine that there might be an old sodium vapour street lamp outside this apartment window.

Creating these extra imaginary sources is especially useful in night scenes for lifting the exposure so that there’s enough light to be able to shoot, but there are also other psychological motivations for lighting with colour. 



2 - AGITATE VISUALS

When shooting exteriors or interior scenes that take place during the day, the default colour temperature to light with and set the camera to is around 5,600K. 

If both the colour balance and light source match, you’ll get ‘normal’ looking natural colour - with white surfaces reading as white.

Some filmmakers like to use this look as a base for daylight scenes - but then add smidges of more vivid colours to parts of the frame. Now, I can’t speak to the intention behind these lighting choices, but I have noticed that within certain contexts this can have an agitating effect.

Take this location from The Bear. In different scenes there’s this ever present, just off camera red light - which adds a different hue to the otherwise naturally lit space. It’s subtle, but to me this visually ratchets up the tension just a tad - which suits the ongoing chaos and agitating tone that also comes from the story and direction.

There’s something about using a strong hue in contrast to the otherwise natural lighting, especially when it hits the skin tone of a character, which is slightly off putting and brash.

Uncut Gems is another film that does this - often mixing garish pink and off-green lighting, with more traditional CCT sources. This gives skin quite a sickly, unhealthy tone, which psychologically adds to its already tense, off kilter, chaotic atmosphere.    


3 - DEPTH & SEPARATION

So far we’ve seen that lighting with colour can be done for motivational purposes or to induce a psychological feeling, but it can also be used for more classically photographic or aesthetic reasons.

More often than not, cinematographers like to use lighting to create different planes within the image. This creates an illusion that the 2D image has depth, feels more dimensional and can be used to separate characters from the background to make them better stand out.

They can do this by lighting different areas in the image, such as the foreground and the background, with different amounts of light, or with different colours.

The colours that cinematographers choose to place within the different planes of the image are also quite deliberate. Complementary colours are hues that sit opposite each other on the colour wheel. These tend to be quite visually pleasing to the eye when they are paired together.

This is why you often see films use a warm practical lamp in the background that contrasts with a cooler light on the subject in the foreground.

Not only are these colours complementary, and therefore aesthetically appealing, but they are also placed in different planes or dimensions in the image - cool, warm and cool - increasing the feeling of depth and breaking up the subject from blending into the background.

Some other complementary or often used colour combinations besides cool and warm include: a dark orange and green, red with a lighter blue or cyan, or a pink-purple with a tungsten yellow.
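
Since a complementary pair is simply two hues sitting 180 degrees apart on the colour wheel, it's easy to sanity check a palette with a few lines of code. A small sketch using Python's built-in colour conversion - purely illustrative, as gels and RGB fixtures won't map perfectly onto this kind of maths:

import colorsys

def complementary(r, g, b):
    # Rotate the hue 180 degrees (0.5 on a 0-1 hue scale) to find its complement
    h, s, v = colorsys.rgb_to_hsv(r, g, b)
    return colorsys.hsv_to_rgb((h + 0.5) % 1.0, s, v)

# A warm, tungsten-ish orange...
warm = (1.0, 0.6, 0.2)
# ...lands on a cool blue: the classic warm background / cool foreground pairing
print(complementary(*warm))   # (0.2, 0.6, 1.0)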


4 - EMOTIONAL REALISM

There are many cinematographers, such as Vittorio Storaro, who like to talk about the psychology of lighting with certain colours. While the idea that different colours can be used to evoke specific emotions or themes is a whole other subject on its own, I think it's safe to acknowledge that as a whole colour affects mood and how images are experienced by audiences.

For that reason, cinematographers can sometimes deliberately exaggerate colour outside the bounds of reality so that the world represents how the character feels, rather than how it actually looks.

This is something that DP Marcell Rév referred to as emotional realism when shooting Euphoria.

As an example let's take this bathroom space, which in real life is lit by a rather ugly overhead tungsten bulb, and apply two different lighting looks to it to evoke two different feelings.

I’ll start by rigging the 60C on a polecat, so that we don’t see it in the reflection when shooting. It provides a nice downward, top light onto the subject.

Then I’ll place a second source so that it shines through the window, adding ambience and a different plane of light on the back wall. 

For the first lighter, brighter, more upbeat tone and look, I’ll set the overhead source to a 5,600K daylight temperature, with a layer of diffusion to soften its intensity. I’ll set the outside hard COB light to 5,600K with an added quarter CTS gel to give it a warm, morning glow.

For the second harsher, darker tone, I’ll remove the diff on the 60C to make the downlight harder and set it to a fluorescent green gel look. I’ll then remove the CTS gel on the light outside and instead warm it up to 2,700K so that it takes on a tungsten look.

I’ll then decrease the camera’s exposure and balance the lights through dimming their intensity values to get a look that feels much dingier and far emotionally darker in tone than the first lighting set up.

By just tweaking the colour, quality and intensity of the two sources we can create two looks that put the character in a radically different emotional space.
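
If it helps to keep the two looks straight, here's the same setup written out as a simple settings sketch (the fixture names and values just mirror this demo - treat them as a starting point rather than a recipe):

# Two looks for the same bathroom: an overhead RGB panel (the 60C) and a hard COB outside the window.
looks = {
    "light and upbeat": {
        "overhead_60C": {"cct": 5600, "diffusion": True, "gel": None},
        "window_COB": {"cct": 5600, "gel": "1/4 CTS for a warm morning glow"},
    },
    "harsh and dingy": {
        "overhead_60C": {"cct": None, "diffusion": False, "gel": "fluorescent green"},
        "window_COB": {"cct": 2700, "gel": None},  # tungsten look
        "camera": "drop exposure and dim the sources to rebalance",
    },
}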

5 - STYLISED COLOUR 

Apart from all the above reasons for using colour it can also be used because, well, it just looks kinda cool. 

One of the easiest ways to create a more visually stylised look with lighting is by using different hues. This is often done in short form content like music videos, where realistic lighting can be thrown out of the window and colour is used to elevate the visuals and inject an energy into them - using a stylised colour palette rather than a naturalistic one.

It’s also a favourite party trick of YouTubers. Light the background wall with one RGB colour, then place another colour RGB source that only illuminates the subject in the foreground and just like that…you’re a YouTuber.

The reason this works comes back to the idea of using complementary colour and lighting different planes in the frame to separate the subject from the background.

Gray Kotze

The Filmmaking Pyramid: How To Start Your Career

Is it better for filmmakers to be a specialist or a generalist?

INTRODUCTION

Is it better for filmmakers to be specialists, who focus on developing one niche skill to the absolute max, or is it better to be a generalist whose knowledge is spread out across a diverse range of expertise?

In this video we’ll unpack this question by looking at what I call the film crew pyramid and think about the best way to build the creative and technical side of your career.



THE FILMMAKING PYRAMID

Just like the corporate structure within a company, or the ranking system within the military, the types of jobs on a large industry film set are organised in a hierarchical way.

While companies may produce a product, like fast food, or a service, like financial management, a film production company produces a movie.

At the top of the crew pyramid you have the ‘executive’ positions such as the director and producer who have a large degree of control over how the movie is produced, or are greatly responsible for coordinating the logistics and financing behind making it.

The role of these executives at the top of the pyramid is usually that of a generalist. In other words they have a very broad skill set and knowledge. They lead through delegating multiple tasks to other crew, who fall into different departments, a bit further down the chain.

For example, the director will communicate with the cinematographer to create a visual style, framing and lighting of the film. Or, once it’s been shot, they will work with an offline editor to cut the movie with the pacing, feel and story that is most suitable.

The further down the pyramid you go, the more specialised the role of the crew becomes. 

For example, let’s look at the camera department. It’s headed by the cinematographer, a generalist whose role also includes liaising with many other departments, such as the art department that builds sets and does production design, the lighting department that rigs the illumination and the grips department who execute the camera movement.

Under the DP we have the camera operator. They sit behind the camera, frame shots, expose the image and generally control how the camera is used while filming - under the direction of the cinematographer. 

This role requires a broad knowledge base, experience and technical expertise. For example, they’ll need to understand the best ways to assemble the camera in different situations, how to compose shots, how to best change all the key camera settings, where to position the camera for the best lighting, and the list goes on.

Further down we have the focus puller or 1st AC - whose role starts narrowing and becomes a bit more technical and specialist - rather than generalist. They build the camera into its different forms, coordinate and troubleshoot any problems with the gear, and, as the name suggests, pull the focus so that the subject in the image remains sharp.

The loader, or 2nd AC is an assistant to the focus puller, and performs almost an entirely technical role - doing things like the clapperboard, marking actors and reloading the camera.

Then, right at the bottom there is the camera trainee, who works under the 2nd AC and acts as a general assistant to the camera department through more basic technical tasks like charging batteries or organising the gear in the camera truck.

If we zoom out on this pyramid you can see a trend. The higher up the job is, the more people they manage, and the more of a generalist they have to be - someone who fully understands all aspects of the filming process.

While the further down the pyramid you move, the more your job becomes specialised around performing a few niche technical tasks.

It should also be noted that this structure only really applies to larger long form jobs and industry sets. For solo shooter work, or documentaries, many of these jobs may be combined into one, and fewer crew may be hired who will be expected to perform more tasks in a generalist role - such as a documentary DP who also needs to light and record sound by themselves.


SPECIALIST VS GENERALIST SKILLS

This filmmaking pyramid model was set up to use hierarchy as a way of on the job training - which started crew off by focusing on specific technical tasks, which would later continue to expand into more general knowledge as they moved up the ladder.

For example, a trainee starts off being exposed to film equipment, learning the names of all the gear and charging batteries. They move up to 2nd AC, where they need to be able to quickly locate all that gear, change lenses and learn the correct distance at which to put the clapperboard. These skills transfer up to 1st AC, where they need to use all that gear to build the camera any number of ways, and use their perception of distance to pull focus.

In this way film technicians continue broadening their skillset as they move up, until they occupy the role of cinematographer - which requires a comprehensive knowledge of not only the camera but also lighting, grips, post production and of course a photographic eye for storytelling.

Therefore progression up this pyramid usually requires building more of the technical fundamentals of creating images at the bottom and later acquiring more of an overall creative view on how images can be used and created in different situations to suit the script.   

This old school method of working your way up the ladder isn't the only way to become a cinematographer. Today, there are more young DPs stepping into this position than ever before, who may get there by producing and showcasing really good lower budget work, creating connections with directors, building their skills through film school, using online educational resources, or some combination of the above.



DIFFERENT KINDS OF FILMMAKERS

The idea of being more of a specialist or more of a generalist can also apply to the creative instincts of the filmmaker. For example, some cinematographers like to be known for creating a specific kind of a look - which is specialised and standardised across many projects.

This look could be shaped by things like selecting the same kinds of lenses, choosing to shoot on the same format, employing the same kinds of lighting techniques or framing shots in the same way.

Some directors are also known for creating a specialised feel, tone and look throughout their filmography. Movies by Wes Anderson will feature handmade production design, pastel colour palettes, symmetrical, centralised framing, straight tracking camera moves from a dolly, and very specifically directed performances. 

Whatever the movie is, this specialised brand of filmmaking will be carried through.

On the other hand there are filmmakers who work in more of a generalist way - which is probably a more common way of working, especially for cinematographers. 

What I mean by this is that they acquire a knowledge of cinematography which can be applied to a wide range of situations and shaped into different forms so that the visual style is built slightly differently to create a suitable feel for each unique story and director that they work with. 

Some of their work may be more stylised, some may be more naturalistic.

Having a broad skillset as a DP can be useful as you’ll be able to take on a greater variety of projects and adapt to what the director wants. 

More specialist practitioners, on the other hand, may only get the call from directors who want their movie shot in the particular style associated with that DP. If this is a bold style - for example with lots of colour, contrast, grain and haze - it may also draw more eyes to your work, especially from short form commercial or music video directors who want pieces to have a specific look and feel. This can be especially useful when building a career early on.

CONCLUSION

In today's era of filmmaking there are no rules about how you should build a career as a cinematographer. But perhaps it's useful to keep this pyramid in the back of your mind, to remind yourself of how you need to develop.

In my experience I've found it useful to start accumulating skills in the more niche positions of the camera department by working as a 2nd AC, learning about all the other facets of filmmaking by being exposed to them on many different sets, then broadening my creativity by watching films and deepening my experience by taking any opportunity to shoot that I could.

Regardless of how you go about it, you’ll need to move from inexperienced to experienced, from more of a specialist to more of a generalist and from a technical role to a creative role. All the way growing from a more focused, smaller, specific world, to seeing and crafting the wider, bigger picture.  

Gray Kotze

What Makes IMAX Different

Let's go into the details of Imax by looking at what makes these cameras different from regular cinema cameras, how it affects what the footage looks like and why this high quality format is only used on a small handful of movies.

INTRODUCTION

There are two parts to getting a movie into a cinema.

First, the film needs to be captured with a camera.

And second, it needs to be presented by projecting that image onto a screen.

Imax basically took both of these parts from traditional cinema and supersized them, creating a camera system that captured enormous, top quality, high resolution images, and combined that with a projector that cast the film onto a gigantic sized screen.

So let's dive a bit further into the details of this technical process by looking at what makes Imax cameras different from regular cinema cameras, how it affects the footage and why this high quality format is only used on a small handful of movies. 

IMAX CAMERAS

To understand Imax and what makes this format different, we need to first know how most mainstream movies are captured. Up until about a decade ago, almost all movies were shot and presented using 35mm film. Film has since largely been replaced by digital cameras, but we'll get to that a bit later.

35mm film is a photochemical emulsion that, as the name suggests, has a total width of 35mm and records as it passes vertically through the camera. However, because some of that width has to accommodate the sprocket holes - called perfs - that pull the film through the camera, the usable width of a 35mm frame is around 24.90mm.

Different methods can be used to record 35mm frames at different heights, which I went over in another video, but the tallest possible 35mm frame height that can be shot is 18.70mm.

If we compare this to the Imax frame, with its approximate size of 70x48mm, we can see that it's roughly eight times the area of what traditional cinema cameras capture.

The larger the surface area of a piece of film, the more photographic information it will contain. This means that an Imax image resolves much more detail and has very little visible film grain. Basically, Imax footage will look significantly cleaner and higher quality.

But, how do these cameras record such a big frame?

Instead of using 35mm film, they use a larger gauge 65mm stock. Again, once the extra space for the perfs on the side has been removed, it leaves us with a frame width of around 48.5mm.

If you compare 35mm and 65mm frames on film, you may also notice that the perfs for the 65mm sprockets are on the top and bottom rather than on the left and right. This is because 65mm Imax film runs horizontally through the camera, rather than vertically like 35mm.

Passing the film through the gate horizontally gives Imax the advantage of being able to record a frame width that is larger than that 48.5mm measurement - which now becomes the frame height instead.

Each frame that Imax cameras record is 15 of these little perforations long, which is why this format is sometimes referred to as 15/70.
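
As a rough sanity check on where that roughly 70mm dimension comes from (assuming the commonly quoted 65mm perforation pitch of about 0.187 inches - treat the exact figures as approximate):

# Why a 15-perf frame works out to roughly 70mm along the film's direction of travel.
PERF_PITCH_IN = 0.187             # approximate pitch of one 65mm perforation, in inches
frame_length_mm = 15 * PERF_PITCH_IN * 25.4
print(round(frame_length_mm, 1))  # ~71.2mm per frame, a sliver of which is the frame line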

IMAX CAMERA GEAR

There are only 26 of these incredibly niche Imax cameras in the world. As such, it's a bit tricky to find information about them, but one of the most popular Imax film cameras is the MSM 9802 - a sort of general purpose camera with a magazine that can hold 1,000ft of film.

Then there are also the Imax MKIV, MKIII and MKII bodies, which include a “lightweight” version that holds 500ft of film.

Because the size of the film is so large, Imax has designed special lenses that have enough coverage to be used with these cameras without vignetting. They use Carl Zeiss glass that has been rehoused and designed specially for Imax.

Other third party lenses, such as rehoused Hasselblad medium format lenses from Panavision or their large format Sphero and System 65 lenses can also be used to cover the Imax format.   


DIGITAL IMAX CAMERAS

Since the early 2010s digital cinema cameras have largely overtaken physical film as the way movies are shot.

However, because of its enormous negative size, many have argued that there is still no digital equivalent that can match the extremely high resolution, photographic feel and colour that Imax film delivers - which is estimated to be the digital equivalent of around 18,000 horizontal pixels.


DIGITAL IMAX CAMERA GEAR

Having said that, Imax has moved with the times and approved a list of high end digital cinema cameras, which they consider to be ‘Imax certified’. These include the 6.5K Arri Alexa 65 Imax, the Alexa LF and Mini LF, the Sony Venice, 8K Raptor and Monstro from Red and the Panavision DXL2 - with its rehoused Red sensor.


PROJECTION

Just like the Imax camera, which captured footage by moving the film horizontally, the first Imax projector had to be reconfigured and designed so that the film could pass horizontally across a gate, pause for a tiny fraction of a second as the light from the projector bulb illuminated the image onto the massive Imax screen, and then move on to the next frame.

The area that they projected onto was also much larger than traditional cinema screens - with the average screen size being around 18x24m.


ASPECT RATIOS

15-perf Imax is shot and presented in its tall, native aspect ratio of 1.43:1. This squarer format is a great ratio for using more symmetrical, central compositions, for framing singular characters, and for incorporating more vertical height into the shot.

While the film industry was transitioning to shooting on digital cinema cameras, so too were cinemas changing from film to digital projection. In 2008 Imax introduced their first 2K Xenon digital projector - which they used to grow the number of Imax screens by expanding into regular cinemas, not only the custom built, enormous Imax venues.

Digital projection changed Imax in two important ways: firstly, it changed the aspect ratio for 2K digitally projected Imax movies to a wider 1.90 frame - which chopped off the top and the bottom of the original Imax frame. Secondly, it greatly reduced the resolution of the projection from the estimated 12K to 18K pixels in the Imax film negative to a 2K file.

In 2014 a 4K digital Imax projector was also introduced. This higher resolution projector used lasers as a light source to project an image twice as bright with a better contrast ratio than the 2K projectors. 

The increased power of this projector allowed it to show both the 1.43:1 full Imax negative on larger screens, or the wider 1.90 frame on smaller screens.

This means that some movies can now be distributed to both 2K Imax venues in 1.90:1 and some Imax film projectors in 1.43:1. 

Filmmakers need to keep all of these formats in mind when lining up their shots, to find a frame that works well in a wide aspect ratio as well as a tall one. Usually this means that the original Imax negative will include a lot of headroom for characters, which can then get cropped for the wider format.  
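
As a rough sense of how much picture that crop costs (simple ratio arithmetic, nothing more):

# Cropping the native 1.43:1 frame to 1.90:1 at the same width keeps 1.43/1.90 of the height.
native, widescreen = 1.43, 1.90
height_kept = native / widescreen
print(f"height retained: {height_kept:.0%}")      # ~75%
print(f"height cropped:  {1 - height_kept:.0%}")  # ~25% lost off the top and bottom combined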


VISUAL CHARACTERISTICS

So, what is it that makes movies shot on Imax feel different from regular 35mm film or the equivalent Super 35 digital cinema cameras - other than highly resolved detail and clean shots with minimal specks of film grain or noise?

Because the size of the negative that you are shooting on is so large, it produces a different field of view. In other words if you use the same lens on a Super 35 camera and an Imax camera, the Imax footage will look a lot wider, while the Super 35 footage will look more ‘cropped in’.

Cinematographers will often compensate for this extra width by using longer focal length lenses. So, where they would normally use something like a 21mm lens on a Super 35 camera to get a wide field of view, they’d have to go to a 50mm lens on Imax to get a similar wide perspective.

Using longer focal length lenses means that the depth of field will be shallower. 

So, practically, this means that Imax cameras can film wide shots on longer lenses, and achieve a shallow depth of field - where the background is out of focus. 

Whereas Super 35 cameras would use wider lenses to shoot a similar perspective which renders much more of the shot sharply in focus.
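
If you want to put numbers on that 'same lens, wider view' effect, the horizontal angle of view depends only on the focal length and the width of the recorded frame. A simplified thin-lens sketch (the frame widths are the approximate figures from earlier, and the exact 'equivalent' focal length depends on whether you match width, height or diagonal):

import math

# Horizontal angle of view for a focal length and capture width, both in millimetres.
def hfov_degrees(focal_mm, frame_width_mm):
    return math.degrees(2 * math.atan(frame_width_mm / (2 * focal_mm)))

SUPER35_WIDTH = 24.9   # approximate usable 35mm / Super 35 width
IMAX_WIDTH = 70.0      # approximate 15/70 frame width (the film runs horizontally)

print(round(hfov_degrees(21, SUPER35_WIDTH)))  # ~61 degrees on Super 35
print(round(hfov_degrees(21, IMAX_WIDTH)))     # ~118 degrees - the same lens is far wider on Imax
print(round(hfov_degrees(50, IMAX_WIDTH)))     # ~70 degrees - back in the same ballpark as the 21mm above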


DRAWBACKS

Imax does come with some notable downsides. The first and most obvious drawback is cost. Imax cameras are incredibly niche which makes them a bit expensive to rent. But, more importantly, the price of buying and processing the amount of 65mm film required to shoot on Imax is incredibly high. 

This is because 15 perforations per frame moving through the gate 24 times every second works out to around 337ft of 65mm film for every minute that is shot.

If we do a super rough calculation: 400ft of 65mm film costs $619, and developing and scanning one minute of film can cost somewhere around $490, so you're probably looking at paying somewhere close to $1,000 for each minute of film that is shot. Of course there'll be bulk discounts and whatnot for features but, point is, it's not going to be cheap.
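
Written out, that back-of-the-envelope arithmetic looks something like this (the stock and lab prices are just the rough figures quoted above and will drift over time):

# Film consumption: 15 perfs per frame at roughly 0.187 inches per perf, 24 frames per second.
PERF_PITCH_IN = 0.187
feet_per_minute = (15 * PERF_PITCH_IN * 24 * 60) / 12
print(round(feet_per_minute))                      # ~337 ft of 65mm film per minute

# Cost sketch using the rough figures above.
stock_per_minute = (619 / 400) * feet_per_minute   # ~$521 of raw stock per minute
lab_per_minute = 490                               # developing and scanning, per minute shot
print(round(stock_per_minute + lab_per_minute))    # ~$1,011 - close to $1,000 per minute shot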

Another negative is that because 15-perf film needs to run through the camera so much quicker than regular 4-perf 35mm, Imax cameras are incredibly noisy when shooting. This makes it difficult, if not impossible, to record clean sync sound without hearing the whir of the film running.

Finally, they are incredibly bulky, heavy and cumbersome to operate. They were designed to be operated from tripods or on a remote head on a helicopter - certainly not handheld.

These limitations meant that for many years Imax was only used to shoot exhibition-style films, like short documentaries with aerial footage and wides of landscapes.

Imax footage was later incorporated into feature films in key action sequences, while the majority of the film’s dialogue sequences were filmed using more traditional 35mm camera systems.

However in recent years, filmmakers like Christopher Nolan who have access to large budgets have pushed hard for the format to be used to shoot entire narrative feature films. This has involved finding solutions to some of the technical challenges posed by these bulky, noisy, expensive machines.

Recent news has also come to light that Imax will release a new, redesigned version of the Imax camera, based on feedback from filmmakers, in 2024. 

This may make the format even more usable and easy to work with as a feature film making tool. So, it seems that as long as cinema releases remain a thing and as long as there are directors and audiences that appreciate this super high quality format then Imax should still be around for quite some time.

Gray Kotze

Tips For Shooting A Cinematic Documentary

In this video I’ll break down the importance of creating a coherent form and visual style for your film, and go over some technical tips you can use to do so.

INTRODUCTION

Documentaries come in all shapes and sizes. Some use re-enactments, others use interviews, while some films only stick to using observational footage. In this video I’ll break down the importance of creating a coherent form and visual style for your film, and go over some technical tips you can use to do so.


FORM

The visually strongest documentaries have a coherent style and overarching vision to them. Basically, choosing a visual approach and sticking to it for the entire film: whether that form relies on a mixed medium approach with archival footage, lets interviews play a prominent role, or means shooting in a completely observational style. 

Going a bit deeper, within that overall form, you should think about how you want to visually present your footage and try to maintain a coherence to this look throughout the film. For example, if you're shooting interviews try to use the same framing techniques for all the interviews throughout, or if you want a steady, observational feel to scenes, then commit to filming only from a tripod whenever realistically possible.

You want the form and style of the documentary to be coherent and enveloping, not stylistically fractured and distracting.

When it comes to assembling the puzzle that is documentary, I’d say that there are 4 main pieces you can select - 3 of which you have visual control over. Number one, the interview, number two, B-Roll, number three, what I call ‘scenes’ and number four, archive - older footage or photos which have already been shot by other people, which is out of your control.

Some documentaries include all of these pieces, while some only include one - such as archive, or footage from observational scenes.

I like to think about footage other than interviews in terms of being either B-Roll or ‘scenes’. B-Roll is footage that usually doesn’t include dialogue between characters, and is composed of shots that create a mood or convey information. For example establishing shots of a location, more textural or atmospheric shots of objects, or undirected shots of subjects that are used to create a tone.

When I talk about 'scenes' this refers to sequences where there is either a dialogue exchange between characters, or a series of continuous shots of a subject is filmed that shows them performing some kind of action - basically building out a mini narrative sequence of shots, rather than just filming more stand-alone, tonal shots.

INTERVIEW STYLES 

Interviews are different as they provide direct, personal testimony from the subject that can be a great tool for conveying information, their thoughts and opinions or creating a narrative arc.

Let’s go over some tips for visually setting up an interview.

The first two decisions you want to make are how you frame the shot, and where the character’s eyeline will be. Whatever choice is made, I’d recommend sticking to the same format for all the sit down interviews in the film to preserve that overarching visual style we discussed earlier.


OFF-CAMERA INTERVIEW

The most traditional way to shoot an interview is by framing the subject in a medium shot or close up - where the bottom of the frame sits around the waist or chest of the character and the top of the frame includes a little bit of headroom. Usually the character will be placed either a bit more left of frame, or right of frame - which opens up their body a little bit and creates some empty, negative space on one side.

The interviewer will then sit next to the camera, with their head at around the same height as the lens. They will usually position themselves on the same side as the negative space. So if a character is placed on the left of frame, with more empty space on the right of frame then the interviewer will sit camera right so that the eyeline of the subject - where they look when they talk - will be orientated towards the open space, which makes the frame feel a bit more comfortable and natural.

This is called an ‘off-camera’ eyeline. A practical reason to shoot interviews like this is because the subject can talk to the interviewer face to face in a more conversational manner. 

An off-camera eyeline can also be combined with a host of different framing ideas, such as placing the character more in the centre of the shot, using a tighter shot size or a wider composition, all the way to shooting the interview from a more extreme profile angle.

Again, whatever frame you decide to go with it’s usually best to be consistent, rather than framing each interview in the film using a different angle and shot size. 

An alternative to this is to use a ‘to-camera’ eyeline - where the subject’s gaze is fixed directly on the lens, looking straight at the audience as they talk. Because they are looking the viewer in the eye, this technique may create more of an intimate, personal feeling than a traditional, off-camera interview.

However, it comes with a practical downside. Trying to get a subject to look straight into the lens when they speak can be quite difficult. Often their eyeline will naturally drift off-camera to the interviewer who is asking the questions.

But, there is a solution to this which I’ve used. It’s based on a technique famously created by documentarian Errol Morris.

TO-CAMERA INTERVIEW

The Interrotron setup uses a teleprompter that is set up in front of the camera's lens. This would usually project text onto a piece of angled glass so that the person in front of the camera, like a news presenter, can read from it. However what we want to do is swap out the text with the face of the interviewer.

There are a few ways of doing this, but the setup I use involves wirelessly transmitting a video feed over wifi - using something like a Hollyland Mars wireless transmitter - from a low cost DSLR which is pointed at the interviewer, to an iPad, which is placed at the base of the teleprompter - that then gets projected onto the teleprompter screen.

I’ll then run a wired feed with an SDI cable from the camera that is actually shooting the interview to a monitor which I can place in front of the interviewer. That way the subject can see the interviewer on the teleprompter - which will make them look straight into the lens - and the interviewer can see the subject on a monitor.

This allows them to have a natural conversation, while the subject maintains a to-camera eyeline - kind of like how you would on a video conference call.  

The straight-on directness of to-camera eyelines lends itself to symmetry and central framing. Therefore the subject is usually placed in the centre of the shot. Here’s an example of a tight, personal to-camera eyeline interview that I shot for director Jessie Zinn for a short doccie called Faces Of Fortune.

SHOOTING A SCENE

Now that you’ve decided on an interview format, and shot the character’s testimonial, you’ll probably want to fill it in with some of those observational ‘scenes’ or B-Roll that I talked about earlier.

There are many ways to do this. A lot of the approach to filming scenes will be dependent on the subject matter of what you’re filming. If you’re shooting in a war zone then obviously you’ll just have to run and gun and grab whatever footage you can. However, if a more controlled shooting environment is possible here’s how I tend to approach it.

Usually the director will want to give the subject or subjects some kind of action to perform - something which they would naturally do in real life. This could be anything from having a conversation with their friends, trying on clothing, to taking care and interacting with an animal.

Once I know what they will be doing I like to ‘block’ the scene. I use this term very loosely to mean that we’ll work out a space where the action will take place. 

I’ll then create a sort of mental frame for the wider shot that we’ll need to film, think about what composition will work well for this, what objects we may want to add or remove from the shot and what I’d like the lighting to do.

The reason I always like to start wide is because then you know the cutoff point for where you can place gear, like a microphone or a light, without it being in the shot. 

You don’t want to start by framing a close up and then move wide, only to find out that you need to stop shooting to move gear or remove an object from the frame that is visually unappealing. What you want is to set things up as much as you can so that once you start rolling the camera you don’t need to stop until the scene is finished playing out.

This means that if you choose to light a space, you want to do so in a way where the light placement will work well for the entire sequence. In other words, lighting the space itself, rather than a specific close up shot.

For this reason I usually like to use as few light sources as possible. For example if it’s a dialogue scene in a bedroom with a single window, I’ll place the fixture outside the window to give the natural light a bit more shape, contrast and consistency.

This frees me up to move quickly within the room without having to avoid gear or stands, and rapidly reframe shots to capture all the angles needed to cut the scene together as the actions play out in real time.

Again, I like to do this by first framing a wide, then as the scene plays out pushing in closer to shoot mids or close ups. Often if it’s a conversational scene, I’ll return to shooting a wide or a two-shot of the characters talking as it progresses, and finally go back to end on the wide as the scene dwindles out - to give the director a quieter, long shot to end the scene on. 

Although it may seem counterintuitive, when you're shooting a conversation it's just as important, if not more important, to get shots of whoever is not speaking as it is to get shots of whoever is talking.

We like to call these ‘reaction shots’. Having these in the edit allows the director to cut to a shot of whoever is not talking, and place it over shots of the character talking when they want to cut up and only use little bits of what they are saying. It’s a good way of hiding what would otherwise be jump cuts.      

Although we’ll always try to get all the shots we need in real time, it is sometimes possible once you finish a scene to go back and ask the subject to repeat a specific action again that the camera may have missed. Usually it helps to film these shots tighter, so that they can be used as an insert without breaking continuity or jump cutting.

Gray Kotze

Cinematography Style: Ben Richardson

Let’s explore how Ben Richardson's career has evolved, all the way from his start in animation, to working on some of the most popular contemporary long form television series by breaking down his philosophy on cinematography and some of the gear he uses to achieve his look.

INTRODUCTION

Ben Richardson’s cinematography has subtly evolved from an organic, looser, handheld look, to a bit of a cleaner, smoother style. However, his photography has also maintained a naturalistic feel to it that makes it seem that what is happening on screen is true to life. 

Let’s explore how his career has evolved, all the way from his start in animation, to working on some of the most popular contemporary long form television series by breaking down his philosophy on cinematography and some of the gear he uses to achieve his look.  


BACKGROUND

Ben Richardson’s first adventure into cinema didn’t start with live action filmmaking, but rather with animation. He moved from England to Prague where he found a close knit creative community, as well as a low cost of living, that allowed him to work and survive outside of the industry system of filmmaking. There he directed and shot the animated short film Seed.

“For me animation was a way to have a kind of control without having to have a huge budget. You could accomplish things with animation kind of with just sheer will power. You know, you’re literally crafting every frame. You don’t need a lot of light. You don’t need a lot of equipment.”

In Prague he met his first major creative partner, director Benh Zeitlin, who was also there looking to make an animated short. They later worked together on the live action short film Glory At Sea - a stylistic precursor to their first feature Beasts Of The Southern Wild.

There are some notable similarities between the animated short Richardson shot and directed and the first feature that he photographed: both were shot on a textural 16mm film medium, both featured a handmade, cobbled together, art direction aesthetic and a neutral colour palette with greys, blues and browns.


PHILOSOPHY

Richardson’s work on the feature also incorporated another key thread in his work: the ability to visually translate the feeling of a character and their performance to the screen. 

“I just remember having this experience of how she felt on screen and how you could translate the presence of this amazing little performer in the room to the camera. And it was the very subtle sort of movements and the way you framed her and the way you sort of held her that just, sort of, really translated what we could all feel in her physical presence into the screen image.”

Creating images that truly capture the feeling of the actor’s performance is a big focus of his. 

Whether that was with the rough, loose handheld work on Beasts Of The Southern Wild that always chased around the character, keeping the camera low to the ground at her eye level to better empathise with her perspective of the world, or whether it was by using more smoothly operated camera work and long telephoto lenses to compress more hardened, calculated characters against the landscapes and make them looming and ever present in Yellowstone.

Whatever cinematic perspective he uses, his approach to photography is never heavy handed or reliant on stylised visual tricks.

A technique I noticed he often uses when framing is incorporating foreground elements into the shot. So, instead of shooting a ‘clean’ shot without any elements placed between the subject and the camera, he’ll often place something in the foreground of the shot while keeping the focus deep on the subject.

To me, this has two functions. For one it creates a bit more depth in the frame and makes images feel less flat and more dimensional. It can also create quite an objective feeling, like the camera and therefore the audience is a silent observer, crouching in the grass or sitting across the room watching a scene play out. 

Richardson almost always shoots and lights in a way that is very naturalistic and tries to suck the viewer into the story, so that the camera’s presence is unnoticeable and offers as little distraction as possible.

His films often rely on shooting in real locations rather than studios, use natural palettes which aren't overly vibrant, and, most importantly, light spaces in a naturalistic way - one which truly represents how the locations feel in real life, in much the same way as he tries to capture the feeling of actors' performances.

“The references we were looking at in terms of colour palette and the lighting side of cinematography were very naturalistic and I definitely wanted to work with that and I wanted to work with as much natural light as possible. And what that really came down to was just a tremendous amount of tech scouting, making sure that I knew exactly where and when the sun was going to be doing the things we wanted it to be doing.”

This process of observing how the natural light interacts with each shooting location, and using that to schedule the perfect shooting time for each scene is a great way of controlling lighting conditions without having any lights. 

Because these shooting windows were quite narrow on Beasts Of The Southern Wild, any time they ran over schedule it significantly cut into their available shooting time. Such as this complex fishing scene - which they shot in the final 17 minutes of light remaining in the day, covering the action with a single, long take in the wide shot and scrambling together the rest of the coverage with the final 11 minutes of light to spare.


GEAR

When working with a small crew and limited gear, the best way to light is to manipulate the light already present in a space by making small tweaks to balance it. In other words controlling how bright or how dim the light is in certain parts of the frame.

“Even simple things like opening and closing doors, you know, switching out a 25 watt bulb for a 15 watt bulb in the corner and, you know, tiny little shifts to keep balancing this place but all driven by this attempt to be very naturalistic and give the environments room to breathe in the same way that the performances were.”

To preserve continuity and enhance the naturalistic look on other long form projects like Mare Of Easttown, he used larger tungsten light sources to recreate the feeling of sunshine - specifically 10Ks and 20Ks. 

Rather than shining these directly through windows he used CRLS aluminium bounce reflectors. These little panels act like mirrors that you can use to bounce hard light sources - but because they are made from polished aluminium rather than glass they reflect light with a slightly softer quality.

You can also use them to increase the distance between the source and the subject by placing the reflectors outside the window, and then the lamp away from the window pointed at the reflector. This creates less fall off and gives the light the quality of natural sunlight.
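
The 'less fall off' point is really just the inverse square law: the further back the source sits, the smaller the relative change in distance across the room, so the more even the light. A quick sketch with hypothetical distances:

# Relative brightness falls off at roughly 1/distance^2.
def falloff_across(near_m, far_m):
    return 1 - (near_m / far_m) ** 2

# How much the light level drops across one metre of subject depth:
print(f"{falloff_across(2, 3):.0%}")    # source 2m away: ~56% drop across that metre
print(f"{falloff_across(10, 11):.0%}")  # source 10m away: ~17% drop - much closer to how real sunlight behaves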

Because real ambient sunlight indoors comes from the sun bouncing off different objects in a room - it has quite a broad, soft, low intensity. Therefore most of his day-interior lighting tries to mimic this soft, spread out quality.

His day-exteriors, meanwhile, usually just use whatever the natural light is doing - whether that is hard, direct sunshine or a dimmer, diffused cloud cover. Like most DPs, when shooting in hard sunshine he often blocks and positions the characters so that the sun either lights them from the side or from behind as a backlight.

When it comes to cameras, most of his feature films have been shot with a single camera. However, because television series, with their many episodes, usually need to be shot quite quickly with quite a lot of coverage, he’ll often have to bring in multiple cameras. 

Yellowstone used three cameras - Arri Alexa Minis - to cover scenes. Normally when using so many cameras it makes it a bit practically easier to use zoom lenses. This is because the position of one or two cameras may need to shift so that the other cameras won’t see them. Having a zoom lens, rather than a prime, lets the camera operators then adjust the frame by zooming in or out just a little bit to avoid getting the other cameras in shot.

However, Richardson liked to shoot with fixed prime lenses - the Zeiss Ultra Primes. Knowing what prime lens was on each camera helped him structure the scenes more easily and know what coverage he would be getting - even if he wasn’t looking at every shot on a monitor. 

For example, if he put A cam on a 24mm Ultra Prime and B cam on a 135mm lens, he knew he was getting one wide shot and one close up. However if he had both A and B cam on a 24-290mm Angenieux zoom lens, the operators would be able to zoom in and out and pick up different shots at different focal lengths in a way that may have been a little less focused and deliberate.

Richardson often likes to choose longer, telephoto focal length lenses. This is especially noticeable on shows like Yellowstone - where he would shoot close ups of characters on 135mm or even 180mm prime lenses. 

Even wide shots, which are more often shot with wider angle lenses, he likes to shoot on telephoto lenses. This compresses the image - making it feel like the background landscapes are closer to the characters than they actually are. Because using a longer focal length means placing the camera further away from the actors to get a wide shot, it also means that these shots will often incorporate some foreground elements between the camera and the actors - which we mentioned before.    

Unlike some DPs that only like using old vintage lenses with interesting swirling characteristics, anamorphic bokeh or heavy flares, Richardson often prefers spherical glass which is a bit more modern, consistent, sharp, and preserves some shape of the background in the bokeh. These lenses include the Zeiss Ultra Primes, Zeiss Master Primes, or Leica Summilux-Cs.

Again, this look pushes a naturalistic feel rather than an overly stylistic one.

Other than Beasts Of The Southern Wild, which was shot on 16mm film, he almost always uses a Super 35 digital Alexa: like the Mini or the XT.

He’s also used basically the same LUT file over his past 5 projects with the Alexa, to get a consistent, natural colour look that he’s happy with.

Gray Kotze

The 2 Ways To Film Stories

Let’s use this video to explore the different ways that camera, lighting, sound and editing choices can be used to create either a subjective feeling or an objective experience.

INTRODUCTION

In the process of writing, shooting and editing a movie filmmakers are faced with a choice. Do you want to tell the story in a way that is more objective, and detached from the character, or in a way that is more subjective and replicates the character’s point of view or emotions?

Let’s use this video to explore the different ways that camera, lighting, sound and editing choices can be used to create either a subjective feeling or an objective experience.



OBJECTIVE VS SUBJECTIVE FILMMAKING

In cinema, the perspective that you create is incredibly important. It affects how the audience interprets the story, feels about the characters and the kind of emotions that they experience along the way.

Although there are two perspectives - an objective one and a subjective one - there is also lots of middle ground between them, and room for this to shift throughout a movie.

There also isn’t only one technique or way of creating a perspective. It is usually the result of a handful of different methods and ideas that are combined to create a point of view on the story. 

Let’s start with objectivity. This is when filmmaking techniques convey information from an omniscient point of view: as if the shot is being observed by an audience member. There is little emotional emphasis attached to the perspective as the camera passively observes the action from a distance.

Most shots in movies are quite objective. 





CAMERA LANGUAGE

To me, this feeling is best achieved by using a stable camera, operated off a tripod or dolly. If there is camera movement, it is usually because it is motivated by the characters themselves moving.

I have another more comprehensive video on shot sizes, but basically wider shots like long shots or medium shots, place the characters a bit further from the camera and therefore feel a bit more objective - like the camera is observing their actions from a healthy distance away.

Positioning the camera so that it sits at around the same height as the characters, and making sure it is filming at a neutral angle without a tilt, is another way of imposing a feeling of objectivity.

Subjective shots, on the other hand, may try to place the audience in the shoes of the character on screen by using what is called a point-of-view, or POV, shot. This is often achieved by cutting from a shot of the character to a shot that represents what they would be seeing.

Such as this sequence which cuts from a shot of a character whose eyeline is a bit off camera, to a shot of their perspective which mimics the same angle as their gaze.  

Or, this sequence, which mimics a person’s point of view by using a macro lens to shoot the image through a scope - that the character is looking through. This POV technique is also sometimes done by using a black mask or shooting through other devices like binoculars, a keyhole in a door, a telephoto lens of a camera, or to recreate the perspective of looking through night vision goggles. 

Some movies, like Enter The Void, even leaned on this POV technique so heavily that they incorporated the character blinking, by cutting to a couple black frames incredibly quickly to create this illusion. 

An entire genre, of found footage movies, has even been created around the idea of only using shots taken from the literal POV of a character who is filming everything. 

But there are also other camera techniques, other than using literal POV shots, that can be used to create a visual experience which is more subjective and focused on the character. 

Using a tight shot where the camera is placed physically closer to the actor during filming creates a more subjective, closer bond between the audience and the character, than filming in a more objective wide shot.

Shooting from either a low angle, to make characters appear larger than life, or a higher angle that diminishes them, is a way of imposing a feeling on the image. This usually makes the audience experience the shot in a subtly more psychologically subjective way - where the camera makes the character feel stronger or weaker. 

Although it depends a lot on context, I often find that handheld camera movement creates a more subjective language. Perhaps this is due to handheld motion’s connection to the documentary genre - where the character may interact with the camera and is more aware of its presence.

This handheld motion can also be combined with an over-the-shoulder perspective, to make it feel like we are following in the literal footsteps of the character. Again, bringing the audience and the character closer together. 

Another type of camera rig that can be used to create a subjective feel is called a Snorricam - which attaches the camera directly onto the body of the protagonist. This glues the audience onto a close up of the character, so that we move with them in a subjective way.  

A good rule of thumb is that the more stylised and experimental the camera language is, the more it is usually trying to impose a feeling on the audience, get them into the mind of the character, and therefore the more subjective it is. 

While, the more naturalistic the camera’s role is, the more it creates an observational, detached perspective that is objective.    



SOUND

Sound is a filmmaking tool that also plays an important role. Objective shots have sound that presents more of a general recording of the space. We pick up on things like dialogue and any ambient sounds in the location, such as a room recording of broadcast sound from a TV that is on, while excluding special sound effects, or audio that wouldn’t naturally occur from the mix.

Subjective feeling shots will often play with sound, and may share the sound that characters hear with the audience - in a similar way to how the camera uses a POV shot to share the perspective of a character. A good example of this is when the audience hears the same sound that a character is hearing in headphones or on a telephone.

Another, more extreme version of subjectivity is if the plot and filmmaking mechanisms dive into the mind of the character - through what is called a mental subjectivity. 

This could be done by using a voice over spoken by the character, by showing their dreams, using a shallow depth of field to evoke their memories, or even using visual effects or camera trickery to convey hallucinations that they may be experiencing. 

When the language of cinema takes on how the character is experiencing the moment it is subjective.



EDITING

How quickly or how slowly scenes are edited also has an effect on how stories are told. Usually, shots that play out in longer takes or scenes with very few transitions between shots tend to mimic how we experience time in the real world and therefore usually feel more natural and objective.

While rapidly cutting through many shots incredibly quickly has more of a deliberate emotional effect and can create feelings such as anxiety or tension - feelings that emerged artificially due to subjective editing.



LIGHTING

A final cinematographic concept that can be used to tell stories in different ways comes down to how cinematographers use light.

Objective storytellers like to lean into using naturalistic lighting. This is when either only natural ambient light is used, or, more often, when the film lights that are used to add illumination to a shot are placed in a way that is motivated by the light in the shooting location.

A good example of this is how, when shooting interiors, big sources of light with a daylight colour temperature are placed outside windows, shining in, to mimic the direction and quality of sunlight, while also adding a more cinematic contrast and increasing the level of light inside the room.

For more subjective stories, cinematographers can employ more expressionist lighting techniques. This may include using RGB LED lights or fixtures with coloured gels, to add different unnatural hues of light to a scene. This use of vibrant, exaggerated colour elevates stories out of the real world, to create a feeling or a tone that embodies how a character sees the world in that moment - rather than merely presenting the world plainly and objectively as it really is. 

Gray Kotze

How Greta Gerwig Shoots A Film At 3 Budget Levels

Let’s look at three films made by director Greta Gerwig at three wildly different budgets: the no budget Nights And Weekends, the mid-range budget Lady Bird, and the Hollywood blockbuster Barbie - to see how her approach to filmmaking shifted along with her filmmaking resources.

INTRODUCTION

Each filmmaking choice is based on its artistic merits but it can also be determined by budget. This leads to some movies being more loose, improvisational, realist and character focused, while others can create their own worlds, be larger than life and precisely manicured.

To show this let’s look at three films made by director Greta Gerwig at three wildly different budgets: the no budget Nights And Weekends, the mid-range budget Lady Bird, and the Hollywood blockbuster Barbie - to see how her approach to filmmaking shifted along with her filmmaking resources.


NIGHTS AND WEEKENDS - $15,000

Gerwig’s first interaction with director Joe Swanberg came when she had a minor role in his movie called LOL.

This began a creative partnership that resulted in her writing and starring in another of his low budget films. These micro budget, independent movies of the early and mid 2000s were dubbed Mumblecore and were grouped together as part of this new filmmaking movement. 

They are distinguished by their heavy focus on naturalism, characters, dialogue, improvisation and lack of a traditional cinematic style or look.

Swanberg and Gerwig’s next collaboration which they co-wrote, directed and starred in, called Nights and Weekends, fell into this Mumblecore category. 

“It’s about a long distance couple who have some problems and it’s kind of tracked over a year of their relationship. And it was shot over 18 months.”  - Greta Gerwig

Most mainstream movies are heavily reliant on a structured plot, with characters that bring that story to life. Mumblecore films flip that convention by focusing on characters and their dialogue as the core of the film - with the plot being quite minimal and a byproduct of the actions of the characters.

Although this focus on the relationship of characters over plot had been done before in films such as Before Sunrise, normally it was accompanied by a more traditional filmmaking aesthetic. 

Because Nights and Weekends had almost no budget, they disposed of these cinematic conventions and worked extremely loosely - almost like a documentary - and brought on Matthias Grunsky, a documentary and indie cinematographer, to shoot the first half of the movie.

“We didn’t have a script we had, like, an outline of certain scenes that we needed to shoot to tell the story and roughly maybe had an idea of what’s going to happen on the day. 

Greta and Joe would basically go for a coffee. Before they left I would talk with them about roughly where they will be, let’s say the apartment. So I would light that apartment by myself, not knowing exactly what’s going to happen and then they would come back and we would start rolling: very improvised because no one really knew what was going to happen.” - Matthias Grunsky, Cinematographer 

In filmmaking some of your major expenses are normally actors, crew, locations, production design, gear rental and post production. 

Shooting in this super improvised, naturalistic and loose style meant that they could limit the crew to only a cinematographer and their producer who recorded sound, the cast to basically two actors, the locations to borrowed apartments or public spaces and the gear to a single handheld HD video camera.

This low budget filmmaking methodology imposed the naturalistic, documentary feel of the film, and vice versa, this super character focused, realistic style was suited to this filmmaking methodology.

This meant that the audiences for these films were usually very niche, which made it more difficult to sell or make money from these movies. As a result, Nights and Weekends was crafted for a film festival run, to try and garner critical and festival acclaim, rather than a more commercially focused widespread cinema release. 

LADY BIRD - $10 Million

Following this experience, Gerwig shifted her focus to acting for a number of years, before she returned to write and direct her first solo feature - Lady Bird.

“The working title for the movie was Mothers And Daughters. To me that was the core of it and I feel like every scene I wanted to keep going back to that theme even if it literally wasn’t between a mother and a daughter. I wanted to keep exploring that because I think your relationship with your parents is how you first understand love.”  - Greta Gerwig

Lady Bird was written in a far more structured way that conformed to more of a genre (coming-of-age drama) than that of her much looser, more improvised first film. 

The screenplay had much more mainstream appeal, could be better marketed to a cinema going audience, and therefore was able to raise a mid-level indie film budget of $10 million from IAC films.

This budget meant that the movie could be shot and produced with a much stronger visual style than her more haphazard looking first no budget film. 

“The biggest thing I think a director can do is create almost a bubble of magic safety for their actors and for their department heads and hold a perimeter so that they feel safe to play and bring their whole selves and also with the actors that they feel that they own those roles.”  - Greta Gerwig

One of the department heads she brought on to shoot the film was cinematographer Sam Levy, who this time had a few more resources to work with than just improvised dialogue and an HD camera from 2006.

The visual direction or perimeter that she gave Levy to work within was that the movie should feel like a memory. 

To try and evoke this visually, Levy chose to shoot on an Alexa Mini, and, instead of shooting in the sharper, higher fidelity 3.4K ARRIRAW format, he decided to shoot in the lower resolution 2K format.

On top of this, they wanted to use grain and texture to evoke more of an organic feeling of memory. But, instead of just adding a film grain emulation in the colour grade, as is normal, they added digital video noise that they pulled from the Alexa.

They did this by recording a black signal on the Alexa, pulling the noise from that, and overlaying it on top of the image. Levy and his colourist Joseph Bicknel could then amplify or reduce the noise, especially in the midtones - using the same idea as bumping up the gain on older ENG cameras to 12 or 16 dB to create more video noise and texture.
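
To make the idea concrete, here is a rough sketch (in Python with NumPy, not the actual pipeline used on the film) of how noise sampled from a black recording could be blended into an image, with the effect weighted toward the midtones. The function name, the parabolic midtone weighting and the strength value are all illustrative assumptions.

```python
import numpy as np

def overlay_sensor_noise(image, noise_plate, strength=0.05):
    """image and noise_plate: float arrays in 0..1 with the same shape."""
    # Zero-centre the noise so it adds texture without lifting exposure.
    noise = noise_plate - noise_plate.mean()
    # Weight toward midtones: full effect at 0.5 luminance, fading to zero
    # at pure black and pure white (a simple parabolic falloff).
    luma = image.mean(axis=-1, keepdims=True)
    midtone_weight = 1.0 - np.abs(luma - 0.5) * 2.0
    return np.clip(image + strength * midtone_weight * noise, 0.0, 1.0)

# Example with synthetic data standing in for a frame and a noise plate.
frame = np.random.rand(1080, 1920, 3)
black_plate = np.random.normal(0.5, 0.02, size=frame.shape)
textured = overlay_sensor_noise(frame, black_plate, strength=0.08)
```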

Another feeling that Gerwig wanted was that the camera’s perspective shouldn’t be intensely subjective, super tight and handheld, nor should it be extremely wide, objective and detached. She wanted it to feel like the audience was an arm’s length away from the characters. 

They created this sense of distance and a proscenium in the frame by playing out most of the scenes in stable medium shots, close ups that weren’t too tight, or wide shots that weren’t too far away.   

Using these techniques along with a more traditional filmmaking methodology, Lady Bird was presented with a much more cinematic look that elevated the story from realism, rather than the rough, purely naturalistic feeling of Nights And Weekends.   

After the movie was finished, A24 bought the worldwide rights to distribute it. It turned out to be a very successful acquisition: the film was a critical success and made close to $80 million at the box office against its $10 million budget.  


BARBIE - $145 Million

“I wanted it to be obviously dazzling and hilarious and this kind of imaginative cornucopia of crazy Barbiness, but I also felt like actually in that state of, kind of, being in that sugar high is the place where you have the opportunity to be even more emotional because people aren’t really expecting it. So in a way, by design, it allows you to get at things that are harder because you have the cover of a glitter parade.” - Greta Gerwig

Gerwig paired up with the iconic global toy brand Barbie, to write and direct her first studio feature at an enormous blockbuster level budget.

Visually, she drew a lot of influence from Technicolor musicals with their vivid, colourful palettes and hand painted backdrops.

They therefore made the decision to film the portions of the movie set in Barbieland on a studio soundstage, where they built giant Barbie sets and erected enormous painted backdrops - at such a large scale that they reportedly caused an international shortage of pink paint. The size of these practical set builds was a massive financial undertaking. 

Gerwig hired cinematographer Rodrigo Prieto to collaborate on lighting this artificial world and executing these sequences practically.

“I wanted to do practical builds for everything and I also wanted to use, anytime I could use, whatever film technique from, like, 1959 there was. So I had to build the entire thing in miniature, then shoot the miniature, then composite that into the image. I spoke with Rodrigo Prieto, Sarah Greenwood and Jacqueline Durran (the DP, set designer and costume designer). I talked to them for a year before we were even in like prep, prep.” - Greta Gerwig 

The first lighting decision they made was that Barbieland should always be sunny. To create a sense of separation between the characters and the flat painted backgrounds, Prieto decided to craft a beautiful backlit look. His team placed multiple enormous artificial sun sources called SoftSuns in each corner of the set, and had one roaming sun that he could move around on a lift. 

This meant that whichever direction a character was facing he could turn on one of his sun sources that was positioned behind them and give the scene a backlight. 

The spill from these lights was so strong that it would hit the set and bounce back to fill in the actors. The problem was that, because almost the entire set was pink, this bounced fill light became very magenta, which turned the faces of the actors pink. 

So, to kill some of this magenta but still maintain some bounce, he draped the parts of the set that weren’t being photographed in a grey material. This returned a natural bounced light without a tint.

Other visual decisions to make Barbieland more exaggerated and theatrical included using frontal, symmetrical framing that placed Barbie in the literal centre of each shot, using a LUT that emulated the rich, saturated colours of Technicolor, and using wide angle lenses and a large format Alexa 65 to exaggerate and expand the size of the sets in a cartoonish way. 

They contrasted this look with photography in the real world that used a more neutral colour palette, a more subdued and desaturated film negative LUT and longer focal length lenses that compressed the background.  

Barbie completed Greta Gerwig’s evolutionary journey as a director from micro budget realism to blockbuster cinematic fantasy: in the subject matter of the characters and story, in the size of her budget and audience, and in how she went about creating the visual world of each film.

Gray Kotze

3 Basic Camera Settings Every Cinematographer Should Know

Cinematography is all about controlling light. In this video I’ll take you through three of the most important camera settings that will allow you to control the amount of light that is let into the camera: ISO, shutter and aperture. This is sometimes called the exposure triangle.

EXPOSURE TRIANGLE


The first step to being a good cinematographer is to understand how images are created. 

You need two things: a photosensitive surface, like a camera sensor or a piece of film, and light. How much light hits the sensor and how sensitive that sensor is determine how an image looks. This is called exposure.  

If too much light is used the image will be too bright, and if too little light is used then you won’t be able to see anything at all.   

Therefore, cinematography is all about controlling light.

In this video I’ll take you through three of the most important camera settings that will allow you to control the amount of light that is let into the camera: ISO, shutter and aperture. This is sometimes called the exposure triangle.  

If you’ve ever used a consumer camera you may know that there is an auto mode that will adjust all of these settings for you. So, you may be thinking, why not just use that?

I’d say there are two main reasons why changing the settings yourself, called manual exposure, is the preferred method for anyone who wants to take image making seriously.

Number one. When you put the camera in auto mode you are basically giving it permission to constantly change and adjust various settings at will to get you to an evenly exposed image. 

The problem with this is that we will often be able to see the video footage changing from dark to light as it adjusts settings until it finds a level it is happy with. This can be very distracting and breaks the illusion of filmmaking, taking the audience out of the immersion in the story as they say to themselves, “OK, this was obviously shot on a camera.”

Number two. Cinematographers want to be able to precisely control how they want the image to look and not leave that decision up to the whims of a camera. They may want a comedy to look light and bright, a drama to look dark and moody, have a deep depth of field where everything in the image is sharp, or make the background soft, blurry and dreamy.


ISO

ISO, ASA and EI are all terms that refer to the sensitivity of a camera’s sensor or a film stock to light. The higher the ISO number, the more sensitive the sensor is to light and therefore the brighter the image will be. Inversely, the lower the ISO, the less sensitive it is and the darker the image will be. 

Therefore, it’s common to set the camera to a lower ISO when shooting outdoors in bright sunlight, and a higher ISO when shooting in darker conditions at night.

However, higher ISOs come with a tradeoff. The more you increase this setting, the more digital noise - those little dancing grainy specks - you will see.

Most digital cinema camera manufacturers will recommend shooting at a base ISO or a native ISO. Although this differs for each manufacturer, base ISOs usually offer the best trade off between a high quality, low grain image and a good dynamic range.

For example, most Arri cameras have a native ISO of 800. Some cameras, such as the Sony Venice 2 also have what are called dual base ISOs: such as a lower native ISO of 800 and a more sensitive native ISO of 3,200.

So, when setting up your camera to shoot each shot you need to pick an ISO based on two factors. Firstly, consider how much noise or grain you want the image to have, and secondly decide how sensitive you need your camera to be to shoot in the lighting conditions.

And, if you’re still in doubt, it’s probably best to set and leave the camera at whatever base ISO the manufacturer recommends, keeping the noise as clean as possible, and rather adjust other settings.
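
As a quick sanity check of how ISO relates to brightness, here is a tiny Python sketch. It assumes the standard relationship that doubling the ISO adds one stop of brightness, and uses 800 as an example base ISO; the function name is purely illustrative.

```python
import math

def iso_stops_from_base(iso, base_iso=800):
    # Each doubling of ISO is one stop brighter; each halving is one stop darker.
    return math.log2(iso / base_iso)

print(iso_stops_from_base(1600))   # +1.0 stop brighter than the base
print(iso_stops_from_base(400))    # -1.0 stop darker
print(iso_stops_from_base(3200))   # +2.0 stops brighter
```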


SHUTTER

Instead of changing the sensitivity of the sensor, another way of altering exposure is by controlling how much light hits the sensor. This can be done with the shutter: a section that can open up to expose the sensor to light, or close and prevent light from entering the camera. 

This setting determines the duration for which a camera’s sensor is exposed to light. 

In photography this is measured in shutter speed: the time a single frame is exposed to light, by quickly opening then closing a rectangular blind shutter. Usually this is measured in fractions of a second, like 1/100.

Cinema cameras need to shoot multiple frames within one second, for example 24 frames. Therefore, they need a differently designed shutter: one shaped like a circle that can spin. 

More or less light can be let through depending on what angle is cut out of the circle. For example, 180 degrees may be cut out, so that each frame is exposed to light for half the time by the open shutter and not exposed to light when the closed half passes the gate.

Or, a 45 degree angle can be used, where only a narrow sliver of the shutter is open. This means that each frame is exposed to light for 1/8th of the time. Less light means the footage will be darker, and it also makes motion feel more choppy.

A 180 degree shutter angle is most commonly used because it gives the footage a normal feeling of motion that is not too jerky or too fluid.

Modern digital cinema cameras don’t have a physical shutter that opens and closes, but they use the same principle by telling the camera’s sensor when to collect light and when not to. 

If using a camera with a shutter speed setting, a good rule of thumb is to set it to be double the frame rate - to get a normal feeling of motion. So if shooting at 25 frames per second, we will set the shutter speed to 1/50. Then if the camera increases its frame rate to 50 frames per second, we will double that and set our shutter speed to 1/100.
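
Put as a simple formula, the exposure time per frame is the shutter angle divided by 360, divided by the frame rate - and the “double the frame rate” rule is just the 180 degree case. A small Python sketch, with illustrative function names:

```python
def exposure_time(frame_rate, shutter_angle=180.0):
    # Exposure time per frame in seconds: (angle / 360) / frame rate.
    return (shutter_angle / 360.0) / frame_rate

def rule_of_thumb_shutter_speed(frame_rate):
    # 'Double the frame rate' rule: 25 fps -> 1/50, 50 fps -> 1/100.
    return 1.0 / (2 * frame_rate)

print(exposure_time(25, 180))            # 0.02 s, i.e. 1/50
print(exposure_time(24, 45))             # ~0.0052 s, 1/8 of the frame interval
print(rule_of_thumb_shutter_speed(50))   # 0.01 s, i.e. 1/100
```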


APERTURE

Moving even further away from the sensor, we get to our final important camera setting that controls exposure: aperture.

This is the size of the opening at the front of the lens through which light passes to get to the sensor. The size of the hole is controlled by iris blades narrowing or opening and is measured either in f-stops for photography lenses or t-stops for cinema lenses.  

The wider this opening is, the more light is let through and the lower the stop value is. On the other hand, the narrower the iris is, the less light it lets through and the higher the stop number is.

Each stop represents a doubling or halving of the amount of light that hits the sensor. So, to make the image brighter we decrease the stop number, which doubles the light let through - for example going from 4 to 2.8. Or, to make the footage darker and cut the light in half, we increase the stop from 2.8 to 4.
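
Because the amount of light passing through the iris falls off with the square of the stop number, you can check these doublings and halvings with a couple of lines of Python (a sketch; the function name is made up for illustration):

```python
def light_ratio(from_stop, to_stop):
    # Light reaching the sensor is proportional to 1 / stop^2,
    # so the ratio between two settings is (from_stop / to_stop) squared.
    return (from_stop / to_stop) ** 2

print(light_ratio(4, 2.8))   # ~2.0 -> opening up one stop roughly doubles the light
print(light_ratio(2.8, 4))   # ~0.5 -> closing down one stop halves it
print(light_ratio(5.6, 2))   # ~7.8 -> about three stops, roughly 8x the light
```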

The aperture setting is a very effective way of changing how bright or dark an image is. However, aperture also has another, secondary effect on the image. 

The wider the iris is, and the lower the stop number, the shallower the depth of field will become. In other words a low stop number means that more of the background will be out of focus.

Since cinematographers often like to shoot with a shallow depth of field that makes the background look soft and focuses on a single subject, the camera is often set to a low stop number, such as 2 or 2.8.



THE FINAL SETTING

But, this runs us into a snag. We’ve set the ISO to the base of 800, we’ve set the shutter angle to get normal motion at 180 degrees or 1/50th of a second, and we’ve set our aperture at 2 to get a shallow depth of field. However, the image is still too bright.

How do we make it darker and get it to the level of exposure that we want without changing our other settings? We can use what are called ND, or neutral density, filters. 

These are pieces of tinted glass which are either placed in front of the lens, or inside the camera behind the lens, that decrease how much light is let into the camera without changing the colour of the image. 

Adding a stronger ND filter will take away more light, while a weaker filter will take away less light. Each stop of light taken away is represented by a density value of 0.3 - so an ND 0.3 cuts one stop, an ND 0.6 cuts two stops and an ND 0.9 cuts three stops.

So, most cinematographers will set the camera with a consistent ISO (to keep the same grain), shutter (to keep the same feeling of motion) and aperture (to keep a shallow depth of field) for most shots, then use ND filters to cut down on how much light enters the camera.
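
As a worked example of that workflow, here is a short Python sketch that converts ND density to stops and transmission, and suggests which ND to add when the locked-off settings leave the image too bright. The function names and the rounding to common filter strengths are assumptions for illustration.

```python
def nd_stops(density):
    # Each 0.3 of optical density cuts one stop of light.
    return density / 0.3

def nd_transmission(density):
    # Fraction of light an ND lets through: 10 to the power of -density.
    return 10 ** -density

def nd_to_add(stops_overexposed):
    # e.g. image is 2 stops too bright with ISO, shutter and aperture locked -> ND 0.6.
    return round(stops_overexposed * 0.3, 1)

print(nd_stops(0.9))          # 3.0 stops
print(nd_transmission(0.9))   # ~0.126, roughly 1/8 of the light
print(nd_to_add(2))           # 0.6
```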

Gray Kotze

Why Movies DON'T Need To Be Shot With Expensive Cameras

In the wake of the announcement that The Creator was shot on a Sony FX3 let's examine how and why it is now possible to shoot high budget films on prosumer cameras.

A BRIEF HISTORY OF FILMMAKING TECHNOLOGY

Throughout the history of cinema, filmmaking has been inextricably linked to technology. 

Each technical innovation in film gear has unlocked new opportunities for how stories can be told. Early silent films used intertitles in editing as a way of conveying speech without sound. The introduction of sync sound brought spoken dialogue to the screen. Colour photography elevated cinema out of a more impressionistic black and white into a more realistic world. Smaller handheld cameras allowed filmmakers to move shots organically, like a fly on the wall.

With each and every advance in technology came a new stylistic application.  

One of the largest technical leaps forward came when digital cinema cameras, for the most part, replaced cameras that used 35mm film. Before this, dating back to the start of cinema, movies had been shot on celluloid. 

The widespread adoption of digital cinema cameras on industry film sets happened in the early 2010s after Arri introduced the original Alexa. 

This camera was groundbreaking as it offered filmmakers a digital medium that had great colour reproduction and dynamic range - which, for the first time, rivalled the quality and look of 35mm film. But let’s backtrack a little bit.


GAP BETWEEN PROFESSIONAL & CONSUMER CAMERAS

Before digital cinema cameras, there was an enormous gap between the quality of the image recorded by consumer video cameras onto tape that the average person could buy in a store and 35mm film captured on professional industry cameras. However, renting these high end cameras, and buying and processing 35mm film came at a significant price. 

It costs a large amount of money to make a high end movie. You have to pay a big premium for well known actors, hire a large technical crew for weeks or even months at a time, and pay for the construction of large sets. These costs are usually greater than the cost of the gear rental. 

Therefore, although it was expensive to rent a high end cinema camera, it still made financial sense in the scope of the more expensive production, to try to capture the best image possible that could be sold and presented to an audience as more of a spectacle.

The thing is, as time went on and digital cinema cameras replaced film cameras, that enormous gap in image quality between cheap consumer video cameras and expensive high end digital cinema cameras got exponentially smaller and smaller.  

This trend followed a prediction made by Gordon Moore in the 1960s, who observed that the number of transistors on a chip - and with it computing power - would roughly double every two years. Not only would this allow equipment that relies on computing power, like digital cinema cameras, to get exponentially better, but it would also get increasingly smaller and cheaper on a relative basis. 

This has meant that in the last few years, manufacturers have been able to produce video cameras aimed at the consumer or prosumer market which are affordable, small, yet produce images that are incredibly high quality.

Whereas before the quality of video from a high end camera was many multiples better than that of consumer cameras, now the difference between them is much more difficult to notice for the average viewer - if they notice at all.

THE CREATOR USING A SONY FX3

In this changing technical landscape, news emerged that I think may go down in history in the same way that previous technologies altered the course of filmmaking. It was announced that The Creator - a high budget, sci-fi, Imax distributed blockbuster - would be shot on a Sony FX3: a prosumer video camera.

“For a long time the difference between shooting on film and what you might call consumer or prosumer, the stuff you might buy in a store is getting narrower and narrower. And that gap is getting indistinguishable to some extent.”

“We shot The Creator on a new camera, it’s a Sony camera, it's called an FX3. Pretty much the whole movie is shot on this.”  - Gareth Edwards, Director of The Creator

Shooting with the FX3 allowed them to work with smaller crews and therefore film in 80 real shooting locations, rather than spending truckloads of money on building large sets in a studio and shooting against green screens.

Then in post they basically took the real location footage and used VFX to paint the sci-fi landscapes that they wanted on top. The director, Gareth Edwards, claims this method of working turned a $300 million film into an $80 million movie. 

Shooting with a prosumer camera isn’t new to Edwards as his first film, Monsters, was shot on a Sony EX3 video camera.  

Because so much can be achieved in the grade and created in post production nowadays, you want the footage to be as malleable as possible. What you need from a camera can sort of be boiled down into three main categories: resolution, dynamic range and the bit depth of the colour.

If we take these specs from the $3,900 full-frame Sony FX3 and the $75,000 Arri Alexa Mini LF, you can see how small the technical gap between them is. The FX3 can record in 4K resolution, while the Mini LF can do 4.5K. In terms of dynamic range, Sony reports 15-plus stops, while Arri claims 14-plus stops. When it comes to bit depth, the FX3 shoots 10-bit 4:2:2 internally in S-Log, whereas the Arri can shoot 12-bit ProRes 4444 XQ in Log C.

While of course the Arri does outperform visually, especially in the colour department, the point remains that the gap between them is pretty slim when comparing a prosumer and a professional camera - and seems to be closing more and more every year. 

Also, when you have millions of dollars to polish the footage in post production on top of that then the difference in quality between the two cameras becomes even more negligible.


ADVANTAGES OF PROSUMER CAMERAS

So, what are some of the advantages of using a smaller prosumer camera like an FX3?

Firstly, the size of the little Sony means that all the grips gear needed to move the camera cinematically, and the supporting accessories, are greatly reduced - in cost, size and crew.

In the BTS, you can see them working with what looks to be a modified Ronin gimbal rig with handles. The operators can use this to adopt a very flexible shooting style, run and throw the camera around, getting stable, smooth movement - while also eliminating the time, manpower and cost of needing a dolly or a Steadicam that would be paired with a high end cinema camera. 

Or, in other shots you can see them using a mini jib arm with the Ronin as a stabilised head. Again, saving hugely on the time, cost and crew needed to run a high budget industry alternative like a Technocrane.

Working with a little camera also downsizes the accessories needed. For example the BTS shows the focus puller using a low cost, little Tilta Nucleus instead of the larger follow focus systems from Arri and Preston that are normally paired with high end cameras.

The FX3 also has the ability to boost the base ISO to 12,800 and shoot in incredibly low light conditions - whereas the Alexa is usually kept at 800 ISO. 

“The Sony FX3 for example: amazing prosumer camera. I’m about to use that on a film that I’m shooting in Thailand. And it’s so ridiculous that it’s so small and it can [record in] such a high ISO for such a small camera.” - Greig Fraser, Co-Cinematographer of The Creator

This high ISO allowed them to work in a very flexible style. For exterior night scenes they could discard the large, high output lighting fixtures that normal cinema cameras require. 

Instead they used battery powered LED carpet lights rigged to a boom pole, that the best boy could use to walk alongside the talent and handhold the source to give enough illumination. 

“The actors had this freedom. If suddenly the shot looked better over here Nancie would suddenly move around with the light and Oren would be talking to Nancie through a little microphone. And we were just really fluid, organic. We’d shoot for like 25 minutes at a time. It was, like, a very liberated experience and it looks like a much bigger film than it cost because of that reason.” - Gareth Edwards, Director of The Creator

SO, WHY DOES HOLLYWOOD STILL USE HIGH END CAMERAS?

Does this spell the end of the Alexa forever and mean that all future Hollywood productions will use the FX3? Well, no, probably not. 

Why are almost all high end movies and series still shot on variations of the Alexa, the Venice and other high end cinema cameras?

For one, many filmmakers and especially cinematographers still like to try and shoot the highest quality image possible - even if the gap between that and a prosumer image is not as big as it used to be.

Secondly, the film industry has been built over many decades with crews, technicians and filmmakers trained to work in a very specific manner. For technicians this means working with production style cameras, grip rigs and high output lighting setups. 

This legacy has also been established on the production side. Studios know what they will get when shooting with an Alexa, and producers are used to budgeting for gear in terms of an Alexa rental fee.

The entire industry is set up around working with high end production cameras and I don’t think that this will change any time soon. 

However, what we may see is that features from these prosumer cameras - such as their high base ISOs and smaller form factors - filter into the higher end cameras, and that this prosumer gear will increasingly be adopted across lower budget projects. 

Then, eventually, as Moore’s law continues to take effect, we may find that productions will only need to work with a camera that can easily fit into the hand.

Gray Kotze

How To Use A Clapperboard The Right Way

Since I used to work as a 2nd AC for a number of years it was my job to use the slate or clapperboard on professional film sets. Let's go over how to use a clapperboard the right way.

INTRODUCTION

This little piece of equipment is perhaps the most iconic symbol in filmmaking, but it also has a number of pretty important practical purposes.  

Since I actually used to work as a 2nd AC or loader for a number of years, it was my job to use this little guy on set. 

So I thought I’d track down my old slate and provide a bit of an explainer as to what the purpose of it is, what all the information on the board means and go through the ins and outs of the real ways that a slate is used on an industry level set.


WHAT IS A SLATE?

The slate, sticks, or clapperboard are all names for this piece of translucent acrylic attached to a pair of wooden sticks that can go clap.

It has two main uses on a film set: synchronising the sound with the video footage and conveying information. 

While broadcast, documentary or solo shooters may use a camera with a built-in or attached mic that records both sound and video as one single file, on an industry film set the footage is recorded on a cinema camera while the sound is recorded separately on a mixer by a dedicated sound recordist.

A slate uses both a visual cue, the action of the sticks coming together, and an audio cue, the sound of the clap, as a reference marker for synchronising them together. Practically this means that editors can find the moment that the sticks collide, and the moment that the clap is heard, and use that as a starting point for both clips.

Now the video and the sound are in sync.

The second use of a slate is to give the editor information. All of the numbers, letters or even symbols that are written on the acrylic section of the slate in a black whiteboard marker are a way of passing on information to the editor.

Each shot in a movie has a unique series of numbers that identifies it, and places it within the order of the story. Properly labelling each take and having a system to find them is important because there may be hundreds or even thousands of different takes on a production. 

Slate information allows the editor to both know which scene in the screenplay the shot fits into and gives each shot a label so that it can be talked about and identified by the editor, as well as correspond to any notes from the script supervisor or camera report. 

On top of this, the 2nd AC may write additional information on the board, such as to notify the editor that VFX work is intended to be added to the shot or to convey the camera settings that were used. 

There are two types of slates that can be used: a smart slate which has a timecode running on it, and a regular, old clapper board or dummy slate which can only use a clap to sync the sound. The timecode on a smart slate can be synchronised with both the camera and the audio mixer. 

How this works is that the audio recording device will have a kind of digital clock that outputs an exact time in hours, minutes, seconds and frames - called timecode. The sound person will then ‘jam’ the timecode by connecting a timecode cable between the audio recorder and the smart slate. This sets the timecode on the slate to be exactly the same - frame perfectly - as the time on the audio recorder. The same is then done to the cinema camera so that all devices are running the same timecode as the audio recorder.

This means that every time the smart slate is opened on camera it displays a running timecode that can be synchronised perfectly to the same timecode as the audio recorder - thus syncing the footage - with the physical clap being a backup way of syncing the sound.  
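
Conceptually, syncing by timecode is just arithmetic: each hours:minutes:seconds:frames value maps to an absolute frame count, and the difference between the camera’s and the recorder’s values becomes the offset to slip one clip by. A minimal Python sketch, assuming non-drop-frame timecode and an illustrative 24 fps project:

```python
def timecode_to_frames(tc, fps=24):
    # Convert an 'HH:MM:SS:FF' timecode string to an absolute frame count
    # (non-drop-frame), assuming both devices were jammed to the same clock.
    hours, minutes, seconds, frames = (int(x) for x in tc.split(":"))
    return ((hours * 60 + minutes) * 60 + seconds) * fps + frames

camera_start = timecode_to_frames("14:32:10:08")
audio_start = timecode_to_frames("14:31:55:00")
print(camera_start - audio_start)   # 368 frames to slip the audio so both line up
```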

SLATE INFORMATION

When a screenplay is written, each scene will be given a number. The first scene is 1, the second scene is 2, etc. However, during the process of shooting, scripts are almost never shot chronologically. 

So, to avoid confusion and to stay on top of what scene is being shot and when, it is labelled on the board with a scene number. 

When the editor gets the footage they can then group all of the shots with the same scene number into the same bin.

Next we’ll move on to the slate number. Most scenes that are shot will use multiple camera angles and lenses to cover the action. Every time the camera moves onto a new shot size or angle, or switches a lens, this is called a new slate, or slating up.

There are two systems for representing this: the US system and the UK system. The US system is probably the most common and uses letters to represent a slate.

For example, the first board for the wide shot on scene 26 will just say 26, with no letter. When the camera moves onto the second shot, a close up, the board will slate up to 26A, then when a third shot is filmed, the board will slate up again to 26B.

When speaking about slates we use the NATO phonetic alphabet: 26 alfa, 26 bravo, 26 charlie. Although sometimes 2nd ACs can also come up with creative names that start with the first letter of the slate.

You also usually skip the letters I and O, so that they aren’t confused for a one and a zero. 
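
A throwaway Python sketch of the US lettering convention described above - the first setup with no letter, then A, B, C and so on, skipping I and O. The helper is hypothetical, not a real industry tool:

```python
import string

def slate_labels(scene, num_setups):
    # First setup is just the scene number; later setups get letters,
    # skipping I and O so they aren't mistaken for 1 and 0.
    letters = [c for c in string.ascii_uppercase if c not in ("I", "O")]
    return [str(scene)] + [f"{scene}{letters[i]}" for i in range(num_setups - 1)]

print(slate_labels(26, 4))        # ['26', '26A', '26B', '26C']
print(slate_labels(26, 10)[-1])   # '26J' - the setup after 26H skips I
```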

If shooting a VFX shot, this can either be written on the board or a V can be added before the scene number. The same can be done with an X to show that this was shot by a second unit, or with an R to indicate a scene that is being reshot. Different productions and different script supervisors may also use different letters or methods for communicating these cases. 

The UK system uses numbers instead of letters. So the first shot of the production will be slate one, the next shot will be slate two, and this slate count keeps going up with each new shot setup or lens change until the production ends. 

This means that you don’t always have to announce the scene number, only the slate number - although the scene number will usually also be written on the board. 

Then, for each slate setup, usually multiple takes will also be recorded. This starts at one for the first take and keeps going up. Then when you change to a new slate you start over with take one again. So, an example of a typical board may read 26A, take three.

At the front we also have the roll number. This comprises a letter followed by a series of numbers. Each letter represents which camera shot the footage. If there is only one camera it will be called A cam. The letter goes up for every extra camera that is used. So if it’s a multicam shoot with one main camera, one Steadicam and one action camera, the main camera will be A cam, the Steadicam will be B cam, and the action camera will be C cam.

Then each card that goes into this camera, shoots and gets transferred to hard drive will be labelled with a series of numbers. So the first card shot by A cam will be A001, or the 112th card shot by B cam will be B112. If shooting on film, the same principle applies to each new roll of film that is used. So the 8th roll of film shot by A cam will be A008.

All of this information about the roll number, scene, slate and take will be recorded by the script supervisor along with any extra notes, such as what takes are good, what lenses were used for each shot, or what takes may have had lapses in continuity. 

The board is also labelled with fixed information like the production company, title of the production, director’s name, cinematographer’s name and the date. Because this information doesn’t change much, it is usually either done using a printed label from a label maker, or handwritten with a permanent marker on tape.  

HOW TO USE A SLATE?

When 2nd ACs do a slate they need to be as quick and as accurate as possible. 

If you put the board into shot too early, it will block the framing and view of the camera operator. If you leave the board in shot too long while the camera is rolling then you are wasting valuable card recording time or film stock that is running through the camera.

What 2nd ACs do is carefully listen to the instructions of the 1st assistant director.

They will call ‘roll sound’, to which the sound recordist will press the red button and say ‘sound speed’. The instruction ‘roll sound’ is a good cue that it’s time for the 2nd AC to insert the board into the frame. 

This gives the 1st AC a moment to pull the focus to the board so that it is sharp and can be read by the editor. Next the AD will say ‘roll camera’. 

Once the 1st AC is ready they will press record on the camera and say ‘speed’ or ‘rolling’. Because the board is already in frame and sharply in focus when the camera rolls it means that the first frame that the editor will see for each take will be the slate. This helps the editor immediately identify the shot number without needing to search through the takes for it.

Once the 2nd AC hears the focus puller say ‘speed’ they can announce the board. If using the US system this means saying the scene and slate letter followed by the take: for example, ‘26 bravo take three’. 

The focus pullers I’ve worked for like to then say ‘mark’ or ‘mark it’ to confirm the camera is rolling, the board is sharp and it can be clapped. Then, as soon as they’ve clapped the board, the 2nd AC will get out of the shot and duck for cover as quickly as possible.

Running through the whole process, it’ll sound like this: 

‘Roll sound.’

‘Sound speed’

‘Roll camera’

‘Rolling’

‘26 alfa take three’

‘Mark’

*clap*

In most cases you want the clap to be quite loud so that it is clearly picked up by the boom and lapel microphones, but not to be deafening. Loaders also modify the strength of the clap depending on how close it is to the actors. 

If shooting a tight close up and the board needs to be right in front of the actor’s face, it’s best to notify the boom operator ahead of rolling that you’ll do a soft clap. They will then get the mic close to the board and you can do a nice gentle clap in front of the actor’s face that doesn’t put them off. 

For tight shots, some loaders carry a cute little insert slate which is small enough to be fully read in a close up.

If they fumble the clap or it is too soft to be picked up, the 2nd AC will then loudly announce ‘second clap’ and hit it again.

When positioning the slate in frame, you want it to be as readable as possible - not too small and far away or too large and close to the camera. The rule of thumb when shooting with a Super35 sensor is that the focal length of the lens should be equivalent to how many feet away from the camera the slate is. So, if using a 50mm lens, the board should be 5 feet away. If using a 100mm lens the board should be 10 feet away.

If shooting with anamorphic lenses or with large format cameras that have a wider field of view, the board can be placed a bit closer to the camera than this rule. Good loaders are able to place the board in frame without looking at a monitor based on knowing the focal length of the lens alone.      
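
The Super35 rule of thumb above boils down to dividing the focal length by ten to get a distance in feet, with anamorphic or large format allowing the board a little closer. A tiny Python sketch; the 20 percent reduction for wider fields of view is my own illustrative assumption, not a fixed rule:

```python
def slate_distance_feet(focal_length_mm, wider_field_of_view=False):
    # Super35 rule of thumb: slate distance in feet roughly equals
    # the focal length in mm divided by 10 (50mm -> ~5 ft, 100mm -> ~10 ft).
    distance = focal_length_mm / 10.0
    # Assumed ~20% closer for anamorphic / large format (illustrative only).
    return distance * 0.8 if wider_field_of_view else distance

print(slate_distance_feet(50))          # 5.0 ft
print(slate_distance_feet(100))         # 10.0 ft
print(slate_distance_feet(100, True))   # 8.0 ft
```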

There are three different sound variations that can occur, each of which is signified in a different way with the clapperboard. The most common is sync sound, where the sound recorded is intended to be synchronised with the recorded footage. To convey sync sound, the clapperboard must always enter the shot with the sticks open, showing the editor that it’s meant to be clapped. 

This is often incorrectly done in pop culture references where the board starts closed, then opens in shot and claps.

Wild sound means that sound will be recorded for the take, however this sound does not have to be synchronised with the footage. This is sometimes done when recording ambient noise or sound effects.

Loaders will write ‘wild’ on the board and this time either enter the frame with the board closed, or with their fingers holding the board between the sticks - this will allow the timecode on a smart slate to run, but indicates that the board will not be clapped.  

Then there is MOS - which signifies that no sound will be rolled on this take. Again, a big MOS will be written on the board and the board will enter and exit frame closed - without a clap. 

If multiple takes in a row will be done within a single continuous roll of the camera, then the loader may write ‘series’ or put an arrow next to the take number.

Next we have the end board. This is used either when the 2nd AC is not quick enough to fetch their board and complete a front slate after ‘roll sound’ has been called, or when no front board has been requested, such as when working with animals or children that may be spooked by a loud clap. 

In this case, after ‘roll sound’ is announced, the 2nd AC will say ‘end board’ loudly at the beginning of the take. The scene will then play out in full. Just as ‘cut’ is called to end the take, the 2nd AC needs to quickly and loudly again announce ‘end board’. They will then quickly put the board in front of the camera with the sticks open, but in an upside down position, they will announce the board, ‘26 alfa take three, end board’, give it a clap, then spin the board so that the editor can read it. 

Putting a board in upside down means that the board signifies whatever was previously shot. For example, if no slate was done on a take, then the board can be placed in frame upside down with the sticks closed, a new clip can be rolled, then the board is spun to the correct orientation. This signifies that this board is the label for the previous take.

If shooting with multiple cameras that can all be synced with one board, then the roll number of each camera should be written on the slate, such as A008 and B007. In this case the loader will say ‘alfa, bravo common mark’ before clapping.

If multiple cameras are being used and there is a 2nd AC for each camera, then each loader will announce their camera, in alphabetical order before clapping. For example, ‘26 apple take three alfa mark’, ‘bravo mark’.  

Finally, if a director decides that they only want to record a small portion of the scene, such as a specific line, then P/U will be added next to the take which stands for pick up. This will be announced as ‘26 alfa take three pick up’.
