Gray Kotze

Cinematography Style: Rachel Morrison

Let’s dive a bit deeper into Rachel Morrison's cinematography by looking at how she got to where she is in her career, her philosophy on filmmaking and some of the technical film gear that she uses.

INTRODUCTION

What makes Rachel Morrison’s cinematography special, both in her work on indie projects and on high-budget studio movies, is how she brings audiences closer to the truth and emotions of the characters with experiential, subjective photography. This comes from her frequent use of a handheld camera, her taste for naturalistic lighting, and a desire to serve the story with every cinematic choice.

In this video let’s dive a bit deeper into her work by looking at how she got to where she is in her career, her philosophy on filmmaking and some of the technical gear that she uses.  



PHILOSOPHY

“I got out of school and got work right away in documentary filmmaking. But then 9/11 happened and I started to get pulled into reality TV. That was the one thing that was paying the bills, but I could still be behind the camera. And I got to the point when I said I never want to do this again. All I want to do is make narrative films. And, so, I stopped shooting reality and never looked back.” - Rachel Morrison

Part of what inspired her to shoot documentary early on came from her love of period photojournalism by photographers such as Dorothea Lange and Gordon Parks - who were able to tell really strong stories, which effected change, with only a single frame.

Documentary is a great training ground. It teaches cinematographers to make quick decisions, change camera settings and complete technical setups on the fly, find good frames quickly, and work with natural lighting conditions.

Another skill that she picked up from shooting reality was the way that she operated a handheld camera.

“Operating is such an instinctive quality - and to come from documentary and basically learn to dance with the camera in response to somebody else’s emotion - it’s a strangely specific skill set. And that was really crucial I think to my journey as a DP.” - Rachel Morrison

These skills helped her land the DP job on a young director’s first indie movie - Fruitvale Station. Ryan Coogler was looking for a camera person with a background in single-camera documentary filmmaking who could work on the fly and tell a story in a realistic, subjective, experiential way.

The language of this film relied almost exclusively on handheld camera movement and naturalistic lighting, creating a fly-on-the-wall feeling reminiscent of documentary filmmaking.

This is a feature that she has carried through her work. She often operates the camera on the shoulder, moving with characters, or holding frames with a natural handheld looseness and shake, to impart a mix of reality and emotional elevation to scenes. 

Operating the camera with this kind of looseness, rather than with rigid, perfectly composed static frames, gives the images a more organic feeling that brings the emotions of the characters to the forefront - as if we are subjectively observing their reactions, rather than watching technically perfect, rehearsed action on screen.

After earning a Best Cinematography Oscar nomination for Mudbound and moving up to shoot a studio feature - Black Panther - she found herself at a crossroads: the scripts she was being offered were either bigger studio movies that weren’t as narratively interesting or nuanced, or smaller indie movies that weren’t as impactful or as makeable as some of her previous work.

This led her to forge a new path as a director. As in her work as a cinematographer, she sought stories with a strong point of view - stories that informed, had a larger social impact and asked questions of the audience.

“For me, as a filmmaker, it’s important not only to entertain but also to inform and inspire and emote and evoke and, hopefully, to leave the audience with a message of some kind.” - Rachel Morrison



GEAR

Much of her work favours quite a filmic look - whether that’s the heavily textured Super 16 of Fruitvale Station, or the tighter-grain 35mm anamorphic look of Seberg.

However, since a fair amount of her work was done at the indie budget level, sometimes the practical and financial benefits of shooting on a digital camera outweighed the look of shooting on film.

Mudbound was one of these projects. Both Morrison and the director Dee Rees wanted to shoot this period movie photochemically, however after running the numbers on the budget they were presented with two choices - either shoot digitally, or shoot on the more expensive film format and lose two shooting days.

Due to the already tight schedule, they opted to shoot on the digital Alexa Mini with C-series anamorphic lenses from Panavision, and ran grading tests to get the footage to a filmic look they were happy with - one reminiscent of her reference, Gordon Parks’ photo exhibit ‘Segregation Story’, with its muted colour palette and rich blacks that weren’t washed out.

“I would have loved to have shot this film on film and actually Dee would have too. But we did tests, and in the tests I’d already started by rating the cameras at 1,280 and at 1,600 ASA just to introduce a little bit of digital noise, and then working with our dailies colourist to add a grain emulation and really try to get him to match the film dailies as best we could.” - Rachel Morrison

This gear combination - an Alexa sensor with Panavision anamorphic glass - is one she has used on multiple projects, like Dope, Mudbound and Cake, to get a widescreen look with anamorphic falloff while keeping the convenience of digital.

Despite often using this combination, her selection of gear is always catered to the story and the budget - so it sometimes changes from project to project. On Fruitvale Station she shot Super 16 film on the Arriflex 416 with Zeiss Ultra 16 lenses - using Kodak 500T for its higher-grain look and better sensitivity for shooting in low light at night.

She even shot the low-budget indie Sound of My Voice on prosumer Canon 7D and 5D DSLR still cameras. When it came to lighting that movie she pulled from her early work on reality TV, where light sources needed to be hidden - rigged out of sight or incorporated as practical fixtures in the shot - so that the camera could point almost anywhere on the set without seeing lighting gear.

When moving from these lower-budget productions up to an enormous MCU production, she found that the progression wasn’t as big a step up as many make it out to be. However, there were some things that made shooting a studio film different.

To show this, let’s take a look at Morrison’s collaboration with Ryan Coogler on two movies, Fruitvale Station and Black Panther. 

For one, to get a large amount of coverage of scenes she had to shoot with a dual-camera setup run by multiple operators, instead of the single-camera, self-operated way of working she used on most indie sets.

While the scale of the gear changed, many of the techniques she used were similar. For example, on Fruitvale Station she wanted to change the palette of the light in the BART station scenes - which in real life were lit by warm overhead fluorescent bulbs. To make the space feel a little less cosy and inviting she had her team swap the bulbs for neutral white fluorescents.

Whereas on Black Panther, when they wanted to light a casino location with a soft, warm overhead source, they fully built the location as a huge set on a soundstage and rigged an enormous amount of LED lights overhead which they softened with diffusion.

Same idea but vastly different scale. Building everything from scratch rather than working with what is already present in a real location. 

She mentions having somewhere around 300 Arri SkyPanels on Black Panther, as opposed to working largely with practical sources and a small lighting and grip package on Fruitvale Station.

Different films also demand different looks. Black Panther used a more vividly saturated comic book palette, while Fruitvale went with a more desaturated, toned-down, textural realism.

They occasionally used more exaggerated dutch angles or rolled the camera on a Movi gimbal on Black Panther for more dynamic, comic book framing, while on Fruitvale they kept the camera handheld for more documentary subjectivity.

CONCLUSION 

Whether she is shooting or directing, Rachel Morrison likes to keep the story at the forefront, carefully selecting screenplays that both entertain and carry a message that aligns with her sensibilities.

She’ll then bring her references, suggestions and visual preferences to the director - who she’ll work with to craft an experiential and often subjective look that is custom tailored to the story. 


A Movie With NO Crew: The Zone Of Interest

Let's break down how Jonathan Glazer used an anti-filmmaking style, that disregarded artificial lighting, fancy camera moves, close ups and even having on set crew members present during shooting - yet still managed to win an Oscar.

OBJECTIVE FILMMAKING

Most movies embrace all the filmmaking tools they have to influence the audience’s emotions - whether that’s using music to signal a triumph, shooting a close-up to put the audience in the head of a character, or removing bright sources of light from the frame to impart a darker mood.

The Zone of Interest is, well, interesting, because it went another route altogether. Director Jonathan Glazer wanted to remove cues that signalled the audience was watching a work of fiction and present the story in the most detached, objective way he possibly could.

This anti-filmmaking style - which disregarded artificial lighting, fancy camera moves, close-ups and even having crew members present on set during shooting - was specifically catered to his vision for telling the story.

“The Zone of Interest is not a story in the sense of a normal film. It’s about a man who has a lovely life with his family. He’s very good at his job. They live in the country with a lovely house with a lovely garden. He also happens to be the commandant of Auschwitz.” - Jonathan Glazer


REFLECTION OF OURSELVES

Part of the creative rationale for presenting the story this way was to neither demonise nor glamorise characters and their actions through the filmmaking language but rather present them with a detached objectivity. 

This placed the ordinary way that we, the audience, live our lives adjacent to the lives of characters in the film that perpetrated heinous crimes. 

It’s easy to demonise, dehumanise and distance ourselves from the bad guys in a movie. It’s far more difficult to wrestle with the idea that atrocities are committed and witnessed by ordinary people who are, on some level, a reflection of ourselves - which is the thesis that The Zone of Interest puts forward.



REALITY TV SETUP

So, how exactly did Jonathan Glazer and his cinematographer Łukasz Żal go about creating this look on a technical level?

“He said we don’t want to make this like a movie set. There was no lighting. There was no film gear on the set. Only, like, cameras.” - Łukasz Żal

Glazer described this style as being like shooting a reality TV show, such as Big Brother, by setting up multiple hidden cameras in a house, then letting the actors roam around and play out scenes as they would - without being limited by blocking, hitting marks or landing in the perfect light. 

Again, the idea being to strip away a traditional cinematic style and instead pursue as pure a feeling of reality as possible. This technique is one that he had already experimented with on his previous movie, Under The Skin, where he placed characters in unscripted situations in a van with hidden cameras to draw out as great a feeling of realism as he could.  

CHOOSING A CAMERA

To be able to set up cameras in this way, but still maintain a high enough level of fidelity and quality to keep the audience immersed, his cinematographer had to take a few factors into consideration when choosing what camera to shoot on.

Firstly, it had to be a production cinema camera with enough dynamic range and colour science to be able to handle shooting in purely natural ambience without balancing the levels with film lighting.

Secondly, he needed to be able to solidly set up a way of getting a live feed from the cameras to a wall of monitors outside the house so that they could watch all the angles of the scenes take place without any interruptions.

And, perhaps most importantly, he needed a camera that had a very small profile, that could be hidden enough to set up multiple angles while being as discreet and invisible to actors as possible.

He landed on the Sony Venice - which has 15 stops of dynamic range and the Rialto extension system, which separates the lens mount and sensor from the camera body via a wired tether, with the body recording the data and supplying power.

If we look at some of the behind-the-scenes photos we can see how small a footprint this camera rig has. The Rialto section of the camera, with only a lens and a small wireless focus motor, could be propped up on apple boxes and hidden behind a desk, or put on a mini lowboy combo stand where it could be positioned and framed with a small Manfrotto ball head.

They could also attach the Rialto rig with the ball head to an L-bracket - which could be rigged from a pipe, or even screwed directly into the wall or the ceiling.

This allowed them to set it up in small spaces or even almost flat up against walls, while remaining unseen by the other cameras.

The rest of the components of the rig, such as the body, battery and MDR focus box were wired to the Rialto section and could be hidden away from the camera.

HIDDEN CAMERAS

One of the challenges of shooting scenes without giving actors definite marks or blocking was knowing where to place the cameras in the best positions to capture the performances they wanted.

This meant positioning them so that they would witness everything from an objective viewpoint, which was usually wide and parallel to the action through frontal, side on or top down angles.

These frames were always locked off, except for a few specifically motivated moments of tracking movement, following the motion of characters, which were filmed with a dolly and track.   

As they were shooting scenes with multiple cameras, they had to try their best to hide them so that they wouldn’t be picked up by the other cameras. This was especially challenging as almost all of the angles were long shots photographed with wide angle lenses that had a very wide field of view of the set.

Cameras were sometimes hidden behind furniture, tucked into the production design, or mounted to the ceiling. In some cases where it was impossible to shoot multiple angles without seeing another camera, VFX cleanups were done in post to digitally remove cameras or gear from shots.

They could do this by playing out the scene with the camera in the frame, then removing the camera physically from the space and shooting a clean plate - which could then be used to remove the camera digitally.

There were also plenty of other moments of really well done VFX, such as adding Auschwitz camp buildings in the background of shots, which, again, aimed to be as realistic as possible and to draw as little attention to the filmmaking as they could.

This meant assembling a large blue screen wall around the outer perimeter of the property - which could later be keyed out and replaced with CGI buildings. 


NO ON SET CREW

Since they were shooting with a multi-cam setup, covering 360 degrees inside interiors, one way to avoid seeing crew was for there to be no crew on set. But, you may be asking, surely you need a crew next to the actors? How do you operate the cameras? How do you pull focus? How do you transmit a video feed to monitors for viewing?

Part of Łukasz Żal’s job was creating an on-set workflow that allowed shooting to happen almost entirely remotely - without needing any crew to be physically present inside the interior house set while cameras were rolling.

Normally, film sets use wireless transmitters, like a Teradek, to send a video feed from the camera to a monitor where people can view and assess a live image. However, because they would be working with 10 cameras, and because they wanted to place the video village - with its wall of 10 monitors - in a container outside the property, behind the blue screen, with an uninterrupted feed at all times, the decision was made to hardwire all the cameras.

This involved tethering the cameras to the monitors with cables. Much like using an ethernet cable to connect to the internet rather than wifi, this method provides more consistency than wireless transmission.


SET UP

Rather than the traditional method of having the actors block the scene on set with crew watching, have the crew tweak the cameras and lights, then play out the scene with full on set crew and cast, they employed a different method.

Basically the film crew would be separated from the actors at almost all times. The DP would prep for each day by assembling a floor plan for each scene with estimated positions for up to 10 cameras. His team of grips and ACs would then place the various cameras in their assigned positions, with their chosen lenses.

Żal and Glazer would assess all the frames on the video village monitors and make the necessary corrections. Each 1st AC, set up in the basement of the house, would be given two cameras to pull focus on - usually one wider frame that was easier and a more difficult, telephoto frame.

Fresh cards and batteries would be loaded into the cameras then all the crew would leave the set. Once the set was cleared, the actors entered the location and could begin performing the extended scenes throughout the different rooms in the house. 

Shooting on the Venice’s X-OCN ST compression at 6K 3:2 meant that a 1TB card would give them 60 minutes of recording time. If they reached the end of the cards, the cameras would all cut, the actors would leave the house, then the crew would come in from the basement and reload the cameras for another take. 
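As a rough sanity check on those numbers: the quoted 60 minutes per 1 TB card implies a write rate of roughly 17 GB per minute. The sketch below back-calculates from the article's own figures - the data rate is an assumption derived from them, not an official Sony spec:

```python
# Back-of-the-envelope check of the record-time figure quoted above.
# Assumption: "1 TB card ~= 60 minutes" implies X-OCN ST at 6K 3:2
# writes roughly 17 GB per minute (derived here, not a published spec).

CARD_SIZE_GB = 1000          # 1 TB AXS card, in gigabytes
DATA_RATE_GB_PER_MIN = 17    # assumed write rate

minutes_per_card = CARD_SIZE_GB / DATA_RATE_GB_PER_MIN
print(f"~{minutes_per_card:.0f} minutes per card")  # ~59 minutes
```

The same arithmetic explains why a heavier compression setting or a smaller card would directly shorten the maximum length of each uninterrupted take.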

Each day they were usually able to film two of these different setups.



ANTI-FILMMAKING

There are a number of reasons why I’d describe the look of the film as having a kind of anti-cinematic aesthetic. For one, they shot at a deep stop, rather than with a more traditionally cinematic shallow depth of field. This supported the idea of objectivity by not isolating what the audience must look at with focus, but rather inviting them to peruse the frame with their gaze.

They used little Leitz M 0.8 lenses, which were small enough to stay out of the way yet sharp. They shot without any diffusion to preserve a modern, objective look - which was also emphasised by a hard, contemporary, digital look in the colour grade - rather than trying to create something filmic or sentimental in any way.

They exclusively used natural lighting, relying on sunlight for day scenes and a few period accurate practicals at night - without any fill or bounce. Again, rather than shooting exteriors early in the morning or later in the afternoon when the light would feel a bit more cinematic, backlit and beautiful - they purposefully shot exterior scenes in hard, real, unflattering midday sunlight.

To film night exteriors without any artificial light they used a military-style infrared camera - the footage from which was later upscaled with AI in post.

 

WHAT WE DON’T SEE

Part of what makes this film so haunting comes from an idea I’ve mentioned before on this channel: what we don’t see is sometimes more impactful and terrifying than what we do.

The genocide happening in the camp next door is never shown visually. Instead, we hear it through sound design off in the distance - which brings home the film’s detached style.

It shows that pursuing reality and objectivity can, in this case, be more impactful and powerful than using the tools of movie magic to manipulate audiences.


5 Reasons NOT To Shoot With A Gimbal

Let’s flip the script this time and unpack five reasons or situations where you may not want to use a gimbal.

INTRODUCTION

Don’t get me wrong - the gimbal is an amazing tool that is perfectly suited to a number of different filmmaking situations. I use them all the time, which is why I made a video dedicated to five reasons why you should work with them. However, there are also quite a few instances where using a gimbal is less than desirable.

So, let’s flip the script this time and unpack five reasons or situations where you may not want to use a gimbal.

1 - PRECISION

The number one issue I run into when using gimbals - especially in more freeform documentary situations where characters aren’t blocked or given an agreed upon movement or positioning - is the lack of precision I have over framing the shot and moving the camera.

Those who have a preference for perfectly controlled frames may find gimbals a little frustrating. This comes down to how gimbals work. They are constructed with three motors that stabilise and control the horizontal, left-to-right movement of the camera (pan), the vertical, up-and-down motion (tilt) and how the camera rolls from side to side (roll).

When operating these gimbals handheld and moving on one of these axes, there is no way for the operator to precisely control exactly where the camera stops or at what speed it moves. The gimbal motors move the camera based on feel - on how much momentum the operator uses to angle the camera along that axis.

Other tools, such as a tripod head or plain handheld operating, let you precisely control the speed of a movement and exactly where the camera stops. Gimbals, by contrast, can feel a bit laggy - which means you have to anticipate stopping the camera with your own movement before the actual moment it needs to stop. Sometimes you land on the perfect frame and sometimes you don’t.

In filmmaking, an extremely important component of cinematography is maintaining consistency across multiple takes; this applies not only to the lighting, but also to the camera moves and focus.

If each take is repeated in a slightly different way by the gimbal operator - at different speeds, with different framing, positioning and blocking - it becomes more difficult for other variables, like lighting and accurate focus pulling, to remain consistent. This may make it difficult to cut parts of different takes together in a single scene.

This isn’t to say that there aren’t some very good operators who can control gimbals with a high degree of precision. However, if you want to do a pinpoint pan or tilt move that ends on a specific frame, there are other tools that can do this much more consistently and easily.

A workaround is for one operator to move the camera while another uses wheels to wirelessly control the camera’s movement like a remote head - but this isn’t possible for solo operators.


2 - BALANCING

Like most rigs designed to carry a camera, gimbals have a payload - a maximum weight limit that they can carry before they start to malfunction. Unlike mechanical production tripod heads that can easily carry the full weight of big production cameras and cinema lenses, gimbals have a much lower weight tolerance that will start to strain the motors.

How the camera is built and how its weight is distributed across the gimbal is also very important. For a gimbal to work efficiently it needs to be balanced so that the weight of the camera is evenly positioned across each axis.

This means adjusting the camera’s position left and right and up and down until it remains level once you release it. If the camera drops forward, for example, it indicates the gimbal is front heavy and that the tilt motor will need to strain to keep it even.

The disadvantage of this is twofold. One, because gimbals have a lower payload, only certain camera and lens combinations will balance. This may mean that cinematographers can’t choose certain large anamorphic lenses or long zooms, or shoot on a big production camera with a heavy body.

The second disadvantage is that each time anything needs to be added or removed from the camera build the gimbal will need to be rebalanced to adjust for the change in weight. So every time a lens is changed, a few extra minutes will need to be added between setups. This doesn’t sound like a lot, but if you’re shooting over the course of a day, all these combined rebalancing moments can slow down the pace of shooting a fair amount.

And, as we know, time on a set is money.


3 - MOVEMENT

The smooth, floating camera motion that you can get with a gimbal is desirable for many reasons, but may not be the right stylistic choice for certain filmmakers in certain situations.

Perhaps it's useful to highlight what kind of moves gimbals are good at doing and what situations they aren’t suited to. I think gimbals are at their strongest when used for tracking the organic motions of characters in a relatively straight, predictable line at a moderate speed. For example when following a character from behind or in front. 

Gimbals run into problems as soon as they stop moving. Because they are operated by a person - who cannot hold their muscles completely still for an extended period of time - you inevitably get drift when trying to compose static shots, or movements that begin or end with a static frame.

These little drifting motions, which come from the operator not being able to hold the gimbal in exactly the same position, can, for me, feel a little distracting, artificial or unintentional - compared to a more stable shot from a tripod or dolly that remains solidly locked off.

The same applies to very slow, creeping camera moves. It’s much easier to maintain a consistent line, speed and frame when pushing in on a dolly or a slider than it is on a gimbal. Again, as soon as you slow the motion down significantly, these same small, off-course drifting motions will be felt.

From slow movement to fast, another potential problem from an operator’s perspective is a gimbal’s lack of reactivity. If you want to do things like lightning-fast whip pans, or film quick, choppy, spontaneous moves, the motors on the gimbal may not be able to move the camera quickly enough or come to abrupt, hard stops without drifting.

This is why shooting with a fluid head is great for precise, whipping motion, while a handheld camera can inject scenes with a frenetic feeling that is also accurate and reactive to the movement and actions of the characters.

 

4 - RELIABILITY

Although gimbal technology in filmmaking has rapidly improved since its early days, these machines still occasionally suffer from technical glitches - for example, motors vibrating, rigs becoming unbalanced, or motors giving up during an oddly angled shot.

Because these rigs rely on electrical components - rather than other rigs like tripod heads which are purely mechanical - they are more prone to malfunctioning and also to wear and tear over time.

When you pan a tripod there is no doubt it will pan; when using a gimbal to pan, there is a greater chance of a snag that forces the shot to be reset.

Another aspect of reliability and ease of use applies to weather. If you need a tracking shot outside in the rain, you can easily set up a dolly, put a rain cover over the camera and head, and shoot.

However, due to their electronics, most gimbals will not be able to run in the rain, nor is it recommended to cover a gimbal with a rain cover, as the extra weight may alter the balance of the gimbal, make it difficult to operate properly, or overheat the motors.

Therefore, purely mechanical grip rigs are almost always going to offer more reliability than their electronic friends.   


5 -  REBUILDING

A final reason why gimbals can be more time consuming to work with than other rigs comes down to the longer setup time when rebuilding the camera between a gimbal rig and another configuration, like a tripod.

If you’re working with a high-end cinema camera like an Alexa Mini, for example, the body will have to be stripped of excess weight, the eyepiece removed, accessories repositioned and the correct plate for the gimbal attached. Typically, rebuilding a production camera into gimbal mode takes around 10 minutes.

Again, this may not seem like a lot, but if you are constantly changing between gimbal and tripod throughout the day, this rebuilding time quickly accumulates into a lot of lost shooting time.

If instead filmmakers choose to work with traditional rigs - a dolly, a tripod, or handheld - it is incredibly quick to change between these shooting modes without needing to significantly rebuild the camera each time.

So, the time saving of a gimbal is a double-edged sword. If you shoot almost every shot on a gimbal with a short zoom lens - where you don’t need to rebuild it, change lenses or rebalance - you can save a lot of setup time.

However, if you need to constantly change between gimbal builds and a studio camera build, and work with prime lenses of different weights and sizes, then you’ll end up adding a greater amount of set up time to the schedule.


Cinematography Style: Autumn Durald Arkapaw

In this episode I’ll take a deeper look at what makes Autumn Durald Arkapaw's cinematography stand out by looking at some of her thoughts and ideas on the filmmaking process as well as delving a bit deeper into some of the gear and techniques that she uses to bring her ideas to life.

INTRODUCTION

Autumn Durald Arkapaw’s cinematography evolved from starting out on indie sets to now shooting big-budget Marvel productions. She leans into creating widescreen, anamorphic images, with lighting that often favours a darker exposure, in both quality and tone, to visually portray more intimate, personal portraits of characters.

In this episode I’ll take a deeper look at what makes her cinematography stand out by looking at some of her thoughts and ideas on the filmmaking process as well as delving a bit deeper into some of the gear and techniques that she uses to bring her ideas to life.


PHILOSOPHY

Although she had an early love for photography, her pursuit of a profession in filmmaking didn't begin immediately after university. Instead she pursued a career in advertising for three years, which allowed her to save enough money to eventually quit her job and start seeking opportunities in the camera department. After this she was able to study at the prestigious AFI. 

These early opportunities included working on a documentary series and filming short form content such as music videos. Both of these forms of filmmaking have more limited prep time - which meant that she developed an ability to work quickly.

Once she moved into long form indie features she could transfer this skill, while also having a bit more preparation time to visually conceptualise the look she wanted.

“I need to feel something first before I can make images to make other people feel something. I think that’s very important to me.” -  Source: Panavision    

This ability to work quickly - formulating a clear photographic vision of what best imparts the emotional tone that suits the story, and articulating that vision on a technical level to crew - again transferred up when she got opportunities on higher-budget work.

“You have to make decisions quickly and be fast. So I think coming from indie films and doing a ton of work on smaller stuff, it all serves you when you, kind of, get to a really high level because you’re going to be asked millions of questions, you have a long schedule, there’s a lot of money on the line, you have a lot of more experienced people around you asking you what you want. And you need to be able to answer quickly and you have to know, truly, what it is that you truly want to communicate.” - Source: AFI

Part of what makes this process smoother is surrounding yourself with skillful and like-minded collaborators - not only through choosing a knowledgeable crew who can take your creative or technical ideas and make them a reality, but also, hopefully, through working with directors who have a clear vision for the film and at the same time are open to collaboration from their heads of departments.

This desire to surround herself with strong collaborators has been there since her first feature, Palo Alto, which she shot with some of her close friends.

“When you're making a film with friends and it's small like that and you pick your crew wisely it feels like a family. So you don’t really feel when the camera is rolling or not rolling. You try and keep it intimate. So that later all of those elements are translated on screen.” - Source: Camerimage

This idea of maintaining an intimacy to the storytelling and presenting the narrative in a character driven, personal way is a thread she’s carried through her work - not only in her indie filmmaking endeavours but also in high budget movies like Black Panther: Wakanda Forever, where big close ups of actors with a shallow depth of field isolated characters within frames, accentuating their emotions.

Like all cinematographers, she makes these stylistic choices - framing, camera movement, lens choice and lighting - based on the content of the story, with the goal always being to visually serve the narrative rather than leaning on style for style’s sake.

An example of this can be seen in how she lit Wakanda Forever. In terms of the cinematography it’s a very different movie to the first one in the franchise - which was shot by Rachel Morrison. The sequel’s narrative leaned into an emotionally heavier tone, around the loss of an amazing actor and character. To visually evoke this tone much of the lighting went far darker than before, with heavier shadows, while softer lenses at times brought out more of a dreamy, mournful feeling.

  

GEAR

Pretty much all of her long form narrative work has been characterised by her love of the widescreen, Panavision anamorphic look, except for her first feature Palo Alto which she shot spherically on Panavision Super Speeds. 

“There’s so much personality in the frame when you’re shooting anamorphic. I enjoy how people are framed in a wider field of view. I feel like when you have a close up where you have groups of people and maybe one should be focused on you can still relate and have context.” - Source: Panavision

This is a framing technique she often uses: composing close ups of characters quite centrally in the frame, with balanced negative space on either side, which, because of the wide field of view, gets filled either with more of the location and set, or with the presence of other characters at the edges.

Or she will shoot with a more traditional over the shoulder, shot reverse shot framing. This is where the camera shoots each character from the same side of the 180 degree line, at opposite but balanced angles that focus on one character, yet use the increased width of the anamorphic frame to feel the shoulder of the other character in the conversation.

She usually likes to visually capture a hint of the presence of other characters in the space rather than shooting these shots ‘clean’, where one character is isolated and shown at a time.

She usually shoots with anamorphic lenses from Panavision - either their more vintage glass like the C-Series that have more falloff on the edges and a beautiful swirly oval bokeh, or more modern sets like the T-Series that come with updated ergonomics that are easier to use, great close focus, more minimal breathing, a bit more contrast and a bit of a larger, sharper, central sweet spot.

She’s also used a detuned version of these modern T-Series lenses - which maintains their modern ergonomics but gives the glass slightly dreamier, more vintage characteristics.

This central sweet spot of anamorphic glass favours her framing style of composing characters in the sharper portion of the frame - with the focus falloff on the edges drawing the eye inward to the middle of the shot. 

Anamorphic glass also gives images a softer, more nostalgic, dream-like quality than most spherical lenses - which, again, suited the emotional tone of Wakanda Forever, which had a very different feeling to the original film that was shot on sharper, more even, regular lenses.

When picking focal lengths she is also usually drawn to wider angle lenses - which again supports her idea of seeing more width and background in the shot.

When it comes to lighting, each location will be different. However, a technique she has used for interiors, especially when working in studios, is to rig a soft overhead source of light to lift the general levels of ambient light in the room in a diffused, natural way. 

On larger jobs this may be done by rigging a large number of controllable units, such as SkyPanels, then rigging a layer of diffusion as a scrim underneath those sources to soften and spread the light. These will be skirted with black textile so that the light falls directly downward and is contained to a specific area.

From there she’ll then add sources on the ground such as a backlight, or side keylight, depending on the location and where the natural light can be motivated from.

Another consideration in her lighting is skin tone. When working with skin of different tonalities she’ll try to strike a balance between capturing true, realistic colour, projecting the right mood - which for interiors is usually on the darker side - and giving characters enough illumination so that their performance doesn’t get lost in the darkness by underexposing too much.

To help capture true, accurate skin tones she likes shooting on either an Alexa, like the large format Mini LF or Super35 format Mini, or on the Sony Venice. She likes using the lower native 500 EI base on the Venice, rather than the higher 3,200 ISO rating, when needing to maximise dynamic range - as she feels the lower base allows her to recover an extra 2 stops in the highlights.

This is especially useful for day exteriors with bright sun and heavy shadow, or even for night scenes lit with a bright fire source.

To further diffuse the light and create a texture in the mid tones she’ll also often use haze to create atmosphere. This enhances the dreamy, nostalgic quality which is built into the lenses as well as the lighting.


How Iñárritu Shoots A Film At 3 Budget Levels

In this video I’ll break down how Alejandro González Iñárritu directed his first low budget feature Amores Perros, the mid budget Birdman up to the blockbuster level The Revenant.

INTRODUCTION

Gritty, textural, real and raw are how I’d describe the look of Alejandro González Iñárritu’s highly acclaimed movies. From the breakneck success of his first independent budget film all the way up to high budget blockbusters, his movies are the product of a clear directorial voice, that goes after telling difficult, risky stories that at times interweave non-linear narratives and express the psychological state of each character’s strife. 

In this video I’ll break down how Iñárritu created his first low budget feature Amores Perros, the mid budget Birdman up to the blockbuster level The Revenant.


AMORES PERROS - $2 MILLION

Iñárritu’s success as a feature film director didn’t happen overnight. He began his career in a different field of entertainment, radio, which progressed into a producing job in TV, which led to him creating his own production company, Zeta Films - where he produced commercials, short films and even a TV pilot.  

Since Amores Perros was an incredibly ambitious, logistically and structurally challenging first feature, it helped that he carried some experience as a director into it along with an established relationship with crew. 

“Most of the people that worked with me on this film, almost all of the head of departments from Rodrigo Prieto to Brigitte Broch the production designer, all this team, we had been working together for many years doing commercials - I have a production company. 

So, in a way, that complexity, it was a language that we had already established between us. So it was my first film but definitely was not my first time on a set.” - Alejandro González Iñárritu

The story, which he worked on with screenwriter Guillermo Arriaga, was constructed from three subplots - each featuring human characters’ different relationships with dogs - which were interwoven and connected by a common plot point.

This push to make such a tightly packed and difficult first feature on a relatively low budget was partly due to the difficulties involved in producing a movie in Mexico at the time.

“Your first film normally was at the mercy of the government and then you just show it to your friends. Because there was no money, nobody wanted to see any Mexican film at that time. 

There was an anxiety that runs that it was your only real opportunity to say something and to express yourself. So, I think it has to do with: you want to include everything you wanted to say.” - Alejandro González Iñárritu

The story and also how it was stylistically told through the cinematography leaned into the extreme. 

Most of the film was shot with a handheld camera and wide angle lenses - which, combined with the dynamic camera operating from Prieto - injected a gritty, raw realism into the story. This technique of moving with characters on wide angle prime lenses - from 14mm to 40mm - is something that he would continue to use in his later movies.

Shot on a wide angle lens.

This wider, warped intensity that was used to capture the more intense and chaotic characters was flipped when photographing the outcast figure Chivo. They instead shot him on much more telephoto focal lengths, using an Angenieux 25-250mm HR zoom lens - which had the effect of further isolating him from his environment.

Shot on a telephoto lens.

Another large part of the extreme, raw look of the movie was created by how the 35mm film was developed in the lab. Skipping the bleaching step when processing the negative - a technique called bleach bypass - creates a desaturated, higher contrast look with exaggerated, more pronounced film grain.

In other words all the vibrancy in the colour gets sucked out, except for a few colours like red which remain abrupt and punchy, the highlights get brighter and are more prone to blowing out, while the shadows more easily get crushed to pure black with little detail.

Iñárritu has stated that this bleach bypass look was a way of emulating and exaggerating the look of Mexico City, which is quite polluted with particles in the air that makes things feel hazy and grey. It also added more contrast to the city light which could otherwise look quite flat.

Iñárritu’s bold, risky vision, which combined an extreme, raw narrative with an extreme, raw look, went down very well at festivals - winning the prestigious Cannes Critics’ Week prize and inspiring him to create a trilogy of like-minded films which he called his Death Trilogy. It also went on to have success with audiences, making back around 10 times its low budget, and in doing so jump-started his career as a director.



BIRDMAN - $16 MILLION

“I shot that film in 19 days - less than four weeks - and it was crazy.” - Alejandro González Iñárritu

Birdman came about during a free schedule window he had, while waiting for the correct winter season to shoot his next much higher budget movie - The Revenant. The script’s $16.5 million budget was financed in a co-production between Fox Searchlight, who initially got the script but had a budget cap on what they could spend, and New Regency, who were producing Iñárritu’s other film in production.

There were some resonances between lead actor Michael Keaton’s own career and that of the lead character - who was well known for being cast as a superhero and struggled to regain a reputation as a ‘prestigious’ theatre actor.

Iñárritu pitched a radical idea that the entire film should happen in a single, long take - or more accurately have the appearance of a single take through combining and disguising the cuts from various individual takes.

This long take concept was partially based on the idea of interconnecting various characters - like he’d also done in his prior work - and capturing the intensity and energy of backstage, without giving the audience cutting points or moments to breathe. 

Unlike Amores Perros - which creates an extreme intensity through quick cutting and a large amount of camera angles - Birdman took the other extreme of creating intensity by keeping the camera always in motion and not cutting.

This stylistic decision was also one that was formed out of necessity.

“The reason I think I got into this different kind of approach or design of making films was because of need. Sometimes the restrictions and limitations are the best - at least creatively. I didn’t have enough money. I didn’t have enough time.” - Alejandro González Iñárritu

Getting an ensemble cast of actors and constructing the stages meant that the shooting schedule was limited to only 19 days. These choreographed long takes are incredibly difficult to shoot, as they require perfection not only from the actors but also from the crew and camera operators. However, accomplishing, for example, a 10 minute long take can knock a considerable number of script pages off the schedule in a small amount of time.

Taking this approach meant that the final film had to be fully designed in pre-production, before shooting, rather than discovered or re-constructed in the edit.

The ‘editing’ happened up front both in making changes to the script by getting it down to 103 pages from its initial 125 pages, and in the months leading up to the shoot where cinematographer Emmanuel Lubezki and Iñárritu worked out the blocking with stand ins, a camera and a mock set in a warehouse which they mapped out with textiles and c-stands.  

The film was shot on Alexa cameras, mainly the Mini, either in a Steadicam build for smoother tracking shots or operated handheld by Lubezki. Like Amores Perros it was filmed on wide angle lenses - a combination of Master Primes and Summilux-Cs, which are both very clean, sharp sets of prime lenses.

To keep a naturalistic feel to the lighting and to practically be able to shoot 360 degrees on sets, Lubezki designed it around only using practical sources that could be seen within the shot - whether that was overhead stage lighting, bulbs on makeup mirrors or fluorescents in a corridor.

This meant that colour temperatures were mixed and at times cast monochromatic hues over the image - which may have gone a bit against traditional expectations of maintaining skin tones, but gave the images a more naturalistic and real feel that is present in Iñárritu’s movies.

The digital Alexa allowed him to roll for long takes and expose at a very sensitive 1,200 ASA with the lenses opened up to a T2 aperture. This allowed them to shoot in low light environments while also preserving a big dynamic range between highlights and shadows - which helped when shooting a shot that went out into a non-locked-off Times Square at night, where they had no control over the lighting or how it was balanced.

Although Birdman was shot in a vastly different style - it maintained a feeling of raw realism, chaotic energy and gritty intensity that interconnected different characters in the story - just like he had in his debut feature. 


THE REVENANT - $135 MILLION

Following Birdman, Iñárritu leaped into shooting the much higher budget feature he had been prepping for many years: The Revenant - an action filled Western revenge story set in 1823.

“I prepared that film in 2011. And I started scouting and storyboarding. And I was very excited about the experience to allow myself to go to the nature. And then I realised that there is no that kind of romantic thing of losing yourself in nature. No. It’s a war. You’re at war with nature to make it work - what you need.”  - Alejandro González Iñárritu

Again, Iñárritu proposed a radical approach to making the film. He wanted to shoot all the extremely isolated, natural spaces entirely on location, rather than shooting in a studio with bluescreens and locations created by visual effects. He also wanted to shoot the film chronologically.

To envelop audiences in the world and push the realism as far as possible his DP, Lubezki, also pushed to shoot almost entirely using natural light.

Not only did shooting it for real produce a visual style that is unmatched in realism, but placing the actors in the real environment and shooting chronologically put the actors through real, raw, intense conditions that, probably, accentuated the level of realism in the performances.

“The conditions were massive. The physical scenes that he went through were extremely precise. Actually dangerous. Because if you do a bad move, the choreography with these kinds of stunts with such a speed and camera movements that are so precise, you put yourself at risk.” - Alejandro González Iñárritu

These many stunt sequences were made even more challenging as, in a similar style to Birdman, Iñárritu decided to design many of these sequences as long takes. Throughout the production they used one of three methods of moving the camera: a Steadicam for smoother tracking shots, Lubezki operating a handheld camera, or using a Technocrane for moves at speed, over difficult terrain or for booming overhead camera moves.  

A technical challenge that emerged from shooting long takes in uncontrolled natural light was how to balance exposure - without it getting blown out or too dark - when moving from a dark area to a light area, or vice versa.

Lubezki’s DIT was tasked with pulling the iris. This is where a motor is attached to the aperture ring on a lens, which transmits to a handheld device that can be turned to change the aperture during a shot - either opening up and making the image brighter, or stopping down and making it darker. This has to be done carefully and gradually so as to avoid these changes in exposure being noticeable and distracting.
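The exposure arithmetic behind those iris pulls can be sketched numerically (an illustrative example, not from the film's actual workflow): the light a lens gathers scales with 1/T², so the exposure change between two T-numbers is two times the base-2 logarithm of their ratio.

```python
import math

def stops_between(t_start: float, t_end: float) -> float:
    """Exposure change in stops when the iris moves from t_start to t_end.
    Light gathered scales with 1/T^2, so doubling the T-number costs two
    stops. A positive result means the image gets darker."""
    return 2 * math.log2(t_end / t_start)

# Stopping down from T2.8 to T5.6 darkens the image by two stops:
print(stops_between(2.8, 5.6))  # 2.0
# Opening up from T4 to T2.8 brightens it by about one stop:
print(round(stops_between(4.0, 2.8), 1))  # -1.0
```

This is why the DIT's wheel moves have to be gradual: even a third of a stop jumped in one frame is a visible brightness step, while the same change spread over a few seconds reads as natural.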

After initially choosing to shoot the day scenes on 35mm film, to maximise dynamic range, Lubezki decided to switch to a purely digital workflow - again shooting on the Alexa Mini as the A cam, the XT for Steadicam and crane, and the Alexa 65 for vistas or moments where they wanted the greater width and resolution of the large sensor. 

Again, they also used wide angle Master Primes and Summilux-Cs - more specifically the 14mm Master Prime and occasionally a 12mm or 16mm. On the larger field of view Alexa 65 he would often use a 24mm Prime 65 lens.

Like his other films, The Revenant preserved an intense, raw, chaotic feeling - however at a higher budget that could afford one of the biggest stars in the world, a very long production schedule and more complicated visual effects shots, in very challenging, slow shooting environments with many complex action and stunt sequences.


5 Reasons You Should Shoot With A Gimbal

Let’s look at five reasons why filmmakers use gimbals in both videography and on high end productions alike.

INTRODUCTION

Gimbals are often associated more with videography or prosumer camera gear than they are with big budget movies. However, this shouldn’t be the case. For years, this method for stabilising cameras and operating them in a handheld configuration has been used on many industry level shows, commercials and movies.

So let’s use this video to look at five reasons why filmmakers use gimbals in both videography and on high end productions alike.  

1. MOVEMENT

For a long time in the early days of cinema it wasn’t possible to shoot with a handheld camera that could move with actors and could be operated by a single person.

Instead, filmmakers that wanted to move these heavy cameras needed to do so on a dolly - a platform which could be slid along a track using wheels. This trained the audience's eyes for decades to accept this smooth tracking movement as the cinematic default.

To this day, this language of smooth, flowing, stable camera movement has persisted and is often sought after by directors and cinematographers. Gimbals are able to achieve a similar movement, without needing tracks and a dolly, by using sensors that detect when a camera is off kilter and correcting that by evening it out with motors in real time.

These motors can control three axes of movement, hence why these devices are also called 3-axis gimbals. They can adjust and even out the up and down motion, known as tilt, the side to side motion, known as pan, or rotational motion known as roll.

Different gimbals can be set to different modes to control the axes of movement that you want. For example you could limit the motion to a pan follow mode, where the motors stabilise and lock the tilt and roll axes and only react and follow when the operator pans the camera horizontally.

Or you could enable pan and tilt follow, where only the roll axis is locked so that as the operator moves the gimbal horizontally or vertically, the gimbal will follow along with the movement of the operator. Gimbals can therefore be quite reactive to the handheld motions the operator makes, so are a useful tool in situations that require floating, smooth moves that need to track the motion of an actor or moving object.
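The follow-mode logic described above can be modelled in a toy sketch (purely illustrative - no real gimbal's firmware is this simple): locked axes hold a level target, while follow axes chase the operator's handle angle with smoothing.

```python
# Toy model of 3-axis gimbal follow modes (illustrative only).
from dataclasses import dataclass

@dataclass
class Axis:
    follow: bool            # True = track the handle, False = stay locked level
    angle: float = 0.0      # current camera angle in degrees
    smoothing: float = 0.2  # fraction of the error corrected per update

    def update(self, handle_angle: float) -> float:
        # A locked axis ignores the handle and settles back to level (0 deg).
        target = handle_angle if self.follow else 0.0
        self.angle += self.smoothing * (target - self.angle)
        return self.angle

# 'Pan follow' mode: pan chases the operator; tilt and roll stay locked.
pan, tilt, roll = Axis(follow=True), Axis(follow=False), Axis(follow=False)

for _ in range(30):      # operator swings the handle 40 degrees to the right
    pan.update(40.0)
    tilt.update(15.0)    # handle bumps on tilt are absorbed, camera stays level
    roll.update(-5.0)

print(round(pan.angle))   # 40 - the camera has smoothly followed the pan
print(round(tilt.angle))  # 0  - tilt stayed locked despite the bumps
```

The smoothing term is what gives gimbal moves their characteristic eased, floating feel: the camera never snaps to the handle angle, it settles toward it.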

Because they’re operated handheld, the movement you get from a gimbal has more of a floating, drifting stability, with small, meandering deviations caused by the manual operation - compared to something like a dolly, which is super stable, heavy and tethered to a specific line of track, creating more precise, cleaner moves. Certain filmmakers may specifically want this drifting feeling of motion that a gimbal provides.

2. UNEVEN TERRAIN

One advantage that a gimbal has over alternative grip rigs that also produce smooth camera movement, is that they can be more easily set up and operated over uneven terrain or in remote locations.

While it is possible to lay tracks on uneven outdoor locations - by first building a wooden platform as a smooth, level base - it is far easier to operate the camera handheld on a gimbal and use your feet to move over uneven surfaces.

If venturing into very remote locations it also means that all the production needs to carry is a gimbal camera build, some batteries and maybe a box of lenses - compared to a massive truck and a full grips package, which may not be able to make it up to certain mountain locations.

Filmmakers may also want dynamic movement that squeezes through tight spaces where larger cinema grip rigs would otherwise not be able to fit - like through car doors or inside tight interiors. Or they may need the camera to move up or down a slope, which could also include something like stairs, which dollies can’t do since they need a stable, level platform to lay tracks on.

3. TIME & MONEY SAVER

On top of these advantages around moving the camera, gimbals are also a great tool for productions as they have the potential to save time and money. Paying for a single gimbal operator - or even having the DP operate the gimbal and the first AC build and balance it - can provide a good saving on the grips budget.

On some shoots, you may be able to get away with using a gimbal for stable motion rather than having a dolly sitting in the truck for smooth moves on stable surfaces and a Technocrane waiting to be set up for moves across uneven terrain.

On top of the gear costs, you also save on crew costs, as choosing to use a dolly or a Technocrane will come with the costs of hiring a larger grip team to set up and run the gear.

While these savings may be less important on some jobs like high end international commercials that have the money to pay for whatever tool is deemed necessary - another type of saving that a gimbal provides that may still be valuable is time.

It’s almost always easier to walk through shots, make adjustments to positioning and do a run through on a handheld gimbal - without needing to get a team to lay tracks, or spend lots of time between setups building and positioning grip rigs.

And on these enormous jobs where the most expensive thing on set may be paying for a celebrity performance or locking off a pricey location - time, as the saying goes, is money. So gimbals may save money not only on the lower rental cost and fewer crew requirements - but also by speeding up the time taken to set up each moving shot.     

At this point, some of you may be saying, “Why not just use a Steadicam? It’s a similar cost and gives you a similar feeling of motion.” Well, gimbals actually come with one party trick that Steadicams don’t.


4 - REMOTE HEAD

So, what is this extra capability that a gimbal has that a Steadicam doesn’t? Because most large production gimbals, like the Movi Pro or Ronin 2, have motors that control three axes of movement, plus controls in the form of a joystick or wheels that let operators wirelessly adjust how the camera tilts, pans and rolls - a gimbal is effectively a remote head.

This means it can double up: used in a handheld gimbal configuration for some shots, then rebuilt as a remote head and attached to rigs such as a crane, a car arm or a dolly.

So instead of hiring both a Steadicam to do tracking shots on the ground, and a Libra head that is attached to a crane for an aerial shot - productions can get away with using only one gimbal. 

When gimbals are rigged as remote heads they basically transmit a signal between the wheels moved by the operator and the gimbal. There are three wheels that each represent pan, tilt and roll. 

So if an operator wants to pan the camera left to right they can roll the side wheel forward. This will then send an instantaneous signal from the wheels to the head - which will pan the camera without the operator needing to be physically near the head. 

Gimbals can also be used for ‘transition shots’ that change between two different builds or operating methods during a single take. For example, this shot, which slides the camera in a remote head mode up a story on a wire rig while being controlled wirelessly by an operator. It then gets unclipped from the rig and grabbed by an operator who can proceed to use it to follow the action like a handheld gimbal.


5. FLEXIBILITY

The final reason that many filmmakers opt to use a gimbal is due to the greater flexibility that it provides. This is a matter of taste and feeds into a preference for how directors or DPs like to structure their filming.

Some like to be more traditional, formal and deliberate and move the camera from some form of a solid base like a dolly. This provides a clear move from A to B which can be repeated multiple times, is predictable and relies on actors perfectly hitting their marks by following a prescribed movement based on the pre-established blocking by the director.

However, some other filmmakers like to work in a different way that is more open to experimentation, improvisation and embracing little magical moments that may be discovered.

This may be appropriate for directors who like working with non-professional actors, or in semi-documentary shooting environments - where the ability to change shots on the fly is very important.

It may also be good for directors who like to shoot in long takes - where they can work with actors and give direction as they go, putting the actors in a fully dressed shooting environment and then chasing after them, finding the best angles and moments of performance through shooting. 

Having the freedom of a gimbal is great for these longer takes, as the camera can be transported great distances, up stairs, through gaps and over most surfaces as long as the operator's arms can hold it. It also eliminates the need to place tracks - which have a chance of being seen in the shot if shooting in an expansive 360 degree style where we see everything in the location. 

Gimbals are great for scenarios where actors aren’t given an exact mark or blocking and are instead encouraged to move around and find the shot. Because their position is not locked onto a track, the camera is free to roam and explore - getting all the benefits that come from operating a handheld camera, while at the same time preserving some of that traditional cinematic stability that we discussed. 

CONCLUSION

As we’ve seen gimbals offer value not only to lower budget, more improvisational, documentary based shooting, but also to larger productions who seek stabilised motion in unusual spaces, or with dynamic moves, with a rig that saves on set up time, can double up as a remote head and offers the ability to transition between different styles of operating the camera. 

As high quality cinema cameras continue to get smaller and smaller and are more easily able to fit and balance on gimbals, this rig will continue to soar in popularity and be an increasingly useful tool for stable, cinematic movement. 


The Crop Factor Myth Explained

Let’s go over a more detailed explanation on what ‘crop factor’ is, how it works and a misconception about it.

INTRODUCTION

There’s an idea in photography that cameras with different sized sensors have what we call ‘crop factors’. A large format Alexa 65 has a crop factor of 0.56x compared to a Super 35 camera. A 90mm lens multiplied by 0.56 is roughly 50mm. Therefore, many people say that using a 50mm lens on the Alexa 65 is going to look exactly the same as using a 90mm lens on the Super 35 camera.

The truth is that this isn’t exactly 100% correct - for quite an important reason. So, let’s go over a more detailed explanation on what ‘crop factor’ is, how it works and the big misconception about it.   

WHAT IS CROP FACTOR?

As photochemical film photography emerged and cinema cameras were created, there was a push to create film with a standardised size - that could be used across different cameras from different manufacturers and be developed by different laboratories around the world. That film had a total width of approximately 35mm and therefore was called 35mm.

When digital cinema cameras started getting manufactured, they replaced film with photosensitive sensors that stuck to the approximate size of film’s 35mm 4-perf capture area.   

However, along the way some other, more niche formats emerged: from smaller 16mm film that was a cheaper alternative, to large format 65mm which maximised the resolution and quality of movies at a higher cost, to tiny ⅔” video chips from early camcorders and smaller Micro Four Thirds photography sensors.

The issue is that when you put the same lens on two cameras with different sensor sizes, they will have different fields of view: one image looks wider and one looks tighter.

So, for prospective camera buyers or renters to get a sense of the field of view each camera would have, many manufacturers started to publish what they called a ‘crop factor’ to determine this. 

This means you take your lens’ focal length - for example a 35mm lens - and multiply it by the crop factor of the camera - such as 2x - to arrive at a new focal length number, 70mm. This means that on this smaller sensor your 35mm lens will have approximately the same field of view or magnification as a 70mm lens on a Super35 sensor.   
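The arithmetic described above can be sketched in a couple of lines. This is just an illustration of the multiplication in the text - the function name is my own, and the second example reuses the Alexa 65 figures from the introduction:

```python
def equivalent_focal_length(focal_length_mm: float, crop_factor: float) -> float:
    """Multiply a lens's focal length by the camera's crop factor to get
    the focal length with roughly the same field of view on Super 35."""
    return focal_length_mm * crop_factor

# A 35mm lens on a 2x-crop sensor frames like a 70mm lens on Super 35.
print(equivalent_focal_length(35, 2.0))   # 70.0

# A 90mm lens on the Alexa 65 (0.56x) frames like a ~50mm on Super 35.
print(equivalent_focal_length(90, 0.56))  # ~50.4
```

Remember that this only matches the field of view - as the rest of the article explains, it does not make the images identical.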

Since Super 35 sensors are considered the standard size, Super 35 has a crop factor of 1x. Sensors larger than Super 35 have a crop factor of less than 1x, and sensors smaller than Super 35 have a crop factor of more than 1x.

THE CROP FACTOR MYTH

So where does the myth part come in? Well, the issue is that many people interpret crop factors as saying that shooting with a 70mm lens on a Super 35 sensor is exactly the same as shooting with a 35mm lens on a smaller sensor with a 2x crop.

What’s important to note is that while the level of magnification of the image may be the same, there are still a bunch of other characteristics that lenses have that will make images different depending on what focal length is chosen. 

So what we should say is that a 70mm lens on a Super 35 sensor has approximately the same field of view as a 35mm lens on a smaller sensor. We shouldn’t say that a 70mm lens on one camera is exactly the same in every way as a 35mm lens on the other - as different focal lengths come with secondary characteristics beyond just their field of view.

Rather than thinking of different sensors as magnifying or zooming out on what we see, it’s better to think about it in different terms. If you put the same lens on two cameras - one with a larger sensor and one with a smaller sensor - the way that light enters the lens and forms an image will be identical.

The only difference is that the camera with the smaller sensor has less surface area to capture the image with. This makes the image feel ‘cropped in’ compared to the larger sensor, which captures more of the projected image and therefore produces a frame that feels wider.

Calculating crop factor and then changing the lens to a more telephoto one may make the width of the images match, but it will also change the very nature of the image by altering the depth of field, compression and distortion.


THE EFFECTS OF FOCAL LENGTHS

The smaller the sensor, the more cropped in the image will be and therefore the wider the focal lengths you will need to use. The larger the sensor, the wider the shot will appear, which means cinematographers will often choose longer, more telephoto lenses.

One of the secondary effects of using longer focal lengths is a shallower depth of field. The area in focus will be much narrower on a telephoto lens, which means the background will be softer, with more bokeh.

This is why movies shot on cameras with sensors bigger than Super 35, like the Alexa 65, which cinematographers pair with longer focal length lenses, will have a much shallower depth of field, with soft, out of focus backgrounds.

It is a misconception that larger sensors create this effect. In fact, it is the longer focal length lenses that do this.
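This relationship can be sanity-checked with the standard thin-lens depth of field approximation, DoF ≈ 2·N·c·u²/f². A rough sketch - the T-stop, subject distance and circle of confusion figures below are arbitrary assumptions, chosen only to compare two focal lengths under otherwise identical settings:

```python
def depth_of_field_mm(focal_mm: float, f_number: float,
                      subject_dist_mm: float, coc_mm: float = 0.025) -> float:
    """Approximate total depth of field: DoF ~ 2 * N * c * u^2 / f^2.
    Valid when the subject distance is much larger than the focal length."""
    return 2 * f_number * coc_mm * subject_dist_mm ** 2 / focal_mm ** 2

# Same aperture and subject distance, two different focal lengths:
wide = depth_of_field_mm(35, 2.8, 3000)   # 35mm lens, T2.8, subject at 3m
tele = depth_of_field_mm(70, 2.8, 3000)   # 70mm lens, same settings
print(wide > tele)  # True: the longer lens has a shallower depth of field
```

Because focal length is squared in the denominator, doubling the focal length cuts the depth of field to roughly a quarter - which is why the long lenses paired with large format sensors produce those soft backgrounds.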

Another effect that focal lengths have is on how compressed the image is. Wider focal lengths expand the background and make objects behind characters appear further away.

Telephoto lenses compress the background and have the effect of bringing different planes closer to the character.  

For this reason, cameras with smaller sensors that need to use wide lenses may produce images that appear a bit ‘flatter’, without much depth, especially in wide shots - while large format cameras, with their longer lenses, compress the background to create more of a layered perception of dimensionality.

Wider lenses also have a tendency to distort the image more. Shooting a close up of an actor on a Super 35 camera with a wider focal length will expand their face and make their features unnaturally large, while using a longer focal length on a large format camera, with the same equivalent field of view, will compress the actor’s face a bit more - which many say is more flattering.


CROP FACTORS OF DIFFERENT SENSORS

Although modern digital cinema camera sensors come in many shapes and sizes, in general they conform to a few approximate dimensions. 

Some cameras come with the option to shoot a very small section of the sensor that is equivalent to 16mm film. This has an approximate crop factor of 2x compared to Super35. 

This little format will usually be paired with wider lenses designed for 16mm - such as the Ultra 16 primes, which range from 6mm up to 50mm and, with the crop factor applied, produce fields of view of around 12mm-100mm when adjusted for Super 35. As we discussed, that 6mm will produce an image with extremely limited bokeh and a deep depth of field that feels quite dimensionally flat.

Next we have Super 35 sensors, which are usually considered standard, such as those found on an Alexa 35 or Red Helium. Each manufacturer produces sensors with subtly different dimensions - but most will be the approximate size of 4-perf 35mm film and produce the standardised field of view, where an 18-24mm focal length feels wide, a 35-50mm lens is about a medium, and anything longer, from around 85mm, starts to have a compressed, telephoto feel.

Anything bigger than Super35 size is usually considered to be ‘large format’. This includes ‘full frame’ sensors modelled on still cameras that are approximately 36x24mm. Some examples are the Arri Alexa Mini LF, the Sony Venice 6K or the Sony FX9.

These cameras will have a crop factor of somewhere around 0.67x, which bumps a wider perspective up to around 32mm, a medium feel to around 65mm and a telephoto lens to about 110mm.

65mm cameras like the Alexa 65 push this even further with their approximate 0.56x crop factor, which makes a 45mm lens a wide, a 90mm a medium and a 150mm a telephoto. As we discussed, shooting a wide field of view with a 45mm will produce much more compression, bokeh and dimensionality than using a 12mm lens on a 16mm camera - even though they’ll produce a similar field of view.

It’s important to note that these crop factor numbers are all relative to what sensor size is considered the ‘standard’. For example, in still photography a full frame sensor is usually considered normal with a 1x crop factor, which means smaller APS-C sensors, which are roughly close to Super 35, will have around a 1.5x or 1.6x crop factor.
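Since a crop factor is just the ratio of the reference format’s width to the sensor’s width, the same sensor gets a different number depending on which standard you pick. A minimal sketch using approximate sensor widths - the figures and names below are illustrative assumptions, not manufacturer specs:

```python
# Approximate sensor widths in mm (illustrative figures only).
SENSOR_WIDTH_MM = {
    "super16": 12.5,
    "super35": 24.9,
    "full_frame": 36.0,
}

def crop_factor(sensor: str, standard: str = "super35") -> float:
    """Crop factor = reference width / sensor width, so the same sensor
    gets a different number under a different chosen standard."""
    return SENSOR_WIDTH_MM[standard] / SENSOR_WIDTH_MM[sensor]

print(round(crop_factor("full_frame"), 2))            # 0.69 - near the ~0.67x cited above
print(round(crop_factor("super35", "full_frame"), 2)) # 1.45 - the stills-world APS-C-style factor
```

The same full frame sensor is ‘0.7x’ to a cinematographer using a Super 35 baseline and ‘1x’ to a stills photographer, which is exactly why the numbers alone matter less than understanding the underlying effects.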

What is much more important than getting super technical about these crop factor numbers is understanding how larger or smaller sensor sizes affect the field of view and understanding all the secondary effects that using different focal lengths will have on the image.  

Gray Kotze

Why The Book Is Often Better Than The Movie

What are some reasons that make books difficult to adapt into movies?

WHY ADAPT MOVIES?

Cinema has a long history of transforming literary works into movies. This makes sense for a few reasons.

Firstly, a large proportion of audiences will already be aware of the story and characters. Therefore, it’s easier to market the movies and get the existing fanbase into seats without needing to sell a completely new concept, world or story to audiences with promotional materials. 

Secondly, some of the best, most inventive and iconic stories which build their own worlds were written as novels. So, there is great subject matter to choose from. However, adapting an existing story to the screen also comes with some baggage.


DIFFERENT IMAGINATIONS

When reading novels, your brain uses the descriptors written by the author and your imagination fills in the image - kind of like how AI can create images from prompts.

Depending on what AI you use, or what prompt you give, you’ll get different variations of images and different interpretations. The same is true of humans.  Different directors and audience members will interpret texts differently, not only visually, but also thematically.

This is especially true in texts where the descriptions are a bit vague. Here’s an example: when I say Gollum from The Lord of the Rings, anyone who has seen the movies will immediately picture the character looking like this.

However, illustrator Tove Jansson imagined and represented Gollum like this, based on Tolkien’s description of a ‘slimy creature’, ‘as dark as darkness’, with ‘big, round pale eyes in his thin face’.

This disparity in how the character was imagined made Tolkien add an extra adjective ‘small’ to the description in later editions. The point is, different people will imagine things differently.

This applies to the landscapes the stories take place against, how the characters look, what actors are cast, or how key props or objects are rendered. If these representations go against the mainstream audience imagination they may not be well received.

If directors manage to get past this first hurdle and present a visual world that is palatable to the majority of the audience and aligns with the mainstream imagination, they are faced with another hurdle.


EXPRESSING INTERNAL THOUGHTS

How do you express the internal monologues, omniscient narration and emotions of characters, which are so easily conveyed in the literary form?

One technique that filmmakers have is to use voice over from either the character themself, or voice over from a narrator. However, in many contexts this technique can quickly get overused and disrupt the flow of the movie. Whereas novels can break down thoughts, emotions, and internal explanations at any point they wish, easily, through text.

Other methods that have been used to provide information and context to audiences include: dream sequences, flashbacks and one character telling another a story or explaining something on screen.  

These attempts to express internal thoughts about the plot in the form of dialogue can often come out as clunky exposition, which is another reason why translating the thoughts of characters in books to the screen is a challenge.

Good adaptations focus on the characters and allow the story to be told through their actions, rather than lining up plot points and then manipulating the characters to get to them.

SHOW DON’T TELL    

Two good rules to overcome exposition are: show rather than tell and delay giving expositional information for as long as possible. 

This is usually reliant on great performances from actors who can project their internal emotions and thoughts externally. Likewise, the language of the camera can also be used to express information. 

Take this scene from No Country For Old Men - which conveys a huge amount of information without any dialogue. It’s shot from the perspective of Chigurh, so we’re seeing things unfold at the same time he is - delaying revealing expositional information.

He opens the door with a cattle gun. Lots of information here: firstly, he’s clearly in a rural farming area that would have such a tool; secondly, he can adapt to his surroundings using whatever he finds; thirdly, he makes a noise and is confident enough in his capacity for violence that he doesn’t seem to care about being noticed.

He finds unopened letters - we know that whoever he’s looking for has been gone a while. There’s an unmade bed, hangers from hastily packed clothes and an open window - we know the person has left in a hurry. He grabs some milk from the fridge and drinks it. The milk is still good so whoever he’s after couldn’t have left more than a couple days ago. 

The camera pushes in - getting inside his head. His slow, thoughtful, calculating calm as an intruder is deeply unsettling.

Then after this mountain of information has been revealed entirely visually - it’s confirmed through a later dialogue exchange with a woman.

This is how good adaptations of books reveal information - by leaning on cinema’s visual tools, and controlling the flow of information to the audience, rather than by overusing expositional dialogue. 



STORY STRUCTURE

Another structural difference between books and movies is their length and how they are designed to be consumed.

Novels are by their very nature intended to be read over an extended period of time, in different sittings. Authors can delve into extreme detail like describing the world, adding backstory, getting inside the heads of characters, and they can elongate plots.

Movies are designed to be consumed in a single one and a half to three hour sitting. This means the plot from adapted works often needs to get enormously condensed, simplified, restructured or reinvented to make sense within the more limited time frame.

This can be at odds with what fans want - who are used to the greater plot nuances and depth in the original work.

A solution to this has been to create a series of instalments - breaking the movies into multiple parts. This hasn’t always been successful.

Pacing an adaptation through the writing and editing needs to strike a balance between doing justice to the original story and plot, re-writing or removing excessive side storylines, and not overstretching the existing material.

The way that Peter Jackson ends The Return Of The King is a good example. After the ring is destroyed in the film’s climax and the characters return to the Shire, Jackson cuts out the entire ‘Scouring of the Shire’ storyline from the book - where the hobbits retake the Shire through another battle to end Saruman’s rule.

Adding this would have both extended the movie’s run time too much and gone against the classic three act structure of movies by introducing a second, inferior climax after the true climax of destroying the ring.

      

CREATING TONE

One of the most challenging parts of adapting an existing work to the screen is finding the correct tone that pays homage to the story’s intention: whether that’s creating a feeling of wonder, an uneasy suspense, or action.  

A number of filmmaking tools can be used to achieve this feeling: from the score to the set design to the lighting. An example of visually creating different tones can be seen in how cinematographer Andrew Lesnie, Peter Jackson and the rest of the crew created a unique look for each ‘realm’ or location - which also expressed an emotional tone.

The Shire is green, lush and characters are backlit with golden sunlight that is comforting, homely and natural.

Bree needed to feel a bit more aggressive with a sense of foreboding. So they pushed a yellow-green tint in the grade that made skin tones a bit more sickly and lit it with hard light sources with jagged shadows.

For the magical safe haven of Rivendell they pushed a comforting, warm, autumnal look in post production, lit scenes with softer, more diffused lighting and less intense shadows, and introduced digital diffusion that created a blooming, smudgy halation effect in the highlights - similar to what a strong Pro-Mist diffusion filter would produce.

Which again, contrasted heavily with the scenes in Mordor that tried to suck all life and vibrancy from the almost monochromatically neutral palette, lit by constantly gloomy, cloudy light.  

Each region carried its unique emotional tone not only through the visuals but also through the music.

CONCLUSION

Adapting fiction to the screen is beset by challenges: from bringing imagined imagery to life, to expressing the internal thoughts of characters, to restructuring and shortening the storyline, to creating an appropriate tone that aligns with the original source material.

Truly doing stories justice requires directors to have a clear vision, which they refine and structure with careful pre-production planning, unhindered by ulterior financial motives, which is then supported and executed by a superb cast and technical crew.

Gray Kotze

The 2 Ways To Shoot Car Scenes

There are two main ways of pulling off driving shots: with a process trailer, or with a poor man’s process trailer. Let’s break down how these two techniques work, the gear involved, and some reasons why filmmakers may choose one method over the other in different situations.

INTRODUCTION

A general rule in cinematography is the more variables a scene in a script has, the more difficult it is to film. 

Car scenes come with a lot of moving parts…literally. This presents some challenges. However, since over the years countless scenes have been written, set and filmed inside of moving cars, some standardised cinematographic methods have emerged to handle these situations.

In fact there are two main ways of pulling off these shots: one method is done practically with real locations and a rig called a process trailer, and the other way is achieved through a bit of filmmaking trickery and is called a ‘poor man’s’ process trailer.

So, let’s break down how these two techniques work, the gear involved, and some reasons why filmmakers may choose one method over the other in different situations.



PROCESS TRAILER

To be able to cut together a dialogue scene filmmakers need to shoot multiple angles of the scene being performed, multiple times. Therefore, it’s important that there is a high level of consistency among all of the different takes, so that when shots filmed at different times are placed next to each other there is an illusion that the scene is continuous and unfolding in real time.

This is why cars present a bit of a snag. Consistency over a long shooting period can be difficult when traffic is unpredictable, the background outside the window changes, the driver alters how fast or slow they are accelerating, and the lighting conditions morph as they drive past different areas that may cause shadows or different angles of light. 

Also, asking an actor to drive and perform dialogue at the same time can be a bit too much multitasking and diminish the performance, or even be dangerous as their attention to their driving will be compromised.

For this reason, car dialogue scenes shot while driving on roads are almost always done with a rig called a process trailer. Sometimes also called a low loader, this is a trailer with wheels and a platform big enough to fit a car on - which the actors sit inside - that can be towed by another vehicle. The car that is shown on camera is referred to as the ‘picture vehicle’.

Process trailers need to sit very low to the ground so as to give the illusion that the picture car is driving. If it is too raised then the perspective will be off.      

Most low loaders are designed with a front cab section that the driver sits in and a rear section behind the cab with mounting points for lights, a director’s monitor, space for essential crew to sit, gear to get stored, and generators to be mounted that can run power.

This front section then tows the process trailer which the picture car sits on. 

The actors sit inside the picture vehicle and the camera operator is placed on the trailer, usually outside the vehicle, and films the scene as the actors play out each take and pretend to drive. Meanwhile the actual driving will be done inside the front cab by a professional low loader driver.

Any camera operators, focus pullers or other essential crew that need to be on the trailer section have to be harnessed in for safety and contained by a barrier of mounted poles that grips build.

The route that will be driven by the low loader will be carefully planned ahead of production and will almost always involve getting permission and paying for permits from the local government. The authorities will often insist that a police escort drives in front of or near the process trailer, and may even require that some roads be closed or blocked off during shooting for the safety of the public.

To provide a consistent background and limit blocking off roads to a small area, the driving route will usually either be looped, or it will be a route that has a turning point at the end of it - which can be driven each time for multiple takes.

This turning point will have to be scouted in advance by the driver to ensure there is enough space to perform a u-turn with the elongated, low clearance vehicle.

There are a number of different shot options that can be used on a process trailer.

Often, cinematographers will shoot from a stabilised platform like a tripod or car mount which the grips can secure on the process trailer, outside the picture car. 

Common angles are shooting a two shot front on, through the windscreen, then punching in for close ups on each actor through their respective side windows.

The camera could also be rigged or shot handheld from inside the picture vehicle.

I’ve also witnessed some DPs who like to operate a handheld camera on an Easyrig, and position themselves just outside the open side windows - especially for car commercials.

If shooting through windows, DPs will almost always use a rota pola filter. As the name suggests, this polariser glass can be rotated by turning a wheel to the position where it minimises reflections from the glass. This allows the camera to see the actors inside the car without reflections blocking them.

Some complex camera moves beyond locked off frames can also be achieved, such as this one - done by shooting off a Scorpio 10, a small telescoping crane arm that is rigged to the process trailer and moved by a grip. It has a mini Libra head attached, which means the camera can be tilted, panned or rolled remotely by an operator using wheels.

In this case they achieved these tricky moves by removing the car’s side doors and shooting through the passenger side of the vehicle. They shot all the moves practically then later inserted the window’s glass and reflections with VFX.     

Cinematographers will often get their team to rig a fill light on the process trailer. Usually this is quite a soft light with some spread, such as Skypanels or an HMI with diffusion. This will lift the level of ambient light inside the vehicle which will be much darker than the bright levels of natural ambience outside, yet be soft and not too directional so as to mask that the shot is being lit at all. 

It’s also possible to use a car mount, where the camera is attached directly onto the car by grips through using a mounting mechanism that is usually secured through poles with suction cups that stick onto the car’s body, or with a rig called a hostess tray.

In this case, some actors may be able to do their own stunt driving without a process trailer for scenes without dialogue. It may also be possible to use a professional driver who stands in as a double for the actor, which requires shooting at an angle that doesn’t reveal too much of the driver’s identity.

POOR MAN’S PROCESS TRAILER

As nice as it is to shoot driving scenes for real by using a process trailer, this method does come with a number of disadvantages. For this reason, filmmakers also came up with a second, artificial method for capturing these shots - which is referred to as the ‘poor man’s process trailer’.

There are a few different methods of doing this, but basically it involves placing the stationary car in a low light environment, like a studio, then using lighting, giving the car a shake, and creating a simulated background to give the illusion that the actors are in a moving car. 

There are four main ways that this can be done: with projection, with lighting, with a green screen or with volumetric lighting from giant LED screens.

Although there are different nuances to each of these methods, they are set up in a similar way. First, a background is placed behind the area where the shot will take place. So if the shot is a close up of a driver, then a projection surface will be set up behind the picture vehicle in the same directional line that the camera is pointing. 

A projector will then hit that surface with a pre-recorded clip shot out of a moving vehicle, which plays for at least the length of a full take. It’s, of course, important that the background video was shot at the same angle as the shot you line up, and that it moves in the right direction.

The alternative to this is to replace the projection surface with an illuminated green or blue screen. Then in post production key the green and replace it with the video clip of the moving background. 

A higher budget version of these two methods is to use volumetric lighting - which is basically gigantic LED video panels that output a brighter luminance. Again, clips can be played by these video walls, which can usually be controlled by software that can also do things like defocus and shift the perspective of the video.

Because of the increased light output from these panels, you also get a more realistic interplay of light: any highlights in the video clip will produce brighter areas or reflections, and any changes in colour will change the colour of the light that hits the subject.

Once you’ve set your background, then it’s time for cinematographers to work on the lighting. How this is done will depend on whether it’s a day or night scene and is based on the discretion of the cinematographer. But, usually it will involve adding some kind of ambience to the scene, like a bounced source that softens and spreads the light while raising the general exposure levels. 

Then you’ll usually want to set up some moving light sources to simulate that the car is in motion. This can be done by loosening the lock off on a stand and swivelling a light around. Or, what I’ve found works quite well, is to get a few people to swing around some handheld LED tube sources. 

To get the feeling of camera motion to be realistic, I’ve also found that shooting with a handheld camera combined with getting people to randomly shake the picture vehicle tends to give a more accurate feeling of motion than shooting a locked off frame.


ADVANTAGES AND DISADVANTAGES OF THE PROCESS TRAILER

One of the biggest reasons why filmmakers choose to shoot car scenes in this way rather than with a real process trailer is budget. Shooting with a full police escort and locking off roads can become incredibly pricey.

The cost of renting out a studio, or even shooting outside in a parking lot, is going to be far cheaper than renting a low loader and paying for road closures and permits. 

Having said that, if you want to shoot with a large volumetric LED wall in a big film studio, then it quickly moves away from being a ‘poor man’s’ method and will probably be a similar financial spend to shooting with a real process trailer.

So, taking budget out of the equation, what are some of the advantages and tradeoffs of each method? 

Basically, shooting with a process trailer offers a realistic look, with little vibrations and real world, interactive lighting, that is difficult to match, while shooting with a ‘poor man’s process trailer’ offers much more ease and convenience.

It’s much easier to record clean dialogue when working in a soundproof studio environment than when working out on real roads. 

It’s also significantly faster and easier to change between setups when working in a studio, than when working from a low loader - which often requires driving the vehicle to a stopping point, then getting a full grips team to re-rig the camera so that it is safe and secured.

It’s also far more convenient for the director. When shooting in a studio they can clearly watch every shot on a monitor, and stroll over and give notes or have conversations with actors between takes. Whereas shooting on a process trailer usually involves more staggered communication and direction over a radio.

Studio environments also give cinematographers far more visual control. For example, when shooting outdoors the sun position will change over time, the light may go in and out of clouds during takes, changing its exposure and quality, and you may get unwanted reflections off the glass or hit a bumpy section of road that shakes the camera too much.

In a studio lights can be set to a consistent intensity and position, lens changes and camera moves are easy, as you work off a flat, unmoving surface - ensuring that take after take can be visually repeated in the same way.

Also any technical glitches are easy to fix right away, whereas if a focus motor slips or a monitor loses a transmitted video feed on a process trailer, you’ll have to bring the whole moving circus to a safe stop to fix the problem - which is time consuming.

A final disadvantage to working with a process trailer is that it forces you to face the camera towards the windscreen of the picture vehicle or shoot side on. 

Shooting from behind and looking out of the front windscreen requires renting a niche vehicle where the driver’s controls are placed behind the picture vehicle - which can be a lot of effort for a single shot, compared to just turning the car around inside a studio against a moving background.     

CONCLUSION

Despite the logistical challenges and inconveniences, some filmmakers with the budget still opt to shoot the real thing, prioritising the realistic visual nuances that come straight out of the box when working with a process trailer.

Gray Kotze

Cinematography Style: Néstor Almendros

Let's take a look at the influential work of cinematographer Néstor Almendros; specifically, his use of natural lighting, his taste for cinematic simplicity and focus on providing directors with his creative insights and knowledge of cinema.

INTRODUCTION

Although cinematographer Néstor Almendros did most of his work in France, often for groundbreaking French New Wave directors, like Éric Rohmer and François Truffaut, he is probably best known for his collaborations with Hollywood directors on some iconic movies in the 70s and 80s.

Directors were often drawn to his pioneering way of working with natural light, his taste for cinematic simplicity and a focus on providing them with his creative insights, beyond just his technical expertise.

In this episode let’s break down how his thoughts and philosophies on cinematography influenced his photographic style, and also take a look at some of the gear and techniques that he used to pull off some breathtaking images.    


PHILOSOPHY

Many of the filmmaking techniques that Almendros would later use on larger feature productions, were actually gleaned in his earliest explorations into cinematography.

Three of these concepts which appear as threads throughout his filmography are: his knowledge of movies, his focus on natural lighting and his push for cinematic simplicity.

He developed an early love for cinema - so much so that he eventually became a bit of a cinephile and started writing movie reviews. He cites this as one of the best educational resources for DPs, claiming that the technical side of photography can always be learnt, or executed by the film technicians and crew you work with, but that having a solid foundation in the films that came before, and an understanding of current trends in cinema and photography, is incredibly important.

In his early years he wanted to be a director. This made having a perspective on the narrative a must. It’s the job of the cinematographer to be sensitive to the needs of the story and have the necessary cultural background to draw from. 

An example of a movie that he was influenced by was the early Italian neorealist film La Terra Trema - shot by cinematographer G. R. Aldo. He was blown away by how Aldo used naturalistic lighting in a way that was very different from the other much more stylised and overlit movies of the time - which blasted hard, frontal key light at actors. 

Instead of shooting on studio sets and shining hard, spot lit key, fill and backlight at actors as was the norm, many of these Italian neorealist films used available, naturalistic light in real locations - light that might be hard with unflattering shadows under direct sun, soft and gentle under cloudy conditions, or the last remnants of dusk remaining in the sky after sunset.

His appreciation for naturalistic light may also have been influenced by the start of his filmmaking career shooting documentaries in Cuba - where he could only work with a camera and available light, as there was not enough budget to hire lights or a team of electricians.

To overcome the low levels of light inside some of the houses they would shoot in, they came up with the idea of using mirrors to bounce the sunlight that was outside into the house through windows then bouncing it off the ceiling. 

This technique of softening light by bouncing it became important later - but so too did the idea of mimicking the direction of natural sunlight by angling it through windows and boosting its strength.

In other words, taking an existing source of natural light and strengthening it by artificial means. 

Almendros inspired a major transition in thinking about lighting. Rather than being bound to film school concepts like three-point lighting, he instead wanted the lighting in his films, even when using artificial film lights, to be motivated by what the natural light sources - like the sun - does in real life. 

He talks about this in his autobiography:

“When it comes to lighting, one of my basic principles is that the light sources must be justified. I believe that what is functional is beautiful, that functional light is beautiful light. I try to make sure that my light is logical rather than aesthetic.”

Another aspect to Almendros’ work is an appreciation for simplicity both in practical, technical terms as well as aesthetic terms. Again, this may perhaps have evolved from the beginning of his career in documentary and on low budget films, where he became accustomed to making do with a lack of resources.

A great example of his economical way of working was his first narrative feature: La Collectionneuse, which he shot for French New Wave director Éric Rohmer. Because of an extremely limited budget they were faced with a choice early on: either shoot on the less expensive 16mm, or shoot extremely economically on 35mm.

They went with 35mm. 

A shooting ratio refers to how much footage was shot in relation to the length of the finished movie: so if 20 hours of footage was shot to complete a 2 hour movie then the production had a shooting ratio of 10:1. La Collectionneuse had an insanely low shooting ratio of 1.5:1. This meant that they shot only 1 take for most shots.
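The shooting-ratio arithmetic above is simple enough to sketch. The figures below are the ones quoted in this section; the helper function name is just for illustration.

```python
def shooting_ratio(footage_hours, runtime_hours):
    """Return the shooting ratio: footage shot vs finished runtime."""
    return footage_hours / runtime_hours

# 20 hours of footage for a 2-hour movie -> a 10:1 ratio
print(shooting_ratio(20, 2))   # 10.0

# At La Collectionneuse's 1.5:1 ratio, a 90-minute film would allow
# only about 135 minutes of exposed footage in total.
print(1.5 * 90)                # 135.0
```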

Working in such an economical way has a few advantages: it forces directors to think about exactly what they want and have a refined vision. As Almendros says: “the problem is that when there are many options there is a tendency to use them all.”

Even later on in his career when he was working with larger budgets on Hollywood productions, he always gravitated to finding the simplest method using the tools that would most easily produce an image with functional, realistic light that told the story.

For example, not using a big truck full of lights and a large team of electricians, if he could get a more authentic image from only using natural light.

Visually, many of the films he shot also have a certain stylistic simplicity to them. Although of course it depended on which director he worked with, he often shot quite deliberately composed, static frames without many dynamic camera moves. This was especially true when working with Rohmer who liked shooting stationary frames from a tripod head.

So, Almendros can be characterised by his cinematic knowledge, naturalistic lighting, and simplicity - but how did he translate that over with the techniques and gear that he used?

GEAR

Nowadays the default for most cinematographers is to base their placement of lights and quality of illumination on the real life sources that exist in the location.

As I mentioned before, that wasn’t always the case. The prior standard was that actors should almost always be well illuminated in clear, strong pockets of light, usually placed in front of the talent, above them, and shined directly at them. This clearly illuminates the face without shadow - however, it isn’t what light does in real life.

Compare this shot of how actors used to be lit when placed next to a window, to how Almendros did it in Days Of Heaven. There are two big takeaways. 

Firstly, Almendros places the light source outside the window, shining in, mimicking the direction that the sun would in real life. While the other shot keys the actor with a high, frontal source of light - that doesn’t make sense in the real world.

Secondly, the quality of the light is different. Almendros uses a much more diffused light that is far softer with a natural, gentle transition from shadow to brightness. While the other example has a very clear, crisp shadow caused by very strong, undiffused artificial light.

A technique he often used to get this soft quality of light when shooting interiors was to bounce lights, often from outside a window, into the ceiling. This reflected the source around the room, decreasing the intensity of the light, but lifting the overall ambience in the room in a natural way.

He liked using strong sources with high output to create his artificial sunlight for interiors on location or in studio sets, such as tungsten minibrutes, old carbon arc lights, or, later on, HMIs.

Although he is known for his use of naturalistic soft light in movies like Days Of Heaven, he also did use hard light at times when it was functional and could be justified by a realistic source.

He also often favoured lighting with a single source - meaning one lighting fixture which pushed light in a singular direction. He often did this by using practical light fixtures - like lamp shades with tungsten bulbs - and not adding any extra fill light to lift the exposure levels in the space. 

In Days Of Heaven he even transferred this idea to the oil lamp props, whose flames were replaced with electric quartz bulbs shining through orange tinted glass - wired under the actors’ shirts to belts of batteries they could wear.

These innovative solutions led the way to what is nowadays easily done with battery powered LEDs.

He paired this warm practical light with another lighting technique he would master - exposing for very low levels of ambient dusk light in the sky. 

Days of Heaven is probably best known for using this dusk light known as “magic hour”, but it’s actually something that he’d been doing since his first feature.

This was especially difficult, as for most of his career he worked with a Kodak film stock that had a very low ASA rating compared to today’s standards. Kodak 5247 is a tungsten balanced film that was rated at only 125 EI - which is almost 5 stops slower than a modern digital cinema camera like the Sony Venice 2 that can shoot at 3,200 EI.

To expose at these extremely low levels of natural light he would rate the 125 ASA film at 200 ASA on his light meter - 2/3rds of a stop underexposed. As it got darker he would then remove the 85 filter - which normally corrects daylight sources for tungsten balanced film, but also darkens the image by one stop.

Then as it got progressively darker he’d also change to lenses with a faster aperture that let in more light, ending wide open on a Panavision Super Speed T1.1 55mm - poor focus puller.

If they needed to push things even further into low light he would sometimes even film at 12 or 18 frames per second and change the shutter from 1/50 to 1/16. In this case they’d also ask the actors to move more slowly than usual, to mask the otherwise sped up feeling of motion you’d get - wringing out the final moments of available natural light before everything became dark.
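Each of these tricks buys light in stops, where one stop doubles the light reaching the film. A rough tally of the two quantifiable gains mentioned above - the starting aperture of T2.8 here is my illustrative assumption, not a figure from the production:

```python
import math

def stops(light_ratio):
    """Convert a ratio of light to stops (each stop doubles the light)."""
    return math.log2(light_ratio)

# Opening up from, say, T2.8 to T1.1: exposure scales with the
# square of the T-stop ratio.
aperture_gain = stops((2.8 / 1.1) ** 2)

# Slowing the shutter from 1/50s to 1/16s per frame:
shutter_gain = stops((1 / 16) / (1 / 50))

print(round(aperture_gain, 2))  # roughly 2.7 stops
print(round(shutter_gain, 2))   # roughly 1.6 stops
```

Together that is over four stops of extra light, which is why the motion had to be masked by slowing the actors down.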

However, when he wasn’t working with the smallest amounts of available light, Almendros actually preferred to not shoot with a wide open aperture. He felt the best depth of field was slightly stopped down, so that the background wasn’t a complete blur and could still be made out, yet was slightly soft so as to isolate the characters from the frame and make them stand out.    

He also innovated other DIY tech that could be used to simulate naturalistic lighting. For example, he ignited flame jets attached to gas tanks, which could be easily handled and had a controllable flame. These could be brought near actors to naturalistically illuminate them for scenes involving fire, rather than using electric lights - which was standard practice before.

To achieve a wide shot of locusts flying off in-camera, without post production visual effects, Almendros again pulled a technique from his knowledge of cinema - this time a movie called The Good Earth. They suspended helicopters just above the shot and released seeds and peanut shells, then to get the effect of the insects taking off they got the actors to perform their actions in reverse. 

This was shot on an old Arriflex that could run film backwards, so that the footage would later play the original action in reverse.

This innovation also extended into camera movement. Days Of Heaven was the first film to use the Panaglide - Panavision’s lightweight alternative to the Steadicam, which could be used to get sweeping, tracking shots with actors over uneven natural terrain.  

CONCLUSION

Much of what we take for granted in cinematography today - shooting in low light, using practical sources, and thinking about motivated natural lighting rather than three-point lighting - was pioneered or aided by Almendros’ innovations.

He was able to use his knowledge of cinema to inform his taste and storytelling techniques, then pushed established technical boundaries and ways of thinking to make his cinematography extremely beautiful but also extremely influential.


What Makes Anamorphic Lenses Different?

Let's dive a bit deeper into how anamorphic lenses work and what makes them different from regular spherical glass.

INTRODUCTION

Choosing a set of lenses is one of those tools in a cinematographer’s toolbox that can influence both how stories are visually presented and how they are practically shot.

Anamorphic glass is different to normal spherical lenses in many ways, so much so that shooting anamorphic is considered a different format altogether.

But what is it that makes these lenses different? Let’s do a bit of a deep dive.



WHAT ARE ANAMORPHIC LENSES?

When light passes through an ordinary lens, it captures an image which is correctly proportioned and can be used straight away. Anamorphic lenses are a bit funky: because of their differently shaped, cylindrical glass elements, light passing through them gets squeezed, producing a compressed image.

Kind of like how fairground mirrors can squeeze reflections to make you look long and lanky.

This contorted format was invented for two reasons: to create a widescreen aspect ratio, while at the same time maximising the quality or detail in the image. To understand how this works we must quickly dive into the origins of how old 35mm film cameras worked.

35mm film runs vertically through a camera, capturing the full width of the negative at a height of four perforations. This is all well and good when shooting in the old, tall Academy aspect ratio, but what if you wanted to film and present images in a widescreen format?

Well, you could take that taller frame and chop off the top and bottom with a mask. But that meant wasting a lot of expensive film on areas that would just be blacked out. And because the full height of the negative was cropped, the recording area was smaller - which decreased the clarity and quality of the recording and increased the amount of film grain you’d see.


SQUEEZE FACTOR

Anamorphic lenses fixed this by squeezing the image so that it covered the full height and width of each four-perf frame - the entire negative. When screening the film in a cinema, an extra anamorphic lens was attached to the projector to desqueeze the image - reverting it to its correct proportions by stretching it out by the same amount that the taking lens originally squeezed it.

The amount that the lenses compress the image is called the squeeze factor. This refers to the ratio of horizontal to vertical information captured by an anamorphic lens. Regular spherical lenses that capture normal looking images have a factor of 1x, where the horizontal and vertical information is the same, while anamorphic lenses usually have a squeeze factor of 2x. This means that twice as much horizontal information is squeezed into the image as vertical information.

Although a 2x factor is the norm for anamorphic glass, there are also some lenses with different squeeze factors out there - which we’ll get to a bit later.
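The arithmetic behind these formats is just multiplication: the captured aspect ratio times the squeeze factor gives the desqueezed ratio. A quick sketch - the sensor ratios here are nominal, and in practice small crops of width or height take either result to the modern 2.39:1 delivery standard:

```python
def desqueezed_ratio(sensor_width, sensor_height, squeeze):
    """Aspect ratio after stretching the image back out horizontally."""
    return (sensor_width / sensor_height) * squeeze

# 2x anamorphic on a 4:3 (four-perf style) capture area:
print(round(desqueezed_ratio(4, 3, 2.0), 2))    # 2.67

# A milder 1.3x squeeze on a 16:9 sensor:
print(round(desqueezed_ratio(16, 9, 1.3), 2))   # 2.31
```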



DESQUEEZE

Now that all post production, and almost all cinema projection, happens digitally rather than with film - the method for desqueezing footage has also changed.

You can now import files shot with anamorphic lenses into editing software, and apply settings to desqueeze the footage digitally, for example by a factor of two, to make the images uncompressed in a native widescreen aspect ratio.



ASPECT RATIO

Although the exact aspect ratio for anamorphic capture and projection may change ever so slightly depending on a few factors, it will usually be either 2.35:1 or, nowadays, 2.39:1 - which is commonly rounded up and referred to in general terms as a 2.40:1 aspect ratio.


SUPER 35

Earlier I mentioned that an alternative method of getting to this widescreen aspect ratio is shooting with spherical lenses with a 1x factor and cropping off the top and bottom of the frame.

Although this method yields less surface recording area and slightly diminished quality and resolution of detail, it is still a very commonly used format over anamorphic for a number of reasons.

Because widescreen Super 35 records extra information on the top and bottom of the frame, this can be useful in post for things like CGI, stabilising the image with software, or cropping out unwanted things by reframing up or down.


SPHERICAL VS ANAMORPHIC CHARACTERISTICS

Spherical lenses are also usually sharper across the width of the frame, meaning that details on the edges of the shot that are in focus will remain sharp. Anamorphic lenses have a sweet spot in the middle of the frame that will be sharp when in focus, while the edges of the frame will resolve detail less sharply. 

Another difference between spherical and anamorphic lenses is how their bokeh is rendered, which is the out of focus area in an image. Because the glass elements in 1x spherical lenses are rounded they produce rounded balls of bokeh.

However, anamorphic lenses, with their cylindrical elements that squeeze the image, create bokeh which takes on more of an oval shape. This shape is also affected by where the cylindrical glass element is placed within the lens. 

Most true anamorphic lenses place the cylindrical element at the front of the lens, with regular circular elements behind it. These are called front anamorphics and produce that classic oval distortion in the background.

There are also rear anamorphic lenses, which instead place the cylindrical element at the back of the lens, with the rest of the circular elements in front of it. This is often done to create Frankenstein anamorphised zooms, which take an existing spherical zoom lens and add a rear anamorphic element to the back of it.

This has the same effect of squeezing the image, however rear anamorphics often lose the oval bokeh shape, which becomes a bit more rounded or even, in some cases, rectangular.

Probably the most defining characteristic of anamorphics is their flare. When direct, hard light enters these lenses it produces a horizontal flare across the width of the image - which is usually quite pronounced.

Spherical flares tend to generally be a bit tamer and subtler and flare in a more circular way, rather than horizontally.


ANAMORPHIC IN A DIGITAL WORLD

Many digital cinema cameras use a sensor modelled approximately on the size of a Super 35 negative, with a 4:3 ratio. This means that most existing anamorphic lenses, which were designed for four-perf film, are able to cover the width of these digital sensors without vignetting.

Like on film, these 2x anamorphic lenses could cover the full height and most of the width of the sensor, filling a greater overall surface area than shooting a cropped Super 35 image with spherical lenses.

However, not all digital sensors used such a tall 4:3 ratio. Some sensors were designed to be more of a 16:9 size. There are some anamorphic lenses with a different 1.3x squeeze factor, instead of the standard 2x squeeze, that cover these wider sensors and still produce a widescreen image with a 2.40 aspect ratio.     

In recent years, full frame and large format digital cameras have seen a surge in popularity. Because these sensors are significantly larger than Super 35, most 2x anamorphic lenses don’t have glass elements wide enough to cover them - so the sensor sees inside the lens barrel and the image vignettes.

Anamorphic lenses with various squeeze factors have been designed to cover these format sizes, from 1.3x to 1.8x.



LIMITATIONS

A limitation of shooting anamorphic on large format or full frame is that there is a smaller selection of lenses to choose from, and this glass is typically more expensive than comparable spherical options.

Spherical prime sets also usually come with far more focal lengths to choose from. For example, a modern set of spherical primes like the Master Primes comes in 16 focal lengths, whereas a modern set of anamorphic primes like the G-Series comes in eight.

This sometimes means that cinematographers like to pair a set of anamorphic primes with a longer zoom - which may either be a rear-anamorphised zoom, as mentioned before, or a front anamorphic zoom like the Panavision 70-200mm.

Another potential limitation of anamorphics, especially front anamorphic zooms, is that because of their more complex design and increased number of glass elements, they usually have a slower stop than their spherical peers - which renders a tad less bokeh and makes it more difficult to shoot in very low light conditions.

They are also on average physically larger and heavier than spherical lenses - with the gigantic Primos being a great example of just how hefty a prime anamorphic lens can get. Although many lighter alternatives do also exist.

Close focus can also be an issue. The extra glass in anamorphic lenses means that the MOD, or the closest point that the lens can render an object in sharp focus, is usually not very near. 1x spherical glass is normally far better at this. So, if filmmakers want to shoot an extreme close up on an anamorphic lens, they will need to use an extra diopter filter which allows them to achieve more of a macro focus.

Visually, anamorphics produce more distortion, with the wider focal lengths, around 40mm and wider, bending the edges of the frame - which is especially noticeable when shooting something with a straight line like a door frame. 

WHY CHOOSE ANAMORPHIC LENSES?

So, broadly, spherical lenses offer a greater practical flexibility to cinematographers, while anamorphic lenses offer a specific look, in exchange for a few practical tradeoffs.

Overall, DPs who like a clean look to their footage - sharp across the frame and free of aberrations or distortion - often go with spherical glass and crop to get a wide aspect ratio. Whereas those looking to add a touch more visual character, often to counteract the sharpness of modern high res digital cameras, may prefer the look of older anamorphic lenses.

Having said that, there are exceptions to this. Old, vintage spherical lenses exist which offer a lot of imperfections, as well as modern anamorphic lenses which are very sharp and clean. 

In the end, anamorphic lenses can give projects a look that has long been considered classically cinematic, with their oval bokeh, lateral flares, falloff and native widescreen ratio. However, this does come with a few practical tradeoffs which may need to be considered by filmmakers.


Mixing Film And Digital Footage: Killers Of The Flower Moon

Let's break down the cinematography - specifically the use of colour and LUTs - in Killers Of The Flower Moon.

INTRODUCTION

Colour is a tool that plays a crucial role in cinematography and can be manipulated to craft a bunch of different looks. 

Some of these looks can be pretty heavy handed, like the quote unquote ‘Mexican filter’ - which punches up the warmth every time the world of the story moves south of the US border.

Traffic (2000)

But other applications of colour, like in Killers Of The Flower Moon, are a bit more subtle, yet have an unconscious effect on how the story is taken in by audiences.

The workflow behind this cinematography combines old school thinking with new school technology. This comes from the choice to shoot on both film and digital cameras in different situations, and by thinking about LUTs and digital colour correction in terms of old photochemical techniques.

So, let’s explain how these principles work by breaking down the cinematography in Killers Of The Flower Moon.

MIXING FORMATS

Killers Of The Flower Moon responds to the long discussed topic of film versus digital with ‘Why not both?’ This was a decision influenced by DP Rodrigo Prieto who needed to think not only about how this choice affected the look and feel of the movie, but also how it would practically influence the way they shot it.

“Part of the role of the cinematographer is to find ways so that the camera and the lighting really represent what they are feeling. What’s the energy of the scene? What should it feel like? A lot of the discussions were about that.” - Rodrigo Prieto, Cinematographer

For scenes that featured Osage characters or took place in that world, Prieto wanted to represent colour in a way that felt natural, which, to him, was best achieved by using a film negative with a film print.


COLOUR NEGATIVE FILM

The main film he used was Kodak’s 35mm 250D 5207 stock, which was used for scenes that took place during the day. Because 250D can be shot at a higher ASA rating than Kodak’s other daylight stock, less light is needed to expose it - which makes it possible to shoot some interiors as well as exteriors on the stock.

He swapped this out for the less sensitive 50D stock during Osage ritual scenes, when he wanted shots to have a tighter, less noticeable film grain. He also sometimes used 500T when filming night interiors under tungsten light.

They ran this film through Arri’s newest 35mm cameras, the Arricam ST and LT - the LT being better for more compact rigs like Steadicam or remote head shots, and the ST being able to record at a higher 60fps versus the 48fps of the LT.

For one extreme slow mo sequence they also pulled out an old Arriflex 435 to shoot at 150FPS in the wide and a digital Phantom to shoot the tighter shot at 700FPS.  


BLACK AND WHITE FILM

A few select frames which needed to look like old archival footage were also shot on Kodak’s 5222 black and white film stock. To push this newsreel look even further without using visual effects or post production work, they shot the film on Scorsese’s own 1917 Bell & Howell 2709 camera.

It needed to be hand cranked to move the film past the gate, which meant the frame rate would change depending on how fast or slow the handle was turned. This gave the footage that unusual feeling of motion that is associated with the format.

So, apart from these exceptions, if 35mm felt like the right format for most of the movie then why did they also shoot some scenes with a digital cinema camera?


SONY VENICE

This mainly came down to a technical limitation of film. The most sensitive colour negative film stock that Kodak produces is rated at 500 ASA and is balanced for tungsten light. This pales in comparison to the low light capabilities of a camera like the Sony Venice 2 that has a dual base ISO of 3,200.

Since certain scenes needed to be shot in the blue hour - the small window of light after the sun has set but before the cool, ambient light disappears altogether - they needed a camera that could shoot and expose with a very small amount of natural light.

Having almost three extra stops of sensitivity meant that the Venice could maximise the shooting window after sunset and continue to shoot scenes quite late as the natural ambience started to disappear.

This was especially useful as they were shooting on Panavision T-series anamorphic lenses with a stop of T2.3 - not a particularly fast lens that lets in a huge amount of light - unlike faster spherical glass such as the Master Primes with their T1.3 aperture.
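Both of those gaps can be expressed in stops, where one stop is a doubling of light. A quick check of the figures quoted above:

```python
import math

def iso_stops(iso_a, iso_b):
    """Stops of extra sensitivity going from iso_a to iso_b."""
    return math.log2(iso_b / iso_a)

def tstop_stops(t_slow, t_fast):
    """Stops of extra light from opening a lens from t_slow to t_fast.
    Exposure scales with the square of the T-stop ratio."""
    return math.log2((t_slow / t_fast) ** 2)

# 500T film vs the Venice 2's 3,200 base ISO -> "almost three stops"
print(round(iso_stops(500, 3200), 2))   # ~2.68

# T2.3 anamorphics vs a T1.3 Master Prime
print(round(tstop_stops(2.3, 1.3), 2))  # ~1.65
```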

So the Sony Venice was chosen instead of 35mm to shoot scenes that took place in low light such as dusk scenes where they only used natural ambient light or wide night exterior scenes. 


4 FILM LUTS

Although this traditional 35mm Kodak colour film was the basis of the look, it did deviate quite a bit from there to serve the story thanks to being combined with a more modern post production workflow.

Scorsese and Prieto did this by coming up with four different LUTs - pre-established looks that can be applied on top of the footage to affect its colour, saturation and contrast - kind of like adding a filter in consumer photo editing software. They used these different LUTs to support certain feelings at different points in the movie.
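A LUT is literally that: a lookup table mapping input code values to output values, pre-computed so that applying the look is just a table read per pixel. A toy one-dimensional example - real film-emulation LUTs are 3D, mapping whole RGB triplets, and the contrast curve here is made up purely for illustration:

```python
# Build a toy 1D "contrast" LUT for 8-bit values: an S-curve that
# darkens shadows and lifts highlights around mid grey.
lut = []
for v in range(256):
    x = v / 255
    y = 3 * x**2 - 2 * x**3          # smoothstep S-curve
    lut.append(round(y * 255))

def apply_lut(pixel):
    """Look each channel up in the table - no maths at apply time."""
    r, g, b = pixel
    return (lut[r], lut[g], lut[b])

print(apply_lut((64, 128, 200)))     # shadows pushed down, highlights up
```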

Instead of basing looks on simple grading ideas like warm versus cool, saturated versus desaturated, or high versus low contrast, they conceptualised and implemented these LUTs using old school concepts - drawn from how cinematographers used to select, process and colour time film before digital cinematography.

The first of the four looks that they created was a ‘normal film’ LUT that precisely emulated Kodak 5219 colour negative film stock and produced natural filmic colour. 

This was used for early scenes with Osage characters and could be applied to the digital footage shot on the Venice or the 35mm film scans to achieve the same regular film development and print look.

The second LUT, mainly used to present the European characters and world, tried to emulate the look of very early colour still photography processes such as Autochrome and Photochrom, which were developed in the early 1900s.

Autochrome has more desaturated colour than modern colour negative film: a brighter, more washed out violet hue in the skies caused by more magenta in the blues; desaturated greens; and yellows and reds that take on an orange hue.

The third LUT - used in the latter half of the film once the story gets darker and bleaker - was based on a photochemical development process created by Technicolor called ENR. This is fairly similar to bleach bypass development and creates less saturated colour across the image and increased contrast.

The final look emulated three-strip Technicolor which has a more vividly rich, saturated colour, with deeper, punchy reds and greens - and was used for more magical, experimental moments in the story or the separate coda scene which breaks away from the main narrative timeline.

Oh and one scene also used a day-for-night LUT to make footage that was shot under direct sunlight during the day appear to be night time footage.


POST WORKFLOW

These LUTs were created by senior colourist Yvan Lucas at Company3, in collaboration with Prieto, before production began. Once principal photography started, they would receive either RAW files from the Venice, or processed and scanned digital files from the developed 35mm film.

The dailies colourist would apply the appropriate LUT to the footage in Baselight grading software and then proceed to do minor colour correction where necessary using another old school way of working called ‘printer lights’. 

This copies the old method used to colour time 35mm footage from the lab, by only adjusting primary colour and exposure variables. This could be done through software by raising or lowering the offset which is the overall exposure of the image, increasing or decreasing the values for the red, green and blue colour channels, and increasing or decreasing the cyan, yellow and magenta values.

Basically, making broader adjustments to colour and exposure without using more nuanced correction or masking tools.
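In spirit, printer-light style correction is just a global exposure offset plus simple per-channel gains - no masks, no secondaries. A conceptual sketch of that idea (the function and parameter names are mine for illustration, not Baselight’s actual controls):

```python
def printer_light(pixel, exposure=0.0, red=1.0, green=1.0, blue=1.0):
    """Apply a global exposure offset (in stops) and per-channel RGB
    gains to one normalised 0-1 pixel. Lowering a channel's gain pushes
    the image toward its complement (e.g. less red -> more cyan)."""
    gain = 2 ** exposure                 # +1 stop doubles the values
    clamp = lambda v: min(max(v, 0.0), 1.0)
    r, g, b = pixel
    return (clamp(r * gain * red),
            clamp(g * gain * green),
            clamp(b * gain * blue))

# Half a stop brighter, with the blues pulled down slightly:
print(printer_light((0.30, 0.40, 0.50), exposure=0.5, blue=0.9))
```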

Scorsese and his editor worked with these dailies while cutting. Then, right at the end of the process, once they had the final cut, they could come in and do more nuanced tweaks in the final grade - doing things like isolating secondary colours or adding power windows and masks.


LUT COMPARISON

The process of subtly affecting the colour will likely go unnoticed by 99% of the audience as the movie plays. Which is what you want - you don’t want it to be so overt and obvious that changes in colour become distracting.

However, if we compare these different LUTs side by side, especially the starting Kodak look and the final ENR look we can see that they are drastically different. 

The first normal film LUT is applied when the story is more comfortable, about the environment and follows the practices of Osage characters. The colours feel natural, rich and real with nice contrast, strong, deep greens and a comfortable range between the deepest blacks and brightest highlights - which shows detail in the shadows.

As the story progresses and we delve into the moral darkness, desperation and corruption of the characters’ behaviour, the colour and life start getting, quite literally, drained away from the image.

Compare the Kodak look to the ENR look in this frame which is shot in a similar lighting and colour environment. All that deep, lovely natural colour in the greens and blues, and healthy magenta in the skin tones has been completely sucked out and replaced by warm, very desaturated tones. 

Likewise, the contrast has been increased in the ENR LUT so much that the sky is almost blowing out to a pure white and the parts of the image in the shadow have lost almost all detail - which overall feels harsher and less natural.

This circles back to Prieto’s initial commentary on cinematography. That the images filmmakers create should represent the energy and feeling of the characters in the scene as the story progresses.


How Oppenheimer Reinvented Imax

The process behind creating Oppenheimer.

INTRODUCTION

There’s a famous saying that ‘life imitates art’. 

I’d say this rings true for the process of creating Oppenheimer - where the story of a team’s struggle to push the bounds of science to create a new technology was mirrored by the film crew’s very own push towards creating new Imax camera tech to tell that very story.

This included advances in lighting, lensing as well as the creation of a new 65mm black and white film stock. So, let’s break down some of the technical advancements that influenced the visual storytelling in Oppenheimer.     

HOW IMAX WORKS

If you’ve seen this film in an Imax cinema it’s difficult to deny that Imax is a truly gorgeous format. What makes it different from shooting on other camera systems is the size of the negative that it captures images on. Basically Imax takes the image capture area of a traditional 35mm camera and supersizes it - flipping the 65mm film sideways to create a negative that is over eight times larger.
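That "over eight times" figure roughly checks out against the commonly quoted camera apertures - the dimensions below are approximate, and exact figures vary slightly by source:

```python
def area_mm2(width, height):
    """Negative surface area in square millimetres."""
    return width * height

imax_15perf = area_mm2(70.41, 52.63)   # 15-perf 65mm IMAX, approx.
s35_4perf = area_mm2(24.89, 18.66)     # 35mm full aperture, approx.

# How many times larger the IMAX frame is:
print(round(imax_15perf / s35_4perf, 1))
```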

This gives Imax footage an unprecedented resolution of detail, low film grain, a wide field of view and makes it possible to shoot with an extremely shallow depth of field - that can give close ups and wide shots alike a very blurry background.

However, pushing the bounds of photochemical image capture comes with a number of downsides.


WHY SHOOT ON IMAX

But, before we delve into these issues, why shoot what is effectively an intimate, dramatic portrait of a character on Imax in the first place? 

Well, director Christopher Nolan and cinematographer Hoyte Van Hoytema like to first begin by identifying the creative needs of the project, then start unpacking the technical solutions and innovations that they can come up with to make those artistic decisions a reality. 

In the words of Van Hoytema: "Large format photography gives clarity and places the audience in the reality you are creating for them." So, Imax was used as a creative solution to immerse the audience not only in the action sequences and setpieces, but to intimately envelop them in the face and performance of the protagonist, and get them caught up in his psychological state and emotions.

Using Imax to capture every nuance of detail on a face, rather than just for aerial landscapes and explosions. 

SYSTEM 65 VS IMAX

However, if you’re observant you may notice that in Oppenheimer, and in fact all of Nolan’s films which use Imax, different shots alternate between a widescreen aspect ratio - with a black letterbox on the top and bottom, and shots with a taller frame.

These two different aspect ratios reflect the two different film formats used to shoot the movie. 

All the widescreen shots were photographed with a 2.20:1 aspect ratio on a 5-perf Panavision System 65 camera, which runs 65mm film vertically through the gate. 

While the taller shots were done on a 15-perf Imax camera that runs 65mm film horizontally and uses the largest possible recording surface area. Either presenting the full 1.43:1 Imax negative, or, in most distribution cases, cropping off the top and bottom to get to a 1.90:1 ratio.
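The crop from the full Imax negative down to the common 1.90:1 presentation can be sketched numerically. This is a toy calculation of my own, assuming the crop keeps the full width and letterboxes the height:

```python
def height_retained(native_ratio, cropped_ratio):
    """Fraction of the negative's height kept when a taller native
    aspect ratio is cropped to a wider one at the same width."""
    return native_ratio / cropped_ratio

kept = height_retained(1.43, 1.90)
print(f"A 1.90:1 crop keeps about {kept:.0%} of the 1.43:1 frame height")
```

In other words, the 1.90:1 distribution crop discards roughly a quarter of the full Imax frame’s height.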

Some scenes even intercut between these two different ratios. Why is this? Why not just shoot everything in Imax? The answer is because of sound.   



SOUND ISSUES

Running so much film through the Imax camera so quickly means that it makes a huge noise. This gets picked up by the microphone that records the dialogue on set.  

So if you watch a few scenes, you’ll notice that moments with dialogue tend to be shot in the widescreen ratio on the quieter 5-perf 65mm camera, while moments without synchronised dialogue - which can be covered by sound design, ADR, or even audio denoising in post - are shot on the noisy Imax camera.

Sometimes Nolan even cuts from reaction shots filmed on Imax to dialogue shots filmed on 65mm during the same scene. 




CUSTOM LENSES

Another technical snag they ran into stemmed from Nolan wanting to visually present Oppenheimer's abstract idea of what was happening on a subatomic level. 

Again, they started with the creative need - shooting practical microscopic visual effects - and from there came up with a technical solution that involved creating a new, custom prototype macro snorkel lens for Imax.
This was designed by lens aficionado Dan Sasaki at Panavision, who was tasked with custom engineering a waterproof, wide angle macro lens. He managed to produce a 24mm and a 35mm - with the 24mm lens able to focus within a ridiculous 1mm of the lens's front element.

Part of the reason why these lenses didn’t exist before and why they were incredibly difficult to produce is because Imax glass needs to be very wide in order to avoid vignetting - seeing inside the barrel of the lens - which creates black circular edges around the image.

The solution to this is to make the diameter of the glass elements larger. However, because it was a probe lens there was a restriction on the width of the elements. Sasaki managed to solve this by adding an extra five glass elements inside the barrel of the lens.

Creating this new lens resulted in some spectacular practical effects: slow motion, 48FPS Imax macro shots of spinning beads on a wire rig and burning thermite, shot underwater in a tank and hit with a single, strong, hard light source.

For some extreme slow mo shots they also used a 35mm Arriflex 435 - which could shoot at a higher frame rate of 150FPS.

NEW FILM STOCK

Like his early film Memento, Oppenheimer uses black and white and colour as a way to delineate different timelines and perspectives within the narrative. Nolan went so far as to write whether each scene was in black and white or in colour in the screenplay.

Again the starting point was the creative decision to present Oppenheimer’s perspective in colour and another character, Lewis Strauss’, perspective in black and white.

Once again this led them to a technical snag. Kodak had never cut and finished their Double-X 5222 emulsion in 65mm before. It had only been produced for 35mm and 16mm.

Double-X has lovely contrast and a strong, dancing grain in the midtones that is much more pronounced than if they were to shoot on colour Vision3 stock and convert it to black and white in post.

So, with some work, Kodak managed to manufacture their film stock in the larger gauge size that was needed for 5-perf as well as 15-perf Imax for the first time. 

It also meant that a change needed to be made to the Imax camera. Because monochrome negative lacks colour negative’s remjet backing layer, it picked up exposure artefacts and scratches from the polished chrome pressure plates that came into contact with the film inside the Imax camera.

To fix this Imax manufactured a custom ‘black oxide’ vacuum pressure plate that eliminated the issue. 

Fotokem, the film lab that processed the black and white stock, also had to adapt. They spent three days shutting down their 65mm colour development equipment, cleaning it and converting it to the chemicals needed for black and white.

This turnaround meant that the production had to schedule the shooting of the black and white scenes around the lab work - sometimes waiting up to a week before being able to view the processed rushes.

TWO VISUAL APPROACHES

On top of presenting the perspectives of the two characters in colour and black and white, Nolan and Van Hoytema also employed other cinematographic techniques to distinguish them.

If you look at this scene and this scene you can clearly see through the use of shot size and angles whose perspective each takes on. They use close ups of the character of importance, followed up with over the shoulder shots, taken from their perspective to put the audience directly in their shoes.

On top of this, they more often used a handheld camera and wider 40mm or 50mm focal lengths to shoot the more uninhibited Oppenheimer, while using more static, composed shots on a longer 80mm lens to shoot the more calculated Strauss.

LIGHTING INNOVATIONS

The team’s technical innovations weren’t limited only to the camera, but also to the lighting.  

To save time and give them the ability to dial in specific colour and brightness values, van Hoytema’s gaffer rigged the lights with latency free transmitters and receivers that fed to a DMX board. 

This meant that the electrical team could rig the lights, then van Hoytema and his gaffer could be on set and adjust all of them by dimming, adjusting colour temperature or even RGB values, from the DMX board. Even if the light was rigged blocks away. 

This was useful for perfectly balancing and matching up the colour balance of the artificial light sources with the natural light that was coming from the same direction. 
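DMX itself is essentially just an ordered list of up to 512 channel values per universe, each an 8-bit number from 0 to 255. As a rough illustration of the kind of board-side adjustment described above - with a fixture channel layout I’ve invented purely for the example, not the production’s actual setup - a sketch might look like this:

```python
# A DMX universe is 512 channels, each holding a value from 0 to 255.
universe = [0] * 512

# Hypothetical fixture profile: which channels one rigged light occupies.
# (These channel assignments are invented for illustration.)
FIXTURE = {"dimmer": 0, "cct": 1, "red": 2, "green": 3, "blue": 4}

def set_channel(universe, fixture, name, value):
    """Write one 8-bit control value into the fixture's channel."""
    if not 0 <= value <= 255:
        raise ValueError("DMX channel values must be 0-255")
    universe[fixture[name]] = value

# Dim the light to half and warm it, without anyone touching the rig.
set_channel(universe, FIXTURE, "dimmer", 128)
set_channel(universe, FIXTURE, "cct", 40)  # mapping of value to kelvin
                                           # depends on the fixture
```

The point of the wireless transmitters and receivers mentioned above is simply to carry frames like this to lights that may be rigged far from the board.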

Like all the other technical decisions, the lighting was also influenced by the story. Most of Oppenheimer’s scenes were lit with slightly softer light sources like LEDs or HMIs that were diffused, while the black and white sequences of Strauss were lit with much harder, direct light from old tungsten units. 

CONCLUSION

I think what all of these innovations show is that technical breakthroughs come as a result of needing to fulfil a creative choice - not the other way around.

Nolan and Van Hoytema first come up with the best visual ideas that serve the telling of the story, then from there treat it as an engineering exercise, assembling the best team that they can to innovate and invent the best technological solution. 

Gray Kotze

How Does A 3D Cinema Camera Work

Let’s go over how these cameras work, what extra film crew is needed to operate them and some methods that work best with this format.

INTRODUCTION TO 3D

Over the years 3D cinema has seen a few periods of resurgence and falling off. 

Although it's in a bit of a dip at the moment, it does still exist as a niche format with its own unique method of capturing native footage on specialist 3D cinema cameras.

So let’s go over how these cameras work, what extra film crew is needed to operate them and some methods that work best with this format. 



HOW 3D CAMERAS WORK

Unlike regular 2D cameras which capture and project a flat image against a flat surface, 3D cameras create the impression of footage having depth with different planes of dimensionality.

They do this by capturing images in much the same way that human eyes see.

Light enters our two eyes and is processed by the brain, allowing us to see the world with depth perception. In the same way, two cameras can be placed near each other at slightly different angles to capture two similar images.

When each image is viewed by a different eye, the brain is tricked into thinking that the 2D images have different planes of depth. 

Presenting slightly offset images to the right and left eye is also called stereoscopic display, and is the same concept used for other 3D depth perception such as VR.

In 3D cinemas different images are fed to different eyes by using 3D glasses. There are some different varieties, such as glasses that use different colour lenses that only take in projected images of a certain colour, or glasses that have different polarised lenses for each eye that allow each eye to see a different picture.

It is now also possible to create a 3D film without a 3D camera, by manipulating 2D footage with post production software that separates the images into different planes. This is the case for some movies shot in regular 2D that have been re-released in 3D. But, that’s a topic for another day.




3D CAMERA BUILD

There are two types of 3D camera rigs that can be used to get the two cameras close to each other to shoot these slightly offset images. 

The first is called a parallel rig - where the cameras are placed horizontally alongside each other on a sliding plate. The second type of 3D camera configuration is called a beamsplitter. 

This is where two cameras are placed at a 90 degree angle to each other, pointing into a sort of cube that houses a mirror angled at 45 degrees. It works like the one way mirror you see in police interrogation scenes: one camera shoots the light bounced off the 45 degree surface, while the other, level camera sees straight through the glass - capturing a slightly offset angle.

As the footage from both cameras will converge to be almost exactly the same, it’s important that the cameras, lenses and any accessories used are all identical. 

This means renting two sets of everything in the camera package. So shooting with one 3D camera rig will require 2 cameras, and 2 sets of lenses. Shooting with two 3D cameras will need 4 cameras and 4 sets of lenses.

This also applies to accessories like focus motors. Often the wireless follow focus will come as part of the specialised 3D camera package. If using a system such as a Preston with a zoom lens, this may mean using two MDR boxes and six motors: 2 motors to control the focus on each camera, 2 motors to control the zoom and 2 motors for the iris.
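The gear-count arithmetic scales quickly, and a throwaway calculation makes the point. The axis counts here are taken from the zoom-lens setup just described (focus, zoom and iris per camera):

```python
def motors_needed(rigs, cameras_per_rig=2, axes_per_camera=3):
    """Total lens motors for 3D rigs shooting on zoom lenses:
    focus, zoom and iris per camera, two cameras per rig."""
    return rigs * cameras_per_rig * axes_per_camera

print(motors_needed(1))  # one 3D rig with zooms -> 6 motors
print(motors_needed(2))  # a two-rig multicam day -> 12 motors
```

Double the rigs and you double the motors, cables and boxes - which is exactly why the single-box systems mentioned below are attractive.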

However, there are some focus solutions that minimise all those extra cables and MDR boxes, such as the CMotion CVolution - which has a single motor box that can drive up to 8 different motors at a time, running focus from one wireless handset across multiple motors on multiple cameras.

All of this gear, combined with aligning two cameras alongside each other, means that 3D camera rigs are very bulky, heavy and awkward to move around. For that reason there are some best practices for working with this camera system, and also a unique crew member that you don’t have on a regular 2D shoot.



CREW ROLES

There are two important control settings which are unique to native 3D cameras and they are both controlled by an extra crew member called a stereographer. 

These two 3D settings are the interocular distance and the convergence.

The interocular distance refers to the space between the lenses on the two respective cameras. On a parallel rig this can be adjusted by sliding the cameras further apart or closer together on the horizontal axis, while on a beamsplitter it can be changed by adjusting the distance of the lenses from the mirror.

Usually this is controlled by the stereographer with an extra motor that can turn gears to adjust this distance between the cameras. 

The reason this distance between the lenses may need to change between shots is that, as a general rule, the further the subject is from the camera, the greater the distance between the two cameras should be - while the closer you are to the subject, the nearer the lenses have to be to each other.

So wider shots, such as 3D aerial footage, may need to be shot with a parallel rig where the cameras can be more easily separated, while a beamsplitter 3D camera may work better for a close up where the camera lenses can be brought very close together using the mirror as a divider.
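That general rule is often quantified by stereographers as the ‘1/30 rule’: set the interaxial (interocular) distance to roughly one thirtieth of the distance to the nearest subject. To be clear, this is a common industry heuristic rather than anything stated in the text, and real stereographers tune the divisor to taste:

```python
def interaxial_estimate(nearest_subject_m, divisor=30):
    """Rough '1/30 rule' interaxial distance in metres.
    divisor=30 is the classic heuristic; stereographers adjust it."""
    return nearest_subject_m / divisor

# A subject 3 m away suggests roughly a 10 cm interaxial...
print(f"{interaxial_estimate(3.0) * 100:.0f} cm")
# ...while a close-up at 0.6 m wants the lenses only ~2 cm apart -
# closer than two camera bodies allow, which is where a beamsplitter
# rig earns its keep.
print(f"{interaxial_estimate(0.6) * 100:.0f} cm")
```

The second case shows why the mirror box matters: two physical lenses simply can’t sit 2 cm apart side by side.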

The stereographer will monitor this setting as well as the convergence - the angle at which the cameras are adjusted to line up the screen plane so that the images overlap and align correctly.

If using zoom lenses this also includes making sure that both lenses are set to exactly the same focal length, and that the stop, exposure level and other settings on each camera match correctly.

The job of the cinematographer on a 3D film may involve some added technical complications, but otherwise their framing, lighting and other aspects of the cinematography will remain mostly the same as when shooting a 2D film.

The camera operator will have to deal with the implications of a heavier, more cumbersome camera build, but otherwise will also retain their usual role, with an important difference.

They will usually need to frame their shots a little wider than they normally would, allowing for extra frame room on the edges of the shot. This is because the 3D images may need to be slightly cropped into during post production if they need to adjust the convergence of the images.

For this reason they need to keep important parts of the frame away from the edges of the shot - for example, fully framing the top of the actor's head without giving them too much of a ‘haircut’.

Custom frame lines can also be added to the monitor, which crop in on the fully captured image, so that operators know the bounds of framing that they need to abide by.

VT, the department that handles transmitting a live video feed from the cameras to monitors and playing back that footage, is another job on set which may get a bit trickier.

For example, on Prometheus, this involved setting up a video village for a multicam shoot with two 3D monitors, and two 2D monitors that could be flipped between seeing the left eye and right eye camera on a single 3D camera rig.    



SHOOTING 3D

On top of these layers of complications, there are some other extra best practice rules that are often followed when shooting 3D. 

Cinematographers may be advised to light with a little more contrast between dark and light parts of the frame than they normally would. Creating different planes of light and shadow helps sell the illusion of depth from 3D better than if everything were lit flat, with a low level of contrast.

For purely practical reasons it may also be easier for DPs to work with zoom lenses rather than primes. This is because changing to a new prime lens focal length requires realigning cameras each time, while zooming in and out minimises the setup time it takes to correctly overlap the images.

Using two cameras rather than one also means that each rig is significantly heavier and bulkier. So if you want to move the camera you will have to use rigs that can take a heavier payload - such as a remote head on a Technocrane.

Or if something like a Steadicam is required it will take careful technical preparation to make sure the rig can correctly balance.

The large size of these cameras also makes it difficult, if not impossible, to shoot in tight spaces such as car interiors, get very low angle shots, or place the camera in any situation that usually requires a small build.

When it comes to moving the camera, stability of motion is usually encouraged. Shooting handheld is advised against, as too much movement can be off putting and mess with the audience’s sense of motion - which is why 3D cameras often rely on shots operated from a base, like a tripod or dolly, or shots with stable motion like a Technocrane or Steadicam.


CONCLUSION

It’s easy to see why 3D remains niche. Capturing native 3D images is quite a challenge from both a technical and a financial point of view. It adds layers of complication to the production workflow which can, in many ways, limit filmmaking choices.

However, in the right context 3D can work really well in recreating the feeling of being in a real world environment and increasing the audience’s engagement in the world of the story. While at other times it can feel a bit gimmicky, distracting, unnatural and actually pull the audience out of the movie.

Whether or not it will make a mainstream cinema comeback again remains to be seen. 

But, maybe these 3D rigs will be reimagined, reconfigured and applied to new visual projection experiences that use different but similar principles like the new Sphere or the world of VR. Only time will tell.

Gray Kotze

5 Reasons To Light Films With Colour

There are many reasons to use different coloured lighting when you shoot a scene. In this video we’ll go over 5 of them.

INTRODUCTION TO COLOUR

Since the dawn of three-strip film in the 1930s every cinematographer has been aware of the importance of using colour to create images. Different palettes can be constructed by using production design, introducing a look in the grade, or, as we’ll discuss today, with lighting.

There are many reasons to use different coloured lighting when you shoot a scene, so today we’ll go over 5 of them - by looking at a few clips from popular movies as well as some footage I’ve lit using different hues.




1 - MOTIVATED LIGHTING

Every environment that we walk around in has different frequencies of light bouncing around which - to both our eyes and cameras alike - read as colour.

Therefore, one of the most common motivations for illuminating with a vibrant hue, is to replicate or supplement what we call practical sources - lights that are seen within the frame of the shot - such as a lamp.

Here I’ve set the colour temperature of the practical in the background to 3,200K, then to increase its intensity and direction I’ve added a film light rigged above as a backlight. I’ve set it to the same colour temperature as the practical - supplementing the warm light that already naturally exists in the frame.

A big trend now is for DPs to build practicals into sets or locations - usually RGB fixtures that can be adjusted to different hues to add accents of colour to the lighting. Those practicals could be Astera Titan tubes posing as fluorescent bulbs, RGB strips built into signage, or yellowy, warm kitchen lights.

As well as adding coloured practical sources that can be seen in shots, another related idea is motivated colour lighting - where the light isn’t necessarily seen in the shot, but the direction, colour and quality of the illumination still feels realistic and motivated, as if that light could be there just outside the frame.

One way to do this when shooting an interior is to shine a light through a window - with the imaginary motivation for it being that there is some kind of street lamp outside that window.

Here I’ve used a hard source with a sodium vapour gel outside a window which gets cut into strips of light and shadow by shutters. I’ve then used a cooler fill inside to motivate the dusky blue natural light outside and lift the ambience a bit. I’ve rigged this light on a C-stand so it backlights her in the same direction as the natural light coming from the window. 

So, although we don’t see the exact source of light in the shot, it’s probably within the bounds of reality to imagine that there might be an old sodium vapour street lamp outside this apartment window.

Creating these extra imaginary sources is especially useful in night scenes for lifting the exposure so that there’s enough light to be able to shoot, but there are also other psychological motivations for lighting with colour. 



2 - AGITATE VISUALS

When shooting exteriors or interior scenes that take place during the day, the default colour temperature to light with and set the camera to is around 5,600K. 

If both the colour balance and light source match, you’ll get ‘normal’ looking natural colour - with white surfaces reading as white.

Some filmmakers like to use this look as a base for daylight scenes - but then add smidges of more vivid colours to parts of the frame. Now, I can’t speak to the intention behind these lighting choices, but I have noticed that within certain contexts this can have an agitating effect.

Take this location from The Bear. In different scenes there’s this ever present, just off camera red light - which adds a different hue to the otherwise naturally lit space. It’s subtle, but to me this visually ratchets up the tension just a tad - which suits the ongoing chaos and agitating tone that also comes from the story and direction.

There’s something about using a strong hue in contrast to the otherwise natural lighting, especially when it hits the skin tone of a character, which is slightly off putting and brash.

Uncut Gems is another film that does this - often mixing garish pink and off-green lighting, with more traditional CCT sources. This gives skin quite a sickly, unhealthy tone, which psychologically adds to its already tense, off kilter, chaotic atmosphere.    


3 - DEPTH & SEPARATION

So far we’ve seen that lighting with colour can be done for motivational purposes or to induce a psychological feeling, but it can also be used for more classically photographic or aesthetic reasons.

More often than not, cinematographers like to use lighting to create different planes within the image. This creates an illusion that the 2D image has depth and dimension, and can be used to separate characters from the background so that they stand out better.

They can do this by lighting different areas in the image, such as the foreground and the background, with different amounts of light, or with different colours.

The colours that cinematographers choose to place within the different planes of the image are also quite deliberate. Complementary colours are hues that sit directly opposite each other on the colour wheel, and they tend to be quite visually pleasing to the eye when paired together.

This is why you often see films use a warm practical lamp in the background that contrasts with a cooler light on the subject in the foreground.

Not only are these colours complementary, and therefore aesthetically appealing, but they are also placed in different planes or dimensions of the image - cool, warm and cool - increasing the feeling of depth and stopping the subject from blending into the background.

Some other complementary or often used colour combinations besides cool and warm include: a dark orange and green, red with a lighter blue or cyan, or a pink-purple with a tungsten yellow.
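In hue terms, sitting opposite on the colour wheel just means being 180 degrees apart. A toy sketch of that relationship - pure hue arithmetic of my own, ignoring saturation and brightness:

```python
def complementary_hue(hue_degrees):
    """Return the hue directly opposite on a 360-degree colour wheel."""
    return (hue_degrees + 180) % 360

# A warm, tungsten-ish orange at around 30 degrees pairs with a cool blue.
print(complementary_hue(30))   # -> 210, a blue
print(complementary_hue(210))  # -> 30, back to orange
```

The classic warm-lamp-behind, cool-key-in-front setup described above is exactly this pairing, just expressed in gels and colour temperatures rather than degrees.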


4 - EMOTIONAL REALISM

There are many cinematographers, such as Vittorio Storaro, who like to talk about the psychology of lighting with certain colours. While the idea that different colours can be used to evoke specific emotions or themes is a whole other subject on its own, I think it's safe to acknowledge that as a whole colour affects mood and how images are experienced by audiences.

For that reason, cinematographers can sometimes deliberately exaggerate colour outside the bounds of reality so that the world represents how the character feels, rather than what it looks like in reality.

This is something that DP Marcell Rév referred to as emotional realism when shooting Euphoria.

As an example let's take this bathroom space, which in real life is lit by a rather ugly overhead tungsten bulb, and apply two different lighting looks to it to evoke two different feelings.

I’ll start by rigging the 60C on a polecat, so that we don’t see it in the reflection when shooting. It provides a nice downward, top light onto the subject.

Then I’ll place a second source so that it shines through the window, adding ambience and a different plane of light on the back wall. 

For the first lighter, brighter, more upbeat tone and look, I’ll set the overhead source to a 5,600K daylight temperature, with a layer of diffusion to soften its intensity. I’ll set the outside hard COB light to 5,600K with an added quarter CTS gel to give it a warm, morning glow.

For the second harsher, darker tone, I’ll remove the diff on the 60C to make the downlight harder and set it to a fluorescent green gel look. I’ll then remove the CTS gel on the light outside and instead warm it up to 2,700K so that it takes on a tungsten look.

I’ll then decrease the camera’s exposure and balance the lights through dimming their intensity values to get a look that feels much dingier and far emotionally darker in tone than the first lighting set up.

By just tweaking the colour, quality and intensity of the two sources we can create two looks that put the character in a radically different emotional space.

5 - STYLISED COLOUR 

Apart from all the above reasons for using colour it can also be used because, well, it just looks kinda cool. 

One of the easiest ways to create a more visually stylised look with lighting is by using different hues. This is often done in short form content like music videos where realistic lighting can be thrown out of the window, and colour is used to elevate the visuals and inject an energy into them. Using a stylised colour palette rather than a naturalistic one.

It’s also a favourite party trick of YouTubers. Light the background wall with one RGB colour, then place another colour RGB source that only illuminates the subject in the foreground and just like that…you’re a YouTuber.

The reason this works comes back to the idea of using complementary colour and lighting different planes in the frame to separate the subject from the background.

Gray Kotze

The Filmmaking Pyramid: How To Start Your Career

Is it better for filmmakers to be a specialist or a generalist?

INTRODUCTION

Is it better for filmmakers to be specialists, who focus on developing one niche skill to the absolute max, or is it better to be a generalist whose knowledge is spread out across a diverse range of expertise?

In this video we’ll unpack this question by looking at what I call the film crew pyramid and think about the best way to build the creative and technical side of your career.



THE FILMMAKING PYRAMID

Just like the corporate structure within a company, or the ranking system within the military, the types of jobs on a large industry film set are organised in a hierarchical way.

While companies may produce a product, like fast food, or a service, like financial management, a film production company produces a movie.

At the top of the crew pyramid you have the ‘executive’ positions such as the director and producer who have a large degree of control over how the movie is produced, or are greatly responsible for coordinating the logistics and financing behind making it.

The role of these executives at the top of the pyramid is usually that of a generalist. In other words they have a very broad skill set and knowledge. They lead through delegating multiple tasks to other crew, who fall into different departments, a bit further down the chain.

For example, the director will communicate with the cinematographer to create the visual style, framing and lighting of the film. Or, once it’s been shot, they will work with an offline editor to cut the movie with the most suitable pacing, feel and story.

The further down the pyramid you go, the more specialised the role of the crew becomes. 

For example, let’s look at the camera department. It’s headed by the cinematographer, a generalist whose role also includes liaising with many other departments, such as the art department that builds sets and does production design, the lighting department that rigs the illumination and the grips department who execute the camera movement.

Under the DP we have the camera operator. They sit behind the camera, frame shots, expose the image and generally control how the camera is used while filming - under the direction of the cinematographer. 

This role requires a broad knowledge base, experience and technical expertise. For example, they’ll need to understand the best ways to assemble the camera in different situations, how to compose shots, how to best change all the key camera settings, where to position the camera for the best lighting, and the list goes on.

Further down we have the focus puller or 1st AC - whose role starts narrowing and becomes a bit more technical and specialist - rather than generalist. They build the camera into its different forms, coordinate and troubleshoot any problems with the gear, and, as the name suggests, pull the focus so that the subject in the image remains sharp.

The loader, or 2nd AC, is an assistant to the focus puller, and performs an almost entirely technical role - doing things like operating the clapperboard, marking actors and reloading the camera.

Then, right at the bottom there is the camera trainee, who works under the 2nd AC and acts as a general assistant to the camera department through more basic technical tasks like charging batteries or organising the gear in the camera truck.

If we zoom out on this pyramid you can see a trend: the higher up the job, the more people you manage and the more of a generalist you have to be - someone who fully understands all aspects of the filming process.

While the further down the pyramid you move, the more specialised your job becomes - focused on performing a few niche, technical tasks.

It should also be noted that this structure only really applies to larger long form jobs and industry sets. For solo shooter work, or documentaries, many of these jobs may be combined into one, and fewer crew may be hired who will be expected to perform more tasks in a generalist role. Such as a documentary DP, who also needs to light and record sound by themselves.


SPECIALIST VS GENERALIST SKILLS

This filmmaking pyramid model was set up to use hierarchy as a form of on the job training - starting crew off with specific technical tasks, then expanding into more general knowledge as they move up the ladder.

For example, a trainee starts off being exposed to film equipment, learning the names of all the gear and charging batteries. They move up to 2nd AC, where they need to be able to quickly locate all that gear, change lenses and learn the correct distance at which to put the clapperboard. These skills transfer up to 1st AC, where they need to use all that gear to build the camera any number of ways, and use their perception of distance to pull focus.

In this way film technicians continue broadening their skillset as they move up, until they occupy the role of cinematographer - which requires a comprehensive knowledge of not only the camera but also lighting, grips and post production, and of course a photographic eye for storytelling.

Therefore progression up this pyramid usually means building the technical fundamentals of creating images at the bottom, and later acquiring a more overall creative view of how images can be created and used in different situations to suit the script.

This old school method of working your way up the ladder isn’t the only way to become a cinematographer. Today, there are more young DPs stepping into this position than ever before, who may get there by producing and showcasing really good lower budget work, creating connections with directors, building their skills through film school or online educational resources, or some combination of the above.



DIFFERENT KINDS OF FILMMAKERS

The idea of being more of a specialist or more of a generalist can also apply to the creative instincts of the filmmaker. For example, some cinematographers like to be known for creating a specific kind of a look - which is specialised and standardised across many projects.

This look could be shaped by things like selecting the same kinds of lenses, choosing to shoot on the same format, employing the same kinds of lighting techniques or framing shots in the same way.

Some directors are also known for creating a specialised feel, tone and look throughout their filmography. Movies by Wes Anderson will feature handmade production design, pastel colour palettes, symmetrical, centralised framing, straight tracking camera moves from a dolly, and very specifically directed performances. 

Whatever the movie is, this specialised brand of filmmaking will be carried through.

On the other hand there are filmmakers who work in more of a generalist way - which is probably a more common way of working, especially for cinematographers. 

What I mean by this is that they acquire a knowledge of cinematography which can be applied to a wide range of situations and shaped into different forms so that the visual style is built slightly differently to create a suitable feel for each unique story and director that they work with. 

Some of their work may be more stylised, some may be more naturalistic.

Having a broad skillset as a DP can be useful as you’ll be able to take on a greater variety of projects and adapt to what the director wants. 

More specialist practitioners, meanwhile, may only get the call from directors who want their movie shot in the particular style associated with that DP. If this is a bold style - for example with lots of colour, contrast, grain and haze - it may also draw more eyes to your work, especially from short form commercial or music video directors who want pieces with a specific look and feel. This can be particularly useful when building a career early on.

CONCLUSION

In today’s era of filmmaking there are no rules about how you should build a career as a cinematographer. But perhaps it’s useful to keep this pyramid in the back of your mind, to remind yourself of how you need to develop.

In my experience I’ve found it useful to start accumulating skills in more niche positions in the camera department by working as a 2nd AC, learning about all the other facets of filmmaking by being exposed to them on many different sets, then broadening my creativity through watching films and deepening my experience by taking any opportunity to shoot that I could.

Regardless of how you go about it, you’ll need to move from inexperienced to experienced, from more of a specialist to more of a generalist and from a technical role to a creative role. All the while growing from a more focused, smaller, specific world, to seeing and crafting the wider, bigger picture.


What Makes IMAX Different

Let’s go into the details of Imax by looking at what makes these cameras different from regular cinema cameras, how that affects what the footage looks like, and why this high quality format is only used on a small handful of movies.

INTRODUCTION

There are two parts to getting a movie into a cinema.

First, the film needs to be captured with a camera.

And second, it needs to be presented by projecting that image onto a screen.

Imax basically took both of these parts from traditional cinema and supersized them: creating a camera system that captured enormous, top quality, high resolution images, and combining that with a projector that cast the film onto a gigantic screen.

So let's dive a bit further into the details of this technical process by looking at what makes Imax cameras different from regular cinema cameras, how it affects the footage and why this high quality format is only used on a small handful of movies. 

IMAX CAMERAS

To understand Imax and what makes this format different, we need to first know how most mainstream movies are captured. Up until about a decade ago, almost all movies were shot and presented using 35mm film. Film has since been largely replaced by digital cameras, but we’ll get to that a bit later.

35mm film is a strip of photochemical emulsion that, as the name suggests, is 35mm wide, and records images as it passes vertically through the camera. However, as some of that width needs to accommodate the sprocket holes, called perfs, that pull the film through the camera, the usable width of a 35mm frame is around 24.9mm.

Different methods can be used to record 35mm frames at different heights, which I went over in another video; however, the tallest possible 35mm frame height that can be shot is 18.7mm.

If we compare this to the Imax frame, with its approximate size of 70x48mm, we can see that it’s roughly seven times the size of what traditional cinema cameras capture.

The larger the surface area of a piece of film, the more photographic information it will contain. This means that an Imax image resolves much more detail and has very little visible film grain. Basically, Imax footage will look significantly cleaner and higher quality.
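As a rough sanity check of that comparison, here’s the arithmetic using the approximate dimensions quoted above (exact aperture sizes vary slightly between sources):

```python
# Rough comparison of film negative areas, using the approximate
# frame dimensions quoted in the text (exact sizes vary by source).

def frame_area_mm2(width_mm: float, height_mm: float) -> float:
    """Surface area of a film frame in square millimetres."""
    return width_mm * height_mm

# Largest usable 35mm frame (full-width, maximum-height aperture)
area_35mm = frame_area_mm2(24.9, 18.7)   # ~466 mm^2

# Approximate 15-perf Imax frame
area_imax = frame_area_mm2(70.0, 48.0)   # 3360 mm^2

print(f"35mm:  {area_35mm:.0f} mm^2")
print(f"Imax:  {area_imax:.0f} mm^2")
print(f"Ratio: {area_imax / area_35mm:.1f}x")   # ~7.2x
```

The exact multiple depends on which 35mm extraction you compare against - most movies use an even smaller portion of the 35mm frame than this maximum, so the real-world gap is often quoted as higher.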

But, how do these cameras record such a big frame?

Instead of using 35mm film, they use a larger gauge 65mm stock. Again, once the extra space for the perfs on the side has been removed, it leaves us with a frame width of around 48.5mm.

If you compare 35mm and 65mm frames on film, you may also notice that the perfs for the 65mm sprockets are on the top and bottom rather than on the left and right. This is because 65mm Imax film runs horizontally through the camera, rather than vertically like 35mm.

Passing through the gate horizontally gives Imax the advantage of being able to record a frame width that is larger than its 48mm size - which now becomes the frame height.

Each frame that Imax cameras record is 15 of these little perforations long, which is why this format is sometimes referred to as 15/70.

IMAX CAMERA GEAR

There are only 26 of these incredibly niche Imax cameras in the world, so it’s a bit tricky to find information about them. One of the most popular Imax film cameras is the MSM 9802, a sort of general purpose camera with a magazine that can hold 1,000ft of film.

Then there are also the Imax MKIV, MKIII and MKII bodies, which include a “lightweight” version that holds 500ft of film.

Because the size of the film is so large, Imax have designed special lenses that have enough coverage to be used with these cameras without vignetting. These are made with Carl Zeiss glass that has been rehoused and designed specially for Imax.

Other third party lenses, such as rehoused Hasselblad medium format lenses from Panavision or their large format Sphero and System 65 lenses can also be used to cover the Imax format.   


DIGITAL IMAX CAMERAS

Since the early 2010s digital cinema cameras have largely overtaken using physical film to shoot movies.

However, because of its enormous negative size, many have argued that there is still no digital equivalent that can match up to the extremely high resolution, photographic feel and colour that Imax film delivers - which is estimated to be the digital equivalent of around 18,000 pixels.


DIGITAL IMAX CAMERA GEAR

Having said that, Imax has moved with the times and approved a list of high end digital cinema cameras, which they consider to be ‘Imax certified’. These include the 6.5K Arri Alexa 65 Imax, the Alexa LF and Mini LF, the Sony Venice, 8K Raptor and Monstro from Red and the Panavision DXL2 - with its rehoused Red sensor.


PROJECTION

Just like the Imax camera, which captured footage by moving the film horizontally, the first Imax projector also had to be redesigned so that the film could pass horizontally through a gate, pause for a tiny fraction of a second while the light from the projector bulb illuminated the image on the massive Imax screen, then move on to the next frame.

The area that they projected onto was also much larger than traditional cinema screens - with the average screen size being around 18x24m.


ASPECT RATIOS

15-perf Imax is shot and presented in its tall, native aspect ratio of 1.43:1. This squarer format is a great ratio for using more symmetrical, central compositions, for framing singular characters, and for incorporating more vertical height into the shot.

While the film industry was transitioning to shooting on digital cinema cameras, so too were cinemas changing from film to digital projection. In 2008 Imax introduced their first 2K Xenon digital projector - which they used to extend the number of Imax screens by expanding to regular cinemas - not only the custom built, enormous Imax screens.

Digital projection changed Imax in two important ways. Firstly, it changed the aspect ratio for 2K digitally projected Imax movies to a wider 1.90:1 frame - which chopped off the top and the bottom of the original Imax frame. Secondly, it greatly reduced the resolution of the projection from the estimated 12K to 18K pixels of the Imax film negative to a 2K file.

In 2014 a 4K digital Imax projector was also introduced. This higher resolution projector used lasers as a light source to project an image twice as bright with a better contrast ratio than the 2K projectors. 

The increased power of this projector allowed it to show both the 1.43:1 full Imax negative on larger screens, or the wider 1.90 frame on smaller screens.

This means that some movies can now be distributed to both 2K Imax venues in 1.90:1 and some Imax film projectors in 1.43:1. 

Filmmakers need to keep all of these formats in mind when lining up their shots, to find a frame that works well in a wide aspect ratio as well as a tall one. Usually this means that the original Imax negative will include a lot of headroom for characters, which can then get cropped for the wider format.  


VISUAL CHARACTERISTICS

So, what is it that makes movies shot on Imax feel different from regular 35mm film or the equivalent Super 35 digital cinema cameras - other than highly resolved detail and clean shots with minimal specks of film grain or noise?

Because the size of the negative that you are shooting on is so large, it produces a different field of view. In other words if you use the same lens on a Super 35 camera and an Imax camera, the Imax footage will look a lot wider, while the Super 35 footage will look more ‘cropped in’.

Cinematographers will often compensate for this extra width by using longer focal length lenses. So, where they would normally use something like a 21mm lens on a Super 35 camera to get a wide field of view, they’d have to go to a 50mm lens on Imax to get a similar wide perspective.
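To put some hedged numbers on that equivalence, here’s a simple pinhole angle-of-view sketch, using the approximate frame widths from earlier and ignoring lens distortion:

```python
import math

def horizontal_fov_deg(frame_width_mm: float, focal_length_mm: float) -> float:
    """Horizontal angle of view for a simple (pinhole) lens model."""
    return math.degrees(2 * math.atan(frame_width_mm / (2 * focal_length_mm)))

S35_WIDTH = 24.9   # usable 35mm / Super 35 frame width, in mm
IMAX_WIDTH = 70.0  # approximate Imax frame width, in mm

print(f"21mm on Super 35: {horizontal_fov_deg(S35_WIDTH, 21):.0f} deg")   # ~61 deg
print(f"50mm on Imax:     {horizontal_fov_deg(IMAX_WIDTH, 50):.0f} deg")  # ~70 deg

# The mathematically exact match for a 21mm Super 35 field of view
# is 21mm x (70 / 24.9), or roughly a 59mm lens on Imax:
print(f"59mm on Imax:     {horizontal_fov_deg(IMAX_WIDTH, 59):.0f} deg")  # ~61 deg
```

A 50mm on Imax is actually slightly wider than a 21mm on Super 35 - the exact match sits nearer 59mm - which is why DPs talk about a similar perspective rather than an identical one.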

Using longer focal length lenses means that the depth of field will be shallower. 

So, practically, this means that Imax cameras can film wide shots on longer lenses, and achieve a shallow depth of field - where the background is out of focus. 

Whereas Super 35 cameras would use wider lenses to shoot a similar perspective which renders much more of the shot sharply in focus.
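That depth of field difference can be sketched with the standard hyperfocal-distance approximation - a simplified thin-lens model with assumed circle-of-confusion values, not a lens-design calculation:

```python
import math

def total_dof_mm(focal_mm: float, f_stop: float, coc_mm: float, subject_mm: float) -> float:
    """Approximate total depth of field via the hyperfocal distance.
    A simplified thin-lens model, good enough for a format comparison."""
    hyperfocal = focal_mm ** 2 / (f_stop * coc_mm)
    if subject_mm >= hyperfocal:
        return math.inf  # everything out to infinity is acceptably sharp
    near = hyperfocal * subject_mm / (hyperfocal + subject_mm)
    far = hyperfocal * subject_mm / (hyperfocal - subject_mm)
    return far - near

# Circle of confusion scaled with format width (a common simplification)
COC_S35 = 0.025                   # mm, a typical Super 35 value
COC_IMAX = COC_S35 * (70 / 24.9)  # ~0.070mm, scaled up to the Imax frame

SUBJECT = 3000  # focus distance of 3m, in mm

dof_s35 = total_dof_mm(21, 2.8, COC_S35, SUBJECT)    # ~3.7m in focus
dof_imax = total_dof_mm(59, 2.8, COC_IMAX, SUBJECT)  # ~1.0m in focus

print(f"Super 35, 21mm f/2.8 at 3m: {dof_s35 / 1000:.1f}m in focus")
print(f"Imax, 59mm f/2.8 at 3m:     {dof_imax / 1000:.1f}m in focus")
```

So even at the same f-stop and a matched framing, the Imax shot holds far less of the scene in sharp focus.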


DRAWBACKS

Imax does come with some notable downsides. The first and most obvious drawback is cost. Imax cameras are incredibly niche which makes them a bit expensive to rent. But, more importantly, the price of buying and processing the amount of 65mm film required to shoot on Imax is incredibly high. 

This is because 15 perforations per frame moving through the gate 24 times in 1 second, works out to around 337ft of 65mm film for every minute that is shot.

If we do a super rough calculation: 400ft of 65mm film costs $619, and developing and scanning one minute of film can cost somewhere around $490, so you’re probably looking at paying somewhere close to $1,000 for each minute of film that is shot. Of course there’ll be bulk discounts and whatnot for features but, the point is, it’s not going to be cheap.
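Those figures can be sanity checked with some quick arithmetic - the film speed falls out of the 65mm perforation pitch of roughly 0.187 inches, while the prices are the rough quotes above, not current rates:

```python
# Back-of-the-envelope Imax film cost per minute, using the rough
# stock and lab prices quoted in the text (not current quotes).

PERF_PITCH_IN = 0.187   # spacing of one 65mm perforation, in inches
PERFS_PER_FRAME = 15    # the "15" in 15/70
FPS = 24

# Film consumed per minute of shooting
inches_per_min = PERF_PITCH_IN * PERFS_PER_FRAME * FPS * 60
feet_per_min = inches_per_min / 12               # ~337 ft/min

# Rough costs from the text
stock_cost_per_ft = 619 / 400                    # $619 per 400ft roll
stock_per_min = feet_per_min * stock_cost_per_ft # ~$521 of stock
lab_per_min = 490                                # develop + scan, per minute
total_per_min = stock_per_min + lab_per_min      # ~$1,011

print(f"{feet_per_min:.0f} ft/min, ~${total_per_min:.0f} per minute shot")
```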

Another negative is that because the 15 perfs of film need to run through the camera so much quicker than regular 4 perf 35mm, Imax cameras are incredibly noisy when shooting. This makes it difficult if not impossible to record clean sync sound audio without hearing the whir of the film running.

Finally, they are incredibly bulky, heavy and cumbersome to operate. They were designed to be operated from tripods or on a remote head on a helicopter - certainly not handheld.

These limitations meant that for many years Imax was only used to shoot exhibition sorts of films, like short documentaries with aerial footage and wide shots of landscapes.

Imax footage was later incorporated into feature films in key action sequences, while the majority of the film’s dialogue sequences were filmed using more traditional 35mm camera systems.

However in recent years, filmmakers like Christopher Nolan who have access to large budgets have pushed hard for the format to be used to shoot entire narrative feature films. This has involved finding solutions to some of the technical challenges posed by these bulky, noisy, expensive machines.

It has also recently come to light that Imax will release a new, redesigned version of the Imax camera in 2024, based on feedback from filmmakers.

This may make the format even more usable as a feature filmmaking tool. So, it seems that as long as cinema releases remain a thing, and as long as there are directors and audiences that appreciate this super high quality format, Imax should still be around for quite some time.


Tips For Shooting A Cinematic Documentary

In this video I’ll break down the importance of creating a coherent form and visual style for your film, and go over some technical tips you can use to do so.

INTRODUCTION

Documentaries come in all shapes and sizes. Some use re-enactments, others use interviews, while some films only stick to using observational footage. In this video I’ll break down the importance of creating a coherent form and visual style for your film, and go over some technical tips you can use to do so.


FORM

The visually strongest documentaries have a coherent style and overarching vision to them. Basically, choosing a visual approach and sticking to it for the entire film: whether that form relies on a mixed medium approach with archival footage, lets interviews play a prominent role, or means shooting in a completely observational style. 

Going a bit deeper, within that overall form, you should think about how you want to visually present your footage and try to maintain a coherence to this look throughout the film. For example, if you’re shooting interviews try to use the same framing techniques for all the interviews throughout, or if you want a steady, observational feel to scenes, then commit to filming only using a tripod whenever is realistically possible. 

You want the form and style of the documentary to be coherent and enveloping, not stylistically fractured and distracting.

When it comes to assembling the puzzle that is documentary, I’d say that there are 4 main pieces you can select - 3 of which you have visual control over. Number one, the interview, number two, B-Roll, number three, what I call ‘scenes’ and number four, archive - older footage or photos which have already been shot by other people, which is out of your control.

Some documentaries include all of these pieces, while some only include one - such as archive, or footage from observational scenes.

I like to think about footage other than interviews in terms of being either B-Roll or ‘scenes’. B-Roll is footage that usually doesn’t include dialogue between characters, and is composed of shots that create a mood or convey information. For example establishing shots of a location, more textural or atmospheric shots of objects, or undirected shots of subjects that are used to create a tone.

When I talk about ‘scenes’ this refers to sequences where there is either a dialogue exchange between characters, or a series of continuous shots that show a subject performing some kind of action. Basically building out a mini narrative sequence of shots, rather than just filming more stand-alone, tonal shots.

INTERVIEW STYLES 

Interviews are different as they provide direct, personal testimony from the subject that can be a great tool for conveying information, their thoughts and opinions or creating a narrative arc.

Let’s go over some tips for visually setting up an interview.

The first two decisions you want to make are how you frame the shot, and where the character’s eyeline will be. Whatever choice is made, I’d recommend sticking to the same format for all the sit down interviews in the film to preserve that overarching visual style we discussed earlier.


OFF-CAMERA INTERVIEW

The most traditional way to shoot an interview is by framing the subject in a medium shot or close up - where the bottom of the frame sits around the waist or chest of the character and the top of the frame includes a little bit of headroom. Usually the character will be placed either a bit more left of frame, or right of frame - which opens up their body a little bit and creates some empty, negative space on one side.

The interviewer will then sit next to the camera, with their head at around the same height as the lens. They will usually position themselves on the same side as the negative space. So if a character is placed on the left of frame, with more empty space on the right of frame then the interviewer will sit camera right so that the eyeline of the subject - where they look when they talk - will be orientated towards the open space, which makes the frame feel a bit more comfortable and natural.

This is called an ‘off-camera’ eyeline. A practical reason to shoot interviews like this is because the subject can talk to the interviewer face to face in a more conversational manner. 

An off-camera eyeline can also be combined with a host of different framing ideas - such as placing the character more in the centre of the shot, using a tighter shot size or a wider composition, all the way to shooting the interview from a more extreme profile angle.

Again, whatever frame you decide to go with it’s usually best to be consistent, rather than framing each interview in the film using a different angle and shot size. 

An alternative to this is to use a ‘to-camera’ eyeline - where the subject’s gaze is fixed directly on the lens, looking straight at the audience as they talk. Because they are looking the viewer in the eye, this technique may create more of an intimate, personal feeling than a traditional, off-camera interview.

However, it comes with a practical downside. Trying to get a subject to look straight into the lens when they speak can be quite difficult. Often their eyeline will naturally drift off-camera to the interviewer who is asking the questions.

But, there is a solution to this which I’ve used. It’s based on a technique famously created by documentarian Errol Morris.

TO-CAMERA INTERVIEW

The Interrotron setup uses a teleprompter that is set up in front of the camera's lens. This would usually project text onto a piece of angled glass so that the person in front of the camera, like a news presenter, can read from it. However what we want to do is swap out the text with the face of the interviewer.

There are a few ways of doing this, but the setup I use involves wirelessly transmitting a video feed over wifi - using something like a Hollyland Mars wireless transmitter - from a low cost DSLR which is pointed at the interviewer, to an iPad, which is placed at the base of the teleprompter - that then gets projected onto the teleprompter screen.

I’ll then run a wired feed with an SDI cable from the camera that is actually shooting the interview to a monitor which I can place in front of the interviewer. That way the subject can see the interviewer on the teleprompter - which will make them look straight into the lens - and the interviewer can see the subject on a monitor.

This allows them to have a natural conversation, while the subject maintains a to-camera eyeline - kind of like how you would on a video conference call.  

The straight-on directness of to-camera eyelines lends itself to symmetry and central framing. Therefore the subject is usually placed in the centre of the shot. Here’s an example of a tight, personal to-camera eyeline interview that I shot for director Jessie Zinn for a short doccie called Faces Of Fortune.

SHOOTING A SCENE

Now that you’ve decided on an interview format, and shot the character’s testimonial, you’ll probably want to fill it in with some of those observational ‘scenes’ or B-Roll that I talked about earlier.

There are many ways to do this. A lot of the approach to filming scenes will be dependent on the subject matter of what you’re filming. If you’re shooting in a war zone then obviously you’ll just have to run and gun and grab whatever footage you can. However, if a more controlled shooting environment is possible here’s how I tend to approach it.

Usually the director will want to give the subject or subjects some kind of action to perform - something which they would naturally do in real life. This could be anything from having a conversation with their friends, to trying on clothing, to taking care of and interacting with an animal.

Once I know what they will be doing I like to ‘block’ the scene. I use this term very loosely to mean that we’ll work out a space where the action will take place. 

I’ll then create a sort of mental frame for the wider shot that we’ll need to film, think about what composition will work well for this, what objects we may want to add or remove from the shot and what I’d like the lighting to do.

The reason I always like to start wide is because then you know the cutoff point for where you can place gear, like a microphone or a light, without it being in the shot. 

You don’t want to start by framing a close up and then move wide, only to find out that you need to stop shooting to move gear or remove an object from the frame that is visually unappealing. What you want is to set things up as much as you can so that once you start rolling the camera you don’t need to stop until the scene is finished playing out.

This means that if you choose to light a space, you want to do so in a way where the light placement will work well for the entire sequence. In other words, lighting the space itself, rather than a specific close up shot.

For this reason I usually like to use as few light sources as possible. For example if it’s a dialogue scene in a bedroom with a single window, I’ll place the fixture outside the window to give the natural light a bit more shape, contrast and consistency.

This frees me up to move quickly within the room without having to avoid gear or stands, and rapidly reframe shots to capture all the angles needed to cut the scene together as the actions play out in real time.

Again, I like to do this by first framing a wide, then as the scene plays out pushing in closer to shoot mids or close ups. Often if it’s a conversational scene, I’ll return to shooting a wide or a two-shot of the characters talking as it progresses, and finally go back to end on the wide as the scene dwindles out - to give the director a quieter, long shot to end the scene on. 

Although it may seem counter intuitive, when you’re shooting a conversation it’s equally as important, if not more important, to get shots of whoever is not speaking, not only getting shots of whoever is talking.

We like to call these ‘reaction shots’. Having these in the edit allows the director to cut to a shot of whoever is not talking, and place it over shots of the character talking when they want to cut up and only use little bits of what they are saying. It’s a good way of hiding what would otherwise be jump cuts.      

Although we’ll always try to get all the shots we need in real time, it is sometimes possible once you finish a scene to go back and ask the subject to repeat a specific action again that the camera may have missed. Usually it helps to film these shots tighter, so that they can be used as an insert without breaking continuity or jump cutting.


Cinematography Style: Ben Richardson

Let’s explore how Ben Richardson's career has evolved, all the way from his start in animation, to working on some of the most popular contemporary long form television series by breaking down his philosophy on cinematography and some of the gear he uses to achieve his look.

INTRODUCTION

Ben Richardson’s cinematography has subtly evolved from an organic, looser, handheld look, to a bit of a cleaner, smoother style. However, his photography has also maintained a naturalistic feel to it that makes it seem that what is happening on screen is true to life. 

Let’s explore how his career has evolved, all the way from his start in animation, to working on some of the most popular contemporary long form television series by breaking down his philosophy on cinematography and some of the gear he uses to achieve his look.  


BACKGROUND

Ben Richardson’s first adventure into cinema didn’t start with live action filmmaking, but rather with animation. He moved from England to Prague where he found a close knit creative community, as well as a low cost of living, that allowed him to work and survive outside of the industry system of filmmaking. There he directed and shot the animated short film Seed.

“For me animation was a way to have a kind of control without having to have a huge budget. You could accomplish things with animation kind of with just sheer will power. You know, you’re literally crafting every frame. You don’t need a lot of light. You don’t need a lot of equipment.”

In Prague he met his first major creative partner, director Benh Zeitlin, who was also there looking to make an animated short. They later worked together on the live action short film Glory At Sea - a stylistic precursor to their first feature Beasts Of The Southern Wild.

There are some notable similarities between the animated short Richardson shot and directed and the first feature that he photographed: both were shot on a textural 16mm film medium, both featured a handmade, cobbled together, art direction aesthetic and a neutral colour palette with greys, blues and browns.


PHILOSOPHY

Richardson’s work on the feature also incorporated another key thread in his work: the ability to visually translate the feeling of a character and their performance to the screen. 

“I just remember having this experience of how she felt on screen and how you could translate the presence of this amazing little performer in the room to the camera. And it was the very subtle sort of movements and the way you framed her and the way you sort of held her that just, sort of, really translated what we could all feel in her physical presence into the screen image.”

Creating images that truly capture the feeling of the actor’s performance is a big focus of his. 

Whether that was with the rough, loose handheld work on Beasts Of The Southern Wild that always chased around the character, keeping the camera low to the ground at her eye level to better empathise with her perspective of the world, or whether it was by using more smoothly operated camera work and long telephoto lenses to compress more hardened, calculated characters against the landscapes and make them looming and ever present in Yellowstone.

Whatever cinematic perspective he uses, his approach to photography is never heavy handed or reliant on stylised visual tricks.

A technique I noticed he often uses when framing is incorporating foreground elements into the shot. So, instead of shooting a ‘clean’ shot without any elements placed between the subject and the camera, he’ll often place something in the foreground of the shot while keeping the focus deep on the subject.

To me, this has two functions. For one it creates a bit more depth in the frame and makes images feel less flat and more dimensional. It can also create quite an objective feeling, like the camera and therefore the audience is a silent observer, crouching in the grass or sitting across the room watching a scene play out. 

Richardson almost always shoots and lights in a way that is very naturalistic and tries to suck the viewer into the story, so that the camera’s presence is unnoticeable and offers as little distraction as possible.

His films often rely on shooting in real locations rather than studios, use natural palettes which aren’t overly vibrant, and, most importantly, light spaces in a naturalistic way - one which truly represents the way the locations feel in real life, in much the same way that he tries to capture the feeling of actors’ performances.

“The references we were looking at in terms of colour palette and the lighting side of cinematography were very naturalistic and I definitely wanted to work with that and I wanted to work with as much natural light as possible. And what that really came down to was just a tremendous amount of tech scouting, making sure that I knew exactly where and when the sun was going to be doing the things we wanted it to be doing.”

This process of observing how the natural light interacts with each shooting location, and using that to schedule the perfect shooting time for each scene is a great way of controlling lighting conditions without having any lights. 

Because these shooting windows were quite narrow on Beasts Of The Southern Wild, any time they ran over schedule it significantly cut into their available shooting time. Such as this complex fishing scene, which they shot in the final 17 minutes of light remaining in the day - covering the action with a single, long take in the wide shot, then scrambling together the rest of the coverage with the final 11 minutes of light to spare.


GEAR

When working with a small crew and limited gear, the best way to light is to manipulate the light already present in a space by making small tweaks to balance it. In other words controlling how bright or how dim the light is in certain parts of the frame.

“Even simple things like opening and closing doors, you know, switching out a 25 watt bulb for a 15 watt bulb in the corner and, you know, tiny little shifts to keep balancing this place but all driven by this attempt to be very naturalistic and give the environments room to breathe in the same way that the performances were.”

To preserve continuity and enhance the naturalistic look on other long form projects like Mare Of Easttown, he used larger tungsten light sources to recreate the feeling of sunshine - specifically 10Ks and 20Ks.

Rather than shining these directly through windows he used CRLS aluminium bounce reflectors. These little panels act like mirrors that you can use to bounce hard light sources - but because they are made from polished aluminium rather than glass they reflect light with a slightly softer quality.

You can also use them to increase the distance between the source and the subject, by placing the reflectors outside the window and the lamp further away, pointed back at the reflector. This creates less fall-off and gives the light the quality of natural sunlight.
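The reason pushing the lamp further back flattens the fall-off is the inverse-square law: the further the source is from the scene, the smaller the relative difference in distance between near and far parts of the room. A minimal sketch, using illustrative distances I've assumed rather than figures from any production:

```python
def falloff_ratio(d_near, d_far):
    """Ratio of illuminance at two distances from a point-like
    source (inverse-square law): how much dimmer the far point
    is compared to the near point."""
    return (d_near / d_far) ** 2

# Lamp right outside the window: subject 2m away, back wall 4m away.
print(round(falloff_ratio(2, 4), 2))   # 0.25 - the wall is 2 stops down

# Lamp bounced in from 10m away: subject 10m, back wall 12m.
print(round(falloff_ratio(10, 12), 2)) # 0.69 - only about half a stop down
```

The distant, bounced source lights the whole room far more evenly - which is what makes it read like sunlight, whose fall-off across a room is effectively zero.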

Because real ambient sunlight indoors comes from the sun bouncing off different objects in a room, it has quite a broad, soft quality and a low intensity. Therefore most of his day-interior lighting tries to mimic this soft, spread-out quality.

His day-exteriors, meanwhile, usually just use whatever the natural light is doing - whether that is hard, direct sunshine or a dimmer, diffused cloud cover. Like most DPs, when shooting in hard sunshine he often blocks and positions the characters so that the sun either lights them from the side or from behind as a backlight.

When it comes to cameras, most of his feature films have been shot with a single camera. However, because television series, with their many episodes, usually need to be shot quite quickly with quite a lot of coverage, he’ll often have to bring in multiple cameras. 

Yellowstone used three cameras - Arri Alexa Minis - to cover scenes. Normally, using this many cameras makes it practically easier to work with zoom lenses. This is because the position of one or two cameras may need to shift so that the other cameras won’t see them. Having a zoom lens, rather than a prime, lets the camera operators adjust the frame by zooming in or out just a little bit to keep the other cameras out of shot.

However, Richardson liked to shoot with fixed prime lenses - the Zeiss Ultra Primes. Knowing what prime lens was on each camera helped him structure the scenes more easily and know what coverage he would be getting - even if he wasn’t looking at every shot on a monitor. 

For example, if he put A cam on a 24mm Ultra Prime and B cam on a 135mm lens, he knew he was getting one wide shot and one close up. However if he had both A and B cam on a 24-290mm Angenieux zoom lens, the operators would be able to zoom in and out and pick up different shots at different focal lengths in a way that may have been a little less focused and deliberate.
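To get a sense of what those focal lengths actually cover, the horizontal angle of view can be estimated from the focal length and the sensor width. A quick sketch, assuming a Super 35 sensor roughly 24.9mm wide - the exact width varies by camera model:

```python
import math

SUPER_35_WIDTH_MM = 24.9  # assumed sensor width; varies by model

def horizontal_fov(focal_mm, sensor_width_mm=SUPER_35_WIDTH_MM):
    """Horizontal angle of view in degrees for a given focal length,
    from the standard pinhole formula: 2 * atan(width / (2 * focal))."""
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_mm)))

print(round(horizontal_fov(24), 1))   # the 24mm wide: about 55 degrees
print(round(horizontal_fov(135), 1))  # the 135mm close-up: about 10.5 degrees
```

With each prime committed to one camera, each operator's possible framing sits in a known, narrow band - which is what let him structure coverage without watching every monitor.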

Richardson often likes to choose longer, telephoto focal length lenses. This is especially noticeable on shows like Yellowstone - where he would shoot close ups of characters on 135mm or even 180mm prime lenses. 

Even wide shots, which are more often captured with wider angle lenses, he likes to shoot on telephoto lenses. This compresses the image - making it feel like the background landscapes are closer to the characters than they actually are. And because using a longer focal length means placing the camera further away from the actors to get a wide shot, these shots will often incorporate some foreground elements between the camera and the actors - which we mentioned before.
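A rough way to see this compression effect: if the subject's size in frame is held constant, the camera distance scales with the focal length, and the background's apparent size relative to the subject grows with that distance. The distances below are illustrative assumptions, not figures from the show:

```python
def background_relative_size(subject_dist_m, background_offset_m=20):
    """How large a background element renders relative to the subject
    when the subject's size in frame is held constant. Magnification
    falls off with distance, so the ratio is simply
    subject_distance / background_distance."""
    return subject_dist_m / (subject_dist_m + background_offset_m)

# Same framed subject size, mountains assumed 20m behind the actor:
print(round(background_relative_size(3), 2))   # wide lens, camera 3m away: 0.13
print(round(background_relative_size(30), 2))  # telephoto, camera 30m away: 0.6
```

On the long lens the background renders more than four times larger relative to the actor - the landscape appears to loom right behind the characters.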

Unlike some DPs who only like using old vintage lenses with interesting swirling characteristics, anamorphic bokeh or heavy flares, Richardson often prefers spherical glass which is a bit more modern, consistent and sharp, and which preserves some of the shape of the background in the bokeh. These lenses include the Zeiss Ultra Primes, Zeiss Master Primes and Leica Summilux-Cs.

Again, this look pushes a naturalistic feel rather than an overly stylistic one.

Other than Beasts Of The Southern Wild, which was shot on 16mm film, he almost always uses a Super 35 digital Alexa: like the Mini or the XT.

He’s also used essentially the same LUT file across his past five projects with the Alexa, to get a consistent, natural colour look that he’s happy with.


The 2 Ways To Film Stories

Let’s use this video to explore the different ways that camera, lighting, sound and editing choices can be used to create either a subjective feeling or an objective experience.

INTRODUCTION

In the process of writing, shooting and editing a movie filmmakers are faced with a choice. Do you want to tell the story in a way that is more objective, and detached from the character, or in a way that is more subjective and replicates the character’s point of view or emotions?

Let’s use this video to explore the different ways that camera, lighting, sound and editing choices can be used to create either a subjective feeling or an objective experience.



OBJECTIVE VS SUBJECTIVE FILMMAKING

In cinema, the perspective that you create is incredibly important. It affects how the audience interprets the story, feels about the characters and the kind of emotions that they experience along the way.

Although there are two perspectives - an objective one and a subjective one - there is also a lot of middle ground between them, and room for this to shift throughout a movie.

There also isn’t only one technique or way of creating a perspective. It is usually the result of a handful of different methods and ideas that are combined to create a point of view on the story. 

Let’s start with objectivity. This is when filmmaking techniques convey information from an omniscient point of view: as if the shot is being observed by an audience member. There is little emotional emphasis attached to the perspective as the camera passively observes the action from a distance.

Most shots in movies are quite objective. 





CAMERA LANGUAGE

To me, this feeling is best achieved by using a stable camera, operated off a tripod or dolly. If there is camera movement, it is usually because it is motivated by the characters themselves moving.

I have another more comprehensive video on shot sizes, but basically wider shots, like long shots or medium shots, place the characters a bit further from the camera and therefore feel a bit more objective - as if the camera is observing their actions from a healthy distance.

Positioning the camera so that it sits at around the same height as the characters, and making sure it is filming at a neutral angle without a tilt, is another way of imposing a feeling of objectivity.

Subjective shots, on the other hand, may try to place the audience in the shoes of the character on screen by using what is called a point-of-view, or POV, shot. This is often achieved by cutting from a shot of the character to a shot that represents what they would be seeing.

Such as this sequence which cuts from a shot of a character whose eyeline is a bit off camera, to a shot of their perspective which mimics the same angle as their gaze.  

Or, this sequence, which mimics a person’s point of view by using a macro lens to shoot the image through a scope - that the character is looking through. This POV technique is also sometimes done by using a black mask or shooting through other devices like binoculars, a keyhole in a door, a telephoto lens of a camera, or to recreate the perspective of looking through night vision goggles. 

Some movies, like Enter The Void, even leaned on this POV technique so heavily that they incorporated the character blinking - cutting to a couple of black frames incredibly quickly to create the illusion.
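The illusion works because a couple of black frames pass almost imperceptibly at normal playback speed. A quick sketch of that arithmetic, with the frame counts assumed for illustration:

```python
def black_frame_duration_ms(n_frames, fps=24):
    """Duration of a run of black frames at a given frame rate,
    in milliseconds."""
    return n_frames / fps * 1000

print(round(black_frame_duration_ms(2)))  # 2 frames at 24fps: about 83ms
print(round(black_frame_duration_ms(3)))  # 3 frames: 125ms
```

A real blink lasts on the order of a tenth of a second or more, so even two or three black frames land close enough to sell the effect.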

An entire genre, of found footage movies, has even been created around the idea of only using shots taken from the literal POV of a character who is filming everything. 

But there are also other camera techniques, other than using literal POV shots, that can be used to create a visual experience which is more subjective and focused on the character. 

Using a tight shot where the camera is placed physically closer to the actor during filming creates a more subjective, closer bond between the audience and the character, than filming in a more objective wide shot.

Shooting from either a low angle, to make characters appear larger than life, or a higher angle that diminishes them, is a way of imposing a feeling on the image. This usually makes the audience experience the shot in a subtly more psychologically subjective way - where the camera makes the character feel stronger or weaker. 

Although it depends a lot on context, I often find that handheld camera movement creates a more subjective language. Perhaps this is due to handheld motion’s connection to the documentary genre - where the character may interact with the camera and is more aware of its presence.

This handheld motion can also be combined with an over-the-shoulder perspective, to make it feel like we are following in the literal footsteps of the character. Again, bringing the audience and the character closer together. 

Another type of camera rig that can be used to create a subjective feel is called a Snorricam - which attaches the camera directly onto the body of the protagonist. This glues the audience onto a close up of the character, so that we move with them in a subjective way.  

A good rule of thumb is that the more stylised and experimental the camera language is, the more it is usually trying to impose a feeling on the audience, get them into the mind of the character, and therefore the more subjective it is. 

The more naturalistic the camera’s role is, on the other hand, the more it creates an observational, detached perspective that is objective.



SOUND

Sound is a filmmaking tool that also plays an important role. Objective shots have sound that presents more of a general recording of the space: we pick up on things like dialogue and any ambient sounds in the location, such as the room sound of a TV that happens to be on, while special sound effects, or audio that wouldn’t naturally occur, are excluded from the mix.

Subjective feeling shots will often play with sound, and may share the sound that characters hear with the audience - in a similar way to how the camera uses a POV shot to share the perspective of a character. A good example of this is when the audience hears the same sound that a character is hearing in headphones or on a telephone.

Another, more extreme version of subjectivity is when the plot and filmmaking mechanisms dive into the mind of the character - through what is called mental subjectivity.

This could be done by using a voice over spoken by the character, by showing their dreams, using a shallow depth of field to evoke their memories, or even using visual effects or camera trickery to convey hallucinations that they may be experiencing. 

When the language of cinema takes on how the character is experiencing the moment it is subjective.



EDITING

How quickly or how slowly scenes are edited also has an effect on how stories are told. Usually, shots that play out in longer takes or scenes with very few transitions between shots tend to mimic how we experience time in the real world and therefore usually feel more natural and objective.

Rapidly cutting through many shots, on the other hand, has more of a deliberate emotional effect and can create feelings such as anxiety or tension - feelings that emerge artificially through subjective editing.



LIGHTING

A final cinematographic concept that can be used to tell stories in different ways comes down to how cinematographers use light.

Objective storytellers like to lean into using naturalistic lighting. This is when either only natural ambient light is used, or, more often, when the film lights that are used to add illumination to a shot are placed in a way that is motivated by the light in the shooting location.

A good example of this is how, when shooting interiors, big sources of light with a daylight colour temperature are placed outside windows, shining in, to mimic the direction and quality of sunlight - while also adding a more cinematic contrast and increasing the levels of light inside the room.

For more subjective stories, cinematographers can employ more expressionist lighting techniques. This may include using RGB LED lights or fixtures with coloured gels, to add different unnatural hues of light to a scene. This use of vibrant, exaggerated colour elevates stories out of the real world, to create a feeling or a tone that embodies how a character sees the world in that moment - rather than merely presenting the world plainly and objectively as it really is. 
