How Wong Kar-Wai Visualises Time: Chungking Express
How does Wong Kar-Wai pull off his famous step printing technique in Chungking Express?
INTRODUCTION
Right from the get-go, Chungking Express throws us into a frenzied spiral of blurred motion as time chaotically rushes by in bursts of speed.
Only to later cut to this shot, which has the opposite effect of suspending time, drawing it out, elongating it, as the outside world flies past while the two characters barely move at all.
Perhaps surprisingly, both of these moments which play with our perception of time in different ways, were actually filmed using the exact same camera and editing technique. So, let's use this video to unpack how this can be done, and what effect it has on how we emotionally perceive the characters and action in Chungking Express.
TWO STORIES
Chungking Express is like a mirror. It splits the film in two, with two sets of characters and plots, which each follow the same template, and with each part mainly shot by a different cinematographer. Both characters are policemen in Hong Kong. Both are in the aftermath of a breakup.
However, these two sides of the same coin unfold slightly differently based on the divergences in the nature of each character.
The first policeman is an extroverted, outgoing, overtly emotional character, who deals with the breakup by calling everyone he knows - trying to douse his heartbreak by seeking out the company of people.
Whereas the second cop is more introverted, stuck in his routine, and insular. He deals with his breakup by distancing himself from those around him.
With this in mind, it makes sense why Wong Kar-Wai may want to present how the more fiery first cop perceives time in more of a frenzied, sped up, manic way. Whereas, the second cop’s more inward facing solitude slows time down to a crawl.
To manipulate time and speed, the director and his cinematographers, Andrew Lau during the first part of the story and Christopher Doyle during the second part, used the same camera technique of shooting different frame rates and the editing technique of step printing.
But, before we talk about how they technically did this, it may be useful to examine where this influence may have come from.
FRENCH NEW WAVE INSPIRATION
“The thing that you should understand about Wong Kar-Wai’s style and what makes it different from the majority of films coming out of Hong Kong is almost all the Hong Kong films are either wild, crazy comedies, wild, crazy fantasy films, and then what’s become the John Woo, Ringo Lam action films. He’s also taking a cue from the French New Wave films. In particular, Godard is his man.” - Quentin Tarantino
French New Wave filmmakers, like Godard, took genre films and subverted them, deconstructing and experimenting with the very form of cinema. One convention that he played with in a film called Every Man For Himself was that of time and motion.
STEP PRINTING
Normally, films are shot at around 24 frames per second. In other words, the camera captures 24 individual pictures every second that it is rolling. When these 24 frames are played back over 1 second, we get a normal feeling of motion.
If the filmmaker wants a slow motion effect, they can shoot at a faster frame rate on the camera, such as 48 frames per second. When these 48 frames are played back at 24 frames per second, they will play for 2 seconds rather than 1 second - thus giving a slow motion effect.
When Godard wanted to slow time down, rather than using the traditional method of shooting a higher frame rate and playing it back at 24 frames, he instead shot at 24 frames per second and used a technique called step printing to artificially and jerkily slow down time.
Step printing prints multiple copies, or duplicates, of each frame. For example, if you took 24 frames of film and printed each frame twice, you’d end up with 48 frames. When this is played back, it will slow down motion, in the same way that shooting with a higher frame rate would.
Visually, however, the two look different. Because step printing is essentially a series of rapid freeze frames, it gives motion a strange stuttering quality.
This step printing technique later became especially popular in Hong Kong action films shot around the same time, allowing them to get a slow motion effect from footage shot at 24FPS without needing to pay twice as much for film stock by shooting at 48FPS.
Now that this analogue technology of physically printing extra frames of film is obsolete, filmmakers can achieve the same effect by changing the playback speed of the footage in editing software.
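To make the idea concrete, here's a rough Python sketch of what step printing boils down to when done digitally: duplicating each frame a set number of times. The frame list and helper function are purely illustrative - this isn't the tool that was used on Chungking Express.

```python
# A rough sketch of digital step printing: every frame is simply duplicated
# a set number of times before playback.

def step_print(frames, copies_per_frame):
    """Return a new list with every frame repeated `copies_per_frame` times."""
    printed = []
    for frame in frames:
        printed.extend([frame] * copies_per_frame)
    return printed

# 24 frames printed twice each become 48 frames. Played back at 24fps,
# one second of action now lasts two seconds - but as a series of held
# freeze frames rather than smooth slow motion.
one_second = list(range(24))        # stand-ins for real image frames
slowed = step_print(one_second, 2)
assert len(slowed) == 48
```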
STORY ONE
Step printing gave Wong Kar-Wai a way to slow down motion with a stutter, but what if, like in the first part of Chungking Express, he instead wanted time to pass at a chaotic, crazy, blurred pace - to visually represent a feeling of adrenaline in an action sequence, or to exaggerate how a chaotic character perceives life?
They could do this by slowing down the frame rate of the camera. Andrew Lau shot these sequences at only 6 frames per second. Slowing down the frame rate meant that the camera’s shutter stayed open for longer - approximately 1/12th of a second. This longer exposure meant more light was let in and that there would be more motion blur.
Later, they could then use the step printer to print each frame four times to arrive at a playback speed of 24 frames per second.
If you scrub through these scenes frame by frame, you can see that each shot holds on a freeze frame for 4 frames, before moving onto the next image.
This same technique can also be done with modern digital cameras by shooting with a lower frame rate setting, like 6FPS. You can then either shoot with a 180 degree shutter angle, which will automatically adjust how long the shutter stays open for, or set the shutter speed to one over double your frame rate - so in this case, double 6FPS gives a shutter speed of 1/12.
Later, once you pull the footage into editing software, you can decrease the playback speed to account for the slower frame rate. A playback speed of 24FPS divided by your shooting speed of 6FPS equals 4. In other words, to get back to normal speed you need to step print each frame 4 times, or slow your footage down in editing software to 1/4 of its speed, or 25%.
This formula can be adjusted based on the frame rate you want to shoot at. If you shoot at 8FPS, you can set your shutter speed on the camera to double that, at 1/16. Then, in the edit, divide your playback speed of 24FPS by your shooting speed of 8FPS, which equals 3. This means you need to print each frame 3 times, or slow your footage down to 1/3 of its speed, or 33.3%.
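Here is that same arithmetic wrapped into a small Python helper - just a sketch, assuming 24FPS playback and the 180 degree shutter rule, not a tool from the film's actual workflow.

```python
# Sketch of the step printing arithmetic, assuming 24FPS playback and a
# 180 degree shutter (shutter speed = 1 / (2 x frame rate)).

def step_print_settings(shooting_fps, playback_fps=24):
    shutter_speed = 1.0 / (2 * shooting_fps)       # e.g. 6FPS -> 1/12 of a second
    print_factor = playback_fps / shooting_fps     # copies needed of each frame
    playback_speed = shooting_fps / playback_fps   # speed setting in editing software
    return shutter_speed, print_factor, playback_speed

# Shooting at 6FPS: shutter 1/12, print each frame 4 times, or play at 25% speed.
print(step_print_settings(6))   # (0.0833..., 4.0, 0.25)
# Shooting at 8FPS: shutter 1/16, print each frame 3 times, or play at ~33.3% speed.
print(step_print_settings(8))   # (0.0625, 3.0, 0.3333...)
```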
Shooting at a low frame rate creates a lot of motion blur - especially if people or objects move through the frame at a high speed, or if the camera itself moves around a lot.
It’s the handheld camera moves used by Lau that make these sequences with the first cop feel much more chaotic, blurred and sped up.
Even when they are shooting at a regular frame rate, a bunch of other cinematic techniques also contribute to the chaos. Chungking Express uses wide angle lenses that distort the face when up close to characters.
This is a technique they’d push even further on the next film they’d shoot, Fallen Angels, which was shot with an ultra wide-angle lens.
The camera is almost always moving and is for the most part operated handheld on the shoulder to give it more shake.
They will often start the shot on an object and then rapidly pan or tilt off it onto the subject, using these as cutting points to inject more motion and energy into scenes. They also use a combination of counter moves - moving the camera in the opposite direction to the character to exaggerate the speed of the motion - or directly track with the movement of the characters.
Quick cuts are made to tight insert close ups, which are sometimes even moving.
And, even if they do choose to move the camera further away from characters and shoot with longer telephoto lenses, they’ll still make the image feel busy and dirty by shooting through lots of foreground that blocks the frame.
Throughout Chungking Express, a lot of different, mixed light sources in different colours and temperatures are used, many of which are practicals: such as from a fish tank or cool overhead fluorescent tubes.
STORY TWO
As we transition from the intensity of the first story to the second set of characters who either monotonously wait around for their shift to end or deal with a breakup through solitude and stillness, the filmmaking itself adjusts to the pace of the characters.
Stylistically, many of the same camera techniques, such as using wide angle lenses handheld, lighting with colour and shooting with telephoto lenses through foreground, carry through to this second story. However, the intensity of the handheld moves and the frequency of the cuts made in the edit slows down.
This differentiates how the pacing of time is perceived between the two stories. With the first feeling faster, and the second feeling slower.
The second story also forgoes the manic, handheld, 6 frames per second technique which was so widely used in the first part.
However, there are three moments in the second half that use step printing, but with a very different effect.
Instead of shooting handheld, they lock the camera off on a tripod, and block the actors in the frame so that they barely move, or if they do, they deliberately slow their movements down to a snail’s pace.
They shot this at 8 frames per second. When each of these frames is later step printed 3 times in the edit, it makes it feel like the characters are moving in slow motion, suspended, while the world around them flies by.
There’s one final interesting use of this technique. This shot starts out being filmed and played back at regular speed, 24FPS; however, at the end of the shot, Wong Kar-Wai step prints the final few frames, duplicating each frame three times. This gives the impression of time slowing down within the shot.
The following shot is then also step printed and slowed down. This quite effectively visually represents Faye Wong’s boredom as she waits out her shift.
CONCLUSION
Wong Kar-Wai’s work shows that, just as in the French New Wave, the rules of cinema are made to be broken. Throughout his movies he’s always quite overtly shown that time, and how it is perceived, is a core theme in his work.
It makes sense then that he and his DPs like to play with the very speed at which life itself is captured by the camera.
How Different Directors Create Different Looks: Pachinko
Let’s take a look at how different episodes of Pachinko, which used the same technical crew and locations, each took a unique creative approach based on having a different director and cinematographer team.
INTRODUCTION
The job of a director is to take the bunch of words written in the screenplay and transform it into an audio and visual work. As you can imagine there are a number of ways this can be done, some lean into stylisation, others side with naturalism, some like to let the performances of actors stand alone, others still like to push the looseness of the camera language.
Just like how different artists can produce portraits which look and feel totally different, different directors, DPs and creative crew members, with their different points of view, produce films which are unique to them.
With this in mind, let’s take a look at how different episodes of Pachinko, which used the same technical crew and locations, each took a unique creative approach based on having a different director and cinematographer team.
SETTING THE LOOK
Multi-part series have a shooting structure which differs from regular feature length filmmaking. Almost all movies are led by a single director and photographed by one cinematographer.
However, due to the extended workload and high shooting volume that comes with long form series, these shows are usually broken into a few different blocks, which are each directed by and photographed by different creative teams.
To maintain consistency, typically the creative team who shoots the first block of episodes will create what is called a ‘bible’, which dictates the look and feel of the series and how it should be creatively produced going forward.
In the case of Pachinko this first block of shooting was done by director Kogonada and DP Florian Hoffmeister.
“The structure is that the first block, which was my block and which includes the pilot, will actually set the look. A lot of creative exploration and a lot of creative development will actually happen in prep for the first block and then the second or third or whatever comes after basically works along those guidelines established by the creative team of the first block.” - Florian Hoffmeister
These guidelines usually cover both technical choices and creative ones. For example, the camera selection, lenses, rigging of lighting fixtures at regular locations, and the LUT which interprets the look of the colour grade will remain consistent.
Similarly, visual decisions like the aspect ratio, frame composition and camera movement will try to conform to what has been done before.
You can see this across the various seasons of shows like Breaking Bad, which was filmed on 35mm with Kodak Vision 2 stocks, in a 16:9 aspect ratio, with a warm colour palette tinged with sickly greens and consistently handheld camera movement.
Or The Bear, which uses quite a contrast heavy, saturated LUT, lots of long lenses, tight close ups, and a cooler palette.
Or Yellowstone’s use of natural light, sharp lenses and stable frames which are shot with telephoto focal lengths that compress the background of landscapes and bring them closer to the characters.
What made Pachinko a bit different from your average series, was that when it came to shooting block two with a new creative team of director Justin Chon and DP Ante Cheng, instead of conforming to a ‘bible’, they were given the freedom to depart from the established look and creatively approach their episodes with the look they felt was best suited to the story.
TWO APPROACHES
So what made the approaches of these two directors different from each other? Kogonada, who shot the first block, is a director to whom form is everything. If you look at his other work, each frame feels carefully considered, deliberately crafted and photographed with a subtle feeling of objectivity. All aspects of his storytelling feel structured and based around form, both narratively, in terms of blocking and performance and visually.
Justin Chon on the other hand, who directed the second block, is a filmmaker with an acting background whose work is very much bound up in performance and emotion. Again, this directorial tone also creeps into the look, with much of his other work having a level of looseness, energy, emotion, texture and far more subjectivity and closeness to his characters.
“I always felt that if you were to press these two very different filmmakers into this…You know, the constraints of, like, what they call a unified vision or a bible you would lose a lot of emotional and visual variety. So, I very much from the beginning, I love it for a system where the second block would not necessarily work along our lines.” - Florian Hoffmeister
Each block was given the same resources, locations and crew to shoot with, but was led by a different vision from two different directors. So, what were some of the things that each block did differently to create their unique tones?
BLOCK 1: KOGONADA / HOFFMEISTER
Pachinko is an historical epic drama with a storyline which jumps around between the 1920s and the 80s: showing how significantly the past experiences of people inform their present situation. This connection between the past and present is so strong that on a philosophical level, it’s almost as if the story is taking place in one time.
Kogonada’s approach stressed that the spaces in the film are more important than time, and in doing so stylistically disregarded the different periods, treating them almost as if they were taking place at one time.
What this meant in terms of camera language was that they chose not to visually differentiate the time periods - no different aspect ratios, colour grades or lenses to mark out the separate timelines. Kogonada and Hoffmeister made the decision to maintain the same visual look in both the 20s and 80s.
They let the production design, costume and make up work differentiate the time periods rather than the camera.
This look leaned on their preference for structured, formal, objective, stable visuals. They almost exclusively shot off a tripod or dolly - carefully composing each frame and barely moving the camera.
If the camera did move it mostly tracked along with the motion and speed of the characters with a high degree of stability from something like a Steadicam or dolly.
Or, they used some linear push ins or pull outs on stationary characters. However, these were handled almost exclusively with very subtle, barely perceptible, creeping motion done with a dolly and tracks.
In other words the camera never did stylistic moves for the sake of movement and was rather motivated by the movement of characters or an emotion.
When it came to their shot selection, they often covered scenes with a medium-wide master shot taken from the waist up, before jumping into medium shots, medium close ups, or close ups which very rarely were framed tightly on characters.
They saved using these more intimate close ups for key moments where they really wanted to emotionally elevate a feeling and get inside the character’s head.
Framing scenes from this mid to wide range distance created a little more emotional separation and a greater feeling of objectivity.
The DP and director often balanced these shots with a symmetrical care, lining up centrally framed compositions of characters - especially during wides - which gave a hint of Yasujirō Ozu.
In terms of their camera and lens selection, they favoured quite a clean, sharp, large format look with a shallow depth of field.
The first block was shot on a Sony Venice with Panavision Panaspeed spherical prime lenses. These come with a very fast stop of T1.4, which Hoffmeister used to separate characters from their background, especially during close ups, shooting wide open with lots of bokeh.
Shooting with the wider field of view of a full frame sensor and large format lenses meant that Hoffmeister could choose longer focal lengths and still be able to frame with enough width for wide shots. Using these more medium or telephoto focal lengths, again, created a shallow depth of field which better isolated the characters in the frame by blurring the background.
Hoffmeister worked with colourist Tom Poole in creating a photochemical looking LUT for the digital camera which was based on the colour work produced by Magnum photographer Burt Glinn in Japan.
Overall, the stylistic approach of this first block relied on creating one consistent, clean look across the different time periods, which didn't try to be too nostalgic or emotional, maintaining more objective, traditionally cinematic, composed frames, never trying to make the audience aware of the presence of the camera.
BLOCK 2: CHON / CHENG
Block two’s approach countered this idea of a clean, objective, formal, structured tone, instead leaning into emotion and subjectivity.
This was done by largely forgoing the smooth or locked camera moves, in favour of a handheld camera, deliberately operated with much more looseness and less precision.
Although, like in the first block, they would often hold on static frames, Chon and Cheng decided to do this with a handheld camera rather than off a tripod head, giving the images a subtle looseness.
Sometimes these hand operated compositions would even have a bit of a dutch tilt to them, with a slightly off kilter horizon, or go in and out of focus a little in an organic way as the focus puller reacted to the more improvised camera motion.
When they wanted a bit more energy they also incorporated some handheld camera moves to give the footage a bit more dynamism.
When it came to portraying the different time periods, the block two team took a different approach. Rather than maintaining the same look for everything, they subtly gave the 20s period footage a different feel by shooting it on anamorphic lenses, then used the same spherical Panaspeeds for the 80s scenes.
When shooting anamorphic they kept the same 2.20 aspect ratio as the rest of the show by chopping off the sides of the wider 2.40 native frame which they extracted from the anamorphic lenses.
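As a rough illustration of that extraction - the frame height below is an arbitrary example, not Pachinko's actual recording format - the crop works out like this:

```python
# Cropping a de-squeezed 2.40:1 anamorphic frame down to the show's 2.20:1 ratio.
# The pixel height is an arbitrary example; only the ratios come from the text.
height = 1800
native_width = round(height * 2.40)   # the wider de-squeezed anamorphic frame
target_width = round(height * 2.20)   # the series' 2.20:1 delivery ratio
crop_per_side = (native_width - target_width) / 2
print(native_width, target_width, crop_per_side)   # 4320 3960 180.0
```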
These anamorphic lenses provided a greater focus falloff around the edges of the frame. In other words, the sweet spot in the middle would stay sharp while the borders of the shot progressively fell out of focus.
These lenses also create an oval bokeh shape which is different from the regular rounded bokeh of spherical glass. Anamorphics also tend to flare a bit easier - with a lateral flare.
These flares were another inclusion which differentiated block two from block one and made Chon’s images feel a lot more textural and a bit rougher than Kogonada’s clean frames.
The DP and director also ‘messed up’ the image in other ways. They added haze when shooting interior scenes, especially those in the 20s, to give an additional layer of texture.
Also their images have a higher degree of halation or bloom to the highlights than block one, which looks a bit cleaner. I’m not sure whether this came from the anamorphic glass, the use of haze, effects in post, or by using a stronger diffusion filter in front of the lens, however it too added a smoky texture.
Like in some of their other work together, they also implemented much more colourful lighting than in the first block, pushing in blue or red tones. Or, sometimes, using a very cool colour balance to push tones into an exaggerated dawn, blue look.
CONCLUSION
I think what this case study shows is that there is not necessarily a right or a wrong approach when it comes to making creative decisions. Different directors, DPs and crew members will bring different sensibilities and interpretations to the screenplay.
Like in art, it's these nuanced differences in how the subject matter is portrayed which are the little signature at the bottom of the painting which show the authorship of each film.
The Cinema Camera That Dominates The Movie Industry
Let's evaluate what the most popular cinema camera is based on a study on what cameras were used to shoot the narrative movies at Sundance, Cannes and the Oscars from 2021 to 2024.
INTRODUCTION
Camera technology is constantly evolving. Each year, different manufacturers introduce new, shiny, better-than-ever products, vying to capture market share. It may therefore be surprising that when it comes to the world of high end digital cinema cameras there is one clear cut winner, and, perhaps even more surprisingly, far and away the most popular camera on the market was released almost 10 years ago.
To back this up with some data, I went and evaluated the cameras used by narrative movies from arguably three of the most influential film awards and festivals, to determine what the most popular cinema camera is, which camera manufacturers claim the top spots, how many productions still shoot on film over digital, and which sensor format is most popular.
STUDY
To put together these findings I’ve looked at what cameras were most used at three main film festivals which each represent a slightly different demographic in the market: Sundance - which focuses a bit more on indie productions, Cannes - most of whose entrants favour mid range films with a few higher budget movies, and the Oscars - which tend to favour bigger budgets.
To put this together I’ve gathered data from IndieWire’s camera surveys - which are published every year and show which productions used which cameras. To get a sense of the progression in camera selection, I’ve looked at these festivals over the previous four years.
These results only look at narrative feature films - and exclude other formats like long form series or documentaries - which as an example would feature far more Sony and Canon cameras at a lower budget level.
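For anyone curious about the counting itself, here's a minimal sketch of how a tally like this can be computed - the entries below are made up, and the real figures quoted in this piece come from IndieWire's published surveys.

```python
from collections import Counter

# Made-up example entries standing in for the real survey data.
productions = [
    {"title": "Film A", "cameras": ["Arri Alexa Mini"]},
    {"title": "Film B", "cameras": ["Sony Venice"]},
    {"title": "Film C", "cameras": ["Arri Alexa Mini", "Arri Alexa Mini LF"]},
]

camera_counts = Counter()
for production in productions:
    # Count each model once per production, even if several bodies were used.
    for camera in set(production["cameras"]):
        camera_counts[camera] += 1

total = len(productions)
for camera, count in camera_counts.most_common():
    print(f"{camera}: {count}/{total} productions ({count / total:.0%})")
```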
RESULTS
What emerged was one clear victor: Arri. More specifically, the Arri Alexa Mini. This Super35 digital cinema camera was by far the most popular, even though it was released all the way back in 2015.
Of the 175 productions at Sundance over the last 4 years, an incredible 44% were shot with an Arri Alexa Mini. Counting all Arri digital cameras - including their large format options, the Amira and alternative versions of the Alexa - the manufacturer was used on 72% of all productions.
Other digital cinema camera manufacturers pale in comparison, with Sony at 12% and Red at 5%.
This trend of Arri digital cinema dominance showed up at both Cannes and the Academy Awards, albeit at slightly lower levels - with again Sony coming in as the 2nd most dominant manufacturer with their Venice and Venice 2, and Red bringing up the rear.
Sundance typically also featured a greater variety of camera and manufacturer picks, with a scattering of lower end digital cinema choices - like Blackmagic Pocket Cinema Cameras, a Canon C70 or a Red Komodo.
Interestingly, the Alexa Mini’s dominance as ‘most used camera’ dropped quite a bit, all the way down to 16%, when it came to the Academy Awards. I believe there’s a good reason for this - which we’ll get to a little later.
Still, across the board the Alexa Mini reigned supreme - on indie film productions, medium budget shoots and high budget movies alike.
ALEXA MINI DOMINANCE
In a past video I outlined why it is that Arri’s digital cinema products have dominated the market so much: from their look and colour science, to their compatibility, ease of use, heritage and reliability.
However, why is the Mini specifically still the most popular Alexa model? I’d say the number one reason is its form factor. The Mini was released with the expectation that it would mainly be used for lighter weight builds such as drone or gimbal work, with the other studio Alexas being used as the main workhorses.
However, its smaller profile, which could also be built up into bulkier configurations by adding accessories, meant that it soared in popularity as the go-to camera choice amongst cinematographers.
In other words, you got that sought after Alev sensor and Arri colour science in a package which could be built either compact or large depending on your preference.
In much the same way, the Alexa Mini LF, with its smaller form factor, proved vastly more popular as a large format camera than the bigger-bodied Alexa LF - proving that, when it comes to choosing a camera, size apparently does matter.
Before the introduction of the newer Alexa 35, which can conform to Netflix’s 4K recording requirements, the Alexa Mini was pretty much the default choice if you wanted to shoot with a Super35 sensor.
Although the Alexa Mini still dominates even in 2024, with its slightly lower price tag and data rates, it has lost quite a lot of market share this year in the high end arena to Arri’s latest Super35 camera - the Alexa 35.
SUPER35 VS LARGE FORMAT
In recent years there’s been a bit of a surge to shoot on large format digital cameras, which have bigger sensors than traditional Super35 - such as the full frame Venice, the Arri Alexa Mini LF or even the Alexa 65 with its wider field of view.
However, looking at the numbers, large format cameras still don’t dominate the market. If we look at every production and sort each camera into either Super35 or large format, the results show that at both Cannes and Sundance there was a similar preference for Super35 with it leading at 65% and 69% respectively.
Bear in mind that some productions used multiple cameras - and paired up different sensor sizes, such as an Alexa Mini and a Mini LF.
Super35 capture offers cinematographers far more lens options to choose from than large formats do, dating all the way from vintage glass to contemporary lenses. Typically these cameras also come at a lower rental cost than the more expensive large format cameras do.
Interestingly when we look at the Academy Awards we see a different picture. Only 43% of recent Oscar picks shot in Super35, while 77% of productions utilised some form of large format capture.
This also speaks to our earlier note about only 16% being shot on an Alexa Mini.
I’d argue that one of the reasons for this may be that Oscar films have the bigger budget to rent large format camera packages, which low budget Indie movies from Sundance may not be able to do.
I’d speculate that perhaps cinematographers also feel that the more contained indie and mid budget dramas which largely come out of Cannes and Sundance are better suited to the more traditional Super35 format.
Whereas, maybe, the higher budget, blockbuster nature of Oscar picks is better served by wider field, higher resolution formats like Imax or digital 65.
That’s what the data suggests anyway.
FILM VS DIGITAL
A final question that came to mind was how many productions still shoot on film versus digital?
This number included any production which used any type of film acquisition - from 16mm all the way up to Imax. Over the past four years, 11% of Sundance productions, 22% of movies from Cannes and 30% of Academy Award nominated flicks used film cameras.
Of those film cameras, again, Arri came first, with their LT and ST versions of the Arricam being by far the most popular film cameras.
Although digital cameras of course dominated overall, I was quite surprised at how high film usage still remains in 2024. Again, there’s also a possible correlation here with budget. The more money a production has as it scales from indie, to mid, to high budget festivals, the more freedom filmmakers have to choose whether or not to shoot on film.
Cinematography Style: Agnès Godard
In this episode let's take a look at the cinematography of French DP Agnès Godard.
INTRODUCTION
Agnès Godard crafts images that provide viewers with a tactile, sensory experience, using textural photography, soft but naturalistic light and an abundance of close ups to immerse the audience and create a visceral impression of how her characters navigate the world.
So, let’s dive a bit deeper into how she became a DP, her thoughts on cinematography and look at some of the gear she uses to create images.
BACKGROUND
After graduating from film school she began working in the camera department as a technician - most notably working as an assistant on movies by Wim Wenders.
In fact it was on one of those shoots - Paris, Texas - that she met another crew member, Claire Denis, who was working as an assistant director.
When Denis embarked on her first feature as a director, she selected Godard as the camera operator. Godard later stepped up to director of photography on most of Denis’ subsequent films, forming a long running collaborative relationship with the director, whose work often focuses on intimate portraits of characters on the edges of society.
“I have a very long collaboration with Claire Denis. We’ve made 16 films together, which is a lot. What I got out of it, in any case, was the following reflection: the more we’ve gone for something that’s concise, the denser and stronger it became. And I realised that sometimes simplicity is the result of work and maturation.”
Much of her work condenses the photography into a fairly simplistic approach where how the camera moves and how scenes are lit isn’t overly convoluted, choreographed or rehearsed. She strives to make the presence of the camera disappear and have each shot contain everything that needs to be said in a primitive, simplistic way.
As a result, I’d say much of her work errs on the side of naturalism or realism, sometimes elevated by an experimental edit. When shooting exteriors she embraces the natural light and doesn’t try to over diffuse it, shape it or try to contain it.
However, I wouldn't say that her approach is purely centred on realism either. Sometimes, she’ll add colour to her lighting, going with a sharp blue for night exteriors, or a cool teal for street scenes.
Another component of photography which she elevates out of the ordinary into more of a dreamy, meandering state is how she frames shots and moves the camera. Her frames will often flow over spaces and characters with gradual pans and tilts, or take on a rougher, handheld perspective.
This framing and movement is considered and based on the foundational idea in her photography that the camera can be used to shape the perspective and the emotional and physical proximity to the characters.
“In terms of the image, perspective is something that defines a distance, first of all. So let’s say, in terms of images, the distance from which we watch a filmed subject. The distance at which the camera will be placed. It’s something very important, as it’s the distance from which you come into contact with the characters of a film.”
The perspective of the camera is rarely locked onto one character throughout a movie, and instead drifts from person to person, or from detail to detail, like the wandering eyes of an observer in a room that is never seen.
This drifting style of operating allows audiences to pursue different details within the frame, as the camera tilts or pans from objects, textures or details up to the subject.
To get this feeling she’ll often position the perspective of the camera to be quite close to whatever is being filmed. Filling the frame with tactile close ups, whether those be of characters, cutaways to details which inform the story, or doing those dreamy camera pans and tilts.
An interesting idea she’s described during her work with Denis is ‘writing’ the movie on set, through how the images are created. Arriving in a space where all the ingredients are put together, from the location, to the actors, the mise en scène and costumes, and then ‘writing’ the film by creating frames in the moment, rather than the standard idea of getting a script and then illustrating what has already been written.
I’ve noticed that her framing often includes vertical or horizontal lines within frames which intersect the image - whether from architectural or production design elements, or from using the line of a straight horizon to cut across the frame.
GEAR
As we mentioned, perspective and distance hold a very important place in her work. Her decisions as to where to place the characters and the camera, also have a direct impact on the kind of gear which she selects.
Rather than choosing very wide angle lenses and shooting them from a close distance to the character - as has become a popular cinematic trope nowadays - she instead often selects medium focal lengths or telephoto lenses, like a 40mm or longer, and places the camera a bit further away from the actors.
These longer lenses compress the image more, making the background feel closer to the characters and creating a layered feeling of depth where different planes of foreground and background layers are pushed together and assembled within the frame.
This effect of using long focal length lenses is particularly noticeable in exterior wide shots, where far off landscapes feel like they are looming right behind characters. These compressed backgrounds place characters alongside their environment, rather than shooting with wide angle lenses that distance the background from the subject.
Longer lenses, with their lack of distortion, also make lining up vertical or horizontal straight lines in shots a bit simpler.
A secondary effect of shooting on longer lenses is that she can place lights further away from characters. The wider the focal length chosen, the closer the camera has to be to the characters, and therefore the closer lights have to be placed to the actors so that the camera won’t come in between the lights and cast shadows.
Godard prefers to have some distance between the light source and the subject, which sometimes makes shooting further away on long lenses easier as the camera and crew won’t get in the way of the light source.
Her camera selection has been split between shooting on 35mm film in her earlier work and on digital cameras in the more recent years.
During her early work she often favoured a lighter camera, which could be easily operated handheld and could fit into tight spaces - which, for her, was an Aaton 35 camera body.
She paired this camera with different spherical lenses - normally favouring the height of taller aspect ratios like 1.85 or 1.66 and staying away from the widescreen anamorphic look. Some lenses she’s used include the Zeiss Standard Speeds, Cooke S4s and Panavision 70mm and Primo glass.
Shooting 35mm film, with its wider perspective than 16mm, on the Aaton 35 with Panavision Primo primes - which resolved beautiful skin tones accurately and had a fast T1.9 stop - was a combination she often arrived at, and it can be seen in her iconic work on Beau Travail.
“I really wanted to have this 35mm perspective because I thought that to see the bodies, the men, in the landscapes was the thing. Then I chose very performant lenses. So I worked with Primo lenses from Panavision and they were fantastic. The only thing is that they were quite big.”
She’s also used zoom lenses quite often, usually from Angenieux, such as the classic 24-290mm, the 15-40mm and even the newer EZ Full Frame 22-60mm and 45-135mm zooms on her more recent work. She combined this with a teleconverter when she needed to extend her focal length range even further.
After transitioning to digital cinema with the Arri Alexa classic, she’s since moved to mainly using Sony camera bodies. From the earlier model F65, to, in more recent years, using the Sony Venice.
This change in camera selection from the lightweight Aaton 35 to the large, cumbersome Sony F65 also yielded a change in approach to how the camera moved.
The Aaton allowed her to operate raw, handheld over the shoulder shots, or easily place the camera on boats, trains or tight spaces to get shots which vibrated or shook with the movement of said transportation method.
Whereas the larger weight and size of the F65 meant that movies like Let The Sunshine In were shot completely off either a dolly or tripod which stabilised the camera and gave the images a more grounded feel.
Her lighting largely leans into a natural look, using sunlight whenever she can, whether it's soft, diffused, luminous and ambient, or hard, direct and sharp with strong shadows.
For day interiors she often supplements this with large bounced or diffused sources which illuminate actors with a very soft quality. Then, during night scenes, she has a tendency to introduce colour, sometimes favouring warm tones such as sodium vapour street lamps, and other times going for cooler tones from blue to teal.
CONCLUSION
Although her exact approach has shifted from film to film and director to director, there is a certain natural, dreamy, reactive, wandering quality to her photography which has carried through. Transforming the camera from a machine that captures light into a perspective and point of view of an observer in the room.
The Special Filter That Most Cinematographers Use
What are diffusion filters and why are they used so often on professional film sets by cinematographers?
INTRODUCTION
One of the great things about working as a camera assistant for a number of years was that it exposed me, if you’ll excuse the pun, to the camera gear that was selected by a number of international cinematographers.
In the past, these selections were often shielded by DPs, who were hesitant to share their exact recipe and formula for creating images.
One of the biggest takeaways from my time as an assistant was that many, if not almost all productions which were shot on digital cameras, used what are called diffusion filters.
WHAT ARE DIFFUSION FILTERS?
Diffusion filters are pieces of glass which can be put in front of the camera’s lens that provide a glowing halation to the highlights, soften the image, or reduce contrast.
These filters can either be placed in a tray inside a mattebox which is clipped onto the lens, or screwed directly onto the front filter thread of the lens.
When light hits a digital camera sensor with enough intensity to overexpose it, a number of pixels will turn to pure white, while the surrounding pixels that absorb less light won’t clip.
However, when a diffusion filter is added in front of the lens, the light will get scattered. This means that there will be more of a gradient between the pure white, overexposed pixels in the highlights and the correctly exposed pixels. This gives the bright highlights a glowing effect called halation.
Some secondary effects which these diffusion filters may have are that they add a layer of softness or blur over the image, reducing the lens’ sharpness. This decreases the harshness with which details are captured, meaning wrinkles, pores and imperfections on the skin will be softened.
Some diffusion filters are also used to reduce contrast. So instead of capturing punchy, high contrast images with bright highlights and deep shadows, some diffusion filters will decrease contrast, retaining and lifting the information in darker parts of the image and crushing the intensity of the brightest parts of the frame.
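If it helps to picture what's happening, here's a minimal Python sketch of that halation idea simulated digitally: isolate the near-clipped highlights, blur them outwards and layer the glow back over the frame. The threshold, radius and strength values are arbitrary - this is a generic bloom, not a model of any particular brand of filter.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def add_bloom(image, threshold=0.85, radius=15, strength=0.4):
    """image: greyscale float array with values in [0, 1]."""
    highlights = np.clip(image - threshold, 0.0, None)   # keep only near-clipped areas
    glow = gaussian_filter(highlights, sigma=radius)     # scatter them outwards
    return np.clip(image + strength * glow, 0.0, 1.0)    # add the glow back in

# A single hot pixel in a dark frame gains a soft halo around it.
frame = np.zeros((64, 64))
frame[32, 32] = 1.0
bloomed = add_bloom(frame)
```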
In the past, before the rise of glass softening filters in front of the lens, another old school technique which achieved a similar look was to place nets or stockings between the back of the lens and the film plane or sensor, so that they didn’t come into focus but still softened the frame. In old Hollywood this technique was particularly popular when photographing the close ups of lead actresses.
CLEAN VS DIFFUSED LOOK
Now that we know what some of these diffusion filters do, why is it that they are used so often by cinematographers?
In a previous video I’ve mentioned the idea of creating a look that is very clean versus a look that is more textured.
Although some filmmakers do indeed lean into a clean, sharp, digital look, more often than not, cinematographers like to take a little bit of the digital edge off to make images look a little more filmic, soft and textured. They do this by using a low strength diffusion filter that is not too heavy handed.
FILM VS. DIGITAL
There’s a bit of an interesting dichotomy to finding a middle ground between a clean look versus a textured look when it comes to working with film or digital. Film, by its very nature, possesses many of the qualities that diffusion filters provide. Film doesn't have a sharp clipping threshold like a digital sensor does, meaning the highlights are more muted with a natural halation effect.
It also resolves images less sharply than high resolution digital cameras, having a natural, subtle softness.
Therefore, to counteract the natural ‘softness’ of film, many DPs who shoot on film like to do so with the sharpest, modern lenses possible and without using diffusion filters. Whereas those who shoot digitally try to fight the ‘overly-sharp’ feeling of digital by choosing vintage lenses, shooting through diffusion filters, or doing both.
TYPES OF DIFFUSION FILTERS
Different diffusion filters come in different strengths of intensity: all the way from full 80s music video softening to a very subtle blooming of highlights.
Likewise, different kinds of diffusion filters do different things. I’d say that there are three main categories: halation and softening filters, contrast reduction filters, and atmospheric filters.
Halation and softening filters target the highlights and create a blooming effect around them, while at the same time softening the overall sharpness of the image. Probably the most popular example of this filter is the Black Pro-Mist from Tiffen, which comes in strengths all the way from the very subtle 1/8, which gently halates highlights, to 2 and above, which heavily blooms lights, washes out the contrast in the shadows and gives the entire frame a soft look.
Some other popular softening filters are Black Satins, Glimmer Glass, Hollywood Black Magic, Pearlescent and Classic Softs. These all have similar but subtly different ways of spreading the highlights and softening the image.
The second category are contrast reduction filters - which, as you would imagine, decrease the picture’s overall contrast by lifting the detail in the dark parts of the frame and reducing the thickness of darker hues, to produce a flatter look.
The Ultra Con and Low Con filters are probably the most popular varieties. These lift the shadows and wash out the blacks and dark areas so they aren’t as deep and heavy.
Finally, there are atmospheric filters, which give the illusion that there is smoke or haze in a space. This is a useful tool for sets where it’s not possible to bring a smoke machine, or for wide exteriors where adding haze is difficult or impossible.
It’s also been used on shows like The Marvelous Mrs. Maisel to increase and add to the thickness of the haze in the room.
Rather than focusing on only blooming the peaking highlights, like regular softening filters do, these atmospheric filters give a halation effect across the midtones and shadows in the entire frame, also lifting the detail in darker areas - just like smoke does.
Some examples of these atmospheric filters are Fog and Smoque filters from Tiffen.
DIGITAL DIFFUSION
Most DPs tend to err on the side of choosing low strength diffusion filters which have a subtle effect on the image, since this decision bakes the diffused look in and is not reversible in post.
If, later on in the grade, the amount of diffusion is not sufficient, it can always be accentuated and added to with software. Some filmmakers even favour adding all diffusion effects during post, rather than with filters, as it gives them complete control over the intensity of the effect that they want.
For example, Bruno Delbonnel changed his workflow from using physical diffusion filters in his early work, to shooting his later work as clean as possible, without effects filters, and adding a digital bloom effect in post that mimicked a softening filter.
Other DPs, like Erik Messerschmidt, have also mentioned their preference for shooting a clean image and then imposing a filtered effect during the colour grade, such as on The Killer where he used a Da Vinci Resolve plugin called Scatter to get a Pro-Mist like halation effect which he could carefully control to his liking.
WHEN NOT TO USE DIFFUSION
Although, on average, cinematographers like to shoot with a low strength halation and softening filter like a 1/8 Pro-Mist, 1/8 Glimmer Glass or a Black Satin 1, there are some occasions where they may want to remove it altogether.
This is often done when shooting in bright environments with a lot of reflected light - for example on a white, sandy beach or in the snow. Adding diffusion may push highlights into blowing out and therefore cause a loss of detail.
Another reason to remove it is flaring: when light shines directly through a filter and into the lens, it may flare in undesirable ways, or create a ghosting effect where a distracting outline of the filter itself is superimposed over the image.
Although the current trend is to pair digital cameras with a light diffusion filter, ultimately the decision comes down to the taste of the filmmaker and the story they are trying to tell - where some may be better suited to a hard, sharp, clean look, while others may benefit from a glossy, soft, blooming haze.
5 Trademark Christopher Nolan Camera Techniques
Let’s take a look at 5 camera techniques used by Christopher Nolan and break down how he pulls them off.
INTRODUCTION
Christopher Nolan is probably the most well known director working today. His movies use non-linear storytelling, practical effects, powerful, immersive music and impactful setpieces to tell subjective, human centred stories.
His filmography has been split by his work with two different cinematographers: his earlier films with Wally Pfister and his more recent movies with Hoyte Van Hoytema. Through these different collaborations, he, as a director, has carried a few visual techniques across most of his films.
Let’s take a look at 5 of those camera techniques today and break down how he pulls them off.
1 - ANAMORPHIC & IMAX
An important creative choice when making a movie is deciding what aspect ratio, or dimensions, it should be filmed in. The two most common ratios for cinema are 1.85:1, which is pretty close to the 16:9 ratio you’d see on YouTube, and 2.40:1, which is more of a widescreen image with black bars at the top and bottom of the frame. Other ratios, such as 1.33, 1.66 or 2:1, also exist.
Filmmakers will almost always decide on using one aspect ratio for the entirety of the movie.
One trademark of Christopher Nolan’s films is that he often cuts between two different aspect ratios - one taller, one wider - over the course of a movie, and sometimes even within the same scene.
The reason he does this is because he likes to use the native aspect ratios, without cropping, from the two different camera formats he shoots on. These formats are either the wider anamorphic format, or the taller Imax format.
The anamorphic format uses specially designed lenses which capture a horizontally squeezed image that can later be de-squeezed to arrive at a highly resolved widescreen look.
Imax, on the other hand, uses regular spherical lenses but captures images on gigantic frames of 65mm film that are 15 perforations wide - providing a taller aspect ratio and wider field of view at an unparalleled resolution.
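For a sense of where those two very different frame shapes come from, here's some back-of-the-envelope arithmetic using commonly quoted figures for each format - not measurements from any specific Nolan production.

```python
# 35mm anamorphic: a roughly 1.2:1 negative area shot through a 2x squeeze lens.
negative_ratio = 1.2
squeeze_factor = 2.0
print(round(negative_ratio * squeeze_factor, 2))   # ~2.4:1 widescreen once de-squeezed

# 15-perf 65mm Imax: the native frame is roughly 1.43:1 - much taller than 2.40.
imax_ratio = 1.43
print(imax_ratio)
```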
“Our film tries to take you into his experience and Imax, for me, is a portal into a level of immersion that you can’t get from other formats.” - Christopher Nolan
Throughout his career Nolan has favoured wider anamorphic capture, with its oval bokeh, distortion and falloff on the edges of the frame, for capturing more traditional dialogue sequences.
Then he switched to Imax cameras to capture setpieces without dialogue, such as chase sequences, stunts, or aerial establishing shots.
Although Imax was designed to capture vistas and expansive, wide spaces with impeccable resolution, Nolan has also subverted this expectation in his recent work by also using this large format to capture intimate close ups and personal moments: trying to convert nuances in performance into a cinematic spectacle.
2 - ROLLED CAMERA
There are three different ways, or axes, to position and move the camera: pan, tilt and roll. A pan - which moves the camera from side to side - and a tilt - which moves the camera up and down - are both very common and can be done with a regular tripod.
The third axis of movement, roll, is however much more unusual and infrequently used.
Usually shots are framed with a level horizon; however, sometimes filmmakers decide to rotate the camera on its roll axis. Many times this is done with a 3-axis remote head - a tool that holds the camera and is controlled wirelessly by an operator, who can roll the camera over and position it on its side.
This remote head can also be attached to a crane or technocrane, if the shot needs to push forward or move around within a space.
Nolan used this remote head and Technocrane setup on Inception, but went a step further by rolling the dream world of the film over on itself. This was done by constructing a set in a soundstage which could be rotated.
3 - ARM CARS & HARD MOUNTS
An aspect of his blockbuster filmmaking that Nolan is well known for is his use of vehicles in big chase or action set pieces.
When filming these sequences he’ll mainly stick to using two camera techniques. Firstly, he’ll lock off the camera in hard mounted shots attached to the vehicle. Or, secondly, he’ll use an arm car or some kind of tracking vehicle to get shots that move on the road with the picture car.
“What I wanted to do was really explore the experience of watching an action film. Try and build this big screen, very immersive experience and find a reason for an audience to watch a car chase again.” - Christopher Nolan
Hard mounts are achieved by rigging the camera directly onto the picture vehicle, whether that be a plane, a car, or a spaceship. This maintains the same frame on a character, letting us view their reactions and get inside their head while the background flies past. These shots can feel quite immersive and real, since, well, they are real.
Audiences are placed directly in the cockpit or driver’s seat and feel all the little, realistic vibrations and reflections as the vehicle moves. These little nuances are part of why Nolan pushes to shoot these stunts practically, rather than using visual effects.
To get wider shots outside which establish the vehicle and action within the world and give a visceral speed to shots, he often uses an arm car. This is a specially equipped, fast driving vehicle, which has a crane arm mounted on its roof to which a remote head is attached with the camera.
Operating this shot requires a few key crew members: one, a stunt driver who drives the car, two, a technician who moves the position of the arm, three, the DP or operator who pans, tilts or rolls the camera's position on the remote head to get the right frame, and four, the 1st AC who rolls the camera, wirelessly adjusts camera settings and controls where the focus is.
4 - KODAK FILM
Unlike most productions nowadays that opt to shoot on digital cinema cameras, Christopher Nolan has a deep love for shooting on film: whether that be 35mm or large format 65mm stock.
“Film, I think, is uniquely suited to pulling an audience into a subjective experience. Film gives you a depth to the image that I find inherently more emotionally powerful and more accessible.” - Christopher Nolan
He favours the way that the emulsion captures colour, or in the case of black and white, how it captures monochromatic hues. He’s used this on movies like Memento and Oppenheimer as a tool to delineate between the different timelines as the movies weave around their nonlinear narratives.
This black and white work has always been captured in different gauges of Eastman Double-X, from 16mm on Following, to 35mm on Memento, and even getting Kodak to specially upsize the film to 65mm for Oppenheimer.
The rest of his colour work he’s captured on Kodak Vision stocks, mainly using 50D or 250D to capture exterior scenes in natural sunlight, and 500T for darker interiors or night scenes.
Although the grain is a large part of the film look, because he normally captures in either anamorphic or Imax (which both have very low visible grain), his movies tend to have a fairly clean look, with the exception being his debut feature which he shot on the more inexpensive but grainier 16mm format.
5 - HANDHELD
A camera technique that Nolan has used in most of his work is handheld. It may have several different practical or emotional purposes depending on the context it's used in, but I’d argue that one of its key uses is as a tool to tell the story from a particular point of view.
“I’m really interested in cinema’s ability to give you different points of view and multiple points of view within a single film. I’ve always really been fascinated by that relationship between the storytelling in movies and how it works and the way it aligns you with different characters.” - Christopher Nolan
Shooting a far off aerial landscape presents the frame and establishes the space from a more detached, objective point of view. Whereas, a subtle handheld camera, shooting over a character’s shoulder, places the audience subjectively right into the shoes of the character.
Even with all the high tech toys he has available, Nolan often decides to shoot handheld in this way: moving in the steps of characters as they do, shooting over their shoulder, or framing up singles on characters with a subtle, organic looseness that aligns the audience with their point of view or places them in the same visceral moment that the character themself is experiencing.
10 Tools & Consumables That Every Filmmaker Needs
What 10 consumables and tools are found on every professional film set?
INTRODUCTION
To capture images, filmmakers not only need a bunch of camera, grip and lighting gear, but also all the bits and bobs that assemble and hold everything together. That’s why today I’m going to go over 10 commonly used tools and consumables which are present on almost every professional film set.
TOOLS VS. CONSUMABLES
Quickly before we get going, we need to understand the difference between tools and consumables and who usually brings them to set.
Very simply, tools can be used over an extended period of time to assist in assembling gear or to aid the filmmaking process. Consumables serve much the same purpose, but, as their name suggests they are perishable and need to be restocked or replaced over time as they are used up.
On an industry film set, technicians will usually own and carry a kit with their own tools in a unit bag, which helps make their lives easier. For example, a 1st AC may carry items like allen keys, wrenches, magic arms, spigots, cine locks, and screwdrivers - which they will use to assemble and build the camera.
On long form jobs like movies or series, consumables are usually requested by technicians and purchased by production. Whereas on short form shoots, like music videos or commercials, these items will typically be supplied by a technician who purchases them themselves and rents them out to the production at a daily rate.
For example, a 2nd AC will bring along a unit bag complete with items such as different colours of gaffer tape, dust off, or velcro cable ties.
1 - GAFFER TAPE
Gaffer tape, quite literally, holds shoots together.
This adhesive tape comes in a few variations: the most common, all purpose option being the wider 2 inch black, or sometimes white, gaffer tape. This high tensile, matte tape sticks to almost any dry surface and can be used for a number of applications by the camera, grips and lighting departments. Whether that’s sealing a weather bag on a camera, taping down a power cable, performing a quick fix on technical equipment, or covering a stand to stop it reflecting light.
Then, there is the narrower 1 inch gaffer tape, which comes in a variety of colours. This is usually used by the 2nd AC in the camera department to mark media or film magazines with a roll number, to apply to a slate where permanent text can be written, such as the production company or title of the movie, to give actors an eyeline mark to indicate where they need to look, to create tracking marks that can be used for visual effects, or to mark the position an actor needs to stand in.
When marking actors on a long form job, a different colour is used for each character, so that actors can easily identify their colour and where they need to stand. The first lead is usually given a red mark, the second lead a blue mark, and so it goes. Likewise red gaffer tape is usually used on the A camera, and blue tape used for the B camera.
Other types of tape also exist, such as paper tape, which is thinner, lighter and may leave less of an adhesive mark on surfaces. Black paper tape is used by camera assistants to flag or block light from hitting and flaring a lens by attaching it to a matte box. It’s used because this tape is black on both sides, unlike gaffer tape, whose white adhesive side may, undesirably, reflect more light.
2 - T-BAR
One of the most used tools in an AC’s kit is a T-Bar. This is a flat screwdriver head, which is attached to a T-shaped grip. It’s used by the camera department for any assembling or disassembling of equipment parts.
For example, screwing a baseplate into the bottom sliding plate of a camera. This baseplate can then get attached to a tripod head.
The reason this T-shaped grip is normally favoured over a normal screwdriver is that it allows ACs to turn screws with more torque, giving them more leverage to really tighten screws, or to loosen difficult to turn ones.
The security of this connection is especially important for certain rigs, such as on an arm car, where lots of violent vibrations may loosen the connection over time if the screws aren’t secured tightly.
3 - MULTI-TOOL
Most grips, sparks or ACs will carry around some form of multi-tool on their gear belt.
The advantage of a multi-tool is that it’s easy to access and fast - and on a film set saving time with technical fixes is the name of the game.
There are many things that these tools can be used for: from having quickly accessible star and flat screwdriver heads for adjusting screws on the camera, to using the knife for cutting lighting gels, or using the pliers to loosen a stubborn, overly tightened piece of gear.
4 - BONGO TIES
Bongo ties are part fashion accessory for camera assistants to wear around their wrists and part consumable. These all purpose ties are made of rubber which can wrap around something, with a bamboo button end that the rubber latches onto.
This makes it easy to tie and release things, such as binding cables together for better management, wrapping a waterproof cover around a mattebox, or tethering a video cable to a dolly.
These ties usually last quite a while until the rubber eventually snaps, or until, more often than not, they end up in someone else's kit at the end of the day.
5 - DUST OFF
Cleaning filters or lenses is an important procedure which needs to be done with special care - since this glass is very pricey and can’t afford to be scratched.
The first step in this process, before you apply any lens cleaner or wipe it, is to blow pressurised air at the lens to remove any specks of dust or dirt which may cause scratches if wiped with a cloth.
Dust-off is also used by the camera department for cleaning little sand particles which get lodged in difficult to reach crevices on the camera. This dust collects when shooting in an exterior location such as a farm or a beach. Pressurised air can also be used to spray off water droplets that collect on the filter or lens when shooting in the rain.
Dust-off comes in a can, and a burst of pressurised air is released by pushing down on a nozzle. Cans come in two varieties: one with a fixed valve, and another, usually more popular in the film industry, with a removable nozzle which can be re-used across multiple cans of compressed air.
6 - LENS CLEANER
Once the lens or filter is clear of dust particles the next step is to wipe away any smudges. To do this, filmmakers use a lens fluid and a lens cloth. The most common lens fluid is called Pancro, which is basically a rubbing alcohol that you can spray on to remove smudges, grease or dirt from glass.
Once this is applied you then need to clean it with a lens cloth. This could be something like a selvyt or a microfibre cloth - which can be washed and re-used after cleaning. Some also like to use lens tissue or cleaning paper, which is a consumable that can only be used once.
7 - ALLEN KEYS
Along with the T-bar, another common tool used by the camera department for building and breaking down cameras are allen keys. Many camera cages and components have hexagonal screws in mounting points that need to be locked or unlocked with allen keys. For example, when derigging the cage on an Alexa Mini and attaching a top and bottom sliding plate for a gimbal.
Camera assistants will carry around a full set of these tools in case they run into any odd sized screws; however, the two most common, standardised hex sizes found on camera components are the 3/16 inch and 5/32 inch variations.
Many assistants like to carry T handle versions of these 2 sizes as they are quicker and easier to work with and offer more torque to release tightly secured screws.
8 - DUAL LOCK
To make camera builds nice and compact, assistants will try to attach accessories as close to the body as possible - which may mean doing away with too many bulky magic arms.
A solution to sticking smaller objects, such as a tiny timecode sync tentacle, directly to the camera’s cage or body is to use dual lock.
This has one sticky side which can strongly bind to a surface, such as a camera cage or a tentacle, and another side with a sort of grooved, interlocking velcro. When these two interlocking sides are pushed together they provide durable and reliable fastening - while also being able to release or connect together multiple times.
9 - CROC CLIPS
If gaffer tape is the most used consumable on set, probably the second most used, at least by the lighting department, is croc clips. Croc, or crocodile, clips are metallic pegs which are primarily designed to attach lighting gels to barn doors - without melting or slipping, even when exposed to the high temperatures created by these large lights.
Most sparks will also carry around a few of these croc clips on their person for any minor rigging tweaks to gels or textiles, like clamping a wrapped duvetyne around a stand or clipping diffusion gel onto a window frame.
The camera department may also carry some of these useful clips and use them for any clamping that needs to be done, such as clipping a black and white protective cover over the camera when it’s not in use.
10 - RAIN COVER
Although some shoots may take place indoors, entirely under the safety of a roof, for other shoots that involve shooting outdoors it’s good practice to have some kind of rain bag at the ready just in case the heavens decide to open up.
This could take the form of a reusable backpack cover, which can be clipped over the camera to protect it while shooting in a very light drizzle. Or, if there is heavier rain, a plastic rain bag will be used to protect the camera.
This translucent plastic bag is a consumable, which will be placed over the camera, with the opening facing the ground. The team will then tightly secure the bag around the mattebox, usually with one of those bongo ties that we mentioned, and then cut a hole so that only the lens opening side of the mattebox is exposed to the elements.
This will keep the camera nice and dry, and the electronics safe from harm.
How Cinematographers Spend Their Gear Budget
In this video let me invite you behind the scenes and guide you through how exactly DPs select, rent gear, and allocate the camera budget based on my own experience as a cinematographer.
INTRODUCTION
Cinematographers are hired partly for their artistic capabilities, and partly for their technical and management skills. It’s fun to talk about creating beautiful images, but what’s less discussed is the logistical and financial processes that go unseen by audiences.
So, in this video let me invite you behind the scenes and guide you through how exactly DPs select, rent gear, and allocate the camera budget based on my own experience as a working cinematographer.
EQUIPMENT RENTAL
The gear that you see on film sets is more often than not rented for each production, rather than purchased, by whatever production company is organising the shoot.
This gear will need to be sourced for rental from a gear owner, who is typically either a rental house, such as Arri Media, Panavision, or a smaller, locally owned rental business, or is rented directly through a crew member who owns gear, such as the focus puller, the gaffer, or even the DP.
GEAR LIST
What technical gear is required for each shoot is decided by the cinematographer. It’s their job to liaise with the line producer, production manager, or responsible person working for the production company, and often with the rental company too, to come up with a list of equipment that will be used to shoot the production based on the budget.
In order for this financial negotiation to take place, the first step is for the DP to create a list of all the technical equipment they will require, which is called a ‘gear list’.
This list will be broken down into three sections based on the three technical departments which are run by the DP: a camera gear list, a lighting gear list and a grips gear list.
The equipment for camera will be run and managed by the 1st AC or focus puller, and is mainly made up of the camera itself, the lenses and all the necessary accessories needed for running the camera package, such as the follow focus.
The lighting gear list, which is managed by the gaffer and their team, will include all the lamps, textiles, and, in the British system, the stands needed to set up and rig these lights. However, in the US system, any stands and rigging gear will usually fall under grips.
Any gear needed to move or position the camera, such as a dolly, slider, crane or apple boxes, will fall into the grips gear list and will be managed by the key grip.
This list of gear will be written down by the DP, usually in the form of a spreadsheet or document, and sent on to the production team.
As I’ve mentioned many times in my Cinematography Style series, the exact gear that is selected will be determined by a number of factors, most notably: what kind of shots need to be executed, the look that the filmmakers want to create, the budget that they have to hire gear, and how many crew will be on set.
This crew consideration is especially important. If it’s a documentary shoot where you’re working by yourself then you’ll need gear that is easy to set up and use as a solo operator.
Or if it’s a low budget feature with one gaffer and one spark, then you can’t rent a million lights as they’ll take too long to set up.
The challenge of assembling a gear list is to select only the gear that you know you’ll be able to realistically use without carrying a lot of dead weight around, while at the same time having enough equipment to be able to adapt to most shot requests that the director may make.
WRITING A GEAR LIST
Everyone works differently, but my usual process is as follows. After meeting with the director, reading the script, or taking in their visual treatment, I’ll assemble a rough gear list which estimates the gear package.
Often this selection will be based on similar past gear lists I’ve made before. For long form projects I may also go into a rental house and do camera, lens and filter tests to find a look I’m happy with.
As pre-production commences, a recce will be organised. This is where the director, DP and the HODs will visit the locations which will be used. At this stage, new ideas for shots may emerge, and you’ll be able to see more clearly the spaces that you will need to light and the camera moves you’ll need to do.
The gaffer and key grip who attend this recce, will then also be able to provide gear recommendations for the kinds of technical setups you are imagining.
Additional equipment may be added to the rough gear list at this stage, for example getting blackout to shoot certain spaces day for night, or adding items like a ladderpod if the directors decide they need a high perspective shot.
Some cinematographers' lists may be more prescriptive and detailed than others. For example, many DP’s lighting gear lists may just note the lamps and textiles that they want to use, then leave the addition of the accessories, such as the stands, power generation, distribution packages, sandbags and other details about rigging, or selection of the brand of the lamp, for the gaffer to determine and organise.
Others may like to be very specific about exactly what gear they want, down to what brand of ND filter they require, or the exact make of an on-board monitor they want to use.
After the recce, DPs will want to start finalising their gear list, and send this through to production who will begin the financial calculations and negotiations.
DISCOUNTS
The rule of thumb when it comes to gear rental where I’m from is that each item of gear comes with a set daily rental cost - however, this day rate is rarely the final amount which is paid by the production company.
Gear rental is usually quite negotiable. Quotes will usually display the full list cost of each item of gear, then apply a discount percentage to reduce the amount.
I can’t speak to other regions but where I work, I’d say an average discount rate for short form or commercial gear rental is about 40%, whereas long form or feature discounts are usually higher at around 60%.
Although depending on the length and scale of the job, these discounts may vary.
Most rented items will have their own, individual rental rate, even down to clamps or c-stands, but some items will come as a kit, such as a camera package, which may include things like recording media, a cage, and a dovetail plate.
Some rental companies also only offer lenses to be rented as part of a full kit and prohibit breaking the set up into individual focal lengths.
For long form jobs that operate in terms of the number of shoot weeks, as opposed to shoot days, discounts are sometimes applied in terms of how many days per week will be billed for.
A standard deal would be to bill for the gear for 4 day weeks. In other words, for each week of shooting, each item will only be invoiced for 4 shoot days, even if the production is shooting 6 days per week.
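To make those two mechanisms concrete, here is a rough Python sketch of how such a weekly quote might be calculated - the day rate, discount and billing rule below are made-up example figures, not real rental prices.

def weekly_quote(day_rate, discount, billed_days_per_week=4):
    # Cost of one item for one shoot week, billed on a discounted
    # '4-day week' even if the production shoots 6 days.
    return day_rate * billed_days_per_week * (1 - discount)

# e.g. an item listed at 500 per day with a 60% long form discount:
print(weekly_quote(500, 0.60))   # 500 * 4 * 0.4 = 800.0 per week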
PACKAGE DEAL
Another approach to sourcing gear is for the producers to try and get what’s called a ‘package deal’.
In this case, the producer will approach a gear rental company with the list that has been supplied by the DP, and provide them with the line item for gear on their budget: so, the camera, grips, and lighting gear that they have budgeted for.
Basically, they will offer to pay their full gear budget to the rental company and ask for the listed gear in exchange, in the hope that the rental company will provide the necessary discount to arrive at their budget.
If the required discount is too steep, this may involve discussions with the DP about dropping or replacing some items of gear, until a deal is found that works for both sides.
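Put another way, the discount a producer is effectively asking the rental house for in a package deal is just the gap between the listed total and the budget line. A quick hypothetical sketch, with invented numbers:

def required_discount(list_total_per_week, gear_budget_per_week):
    # Discount the rental house would need to grant for the full gear
    # list to land exactly on the production's budget.
    return 1 - gear_budget_per_week / list_total_per_week

# e.g. a gear list totalling 20,000 per week at full rates,
# against a gear budget line of 9,000 per week:
print(f"{required_discount(20000, 9000):.0%}")   # 55%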
RENTAL HOUSES VS CREW RENTAL
If the gear is rented from crew members instead of a rental company, the crew will be asked by production to provide said gear at the same discounted, matched rate that the rental house would’ve rented the gear at.
BUDGET CONSIDERATIONS
On top of this, there are some other financial considerations to take into account when renting gear. One is that insurance will need to be paid, so that if the equipment is damaged or stolen during the shoot, the insurance pays out to cover what may otherwise be a very expensive replacement.
Sometimes, gear rental companies will even kindly offer gear for free for passion projects without much of a budget. In these cases the only thing they will require is that the insurance fee for the gear is paid.
It’s also important to consider the downstream implications of your camera selection. It’s easy to want to shoot on an Alexa 35, but will your DIT be able to handle the speed of data transfers that this requires? How long will it take to transcode this footage? Is the production able to afford the hard drive space to store the media?
If you’re working with a lower budget in post, it’s important to think about this before committing to a camera that will be impractical or unfeasible down the line.
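One way to sanity check those questions is with some back-of-the-envelope maths. The sketch below uses placeholder figures only - you would swap in the real data rate for your chosen camera, codec and resolution.

def storage_tb(gb_per_hour, hours_per_day, shoot_days, copies=2):
    # Total storage in terabytes, including backup copies.
    return gb_per_hour * hours_per_day * shoot_days * copies / 1000

# e.g. assuming roughly 350 GB per recorded hour, 3 hours of footage
# per day over 20 shoot days, kept as an original plus one backup:
print(storage_tb(350, 3, 20))   # 42.0 TB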
GEAR AVAILABILITY
Another factor when sourcing gear is its availability. Not all rental houses will have every piece of gear under the sun. For example, Arri Media won’t be able to provide Panavision lenses, which are solely rented out by Panavision and not sold.
One solution is, of course, to source gear from multiple vendors; however, producers will often try to rent the entire gear package from a single rental company - as bigger discounts are possible when rental houses are approached with a full budget, rather than part of one.
CONSUMABLES
Consumables are another line item in the budget relating to gear. These are perishable items which are needed on the set by technical crew: such as gaffer tape, dust off, or croc clips.
For long form jobs, each department will be given a budget for consumables, and sometimes, on short form jobs, crew members, such as the 2nd AC, will purchase their own consumables kit and rent it out to production for a daily rate.
CONCLUSION
Having a relationship with rental companies is an important part of being a cinematographer. It’s a reciprocal bond, where if you’re able to bring them work for some of the bigger jobs, they will likely be more inclined to grant discounts or give out free gear when your next passion project comes around.
Cinematography Style: Rob Hardy
In this episode let's take a look at the cinematography of Rob Hardy.
INTRODUCTION
Rather than dissecting a movie into its various parts, Rob Hardy’s approach to cinematography is to craft an environment on set for the actors which gets as close as possible to the world of the film being shot, through both his selection and application of technical gear and his broader philosophy.
Let’s unpack this a bit further by looking at the work of Rob Hardy in this edition of cinematography style.
PHILOSOPHY
One of the ideas at the core of Rob Hardy’s cinematography is to translate the psychological feelings and thoughts of the characters onto the screen using images.
An example of this can be found in his early feature work on Boy A where Hardy and the director decided to play out a scene in a single take with a Steadicam rather than getting traditional dialogue coverage of it. For him, it made sense for the camera to slowly move around the action, experiencing the moment in real time, in a way which represented how the protagonist's own mind was circling the scenario, surrounded by characters, as he tried to prevent his past from catching up to him.
Or, in Shadow Dancer, where they decided that the camera would take on the perspective of the central protagonist, representing how much pressure she was under through either the proximity or detachment of the camera.
Or, how the camera in Civil War takes on an on-the-ground perspective that moves with characters when shooting action scenes - in a photojournalistic style reminiscent of how the characters themselves are experiencing the moment.
These are just some examples of him using the camera to translate a psychological moment, feeling or idea onto the screen.
As he likes taking the story and the underlying emotions in the screenplay into account first when coming up with a visual approach, Hardy, unlike many other DPs, doesn't like to use other movies as references for the look.
When working with a frequent collaborator such as director Alex Garland, conversation around the story will be the basis for the look and they’ll usually avoid visual references altogether.
Or, in cases where he does want to communicate a visual reference, such as for the Grand Palais sequence in Mission: Impossible, he’ll instead draw from the world of art, in this case Olafur Eliasson’s gallery installation piece ‘The Weather Project’.
An important aspect of filmmaking which isn’t necessarily seen on screen is creating an environment on set which suits the kind of work that is being done. Whether that’s homing in on a focused energy from the crew for a technical long take, a more lighthearted, jovial mood for a comedy, or a more serious atmosphere for dramatic scripts. The biggest contributing factor in curating this energy around the camera is choosing the HODs of the various departments, such as the gaffer or key grip, and the team that they bring onto the set.
“It’s easy for someone to walk onto set and do something at absolutely the wrong time. Having a sense of awareness is very, very important. Everybody learns that through doing jobs together and the more jobs you do together obviously the tighter of a team you become. So, I’ll always push to work with specific people.”
Creating an environment that is conducive to the kind of energy in the story applies not only to crew, but also to creating the right feeling with the set design and, for the DP, the lighting.
Rather than taking a more old school approach of breaking scenes into the specific shots that will be covered, then giving actors specific marks and lighting those exact points that they will land on with a precise kind of light, Hardy likes to instead light the entirety of the space that the scene will take place in.
This gives the actors the freedom and room to move wherever best suits their performance in that space. It also makes it easy for the filmmakers to make adjustments or implement new ideas on the fly without having to spend lots of time re-lighting specific marks. If you go in with the philosophy of lighting the entire space it makes it much easier to quickly reposition the camera anywhere within that environment.
“If you light a room, if you light a space, and you give it a feeling, that when the actors walk into that space they’re like, ‘Oh my God.’ It exists, you know. It doesn’t exist as a series of shots or an abstract sequence, it exists as a space. It’s a playground then for the director and the actors to really exploit that space.”
GEAR
One continuous thread across his lighting is his persistent use of tungsten lights. These are a bit old school and generally aren’t used today as often by cinematographers as they create a lot of heat, aren’t bi-colour or RGB, change to a warmer temperature when dimmed and require large amounts of power to run. However, they are favoured for their high quality output of warm 3,200K light.
A popular choice for a high output fixture these days is an HMI, while various LEDs are often used for fill, practicals or a soft, ambient lift.
However, Hardy likes using tungsten fixtures instead, such as: a massive 24K, Wendy lights, Dinos, or Par Cans.
Most DPs light studio sets these days with overhead LED panels and LED practicals which are embedded directly into the set. On movies like Ex Machina he instead decided to use a huge amount of tungsten bulbs as practicals, 15,000 in fact, which were built into floors, ceilings and walls behind a diffused perspex.
When these 3,200K lights are shot with a 5,600K colour balance on the camera, or a daylight film stock, they will have a beautiful warmth. Because there are so many bulbs, which are spread over a larger surface area and hidden behind diffusion, rather than lighting from a direct, single source, the light has a beautiful, soft, diffused quality to it.
If a cooler look is desired from these warm tungsten lights, he would balance the colour temperature in camera, either changing it to record at a neutral base of 3,200K, or at an in-between base such as 4,300K, which keeps just a touch of that warmth.
Or, he sometimes balances the lights themselves, cooling them down to a daylight colour temperature by adding a blue CTB gel in front of the light.
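Gel manufacturers describe that conversion with a 'mired shift' value, where mireds are simply one million divided by the Kelvin temperature. A rough sketch of that arithmetic - the -131 figure for full CTB is approximate and varies a little between brands:

def gelled_kelvin(source_kelvin, mired_shift):
    # Mireds = 1,000,000 / Kelvin; gels are rated by how many mireds they add.
    mired = 1_000_000 / source_kelvin + mired_shift
    return 1_000_000 / mired

# A 3,200K tungsten lamp with a full CTB gel (roughly -131 mired):
print(round(gelled_kelvin(3200, -131)))   # ~5510K, close to daylight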
Even his recently released studio work, such as Devs, uses the same idea of lighting with loads of tungsten fixtures. In this case, a combination of ceiling and floor mounted tungsten bulbs, such as nine lights and par cans, were reflected off the textured, gold tiled set walls.
These were run through a board that could dim various lights up and down to create shimmering, golden reflections.
After only shooting on film for the first half of his career, he finally had to make the move into working with a digital cinema camera in 2015 on Ex Machina. Instead of going with far and away the most popular high end camera manufacturer of the time, Arri, he made the unusual call to shoot on the Sony F65.
Hardy often talks about wanting the camera to see and represent what he sees with his own eye, and, for him, the F65’s true reproduction of colour stuck out as the best way of capturing the movie.
He continued shooting on Sony cameras, using the larger F65 for studio mode shots and the lighter F55 for Steadicam or shots which needed a lighter build. He later replaced this with the newer Sony Venice upon its release.
He has also paired these Sony cameras with other cameras which are always chosen for a specific purpose. Whether that be shooting at higher resolutions with the Red Weapon Dragon, or handing out the lightweight DJI Ronin 4D, with its built-in 4-axis gimbal, to loads of on the ground operators to get stabilised, documentary style coverage of action scenes.
He often pairs his camera selection with older anamorphic lenses - which have optical imperfections and a bit more character than super sharp, optically perfect, modern lenses.
Sometimes those are Panavision anamorphics, such as the C-Series or E-Series, and other times, such as on Ex Machina, he chose the vintage Cooke Xtal Express anamorphics - which have lots of distortion and falloff towards the edges of the frame.
“We shot the film anamorphically and we settled on these old Cooke Xtal Express lenses. They’re basically like old, rehoused Cooke Speed Panchro spherical lenses. So, the reason I picked them was that it became about personality. If you choose a 32mm for a certain scene, it’s not because you want to go wider, it’s because you want that feeling of this 32mm which is off the scale in terms of its personality.”
He favours wide to mid focal length anamorphics - most often using a 35mm, 40mm or 50mm.
When shooting digitally, like most DPs, he likes using a light diffusion filter - like this BTS photo which shows him using a 1/8th Black Pro Mist filter. This filter takes the edge off digital images, blooms highlights in a subtle, filmic way and washes out the mid tones a bit.
CONCLUSION
Whether it’s his method of lighting the entire space of the set, curating the crew that is there, or selecting the camera package and LUT which will provide colour that is true to his eye - his method is always about getting the set to feel as close to the world of the film as possible.
From there it’s then a matter of bottling that feeling, so that what is captured by the camera mimics the tone of the story and impacts the audience in a real way.
Trick For Shooting Interviews With Only 1 Light
Let’s take a look at warm and cool colour interview lighting and show how you can, very simply, create colour contrast by using only 1 LED film light.
INTRODUCTION
A cinematographer’s biggest enemy is images which look flat. The best way to counteract this is by introducing contrast: which can either be done by using different areas of light and shadow, or by using different colours.
So, let’s take a look at a specific type of colour contrast in an interview setting: warm and cool tones, and show how you can very simply create images like this by using only a single film light.
COLOUR TEMPERATURE
When an object is heated to a high enough temperature, it emits light. This light can fall across a range of different colours - and is commonly measured using the Kelvin scale.
A lower Kelvin value represents light sources which are considered warmer: for example an incandescent light bulb. Whereas cooler or more neutral sources of light, like sunlight, will have a higher Kelvin value.
Although there are a whole host of different light sources that can be used which come with a diverse range of cooler and warmer colour temperatures, I’d say there are 2 main Kelvin values which are most often used by filmmakers.
The most popular warm light source is what we call ‘tungsten’ - which has a value around 3,200K - the value that old tungsten filament incandescent bulbs give off.
The other popular cooler light source is ‘daylight’ - at 5,600K it’s the approximate temperature of natural sunlight.
Before LED lights, which are now capable of easily changing to a range of different colour temperatures, most film lights either emitted tungsten light or daylight. Whether that was the warmer incandescent bulbs, fresnels, par cans or dinos, or the cooler daylight kino flo bulbs, HMIs, or even the sun itself.
Because of this, film stocks were designed to capture light at either a 3,200K tungsten balance, or a 5,600K daylight balance. These same Kelvin balances can be found on digital cameras, which also give you the option to dial in a custom Kelvin value anywhere in between.
However, many cinematographers still stick to filming with the camera colour temperature set to either 3,200K or 5,600K.
WHITE BALANCE
Let’s take a look at what happens when you shoot cameras in different lighting conditions at these two different Kelvin bases.
If we set the colour temperature on a camera to 5,600K and we light the scene with natural sunlight that also has a Kelvin value of 5,600K, then the image will have true, natural colour.
However, if we keep the camera at 5,600K and instead light with a 3,200K source, then the image will appear warm.
But, if we keep this warm light and change the colour balance on the camera to 3,200K then, again, the colour will be captured with a true and natural look.
However, if we take that same camera set to 3,200K back outside under natural sunlight, then the image will be cool.
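A minimal sketch of that rule of thumb, purely for illustration - it only captures which side of the camera’s white balance the light source falls on:

def rendered_tint(source_kelvin, camera_wb_kelvin, tolerance=200):
    # A source warmer (lower Kelvin) than the camera's white balance reads warm,
    # a cooler (higher Kelvin) source reads cool, and a close match reads neutral.
    if abs(source_kelvin - camera_wb_kelvin) <= tolerance:
        return "neutral"
    return "warm" if source_kelvin < camera_wb_kelvin else "cool"

print(rendered_tint(5600, 5600))   # neutral: daylight on a 5,600K camera
print(rendered_tint(3200, 5600))   # warm: tungsten on a 5,600K camera
print(rendered_tint(5600, 3200))   # cool: daylight on a 3,200K camera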
COLOUR CONTRAST
If we circle back to the beginning, I mentioned how cinematographers often make footage more interesting by lighting with different colour temperatures in the same frame - to create contrast.
If those different colours are placed at different planes of depth, it can also make images feel a bit more three dimensional.
An easy way to apply this idea is to light the subject in the foreground with light that has a different colour temperature to the light that is in the background.
For example, lighting the subject with a lamp that is a warmer 3,200K, and lighting the background with a cooler 5,600K light.
One of the reasons that this is quite visually pleasing is due to complementary colour theory.
If you look at a colour wheel, complementary colours are those which are opposite each other. Because of this contrast they are often viewed as being visually pleasing when placed next to each other. So a green and a red are complementary, or, for the sake of this video, a warmer, orange colour and a cooler, blue colour are complementary.
TUNGSTEN INTERVIEW SETUP
For this first interview setup we’re going to set the colour balance on our camera to 3,200K in the menu. Then we’re going to place our subject in a room, and turn off the house lights, so that she is only illuminated by the natural ambient sunlight coming through the windows.
Because the camera is set to 3,200K and the light source is about 5,600K the image will have a cool colour tint to it. So, even though we haven’t set up any film lights yet, we already have a nice monochromatic blue that we can use - compliments of the sun.
In order to get a natural looking skin tone I want to set up our light to have the same colour temperature as the colour balance on the camera - so 3,200K. To soften the light quality of the 500C I’m going to attach a softbox to the front of it, then to make those shadows even more gentle and pretty I’m going to shine the light through a second layer of silk diffusion which I’ve rigged to a stand.
The issue now is that, although the key light on the subject looks nice and soft and natural, our film light is also hitting the wall in the background of our shot. If you’re going for more of a high key look then this may work, but since I want to create a bit more colour contrast it’s an issue, since the 3,200K film light overpowers the natural 5,600K sunlight in the background.
I’d also like for the background to feel a little bit darker to create more of a light and dark contrast between the subject and background.
I’m going to use a trick called flagging the light. All this means is that I want to stop the light from hitting the back wall, yet still have it hit the subject. To do this I’ve very simply rigged some blackout on one side - so that it blocks the light from reaching the background.
So there we have it, a simple interview shot with the camera set to 3,200K, the key light also set to 3,200K and letting the 5,600K natural ambience from the sun fill in the background.
DAYLIGHT INTERVIEW SETUP
I’m going to shoot our second interview setup at night, using the same idea of colour contrast, but just flipping things around a bit. Here, I’ve set the colour balance on the camera to 5,600K.
Technically I’m actually going to use more than one light here, but these lights are not film lights. They are what we call practicals - lights built into the set which can be seen in the shot. For example a lamp and an incandescent ceiling house light.
To my eye these pracs actually look a bit warmer than 3,200K, but they’re somewhere in that ballpark. Since the camera is set to 5,600K and these are around 3,200K they will read much warmer than neutral light.
Then, like I did in the previous setup, I want the key light to be the same as the camera’s colour balance - 5,600K - with our practicals in the background somewhere around 3,200K.
Another reason for placing a secondary light source, like a practical or natural sunlight, in the background of the shot, behind the subject, besides creating colour contrast, is that it gives a subtle backlight to the subject, which helps create a bit more separation and depth in the frame.
As I did last time I’ve also flagged the key light off the back wall to make the background a bit darker and better emphasise the colour contrast.
There we go. Two different, simple setups that both use one film lamp and play with the idea of colour temperature to create frames with a bit more depth and contrast.
The Art Of The Long Take
In this video let’s take a look at what makes long takes effective, why filming these shots can be challenging, and identify the various ways and pieces of equipment which can be used to pull them off.
INTRODUCTION
If each cut in a movie can be compared to taking a breath, then doing a long take is like closing your mouth, pinching your nose and seeing how long you can go on one lungful of air.
On average, editors and directors will cut to a new shot around every 3 seconds. However, sometimes filmmakers prolong this by shooting and holding on a single shot for 3 minutes, 5 minutes or even an unbelievable 87 minutes at a time.
In this video let’s take a look at what makes long takes effective, why filming these shots can be challenging, and identify the various ways and pieces of equipment which can be used to pull them off.
If you’ve been on a film set you’ll know that sometimes even the simplest looking shots on paper, can be incredibly challenging to pull off. So, what are some reasons why filmmakers make their lives more difficult by even bothering to try ambitious long takes?
IMMERSION
The longer a shot plays out in real time, the more of an immersive experience it provides. This allows audiences to engage with sequences with a sense of real-time continuity, which can feel more representative of and true to how we experience time in real life, unlike movies which chop actions up into loads of tiny little pieces and fragments of time.
Like this famous 4 minute long take from Children Of Men, which plays out some lighthearted, human moments, before gradually ramping up the action into a tense, immersive chase sequence - which all takes place in real-time in a single shot.
“Part of the reason we chose to tell Children Of Men in very fluid, long takes was to take advantage of the element of real time.” - Alfonso Cuarón, Director, Children Of Men
For this reason, long takes are sometimes used in action or even dialogue sequences to make audiences feel engaged and like they are watching things unfold more viscerally in real-time.
At the beginning I compared editing to breathing - which can be done with slow, deep, relaxed breaths which are spaced apart, with short, choppy, panicked gasps for air, or, in the case of long takes, by holding a breath for an extended period of time.
EMOTIONAL IMPACT
How filmmakers space out cuts has an unconscious emotional impact on audiences. Prolonging cutting - or withholding the audience from taking a breath - can be a great tool for building a feeling of tension, anticipation and intensifying emotional moments.
Hunger holds on a locked-off shot of a key dialogue scene between two characters for an incredible 17 minutes - intensifying the drama, giving the dialogue a real-time rhythm and locking the audience's focus in as the scene builds a powerful atmosphere.
“Once you cut away then you give the audience an excuse. It’s no longer real. It becomes a film at that point. But if the shot carries on then there is no escape. The audience, if they’re engaged with the story and engaged with the characters, are drawn deeper and deeper into that scene by not cutting. It actually heightens the emotional impact of the story, by giving you nothing else.” - Sean Bobbitt, Cinematographer, Hunger
BUILDING THE WORLD
Another effect which long takes may have is to build the world of the story, the space it takes place in, and the characters which inhabit that world. By having the camera flow with characters through a space, it can introduce the location itself to the audience, as well as show how different characters interact with each other, what their relationships are, and the hierarchy or power dynamic that exists between them, all without interruption.
SUBJECTIVE VS OBJECTIVE LONG TAKES
Filmmakers can also use long takes to tell the story in one of two styles: a more subjective style, where the camera is latched mainly onto one, or a couple, of characters, as they move through a space, like this 4 minute shot from Goodfellas, or in a more objective style like the opening shot of Boogie Nights, where the camera flows from an establishing shot, into showing various different members of the ensemble cast as the camera progressively roams around a nightclub.
These two ways of shooting long takes are informed by the director’s overall perspective on the story. One film contains voice over narration from a single protagonist and is mainly seen through his eyes. The other alternates between various storylines, scenes and relationships across an ensemble of characters throughout the movie.
Now that we know some of the reasons filmmakers may have for wanting to shoot these extended shots, what are the different ways in which these shots can be pulled off?
STATIONARY CAMERA
The most technically simple long take technique is to film a shot from a stationary camera position which is either locked off, or operated based on the movements of characters.
These stationary shots will be filmed from a tripod head, which could be attached to various bases depending on the height of the shot required and the gear available. Most commonly this base support would either be tripod legs, a dolly which is locked into position, or a hi-hat for shots which need the camera to be positioned low to the ground.
CAMERA MOVEMENT
The second, and probably most common way of doing a long take is by employing camera movement. There are a wide variety of ways to do this and it’s very situational depending on the quality of the movement, the position of the camera and what gear will be most practical for the operator and technicians.
The method that requires the least gear is to film the long take handheld - most commonly by balancing the camera on the shoulder for increased stability, while being supported from behind by a grip to make sure the operator doesn’t topple over, trip, or hit an object when moving backwards.
If filmmakers want the practical flexibility that comes from being able to move a handheld camera through various spaces at different angles but with movement that is more fluid, smooth and has a bit more stability, then a great option is either a Steadicam, a Trinity or a 3-Axis Gimbal.
For overhead long takes, a camera mounted on a remote head to a crane, a technocrane, or a cable-suspended Spidercam could also be used. This allows a camera to move pretty much anywhere vertically and horizontally within a dedicated volume of space. However, a large crane arm or overhead cabling will sometimes be limited by objects like ceilings, which means, outside of a studio, it isn’t always the best tool for long takes that need to move through an environment.
Another method that offers maximum stability is to commit to only moving the camera along one straight line. This can be done with the camera mounted on top of a tripod head on top of a dolly, which is pushed along a series of straight track lengths which are connected and levelled by grips.
Although this movement is supremely smooth it does limit the motion to one straight axis, unlike shooting handheld, with a gimbal or a Steadicam where the operator can turn corners, loop around and easily change their directional line.
An exception to this is if the long take is shot using a dolly in a soundstage or a location with a perfectly smooth, levelled floor where the dolly can be wheeled around without needing to use tracks to get it over uneven terrain.
COMBINED CAMERA MOVEMENTS
Finally, filmmakers who want to employ different kinds of movement within a single shot, such as starting on a jib and then moving into stabilised movement on the ground, can do so by using combined camera movements.
A popular combined camera movement is called the crane step off. This is where a Steadicam operator starts by standing on a platform on a rideable crane to which they are harnessed. The crane is then jibed around or down to the ground.
At this moment grips will simultaneously unharness the operator and secure the crane to ensure the change in weight doesn’t cause the unbalanced crane to shoot up. The Steadicam op will then step off the platform onto the ground and continue the rest of the required motion, such as tracking with a character.
Another combined camera movement is a gimbal pass off. Depending on the situation the gimbal or stabilised head can be rigged onto a bar and attached to some kind of tracking rig, such as a Technocrane which does a telescoping motion or a cable rig that the camera slides along. Once the gimbal comes to ground, it can be unhooked from the first rig by an operator, who can then grab the gimbal off the bar and begin a smooth motion along the ground.
Some high budget filmmakers wanting a long take with movement in unusual spaces can enter the world of custom grip builds, such as this 2-axis dolly rig from Children of Men which was built onto the roof of the car.
This rig, mounted onto the car’s cut-off roof, could move an underslung remote head along a track both forwards and backwards and left and right. Meanwhile an operator, stationed on a platform rigged onto the car’s roof, could also pan the camera a full 360 degrees on a Sparrow head inside the car’s interior.
The car was placed on a low loader platform that could be driven forwards and backwards by two professional drivers mounted low at the front and rear of the car, out of sight of the camera.
Something interesting happens right at the end of the long take. The camera swings around, somehow exits the car door, now operated with a handheld motion, and pans over without revealing any of the rigging gear or drivers.
How is this possible, when the camera is hard mounted onto a remote head inside the car and cannot easily be detached?
INVISIBLE CUT
I’d argue it was done with a technique called an invisible cut.
This is quite a widely used trick when it comes to long takes. The idea is to actually break these sequences down into a number of different shorter shots, which are more manageable to shoot, then stitch them together in the edit. These transitions are hidden by using invisible cuts, which, if done seamlessly, will mask that there was an editing point at all.
The easiest method of performing an invisible cut is to end a shot on a block of solid, dark colour, cut, then resume filming on a new shot which starts on that same block of colour.
Some invisible cuts, like the ones in Rope, are quite easy to identify as they almost always involve tracking into a dark close up of an object.
However, when this technique is combined with a wipe it can hide the cut quite effectively. A wipe is where, unlike a regular cut, the second shot gradually moves across the screen, usually from side to side, until it takes up the whole frame.
If shot one ends with the camera tracking or panning quickly from side to side, shot two begins with the same quick motion, and a wipe is then applied, sometimes with some added CGI work, the result is called an invisible wipe.
This technique has been used in movies such as Birdman, or 1917, to break up the longer takes into more manageable and easily filmable smaller shots, while maintaining the illusion that it was shot as a ‘one-er’.
RUNNING TIME
Another reason that these invisible cuts were used to stitch together shots in movies from the past was the limited running time of 35mm film cameras. Although it depends on the film gauge and magazine size, shooting 35mm with a 1,000ft mag gave around 11 minutes of run time at 24 frames per second.
So the longest that a single shot could be when shooting on film was around 11 minutes, unless a special magazine that could accommodate more than 1,000ft of film was used.
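That 11 minute figure falls straight out of the arithmetic: standard 4-perf 35mm has 16 frames per foot, so at 24 frames per second the camera runs through 90 feet every minute. A quick sketch, assuming that standard 4-perf pulldown:

def mag_run_time_minutes(mag_length_ft, fps=24, frames_per_foot=16):
    # Standard 4-perf 35mm exposes 16 frames per foot of film.
    feet_per_minute = fps * 60 / frames_per_foot
    return mag_length_ft / feet_per_minute

print(round(mag_run_time_minutes(1000), 1))   # ~11.1 minutes on a 1,000ft mag
print(round(mag_run_time_minutes(400), 1))    # ~4.4 minutes on a 400ft mag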
Nowadays digital cameras, which record onto large capacity cards or hard drives, offer a much longer running time than physical film did and have largely made this issue irrelevant.
WIRELESS TRANSMISSION
However, there are still plenty of technical challenges which come with shooting long takes. One of those is wireless transmission. This is required both to transmit a video signal to a monitor for the director to review the take as it unfolds, as well as for the 1st AC to evaluate focus, and to then transmit a signal from the wireless follow focus to the camera to pull said focus.
In ordinary setups this isn’t a problem, however if the camera needs to move a great distance during a take, away from where the monitor and focus puller are set up, then either a stronger transmitter that covers a greater range will be required, or the focus puller and director will need to stay close to the camera at all times during the take.
LIGHTING
Another technical challenge is light. The more spaces a camera moves through, the more continuous lighting fixtures will need to be set up, to illuminate and balance the exposure levels, so that no area is too dark or too bright and so that the levels of natural light do not shift too much during shooting.
One way to aid with balancing exposure is to do an iris pull, where a motor on a lens’ aperture ring can be shifted during a take to either open up and make the image brighter, or stop down to darken it. If this is done slowly and subtly this effect will likely not be picked up by audiences.
It’s especially necessary when shooting exterior long takes, where the sun may go behind a cloud and darken the image, or come out and brighten it halfway through a shot.
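For anyone wanting to quantify such a pull, the difference between two T-stops works out in stops like this - a small sketch with arbitrary example values:

from math import log2

def stops_change(from_t_stop, to_t_stop):
    # Positive result = opening up (brighter); negative = stopping down (darker).
    return 2 * log2(from_t_stop / to_t_stop)

print(round(stops_change(4, 2.8), 1))    # ~ +1.0 stop brighter as a cloud rolls in
print(round(stops_change(2.8, 5.6), 1))  # -2.0 stops darker when the sun comes out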
Another consideration with lighting is to plan the lighting and camera moves so that there is never a light source between the camera and the subject, to avoid casting a camera shadow. This is why long takes through interiors will often rely on rigging light from a higher position on the ceiling, so that shadows will be cast straight down.
Once all of these technical challenges have been figured out, the biggest challenge of all is actually shooting the shot: getting both the performance and blocking of the actors, and the motion of the camera operator, to happen simultaneously, in perfect harmony.
5 Unusual & Creative Filmmaking Shots
Let’s take a look at five out of the ordinary frames and unpack how exactly each technique is pulled off and what gear is used to do so.
INTRODUCTION
Over many decades, filmmaking has created its own standardised way of presenting images on screen. However, some stories warrant breaking these visual conventions and replacing them with unusual, creative shots.
Let’s take a look at five of these out of the ordinary frames and unpack how exactly each technique is pulled off and what gear is used to do so.
1 - SPLIT DIOPTER
One of the trademark techniques in cinematography is to shoot with a shallow depth of field - where only one part of the frame, such as the subject, remains in sharp focus, while the rest of the frame in the background is rendered soft and out of focus.
This is achieved by shooting with a wide open aperture on the lens, such as T/2, focusing on objects placed closer to the camera, using longer focal lengths, and by increasing the distance between the subject and the background.
The focus of a lens can only be set to one distance. So if an actor is six feet from the camera and the background objects are 60 feet from the camera, and the focus distance on the lens is set to six feet, then the actor will be sharp, while the background will be soft and out of focus.
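To put rough numbers on that six feet versus 60 feet example, here is a small sketch using the standard hyperfocal distance formulas; the 50mm focal length, T/2 aperture and circle of confusion below are assumptions chosen purely for illustration:

def depth_of_field(focal_mm, f_stop, focus_mm, coc_mm=0.025):
    # Standard hyperfocal-distance approximation of the near and far limits of focus.
    h = focal_mm ** 2 / (f_stop * coc_mm) + focal_mm
    near = focus_mm * (h - focal_mm) / (h + focus_mm - 2 * focal_mm)
    far = focus_mm * (h - focal_mm) / (h - focus_mm) if focus_mm < h else float("inf")
    return near, far

# A 50mm lens wide open at T/2, focused at 6ft (roughly 1,830mm):
near, far = depth_of_field(50, 2, 1830)
print(round(near), round(far))   # ~1767mm to ~1898mm: only about 13cm is sharp,
                                 # so a background 60ft away falls far out of focus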
But, what if filmmakers wanted to distort what optics can naturally do by rendering two objects which are far apart from each other sharply in focus? They can do so with this first creative technique - using a split field diopter.
This is a filter that can be placed in front of the lens. Half the filter is open and allows the camera to see through it and focus at the deeper focus point that is set on the lens while the other half contains a diopter glass which allows the lens to focus on a subject much closer to the camera, even though the focus distance on the lens is set much deeper.
These come either as circular filters or as rota filters that can fit in a standard 4x5.6” mattebox and be rotated with the gear wheel to find the right position before locking it in place.
These filters mean filmmakers can split the focus and get both subjects close to the camera in focus with the diopter, as well as subjects further away in focus with the lens.
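The strength of the diopter glass sets how close that second focus plane sits. As a rough approximation, which ignores lens thickness and exact nodal positions, a plus diopter of power P combines with the lens’ set focus distance s (in metres) like this:

def diopter_focus_m(diopter_power, lens_focus_m=float("inf")):
    # Approximate focus distance through the diopter half of the filter:
    # 1 / (P + 1/s), with the lens-at-infinity case giving simply 1/P.
    inverse_lens = 0 if lens_focus_m == float("inf") else 1 / lens_focus_m
    return 1 / (diopter_power + inverse_lens)

print(diopter_focus_m(1))               # +1 diopter, lens at infinity -> 1.0m
print(diopter_focus_m(2))               # +2 diopter, lens at infinity -> 0.5m
print(round(diopter_focus_m(1, 3), 2))  # +1 diopter, lens focused at 3m -> 0.75m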
Although it's quite a sparsely used tool, some directors such as Brian De Palma are known for using it as a way to layer information and characters across different distance planes within a single frame.
For example, allowing us to view both a TV broadcast in the foreground and observe the actions of a character in the background at the same time. Or, showing us a far off character recording sound voyeuristically, and a close up of the couple that he’s recording in the foreground.
Split diopters make it possible to incorporate different characters or subjects at different distances within a single shot, without needing to cut between two different shots, or do a focus pull between those two subjects.
Rotating and positioning these filters correctly is very important if you want the deep focus effect to be seamless. If these filters aren’t correctly positioned right on the edge of the closer object that they want in focus, such as this frame from Reservoir Dogs, then they become easily identifiable through a clear division between an area with out of focus bokeh created by the close focus diopter and an area without bokeh which is in focus.
A great way of hiding this effect is by placing the split line of the diopter against a solid area of colour like a white wall, in an area of dark shadow - which will hide bokeh when blacks have less detail and are more crushed - or by using a lower strength diopter filter with less magnification and bringing the two objects on different focal planes closer together.
2 - DUTCH ANGLE
Almost all shots in movies are framed with a straight horizon, with the camera correctly levelled and not rolled over onto its side. This makes sense. It’s the same perspective of the world which our eyes see everyday when we’re standing, sitting, or walking with our head in a level position.
However, some filmmakers deliberately subvert this convention, by framing images with the camera unevenly rolled over onto its side so that the horizon line is not parallel with the top and bottom of the frame. This is called a dutch angle.
Because it goes against how our eyes naturally orientate the world, framing in this oddly tilted way has quite a stylised, jarring and unnerving effect. For this reason filmmakers often pull out this technique for a specific moment in the story where they want to make the viewer feel uncomfortable, or get the frame itself to mimic the disorientated, uneasy, or tense feeling that the characters on screen are themselves feeling.
It can also be used in conjunction with high or low angles to emphasise a power imbalance - which is a trick Guy Ritchie has used on a number of occasions.
How dutch angles are shot depends on what movement and rigs are used to hold the camera. This tilt can be done manually by an operator rolling over a handheld camera into a weird position, by using a remote head, a gimbal or a Trinity to roll the camera on its axis, or by unbalancing the horizon of the camera on a tripod.
Because cinema cameras are heavy it’s usually not advised for tripod legs or a head to be extremely tilted or unbalanced - for danger of it falling over. Instead, filmmakers will usually use a dutch tripod head such as a Tango to get this oblique angle.
A Tango head is mounted between a fluid head and the camera and is capable of tilting the camera from side to side, to either level the horizon, or in this case, to make the horizon deliberately uneven and slanted.
Because it’s a heavily stylised technique that’s a bit disorientating to audiences it’s usually used quite sparingly. Having said that, there have been some movies, such as The Third Man, which ‘lean’ on this technique quite heavily to give the entire film a muddled, expressionist feel.
3 - DOLLY ZOOM
Since we’re on the topic of shots designed to disorientate the viewer, one of the most famous of these is the dolly zoom, sometimes called ‘the Vertigo Effect’. This shot applies two contradictory techniques at the same time to create a strange, unsettling visual experience for the viewer.
The first technique it uses is moving the camera along a straight axis. More often than not this is done by attaching the camera to a dolly and pushing it along a straight track. Moving the camera from the closest position to the furthest position on the track makes the shot go from tight to wide, while inversely moving from back to front takes the shot from a wider frame to a tighter one.
The second part of this technique is to zoom the lens. Zooming changes the focal length of the lens, which changes the field of view to be either wider or tighter.
Because the speed of the zoom needs to be smooth and consistent for this technique, cinematographers will often use a zoom motor attached to the zoom gear on a cinema lens which can be controlled by a device like a Microforce, rather than zooming the lens by hand.
When you combine these two techniques and time them correctly - moving the camera from a tight position to a wide position while simultaneously zooming the lens from a wide to a tight focal length - the size of the subject in the frame stays the same, while the lens’ compression of perspective shifts as the focal length changes.
This creates an effect that distorts the perspective, either expanding or compressing the background in an expressionist way.
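If you like to think of it in numbers: in a simple pinhole model the slice of the world the lens covers at any distance is proportional to that distance divided by the focal length, so doubling both at once leaves the subject plane unchanged while the background slice changes dramatically. A rough sketch, assuming a 36mm-wide sensor and made-up distances purely for illustration:

```python
import math

def frame_width_at(focal_mm, distance_m, sensor_width_mm=36.0):
    """Width of the scene (in metres) covered by the frame at a given distance, pinhole model."""
    half_fov = math.atan((sensor_width_mm / 2) / focal_mm)
    return 2 * distance_m * math.tan(half_fov)

# Dolly out from 2 m to 4 m while zooming from 25 mm to 50 mm:
# the coverage at the subject stays identical, so the subject's size in frame doesn't change,
# but a background 10 m behind the subject fills a very different portion of the frame.
for focal_mm, subject_m in [(25, 2.0), (50, 4.0)]:
    print(f"{focal_mm}mm at {subject_m}m: "
          f"{frame_width_at(focal_mm, subject_m):.2f} m wide at the subject, "
          f"{frame_width_at(focal_mm, subject_m + 10):.2f} m wide at the background")
```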
It’s often used to intensify the feelings of a character on screen, such as their moment of realisation, revelation, fear, shock, or convey their feeling of vertigo.
4 - SNORRICAM
It’s common practice to hard mount the camera onto cars with a variety of rigs, so that they can be focused onto a character in a seat, who remains still while the background flies past.
However, a shot which uses this same concept, but which is a lot more unconventional and rare is the Snorricam - a rig that straps the camera directly onto the body of an actor.
After Darren Aronofsky famously used an early iteration of this rig on his indie movie Pi, the Snorricam gained popularity as a niche rig for mounting the camera onto actors.
It provides quite a stylised perspective with a few different effects, depending on how it is rigged. The most common method is for it to be latched onto the face of the actor, conveying a sense of disorientation, drunkenness or psychological unease.
It can also be flipped around to shoot more of an over the shoulder shot from behind the actor, which provides a subjective point of view from the character’s perspective, or the camera can be turned around in its front mounted position to shoot a direct, forward facing POV angle.
The Snorricam is made up of low weight, carbon fibre hip plates that attach to a vest which the actor can wear. It has a cheeseplate on the back and front which can be used to mount an arm with a ballhead and support rods which hold the camera on one side and a weight to counterbalance the front heavy camera on the other side.
There’s also a new sliding track system available with a fluid head, called the Sputnik arm, which can even be operated by an actor during a take to change the point of view of the camera from facing the actor to facing forwards.
5 - PERISCOPE & PROBE LENSES
Macro shots that focus incredibly close on an object are not too unusual. But what happens when filmmakers need macro capabilities from an unusual position, or with interesting movement, without a bulky macro lens and a big camera body getting in the way?
One tool that can be used is a snorkel lens system. These lenses come in different forms and with different uses, but are mainly divided into two categories: probe lenses and periscope lenses.
Probe lenses are long, straight and can focus incredibly close to objects. This makes them great for shooting macro shots where the camera moves in a straight line incredibly close to an object, or for shooting macro details underwater in a tank where the camera can be placed outside the water and the lens inside the water.
Periscope lenses come with the added advantage of being able to tilt, swivel and pan the little macro lens at the end of the barrel to get the perspective into positions which conventional lenses are unable to go.
Regular, non-macro lenses can also be used with some of these systems, for example for shots where the camera needs to be incredibly low to the ground to shoot a low angle shot - so low that placing a regular camera in that position on a hi-hat or slider would not be low enough.
One disadvantage of using these extended optics is that, due to their designs, they usually come with quite high aperture values - such as T/5.6 or higher. This means they have a much deeper depth of field and require much more light to expose with. However, since macro shots have an incredibly shallow depth of field anyway, it’s usually desirable to shoot with the lens stopped down a bit.
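To see why stopping down costs so little at macro distances, here’s the standard close-up depth of field approximation - total depth of field ≈ 2·N·c·(1 + m) / m², where N is the f-number, c the circle of confusion and m the magnification. The values below are illustrative assumptions, not the specs of any particular probe lens:

```python
def macro_dof_mm(f_number, magnification, coc_mm=0.025):
    """Approximate total depth of field in millimetres at close-up magnifications."""
    return 2 * f_number * coc_mm * (1 + magnification) / magnification ** 2

# Even stopped well down, a 1:1 macro shot holds less than a millimetre of focus.
for stop in (2.8, 5.6, 8.0):
    print(f"T{stop}: ~{macro_dof_mm(stop, magnification=1.0):.2f} mm of total depth of field at 1:1")
```

In other words, the difference between T2.8 and T8 at 1:1 is the difference between roughly a quarter and three quarters of a millimetre of sharp focus - either way the depth of field stays razor thin.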
Cinematography Style: Erik Messerschmidt
Let’s get into how Erik Messerschmidt does what he does, by unpacking his thoughts and philosophy on photography and looking at what gear he chooses in this episode of Cinematography Style.
INTRODUCTION
Erik Messerschmidt’s visual preferences go a bit against many of his peers. The images he shoots usually don’t feel overly textured, soft or organic - but rather lean on a sharp, precise, crisp, digital, detail oriented look.
This comes from his choice of camera gear, his experience as a technician, the kinds of directors he works with and his reliance on digital post production effects. So, let’s get into how this DP does what he does by unpacking his thoughts and philosophy on photography and looking at what gear he chooses in this episode of Cinematography Style.
BACKGROUND
Messerschmidt’s entry point into cinematography is an interesting one. After leaving film school and doing a stint of interning at Panavision, he developed a career as a technician, first as a grip and eventually as a gaffer - the head of the lighting department.
As a gaffer he needed an incredibly strong technical knowledge of lighting gear and what tools he could use to achieve the look that the cinematographer he worked for was after in the best and most efficient way.
This depth of technical knowledge on grips and lighting would later be immensely helpful when he started working more as a director of photography.
One of the cinematographers he worked for as a gaffer was Jeff Cronenweth - David Fincher’s DP. He worked on Gone Girl as a gaffer, then on Fincher’s next feature project Mank, he got an opportunity to take over as cinematographer.
VISUAL LANGUAGE & REFERENCES
In his work with David Fincher, and indeed with other directors too, such as Michael Mann or Ridley Scott, he appreciates that they approach projects ahead of time with a clear vision for the film.
“I like to work with directors that have a real distinct sense for what they want out of their film. It’s nice to come to the table when someone has a real developed passion for their film.”
He takes this direction, ideas, or references from each director he works with and translates that into a visual style which he will then plan and execute using his knowledge and experience as a technician.
One of the first questions he likes to ask directors as a jumping off point for the visual style is: “What do you want me to look at?”
For Michael Mann on Ferrari this was referencing paintings by Caravaggio, which he used to apply a dark, chiaroscuro style of single source, low key lighting to the interiors.
For David Fincher on Mank, Messerschmidt compiled a whole book of black and white photography references ranging from noir to more modern monochromatic imagery.
While other Fincher projects he worked on, like Mindhunter or The Killer, conformed a bit more closely to the director’s usual neutral, off-yellow colour palette.
“David, he does the prep. He thinks about the film and he develops an idea. I think he had aesthetic sensibilities that he leans towards. We’re lucky in that I often share a lot of the same sensibilities with him. So it makes the conversation really easy for us.”
PERSPECTIVE & CAMERA MOVEMENT
Another visual component in Fincher’s movies, apart from his purely aesthetic sensibilities, comes from the perspective the camera takes on and how this is informed by the story.
When it comes to perspective and camera movement, more often than not Messerschmidt likes motion to be motivated and for the perspective to be objective.
The movement of the camera will almost always mimic the blocking of the actors. So, if the subject is stationary, then the frame will be locked off and unmoving. When a character does start to move the camera will smoothly track with them and come to a stop again if the character stops.
In this sense, the camera acts as a passive, objective observer - like someone seated across the room watching action unfold.
More often than not the angle of this perspective is also quite neutral.
Occasionally this language will be broken and the camera will have more of a subjective, noticeable, stylised presence by shooting from either a low angle that looks up at the subject or from a high angle that looks down on them. Another time this subjective perspective comes in is for very specific moments where they decide to shoot with a handheld camera.
Because Messerschmidt errs so much on the side of objectivity, when the camera does break into a more subjective language it is all the more noticeable and effective.
For example, in The Killer the camera language changes from very clinical, objective, locked off frames when the subject is in full control of the situation to a more frenetic, subjective, handheld language when things stop going according to plan.
POST PRODUCTION
Many of the cinematographers whose work I’ve covered on this channel enjoy achieving their photography practically in camera wherever possible - using different filters in front of the lens to alter the image, operating cameras with their own handheld touch, or pushing to shoot effects practically rather than digitally.
Messerschmidt doesn’t share this same nostalgia for in-camera photography. Again, this aligns with Fincher’s taste and style of working - which incorporates a large amount of visual effects and image manipulation in post production.
He’ll often shoot interiors inside a carefully constructed set in a studio with greenscreens set up outside the windows - which will later be keyed out and replaced by plate shots which they film ahead of time in the real world.
Another example of post production image manipulation is how they created a feeling of handheld movement for a fight scene in The Killer completely artificially - by filming the footage with a stabilised movement and then, counter-intuitively, de-stabilising it in post.
“There’s actually only one or two actual, truly handheld shots in that sequence. Everything else has been post de-stabilised. We thought it would be interesting to play with the pace of the shake throughout the fight and that’s a very difficult thing to orchestrate practically when the camera is on your shoulder. It’s the most I’ve been involved in post-production on any film I’ve done.”
LENSES
Another piece of visual effects work that he often uses with Fincher is to create lens flares artificially in post. There is an interesting inconsistency here between the lenses he likes shooting with and the fake flares.
Messerschmidt stays away from anamorphic lenses, shooting his long form work, and much of his short form content too, exclusively with spherical glass. Spherical lenses produce rounder flares, while anamorphic lenses produce horizontal streak flares.
In his work with Fincher they shoot with spherical lenses, but overlay fake anamorphic flares in post-production, creating a contradictory combination of spherical bokeh and anamorphic flares.
Remember when I mentioned the importance of perspective in his work? Well, one tool he uses to control this, as well as what information sits in the frame, is his selection of focal lengths.
Telephoto focal lengths magnify and compress frames, while wide angles expand and distort images. The way these lens choices control information can be as simple as using longer glass to isolate single characters, while using wide angle lenses to incorporate more characters in frames in tight spaces.
However, these decisions also affect what background information, beyond the characters, is given to audiences: shooting wides with everything in focus that establish the world of the film for audiences to peruse, or latching the perspective onto a single object by shooting it close with a shallow depth of field.
CAMERAS
When it comes to choosing cameras and lenses there are three common denominators he often lands on: one, a Red cinema camera, two, Leica Summilux-C primes and three, a Fujinon zoom lens.
He’s used this same camera package on numerous projects, although he has also recently used the Venice 2 and Panavision Panaspeed primes.
His attraction to this consistent gear selection may be based on his preference for sharp, modern, ergonomic lenses, and a camera that resolves clean images, has a high dynamic range and records at resolutions high enough to aid post-production visual effects and stabilisation.
To do this he’ll usually shoot at a high resolution such as in 6K 17:9 mode, then apply frame lines with a centre crop around 17% which he will use to frame the film. This leaves room on the edges of each shot which can later be used for stabilisation or to aid visual effects work in post if necessary.
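As a rough, back-of-the-envelope illustration of how much room that leaves - assuming a 6144 × 3252 pixel 17:9 recording and reading the 17% as an inset on the recorded frame, both of which are assumptions rather than confirmed specs:

```python
# Illustrative arithmetic only: assumed recording dimensions, not confirmed camera settings.
record_w, record_h = 6144, 3252            # assumed 6K 17:9 recording
inset = 0.17                               # centre-crop frame lines used for framing

framed_w = round(record_w * (1 - inset))   # ~5100 px actually composed for the film
framed_h = round(record_h * (1 - inset))   # ~2699 px
margin_per_side = (record_w - framed_w) // 2

print(f"{framed_w} x {framed_h} framed, with ~{margin_per_side} px of slop on each side")
```

Even after the crop, the framed area stays comfortably above a 4K deliverable, with hundreds of pixels of margin available on every edge for stabilisation or repositioning in post.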
He isn’t precious about the ‘look’ of the digital cinema camera that he shoots on, as a good colourist will be able to get the footage to the look that he and the director are after.
He prefers to shoot the negative clean, without any diffusion or effects filters in front of the lens. Then, if he wishes to add a diffusion effect it can be introduced in post - such as this heavy Pro-Mist effect which blooms the highlights and was achieved artificially with a plugin for Da Vinci Resolve called Scatter.
GRIPS
To more quickly get coverage and be able to consistently cut action or even dialogue scenes together from a single take he’ll often use multiple cameras.
A setup he likes to use for walk and talks is to rig a Ronin 2 gimbal or an M7 Evo stabilised remote head onto a dolly. That way his dolly grip can more accurately control the pace and stops and starts of the lighter dolly without an operator sitting aboard it, while his operator can use wheels to very precisely and remotely operate how the camera tilts and pans.
LIGHTING
His interior lighting is usually motivated by real life sources. He especially likes pushing a strong 18K HMI key light in through windows - which he softens with frames of diffusion.
Then, with the production designer and director, they will decide on the placement of practical sources inside the room - which he can use to fill in the actors.
A trick he uses to maintain some detail in the shadows when he lights with a strong key light and sometimes no fill light, is to pump in some smoke or haze onto the set.
Light striking haze lifts the amount of detail that can be seen, so that shadows don’t get completely crushed to black, and simultaneously gives the source shape - allowing beams of light to be picked up by the camera.
Why Movies From The 70s & 80s Look Like This: Kodak 100T 5247
Let’s take a look at Eastman’s 5247 100T II film stock, its characteristics, the new development process that it created, and why modern movies have a more diverse range of looks when compared to many of these films from the 70s and 80s.
INTRODUCTION
Towards the end of the 70s and into the early 80s there were a host of pretty iconic Hollywood films. And if you take a closer look at them you’ll see that most of them share quite a similar look.
The reds are punchy and saturated, the greens a bit cooler, with deep, crushed, dense black shadows and full of subdued and neutral grey and brown tones. While of course much of this was a product of what palette and lighting was in front of the camera, another factor they all had in common was that they were shot on the same 35mm film stock: Kodak Eastman’s 5247 100T II.
In this video let’s take a look at this film stock, its characteristics, the new development process that it created, and why modern movies have a more diverse range of looks when compared to many of these films from the 70s and 80s.
LOOK
If we look at some shots from a few different movies in this period which were all shot on 5247 with a similar colour palette, of greens, neutral tones and reds, and were all shot in natural exterior day lighting, it’s easy to see that they share a pretty similar look.
If we look at the colour, the greens favour a deeper shade which is typical of Kodak film. The reds in particular stand out. They are very saturated and punchy - which is a bit of a trademark of this stock. The blues of the sky are lighter towards white with more red or magenta in them, especially in comparison to the deeper, more punchy cyan blues found in modern Kodak stocks.
In fact overall, it has a bit of a trend towards reds, yellows and more neutral pastel colours - with tungsten lit skin tones often getting a bit of a bronzed redness to them. If we look at a vectorscope we see this. This line represents a good colour skin tone.
The modern Vision 3 stock lands on this line. However the older 5247 line goes a bit more to the right into a reddish hue.
This older film also has less dynamic range than modern stocks do, meaning it resolves a smaller variance between the brightest part of the image and the darkest part.
The 1970s film manages to keep detail in images with about 7 or 8 stops of light difference in brightness, while modern stocks like 5219 have a dynamic range of around 14 stops.
What this means is that often brighter skies will start to look white and more blown out, while shadows turn to blocks of pure black - compared to modern stocks that can hold and roll off strong highlights far better while also maintaining visible detail in areas of shadow.
The effect is that 5247 has a high contrast look, with deep shadows and bright highlights. The data sheet even suggests exposing with an extra half stop of light if the palette of what is being filmed is mostly made up of darker hues - to avoid crushing the shadows across the frame and making images difficult to read.
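To put those stop counts in scene terms, each extra stop of dynamic range doubles the brightness ratio a stock can hold with detail, so the jump from 7 or 8 stops to 14 is far bigger than it sounds:

```python
# Each stop of dynamic range doubles the usable scene contrast ratio.
for stops in (7, 8, 14):
    print(f"{stops} stops ≈ {2 ** stops:,}:1 brightness range held with detail")
# 7 stops ≈ 128:1, 8 stops ≈ 256:1, 14 stops ≈ 16,384:1
```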
EXPOSURE INDEX
Beyond the look of the film, what are some of its more technical characteristics that determined how 5247 II was shot?
I say II, because the first variation of 5247 film was introduced in 1950 with a mere EI of 16. It was later discontinued and then the label was brought back as 5247 II in 1974.
This film had an exposure index, or EI, of 100. You can think of this like ISO on digital cameras - an indication of how sensitive the film is to light. Each doubling of the EI or ISO represents a one stop increase in sensitivity - which means half the amount of light is needed to expose the image.
So, a 200 speed film is twice as sensitive as a 100 speed film and needs half the light, a 400 EI film needs a quarter of the light and an 800 EI film needs an eighth of the light.
Therefore, modern digital cinema cameras, which usually have EIs ranging from 800 up into the 1000s, can shoot with much darker levels of light than this old film stock. Even Kodak’s modern 5219 film stock is rated at 500 EI, which makes it five times as sensitive as 5247 - a difference of roughly two and a third stops.
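If you want to check any of these comparisons yourself, the stop difference between two speeds is just the base-2 logarithm of their ratio - a quick sketch using the values from this section:

```python
from math import log2

def stop_difference(ei_from, ei_to):
    """How many stops more sensitive ei_to is than ei_from (negative means slower)."""
    return log2(ei_to / ei_from)

print(f"EI 100 -> EI 500 (5219): {stop_difference(100, 500):+.2f} stops")        # about +2.3
print(f"EI 100 -> EI 800 (digital base): {stop_difference(100, 800):+.1f} stops")  # +3.0
print(f"EI 100 -> EI 64 (85 filter): {stop_difference(100, 64):+.2f} stops")       # about -0.64
```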
What this meant was that cinematographers shooting inside studios or in darker interiors on 5247, would need to pump in much larger amounts of light to properly expose the image than by today’s standards.
A byproduct of low EI films is that they produce less noticeable film grain. So, for the time, 5247 produced quite a clean looking image, even if it feels quite a lot more textured by today’s technological standards which have seen a decrease in film grain and video noise.
COLOUR BALANCE
Unlike digital cameras where you can precisely dial in your colour balance, film stocks come with one of two different types of colour balances: daylight or tungsten.
Daylight stocks are designed to reproduce true, natural looking colour under sunlight, while tungsten films produce balanced looking colour when lit by much warmer tungsten film lights.
5247 is a tungsten balanced film - meaning it should be shot under lights with a 3,200K colour balance. When shot in these conditions it has an EI of 100, as we discussed.
However, if cinematographers wanted to shoot this film outdoors under natural sunlight or using lights with a 5,500K colour balance, then it was recommended that they add an 85 filter in front of the lens.
This orange filter would warm up and correct the colour balance, however it would also cut the exposure by around two-thirds of a stop.
Therefore, when shooting with this filter Kodak recommended exposing with an EI of 64. Which meant the film stock needed to be given even more light when shooting under a daylight source.
DEVELOPMENT
Film is able to record images by exposing it to light for a fraction of a second, then later running it through a series of chemicals that fix that look onto the film as a negative, which can later be turned into a positive image that can be viewed.
These emulsions are very sensitive, so if the film is taken out of its can and exposed directly to light for even a few seconds it will be completely fogged and unusable for recording an image.
To avoid this happening, film needs to be handled completely in the dark: from when it is removed from the can, loaded into the magazine, removed from the magazine and developed.
After development, the image is fixed into the film and it can be viewed in the light for as long as you like.
ECN-2 PROCESSING
One of the reasons that 5247 is quite a pivotal moment in Kodak’s history is because it introduced a new way of developing the film, called ECN-2, which is still used to this day.
This updated the original ECN process, making it quicker to develop the film and more environmentally friendly.
One thing that makes ECN-2 different from C-41 - the process used to develop 35mm stills photos - comes down to how cinema film is shot. Stills cameras only need to take one photograph at a time, whereas to create motion, cinema cameras take 24 or more photographs every second.
This meant the film needed to move through the camera faster, which in turn made it more prone to getting scratched as it flew through and came into contact with mechanical parts inside the camera.
To solve this, a remjet layer - a black antihalation and lubrication coating on the back of the film - was introduced to protect it from scratches. ECN-2 has two additional steps at the beginning of the development process designed to remove this protective layer before the film is developed.
PRINTER LIGHTS
Returning to the initial question of why many movies from the late 70s and early 80s looked the same: there were very limited options for which film stocks cinematographers could shoot on, and those stocks to a large degree determined how colour was captured and represented.
Cinematographers did have a few ways to control how the colour turned out, but these tools were much more rudimentary and basic than what we are able to do today.
After the film has been developed into a negative it needs to get printed onto new film as a positive at the film lab. If they wanted to control the look or do colour corrections, they could do this here with printer lights. They could control how much red, green and blue they wanted to add to the print by adjusting the strength of these different lights when they converted the negative to a positive.
It was also possible to do other tricks in development, such as a bleach bypass, which would create a more contrasty, stylised look. Or use effects or colour filters in front of the lens while shooting to create a look with a different tint.
DIGITAL INTERMEDIATE
However, nowadays, this colour correction process is handled entirely digitally with computers and software and is called the digital intermediate or colour grade.
Files from digital cameras or film negatives which are scanned and converted into digital files, can be manipulated in grading software such as Da Vinci Resolve or Baselight.
This software allows more granular corrections to be done, such as isolating certain parts of the image using power windows, or adjusting only certain hues.
It also makes it easier than ever to test and create different types of looks for films.
Because of these advances in grading software and cameras with increased dynamic range and colour flexibility, we can go beyond just creating a consistent colour balance based on whatever film was being shot with and move into creating different looks.
These looks could be emulations which try to mimic a look of a filmstock, or any number of digitally created looks by colourists.
CONCLUSION
This is why, overall, the colour in many modern films seems a bit more varied than movies from past periods which were all shot on the same film stock and developed and printed in a very similar way.
I see these advances in technology as a positive, that have made it easier than ever before to manipulate colour in the way that best suits the story of the film.
3 Techniques For Shooting With A Handheld Camera
Let’s go over three different techniques filmmakers can use to shoot handheld footage.
INTRODUCTION
Audiences have different perceptions of the emotional effect that shooting with a handheld camera has. The shaky looseness of this motion may, for some, bring the images a bit closer to reality, give footage a sense of naturalism, elevate the emotion into a heightened state, or evoke a fly on the wall subjective point of view.
Let’s go over three different techniques on how filmmakers can get this motion, then go over some tips that may help you to shoot better handheld footage.
1 - SHOULDER MOUNTED CAMERA
The first technique for shooting handheld is pretty basic and self explanatory - you put the camera up on your shoulder and operate it through how you move your body.
This technique has a few things going for it. For one, you don’t need any additional gear beyond the camera, so it can be both a cost saver and a quick way of working with minimal setup time which avoids needing to set up and level a tripod, lay tracks for a dolly or balance a gimbal.
For another, it means operators can be reactive to the movement of characters, which frees them up to move around the space without worrying about precise blocking. This gives them room to improvise and adapt to how the character is moving and grab unexpected moments as they happen without being tethered to a position on a tripod - which is why this technique is often used in freeform documentary shooting.
Because, unlike the other handheld shooting methods which we’ll get to, it doesn’t suspend the camera from a cable, it works well for shots where the operator and camera need to walk around.
Operating from the shoulder and using your knees kind of like shock absorbers gets rid of some of the ‘bounce’ which you might get from the up and down motion of taking footsteps.
Mátyás Erdély’s phenomenal shoulder mounted handheld technique on Son of Saul is a great example of this. Throughout the movie he tracks the character with an authentic and reactive handheld looseness, which eliminates most of the up and down bounce from his footsteps.
However, this shoulder mounted method doesn’t come without a few disadvantages.
2 - EASYRIG
Digital cinema cameras are very heavy chunks of metal, and film cameras with loaded magazines are even heavier. When framing a stationary shot handheld in an uncomfortable position, the longer the shot is held, the more the operator's body will tire and a tremor or shake from the tired muscles will also start to shake the image.
This brings us to our second way of shooting handheld - with an Easyrig.
This tool was initially designed by a cameraman as a way for operators to alleviate back and shoulder pain from carrying heavy cameras around all day. However, it’s since evolved into a general purpose tool for shooting handheld with a cinema camera.
An Easyrig is worn like a vest or backpack - with a solid support spine which runs up the back, loops over the head and contains an adjustable cable that can be hooked onto the camera’s top handle or clamped into an eyebolt.
The tension in the cable then holds up and supports the weight of the camera. It’s a great tool that minimises the strain on the operator’s body and allows them to more easily hold shots for extended periods of time without camera shake.
The cable can be extended almost to the ground to shoot from a low angle, or raised up to the top of the spine to shoot a high angle. This makes the Easyrig a great tool for shooting handheld at more extreme angles, whereas shooting from the shoulder limits how low the camera can be positioned.
Although cameras can be gripped by the handle without an Easyrig to get low angles, the heavy weight of cinema cameras combined with an awkward extended centre of gravity (rather than having the weight of the camera sit on the solid base of the shoulder) means that there’ll quickly be an unusable amount of camera shake.
The Easyrig suspends and supports this weight and makes shooting these angles easy.
One downside to using an Easyrig comes about if you want to walk around with the camera. Because the weight of the camera sits on a taut cable rather than the shock absorbers that are the knees, it will quite easily create an up and down bouncing motion - feeling each footstep that is taken by the operator.
Easyrigs are much better for framing and holding static handheld shots than they are for doing handheld shots where the operator moves around.
3 - BUNGEE RIG
The final alternative method - one which allows a subtle handheld looseness, takes the weight of holding the heavy camera away, and lets the camera move position without feeling each footstep - is a bungee rig.
This uses the same idea as the Easyrig - a cable suspending and supporting the weight of the camera - but rather than mounting this cable to a backpack, grips can create a host of different DIY rigs to mount the cable to.
These bungee cables can be hard locked to mounting points on a set, such as here, where Greig Fraser’s grip team attached three suction mounts to the glass window of the cockpit then strung a bungee cord between these points which the camera could be hooked onto.
This makes it easy to operate the camera with a handheld motion, shoot from a low angle and avoid bringing a large overhanging Easyrig arm into the tight space of the set.
Or, if this bungee rig needs to be mobile, another common solution is rigging it onto a dolly, which can easily be pushed around into the correct position for each shot, or used if DPs want the camera to track and move around while keeping a handheld looseness to the motion rather than perfectly smooth dolly movement.
Here’s one variation of this rig Hélène Louvart used where her grips mounted poles to a dolly in a sort of industrial Easyrig setup, from which a cable could be mounted to support the camera.
Here’s another rig using the same idea, but executing it with a combo stand rigged to a dolly which supports a backweighted boom arm which the cable is hooked onto.
This is a great solution for shooting in any location with a flat floor and enough space to push a dolly around on. However, if the location is a rough terrain exterior where the dolly wheels won’t go or a space that’s too small to fit a dolly inside, then operators will usually have to rely on one of the other two handheld methods we mentioned before.
HANDHELD TIPS
An important factor when it comes to handheld is the degree of shake that filmmakers are after. This ranges from shots having a very subtle, organic handheld motion all the way to a vigorous, chaotic shaky cam style.
The two cable suspension techniques we discussed are better for minimising the amount of camera shake, whereas shooting straight from the shoulder, or even roughly operating the camera from a grip on the top handle allows operators to increase the intensity of the vibrations.
Another method to take out some of the shake or bounce from your footsteps when doing a handheld tracking shot is to shoot on a rig called a rickshaw - a platform with a mounted seat and two wheels - which can be smoothly pulled and steered by grips while the operator sits with a handheld shoulder mounted camera.
Another factor that influences shake is lens selection. More often than not cinematographers will favour wide angle lenses when shooting handheld - all the way from 8mm wide angle fisheye focal lengths to around 35mm or 40mm on the long end.
This is because the wider focal length you select, and the wider the field of view on the camera, the less camera shake will be felt in the image. Wide lenses smooth out movement, while shooting handheld with telephoto lenses increases the sensitivity of how much shake will be felt.
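A rough way to see why, using a simple pinhole model: the same small wobble of the operator’s body shifts the image by a fraction of the frame that grows as the field of view narrows. The half-degree of wobble and the 36mm sensor width below are illustrative assumptions, not measurements:

```python
import math

def shake_fraction_of_frame(focal_mm, wobble_deg, sensor_width_mm=36.0):
    """Approximate fraction of the frame width that a small angular wobble displaces the image by."""
    horizontal_fov_deg = 2 * math.degrees(math.atan((sensor_width_mm / 2) / focal_mm))
    return wobble_deg / horizontal_fov_deg

# The same 0.5 degree wobble, seen through different focal lengths:
for focal_mm in (18, 35, 85, 135):
    fraction = shake_fraction_of_frame(focal_mm, wobble_deg=0.5)
    print(f"{focal_mm}mm: image shifts ~{fraction:.1%} of the frame width")
```

The jitter that is barely perceptible on an 18mm becomes several times larger relative to the frame on a 135mm, which is why telephoto handheld reads as much shakier.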
A modern solution for operators who are keen on limiting the effect of shake on images when shooting handheld is through the introduction of IBIS or in-body image stabilisation.
This is where image stabilisation is introduced inside the body of the camera by moving the sensor around, kind of like a gimbal does, to compensate for camera bumps.
Although this used to be reserved more for prosumer mirrorless or stills cameras, this tech has now started filtering into higher end cinema cameras like the Sony Burano.
Some prosumer cameras also offer lenses with stabilisation which will minimise camera shake on the lens’ side.
There are different types of Easyrigs which are separated by their different weight capacities that they can carry. The most popular Easyrig for use with cinema cameras is the Vario5 - which has a maximum tolerance of 25kg - enough to support most film or digital handheld camera builds.
You can use a tool on the back of the vest to adjust the amount of tension you want the cable to have - depending on the weight of the camera and the operator’s preference of having the cable be more slack or more rigid.
The Easyrig can be operated with handles on the camera. A common setup is using a short left handle facing forwards and a short right handle facing backwards - giving operators a solid grip on either side nice and close to the camera.
Another style is to grip the camera on the back of its battery - which isn’t always the best method as I’ve seen batteries lose their contact with the backplate and shut down if handled too robustly in this way.
Shoulder mounted handheld operators will also usually use handles - though these are mounted to the front 15mm or 19mm rods. The closer these are to the camera the better, so as to create a tight connection point, tucking the elbows into the body. The further out the elbows are, the looser the camera will move, while creating contact points between the camera and shoulder and the elbows and stomach builds a solid base where shakes, bumps and bouncing are minimised.
Another, more old school method of operating on the shoulder is to grab the camera directly by the mattebox and use that as a contact point rather than handles.
When building handheld cameras, assistants will add a shoulder pad and try to balance it so that the centre of gravity sits squarely on the shoulders without being too front or back heavy.
Once the blocking has been established, it’s useful to walk through the action beforehand without the camera, creating a sort of muscle memory that helps your body test out and remember the movements you’ll need to make.
And, if the take involves landing in one position and holding that for an extended period it’s useful to try to find a comfortable landing position, where the body is braced, maybe even against a surface, not in a position where you are overextended or crouched uncomfortably.
For any kind of operated handheld shot, grips will always position themselves behind the operator, carefully following in their steps, with a hand placed on their waist. This is a safety measure: in the event of the operator losing their balance, the grip is there to prevent them and the camera from falling.
Their hand may also steer the operator, especially when moving backwards, and re-route them to avoid hitting any obstacles.
Dune: Part Two’s Bold Cinematography
Let’s take a closer look at the cinematography and show how Dune achieved such bold, ambitious, epic visual storytelling.
INTRODUCTION
While the first Dune movie was being showered with praise at the 2022 Academy Awards, director Denis Villeneuve, DP Greig Fraser and the rest of the heads of departments were actually hard at work - preparing to shoot the sequel.
Part Two was even more ambitious. If Part One built the world and set all the parts in place, Part Two dived straight in with an escalation in drama and an abundance of large scale set pieces.
So, let’s take a closer look at the cinematography and show how Dune achieved such bold, ambitious, epic visual storytelling.
VISUAL APPROACH
“Whenever you do a movie you’ve got to solve a series of problems and some of them are technical. Like, how do you get a bit of equipment to a certain place. And then there are creative discussions. And a lot of the technical stuff had been solved for us in advance because we’d done Part One.” - Greig Fraser, Dolby
This, along with the critical and financial success of the first movie, emboldened the filmmakers and got them considering how they could creatively and visually elevate Part Two.
One of the early crossroads they arrived at was whether they should closely retain the look and feel of what they had done in the first film, or deviate by shooting a different format, using a different palette or implementing different lighting ideas.
They landed on maintaining most of the look of the original, keeping a visual continuity in the cinematic world, but elevating that look in subtle ways, or at times by employing riskier creative decisions - like infrared black and white - which we’ll get to a bit later.
CAMERAS & FORMATS
A starting point for the look came from the camera and lens selection. Both films were designed to be released across two, or actually three, exhibition formats. Two different, taller aspect ratio formats for Imax cinemas, and a 2.39:1 widescreen format for regular cinema and streaming distribution.
Part One used two different types of lenses - spherical Panavision H-series lenses to capture the taller Imax scenes and the 1.6x anamorphic Panavision Ultra Vista lenses for non-Imax widescreen scenes.
For Part Two they kept the same large format look of Part One which they shot on the Alexa Mini LF, while also adding an extra even larger format Alexa 65 camera.
Fraser switched up his lens selection on Part Two - getting rid of the anamorphic lenses and shooting entirely with large format spherical glass. Doing this meant that they would chop off the top and bottom of the image to arrive at a widescreen aspect ratio, rather than shooting anamorphic and getting this ratio natively.
The advantage of this is that they could then preserve the height of the frame for the taller Imax distribution aspect ratios - having greater cropping options for all the different distribution formats in post.
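A quick piece of arithmetic shows what spherical capture buys them here. Assuming the Alexa Mini LF’s roughly 4448 × 3096 open gate recording area - an assumption made purely for illustration, not a statement of the production’s exact recording mode - the different delivery ratios use very different amounts of the recorded height:

```python
# Illustrative arithmetic only: assumed open-gate recording dimensions.
sensor_w, sensor_h = 4448, 3096   # assumed Alexa Mini LF open gate photosite area

for name, ratio in [("2.39:1 widescreen", 2.39), ("1.90:1 Imax digital", 1.90), ("1.43:1 Imax film", 1.43)]:
    rows_needed = round(sensor_w / ratio)   # rows required at the full recorded width
    rows_used = min(sensor_h, rows_needed)  # the 1.43:1 extraction essentially needs the full height
    print(f"{name}: uses {rows_used} of {sensor_h} rows ({rows_used / sensor_h:.0%} of the height)")
```

An anamorphic capture would have baked the widescreen ratio in at the sensor, leaving nothing above or below the 2.39:1 extraction for the taller Imax versions.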
LENSES
When it came to choosing which spherical lenses to shoot on there were two important factors that affected this decision. Firstly, they had to be large format cinema lenses with enough coverage for the LF and 65 sensors without vignetting.
Secondly, Fraser sought glass that came with a bit of vintage texture which would counteract the large format, high res, crisp digital Alexa sensors.
He landed on two different sets of lenses. One was Arri Rental’s rehoused versions of vintage Moviecam lenses from the 1980s - with prime focal lengths ranging from 16mm to 180mm - which have a gentle, natural character, a feeling of depth and beautiful focus falloff.
These were combined with textural, large format Soviet-era glass which were custom rehoused for modern use by IronGlass.
FOCAL LENGTHS
Going through the film we can see that Fraser often elected to shoot on longer focal length primes. Shooting on these telephoto focal lengths has the effect of compressing the background and giving images the feeling that the landscape is closer to the characters than it would appear on a wide angle lens.
I think this is a beautiful way of giving the sand dunes more of a layered depth, showing the scale of the characters within the vastness of the imposing desert and making the environment itself feel like a character.
GROUNDED PHOTOGRAPHY
One of the biggest reasons I think this film is successful is how they brought a level of reality and believability to a fantastical story.
One of the pitfalls of many studio blockbusters that take on stories in unrealistic, imaginative cinematic universes is that when the visual language of the filmmaking also takes on an unrealistic tone I tend to get pulled out of the story and lose interest.
In Dune, even when fantastical, impossible things are happening on screen, because it’s shot in a way that is tethered to reality - I stayed fully invested with the characters, on the edge of my seat.
“When you’re shooting something so larger than life, it’s important I think to give the audience a grounding. Because if you then suddenly start to become unreal with a situation that’s unreal it doesn’t, kind of, work.” - Greig Fraser, Dolby
The costumes feel textural, like they’ve been worn. The environments are authentic, and don’t look like they’ve been shot against a green screen in a studio. When there are visual effects they feel fully integrated and a seamless part of the live action footage. The sci-fi technology has a mechanical believability.
And, importantly, the camera is always positioned and operated in a real way that conforms to physics. The perspective never goes through walls, swoops around the sky or moves in an unnecessary, unmotivated way.
The shots are usually operated from the ground and have a subtle handheld looseness to them. Or if the camera does move it’s usually tracking the motion of characters.
Always keeping the photography grounded in reality.
B&W INFRARED
So, part of the look relied on keeping the cinematography grounded, however certain moments and set pieces like the eclipse scene or the introduction of a new planet Giedi Prime introduced very bold, stylistic photographic ideas.
One concern they had when introducing a new planet and character to the story was that audiences may get confused between it and the main planet Arrakis. Giedi Prime would be introduced with an exterior scene, in a sand pit, with a similar neutral colour palette to Arrakis.
To avoid confusion Villeneuve proposed communicating this change in location by switching to a monochromatic palette. Fraser stylistically elevated this idea even further by proposing black and white infrared.
But what exactly is infrared?
Well, cameras capture wavelengths of light. There is visible light that our eyes can see and also light at different wavelengths which is invisible to the naked eye.
To make sure that none of these infrared wavelengths outside the visible spectrum bleed into the regular colour space captured by cinema cameras - which would distort realistic colour reproduction - manufacturers add an infrared cut filter in front of the sensor which blocks these rays.
However, if you deliberately remove that infrared filter in the camera and replace it with a visible light cut filter then the camera will only capture infrared light.
They did exactly this, by stripping away the infrared filter layer from the Alexa sensor and adding a filter in front of the lens which would cut out any excess blue, green and red visible light. This image could then be desaturated into monochrome for on-set monitoring and post production.
Infrared light isn’t exactly flattering to actors. When shooting characters under hard, direct, toppy sunlight it gives them a frightful, intimidating pale appearance with dark eyes - an accurate emotional representation of the central sadistic character from this world.
COLOUR PALETTE
Overall the colour palette on Arrakis closely follows that of the first film - leaning heavily into a neutral look with browns, hot, white skies, greys and blacks.
However, they decided to start the opening scene with a bang by letting it unfold during an eclipse. To create a distinct look Fraser used a filter that cut most of the green and blue light but kept the visible red light.
This pushed a darker orangey-red palette into this scene with much denser, graduated orange skies than the regular palette.
LIGHTING
There’s a tendency in modern filmmaking to use an abundance of soft light. If there’s harsh sun outside, most cinematographers’ first instinct is to throw a scrim over the talent.
However, part of the beauty of Dune comes from how Fraser works with a combination of soft light and hard light. He creates a feeling of heat by keeping the light for exteriors hard with a crisp distinction between solid shadows and areas with highlights. Pushing the exposure up the curve to a brighter look.
When the characters are inside, he decreases this idea of the harsh sun by lighting with a much softer, more diffused look with a gentle gradient between shadows and highlights. He’ll weight the exposure far darker for these scenes.
Sometimes punctuating these indoor spaces with little pockets of hard light - reminding us of the overpowering nature of the desert sun outside.
He masterfully takes a visceral feeling and communicates it using light.
FILM OUT
If you want to incorporate film into a movie’s workflow - you can either do so at the beginning by shooting on it, at the end by making a film print for screening, or both.
Dune did something interesting. They decided to shoot on digital, present on digital in most cinemas, but inserted an extra step in the middle of this process called ‘film out’. This took the graded digital footage, laser printed it onto Kodak Vision3 5254 film then rescanned it back to digital.
“We went out to film negative and then scanned it back in. It was night and day what it gave the highlights, what it gave the patina, what it gave the texture.” - Greig Fraser, Dolby
The reason for this back and forth was that they felt that shooting on film looked too sentimental, while acquiring on digital felt too sharp, clean and lacked the texture they wanted the world to have. By doing a film out they got a bit of the best of both worlds.
The Cinematic Lighting Trick Hollywood Uses
What exactly is ambience and why does it help to make images feel a bit more cinematic?
OVERHEAD AMBIENCE
What exactly is ambience and why does it help to make images feel a bit more cinematic?
If we start with an ordinary room during the day without any film lights we can see that the main source of light is coming from the sun and passing through the door and windows.
However, light doesn’t only travel in perfect straight lines. Whenever a light source comes into contact with a surface it bounces off it. If it’s a light surface like a ceiling or a white polyboard quite a lot of light will get reflected back and if it’s a dark surface like a grey wall or the black side of a polyboard a much smaller amount of light will be bounced.
So here we have a natural ambience, where sunlight enters the room and bounces around off the white walls and ceiling.
To keep a nice level of information outside and avoid blowing out our background to white I’ve lowered exposure on the camera using an ND filter. However, this now leaves our interior looking very dark and a bit too underexposed - which means the natural ambience from the sunlight bouncing off the ceiling isn’t strong enough.
We can fix this by filling in the room with overhead ambience from an artificial film light.
Ambience helps make footage look a bit more filmic and less video-y because it allows us to preserve information in the bright parts of the highlights as well as lifting the detail in the dark areas.
BOUNCING THE LIGHT
There are two main ways to add an overhead ambient light source - the easy way and the more tricky way.
The easiest way to do this is by taking your light source, putting it on a stand and angling it towards the ceiling. Here I’ve set up a 500B as well as Nanlite’s new PavoSlim 240C. The stronger the source outside is, the more ambient light you need to add inside to raise and balance the exposure level of the interior. So here I need quite a lot of artificial ambience.
The colour of the ambient source should be the same as the natural ambience. In this case we have sunlight which is about 5,600K, so I’ve set our light sources to the same value.
Bouncing light does three things: it decreases its intensity, it softens the light and it lifts the overall exposure in the room.
Without overhead ambience the shadows will be deeper and more crushed. With the artificial ambience there will be more detail in the shadows and the subject will have more of a fill light.
If you want to change the direction that the light bounces in you can do so by rigging a reflective surface - this could be a reflector plate, a polyboard, or even a piece of white card.
For example you could use a top bounced source to create more of a shadowy low key look with an edge light. To do this I rigged a silver polyboard on a c-stand with a polyholder, angled downwards, and placed it behind and above her so that it’s more of a backlight. I’ve then bounced a 500B into it. This both lifts the levels of ambient light indoors and gives a harder edge light to the hair, shoulders and side of the face.
RIGGING THE LIGHT
The one disadvantage of this bouncing method is that we don’t have much control over the spill. In other words the light won’t be concentrated into a single beam of light and will spread around the room.
This brings us to method number two - rigging an overhead source to the ceiling.
This technique of rigging top lights to ceilings was famously used by Gordon Willis in movies like The Godfather. Rigging a diffused toplight with skirting makes the light fall directly downward onto a specific area, limiting it from spilling across the entire room.
This keeps the walls and background dark, while the character under the source is brighter in a pool of light.
This setup is a bit more tricky than just bouncing the light as it requires either rigging the fixture to a polecat, or using a stand with a boom pole.
Here I’ve rigged a polecat by extending it across the room then locking it. To this I’ve added a super clamp to secure a Nanlite Pavoslim 240C and its ballast onto the pole. I’ve added the softbox to the front of it with a diffusion layer to soften it.
Then to control the spill and prevent the light from hitting the back wall too much I’ve created a bit of a DIY makeshift skirting using black material - which channels the light into falling directly downward.
SHOOTING 360 DEGREES
One reason that many DPs choose to use this technique is because it allows them to point the camera anywhere around the set, other than the ceiling, without seeing the source of the light.
This way, cinematographers can fully light the entire space in a set, rather than composing a specific shot, getting an actor on a certain mark and lighting only that spot.
Actors therefore have more scope to change their blocking during a scene, and directors have leeway to easily change or add shots once they have started shooting.
If you look at behind the scenes photos of Hollywood studio sets inside soundstages you’ll almost always see these diffused toplights rigged over the set to create artificial ambience.
Rigging lights high up in a sound stage also creates more distance between the light sources and the set - which lowers their intensity and, because the relative distance from the lights to different parts of the set varies less, spreads the light more evenly across the space.
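That evenness comes straight from the inverse square law: the further the rig is from the set, the smaller the relative difference in distance - and therefore brightness - between an actor close to the light and one further away. A minimal sketch with made-up distances purely for illustration:

```python
def relative_illuminance(distance_m):
    """Inverse square law: illuminance falls with the square of the distance from the source."""
    return 1.0 / distance_m ** 2

# Compare an actor directly under the rig with one 2 m further from the source.
for rig_height_m in (2.0, 6.0):
    near = relative_illuminance(rig_height_m)
    far = relative_illuminance(rig_height_m + 2.0)
    print(f"rig at {rig_height_m} m: the further position gets {far / near:.0%} of the nearer one's light")
```

From two metres away the falloff between the two positions is a full two stops; from six metres it drops to less than a stop.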
Not only does ambient top lighting feel a bit more real, but, for large long form productions, it also means these fixtures can be pre-rigged and then left there, out of sight, for the remainder of the shoot.
So, every time a scene needs to be shot on that set, the lights can be switched on, remotely dimmed and set to a preset colour temperature with a DMX board - without needing to spend time setting up and positioning the lights each shoot day.
This is a big time saver and provides consistent illumination which is easily repeatable and controllable.
NIGHT LIGHTING SETUP
Now that we know what overhead ambience is, let’s break down two examples of how we can use it to make our footage a bit more cinematic. For the first scene I’m going to do a night interior set up to show how toplights can be rigged out of sight to add a gentle ambient fill to a couple of rooms, creating pockets of light and shadow in different areas.
First I’ll rig two top lights. One in the middle of each room. I’ve used the 240C for the final larger room and a 120C for the interleading passage. I’ve rigged these to a polecat and run the cabling along the pole and out of sight - but you can also power them with batteries if you want to shoot completely 360 degrees cable free.
In this first room I’ve used another 60C Pavoslim which I’ve bounced into the ceiling. I’ve set these lights to 5,600K - while the camera is set to 4,200K. This means our ambience will appear a little bit cool.
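If you want to quantify how cool that will read, colourists often think in mireds - one million divided by the Kelvin value - because equal mired shifts look roughly equally strong. A quick check with the values above:

```python
def mired(kelvin):
    """Micro reciprocal degrees: a more perceptually even way to compare colour temperatures."""
    return 1_000_000 / kelvin

shift = mired(4200) - mired(5600)   # camera white point vs the lights' colour temperature
print(f"{shift:.0f} mired shift")   # ~60 mireds: a clearly visible but gentle blue cast
```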
This night time ambience lifts the level of detail in the shadows so it doesn’t get too crushed to complete black and allows us enough light to expose the scene at 800 ISO at T/2.4 on the lens.
To create a bit of warm, cool contrast and to give some backlight in our final landing position, I’ve added a practical bulb to the last room. This is a Nanlite 10C LED which I can run with a powerbank. I’ve dimmed it and set it to a warm colour temperature.
Now it’s looking a bit better but the windows feel flat and lack depth, so I’ve set up a 500B fixture outside the glass door and windows and warmed it up with a sodium vapour gel, creating some warm textured light.
So if I do a handheld walkthrough of the shot it’ll look something like this. Note that we can move the camera through the entire space without seeing any film lights.
If we start with just the practical it will look like this. Then this is with the warm window light added. Notice how the shadows are mostly crushed to black. Then we add the final ingredient, our overhead ambience, and there we go. We can now see some light detail in the shadows, while still keeping a dark look and without blowing out any highlights.
DAY LIGHTING SETUP
So, that’s an example of using ambience for a night scene, now how about a day scene.
The first thing I’m going to do here is only look at the brightest part of the frame and set my exposure so that it retains some colour and information without blowing out.
If you want a super high contrast look something like this might already start to look quite good. We’ve got a nice natural key coming through the window on the far side. However, since I want this shot to have a brighter, high key look I’m going to add some ambience to fill in our character so it’s less of a silhouette.
To do this I’ve taken a 500B and bounced it into the ceiling, positioning and angling the source so that it fills in the side of the face with shadow which is closer to the camera. As we’ve mentioned it also lifts the exposure of the darkest parts of the frame, unveiling a bit more detail.
Then I’m going to rig the 240C as a backlight behind the character from above by rigging it to the polecat. This will create a subtle rim or edge light and provides just a little more separation to the shot. Here you can see how it creates a highlight on the edge of the hair, side of the face and shoulder.
So there we go. This is where we started without any lights. Then we can turn on our ambient bounce which lifts the space with a soft light that gets rid of the dark shadows on the face. And then we add the backlight to create a slight edge on the hair.
6 Trademark Tarantino Shots
Let's look at six different shots which Quentin Tarantino has used in many of his movies, show how they are done, what effect they have and how they can be used to elevate scenes.
INTRODUCTION
Quentin Tarantino’s filmmaking shows a deep reverence for the history of cinema. He often pulls from the work of other directors and genre tropes - paying homage through his distinct and stylised shot selection. Whether from Italian Spaghetti Westerns or classic Kung Fu movies.
So, let’s break down six different shots which he has often repeated through his filmography, show how they are done, what effect they have and how they can be used to elevate scenes.
1 - CIRCULAR DOLLY
When it comes to moving the camera during a shot, one of the most traditional, well established guidelines is to only move with characters when it is motivated by their own motion. So if they walk in a straight line, the camera may track with them, but once they come to a stop, generally, the camera will stop.
Tarantino breaks this rule by moving the camera in an unmotivated way even when the characters themselves aren’t moving - especially during dialogue scenes.
He’s done this in multiple movies with a circular dolly shot, or arc shot. This is a specialised move where the camera gradually and steadily moves in a circle around the characters - who are usually seated.
Dollies are very heavy platforms with four wheels and a hydraulic arm which the tripod head and camera can be mounted on. Filmmakers can either use it as a way to easily push around and position the heavy production camera for a stationary shot, or they can place the dolly on top of a track where it can be smoothly slid along to get stable camera moves.
Usually these tracks are straight and designed to do moves like push ins, pull outs or side on tracking where the camera moves in one directional line.
However, more niche curved dolly track also exists. Two curved pieces form a semi-circle, and four can be connected to form a full circle inside which the actors can be placed. When the dolly is pushed around this track it gives a smooth, orbital movement.
Note that Tarantino uses this move quite sparingly and only for a specific purpose. Most shots in dialogue scenes he shoots with a stationary camera.
A circular move makes a shot more dynamic and interesting - especially for longer scenes without active blocking where actors don’t move around. It focuses the audience's attention on a central point - while also giving context to the space the characters are in.
When done at a slow speed it can aid in ratcheting up the tension in a scene. In certain contexts it can feel a bit voyeuristic like we’re a predator circling prey.
Or in other contexts this movement can add a light, natural, organic interplay in a group by letting us move from person to person, better establishing each character’s emotions, reactions and body language without having to intercut more rigidly between each actor’s line of dialogue.
2 - TRUNK SHOT
From his first film, Reservoir Dogs, and throughout almost every movie since, Tarantino always seems to fit in a trunk shot - where the camera, fitted with a wide angle lens, is positioned inside the trunk of a car, shooting up from a low angle.
Most filmmakers shoot the majority of their shots at a neutral angle - where the camera is positioned at roughly the head height of each character and kept horizontally level, without any upwards or downwards tilt.
Tarantino often makes frames more stylistic by either shooting them from higher up, pointing down, called a high angle, or lower to the ground pointing up at characters, like a trunk shot, which is called a low angle.
This shot serves a few purposes. For one, it focuses the audience’s gaze only onto the characters and their dialogue exchange. For another, by shooting a low angle from a trunk’s point of view, it hides the contents inside the car, creating suspense, or delaying showing information until a later reverse shot.
There’s also a commonly held belief that shooting characters from a low angle makes them feel more powerful and dominant. He’ll often play with this high angle, low angle dynamic when writing and blocking scenes - placing characters in different positions of power at different heights. Often shooting characters in the position of strength - whether that’s physically, ethically or psychologically - from a low angle.
3 - CRASH ZOOM
One genre that Tarantino often pays homage to is the Kung Fu movie. A trademark stylistic trope from many of these films is the use, and sometimes the overuse, of the crash zoom.
A crash, whip or snap zoom is a shot where the operator rapidly zooms either in or out on a lens - changing the shot size from wide to tight, or from tight to wide.
Zoom lenses have internal glass elements which can be shifted inside the barrel by rotating the zoom gear outside the lens. This changes the focal length of the lens which determines the field of view. For example, going from a wide focal length like 24mm to a telephoto focal length like 150mm.
For a crash zoom to have a greater effect, there needs to be a bigger difference between the widest and longest focal lengths of the zoom. For a smaller punch in, the zoom lens only needs to cover a more limited focal length range.
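To get a feel for how dramatic that range is, here’s a small sketch of the standard horizontal field of view formula. The 24mm and 150mm values are the examples mentioned above; the 24.9mm sensor width is an assumed Super 35-style figure used purely for illustration, not something specified here.

```python
import math

# Sketch: horizontal field of view for a given focal length and sensor width.
# FOV = 2 * arctan(sensor_width / (2 * focal_length))

def horizontal_fov(focal_length_mm: float, sensor_width_mm: float = 24.9) -> float:
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_length_mm)))

for focal_length in (24, 150):  # the wide and telephoto examples above
    print(f"{focal_length}mm -> ~{horizontal_fov(focal_length):.0f} degrees")
# Roughly 55 degrees at 24mm versus roughly 9 degrees at 150mm -
# which is why crash zooming across that range feels so violent.
```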
Because this zoom needs to be done rapidly, cinematographers usually rotate the barrel manually with their hand - sometimes using a zoom bar - a small spigot that screws into the barrel to give operators leverage to rotate it more precisely and easily.
Another option is to align a zoom motor with the gears on the barrel, which is wired to a microforce zoom control handle. This can be used to set in and out points on a zoom, control the speed of the zoom, and allows operators to manually control when to activate the motor by holding down a button. It’s an especially useful tool for doing slightly slower zooms at a consistent speed, without the jerkiness of zooming by hand.
A crash zoom is kind of like an exclamation mark on a sentence. It’s used to rapidly heighten and intensify the emotions in a shot. It can be used to emphasise an unexpected or important moment like a character reveal or a moment of revelation or shock.
Tarantino also uses these zooms in rapid succession with quick cuts during moments of action or fight scenes, injecting them with energy.
Because this shot is extremely stylised and a bit melodramatic, in some cases it can also add a bit of humour.
4 - SPAGHETTI WESTERN CLOSE UP
When choosing a shot, the first thing to consider is how wide, or how tight, it needs to be. A close up is a shot that almost every director uses.
Although this is a common shot, what makes Tarantino’s close ups a bit different is how he frames them.
He likes using very tight close ups which sit somewhere between a traditional close up, which includes the top of the head and the shoulders of an actor, and an extreme close up, which focuses on a tight facial detail such as the eyes.
These frames chop off the top of the actor’s head, which is called giving them a haircut, usually place the actor quite central in the shot, and tend to use longer focal length lenses which compress the background.
This framing is a definite homage to one of his favourite directors, Sergio Leone, who often used this style of tight close up framing in many of his Spaghetti Western movies.
These close ups enormously expand the actor’s presence and allow audiences to observe minute details in performances. They are especially impactful when combined with another editing technique that he also pulls from Leone’s films - cutting from very wide shots, which give spatial context of the environment, into these huge, punchy close ups that let us keenly read the emotions of the characters.
This creates a large contrast between the shot sizes in the edit, which ratchets up the tension the closer the camera gets to the actor’s face.
5 - ECU INSERT
Moving from one type of close up to another, we have the extreme close up insert shot. Sometimes called a cutaway, this is where the camera cuts away from shots showing the performance of the actors, to a shot of a specific detail or object, before returning to showing the characters.
Tarantino often uses this shot when he wants to communicate information to the audience - telling them that this object on screen has great importance in the story.
They can also be used as a type of POV to show the audience a specific detail of what the character is looking at.
By shooting these shots extremely tightly and juxtaposing them with wider frames, like how he treats his Spaghetti Western close ups, they can also be used to create more tension within a scene.
Tarantino also likes to make audiences deliberately uncomfortable by showing experiential details - whether that’s graphic violence, or cutting into ECU insert shots like injections - to make the audience uneasy, rather than cutting wide to hide these moments.
6 - COWBOY SHOT
The final shot which he often uses once again pays homage to the history of cinema, specifically the Western genre. It’s called a cowboy shot - a name that comes from its use in old Westerns to frame the actor as well as the gun sitting in a holster on their hip.
Its width sits somewhere between a medium shot, where the waist or stomach forms the bottom of the frame, and a wide shot, which includes the feet of the character.
A cowboy shot can be a stationary frame, combined with camera movement, or even a crash zoom. Tarantino likes to use this same angle even when guns and shoot outs aren’t necessarily a part of the story.
A large part of how Tarantino uses this shot comes from the dynamic between the foreground and background. He’ll often incorporate two characters into this shot by placing the camera behind one of them at around hip level, or sometimes over the shoulder, framing them either frame left or frame right in the foreground. The other side of the shot will then be open to capture another person in the background from the waist up in a cowboy frame.
This shot works as a way of pitting characters against each other, whether in a dialogue exchange or an exchange of bullets.
Single cowboy shots can also be used to see what a character is holding, or what is on their hip, while still remaining close enough to be able to read their performance, without pulling back all the way to a wide shot.
Cinematography Style: Rachel Morrison
Let’s dive a bit deeper into Rachel Morrison's cinematography by looking at how she got to where she is in her career, her philosophy on filmmaking and some of the technical film gear that she uses.
INTRODUCTION
What makes Rachel Morrison’s cinematography special, both in her work on indie projects as well as high budget studio movies, is how she is able to bring audiences closer to the truth and emotions of the characters with experiential, subjective photography. This comes from her frequent use of a handheld camera, her taste for naturalistic lighting, and her desire to always serve the story with every cinematic choice.
In this video let’s dive a bit deeper into her work by looking at how she got to where she is in her career, her philosophy on filmmaking and some of the technical gear that she uses.
PHILOSOPHY
“I got out of school and got work right away in documentary filmmaking. But then 9/11 happened and I started to get pulled into reality TV that was the one thing that was paying the bills but I could still be behind the camera. And I got to the point when I said I never want to do this again. All I want to do is make narrative films. And, so, I stopped shooting reality and never looked back.” - Rachel Morrison
Part of what inspired her to want to shoot documentary early on came from her love of period photojournalism by photographers such as Dorothea Lange and Gordon Parks - who were able to tell really strong stories, which effected change, with only a single frame.
Documentary is a great training ground. It teaches cinematographers to make quick decisions, to change camera settings and do technical setups on the fly, quickly find good frames, and how to use natural lighting conditions.
Another skill that she picked up from shooting reality was the way that she operated a handheld camera.
“Operating is such an instinctive quality - and to come from documentary and basically learn to dance with the camera in response to somebody else’s emotion - it’s a strangely specific skill set. And that was really crucial I think to my journey as a DP.” - Rachel Morrison
These skills she picked up helped her to win a DP job to shoot a young director’s first indie movie - Fruitvale Station. Ryan Coogler was looking for a camera person who had a background in single camera documentary filmmaking who could work on the fly and tell a story in a realistic, subjective, experiential way.
The visual language of this film relied almost exclusively on handheld camera movement and naturalistic lighting, creating a fly-on-the-wall feeling that is reminiscent of documentary filmmaking.
This is a feature that she has carried through her work. She often operates the camera on the shoulder, moving with characters, or holding frames with a natural handheld looseness and shake, to impart a mix of reality and emotional elevation to scenes.
Operating the camera with this kind of looseness - rather than with rigid, perfectly composed static frames - gives the images a more organic feeling which brings the emotions of the characters to the forefront, as if we are subjectively observing their reactions in reality, rather than watching technically perfect, rehearsed action on screen.
After getting a Best Cinematography nomination for Mudbound and moving up to make a studio feature - Black Panther - she found herself at a crossroads: the scripts she was being offered were either bigger studio movies which weren’t as narratively interesting or nuanced, or smaller indie movies that weren’t as impactful or makeable as some of her previous work.
This led her to forge a new path as a director. Like in her work as a cinematographer, as a director she also sought stories with a strong point of view, which informed, had a larger social impact and asked questions of the audience.
“For me, as a filmmaker, it’s important not only to entertain but also to inform and inspire and emote and evoke and, hopefully, to leave the audience with a message of some kind.” - Rachel Morrison
GEAR
Much of her work favours quite a filmic look - whether that’s the heavily textured Super 16 of Fruitvale Station, or the tighter-grain 35mm anamorphic look of Seberg.
However, since a fair amount of her work was done at the indie budget level, sometimes the practical and financial benefits from shooting on a digital camera outweighed the look of shooting on film.
Mudbound was one of these projects. Both Morrison and the director Dee Rees wanted to shoot this period movie photochemically, however after running the numbers on the budget they were presented with two choices - either shoot digitally, or shoot on the more expensive film format and lose two shooting days.
Due to the already tight schedule, they opted to shoot digitally on the Alexa Mini with C-series anamorphic lenses from Panavision, and ran grading tests to get the footage to a filmic look they were happy with - one reminiscent of her reference, Gordon Parks’ photo exhibit ‘Segregation Story’, which had a muted colour palette yet rich blacks that weren’t washed out.
“I would have loved to have shot this film on film and actually Dee would have too. But we did tests and in the tests I’d already started by rating the cameras at a 1,280 and at a 1,600ASA just to introduce a little bit of digital noise and then working with our dailies colourist to add a grain emulation and really try to get him to match the film dailies as best we could.” - Rachel Morrison
This gear selection choice of shooting with an Alexa sensor and Panavision anamorphic glass is one that she has used on multiple projects: like Dope, Mudbound and Cake, to get a widescreen look with anamorphic falloff while having the convenience of digital.
Despite often using this combination, her selection of gear is always catered to the story and also the budget - so sometimes changes from project to project. She shot Super 16 film on the Arriflex 416 with Zeiss Ultra 16 lenses on Fruitvale Station - using Kodak 500T for its higher grain look and better sensitivity for shooting in lower lighting conditions at night.
Or, she even shot the low budget indie The Sound Of My Voice on prosumer Canon 7D and 5D DSLR still cameras. When it came to lighting this movie she pulled from her early work on reality TV, where the light sources needed to be hidden by rigging them out of sight or incorporating the lighting through practical fixtures in the shot so that they could point the camera almost anywhere on the set without seeing lighting gear.
When moving from these lower budget productions up to an enormous MCU production, she found that the progression wasn’t quite as big a step up or change as many make it out to be; however, there were some things that made shooting a studio film different.
To show this, let’s take a look at Morrison’s collaboration with Ryan Coogler on two movies, Fruitvale Station and Black Panther.
For one, in order to get a large amount of coverage of scenes she had to shoot with a dual camera setup controlled by multiple operators, instead of the single camera self-operated way of working that she did on most indie sets.
While the scale of the gear changed, many of the techniques she used were similar. For example, on Fruitvale Station she wanted to change the palette of the light of the BART station scenes - which in real life were lit by overhead warm fluorescent bulbs. To make the space feel a little less cosy and inviting she got her team to swap the light bulbs with a neutral white fluorescent.
Whereas on Black Panther, when they wanted to light a casino location with a soft, warm overhead source, they fully built the location as a huge set on a soundstage and rigged an enormous amount of LED lights overhead which they softened with diffusion.
Same idea but vastly different scale. Building everything from scratch rather than working with what is already present in a real location.
She mentions having somewhere around 300 Arri Skypanels on Black Panther, as opposed to working largely with practical sources and a small lights and grips package on Fruitvale Station.
Different films also demand different looks. Black Panther used a more vividly saturated comic book palette, while Fruitvale went with a more desaturated, toned down, textural realism.
They occasionally used more exaggerated dutch angles or rolled the camera on a Movi gimbal on Black Panther for more dynamic, comic book framing, while on Fruitvale they kept the camera handheld for more documentary subjectivity.
CONCLUSION
Whether she is shooting or directing, Rachel Morrison likes to keep the story at the forefront. Carefully selecting screenplays that both entertain and have a message that aligns with her sensibilities.
She’ll then bring her references, suggestions and visual preferences to the director - who she’ll work with to craft an experiential and often subjective look that is custom tailored to the story.
A Movie With NO Crew: The Zone Of Interest
Let's break down how Jonathan Glazer used an anti-filmmaking style that disregarded artificial lighting, fancy camera moves, close ups and even having crew members present on set during shooting - yet still managed to win an Oscar.
OBJECTIVE FILMMAKING
Most movies embrace all the filmmaking tools they have to influence the audience’s emotions. Whether that is using music to indicate a triumph. Shooting a close up to put the audience in the head of the character. Or removing bright sources of light from the frame to impart a darker mood.
The Zone of Interest is, well, interesting, because it went another route altogether. Director Jonathan Glazer wanted to remove the cues that signalled the audience was watching a work of fiction and present the story in the most detached, objective way he possibly could.
This anti-filmmaking style, which disregarded artificial lighting, fancy camera moves, close ups and even having crew members on set during shooting, was specifically catered to his vision for telling the story.
“The Zone of Interest is not a story in the sense of a normal film. It’s about a man who has a lovely life with his family. He’s very good at his job. They live in the country with a lovely house with a lovely garden. He also happens to be the commandant of Auschwitz.” - Jonathan Glazer
REFLECTION OF OURSELVES
Part of the creative rationale for presenting the story this way was to neither demonise nor glamorise characters and their actions through the filmmaking language but rather present them with a detached objectivity.
This placed the ordinary way that we, the audience, live our lives adjacent to the lives of characters in the film that perpetrated heinous crimes.
It’s easy to demonise, dehumanise and distance ourselves from the bad guys in a movie. It’s far more difficult to grapple with the idea that atrocities are committed and witnessed by ordinary people who are, on some level, a reflection of ourselves - which is the thesis that The Zone of Interest puts forward.
REALITY TV SETUP
So, how exactly did Jonathan Glazer and his cinematographer Łukasz Żal go about creating this look on a technical level?
“He said we don’t want to make this like a movie set. There was no lighting. There was no film gear on the set. Only, like, cameras.” - Łukasz Żal
Glazer described this style as being like shooting a reality TV show, such as Big Brother, by setting up multiple hidden cameras in a house, then letting the actors roam around and play out scenes as they would - without being limited by blocking, hitting marks or landing in the perfect light.
Again, the idea being to strip away a traditional cinematic style and instead pursue as pure a feeling of reality as possible. This technique is one that he had already experimented with on his previous movie, Under The Skin, where he placed characters in unscripted situations in a van with hidden cameras to draw out as great a feeling of realism as he could.
CHOOSING A CAMERA
To be able to set up cameras in this way, but still maintain a high enough level of fidelity and quality to keep the audience immersed, his cinematographer had to take a few factors into consideration when choosing what camera to shoot on.
Firstly, it had to be a production cinema camera with enough dynamic range and colour science to be able to handle shooting in purely natural ambience without balancing the levels with film lighting.
Secondly, he needed to be able to solidly set up a way of getting a live feed from the cameras to a wall of monitors outside the house so that they could watch all the angles of the scenes take place without any interruptions.
And, perhaps most importantly, he needed a camera that had a very small profile, that could be hidden enough to set up multiple angles while being as discreet and invisible to actors as possible.
He landed on the Sony Venice - which has 15 stops of dynamic range and a Rialto extension system that separates the lens mount and sensor block from the camera body via a tethered cable, with the body recording the data and supplying power.
If we look at some of the behind the scenes photos we can see how little footprint this camera rig has. The Rialto section of the camera with only a lens and a small wireless focus motor could be propped up on apple boxes to be hidden behind a desk, or put on a mini lowboy combo stand where it was held and could be positioned and framed by a small Manfrotto ball head.
They could also attach the Rialto rig with the ball head to an L-bracket - which could be rigged from a pipe, or even screwed directly into the wall or the ceiling.
This allowed them to set it up in small spaces or even almost flat up against walls, while remaining unseen by the other cameras.
The rest of the components of the rig, such as the body, battery and MDR focus box were wired to the Rialto section and could be hidden away from the camera.
HIDDEN CAMERAS
One of the challenges of shooting scenes without giving actors definite marks or blocking, was knowing where to place the cameras in the best positions that would capture the performances that they wanted.
This meant positioning them so that they would witness everything from an objective viewpoint, which was usually wide and parallel to the action through frontal, side on or top down angles.
These frames were always locked off, except for a few specifically motivated moments of tracking movement, following the motion of characters, which were filmed with a dolly and track.
As they were shooting scenes with multiple cameras, they had to try their best to hide them so that they wouldn’t be picked up by the other cameras. This was especially challenging as almost all of the angles were long shots photographed with wide angle lenses that had a very wide field of view of the set.
Cameras were sometimes hidden behind furniture, inside parts of the production design, or by mounting them to the ceiling - or, in some cases where it was impossible to shoot multiple angles without seeing another camera, by doing VFX cleanups in post to digitally remove cameras or gear from shots.
They could do this by playing out the scene with the camera in the frame, then physically removing the camera from the space and shooting a clean plate - which could then be used to paint the camera out digitally.
There were also plenty of other moments of really well done VFX, such as adding the Auschwitz camp buildings in the background of shots, which, again, aimed to be as realistic as possible and to draw as little attention to the filmmaking as it could.
This meant assembling a large blue screen wall around the outer perimeter of the property - which could later be keyed out and replaced with CGI buildings.
NO ON SET CREW
Since they were shooting with a multi cam setup, 360 degrees inside interiors, one way to avoid seeing crew was for there to be no crew on set. But, you may be asking, surely you need a crew next to the actors on set? How do you operate the cameras? How do you pull the focus? How do you transmit a video feed to monitors for viewing?
Part of Łukasz Żal’s job was creating an on-set workflow that allowed shooting to happen almost entirely remotely - without any crew needing to physically be present inside the interior house set while cameras were rolling.
Normally, film sets use wireless transmitters, like a Teradek, to send a video feed from the camera to a monitor where people can view and assess a live image. However, because they would be working with 10 cameras, and because they wanted to place the video village - with a wall of 10 monitors - in a container which sat outside the property, behind the blue screen, with an uninterrupted feed at all times, the decision was made to hardwire all the cameras.
This involved tethering the cameras to the monitors with cables. Kind of like using an ethernet cable to connect to the internet rather than wifi, this method provides more consistency than wireless transmission.
SET UP
Rather than the traditional method of having the actors block the scene on set with crew watching, have the crew tweak the cameras and lights, then play out the scene with full on set crew and cast, they employed a different method.
Basically the film crew would be separated from the actors at almost all times. The DP would prep for each day by assembling a floor plan for each scene with estimated positions for up to 10 cameras. His team of grips and ACs would then place the various cameras in their assigned positions, with their chosen lenses.
Żal and Glazer would assess all the frames on the video village monitors and make the necessary corrections. Each 1st AC, set up in the basement of the house, would be given two cameras to pull focus on - usually one wider frame that was easier and a more difficult, telephoto frame.
Fresh cards and batteries would be loaded into the cameras then all the crew would leave the set. Once the set was cleared, the actors entered the location and could begin performing the extended scenes throughout the different rooms in the house.
Shooting on the Venice’s X-OCN ST compression at 6K 3:2 meant that a 1TB card would give them 60 minutes of recording time. If they reached the end of the cards, the cameras would all cut, the actors would leave the house, then the crew would come in from the basement and reload the cameras for another take.
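Working backwards from those two figures - roughly 60 minutes per 1TB card - the implied average data rate comes out to something like the sketch below. This is just arithmetic on the numbers quoted here (assuming a decimal terabyte), not an official codec specification.

```python
# Rough implied data rate from the figures above: ~1TB per ~60 minutes of recording.
card_size_bytes = 1_000_000_000_000   # 1TB, taken as 10^12 bytes (decimal) for this estimate
record_time_seconds = 60 * 60         # 60 minutes

bytes_per_second = card_size_bytes / record_time_seconds
print(f"~{bytes_per_second / 1e6:.0f} MB/s")        # ~278 MB/s
print(f"~{bytes_per_second * 8 / 1e9:.1f} Gbit/s")  # ~2.2 Gbit/s sustained to the card
```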
Each day they were usually able to film two of these different setups.
ANTI-FILMMAKING
There are a number of reasons why I’d describe the look of the film as having a kind of anti-cinematic aesthetic. For one, they shot at a deep stop, rather than with a more traditionally cinematic shallow depth of field. This supported the idea of objectivity by not selecting what the audience must look at by isolating it with focus, but rather inviting them to peruse the frame with their gaze.
They used little Leitz M 0.8 lenses which were small enough to stay out of the way, yet sharp. These were shot without any diffusion to preserve a modern, objective look - which was also emphasised by a hard, contemporary, digital feel in the colour grade - rather than trying to create something filmic or sentimental in any way.
They exclusively used natural lighting, relying on sunlight for day scenes and a few period accurate practicals at night - without any fill or bounce. Again, rather than shooting exteriors early in the morning or later in the afternoon when the light would feel a bit more cinematic, backlit and beautiful - they purposefully shot exterior scenes in hard, real, unflattering midday sunlight.
To be able to film night exteriors without any artificial light they used a military style infrared camera - which was later upscaled with AI in post.
WHAT WE DON’T SEE
One of the reasons that makes this film so haunting comes from an idea I’ve mentioned before on this channel: what we don’t see is sometimes more impactful and terrifying than what we do.
Rather than showing the genocide happening in the camp next door, the film never lets us see it; instead we hear it, through the sound design, off in the distance - which brings home the idea of this detached filmmaking style.
Showing that pursuing reality and objectivity can, in this case, be more impactful and powerful than using the tools of movie magic to manipulate audiences.