How Taika Waititi Shoots A Film At 3 Budget Levels
In this video I’ll take a look at just three feature films that he has directed at three increasing budget levels to analyse the techniques that he uses to make them.
INTRODUCTION
If there’s one word that sums up Taika Waititi’s approach to directing it’s tone. His movies are entertaining, uplifting and lean into an unforced comedic tone with a large focus on the writing, casting and performances of the actors.
His career in film has involved years of work in commercials, music videos and TV series before directing features.
HUNT FOR THE WILDERPEOPLE - $2.5 MILLION
His love for comedy began early when he formed a duo with Jemaine Clement - who he’d later work with on other projects. He also started making short films. One of them, Two Cars, One Night, earned him an Academy Award nomination.
Around this time he read Wild Pork and Watercress and decided he would try to write a screenplay adaptation of the book.
“I wrote the first draft of this in 2005. I hadn’t made any other features before then and I found it really difficult adapting the book because I’d never adapted anything and I thought you needed to be super true to the material. Basically lift everything from the book and put it into a movie. I put that to the side to concentrate on some other stuff and went off and made three other features. Then coming back to the material I realised, ‘Oh you don’t have to do that at all, you can just do whatever you want’. You put it through your filter, you know.”
This idea of putting a screenplay or an idea through his own filter is a consistent feature of his work: whether he’s writing his own original idea, working with a screenwriting collaborator or bringing a massive blockbuster script to the screen. But we’ll get to that later.
He takes a screenplay and applies his filter for comedy and adventure to arrive at an end product which has his recognisable authorship. This filter comes from a combination of the writing process, his approach to directing actors, and how he and his creative team visually tell the story.
“I chose the tone that I wanted as well. I decided I was going to make a comedy that was like an adventure film. I sort of chose stylistically and tonally what I wanted to do and then took the parts of the book that I felt would work in the film I wanted to make and then made up the rest.”
With the final screenplay in place and a budget of approximately $2.5 million, half of which came from the New Zealand Film Commission, he moved to the next step in the process - one of particular importance to a director with a large appreciation for performance - casting the actors.
This involved casting and directing a child actor to play the role of Ricky Baker. Directing children can be a challenge. Acting, of course, takes years of practice in manipulating your emotions in a controlled way.
The level of control and consistency required is difficult for most children. However, if you find the right child who is able to lock into the character, their performance may have a purity to it that surpasses their adult counterparts, as it is more natural and less constructed.
“What the trick is when you are auditioning, you search for the kid that resembles the character the most in personality. So, you never try and get a kid to pretend they are someone else. You choose the Ricky Bakers of the world and find the one that is closest to what you want in the film. And then all they have to do is remember the lines.”
With the cast in place and enough funding to shoot for a brief 25 days, Waititi brought Australian cinematographer Lachlan Milne onto the project to shoot the film.
They decided on a single camera approach for most of the movie and rented an Arri Alexa XT with, based on some behind the scenes pictures, what looks like Cooke S4s and an Angenieux 12:1 zoom.
For the car chase scene, which they shot over a couple of days, they used five different cameras to get enough coverage on the relatively low budget: three Alexa XTs which shot the on-the-ground footage, and two Red Epics, with the Angenieux 24-290mm mounted on a Shotover gimbal on a helicopter.
To prepare, the DP used a DSLR camera to shoot different angles of a model car which could then be cut into a sort of animatic or storyboard so that they had a list of the shots they needed to get on the day.
Since most scenes take place outdoors, lighting continuity was always going to be tricky. Milne always tried to orientate day exteriors so that the actors were backlit by the sun.
He also leaned into a natural sunlight look and didn’t use any diffusion scrims over the actors to soften the light. He didn’t want perfectly soft light that would be too pretty.
Also, placing scrims overhead limits the movement of the actors and how wide the shot can be. The frame needs to be fairly fixed otherwise the legs of the stands will start getting into the shot.
The director wanted to draw on the visual style of films from the mid 80s, such as those by Peter Weir, which didn’t have visual effects and didn’t use fancy gear like Technocranes to move the camera. Therefore they used the 24-290mm zoom to punch into shots rather than relying on slicker, more expensive and impractical camera movement.
The slow zooms also effectively built up tension in scenes and, when combined with other wider shots, helped land some of the comedic gags. Another way he accentuates comedy is with music and sound, and by lingering on wider shots rather than cutting too quickly.
Overall, he used the relatively low $2.5 million budget to produce a bigger-looking movie: mostly contained scenes with one large chase set piece, a large focus on casting and performances, almost no CG work, and an experienced crew which moved quickly with a single camera to pull off the entire movie on a tight five-week schedule.
JOJO RABBIT - $14 MILLION
“There was no real pitching process for this. So I didn’t go to studios and say ‘Hey, this is my idea for a film’. I realised early on it’s a really hard film to pitch. No one really wants to hear a pitch like this, so I’m going to write a script that’s really good and I’m going to let that be the pitch.”
A screenplay looking at World War Two through the eyes of a young boy in the Hitler Youth, where an imaginary friend version of Hitler plays a supporting role, is certainly a bit of an odd pitch.
But, after sending the completed script around, Searchlight took an interest in the project and agreed to make the film on one condition, that Waititi play Hitler. Like Hunt for the Wilderpeople, Jojo Rabbit was also an adaptation from a novel, Caging Skies.
His screenplay and vision for the film took a different approach to how most World War Two films are presented and once again drew from his own tonal sensibilities towards comedic entertainment that is uplifting.
“We can’t get complacent and keep making the same style, the same tonal style of film: it’s drama, it’s depressing…everything is desaturated and browns and greys. Crazy idea, we can also maybe create something that is colourful and bright and has humour in it. I knew the tone really early on.”
With a budget of $14 million from Fox Searchlight and TSG Entertainment they tried to find a base for production that would give them the locations they needed and the most bang for their buck.
Initially, the plan was to shoot in Germany. However, since German law meant that child actors could only work for around three hours per day, and the movie was filled with child actors, this would have almost doubled the number of shooting days they needed.
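To get a feel for why the restriction is so costly, here’s a rough back-of-envelope sketch. The numbers are invented for illustration - the real schedule breakdown isn’t public - but the logic of the bottleneck is the same.

```python
# Rough scheduling sketch with invented numbers: why a ~3-hour daily
# limit on child actors can nearly double a shoot when children are in
# most scenes. These are NOT actual production figures.

def shoot_days(total_hours, child_scene_hours, day_len=10.0, child_limit=3.0):
    # The schedule is bounded by two constraints: total scene work divided
    # by the length of a shooting day, and the child-scene hours that can
    # be fitted into each day at `child_limit` hours apiece.
    return max(total_hours / day_len, child_scene_hours / child_limit)

# Hypothetical 400 hours of scene work, 225 of which involve children:
print(shoot_days(400, 0))    # no child restriction binding: 40 days
print(shoot_days(400, 225))  # the 3-hour limit stretches it to 75 days
```

With children in most scenes, the child-hour cap, not the total workload, dictates the schedule - which is why the Czech Republic’s more permissive labour laws mattered so much.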
Eventually they decided on the Czech Republic, which had buildings that came ready-made to look like they belonged in the World War Two era, a reliable film industry, and labour laws which allowed them to schedule the shoot into 40 days of filming.
Of the budget, $800,000 was given to the art department, which may sound like a lot but is actually very low for purchasing all the army equipment and creating the sets for a period film. So, having town locations which were already almost good to go helped create the period world on the low budget.
Mihai Mălaimare Jr. was brought on board as the cinematographer on the film. Prior to shooting, the director and the DP collaborated to devise the format that was right for the project.
“We were both really attracted to 1.33, but the audience is not as used to that aspect ratio anymore. We were trying to work out how it would work for us framing wise and realising how much more top and bottom it would reveal in that aspect ratio. That was the only thing that made us try the 1.85:1. One thing that Taika really responded to and I wanted to try for so long was anamorphic 1.85.” - Mihai Mălaimare Jr.
To get this squarer aspect ratio with anamorphic lenses he used an unusual technique. Hawk 1.3x anamorphic lenses are designed to be shot with a 16:9 size sensor and get a 2.40:1 aspect ratio. However, if you shoot these lenses with a 4:3 sensor size, de-squeeze them 1.3x and then crop just a tad you can get a 1.85:1 aspect ratio that maintains an anamorphic look. Shooting the 1.3x V-Lites on a 4:3 sensor on an Alexa XT gave him the best of both worlds: the squarer aspect ratio along with anamorphic falloff, without needing to do much cropping.
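As a quick sanity check of that arithmetic - a sketch using nominal sensor ratios, not exact photosite dimensions:

```python
# Approximate de-squeeze arithmetic for a 1.3x anamorphic lens.
# Sensor ratios here are nominal; real deliverables involve small crops.

def desqueezed_ratio(sensor_ratio, squeeze=1.3):
    # Anamorphic lenses compress the image horizontally; de-squeezing
    # in post multiplies the captured aspect ratio by the squeeze factor.
    return sensor_ratio * squeeze

wide = desqueezed_ratio(16 / 9)    # ~2.31, a light crop away from 2.40:1
squarer = desqueezed_ratio(4 / 3)  # ~1.73, "just a tad" of cropping to 1.85:1
print(round(wide, 2), round(squarer, 2))
```

The 4:3 capture lands just under 1.85:1, which is why only a slight crop is needed while keeping the anamorphic character of the lenses.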
To portray a brighter version of reality, through the eyes of a child, they used a bright colour palette with lots of vibrant greens, blues, yellows and of course reds. They also used more whimsical slow motion and central, front-on, symmetrical compositions, which placed characters in the middle of the frame and used natural framing devices on the set such as doors, picture frames, tables or tiling for balance.
Much of the tonal balance was adjusted in the edit. Whereas some directors may despise test screenings - showing a cut of the film to an audience prior to release - Waititi likes to use them in order to gauge the effectiveness of the pacing of different versions of the edit.
“It was more the tonal balance. So I test my films all the time with audiences. So you get feedback. What do you think of this? Were you bored here? Were you overstimulated here? Was it too funny here? Was it too sad here? And then just finding a balance.”
Jojo Rabbit was produced on a higher $14 million budget that accommodated for more shoot days, a war set piece, lots of extras, some star performers, and period correct production design.
THOR: RAGNAROK - $180 MILLION
“I have a theory that there are periods when the economy is suffering and people don’t have a lot of money to spend, they don’t want to go and see films about how tough life is for people. I think the reason that a lot of those dramatic films are not doing well is because people want an escape, which is why a lot of the superhero films are doing really well.”
This movie involved a step up from a fairly regular budget to what I guess you could best call a Marvel budget.
To get the job Waititi pitched his idea of the film, which involved creating a ‘sizzle reel’ - basically a montage that he cut to Immigrant Song by Led Zeppelin using footage from other films. The studio were also enthused by his idea to bring a vitality to the movie and his trademark brand of humour to the characters.
Working on an MCU movie means that the director basically has whatever technical resources they can dream of, as much time as they need (in this case 85 shoot days, or a full two years including pre and post production) and can use the massive budget to hire pretty much whichever actors they want.
However, it also means that most of the look of the film will be constructed after shooting with CGI and, to a large degree, will be controlled by the studio. You know, the desaturated, CG-laden feeling all Marvel movies have.
What falls on the director therefore is not so much creating the aesthetic style, but rather managing the project and creating the overall tone by using performances and storytelling.
To wring more of a comedically authentic tone from the script, he worked with the actors to achieve a more natural delivery of lines.
“The thing about a lot of studio films and Hollywood films especially is that when you hear a joke in these films you get the feeling that the joke was written about a year before they shot it and then a couple of people in the board room were like, ‘and then he’s gonna say….’ and they’re like ‘that’s gonna be amazing when we shoot that in a year.”
Instead they worked with a script that had suggested dialogue and jokes. Once he and the actors were on set they could then work with that material until they found a delivery that felt natural, without being so tightly constrained by the original shooting script.
The film was shot by Javier Aguirresarobe, who has a long career working on a range of both low and high budget movies. About 95% of the shoot was done with bluescreens.
This meant that the DP lit the actors as a way of suggesting to the post production supervisor where the light would be, and with what intensity, quality and colour temperature. CGI is then used to construct the rest of the world and the light in it. Motion capture suits were used to capture the movement of computer generated characters.
The film was largely shot on the large format Alexa 65 with Arri Prime 65 and Zeiss Vintage 765 lenses. The Phantom Flex 4K was also used for shots which needed slow motion.
Thor: Ragnarok therefore used its enormous budget to hire a cast of famous actors, fund a very lengthy 85 shooting days with all the gear they could imagine and loads of action scenes, and effectively pay for two years of production time that included expensive CGI work in almost every shot - all the while Waititi maintained his grasp of a perfect comedic, adventure tone.
5 Steps To Shooting A Documentary
Let's unpack five steps that you can take whenever you get the opportunity to work on a new documentary project.
INTRODUCTION
The world of documentary is one that is fundamentally different from other forms of planned fiction filmmaking, like music videos, commercials and features. What differentiates these disciplines is that fiction is pre-planned and structured ahead of shooting, while documentaries rely on a broader plan with inevitably less precision.
This means that documentary cinematographers need to always be on their toes and be quick to adapt to unexpected situations as they unfold. Having said that, this doesn’t mean that you should just go in with a camera and rely purely on luck and instinct. There are some clearly defined methods that we as filmmakers can use to shoot more consistent, stronger content.
So, I thought I’d use this video to unpack five steps that I take whenever I get an opportunity to work on a new documentary project.
1- IDENTIFY THE CONTENT
Gone are the days when most documentaries meant setting up a few sit-down interviews which would then get cut with archival footage.
Today documentary, or documenting a version of reality, comes in many forms - from commercial, branded content that uses touches of non-fiction, to purely observational filmmaking, re-creations of events, nature documentaries, traditional talking head documentaries and everything in between.
It may seem obvious, but as a cinematographer, the first thing you need to do when starting a new project is clearly define the form of the film and identify the types of scenarios that you will be filming.
The reason this is so important is twofold: it’ll help you to identify the gear you need to bring along and will help you to nail down a visual style. But we’ll go over those two points separately.
When you’re dealing with real life situations, planning and having a clear vision for what you need to get will make it far easier to execute on the day. Half of making documentaries comes from producing and putting yourself in the right situation to capture whatever the action is.
Most of these decisions come from the director. In documentary work, the director may also be the cinematographer. If there is a dedicated cinematographer then knowing the form of the project and the kind of footage needed is still crucial.
For example, you may go into a shoot knowing that you need an interview with the main character that should be prioritised, some B-roll footage of the location and one vérité scene with another character.
If the schedule of the main character changes and they suddenly aren’t available to do an interview in the morning anymore, then you know that the first priority is rescheduling the main interview, the second priority is finding time with a secondary character or keeping an eye out for a potential scene to present itself, and third, the B-roll can be gathered throughout the day in the gaps of the schedule.
Making a list that prioritises footage that is a must have, footage that would be nice to have and footage that would be a bonus to get is useful going into the shoot. It’s always easier to improvise and get unexpected, magical moments when you already have a solid base or plan to work from that tells the core of the story.
Once you’ve put the edges of the puzzle in place, it’s much easier to then fill in the rest.
2 - GEAR
As I mentioned, selecting the gear needed for a project will be determined by the kind of scenarios that need to be captured.
For example, a vérité documentary may be captured by a single handheld camera, with a single lens, which also records sound, operated by one person. Larger budget true crime documentaries with re-creation scenes may have an entire crew, complete with a cinema camera package, a lighting package and a dolly.
Whatever gear is needed on a documentary shoot there is always one certainty: you need to be able to work fast. For that reason, you need to have a high degree of familiarity with the camera you are shooting on. If you need to quickly capture a moment in slow motion can you find the setting within a few seconds? Or if the light suddenly changes and you need to compensate for overexposure can you quickly adjust the ND filter?
This is why, going into a shoot, I’d recommend configuring the camera in such a way that you are able to make changes as quickly as possible. This may be through user buttons, through having a variable ND filter on the front of the lens, or by having a zoom that you can use to quickly punch in or out to a specific shot size. When you’re capturing real life, you don’t ever want to miss a crucial moment if it can be avoided.
Having less gear also speeds things up. It means less to set up, carry around and to pack away. There’s a sweet spot between having the tools that you need and not having too much stuff to lug around.
Although there are loads of different approaches to selecting gear, let me go over what is a fairly typical setup.
Starting with the camera, a popular choice is something like a Sony FS7, a Canon C300 or something newer like the FX6. These cameras have great codecs that produce high quality images with a relatively small file size - which you need on documentary projects where you often need to shoot a lot of footage. They also come with XLR audio inputs to feed sound directly into the camera and have user buttons and internal ND filters for quick operation.
When it comes to lenses, I personally prefer working with primes, but zooms are probably more popular as they allow you to quickly readjust shot sizes. Something like a 24-70mm is a pretty standard choice. Depending on the content it’s usually useful to also carry a long zoom like a 70-200mm.
I like to carry screw-on filters with me, such as a variable ND and maybe a diffusion filter or a diopter filter, depending on the look.
Then you want a lightweight tripod with a fluid head that is smooth to operate, but light enough to carry around all day and to quickly set up. Many people now like to shoot with a gimbal too.
I also like to carry around a little lighting bag and a stand. This can be used for an on-the-fly interview, bringing up the exposure in a dark space or lighting observational scenes so that they are more ‘cinematic’.
I exclusively choose LEDs that are both dimmable and are bi-colour. This means you can easily change their colour temperature and the intensity of the light with the turn of a knob. Again, speed is key.
3 - VISUAL STYLE
Whether you are conscious of it or not, every decision that a cinematographer makes while shooting contributes to some kind of visual style. Even the act of just picking up a camera quickly and pressing record to capture a moment creates a visual style with a loose, handheld, vérité look.
This visual style may affect the audience in a subtly different way than if the same scene was shot locked off on a tripod, or shot with lots of movement on a gliding gimbal.
There are a million different directions to go in. Maybe you decide on a specific type of framing for the interviews, maybe the entire film is handheld, maybe you only use natural light, maybe you use artificial light to enhance reality, maybe you use a drone to give context to the space, maybe you suspend time by using slow motion, or shoot with a diffusion filter to make the images more dreamy. These are all decisions that influence a film’s visual style.
Therefore the next step in documentary cinematography, before arriving on set, is coming up with an idea for an overarching visual style that supports the film. This style could be rationally decided upon based on thought or based on what feels right.
This step also needs to be considered with the first step of identifying the kind of content you are shooting. You need to find a style that is balanced with what you can realistically achieve. For example if you’re shooting a fast paced fly on the wall documentary it might not be possible to shoot everything from a tripod with perfect lighting.
Usually, I find I have a stronger connection to films that have some kind of visual cohesion and an artistic vision that stretches across the entire doccie.
Of course since we are shooting in unpredictable situations, with less control over the environment, it’ll almost never be possible to get exactly what we want visually.
But, going in with a plan or an idea of the look, or finding the look as you begin shooting, will almost always result in stronger images than if you go into shooting with no vision or ideas at all and just get whatever you can get without giving any thought to how the images look and the feeling they will convey.
4 - SOUND
Next, let’s talk about something that is sometimes loathed by cinematographers, but which is as important, if not more important, than the image: sound.
Some documentaries may have the resources and the need to hire a dedicated sound person, but often in the field of documentary the job of recording sound may fall on the cinematographer.
Therefore it’s important to at least know the basics of how to record sound. There are two ways this is done. With lav mics that are clipped onto the subject, which then feed a signal wirelessly to a receiver plugged into the camera or a recorder that captures the sound. Or with a boom mic that can either be mounted on-board the camera or used by a boom operator on a boom pole. For more on this I made another video on boom operators.
The main point to be aware of as a cinematographer, is that getting good sound may involve compromise. For example, you may want to shoot a beautiful wide shot of a scene, or an interview, but if you are shooting in a noisy, uncontrolled environment you may be forced to scrap that plan and shoot everything in a close up so that you can get the boom mic nice and close to the subject.
It may be frustrating to sacrifice the better shot for the sake of sound - trust me, I hate it - but what I always tell myself is that it’s better to get a worse shot that has usable sound than to get a beautiful shot that has terrible sound.
If you get a beautiful shot but the sound isn’t usable it’ll just end up on the cutting room floor anyway, never seen by anyone besides the editor.
Of course this is dependent on how necessary the sound is, but as a general rule if you’re working with an on-board mic and there is crucial dialogue - prioritise getting usable sound over getting a beautiful image.
5 - COVERAGE
The final step to shooting a documentary is, well, the actual act of shooting it. Understanding coverage - the angles, shot sizes and way in which a scene is shot - is an invaluable skill in documentaries.
While in fiction filmmaking you can shot list, storyboard or consider the coverage of a scene between setups as you shoot it, when you are working in unrepeatable situations that will only take place once, you have to make these decisions in real time.
It’s a difficult thing to give broad advice on as different scenes can unfold in different ways, but let’s go over some basic ideas for capturing an average vérité scene.
I find it’s useful to edit scenes in your head as you are shooting them. For an average dialogue scene you know you’ll need a few things. One, you’ll need a wider shot that introduces the audience to the space of the location so that they can orient themselves and understand the context. Two, you’ll need a shot of whoever is talking, specifically the main character or characters that you are focusing on. Three, you’ll need to get reaction shots of whoever isn’t talking, so that the editor can use these to shorten a scene.
For example, there may be one sentence at the beginning which is great, then they waffle for a bit, then they have another three sentences which are great. If you have a reaction shot, then you can start on the person saying the first sentence, cut to the reaction shot while the dialogue continues with the final three sentences, then cut back to the person talking. This naturally smooths things over and ‘hides’ the cut.
If you only have shots of whoever is talking, then the editor will have no option but to either select one section of dialogue, for example the final three sentences, or to jump cut - which can be abrasive.
Also remember that the size of a shot affects how an image is interpreted. So for more personal moments you want to try and get as close as you can. However, you also need to take into consideration that your proximity to a person will affect how they act.
If you meet someone for the first time and immediately get right up in their face with a camera, they will be put off and likely won’t open up to you emotionally.
That’s why I usually like to start shooting scenes wider and then begin to move closer as the subject becomes more comfortable with your presence and the conversation starts to heat up.
Covering a scene in a documentary situation comes with experience. It’s like an improvisational dance that needs to balance getting shots that will cut together, making the subject feel natural and at ease and anticipating the right shot size for the right moment.
Although this just touches the surface, if you want to cut a basic vérité dialogue scene together and make an editor happy, then make sure you get, at a minimum, a shot that establishes the space, a shot of the person talking and a reaction shot of people who are not talking.
Cinematography Style: Janusz Kamiński
In this episode of Cinematography Style I’ll unpack Kaminski’s philosophy on filmmaking that uses visual metaphors to express stories, and give examples of the kinds of gear and technical tricks he’s used as a cinematographer to create images.
INTRODUCTION
With a career in producing images that has spanned decades, it can be tricky to pin down exactly what defines the work of Janusz Kaminski. However, it’s difficult to deny that a large part of his filmography is the product of his extensive collaboration with iconic director Steven Spielberg.
This raises the question, how do you separate the creative input of the director and the cinematographer? Is it even possible to do so?
BACKGROUND
During a period of political turbulence in the early 1980s, the Polish cinematographer moved to the United States where he attended university. He decided to take up cinematography and went to film school at the AFI.
He got his first professional job in the industry as a dolly grip on a commercial. The camera operator quickly told him this wasn’t for him. Next, he worked as a camera assistant, where he was again told he also wasn’t any good. He then started working in lighting which kicked off his career.
During this time he worked on lower budget productions with fellow up-and-coming cinematographers such as Phedon Papamichael and Wally Pfister. He also began working as a cinematographer in his own right.
“I was here for 13 years and I shot 6, 7 movies. So I was experienced I just didn’t have that little push. I shot a little movie directed by Diane Keaton. Steven liked the work, called my agent, we met and he offered me to do a television movie for his company and after that he offered me Schindler’s List.”
This collaboration proved to be a lasting one. Over the years they have shot 19 other films together and counting.
Other than Spielberg he’s also shot feature films in many different genres for other directors such as: Stephen Sommers, Cameron Crowe, Judd Apatow and David Dobkin.
PHILOSOPHY
Coming back to the question of how you separate Kaminski’s input from Spielberg’s: in their vast collection of films together, a lot of the overarching visual decision making does come from the director’s side.
Prior to their work together, Spielberg was known for the creative way in which he positioned and moved the camera in order to tell stories. In that way, I think a great deal of the perspective of what the audience sees in the frame comes from him.
For some movies, such as West Side Story, Spielberg uses extensive storyboards to pre-plan the coverage in a very specific way. Other movies, like Schindler’s List, had surprisingly little planning and were more spontaneous, without any shot lists or storyboards.
In that situation, Kaminski used a portable tape recorder to dictate notes about lighting, problems or gear he might need, bringing order to his thoughts so he could successfully execute the photography as they went.
In terms of the overall look and lighting of Spielberg’s early films, they all followed a similar template grounded in a traditionally beautiful Hollywood aesthetic: haze that accentuated an ever present glowing backlight, giving the actors an angelic, rim-lit halo outline.
The other cinematographers he worked with were intent on servicing this traditional aesthetic.
When Kaminski came on board to shoot Schindler’s List he deconstructed the Hollywood, family-friendly beauty that audiences had come to expect from Spielberg’s work.
“I think the idea of de-glamorising the images, strangely, I’m always interested in that. I didn’t want that classical Hollywood light. I wanted more naturalistic looking. We all want to take chances, because it’s not this comfortable life we’ve chosen where we just make movies and we work with movie stars. We express ourselves artistically through our work and we want to take chances.”
Throughout their collaborations, Kaminski was able to find a middle ground that balanced Spielberg’s desire for a traditionally beautiful look with his own appreciation for de-glamorised images that could be considered beautiful in a different way.
Another ever present idea in his work is his use of visual metaphors - where the camerawork represents a particular idea or leans into a visual perspective that represents the location or time period that is being captured in the story.
“I think each story has its own representation. You have to allow the audience to immediately identify where they are. So if you’re not using some very strong metaphors you will lose the audience. So the first explosion is very yellow, then we go to France and it’s more blue-ish, you go to Italy it’s very warm and fuzzy, France it’s very warm and fuzzy. So using those visual cliches that we as the people identify with specific countries.”
He doesn’t only create these visual metaphors with colour. On Munich he used zooms to capture the photographic vocabulary of the 1970s when those lenses were popular.
Or in Saving Private Ryan he mimicked the kind of manic, handheld, on the ground style that the real combat cameramen of the time would have been forced to use.
Or in Catch Me If You Can, he differentiated the time periods by giving the 60s scenes a warm, romantic glow and the 1970s scenes a slightly bluer, flatter look.
These visual languages and cues subtly change depending on the movie, using the images to support the story in a way that hopefully goes unnoticed by the audience on the surface, but unconsciously feeds into how they interpret the film.
GEAR
“I look at cameras as a sewing machine. When you talk to the wardrobe designer you don’t ask her what kind of sewing machine do you use, because it’s just a sewing machine. It doesn’t really matter. The equipment, all that stuff is not. What you do with it is essential.”
Some cinematographers like to keep their gear selection consistent to carry a visual trademark across the projects they work on. Kaminski isn’t like that.
Throughout his career he has pulled a variety of optical effects from his big bag of tricks. Sometimes this involves filters, sometimes photochemical manipulation, other times unique grip rigs or unconventional camera settings.
So, let’s go through a few examples of some gear he has used, starting with his camera package.
He flips between shooting with Panavision cameras and lenses in the US and using Arri cameras when working in Europe. He’s alternated between shooting Super 35 with spherical lenses and in the anamorphic format.
Spherical lenses are more practical as they are faster, have better close focus and are smaller, which makes them better suited for shooting in compact spaces such as car interiors. Examples of some of these lenses that he has used include Cooke S4s, Panavision Primos and Zeiss Standard and Super Speeds.
He usually shoots close-ups at a more romantic 50mm focal length or longer to flatter the face, but on Schindler’s List chose to shoot them with a wider 29mm field of view that lent itself to realism.
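That difference in feel largely comes down to angle of view. As a rough illustration (not from the source - it assumes a typical Super 35 gate width of about 24.89mm, which varies slightly by camera), the horizontal angle of view for each focal length works out as:

```python
import math

def horizontal_fov(focal_length_mm, sensor_width_mm=24.89):
    # Standard pinhole-model angle of view:
    #   fov = 2 * atan(sensor_width / (2 * focal_length))
    # 24.89mm is an assumed, typical Super 35 gate width.
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_length_mm)))

# The 29mm takes in a much wider slice of the scene than the 50mm,
# which is part of why close-ups on it feel more documentary-like.
print(round(horizontal_fov(29), 1))  # roughly 46.5 degrees
print(round(horizontal_fov(50), 1))  # roughly 28 degrees
```

A wider lens also forces the camera physically closer to the actor for the same framing, which exaggerates perspective - part of what makes those close-ups feel less flattering and more real.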
He’s used anamorphic lenses for their classical Hollywood look, with beautiful flares that are impossible to otherwise recreate. Some examples are the C-Series and more modern T-Series from Panavision.
He has used digital cinema cameras occasionally but almost exclusively shoots features on 35mm film - including his recent work. His choice of film stocks has been extremely varied.
On Schindler’s List he mainly shot on Eastman Double-X 5222 black and white. For specific sequences that required parts of the frame to be colourised, such as the famous shot of the girl in the red dress, he pulled an interesting photochemical trick by recording on Eastman EXR 500T 5296 colour negative film stock and then printing the film onto a special panchromatic high-con stock which is sensitive to all colours and used primarily for titles.
This gave them the look they wanted that best matched the rest of the black and white footage and didn’t contain the blue tint that came with removing the colour from the colour negative in the regular way.
To get a flatter image for the 1970s scenes in Catch Me If You Can he used Kodak 320T stock in combination with low-con and fog filters to purposefully make the images a bit uglier, more neutral and drab. This coincided with the main character's fall from grace as he came to terms with the real-life consequences of his actions.
Or on Saving Private Ryan, he settled on Eastman’s 200T film stock, which he pushed by one stop and used a film development process called ENR which both desaturated the stock and sharpened up the look of textures, giving the details in the image a grittiness.
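Push processing extends development time to compensate for underexposure, which in practice lets the stock be rated a stop faster per stop of push. A minimal sketch of that relationship (values beyond the one-stop push mentioned above are illustrative):

```python
def pushed_ei(base_ei, push_stops):
    # Each stop of push development effectively doubles the
    # exposure index the stock can be rated at.
    return base_ei * 2 ** push_stops

print(pushed_ei(200, 1))  # 400 - a 200T stock pushed one stop
```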
When it comes to lighting with his gaffer, he acknowledges that some gaffers are more technical while others are more conceptual. Due to the large scope of the sets he lights, it’s more practical for him to describe the lighting he wants in general terms - such as ‘no backlight’ or ‘this source needs to feel warm’ - rather than describing and placing loads of specific units around a set.
“The scope is way too large. You can’t demand every light be placed on set according to your desires, so you have a gaffer who is knowledgeable. On the shooting day or the day before you talk about the specifics of each scene or you adjust the lighting. Or you do the lighting with the gaffer on the given day right after the rehearsal. Surround yourself with the best people so you can work less and I want to work as little as possible.”
Spielberg likes to move the camera in a fluid, expansive way, with rigs such as a Technocrane, revealing large portions of the location. This adds to the challenge of lighting, as it’s far easier to light in a single direction for a 15-degree camera angle than it is to cover 270 degrees of the set.
For other films, such as Saving Private Ryan, a lot of moves were done handheld to introduce a feeling of realism that placed the viewer right on the shoulder of the operator, in the middle of the action.
To inject even more intensity into an already shaky image he used Clairmont Camera’s Image Shaker. This is a device which can be mounted onto the front bars of the camera and vibrates at a controlled level with vertical and horizontal vibration settings which could mimic the effect of the explosions happening around the soldiers.
CONCLUSION
Kaminski uses whatever technical trick he can think of to create visual metaphors that push the story forward, whether that’s done photochemically, with a filter or by physically shaking up the image.
In the end, the technical solution or piece of equipment itself is less important than the cinematic effect that it produces.
Spielberg and Kaminski’s filmmaking is an intertwined creative partnership which has combined Spielberg’s traditionally cinematic visual direction with Kaminski’s focus on visual metaphors. Sometimes this means perfect golden backlight, but other times a feeling of realism that is far more ugly and true to life is what is required.
Alexa 35 Reaction: Arri's First New Sensor In 12 Years
My first reaction to details about the Alexa 35 prior to the release of the camera.
We’ve been hearing rumours that Arri has been developing a new Super 35 4K camera for years…Well, it seems it’s finally time. A brochure for the new Alexa 35 has leaked that outlines all the features of this new camera.
If you follow the channel you’ll know that I don’t really react to news stories but rather focus on discussing a more general overview of filmmaking topics. However, since I think this new Alexa 35 has the potential to take over the high end cinema camera industry in a similar way that the original Alexa Mini did all those years ago, I’m going to run through and react to some of the key features of this new camera.
BACKGROUND
Before I start, I should probably mention that Arri’s approach to camera development and releasing new cameras is a bit different to some other brands. Brands like Red, for example, are known for putting out cameras as soon as they can and then sorting out any bugs or issues that arise in early testing.
Arri is far more conservative and precise about their releases. They don’t release new gear very often - the Alexa 35 represents Arri's first new sensor in 12 years. So, when they do choose to unveil a new piece of gear to the public, you can rest assured it has been thoroughly tested and will live up to the specs they publish.
SUPER 35 4.6K
Arri’s cameras are each developed to fulfil a specific section of the cinema market, whether that relates to sensor size, specs or the physical size of the camera. For example, the Alexa Mini was developed as a Super 35 camera small enough to be used on a gimbal, while the Alexa 65 was developed to provide a 65mm digital sensor size.
The Alexa 35 was developed to be an update of the Alexa Mini, with a Super 35 sensor, a small form factor and the crucial addition of recording higher resolutions. Apart from its effect on the images, a big reason for this increase in resolution was to meet the 4K requirements for filming Netflix Originals. Previously this was only possible with their larger-sensor cameras like the Mini LF and was unavailable in the Super 35 format.
As I’ve said in a previous video, Super 35 sensors have a different look and field of view than large format cameras. And since Super 35 has been the standard format throughout cinema history, it also offers the largest range of cinema lenses to choose from.
SPECS
So let's run through some key specs. Like their other new cameras, the Alexa 35 can record in ProRes or ARRIRAW. It tops out at 4.6K in Open Gate and can record up to 75 frames per second onto the larger 2TB Codex drives, which goes down to 35 frames on the 1 TB drive.
In regular 4K, 16:9 mode, this frame rate is pushed up to 120 in ARRIRAW. This is a nice upgrade from the Mini LF and will cover most slow motion needs on set, before needing to change to a dedicated slow motion camera like a Phantom.
An impressive feature of this new sensor is that Arri has found an extra one and a half stops of dynamic range in the highlights and another stop in the shadows. This brings the total exposure latitude of the camera to 17 stops.
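To put that figure in perspective, each stop doubles the amount of light, so n stops of latitude span a 2^n : 1 contrast ratio between the brightest and darkest recoverable detail. A quick sketch (the 14-stop comparison camera is hypothetical):

```python
def contrast_ratio(stops):
    # Each stop of exposure latitude doubles the light level,
    # so n stops cover a 2**n : 1 brightness range.
    return 2 ** stops

print(contrast_ratio(17))  # 131072 - the claimed Alexa 35 latitude
print(contrast_ratio(14))  # 16384 - a hypothetical 14-stop camera, for scale
```

Those extra two and a half stops multiply the recordable brightness range nearly sixfold, which is why the difference matters more than the numbers suggest.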
They also claim that the highlights have a naturalistic, film-like roll off to them. To me, how a cinema camera handles the highlights is one of the most important factors in creating a pleasing filmstock-like look. It’s something that the previous ALEV 3 sensor did well, which I’m sure will continue or be improved upon by this new iteration.
As many DPs push a more naturalistic lighting style these days, I think the claimed increase in dynamic range will help control the light in harsh exteriors and hold detail in the highlights from hot windows in interiors.
More manufacturers these days, such as Sony, have been moving to a dual ISO model that has a standard ISO for regular use and a boosted native ISO for low light situations.
It seems Arri hasn’t gone quite this far but has made a move in the direction of improving the low light performance of the camera with what they are calling an ‘Enhanced Sensitivity Mode’. This can be activated when the EI is set between 2,560 and 6,400. They claim this creates a low noise image in low light and is targeted at filmmakers who want to use available light during night shoots.
When it comes to colour, Arri has developed a new workflow called Reveal colour science, which they claim simplifies ARRIRAW post production and leads to higher quality images with accurate, life-like colour. They also claim that Alexa 35 footage will cut together with their existing line of Alexa cameras. While I assume the colour will therefore be fairly similar to the existing Arri look, this will need to be seen once footage starts getting released.
TEXTURES
Now for a new feature of the Alexa 35 that I’m excited about: what they are calling Arri Textures. When digital cameras were originally introduced, the common way of working with them was to record as flat a log image as possible, which would then leave more room to be manipulated in post production by doing things like creating a look, adding artificial film grain, adjusting saturation, these kinds of things.
I think as cinematographers have gotten more used to the digital workflow there has been a bit of a push to go back to the ways of old where the decisions that cinematographers made on set determined the look of the negative.
Some do this by creating a custom LUT before production, which is then applied to the transcoded files used for editing, so that a ‘look’ for the footage is established early on, rather than found later when it’s handed over to a colourist at the end of the job.
With that said, Arri Textures is a sort of in-camera setting that defines the amount and character of the grain in the image, as well as the contrast and sharpness of detail.
So, cinematographers now have the ability to change the way the camera records an image, much like they would back in the day by selecting different film stocks. I think this is a great idea as a tool as it puts control back into the hands of cinematographers and allows them to make these decisions on set, rather than having to fight for their look in the grade.
ERGONOMICS
With all of these new features and high resolution comes a need for more power in order to get all this done. With that in mind, the Alexa 35 will be a completely 24V powered camera - rather than prior cameras that could run off 12V batteries like V-locks as well as 24V power.
This will be done with their new system of B-Mount batteries. I haven’t personally worked with these batteries yet, but one plus I foresee, apart from them providing more consistent power, is that they can be used by camera operators who operate with their hand on the back of the battery.
This has become a popular way to operate, particularly with a rig like an Easyrig. I always found older gold mount or V-mount batteries had a tendency to lose power and shut down the camera from time to time as the contacts shifted when operated. This should no longer be a problem with the B-mount.
In terms of its form factor, I think this new Alexa is a great size, around the same size as the Mini LF - a little larger than the original Mini but small enough to be used for handheld and gimbal work.
The pictures show the addition of a little menu on the operator’s side of the camera, with quick access to basic settings like frames per second, shutter, EI, ND and white balance. It kind of reminds me of old Arri film cameras that came with a little setting display screen on the operator side.
The main reason I think this will be useful is for when the camera needs to be stripped down, for Steadicam, gimbal or drone, and loses its viewfinder which has the main menu access. On the old cameras if you needed to change settings, you’d have to awkwardly plug in the eyepiece, and wait for it to power up before you could do so, or do it through the Arri app on a phone which can be buggy. This new menu should save time in those scenarios.
Other than that they’ve added some extra user buttons which reminds me of the Amira a bit and perhaps is intended for quicker use in documentary situations. The new camera comes with a bunch of re-designed components, with the intention of making it a small but versatile camera that can be built into light or studio setups.
Finally, one criticism I have is that like the Mini LF, the Alexa 35 only has 3 different stops of internal ND, a 0.6, 1.2 and 1.8. I’m surprised they didn’t try to add more stops to compete with Sony’s Venice that has 8 different stops of internal ND filters from 0.3 to 2.4. I know cinematographers who like shooting on the Venice almost entirely for the ease and speed that having all the internal NDs you could need provides.
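For reference, ND filter strength is quoted as optical density, where density is the base-10 logarithm of the light attenuation; since log10(2) ≈ 0.301, each 0.3 of density cuts roughly one stop. Converting both cameras' ND sets:

```python
import math

def nd_to_stops(density):
    # Optical density d attenuates light by a factor of 10**d,
    # so the equivalent number of stops is d / log10(2).
    return density / math.log10(2)

for d in (0.6, 1.2, 1.8):            # the Alexa 35's internal ND set
    print(d, round(nd_to_stops(d)))  # 2, 4 and 6 stops respectively
print(round(nd_to_stops(2.4)))       # 8 stops - the strongest Venice ND
```

So the Alexa 35 jumps in two-stop increments, while the Venice's 0.3 steps let the operator dial exposure down one stop at a time without swapping external filters.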
What A Steadicam Operator Does On Set: Crew Breakdown
In this Crew Breakdown video, let’s take a look at the Steadicam Operator and go over what their role is, what their average day on set looks like, and a couple tips that they use to be the best in their field.
INTRODUCTION
A long time ago, in a world far before low cost gimbals were a thing, there were only a handful of options when it came to moving cameras with a cinematic stability.
You could put a camera on a dolly. You could put a camera on a crane. Both are great options, but what if you wanted to do this shot? How do you chase a character over uneven ground, through twists and turns, at a low angle, for an extended, stabilised take?
The answer was with a piece of stabilising equipment invented by Garrett Brown, called the Steadicam, that could attach a camera to an operator, giving filmmakers the mobility of a handheld camera combined with a cinematic stability.
This created the new crew position on a film set of Steadicam Operator. So, in this Crew Breakdown video, let’s go over what their role is, what their average day on set looks like, and a couple tips that they use to be the best in their field.
ROLE
“I liked handheld. I did not like the way it looked - then or now. And so what I needed was a way to disconnect the camera from the person.” - Garrett Brown, Steadicam Inventor
Before going over what the role of the Steadicam operator is, let’s take a basic look at how a Steadicam works.
A Steadicam is basically a perfectly balanced, weighted gimbal attached to the camera operator’s body that isolates the camera from the operator’s movement. This allows the camera to be moved around precisely with smooth, stabilised motion.
It can be broken down into three basic sections: the vest, the arm and the sled. The sled includes a flat top stage which the camera sits on, and a post which connects it to the bottom section holding a monitor mount and a battery base.
The top stage with camera and the bottom stage with the monitor and the batteries are positioned so the weight of the camera is counterbalanced and even. Like balancing a sword on a finger.
Having two ends which are perfectly balanced both adds weight, and therefore more stability to the rig, and puts the centre of gravity exactly at the operator’s grip, so that they can use their hand to adjust how the camera moves with delicate adjustments.
This hefty weight is supported by a gimbal attached to the post, which attaches to an arm, which then attaches to a vest worn by the operator. The rig’s substantial weight, perfect balance and gimbal allows the operator to manoeuvre the camera around with a floating stability using the motion of their body and deft touches with their grip.
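The counterbalancing described above is essentially a moment (lever) balance around the gimbal, like the sword-on-a-finger analogy. Here is a simplified sketch with hypothetical masses and distances (a real sled’s weight is distributed along its length, so operators balance by feel on the rig, not by calculation):

```python
def counterweight_distance(camera_kg, camera_dist_cm, bottom_kg):
    # Two-mass moment balance about the gimbal:
    #   camera_kg * camera_dist_cm = bottom_kg * d
    # Solve for d, the distance the base hangs below the gimbal.
    return camera_kg * camera_dist_cm / bottom_kg

# Hypothetical rig: a 7 kg camera 30 cm above the gimbal is balanced
# by a 5 kg monitor/battery base hung 42 cm below it.
print(counterweight_distance(7, 30, 5))  # 42.0
```

The lighter the bottom end, the further it has to hang from the gimbal, which is why sleds with extendable posts can trade length for counterweight.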
A Steadicam is therefore a great option for moving a camera through tight spaces, over uneven terrain, or performing flowing, 360-degree moves around actors in long takes.
It’s generally seen as providing more organic motion than 3-axis gimbals, and as better at precise hard stops - gimbals tend to drift before they come to a resting stop.
The role of the Steadicam operator is an interesting one, as it requires both a deep technical knowledge and proficiency as well as a creative flair and theoretical knowledge on how to move the camera and frame shots to tell a story.
Sometimes, but not always, a Steadicam op will also work as the primary camera operator (or the B-camera operator), operating shots from a tripod head, wheels and performing any Steadicam shots that are required.
Their job includes helping to build and balance the camera on the Steadicam rig, discussing a shot with the DP and director and then executing it - often adjusting between takes until the perfect take is in the can.
AVERAGE DAY ON SET
Before the shoot begins, the Steadicam operator will show up to the gear check at the rental house where the camera team tests and assembles the gear. As different jobs will use different cameras and lenses, which come in different weights and sizes, it’s crucial that the camera is properly built and balanced during testing.
Nothing would be worse than building the camera on the day, without a gear check, only to realise that the lens is too front heavy to balance on the Steadicam.
On the day of shooting, the operator will grab a shooting schedule or communicate with the 1st AD to determine what Steadicam shots need to be done and therefore when the camera needs to be built for Steadicam. Sometimes most of the day can be spent doing Steadicam, but usually it will only be reserved for a few shots, in different scenes or setups, spread throughout the day.
If there is a particularly tricky shot, or a choreographed long take that has been pre-planned, the operator may meet with the DP during pre-production, prior to shooting, and walk through the shot to work out how best to pull it off.
When it comes time for Steadicam, the first thing to be done is to build the camera. This is done by the 1st AC or focus puller who will strip the camera of excess weight, configure the necessary accessories, such as the transmitter or focus motors in the same place as they did during the gear check and attach the Steadicam’s sliding base plate to the bottom of the camera.
It is then handed off to the operator who will slide the camera onto the top stage and test it to make sure it is properly balanced on the gimbal. They’ll then throw on the vest, go up with the camera and run through a rehearsal or a rough blocking with the director, actors and DP to work out the movement.
When they’re ready they’ll go for a take. The director and DP will watch a feed of the image transmitted on a monitor and give feedback on things like the speed of the motion, the framing or suggesting a new movement.
The camera team will often hand a wireless iris control to the DP, that they can then use to change the aperture on the lens remotely if there are any changes in light.
Between takes when the camera isn’t needed, the operator will take the weight of the Steadicam off by placing it on a stand.
This is the core of their job. However, since the requirements of different shots can vary hugely depending on the situation, each shot may offer a different challenge when it comes to operating. Sometimes this may be the physical challenge of operating a heavy setup, other times it may be a matter of synchronising the timing of the movement with the actor and focus puller or the shot itself may require particularly nimble operation.
The Steadicam operator has to be able to coolly and calmly adapt to each situation, providing the creative team with the kind of shot they imagine under the pressure of a time limit.
TIPS
To become a Steadicam Operator you can’t just show up on set and learn as you go. The reason it is such a niche profession is that it takes lots of training, knowledge, practice and experience to be hired for high end film jobs.
It’s also expensive.
Typically, Steadicam operators buy their own Steadicam, which is a pricey piece of gear, attend Steadicam workshops where they are trained to operate it, and are then able to rent their expertise and rig out to productions.
In recent years Arri also introduced the Trinity, which is similar to a Steadicam but adds a 3-axis stabilised head that allows the camera to move on the roll axis, plus self-balancing features that allow the camera to be moved from low mode to high mode during a shot and the post to be extended for extra reach.
With a traditional Steadicam, operators need to decide before a shot begins whether to shoot in the more common high-mode, or if the camera needs to be close to the ground with the post flipped around and used in low-mode.
Another option sometimes used is to hard mount the arm of the Steadicam on a moving vehicle. The operator then sits next to the rig to operate the camera without having to hold the full weight of it.
An early example of this was worked out by Garrett Brown on The Shining for the famous hallway tracking shots. They hard mounted the Steadicam arm to a wheelchair which could then be pushed through the hotel corridors in either high mode, or inches from the ground in low mode.
Since a Steadicam rig with a cinema camera is extremely heavy, operators try to minimise the amount of time that they carry the rig in order to save their stamina for shooting. Any time the camera isn’t going for a take they’ll use a stand to rest the rig, or have a grip standing close by so that they can hand the post off to them as soon as cut is called.
Communicating with the AD to make sure that the camera only goes up at the last possible moment, and isn’t waiting there for ages while make-up does final checks and the director stands in to give notes, is another good way of minimising time holding the rig.
Since the camera is balanced so finely, big gusts of wind can shake it and cause turbulence. It’s therefore good to make sure the grip department is carrying a ‘wind block’ - a sheet of mesh material attached to a frame, held by grips between the source of the wind and the camera in order to minimise turbulence.
Another crew member that the Steadicam operator needs to communicate with is the focus puller. Since on an average shoot day the camera will usually need to alternate between studio builds and Steadicam builds the 1st AC and the Steadicam operator should come up with the easiest possible method to change between these configurations that’ll save the production the most time. Because, on a film set more than anywhere else, time is money.
How The French New Wave Changed Filmmaking Forever
Out of all of the film movements I’d say one of the most influential of them was the French New Wave. In this video I’ll outline four things from this film movement that are still present in how movies are made and thought about today, which were responsible for altering the course of filmmaking forever.
INTRO
“He immediately talked about, kind of, the French New Wave portrait of youth.” - Greta Gerwig
“The beginning of Jules and Jim, the first three or four minutes influence the style of Goodfellas and Casino and Wolf of Wall Street and so many.” - Martin Scorsese
“Godard was so influential to me at the beginning of my aesthetic as a director, of, like, wanting to be a director.” - Quentin Tarantino
Throughout the decades, there have been many defining film movements in cinema, some with a longer-lasting impact than others. Of them all, I’d say one of the most influential was the French New Wave, which took place from the late 50s to the late 60s. Its impact can still be seen to this day.
During this time various directors emerged who made films that could broadly be classified by their similar philosophy and approach towards experimentation and style.
Many of these directors began their careers as film critics and cinephiles who wrote for the magazine Cahiers du Cinéma where they rejected mainstream cinema and came up with a sort of film manifesto that encouraged experimentation and innovation.
In this video I’ll outline four things from this film movement that are still present in how movies are made and thought about today, which were responsible for altering the course of filmmaking forever.
AUTEUR THEORY
“An Inquisition-like regime ruled over French cinema. Everything was compartmentalised. This movie was made as a reaction against everything that wasn’t done. It was almost pathological or systematic. ‘A wide-angle lens isn’t used for a close up? Then let’s do it.’ ‘A handheld camera isn’t used for tracking shots? Then let’s do it.’” - Jean-Luc Godard
In 1954, director François Truffaut wrote an article for Cahiers du Cinéma called ‘A Certain Tendency of the French Cinema’, in which he described his dissatisfaction with the safe, traditional and unimaginative way literary works were being adapted and filmed.
Up until then movies were largely credited to the actors who starred in them, or to the studios and producers involved in their funding and creation.
Instead, the cinema of the French New Wave put forward the idea that the real ‘author’ or ‘auteur’ of a movie should be the director. They should be the primary creative driving force behind each project by creating a visual style or aesthetic specific to them. Their themes, tone, or overall feeling from their films should also be consistent and identifiable across their overall body of work.
If you could glance at a film and immediately tell who the director behind it was - that was a sign it was created by an auteur.
A film by Quentin Tarantino will have ensemble casts, non-linear storylines, chapter divides, mixed genre conventions and pay homage to the history of cinema.
A film by Wes Anderson will have fast-paced comedy, childhood loss, symmetrical compositions, consistent colour palettes and highly stylised art direction.
This idea was revolutionary as it encouraged directors to tell stories through their own distinctive voice, rather than acting as craftsmen that followed the same rules and chiselled out each film the same way for a studio.
All it takes is watching a few trailers or the credits in a film to tell that auteur theory is still alive and well. Many movies use the name of the director as a selling point, even more so than the actors in some cases.
If we turn to short form filmmaking, a huge number of directors of commercials or music videos get hired by clients and agencies because they want their film told in a specific style associated with that director.
You hire The Blaze to direct if you want a character-focused, wildly energetic, passionate, personal journey told with a fluidity of movement. You hire Romain Gavras to direct if you want a carefully coordinated, composed, concept driven set piece.
But this French New Wave idea of the director as an auteur is just the first thing that had an undeniable impact on how cinema today is created.
LOW BUDGET
“I really like Band Apart. In particular it really kinda grabbed me. But one of the things that really grabbed me was that I felt I almost could have done that. I could’ve attached a camera to the back of a convertible and drive around Venice boulevard if I wanted to.” - Quentin Tarantino
In their financially riskier pursuit to break free from the constraints of the traditional mould of French cinema and create their own inventive styles as auteurs, many French New Wave directors had to work within low budgets.
This was also influenced by the financial restraints of post-World War Two France.
Rather than seeing it as a disadvantage, a lot of the movies that came out of this period used their lack of resources to break conventional rules and form their own style - which we’ll get into more a bit later.
They took some cues from the Italian Neorealist movement that preceded it, which cut costs by shooting on location and working with non-professional actors in rural areas.
Likewise, many French New Wave films worked on location, with a bare bones approach to lighting and homemade, DIY camera rigs. This allowed them to work quickly, unencumbered by large crews and introduced a more on-the-ground aesthetic to the filmmaking.
This further democratised filmmaking and made it more accessible than ever before. It showed that big studios were not always needed to produce great cinema.
This democratisation of filmmaking expanded further throughout the years, until it exploded even more with the introduction of low budget digital cinema cameras.
There’s a reason that many low budget indie films today still use French New Wave films from this period as a primary reference and inspiration for, not only what is possible to achieve with limited resources, but also the kind of look and style that comes with it.
VISUAL STYLE
“All these films had been very different of what had been French cinema. What was in common was to use a lot of natural light, sometimes use non actors, natural sets, a sort of speed in the inspiration and the work. That is what was in common.” - Agnes Varda
What emerged from this rejection of cinematic tradition in a low budget environment was a burst of films that broke existing filmmaking ‘rules’ and had a vigorously experimental style.
Part of this was informed by a documentary-esque approach to cinematography that freed the actors up to move and improvise. Like documentaries, these films were largely shot at real locations, relied mostly on natural light (which allowed them to shoot 360 degrees in a space), used a reactive, handheld camera and sometimes employed non-professional actors who were encouraged to improvise dialogue, blocking and actions.
All this went against the more formal conventions that were previously expected of traditional studio films that were shot in studio sets, off a rigid dolly, with perfect, artificial lighting and precise blocking of a pre-approved screenplay.
In this way the French New Wave paved a path that made it OK for future filmmakers to work in a rougher, more naturalistic style and broke down the very notion that cinematography needs to conform to specific rules.
EXPERIMENTATION
“I think a lot of it has to do with the relentlessness of the voice over and the rapid speech and also the pace of the music under it. It feels like there’s a sense of freedom. Anything could happen at any moment…Narrative is completely fractured I think.” - Martin Scorsese
French New Wave directors saw exciting possibilities for using film as a medium - more like painters or novelists did - which could not only be used to tell stories but also to translate their thoughts or ideas by experimenting with form and style.
Much of this was done in the edit.
Whereas older films might have used a traditional, linear story, with various scenes of exposition to unpack characters, films like Jules and Jim used voice over, fast paced music and snappy editing to immediately introduce characters and their relationships in a more fractured way that compressed time into a montage.
Directors like Godard broke the medium down even further into a self-conscious, postmodern vision by having characters literally break the fourth wall and talk directly into the camera, face to face with the audience.
Instead of attempting to suspend disbelief, Godard made his audience very aware that what they were watching was something constructed by an artist.
Breathless also went against a near-universal rule of cinema and used jump cuts, a technique which cuts forward in time within the same shot, without changing the angle or shot size. The effect is an abrasive ‘jump’ forward in time.
This technique influenced future filmmakers by tearing down the idea that the rules of cinema should be strictly followed. This postmodernism that was pushed by the French New Wave has now seeped into every kind of contemporary visual art - including how many YouTube videos are now edited.
Do Cinematographers Like Lens Flares? Textured vs Clean Images Explained
When it comes to the question of whether clean or textured images should be favoured, cinematographers are generally split into two different camps.
INTRO
“I can’t stand flares. I find any artefact that is on the surface of the image a distraction for me. The audience or I’m then aware that I’m looking at something that is being recorded with a camera.” - Roger Deakins, Cinematographer
“If the light shone in the lens and flared the lens that was considered a mistake. I feel particularly involved in making mistakes feel acceptable by using them. Not by mistakes or anything but by endeavour.” - Conrad Hall, Cinematographer
When it comes to the question of whether clean or textured images should be favoured, cinematographers are generally split into two different camps. Some see their goal as being to create the most pristine, cinematically perfect visuals possible, while others like to degrade the image and break it down with light and camera tricks.
Before we discuss the pros and cons of clean and textured images, we need to understand some of the techniques used by cinematographers that affect the quality of how an image is captured. Then I’ll get into the case that can be made for clean images and the case that can be made for textured images and see which side of the fence you land on in the debate.
WHAT MAKES AN IMAGE CLEAN OR TEXTURED
When cinematographers talk about shooting something that looks clean, they are referring to an image which has the subject in sharp focus and is devoid of any excess optical aberrations, video noise, grain or softening of the highlights or bright parts in the frame. Some cinematographers, however, like to introduce different kinds of textures by deliberately ‘messing it up’.
The most easily identifiable optical imperfection is the lens flare. This happens when hard light directly enters the open glass section at the front of a lens and bounces around inside the barrel off of the different pieces of glass, which are called elements.
So to get a lens flare, cinematographers use a backlight placed directly behind a subject, or at an angle that shines straight into the lens. A common way of doing this is to use the sun as a backlight and point the camera directly at it.
In the past, flares were often seen as undesirable so a few tools were introduced to get rid of them. To prevent a flare you need to block the path of any hard light that hits the lens directly. A mattebox is used not only to hold filters but also to block or flag light from hitting the front element. A top flap and sides can be added to a mattebox to cut light, as can a hard matte - which clips inside the mattebox and comes in different sizes which can be swapped out depending on how wide the lens is.
If a shot is stationary and the camera doesn’t move, the lighting team can also erect a black flag on a stand to cut light from reaching the lens.
On the other hand, a trick some use to artificially introduce a flare when there isn’t a strong backlight is to take a torch or a small sourcy light like a dedo and hit the lens with it from just out of shot.
Different kinds of lenses produce different kinds of flares, which are determined by the shape of their glass elements, the number of blades that make up the aperture at the back of the lens and the way in which the glass is coated. Standard, spherical lenses have curved, circular elements that produce round flares that expand or contract as the light source changes its angle.
Anamorphic lenses are made up of regular spherical glass with added cylindrical elements that horizontally squeeze a wider field of view onto the frame. The image is then de-squeezed in post to get a widescreen aspect ratio.
Because of this, anamorphic lenses produce a horizontal flare that streaks across the frame. The Panavision C-Series of anamorphic lenses are famous for producing a blue anamorphic lens streak which is associated with many high end Hollywood films.
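As a quick sketch of the de-squeeze arithmetic (the frame ratios below are approximate, and this is my own illustration rather than anything specific to Panavision glass): stretching the captured frame back out horizontally multiplies its aspect ratio by the squeeze factor.

```python
# Illustrative arithmetic only: how an anamorphic squeeze factor
# widens a film frame's native aspect ratio once it is de-squeezed.

def desqueezed_ratio(native_ratio: float, squeeze_factor: float) -> float:
    """De-squeezing stretches the frame horizontally, so the aspect
    ratio is simply multiplied by the squeeze factor."""
    return native_ratio * squeeze_factor

# A 2x anamorphic on a roughly 1.2:1 35mm camera aperture lands near
# the 2.39:1 widescreen standard:
print(round(desqueezed_ratio(1.195, 2.0), 2))  # 2.39
```

The same arithmetic explains why a milder 1.44x squeeze can still reach widescreen on formats whose native frame is already wider, like Super 16's roughly 1.66:1.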
The glass elements inside a lens have different types of coatings. Modern coatings are used to decrease artefacts and limit flooding the image with a haze when the lens flares.
As technology has improved, these coatings have become progressively better at this, which is why more modern lenses produce a ‘cleaner’ image. One way cinematographers who like optical texture get around this is to use vintage lenses, whose older coatings don’t limit flares as much and let the highlights bloom into a subtle, angelic haze. You can even get uncoated lenses for those who really want to push that vintage look.
Another option to soften up an image a bit is to use diffusion filters. These are pieces of glass that are placed inside a mattebox and create various softening effects, such as decreasing the sharpness of the image, making the highlights bloom and softening skin tones.
Some examples of these filters include Black Pro-Mists, Glimmer Glass, Pearlescents, Black Satins, Soft FX filters - the list goes on. They come in different strengths, with lower values, such as an eighth providing a subtle softness and higher values providing a heavy diffusion.
Some cinematographers even go more extreme by using their finger to deliberately smudge or dirty up the front of a filter.
A final way of introducing texture to an image is with grain. This can be done either by shooting on a more sensitive film stock, like 500ASA and push processing it, by increasing the ISO or EI on the camera, or by adding a film grain effect during the colour grade in post production.
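The post-production route can be sketched in a few lines. This is my own toy example, assuming frames stored as float arrays normalised to 0–1; real grading suites use far more sophisticated grain models, but the underlying idea is the same - overlay random noise and clip back into legal range.

```python
import numpy as np

def add_grain(frame: np.ndarray, strength: float = 0.05, seed: int = 0) -> np.ndarray:
    """Overlay random noise on a frame and clip back into the 0-1 range.
    `strength` is the standard deviation of the noise in 0-1 units."""
    rng = np.random.default_rng(seed)
    noise = rng.normal(0.0, strength, size=frame.shape)
    return np.clip(frame + noise, 0.0, 1.0)

# A flat mid-grey frame picks up visible texture:
frame = np.full((480, 640), 0.5)
grainy = add_grain(frame)
```

Raising the `strength` value is roughly analogous to pushing a faster film stock harder: more texture, at the cost of a noisier image.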
THE CASE FOR TEXTURED IMAGES
“What lenses? Should it be sharp? Should it have flaws? Should it have interesting flares? I always try to be open to everything.” - Linus Sandgren, Cinematographer
Now that I’ve listed all the ways that an image can be messed up by cinematographers, let’s go over some reasons why anyone would actually want to do this in the first place.
Up until about the 1960s or 1970s, the idea of intentionally degrading how an image was captured wasn’t really prevalent. However, movements like the French New Wave and New Hollywood rebelled against capturing a perfect representation of each story and intentionally used artefacts like flares to do so.
Producing optical ‘mistakes’ from a more on-the-ground camera lent the images an authenticity and grittiness, in a similar way to many documentaries.
In different contexts, optical aberrations, like lens flares, have been used to introduce different tonal or atmospheric ideas. For example, Conrad Hall went against the Hollywood conventions of the time and embraced flares on Cool Hand Luke to create a sense of heat from the sun and inject a physical warmth into the image that reflected the setting of the story.
Some filmmakers like deliberately using lower gauge film such as 16mm or even 8mm to produce a noisy, textured image. Often this is perceived as feeling more organic and a good fit for rougher, handheld films.
Textured images with a shallow depth of field also feel a bit dreamier, and can therefore be a good tool for representing more experimental moments in a story or to portray a moment that happened in the past as a memory.
Since the digital revolution, many DPs have taken to using diffusion filters and vintage lenses on modern digital cinema cameras - to balance out the image so that it doesn’t feel overly sharp.
Degrading the image of the Alexa by shooting at a higher EI, like 1,600, shooting on lenses from the 1970s, or using an ⅛ or a ¼ Black Pro Mist filter, are all ways of trying to get the more organic texture that naturally happened when shooting on film back into the image.
THE CASE FOR CLEAN IMAGES
“Digital cameras were able to give us a beautiful, very clean, immersive image that we were very keen on…It almost translates 100% what you are feeling when you are in the location.” - Emmanuel Lubezki, Cinematographer
On the flipside, some DPs seek a supremely clean look that pairs sharp, modern glass with high resolution digital cameras.
One reason for this is that clean images better transport the audience directly into the real world, and present images in the same way that our eyes naturally see things. Clean images are regularly paired with a vision that needs to feel realistic.
These cinematographers see any excess grain or aberrations as a distraction that pulls an audience out of a story and makes them aware that what they are seeing isn’t reality and is rather a visual construction.
When light flares across a lens it’s an indication that the image was captured by a camera and may disrupt the illusion of reality.
Sometimes filmmakers also want to lean into a clean, sharp, digital look for the story. It’s like choosing to observe the world directly, in sharp focus, rather than through a hazy, fogged up window.
Cinematography Style: Ari Wegner
Ari Wegner's cinematography isn't tied down to one particular look, and is rather based on a careful and deeply thought out visual style that uses informed creative decisions to present a look that is tailor made for each individual story or script.
INTRODUCTION
“I think that’s the question for any film. How do you get the energy of the script or the idea into it visually? Every film is different and every scene is different but if you know what your aspiration is to do that then you can think of some ideas of how to achieve that.”
In this series I’ve talked before about how some cinematographers like to create a look that is fairly consistent across much of their work, while others distance themselves from one style and mould the form of their cinematography depending on the script or director that they are working with.
Ari Wegner very much falls into the latter category. The films that she shoots are never tied down to one particular look, and are rather based on a careful and deeply thought out visual style that uses informed creative decisions to present a look that is tailor made for each individual story or script.
In this video I’ll unpack this further by diving into the philosophy behind her photography and showing some of the gear that she uses to execute those ideas.
BACKGROUND
Growing up in Melbourne with parents who were both artistically inclined filled her with an appreciation for the arts and creative thought from an early age.
Her desire to work in film was sparked by her media teacher exposing her to short films, notably one by Jane Campion. She then changed her focus from photography to cinematography.
After graduating from film school she spent years shooting local independent films and documentaries, before breaking out by photographing Lady Macbeth, which screened at numerous festivals.
Some of the directors she’s worked with include: Janicza Bravo, Jane Campion and Justin Kurzel.
PHILOSOPHY
Something which is key to formulating the carefully considered look for each film is the time that she takes to prepare. Usually about four to six weeks are scheduled for pre-production on a movie - of which a cinematographer may only be present for a fraction. However, on Jane Campion’s The Power of the Dog, an entire year was dedicated to pre-production, with Wegner present right from the get go.
“When Jane first called me to ask if I’d be interested, one of the things that she had as a condition was that she wanted someone to start straight away. I often feel like there’s not enough time to prepare or that there’s a rush. Once everyone’s in the same place the decision can feel very pressured. So, Jane wanted this film to feel relaxed and not have to be rushed into any decision.”
This luxuriously long prep allowed them to take their time and gradually and deliberately develop the look of the film.
Together they scouted potential shooting locations in New Zealand, learnt about the land and foliage in the area, calculated what time of year would be best to capture the kind of dry-brown look they wanted and found a mountain range that felt mythical yet humble enough to believably be on a family ranch.
Campion and Wegner also set aside four to six weeks just to fully storyboard the script. They spent time drawing out the shots that they imagined, then took those ideas and drawings to the physical set that was being constructed and walked through them to finalise the storyboard.
All this consideration and prep resulted in a cinematic style that was down to earth, observational, natural and unadorned.
Part of this came from the colour palette - which roots the look of a lot of her films. Whether it’s the vivid, saturated colours in Zola, the warm nights and cool days in The True History Of The Kelly Gang, or the muted, bone-dry natural palette of The Power of the Dog.
Part of the job she takes on is getting different scripts, stories and ideas and applying creative thought to them in collaboration with the director in order to create a visual style that is suitable for each project.
“We started having some big conversations about colour. Myself, the production designer, the costume designer, about how we were going to unify the whole world. Making a film set in Montana in the 1920s in New Zealand. We decided a strict colour palette would be a good part of the way to do that.”
GEAR
Like her approach to the colour palette of each film, Wegner’s lighting and selection of camera gear is based on coming up with solid cinematic ideas that represent the story.
To convey the shift in mood of the character that happens in Zola, she used a subtle shift in lighting and camera gear selection. As the relationship between the two characters shifted from an infatuated, trusting, light, fun one to a distrustful, darker, unhappy one, Wegner subtly introduced more shadows, colour clashes, contrast and darkness into the image.
She did this with low key lighting and also by shooting the initial scenes using a low-con filter which subtly faded out the blacks and made the highlights blossom into the mid tones.
To dial in specific colour temperatures and hues into the lighting, she used various LED fixtures, such as: Arri SkyPanels, Astera Tubes and Litemats. She likes using LED lighting as well as practical fixtures that can be seen in the frame - whether that’s from a natural source like a fire or an artificial source like a lamp.
Sometimes she takes the edge off the quality of artificial or natural light by placing diffusion gels in front of the light source to soften the shadows that appear on a face.
Another example of how she uses light to represent an idea can be seen in The Power of the Dog. Often interiors are well lit and act as a safe haven for characters, however in this film she introduced large amounts of shadow inside the farmhouse to make the space more ominous and unsafe, by pushing hard light through windows and backlighting characters so that they appeared in silhouette.
She has shot on 16mm film before, on Zola, when the director requested it to give the film a magical quality that accentuated its highly unusual, almost dreamlike story. She used Kodak 250D for the sunny exteriors and the more sensitive 500T 7219 for interiors and night scenes.
From looking at some behind the scenes photos, it seems that she uses a digital stills camera to meter the film, taking a picture with the same exposure settings as the 16mm cinema camera until she finds an exposure level that she is happy with.
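For that kind of matching to work, the stills camera needs an equivalent shutter speed, since cine cameras express exposure time as a shutter angle. A quick sketch of the conversion (my own illustration, not a documented part of her workflow):

```python
def shutter_speed(fps: float, shutter_angle: float) -> float:
    """Exposure time in seconds for a rotary shutter: the fraction of
    each frame interval during which the shutter is open."""
    return (shutter_angle / 360.0) / fps

# The classic 24 fps / 180-degree setup exposes each frame for 1/48 s,
# so a stills camera would be set to roughly 1/50 s:
print(1 / shutter_speed(24, 180))  # 48.0
```

With the shutter speed, ISO/ASA and aperture matched, the stills camera's preview becomes a reasonable stand-in for how the film stock will be exposed.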
Usually she favours shooting digitally on variations of the Arri Alexa, such as the Mini, the XT or the Mini LF. She’ll often pair these cameras with moderate, vintage glass that breaks down the digital sharpness with a very slight optical softness.
Some examples of lenses she’s used include: Zeiss Super Speeds, Bausch & Lomb Super Baltars, Panavision Auto Panatar 1.3x Anamorphics and zoom lenses from Angenieux or the Alura.
On The Power of the Dog she used zooms to shoot some of the wide shots at long focal lengths. This compressed the background and made it feel like it was closer to the characters.
Once again, I’d describe the gear that she selects as deliberately chosen on an individual basis for each story: sometimes using spherical glass, sometimes anamorphic glass, sometimes film, sometimes digital, sometimes using a Tiffen 812 warming filter or a low-con in front of the lens and sometimes shooting it clean.
Rather than creating and shooting with a custom LUT like many other DPs do nowadays, Wegner likes to work and expose the camera using Arri’s standard Rec 709 LUT as a reference. This is a basic, natural look that is applied to the flat log or RAW footage. She’ll then create the final look for the film along with the director during the colour grade.
“I love the idea of LUTs and then every time I try to make one it kind of scares me. Usually I shoot REC 709 a lot. Knowing at the end of the day with the dailies colourist you can take it to a nice place for the editors. With REC 709 I know it very well. It’s not pretty but it’s reliable.”
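To illustrate what a LUT actually does (this is a made-up toy curve, not Arri's actual Rec 709 transform): a 1D LUT is just a table of sample points mapping input code values to output values, with interpolation in between.

```python
import numpy as np

# Hypothetical 5-point 1D LUT lifting flat log values toward a more
# contrasty display image (all values normalised to 0-1):
lut_in = np.array([0.0, 0.25, 0.5, 0.75, 1.0])
lut_out = np.array([0.0, 0.10, 0.45, 0.85, 1.0])

def apply_lut(pixels: np.ndarray) -> np.ndarray:
    """Map pixel values through the LUT, linearly interpolating
    between the sampled points."""
    return np.interp(pixels, lut_in, lut_out)

log_pixels = np.array([0.2, 0.5, 0.8])
display_pixels = apply_lut(log_pixels)
```

A monitoring LUT like this never touches the recorded log or RAW footage; it only changes what the crew sees on set, which is why the final look can still be built from scratch in the grade.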
CONCLUSION
Wegner’s methodology always uses the story as a jumping off point.
After much consideration she then comes up with a visual look that will support the kind of ideas that are present in the story.
Whether that’s always using central framing in Lady Macbeth to show how trapped her character is, using her knowledge of lighting to introduce both a physical and a psychological darkness into a space, or reducing the contrast of the shadows in an image with a filter to brighten the mood.
Each film she shoots offers up an opportunity to do something new.
Whatever photographic tool she chooses, it’s always deliberately thought out, planned and then executed to create visuals that subtly represent ideas and emotions that form her point of view on the movie.
The Advantages Of Low Budget Filmmaking: Red Rocket
Director Sean Baker is a prime example of a filmmaker who has directed many features, has been to all the festivals, got all the acclaim and yet chose to reside and make films in the low budget range. So let’s look at his latest movie, Red Rocket, through this lens and examine what some of the advantages are to producing low budget indie films.
Making it to the top and directing the biggest films in the world may be the ultimate goal for some, but there are definitely pros to working in the more realistic, lower budget film world of indie films.
Director Sean Baker is a prime example of a filmmaker who has directed many features, has been to all the festivals, got all the acclaim and yet chose to reside and make films in the low budget range. So let’s look at his latest movie, Red Rocket, through this lens and examine what some of the advantages are to producing low budget indie films.
“That’s the big thing that I’m always conflicted about. I’m like, should I make my life easier and make a film for a studio or go to a series. It’s definitely tempting because of monetary purposes but it’s more than that. I want to tell personal stories. Films take a long time, you put all of your energy, all of your heart into them, so why not make the movie you want to make.” - Sean Baker
It’s important to remember that films only get made in the first place because of financing. Like with any investment, when individuals - or in this case a production company - put money down, they need to balance how much money is allocated against the risk of making that money back.
If a studio is throwing hundreds of millions of dollars into a movie, they need to keep their risk tolerance low by ticking off a list of items that have been proven to increase ticket sales and make money back.
For example, choosing a story about a superhero with an existing fanbase, casting celebrity actors and doing test screenings of different edits to make sure that the movie is understood by and entertains the largest possible group of people, are known money printers and ways of decreasing the film’s risk of financially failing.
If instead you want to make a movie about a niche subculture, without any A-list celebrity actors, and be given complete creative freedom, you need to accept that it’s a higher risk project for investors and will therefore be allocated a low budget of, say, $1.2 million - a much lower financial target to make back.
Red Rocket falls into the latter category: it looks at a niche character archetype, an adult film star who uses and recruits women into that world.
He cast the leads through a combination of street casting, a theatre actor and first time actors, finding his lead through his Vine page and even giving one role to his producer slash continuity supervisor slash costume designer.
Operating at a low budget level gave Baker the freedom to make the kind of film that he wanted to - which was true to his vision.
Some may see it as a disadvantage, but I think another advantage that low budgets have is that they allow you to work with a small crew.
“A four person camera crew pulled off those images. You had Drew Daniels, you had a 1st AC, a 2nd AC and a gaffer/grip. Then you had a one man sound team and then you had my sister who is the production designer on the film. The other four were just producers wearing many many hats” - Sean Baker
This crew of just ten people was able to move quickly and efficiently, pulling off the feature on a tiny 23-day shooting schedule. While having more money, more crew and more gear grants filmmakers more control, it also takes far longer to set up shots and move to new locations.
Drew Daniels shot the movie on 16mm film on Sean Baker’s own Arri SR3, which was paired with two interesting 1.44X Auto Panatar lenses from Panavision: a 16mm and a 50mm.
These anamorphic lenses allowed them to shoot in a widescreen aspect ratio on 16mm and created a unique look that combined a Hollywood anamorphic grandeur with the more grainy, organic, low-fi look of 16mm Kodak film.
Daniels’ camera crew was super bare bones, with a first AC to pull focus, a second AC to load the film and do the boards, and one gaffer slash grip to set up lights, rigs and any camera moves they needed. Having a smaller crew also created more of a family atmosphere that put the performers, especially first time actors, more at ease.
“My incredible actors go in front of a camera and they’ll try something and if it doesn’t work, who cares, it doesn’t work. Let’s go for an alt take and try something else. Getting everybody in that place where everybody is comfortable and feels safe. Red Rocket was perfect for that because it was a small ten person crew, tiny, we were a pod, we were very isolated. It just allowed for that. I think we just embraced that spontaneity. We were saying there’s improv in front of the camera so why can’t there be improv behind the camera.” - Sean Baker
Working with a small, flexible camera crew frees filmmakers up and allows them to go with the flow far more than on high budget productions where everything tends to be shot listed, storyboarded, pre-visualised, pre-lit and then executed as per the plan on the day of shooting.
As a general rule, the larger the setups are, the more budget is required, which in turn locks filmmakers into pre-planning. Some like working this way, but other filmmakers enjoy working with fewer restraints, where it’s possible to add scenes, improvise dialogue or change shots based on unexpected happy accidents.
Many of the exterior scenes in the film were also shot without location permits in the small town. Having a low budget and a small production footprint allowed them to get away with it.
On Red Rocket, Baker encouraged his actors to improvise and explore at times, to find dialogue, blocking or actions that felt more natural and aligned with what their characters would do.
The best low budget films are usually those that lean into their limitations and create a tone and look that plays to their strengths. There’s a certain charm to films shot in a more intimate, on-the-ground, run-and-gun style that adds a touch of realism and puts the audience in the characters’ shoes.
Filmmakers also have to be more careful in thinking about what they can and what they can’t include in the story. If there’s a big car crash in the story and you don’t have the financial resources to shoot it, it forces you into communicating that information in creative ways.
More time, more money and more gear gives you more control, but sometimes accepting a lack of control can actually improve a film.
“I realised I couldn’t overcome it. So, there was an acceptance during pre-production that I was going to have to accept all of these freakin limitations. We realised we do not have the money and we do not have the time to throw at problems, so instead of tackling those problems we’ll pivot and go in another direction. So every day even though there was a problem every three hours there was also a miracle every three hours.” - Sean Baker
An example of this happened during the proposal scene. They knew that the local train passed through once a day and therefore only had one chance to get it in the background of the shot and only 20 minutes to set it up.
As they rolled the camera, the conductor started blowing his horn at a moment that perfectly complemented the dialogue, resulting in a magically chaotic scene that couldn’t have been scripted.
Red Rocket shows that low budget filmmaking has many advantages. It allows the director more freedom to make the kind of film they want to make while working with a small, efficient crew that offers possibilities for improvisation, run and gun flexibility and finding magic moments.
Ultimately, it’s also a case study in how important it is to set up a lifestyle for yourself as a filmmaker which is both financially viable and creatively rewarding.
Baker balances the financial side of making low budget films by also working on commercials on the side. Directing just one commercial a year is lucrative and finances his lifestyle and expenses while working on the lower budget films that he has complete creative control over.
Balancing the kind of creative and financial life that you want is something that everyone in the creative industry has to contend with. It’s down to each individual as to where they fall on the scale and how they plan their career to try and land in the space that they want to.
Why Most Movies Are Shot On Arri Cameras
There is one particular brand of digital cinema camera that is far and away the most selected when it comes to high end productions. Let's take a look at why cinematographers choose to shoot on the Arri Alexa.
INTRODUCTION
“I think digital cameras…they’re all tools. It depends on the project. You choose a different camera like you used to choose a different film stock.” - Roger Deakins, Cinematographer
You hear cinematographers claim all the time that a camera is just a tool - one of many that can be selected from their cinematic toolbox. However, if we look at the statistics, there is one particular brand of digital cinema camera that is far and away the most selected.
From the 2022 Best Cinematography Oscar nominees, four out of five productions used Arri digital cameras. Out of the Best Picture nominees that number was six out of ten.
You may think that this is just coincidence and we need a larger data sample size. Well then, from the 2021 Best Cinematography nominations four out of five used Arri. And the 2021 Best Picture nominees? Five out of eight.
If you keep going back, a clear pattern emerges: most films these days are shot on Arri digital cameras. So, based on my own experience of working in the industry with these cinema cameras, I’ll explain the four main reasons why, as I see it, most cinematographers on high-end productions choose to shoot on the Arri Alexa.
HISTORY
“The Alexa is my digital camera of choice. It has been since it came out.” - Ben Davis, Cinematographer
Someone might say it’s as simple as Arri cameras producing the best looking image. But there’s more to it than that.
To understand why Arri’s digital cameras are so popular we need to understand how the movie industry operated before digital when all productions were shot on film.
Due to the prohibitively high cost of film cameras and gear, they needed to be rented by production companies on a daily or weekly basis. Two dominant gear manufacturers emerged to produce this niche rental equipment: Panavision and Arri. A key difference between them is that Arri sells its equipment to third party rental houses or individuals, while Panavision exclusively rents out the gear it produces.
Each constructed their own camera system that had some differences, such as Panavision cameras using a PV lens mount and Arri cameras using a PL mount. However these cameras were all built around a standardised way of working that accepted most third party gear accessories, such as using 19mm rods to mount a mattebox. This meant that crew with different gear accessories could jump from a Panavision to an Arri system on different jobs without needing separate kits.
When digital began taking over from film, Panavision and Arri needed to come up with a digital alternative to their film cameras that could be interchangeable with existing lenses and gear accessories.
Over the years, many working cinematographers had built relationships with these companies and had a track record of exclusively using their gear. So when film changed over to digital they naturally were drawn to what these two companies had to offer.
Panavision produced the Genesis using some of Sony’s digital imaging technology which had a 35mm sized sensor. After early operational issues were fixed and the Genesis began seeing some initial use, it was quickly overshadowed upon the release of Arri’s competing camera the Alexa.
The quality of the Alexa’s image, its usability, its simple ProRes direct-to-edit workflow, and the fact that it could be bought outright by individuals and companies around the world meant that the Alexa took off, leaving the Genesis in the dust.
COMPATIBILITY
“It was kind of scary for me because…until then all my movies had been on film…Of course for me it was no doubt that if I was going digital it was going to be Alexa…I knew the Alexa would be the camera…that looked more the way I used to work with film.” - Natasha Braier, Cinematographer
When it came to this transition from film to digital cameras, Arri tried to make this leap as smooth as possible.
The Alexa was designed to be compatible with existing lenses and film equipment. Importantly, the user experience was also designed around the way film was shot. The camera had a recommended native EI, like a film stock, and a simple on-camera menu built around the same few settings available on film cameras, like shutter angle.
Other menu systems of competitors like the Red One were a bit more convoluted and had more requirements such as needing to do ‘black shading’ to recalibrate the black balance of the camera.
This meant the Red’s menu was more technical, like a computer, which I think appealed less to many experienced cinematographers who were used to working on film cameras that had limited settings. The Arri menu was a far easier transition.
Initially the Red also had a more complicated RAW workflow than the Arri’s ProRes one.
Over time, Arri added more Alexa cameras to their line up featuring different body sizes and formats all based on the Alev sensor. This meant that cinematographers could choose between mini cameras, large format cameras, studio cameras, or even 65mm cameras and maintain the same Alexa look and compatibility in whatever format they needed.
Arri accessories, such as their wireless follow focus, are also compatible with their cameras. It’s easier for camera assistants to work with both an Arri camera and Arri accessories - kind of like pairing a MacBook with an iPhone, rather than a MacBook with an Android phone.
Using Arri accessories on a Red is of course possible, but it limits some features such as changing settings or playing back takes remotely from the focus handset, and requires additional elements such as an R/S cable to run the camera.
Overall Arri’s simplicity and compatibility won out amongst cinematographers making the jump from shooting on film to shooting digitally.
LOOK
“I think the Alexa at the moment is the best camera out there…I thought that the image quality just in terms of its resolution and just that tiny little bit of movement from the pixels moving or whatever. The Alexa just has a little bit of life to it and I think if you go too far the image becomes lifeless. So I like that bit of texture it has.” - Roger Deakins, Cinematographer
We now get into probably the biggest reason most cinematographers love the Alexa: its look.
The Alexa is favoured for producing excellent, flattering skin tones, colour that feels filmic and resolving detail in a way that balances a high resolution with an organic texture.
This is due to two factors: the camera’s sensor and its image processing ability.
The Alev CMOS Bayer sensor that is found in the Alexa has a high number of photosites that balances image sharpness with a high dynamic range and low noise. It has a low pass filter that blocks artefacts and an IR and UV filter which avoids strange colour effects but leaves enough of the red spectrum intact to deliver pleasing skin tones.
The image processing of the Alexa was designed by Arri colour scientists who had developed their ARRISCAN and ARRILASER film scanning technology and were able to render colour in a very natural way.
While companies like Red pursued high resolutions, Arri took their time and focused largely on colour science - which to many cinematographers was, and still is, more important since most projects still get finished at a 2K resolution.
RELIABILITY
“I also bought the very first Alexa Classic you know when it came out and I go you know I’ll be fine if it’s useable for three years and it ended up being a functioning tool and I still use it…the longevity of these products has been amazing.” - Phedon Papamichael, Cinematographer
Finally, the durability and reliability of the Alexa is incredibly renowned across the industry.
As cinema cameras are designed to be rented out and used regularly, over many years, in the extreme outdoor conditions that movies are shot in, this is an important factor.
While most high end cinema cameras now have a high degree of reliability, during the early development of digital cinema cameras there were many horror stories of cameras breaking down. The Red One had a reputation for being temperamental and overheating, while the Alexa was a solid workhorse with incredible reliability.
Although these reliability issues with competing cameras have since been smoothed over, in the early days I think this made some people nervous to shoot on a Red and gave those cameras a bit of a stigma. Time on a film set is extremely valuable, and waiting for a camera to cool down before you can reboot it wasn’t very appealing.
I’ve worked with Alexas that are many years old on beaches, in deserts, in extreme temperatures and never encountered any issues. Arri’s track record of robustness, reliability and the longevity of their cameras remains to this day.
CONCLUSION
Overall I’d say these four factors: Arri’s historical legacy in the film industry, the compatibility and ease of use of their products, the all important look, and their reputation for reliability, are what have made them the default choice for most cinematographers working today.
So much so that, as a camera assistant, when you work on any camera other than an Alexa it’s seen as an exception to the norm. I don’t see this trend changing any time soon, especially once they release their much anticipated Super 35 4K camera.
Why Some Shots In Movies Feel Different
Ever notice how some movies totally wrap you up in the world of a character to the point of it being claustrophobic and uncomfortable, while other movies make you feel more like you’re just observing events in their lives unfold in a more detached way? Much of this tone and feeling is a result of the filmmaker’s choice of shot sizes.
INTRODUCTION
Ever notice how some movies totally wrap you up in the world of a character to the point of it being claustrophobic and uncomfortable, while other movies make you feel more like you’re just observing events in their lives unfold in a more detached way?
Much of this tone and feeling is a result of the filmmaker’s choice of shot sizes.
To better understand the impact that different shots have on an audience I’ll first go over the basic shot sizes that are used by filmmakers and then dive into the effect that different types of shots, and how they are edited together, have on an audience.
SHOT SIZES
Before we get into their psychology we need to understand the basics. Shot size refers to the camera’s field of view and the width of the frame relative to how a character is placed in it and how much space they occupy.
Different shot sizes can be achieved by physically moving the camera closer or further away from the subject, or by using different focal lengths. The lower the focal length number the wider the field of view is.
So, let’s familiarise ourselves with the vocabulary that filmmakers use to refer to the width of a frame. This vocabulary helps crew members to quickly communicate their vision and is based on conventions which have been established over many years.
Starting on the widest end of the spectrum we have an extreme long shot or extreme wide shot. This is where the subject or character is totally visible and only takes up a tiny fraction of the total frame. They are used to provide a vastness and scope to the location or landscape of the story.
Due to this they are often used as establishing shots - the first shot that begins a scene and sets the context and broader space that the scene takes place in.
Moving in a bit we come to the long shot or wide shot. Like in an extreme wide the subject is shown from head to toe, however unlike an extreme wide the character now occupies more space in the frame. They are the main focus of the shot now rather than the landscape.
Wides are also commonly used as establishing shots and to show the full body actions of a character.
Next up, we push in further on the character into what is called a medium shot. This is where the bottom of the frame starts from above the waist and the top of the frame includes their head. Because we are closer to the subject we can now read their facial expression and performance more clearly, yet we are also wide enough to capture some of their upper body language and actions.
There are also a few variations of this shot that go by different names. A medium long, three quarter or cowboy shot is slightly wider than a medium shot, usually starting from the knee or thigh region. The cowboy shot derived its name from its regular use in western films: the slightly wider frame allowed the audience to see both the actor’s face and the guns slung around their waist.
The close up tightly frames the face of the character so that they take up almost all the space in the shot.
The bottom of the frame usually sits just below the chin at around the shoulder level and the top of the head is either included, or framed out - which is sometimes called giving the actor a haircut. There are many different degrees of width that a close up can be shot in, depending on how intimate the filmmaker needs the shot to feel.
Finally we can push in even closer to an extreme close up. This is a shot that is so tight that all we see is a detail or single feature of the face, such as the eyes. Extreme close ups can also be used to photograph objects that hold value to the story, such as text.
An extreme close up also goes by the name an Italian shot, due to its regular use by Sergio Leone in many of his Italian Western films.
THE EFFECTS OF DIFFERENT SHOTS
When you pick up a camera and decide to shoot something, the shot size that you choose will first and foremost be determined by what you choose to include in the frame and what you choose to leave out.
When someone asks you to take a photo of them on a phone, do you get right up in their face and take a close up, or back far away to an extreme wide shot? You probably wouldn’t do either. Because the information that you need to include is them and perhaps some of the background. Most people would take a photo with a frame somewhere in between those two extremes.
Whereas for action scenes, we tend to stick to wider shot sizes for the simple reason that we need to be able to see the overall action in order to know what is going on. And if we cut to a close up of a specific detail, chances are the filmmaker wants you to notice that piece of information.
So, information is the one key effect that choosing a shot has. The other important consideration is the emotion or feeling that comes from framing a shot in a different way.
While there aren’t any definitive rules set in stone that apply to every single film ever made when it comes to interpreting emotion from a shot size, I think a broad rule can be generally applied.
In life, our proximity to a person defines our emotional relationship to them. The closer we are to someone the more intimate our connection to them is, and the further apart we are the more observational and emotionally distant we are to them.
Being face to face with a partner has a different emotional feeling than watching the actions of someone you don’t know from across the room. The same principle can be applied to shot sizes.
The more of the frame a character takes up, the more intimate and personal our connection to them feels. So having a close up of a character means the audience will unconsciously feel a greater sense of connection towards the character in that moment. It’s as if the filmmaker is saying, ‘Make sure you notice this detail or emotion. It’s very important.’
Conversely, a movie shot entirely in wide shots will distance the audience emotionally from the character and their actions, making it feel like we are passively watching them rather than being transported directly into their head and thoughts.
Now the reality is that most films are shot with, and include, a variety of shot sizes. This is so that different pieces of performances can be chopped together continuously and without jump cuts. Having different shot sizes to work with also allows the editor to control the pacing and emotional arc of the cut by cutting to different shot sizes that come with different emotional connotations.
For example, a textbook scene will start with a wide establishing shot of the location, then cut to a wide of the characters in the scene. As we get to know those characters we’ll cut in closer to a medium and go back and forth on mediums as the characters exchange general dialogue. Then, as what the characters are saying, and how they are saying it, becomes more important and intimate, the editor will start to cut to close ups.
By cutting from wider shots to progressively tighter shots, the scene is able to begin by establishing the information and space of the location, and then slowly shift the audience’s perception from a more general observation of characters to building up a more personal connection with them as we get to know them.
While this is the general rule, shot sizes can be manipulated in other ways for effect. For example, Son of Saul uses close ups of the lead character for almost the entire film. This emotionally puts you in the shoes of that character and makes the space of the film more claustrophobic and confusing as we never cut wide enough to establish the space.
An opposite approach can be found in a film like Memoria, where we stay wide for most of the film. This presents the actions to us as something to be observed from afar in a more objective way.
Editors can also play against convention by flipping the idea of starting wide and cutting in closer.
The opening to The Deathly Hallows does this by starting on a bold opening statement. We cut from an extreme close up, to a close up, to a medium shot to an extreme wide. This creates an emotional arc that moves from extremely intimate to more detached, and controls the flow of information, providing context and establishing more of the world each time we cut wider.
CONCLUSION
In shot sizes, filmmakers hold a very valuable tool in their hands.
Like a puppet master they can use the size of a shot to manipulate what the audience does or doesn’t know and, perhaps more importantly, to manipulate the very emotions of the audience and the relationship they have with the characters on screen.
6 Basic Camera Settings You Need To Know
Let’s examine 6 of the most important camera variables or settings that can be changed: EI, shutter angle, aperture, ND filters, white balance and frame rate.
INTRODUCTION
For anyone who wants to take photography or cinematography more seriously, the first step is to distance yourself from the dreaded ‘auto’ setting on a camera, move the knob over to ‘manual’ and start to understand the basic camera variables or settings that change the way that an image is captured.
Professional image makers choose to manually manipulate these settings to maintain complete control over how an image looks and not leave those creative decisions down to the whims of automatic camera software.
In this video I’ll examine 6 of the most important camera variables that can be changed. These settings affect the image in different ways and can be placed into three separate categories: exposure settings, colour settings and motion settings. These 6 variables have both technical value that can be used to control how an image looks, and also have creative value that changes the effect, mood or feeling of an image.
EXPOSURE SETTINGS
Exposure refers to how dark or light an image is. This is determined by the amount of light that goes through a camera's lens and hits the sensor of the camera - where the image is recorded.
A dark image with too little light is underexposed, a bright image with too much light is overexposed and an image with enough light is evenly exposed. A camera has four variables that can be changed to alter its exposure: EI, shutter angle, aperture and neutral density filters.
Let’s start with the EI, or exposure index setting - a good base setting to start with. This can be referred to using different metrics such as ISO, ASA, gain or EI depending on the camera, but the concept is the same. It’s a measurement of a film or camera sensor’s sensitivity to light.
The lower the value the less sensitive it is to light and the darker an image will be. Raising this value means the sensor is more sensitive to light and the brightness of an image will increase.
Most professional digital cinema cameras have what is called a ‘base’ or ‘native’ EI setting where the sensor performs best and has the most dynamic range and lowest noise. For example the Alexa sensor has a native EI of 800.
While EI can be changed on digital cameras, when using film its speed or sensitivity to light is set at a fixed level, such as 50 ASA, and cannot be altered without changing to a different kind of film stock.
The next exposure setting we can manipulate is the shutter angle or shutter speed.
A shutter is a solid layer in front of the sensor that opens and closes rapidly. When it opens it lets in light, when it closes it blocks light. The longer the shutter is open for the more light it lets in and the brighter an image is, while the shorter the shutter remains open, the less light it lets in and the darker an image is.
Cinema cameras use shutter angle, a measurement in degrees. A large shutter angle means that a larger portion of the rotating circular shutter is open and more light is let in, while a smaller shutter angle, with a smaller opening, lets in less light.
Consumer or still photography cameras use shutter speed, which shows this metric as a fraction, such as 1/50th of a second - a measurement of how long the shutter is open for. A faster shutter speed, such as 1/250th of a second, means the shutter is open for a shorter time and less light is let in, whereas a slower shutter speed, such as 1/25th of a second, means the shutter is open for longer and lets in more light - resulting in a brighter image.
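The conversion between these two conventions is simple arithmetic: the shutter angle is the fraction of each frame's duration that the shutter stays open. Here is a minimal Python sketch of that relationship (the function name is my own, for illustration):

```python
def shutter_speed_seconds(shutter_angle_deg: float, fps: float) -> float:
    """Exposure time per frame: the fraction of each frame's duration
    (shutter_angle / 360) divided by the frames captured per second."""
    return (shutter_angle_deg / 360.0) / fps

# The classic cinema setting: a 180 degree shutter at 24 fps
print(shutter_speed_seconds(180, 24))  # 1/48th of a second (~0.0208)
```

This is why a 180 degree shutter at 24 frames per second is commonly described as roughly a 1/50th of a second shutter speed.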
With these two settings done, we now move to the lens where we can set the aperture, iris or stop.
This is the size of the hole at the back of the lens that allows light to pass through it. Iris blades can either be expanded to open up the hole and let in more light, or contracted to make the hole that light passes through smaller.
On cinema lenses this is done manually by adjusting the barrel of the lens and on modern digital stills cameras it is usually adjusted via a button or scroll wheel on the camera which changes the iris of the lens internally. The aperture is either measured as a T-stop on cinema lenses or as an F-stop on stills lenses.
Whatever measurement is used, the lower the stop number the more light will be let through and the brighter an image will be. So a lens with a stop of T/2 has a large aperture opening and will let in much more light, while a lens with a stop of T/8 has a smaller opening and will let in less light.
These three settings, ISO (or EI), shutter speed (or angle) and aperture, are foundational to exposing footage and together are called the ‘exposure triangle’.
In photography these three settings are regularly adjusted individually to find the right exposure. In cinematography, however, more often than not these choices are made up front and only tweaked for their photographic effect.
For example in cinema, usually the ISO will be set to its native level, such as 800, the shutter will be set to 180 degrees or 1/50th of a second to ensure motion or movement feels ‘normal’, then the stop of the lens will be set depending on how much of the background the cinematographer wants in focus.
Opening up the aperture to a low number like T/1.3 means a shallow depth of field with much of the image out of focus, whereas stopping down to about T/8 will mean more of the image is in focus.
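The one-stop logic behind the exposure triangle can be sketched in a few lines of Python. These helper functions are my own illustration, not any camera's actual behaviour: doubling the ISO or the shutter angle adds one stop of brightness, while T-stops follow a square law because the light gathered is proportional to 1/T².

```python
import math

def stops_from_iso(iso: float, base_iso: float = 800) -> float:
    # Doubling the EI/ISO brightens the image by one stop
    return math.log2(iso / base_iso)

def stops_from_shutter_angle(angle: float, base_angle: float = 180) -> float:
    # Doubling the shutter angle doubles the exposure time: one stop brighter
    return math.log2(angle / base_angle)

def stops_from_aperture(t_stop: float, base_t_stop: float) -> float:
    # Light is proportional to 1/T^2, so each stop is a factor of sqrt(2) in the T-number
    return 2 * math.log2(base_t_stop / t_stop)

print(stops_from_iso(1600))       # +1.0: one stop brighter than native 800
print(stops_from_aperture(8, 2))  # -4.0: stopping down from T/2 to T/8 loses 4 stops
```

This also makes the trade-offs concrete: a one-stop loss from stopping down the lens can be recovered by doubling the ISO, at the cost of more noise.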
So if this is the case then how else do cinematographers adjust the brightness of an image?
They do it by manipulating the strength of the lighting and with the fourth exposure variable: neutral density, or ND, filters. These are pieces of darkened glass placed in front of the sensor or lens that decrease the amount of light let in without affecting the colour or characteristics of the image.
In filmmaking, each filter is assigned a number that shows how many stops of light it blocks, with each stop represented by 0.3. So ND 0.3 means one stop of light is blocked, and ND 0.9 takes away three stops of light.
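That 0.3-per-stop convention comes from optical density: 0.3 is approximately log10(2), so each 0.3 of density cuts the transmitted light roughly in half. A small sketch of the arithmetic (function names are mine):

```python
def nd_stops(nd_value: float) -> float:
    # Each 0.3 of neutral density blocks roughly one stop of light
    return nd_value / 0.3

def nd_transmission(nd_value: float) -> float:
    # Optical density: transmitted fraction of light = 10 ** (-density)
    return 10 ** (-nd_value)

print(round(nd_stops(0.9), 1))         # 3.0 stops
print(round(nd_transmission(0.9), 3))  # 0.126, close to the 1/8 that three stops implies
```

The slight mismatch between 0.126 and exactly 1/8 is the rounding baked into the 0.3 convention.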
Many modern cinema cameras have ND filters built into the camera which can be adjusted internally via a setting. ND filters can also be used as physical glass filters that are mounted onto the front of the lens using a tray in a mattebox, or with a screw in filter on stills lenses.
COLOUR SETTINGS
Now that we know the 4 variable settings that we can use to adjust the brightness of an image in camera, let's look at another very important setting related to colour - white balance.
White balance, or colour temperature, is measured in Kelvin and changes how warm or cool an image looks.
The two most common white balance settings are 3,200K (or tungsten) and 5,600K (or daylight). This is because when you set the camera’s white balance to 3,200K and light an actor with a warmer, tungsten light the colour will appear neutral - not overly cool or warm.
Likewise when you set the camera to 5,600K and shoot with a cooler daylight fixture or outside in natural sunlight the image will also appear neutral.
This means that the lower you set the Kelvin value of the white balance, the cooler an image will appear. So if you shoot outside in natural sunlight with the camera set to 3,200K, the image will look blue. Inversely, if you shoot under tungsten light with the camera set to 5,600K, the image will look warm.
As well as having these two preset colour temperatures, most modern cameras also allow you to pick from a range of colour temperatures on the Kelvin scale and even have an auto white balance setting which automatically picks a Kelvin value to give the image a neutral colour balance.
It should also be noted that like with EI, when shooting on film the colour temperature is fixed to either daylight or tungsten and cannot be changed without using a different film stock.
MOTION SETTINGS
Finally, let's take a look at a camera setting that only applies to moving images - frame rate. To understand what frame rate is we need to think of film not as a video clip, but rather as a series of individual images.
When shooting on film, 24 still pictures are captured every second. Each of these pictures is called a frame. To create the illusion of a moving image these pictures are then projected back at a speed of 24 frames per second. You can think of it kind of like leafing through still images in a flip book at a speed of 24 pages every second.
Therefore, recording a frame rate of 24, or 25, frames per second with a camera produces the illusion of motion at a speed which is the same as that which we experience in real life.
Frame rate can also be used to exaggerate motion for effect by keeping the same playback ‘base’ frame rate of 24 frames per second and adjusting the frame rate setting that the camera captures.
For example, if we want slow motion, we can set the camera to record 48 frames per second and then play it back at 24 frames per second. This results in twice as many frames and therefore motion that plays back at half the speed of real life.
Something important to note is that frame rate also affects exposure. Doubling the frame rate - for example from 24 to 48 frames per second - means the camera loses a stop of light and the image will therefore be darker.
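Both effects described above, the slow motion factor and the exposure penalty, fall out of the same capture-to-playback ratio. A minimal Python sketch, assuming a fixed shutter angle and a 24 fps playback base (the function names are my own):

```python
import math

def playback_speed(capture_fps: float, playback_fps: float = 24) -> float:
    # 48 fps footage played back at 24 fps moves at half real-life speed
    return playback_fps / capture_fps

def exposure_change_stops(capture_fps: float, base_fps: float = 24) -> float:
    # At a fixed shutter angle each frame gets less time, so doubling fps costs one stop
    return -math.log2(capture_fps / base_fps)

print(playback_speed(48))         # 0.5 -> half-speed slow motion
print(exposure_change_stops(48))  # -1.0 -> one stop darker
```

At more extreme frame rates the penalty compounds: 96 fps plays back at quarter speed and costs two stops, which is why high-speed shooting demands so much light.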
CONCLUSION
So, there we go: EI, shutter angle, aperture, ND, white balance and frame rate - six camera variables that every photographer or cinematographer needs to know.
If this all seems like too much technical information, the easiest way to practically get this information in your head is to find a digital camera and start experimenting with settings by shooting.
The more you practice with a camera, the more all of this information will start to become second nature. Until you get to a point where you can manipulate all of these settings unconsciously to capture that imaginative image that you see in your head.
How Paul Thomas Anderson Shoots A Film At 3 Budget Levels
Let's take a look at three films made at three different budget levels from director Paul Thomas Anderson in order to get a sense of the trajectory of his career, his approach to filmmaking and how some of his methods of production have both remained the same and slowly shifted throughout his career.
INTRODUCTION
Compelling, flawed characters. Ensemble casts. Masterfully chaotic stories rooted in universal themes. Visual innovation. Technical competence. An overall strong vision and auteur-like control. These are some of the characteristics that, in my mind, make Paul Thomas Anderson one of, if not the best, director of the last 20 or so years.
Before we get started I think it is important to note that usually in this series I tend to feature directors who have undergone a greater change in the level of budget that they work with.
With the exception of his early work, Anderson has mainly stuck to producing work around the $25 to $40 million range and has never ventured into the realm of studio blockbusters. Nevertheless, let’s take a look at three projects which have been made at increasing budget levels: Hard Eight, Boogie Nights and Licorice Pizza.
In doing this I hope to give a sense of the trajectory of his career, his approach to filmmaking and how some of his methods of production have both remained the same and slowly shifted throughout his career.
HARD EIGHT
“I was way too young to be given the keys to the car I think. I wrote it because I had to because it just came out.” - Paul Thomas Anderson
Anderson’s interest in making films began in his childhood in the San Fernando Valley and continued throughout his teenage years. He would write, direct and then film his ideas for shorts with his father’s Betamax video camera. He attended Santa Monica College but quickly became disillusioned with film school when he felt his ideas and experimentation were discouraged and filmmaking was turned into homework or a chore.
Instead he started working as a production assistant on sets in LA and managed to cobble together $10,000 from a combination of money for college, gambling winnings and his girlfriend’s credit card to produce the short film Cigarettes & Coffee.
He managed to cast Philip Baker Hall, an actor he greatly admired due to his work on Secret Honor - a film made by one of his greatest influences, director Robert Altman.
“Yeah it was based on stuff. I’d been working in Reno. I’d spent some time up in Reno and I was coming off experiences there of watching old guys. I loved this actor named Phillip Baker Hall, still love him and I heard his voice as the character. I just started writing and that’s what came out.” - Paul Thomas Anderson
He would continue this writing process throughout his career. Many of the films he would write were based on life experiences he had and set in locations that he would frequent or had lived in.
He took these experiences and places and created narrative arcs and settings with them. At the same time he often filled in the characters based on actors that he wanted to work with and wrote the roles with certain actors in mind.
Cigarettes & Coffee did very well. It got into the Sundance Short programme. When Anderson decided to turn it into a feature length film he also got into the Sundance programme in order to develop it.
It was there that he secured funding for the feature version, titled Sydney, through Rysher Entertainment all while in his early 20s. He was so young that some crew members on the set initially mistook him for a production assistant instead of the director.
“You know I just bluffed my way through directing. You gotta understand that at that time probably based on the success of Pulp Fiction and a couple other small independent films there was a lot of cash floating around from these cable companies. So if you could make a movie for under $2 million they could kinda sell it off piece by piece with just enough genre elements and a couple cast names and you could just go make your movie.” - Paul Thomas Anderson
So, with an independently funded $2 million low budget he set out to make the film by squeezing the production window into a narrow 28 days.
He cast the film by scooping up some well known actors on the tight budget and shot it all on location.
Due to the tight schedule they had loads to shoot, particularly for the casino scenes which they had to squeeze into night shoots from 10pm to around 8am the next morning.
To shoot a lot in the small window it helped that Anderson always had a very clear idea, visually, of how he wanted to capture the film, and no time was wasted with extraneous shots or deliberation.
To execute the cinematography he hired Robert Elswit, who at the time was probably the biggest name crew member on the project. They quickly found that they complemented each other and had similar visual sensibilities.
“Paul doesn’t need a lot of help in certain areas. I understand his taste, maybe so it’s never a surprise. I can anticipate what he wants to do for the most part. He always has a visual style before he starts. Always. I mean it changes to some extent but it’s completely thought out. Nobody is more prepared. Nobody has really thought through pictorial style as completely as Paul.” - Robert Elswit, Cinematographer
This style included explorative camera movement - often done on a Steadicam - and slow dolly tracking. Elswit lit with moderate contrast ratios, exposed the actors well and used hard light in a naturalistic way.
Even though both loved the look of anamorphic lenses, the producers prohibited them for budgetary reasons. As a compromise they shot Super 35 Kodak film stock with Panavision cameras and lenses in a 2.39:1 aspect ratio - an aspect ratio Anderson would often return to in his later films.
Rather than giving loads of direction to performances, or ‘manipulation’ as he called it, he tried to instil the feeling of what he wanted in the actors before production and cast all the parts exactly as he wanted them. As the cliche goes - most of acting is about casting.
When it came to editing he worked with a similar methodology. He doesn’t like cutting everything up too much and instead prefers to keep the performances intact and pull from limited takes.
When he submitted his first two and a half hour cut of the film, friction between him and the production company ignited over final cut. Rysher Entertainment cut it down, changed the music, titles and even the name of the film to Hard Eight.
As a final compromise, the company said they would be happy to release Anderson’s version of the film if he came up with the funds himself to finish it. So, he used all the money he had from a recently signed deal for his next film Boogie Nights to finance post production and cut it the way he wanted to - and agreed to give up his original title Sydney.
Paul Thomas Anderson used the modest budget to secure a solid cast of big-name actors to draw in an audience, produced the relatively small-scope story over a limited 28-day production window, saved money on production design and by shooting Super 35 with an experienced DP, and ultimately won the creative battle for final cut.
BOOGIE NIGHTS
“I went into my next situation thinking that the lesson I learned was to be paranoid, protective and don’t trust anyone. Fortunately I got to work with a great studio and a guy named Mike De Luca who was able to see what I’d gone through and said ‘No, no. Trust me and put your faith in me.’” - Paul Thomas Anderson
Let’s backtrack a bit. Anderson first came up with the idea for his second film Boogie Nights when he was 18. He wrote and directed ‘The Dirk Diggler Story’, a 30 minute mockumentary about the golden age of porn.
“After I made the short film I wrote it as a sort of full length documentary taking a kind of Spinal Tap approach, you know. But by the time I’d finished that, that format had kind of been worn out and done many times. I just kinda figured the way to do this is to go nuts and just make it straight narrative. I eventually had a shooting script of 186 pages.” - Paul Thomas Anderson
The eventual script looked at the rise and fall of a character in the 1970s porn scene and examined the idea of addiction, ego, surrogate families and communities.
Like Hard Eight, he wrote the script with certain actors in mind - including performers he had worked with on Hard Eight - and set it in the San Fernando Valley, an area he had grown up in and knew well. It featured an ensemble cast, inspired by the work of Robert Altman.
“Casting and writing are kind of the same thing. Because I write parts for actors that are my friends or actors that I don’t know that I really want to work with.” - Paul Thomas Anderson
While Anderson was in post production on his first film and buzz began building around the script, New Line Cinema came on board to produce with a $15 million budget and the promise of a more hands-off approach.
As it was a lengthy script, larger in scope and requiring the casting of many well-known actors, even the increased budget was stretched thin. After their positive experience on Hard Eight, Elswit was again hired as his DP.
Elswit remarked that from the first location scout Anderson would outline the kind of shots he wanted. That detailed level of specificity helped them to save time and money, since it was a lengthy, ambitious film for its budget level.
This vision was also important when coordinating and communicating some of the complex long takes that Anderson had in mind. The most famous of these is probably the opening shot of the film, in which a Steadicam operator started on a riding crane that boomed down, allowing the operator to step off and track characters into an interior - introducing the audience to the space and world of the film in its first three minutes.
“These long, complicated tracking shots are really fun to do. I think the actors love them. Movie acting is sort of so pieced up and chopped up. Very rarely is action called and 3 or four minutes later their scene happens. It’s just kind of fun for them to really act something through and let it breathe. Let it happen.” - Paul Thomas Anderson
Due to the arduous nature of the shot, the main Steadicam operator, Andy Shuttleworth, had a backup operator on hand, as an entire night had been scheduled for this one shot.
Elswit lit the exterior scene with two strong, hard backlights plus some smaller units meant to mimic street lights, metered at a stop of T2.8. Inside the club his team rigged the lowest-budget 70s-style disco lights they could find overhead, keeping all film lights out of the shot. The interior was brighter, at T4.
To maintain an even exposure across the different lighting levels Elswit used a wireless iris motor to slowly adjust the aperture remotely, going from T2.8 outside to T4 as the camera moved inside.
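That iris pull spans almost exactly one stop, which a little arithmetic confirms. This is a generic sketch of how T-stops relate to exposure, not anything specific to the production:

```python
import math

def stop_difference(t1: float, t2: float) -> float:
    """Exposure difference in stops between two T-stops.
    Transmitted light is proportional to 1 / T^2, so the
    difference in stops is 2 * log2(t2 / t1)."""
    return 2 * math.log2(t2 / t1)

# Riding the iris from the T2.8 exterior to the T4 interior
# compensates for roughly one stop more light:
print(round(stop_difference(2.8, 4.0), 2))  # 1.03
```

Each full stop halves or doubles the light, which is why the standard T-stop series (2, 2.8, 4, 5.6...) steps by a factor of the square root of two.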
This time they had the budget to shoot with anamorphic lenses. They used Panavision C-series and shot pretty much the entire film with three lenses: a 40mm and a 50mm for wider frames and a 75mm for close-ups.
Anderson disliked heavy film grain so they shot on Eastman 100T 35mm film stock - the slowest practical speed stock they could find.
Like on Hard Eight, Elswit liked to observe the natural light and then augment it with additional fixtures. To do this he would take stills on slide film during location recces; its limited dynamic range clearly showed what the natural light was doing.
He’d then come in and accentuate the natural light by, for example, using large tungsten lights through windows for day interiors to mimic sunlight.
Overall, the budget was spent on a large ensemble cast, re-creating the 1970s period scenes in the film, over a longer production schedule with many scenes in a long script which were shot innovatively with more extensive technical gear.
LICORICE PIZZA
After a career of producing almost non-stop critically acclaimed work Anderson turned to the 70s and, again, the San Fernando Valley for his next idea.
“I had a story that wasn’t exactly mine but that paralleled mine. My relationship with Gary Goetzman, who I don’t know if many of you know is a producer. Gary worked in the Valley. He was a child actor. When that didn’t really work out he started a waterbed business. The stories he told was an opportunity to enter into a world that I remember very well.” - Paul Thomas Anderson
Again, his script pulled from his own experiences, in a setting he was familiar with, with dialogue and characters written for specific actors, or first time actors, that he had in mind.
Having worked with Philip Seymour Hoffman many times in the past, Anderson turned to Hoffman’s son Cooper to play the role. Although it was his first film, he gave a more realistic, understated performance than Anderson saw in the other auditions.
This was paired with an on screen chemistry with another first time actor, Alana Haim, who Anderson had shot music videos for in the past.
The long screenplay with its many scenes meant he needed a budget of around $40 million - which was supplied by MGM.
In the build up to the film Anderson decided to shoot lots of tests - partly to find a look for the film and partly to see if his two leads had enough on-screen chemistry for the movie to work. This was a luxury the higher budget afforded him - compared to Hard Eight, which had to be shot in 28 days.
During these tests they also looked at different lenses and pushing and pulling different film stocks until they settled on the look they were happy with.
Having parted ways with Elswit after many films together, Anderson developed an unusual way of working without a dedicated cinematographer.
He and his key technical crew members, such as gaffer Michael Bauman, camera operator Colin Anderson and key grip Jeff Kunkel, all put their skills into a giant pot and shot projects without a director of photography as a department head.
This worked due to the director’s technical prowess and track record with his experienced team of collaborators. This was first done on Phantom Thread, which they shot in the UK and repeated on various music videos.
However, when it came to shooting in the US they needed to have an officially credited DP due to union requirements. So, Anderson and Bauman shared the official credit of cinematographer while they continued working in the same collaborative style as before, with Anderson providing a visual direction and his key crew offering their input and technical execution.
“Because we’re shooting in California you’re kind of required to have someone listed as the cinematographer versus when we were overseas…The workflow is a very collaborative environment. He and I kinda worked very closely with Colin Anderson who is the camera operator. You know, they’ll formulate a shot, the two of them will, and he and I will have done preliminary scouting and talk about the lighting and then on the day we’ll execute what the plan is.” - Michael Bauman, Gaffer & Cinematographer
Like with many of his films, they mainly shot on vintage C-series anamorphic lenses from the 70s. After doing extensive tests they chose a set of Cs which included three different 50mm lenses each with different characteristics which they picked from depending on the situation or shot.
Anderson has always been passionate about not only shooting on film but even screening the dailies, the raw footage, projected using 35mm.
“One of the things that we do is that we do film dailies. We watch dailies while we are shooting. On set we have a space that we work out of that we can project film. It’s me, it’s the camera department, the core team of the camera department basically department heads would come in and out. We use that process to figure out takes we’re going to use.” - Andy Jurgensen, Editor
Overall, Licorice Pizza’s larger budget offered the director more time and resources to fine tune his vision by doing extensive camera tests, location scouting and tests with actors before stepping onto set. This resulted in a final film which utilised extensive shots with vast period correct backgrounds, shot with a curated selection of technical gear, stunts, and an ensemble cast which included some big name performers.
CONCLUSION
Certain aspects of Paul Thomas Anderson’s way of working as a director have remained consistent throughout his career: writing scripts based on his personal experiences, with ensemble casts that are ratcheted up by chaotic actions; shooting on 35mm film, often with anamorphic lenses; working with a small, consistent crew; focusing largely on casting and then letting actors do their thing; and creating innovative visual languages based on camera movement.
However, the more established he has become, the more he has also been able to take his time to create the films, with more extended production schedules and more time for testing and finding the look before production begins.
After the departure of Elswit, his methodology has also shifted away from the traditional route of working with a credited cinematographer, to a collaborative working style where he leans on the expertise of his crew department heads.
Despite these changes, his films always have a recognisable tone and style that ties them together, regardless of the genre, script or subject matter.
Cinematography Style: Barry Ackroyd
Barry Ackroyd is a cinematographer who plays to his strengths. Over his career he’s developed an instantly recognisable style to his photography that is based around a vérité, documentary-esque search for truth and capturing realism. In this episode of Cinematography Style I’m going to take a look at the renowned work of Barry Ackroyd by going over his philosophical ideas on cinematography and outlining the gear that he uses to execute his vision.
INTRODUCTION
Barry Ackroyd is a cinematographer who plays to his strengths. Over his career he’s developed an instantly recognisable style to his photography that is based around a vérité, documentary-esque search for truth and capturing realism.
He works with multiple on-the-ground, handheld, reactive cameras that use bold, punch-in zooms and has been hired by directors such as Ken Loach and Paul Greengrass that highly value a sense of realism and heightened naturalism in their films.
So, in this episode of Cinematography Style I’m going to take a look at the renowned work of Barry Ackroyd by going over his philosophical ideas on cinematography and outlining the gear that he uses to execute his vision.
BACKGROUND
“I’m a cinematographer who was brought up in documentaries in Britain on small budgets.”
Ackroyd’s initial plans to become a sculptor changed while he was studying Fine Arts at Portsmouth Polytechnic after he discovered the medium of 16mm film.
He began working as a television cameraman in the 1980s, mainly shooting documentaries. It was there that he first encountered director Ken Loach. After working on a couple of documentaries together, Ackroyd was offered an opportunity to shoot Riff-Raff for Loach - his first feature length fiction film.
He continued to shoot numerous fiction films and documentaries for Loach during this period, culminating in The Wind That Shakes The Barley which won the Palme d’Or at Cannes Film Festival. Following this success he began working on other fiction projects for various well known directors such as: Paul Greengrass, Kathryn Bigelow and Adam McKay.
PHILOSOPHY
“Sometimes it’s better just to play to your strengths rather than to try to diversify too much…That was a choice I made, to play to my strengths.”
One of those strengths is a look rooted in a documentary style of working, informed by his early work on TV docs. Those documentaries usually relied on operating the camera handheld from the shoulder in order to record moments as they happened live. In the real world, events often happen only once, so you need an easily mobile camera to observe and capture them.
This is the opposite of fiction filmmaking, where events and scenes can be played out multiple times, and are more often than not photographed in a carefully curated, composed visual style. Rather than going the usual fiction cinematic route, Ackroyd took documentary conventions and ways of working and applied them to fiction filmmaking.
For example, he prefers always shooting movies on real locations whenever possible, over shooting them on a constructed set or in a soundstage - even if that real location is a ship on the ocean.
Ackroyd tends to steer away from setting things up too perfectly and instead leans towards a look where capturing a version of reality is far more important than capturing a ‘perfect’ image.
“I think if you look at my work I’m always trying to push what I’ve done before…and actually I push it towards imperfection…There’s a kind of state that you get into where you’re just in tune with what’s happening in front of the camera.”
To capture images realistically, honestly and with as few barriers as possible he relies on working with multiple camera operators and puts a lot of trust in his crew members. He gives his crew lots of credit on set and in interviews, from the focus puller to the sound recordist, and maintains the importance of teamwork and a group effort in creating a film.
“I used to say that in documentaries the best shot that you get in documentaries is out of focus and underlit and looks rubbish. You know that it had to be in the film because it was absolutely right at the time…I think that’s what you’re striving for, you know. Not to overwhelm people with the beauty. Not to fall in love with the landscape…But to get the picture that…you’re involved with it.”
An example of how he seeks authenticity through imperfections can be found in his approach to blocking scenes with directors and actors. Usually actors rehearse a scene on set and marks are then put down on the floor to indicate the exact positions the actors must stand in to be perfectly lit, perfectly framed and perfectly angled for the shot.
Ackroyd prefers not to mark actors. He sets up any lights he needs either overhead or outside the set so that the actors have the freedom to move around as they like when they play out the scene. Since they don’t have to worry about hitting specific marks, he finds that the actors loosen up more, which injects a realist spontaneity into how their performances are captured.
Sometimes this leads to technical imperfections like moments that are out of focus or frames that aren’t classically composed. But it also injects an energy into the images which is undeniable.
GEAR
“You know I like to get physically involved. We ran around with the cameras. We had four or five cameras at times…In any one setup you’re trying to talk to all the guys, see what they’ve done, see what the next shot should be and give, you know, support and advice.”
As we mentioned, Ackroyd likes shooting with multiple handheld cameras. This allows his operators to quickly react and capture details or moments of performance. It also provides the director and the editor with multiple angles and perspectives which they can cut to in order to build up the intensity and pacing in a scene.
Directors who he has repeatedly collaborated with like Paul Greengrass and Kathryn Bigelow are known for their preference for quick cutting. Ackroyd’s style provides them with the high number of angles that are needed to work in this way.
One of the most important camera tools he uses is focus. He describes focus as being the best cinematic tool, even better than a dolly, crane or tripod, because focus mimics what we naturally do with our eyes and can be used to shift the attention of the audience to a particular part of the frame. He isn’t overly strict with his focus pullers and in fact prefers the natural, more organic method where people drift in and out of focus over every single shot having perfectly timed, measured and calculated focus pulls.
Another important tool in his toolbox is his use of zoom lenses. Again this goes against traditional fiction cinematography principles which ascribe a greater value to prime lenses over zooms - which most documentaries are shot with. He uses quick punch-in zooms as a tool to direct the focus of the audience in the moment. For example if a line of dialogue or an energetic moment of performance is particularly important his operators may push into it with a quick zoom for emphasis.
His choice of camera gear is a bit of a mishmash. On the same film he may mix formats such as digital, 35mm and 16mm film, with different prime and, of course, zoom lenses. For example, Captain Phillips involved shooting aerial shots digitally, while sequences in the fishing village and on the skiff were shot in Super 16, before switching to 35mm film once the characters boarded the large shipping vessel.
He likes the texture of film and has often used the higher-grain 16mm to complement his look. He famously used Super 16 to support the raw, on-the-ground documentary aesthetic of The Hurt Locker.
“Well then I thought it has to be Super 16. We have to get back to the basics. Get down to the lenses you can carry and run with and will give you this fantastic range of wide shots and big close ups…The first thing everybody said was that, ‘well, the quality is not going to be good.’ Well, nobody has criticised the quality of the film. They’ve only praised it.”
He has a preference for Fujifilm stock as it fares well in high-contrast lighting situations. When shooting on film he would sometimes purposefully underexpose the negative and then bring the levels up later in the DI to introduce more grain into the image.
Ackroyd liked to combine 250D and 500T Fujifilm stocks when shooting Super 16 or 35mm. However, after Fujifilm’s motion picture stocks were discontinued and no longer available, he transitioned to shooting on Kodak film or with digital cameras - mainly the Arri Alexa Mini.
On Detroit he used the Arri Alexa Mini in Super 16 mode and shot with Super 16 lenses to introduce noise and grain to the image and get a Super 16 feel, which was further amped up in the grade, all while maintaining the benefits of a digital production.
The Aaton XTR is his go to Super 16 camera, so much so that he owns one. He has used different 35mm cameras such as the Aaton Penelope, the Arriflex 235, the Moviecam Compact and the Arricam LT. Some of his favourite Super 35 zooms are the 15-40mm and, in particular the 28-76mm Angenieux Optimo zoom, which are both light enough to be handheld and provide a nice zoom range that he can use to punch-in with.
He’s also used the Angenieux Optimo 24-290mm, sometimes with a doubler when he needs a longer zoom. It’s too heavy to be used handheld but he has used it with a monopod to aid in operating the huge chunk of a lens and still preserve a handheld feel. Some other zooms he has used include a rehoused Nikon 80-200mm and the Canon 10.6-180mm Super 16 zoom.
Although he prefers zooms he often carries a set of primes which have a faster stop and can be used in low light situations such as Zeiss Super Speeds or Cooke S4s.
Due to the lack of blocking or focus marks, he usually gives his focus pullers a generous, deep stop to work with of around T/5.6 and a half.
To further support his look based on realism and documentary, he lights in a very naturalistic manner. He tries to refrain from lighting exteriors altogether and, for interiors, adds motivated touches of artificial light when he needs to balance the exposure in a scene. A lighting tool he likes for this is the single Kino Flo tube, which can easily be rigged overhead or out of sight to provide a low level of fill to a scene.
CONCLUSION
Barry Ackroyd’s cinematography is more about deconstructing photography than it is about trying to produce a perfectly beautiful image.
To him imperfections are a signal of authenticity and an expression of realism rather than a flaw. Breaking down an image can’t be done competently without a great degree of skill and knowledge.
His films aren’t created by just picking up a bunch of cameras and pointing them in the general direction of the action; rather, they are made through deliberate thought and the cultivation of a style that emits as much intensity, feeling of reality and truth as possible.
Does Sensor Size Matter?
Since there are loads of different cameras with loads of different formats and sensor sizes out there to choose from, in this video I’ll try to simplify it a bit by going over the five most common motion picture formats and discussing the effect that different sensor sizes have on an image.
INTRODUCTION
The sensor or film plane of a camera is the area that light hits to record an image. The size of this area can vary a lot depending on the camera, with each sensor size or format having a subtly different look.
Since there are loads of different cameras with loads of different formats and sensor sizes out there to choose from, in this video I’ll try to simplify it a bit by going over the five most common motion picture formats and discussing the effect that different sensor sizes have on an image.
5 MOTION PICTURE FORMATS
The size of a video camera's film plane or sensor ranges all the way from the minuscule one third inch sensor found in smartphones or old camcorders up to the massive 70x52mm 15-perf Imax film negative. But, rather than going over every single sensor in existence, I’m going to take a look at five formats which are far and away the most popular sizes used in film production today and have been standardised throughout film history.
While there are smaller sizes like 8mm film, or sizes in between like the Blackmagic 4K’s Four Thirds sensor, these are used far less frequently in professional film production and are outliers rather than a standard. I’ll also only be looking at video formats, so I won’t be going over any photographic image sizes such as 6x6 medium format.
The smallest regularly used format is Super 16. The film’s smaller size of around 7.4 by 12.5mm makes it a cheaper option than the larger gauge 35mm, as less physical film stock is required.
Due to this it was often used in the past to capture lower budget productions. Now that digital has overtaken film, Super 16 is mainly chosen for its optical capabilities. Its lower resolution look and prominent film grain means that it is often used today to evoke a rough, documentary-esque feeling of nostalgia.
Some digital cameras, such as the original Blackmagic Pocket Cinema Camera, have a sensor that covers a similar area to Super 16, and cameras such as the Arri Alexa Mini have specialised recording modes which sample only a Super 16-sized area of the sensor.
Moving up, the next, and by far the most common, format is Super 35. This format is based on the size of 35mm motion picture film, with a full-aperture negative covering an approximate area of 24.9 by 18.7mm. The name 35mm refers to the total width of the film strip, including the perforated edges on either side of the negative area.
Depending on the budget, aspect ratio and lenses, different amounts of vertical space, measured in perforations, can be shot. The frame can be cropped to use less film stock or to extract a widescreen image when using spherical lenses. Shooting with anamorphic lenses, which optically squeeze the image, means using the full height of the negative or sensor and then de-squeezing the image at a later stage to arrive at a 2.39:1 aspect ratio.
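The de-squeeze arithmetic itself is simple. The negative dimensions below are illustrative assumptions for a 2x anamorphic extraction area, not exact camera-aperture figures:

```python
def desqueezed_ratio(width_mm: float, height_mm: float, squeeze: float = 2.0) -> float:
    """Aspect ratio of the final image after de-squeezing an
    anamorphic negative exposed with the given squeeze factor."""
    return (width_mm * squeeze) / height_mm

# An assumed 21.95 x 18.6mm anamorphic frame shot with 2x lenses
# de-squeezes to roughly the familiar widescreen ratio:
print(round(desqueezed_ratio(21.95, 18.6), 2))  # 2.36
```

The result lands a touch under 2.39:1; in practice a slight crop during extraction brings the projected image to the standard ratio.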
Many digital cinema camera sensors are modelled on this size, with some minor size variations depending on the camera, such as the Arri Alexa Mini, the Red Dragon S35 and the Sony F65. Since this format is the most popular in cinema, most cinema lenses are designed to cover a Super 35 size sensor. Meaning this format has the widest selection of cinema glass available on the market.
Stepping up from Super 35 we get to what is called a large format or full-frame sensor. This size is modelled on the 35mm still photography format used by DSLR cameras such as the Canon 5D, and is larger than Super 35. It’s also around the same size as 8-perf VistaVision film.
Although digital sensors differ a bit depending on the camera, it is usually about 36 by 24mm. Some cameras with this sensor size include the Alexa LF, the Sony Venice and the Canon C700 FF.
This large format is a middle ground between Super 35 and the next format up - 65mm.
Originally, this format was based on 65mm gauge film, roughly 3.5 times the area of standard 35mm, with a frame measuring 52.6 by 23mm across 5 vertical perforations and a widescreen aspect ratio of 2.2:1. The Alexa 65 has a digital sensor that matches 65mm film and is a viable digital version of this format.
Finally, the largest motion picture format you can shoot is Imax film. With an enormous 15 perforations per frame, an Imax negative covers a 70.4 by 52.6mm image area.
Due to its enormous negative size and the large, specialised cameras required to shoot it, this format is prohibitively expensive and out of the budget range of most productions. But, it has seen a bit of a resurgence in recent years on high budget blockbusters from directors such as Christopher Nolan who champion the super high fidelity film format.
THE EFFECTS OF SENSOR SIZES
With these five formats in mind, let’s examine some of the effects and differences between them. There are a few things that choosing a format or sensor size affects.
The most noticeable optical effect is that different formats have different fields of view. What this means is that if you put the same 35mm lens on a Super 16, Super 35 and a large format camera, the smaller the sensor, the tighter the recorded image will appear.
So the field of view on a large format camera will be much wider than on a Super 16 camera. Since the field of view is wider, larger formats also render a different feeling of depth and perspective.
Because of this difference, the sensor determines the range of focal length lenses that need to be used on the camera. To compensate for the field of view differences, smaller formats like Super 16 need to use wider angle lenses to get to an image that sees the same amount of information, while larger formats need to use longer lenses for that same frame.
For example, to get the same field of view from a 35mm lens on a Super 35 sensor, a Super 16 camera needs to use a 17.5mm focal length and a large format, full-frame camera needs to use a 50mm focal length.
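These equivalences can be approximated with a diagonal-based crop factor. The sensor dimensions below are the approximate figures mentioned in this video; note that the round 17.5mm and 50mm figures above come from simple 2x and 0.7x crop factors, so the diagonal maths lands slightly off those values:

```python
import math

# Approximate sensor/negative dimensions in mm (assumed figures):
FORMATS = {
    "super16": (12.5, 7.4),
    "super35": (24.9, 18.7),
    "full_frame": (36.0, 24.0),
}

def diagonal(fmt: str) -> float:
    w, h = FORMATS[fmt]
    return math.hypot(w, h)

def equivalent_focal(focal_mm: float, from_fmt: str, to_fmt: str) -> float:
    """Focal length on to_fmt giving roughly the same diagonal
    field of view as focal_mm does on from_fmt."""
    return focal_mm * diagonal(to_fmt) / diagonal(from_fmt)

# A 35mm lens on Super 35 corresponds to roughly:
print(round(equivalent_focal(35, "super35", "super16"), 1))    # 16.3 (wider lens)
print(round(equivalent_focal(35, "super35", "full_frame"), 1)) # 48.6 (longer lens)
```

Aspect ratio differences between formats mean no single crop factor is exact, which is why published equivalence charts vary by a few millimetres.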
Since the focal lengths used affect depth of field, this is another knock-on effect of format choice. Longer focal lengths have a shallower depth of field - the area of the image which is in focus. Full-frame cameras that use longer focal lengths to match a given frame will therefore have a shallower depth of field. This means that the larger the format, the more the background falls out of focus and the more the subject is separated from the background.
This is helpful for creating a greater feeling of depth for wide shots which people often perceive as looking more ‘cinematic’.
One negative effect of this is that the 1st AC’s job of keeping focus consistently sharp becomes far more difficult. For this reason smaller formats such as Super 16 are far more forgiving for focus pullers: they have a deeper depth of field where more of the image is in focus, so the margin for error is larger.
The grain and resolution that an image has is also affected by the size of the format. The smaller the format is, the more noticeable the grain or noise texture will usually be, and the larger the sensor is the finer the grain will appear and the greater clarity and resolution it often has.
Sometimes cinematographers deliberately shoot smaller gauge formats like Super 16 to create a more textured image, while others prefer larger formats like 65mm for its super clean, sharp, low-noise look.
So those are the main optical effects of choosing a format.
Smaller formats require wider focal lengths, have a deeper depth of field, show more grain and overall feel a bit flatter.
Larger formats require longer focal lengths, have a shallower depth of field, less grain, greater resolution and clarity and overall have a more three-dimensional look with an increased feeling of depth.
There are also the all-important practical implications to be considered. Generally speaking the larger the format, the larger the form factor of the camera will be to house it and the more expensive it is to shoot on.
This calculation may be different when comparing the costs of digital and film, but when comparing all the digital formats, renting the cameras and lenses for 65mm will be more expensive than a Super 35 camera. Likewise, when comparing film formats 16mm is vastly cheaper than Imax.
So broadly speaking, smaller formats tend to be more budget friendly and come in a smaller housed package.
DOES SENSOR SIZE MATTER?
Coming back to the question of whether sensor size matters, I don’t think any one sensor is necessarily better than another. But the effects that they produce are certainly different.
Filmmakers that want an image that immerses an audience in a crystal clear, highly detailed, wide vista with a shallow depth of field will probably elect to shoot on a larger format.
Whereas those who want a more textural, nostalgic or rough-feeling photography with less separation between the subject and the background may be drawn to smaller gauge formats.
As always, the choice of what gear is most suitable comes down to the needs of the project and the type of cinematic tone and photographic style you are trying to capture.
Using Colour To Tell A Story In Film
Let’s examine this idea of colour by going through an introduction to colour theory, look at how filmmakers can create a specific colour palette for their footage and check out some examples of how colour has been used to aid the telling of different stories.
INTRODUCTION
Cinematography is all about light.
Light is a complex thing. It can be shaped, it can come in different qualities, different strengths and, importantly, it can take the form of different colours.
So, let’s examine this idea of colour by going through an introduction to colour theory, look at how filmmakers can create a specific colour palette for their footage and check out some examples of how colour has been used to aid the telling of different stories.
WHAT IS COLOUR THEORY?
Colour theory is a set of guidelines for colour mixing and the visual effects that using different colours has on an audience.
There are many different approaches to colour theory, ranging from ideas dating all the way back to Aristotle’s time up to later studies on colour, such as those by Isaac Newton. But let's just take a look at some basic ideas and see how they can be applied to film.
When light of different wavelengths strikes objects with different physical properties, we perceive a colour, which we put into a category and give a name.
Primary colours are a group of colours that can be mixed to form a range of other colours. In film these are often, but not always, used sparingly in a frame. A splash of red in an otherwise green landscape stands out and draws the eye.
An important part of colour theory in the visual arts is knowing complementary colours. When two of these colours are combined they make white, grey or black. When the spectrum of colours is placed on a colour wheel, complementary colours always sit directly opposite each other.
When two complementary colours are placed next to each other they create the strongest contrast for those two colours and are generally viewed as visually pleasing. Cinematographers often combine complementary colours to create increased contrast and separation between two planes in an image. For example, placing a character lit with an orange tungsten light against a blueish teal background creates a greater feeling of separation and depth than if both the character and the background were similar shades of orange.
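To make the ‘opposite sides of the wheel’ idea concrete, here’s a tiny sketch in Python. Treating hue as a position in degrees on a 360-degree wheel is a standard convention (orange sits around 30 degrees), and the complement is simply the hue rotated halfway around:

```python
# Complementary colours sit directly opposite each other on a 360-degree
# colour wheel, so the complement of a hue is that hue rotated 180 degrees.

def complementary_hue(hue_degrees: float) -> float:
    """Return the hue directly opposite on the colour wheel."""
    return (hue_degrees + 180) % 360

# Orange (~30 degrees) complements a blue/teal (~210 degrees) -
# the classic orange-and-teal pairing used for subject/background separation.
print(complementary_hue(30))   # 210.0
print(complementary_hue(210))  # 30.0
```

The same rotation explains why the orange-lit character against a teal background reads as maximum colour contrast: the two hues sit as far apart on the wheel as possible.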
When it comes to the psychology of using colour, cinematographers generally fall into two camps - or somewhere in the middle. Some cinematographers such as Vittorio Storaro think that certain colours carry an innate, specific psychological meaning.
“Changing the colour temperature of a single light, changes completely the emotion that you have in your mind. I didn’t know at the time the meaning of the colour blue. It means freedom.” - Vittorio Storaro
Other filmmakers rely more on instinct and what feels best when lighting or creating a colour palette for a film. The psychology of colour can change depending on the context and background of the audience.
As well as being a means of representing and expressing different emotions, deliberate and repeated uses of colour can also be used by filmmakers as a motif to represent themes or ideas.
Another important part of colour theory is warm and cool colours. The Kelvin scale is a way of measuring the warmth of light, with lower Kelvin values being warmer and higher Kelvin values being cooler.
Warm and cool colours can have different psychological effects on an audience and can also represent different physical, atmospheric conditions. Warmer colours can be used to emphasise the feeling of physical heat in a story, while cooler colours can make the setting of a story feel cold or damp.
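As a rough sketch of the Kelvin scale in code — the reference values below are typical, approximate figures for each source, and the idea of a single ‘neutral’ point is a simplification of real colour science:

```python
# Approximate colour temperatures for some common light sources.
# On the Kelvin scale, lower values read warm (orange) and
# higher values read cool (blue).
REFERENCE_SOURCES = {
    "candle flame": 1900,
    "tungsten lamp": 3200,
    "daylight / HMI": 5600,
    "overcast sky": 7000,
}

def describe_warmth(kelvin: int, neutral: int = 5600) -> str:
    """Label a colour temperature relative to a daylight-balanced neutral point."""
    if kelvin < neutral:
        return "warm"
    if kelvin > neutral:
        return "cool"
    return "neutral"

for name, kelvin in REFERENCE_SOURCES.items():
    print(f"{name}: {kelvin}K -> {describe_warmth(kelvin)}")
```

Note that ‘neutral’ depends on what the camera is balanced to: a 3,200K tungsten lamp reads warm on a daylight-balanced camera, but neutral on a tungsten-balanced one.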
CREATING A COLOUR PALETTE
Now that we have a basic framework of colour theory to work with, let's look at the different ways that filmmakers can make a colour palette for a movie. Colour palettes in film can be created using three tools: production design and costume, lighting and in the colour grade.
The set and the clothing that the characters are dressed in are always the starting point for creating a colour palette. In pre-production, directors will usually meet with the production designer and come up with a plan for the look of the set. They might limit the art director to certain colours, or decide on specific tones for key props. The art team will then go in and dress the set by doing things such as painting the walls a different colour and bringing in pieces of furniture, curtains and household items that conform to that palette.
Since characters are usually the focus of scenes and we often view them up close, choosing a colour for their costume will also have a significant impact on the overall palette. This may be a bold primary colour that makes them stand out in the frame, or something more neutral that makes them blend into the set.
With a set to work with, the next step in creating a movie’s colour palette is with lighting.
Traditionally, film lighting is based around the colour temperature of a light, which, as we mentioned, could be warm, such as a 3,200K tungsten light, or cool, such as a 5,600K HMI. On top of this, cinematographers can also choose to introduce a tint to get to other colours. This can be done the old school way by placing different coloured gels in front of lights, or the modern way by changing the hue or tint of LEDs.
DPs can either flood the entire image with monochromatic coloured light, or, as is more common, light different pockets of the image with different colour temperatures or hues. In the same way that we create contrast by having different areas of light and shadow in an image, we can create contrast by having different areas of coloured light.
Once the colour from the set and the lighting has been baked into the footage, we move into post-production, where it’s possible to fine tune this colour in the grade.
An image contains different levels of red, green and blue light. A colourist, often with the guidance of a director or cinematographer, uses grading software like Baselight or DaVinci Resolve to manipulate the levels of red, green and blue in an image.
They can change the RGB of specific values of light, like introducing blue into the shadows, or adding magenta to the highlights. They can also create power windows, to change the RGB values in a specific area of the frame, or key certain colours so that they can be individually adjusted. There are other significant adjustments they can make to colour such as determining the saturation or the overall intensity of the colour that the image has.
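As a simplified illustration of what ‘introducing blue into the shadows’ means at the pixel level, here’s a toy sketch in Python. The threshold and tint values are made up for the example, pixels are assumed to be (r, g, b) tuples in the 0.0-1.0 range, and a real grading engine works with far more sophisticated curves and colour science:

```python
# Toy model of a shadow tint: lift the blue channel only in dark pixels,
# scaled by how dark they are, leaving bright pixels untouched.

def luminance(r: float, g: float, b: float) -> float:
    """Approximate perceived brightness using Rec. 709 luma weights."""
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def tint_shadows(pixel, blue_amount=0.1, threshold=0.3):
    """Add blue into pixels darker than the threshold."""
    r, g, b = pixel
    luma = luminance(r, g, b)
    if luma < threshold:
        # The darker the pixel, the stronger the tint.
        strength = (threshold - luma) / threshold
        b = min(1.0, b + blue_amount * strength)
    return (r, g, b)

dark = (0.05, 0.05, 0.05)
bright = (0.9, 0.9, 0.9)
print(tint_shadows(dark))    # blue channel lifted in the shadow pixel
print(tint_shadows(bright))  # bright pixel passes through unchanged
```

Adding magenta to the highlights would be the mirror image of this: raising red and blue together in pixels above a brightness threshold instead of below one.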
USING COLOUR TO TELL A STORY
“It’s a show about teenagers. Why not make a show for the teenagers that looks like how they imagine themselves. It’s not based on reality but mostly on how they perceive reality. I think colour comes into that pretty obviously.” - Marcell Rév
When coming up with a concept for the lighting in Euphoria, instead of assigning very specific psychological ideas to colour, Marcell Rév used colour more generally as a way to elevate scenes from reality.
He wanted to put the audience in the emotionally exaggerated minds of some of the characters and elevate the level of the emotions that were happening on screen. In the same way that the often reckless actions of the characters continuously ratcheted up the level of tension in the story, so too did the exaggerated, brash, coloured lighting.
To increase the potency of the visuals he often played with a limited palette of complementary colours. He avoided using a wide palette of colours, as it would become too visually scattered and decrease the potency of the colours that he did use.
Along with his gaffer he picked out gels, mainly light amber gels which he used with tungsten lights and cyan 30 or 60 gels which he used with daylight HMIs. They also used LED Skypanels, which they could quickly dial specific colour tints into.
“That light…that colour bouncing off the screen and arriving at us we don’t see it only with the eyes, we see it with the entire body…because light is energy. I’m sending some vibrations to you, to the camera, to the film…unconsciously.” - Vittorio Storaro
When photographing Apocalypse Now, Vittorio Storaro was very deliberate about his use of colour. He wanted the colours to be so strong and saturated that the world on film almost became surrealistic.
He wasn’t happy with Kodak’s 5247 100T film stock at the time, so he got the film laboratory to flash the negative to get the level of contrast and saturation which he was happy with.
In the jungle scenes he didn’t want to portray the location naturally. He sometimes used filters to add a monochromatic palette which was more aggressive, to increase the tension.
“I can use artificial colour in conflict with the colour of nature. I was using the symbolic way that the American army was using to indicate to the helicopter…They were using primary and complementary colours. I was using those kinds of smoke colours to create this conflict.” - Vittorio Storaro
He also described how the most important colour in the film was black, particularly in the silhouetted scenes with Kurtz. He felt black represented the unconscious and was most appropriate for scenes where the audience was trying to discover the true meaning of Kurtz, with small slivers of light, or truth, emerging from the depths of the unconscious.
What A Boom Operator Does On Set: Crew Breakdown
In this Crew Breakdown video I’ll go over the position in the sound department of the boom operator, to break down what they do, their average day on set and some tips which they use to be the best in their field.
INTRODUCTION
In this series I go behind the scenes and look at some of the different crew positions on movie sets and what each of these jobs entails. If you’ve ever watched any behind the scenes videos on filmmaking you’ve probably seen this person, holding this contraption.
In this Crew Breakdown video I’ll go over the position in the sound department of the boom operator, to break down what they do, their average day on set and some tips which they use to be the best in their field.
ROLE
The boom operator, boom swinger or first assistant sound is responsible for placing the microphone on a set in order to capture dialogue from the actors or any necessary sounds in a scene.
They do this by connecting a boom mic, or directional microphone, to a boom pole. The mic is then connected either with an XLR cable or wirelessly to a sound mixer where the sound intensity is adjusted to the correct level.
On feature films this mixing is done separately by the sound recordist who heads the department, and is responsible for recording all the audio and delegating the positioning of the mic to the boom operator. However, for low budget features, TV shoots, documentaries or commercials, the role of the sound recordist and the boom swinger is sometimes performed simultaneously by one person.
To get the best possible sound and capture dialogue clearly, the microphone usually needs to be placed as close as possible to the actors. Since film frames are quite wide and see a lot of the location, the best way to get the microphone close to the action without it entering the shot is to attach it to a boom pole with the mic angled downwards, and use the length of the boom held overhead to position the microphone directly above the actors, just outside the top of the frame.
For stationary shots without camera movement this involves finding a position for the boom and holding it throughout the take. Sometimes for long documentary interviews this can be done with the help of a stand. However, for shots which involve camera movement or actors that are moving and talking, the boom operator is tasked with performing a kind of dance. They need to move tighter or wider as the camera does, always fighting to get the mic as close as possible while making sure it never dips into the frame and enters the shot. If this happens during a take the DP, director or 1st AD will often call out ‘boom’.
While the act of operating the boom mic during filming is their primary responsibility, there are also some other tasks that boom swingers need to perform.
Dialogue is usually captured by two different types of microphones, the boom mic, as we mentioned, as well as lapel or lav microphones. These are small microphones which are strapped directly onto the actors with a clip or with an adhesive tape. They are usually positioned under clothing near the chest or throat area so that they will pick up intimate sound but be unseen by the camera.
These microphones are attached to a transmitter. This wirelessly transmits the recorded sound to a receiver that is then connected to a sound mixer where the audio feed is recorded. The boom operator is usually responsible for attaching this lav microphone to the necessary actors.
It’s best practice to always inform and explain how you’ll be attaching the mic to the actor, as it can be a bit invasive. Experienced actors are aware of this but it’s still professional to ask their permission before touching them or putting on the lav mic.
The boom operator also assists the sound recordist with any necessary technical builds or changes, such as switching out batteries or attaching a lock it box for synchronising timecode onto the camera.
AVERAGE DAY ON SET
After arriving on set the boom operator will track down a copy of the shooting schedule and sides, which list the scenes planned for the day and the script pages with dialogue for those scenes. They’ll read the sides to see what dialogue needs to be recorded and which actors need to be mic’d up. They’ll replace the necessary batteries and make sure everything is charged up and ready to go.
The boom operator or the sound recordist may test that their audio feed is getting transmitted to VT and hand out a pair of headphones and receiver to the director for them to monitor the sound during takes.
The boom operator will mic up any actors in the scene that have dialogue and prep their boom setup. This may involve changing their gear, such as fitting a windshield, or ‘deadcat’ as the furry cover is called, to cut out wind noise if they are recording a scene outdoors.
They’ll find out the lens that the camera is shooting with, or take a peek at the monitor to see how wide the frame is and how close they can position the boom without getting in the shot.
Once ready to record a take, the 1st AD will call ‘roll sound’, the sound recordist will begin recording, and then the cameras will roll. The boom swinger will then move the mic over so that it’s directed at the 2nd AC, who will announce the information on the clapper board and then give it a clap so that the editor has a point where they can sync the sound with the video. The boom operator will then quickly position the boom above the actors and be ready to begin recording dialogue.
They always wear headphones while recording which can help them to position the boom further away, closer or at a different angle to get the best possible sound.
When recording sound the default rule is to usually capture whatever audio is present on screen.
So, for wider shots where multiple actors are in a frame the boom swinger may alternate and move the boom closer to whoever is speaking, positioning the mic back and forth as the actors exchange dialogue. Then for close ups where only a single actor is on screen they will usually focus only on recording sound for that actor alone. When the camera switches to a close up of the next actor then their full sound will get recorded. By doing this they will then have clear dialogue for both actors which the editor can use at their discretion.
Sometimes if the boom was unable to capture a certain sound effect or a certain line of dialogue during a scene then they will pull the actor aside between setups and record what is known as wild sound - sound that isn’t recorded with any specific video footage but which may be synchronised or used later.
They may also need to record ‘room tone’, a quiet ambient recording of the space without any dialogue. This can be placed as a layer underneath the dialogue in the edit to make the cuts more natural and provide a background noise to the scene.
In this case the 1st AD will make an announcement to the set, the crew will awkwardly freeze so as not to make any noise and the mic will record about a minute of quiet, ambient sound.
TIPS
Since any footage where the boom dips into shot will be unusable, one of the most valuable skills for boom operators is knowing how wide a frame is. If you’re starting out, it’s useful to look at the framing on the monitor and find what is called an ‘edge of frame’: a marker on the set which indicates what is in or out of the shot.
As boom operators become more experienced they’ll begin to learn focal length sizes and be able to place the boom without needing to look at a frame. For example, if they know how wide a 35mm lens is then they can imagine its field of view and be sure to stay out of it.
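The intuition of ‘knowing how wide a 35mm lens is’ can be made explicit with a little trigonometry: the horizontal angle of view follows from the focal length and the sensor width. A quick sketch — the 24.89mm default is a typical Super 35 frame width, and real figures vary slightly by camera:

```python
import math

def horizontal_fov_degrees(focal_mm: float, sensor_width_mm: float = 24.89) -> float:
    """Horizontal angle of view for a lens, using a simple pinhole-camera model."""
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_mm)))

# A 35mm lens on Super 35 sees roughly a 39-degree horizontal angle,
# while an 18mm wide lens sees a much broader view.
print(f"35mm: {horizontal_fov_degrees(35):.1f} degrees")
print(f"18mm: {horizontal_fov_degrees(18):.1f} degrees")
```

The shorter the focal length, the wider the wedge of the set the camera sees, and the further back or higher the boom has to sit to stay out of it.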
As space on a set can sometimes be limited, it is useful to find the best position to stand before shooting begins. When finding a position it’s important to be mindful of how the camera and actors will move and to identify any lights that may cast shadows or reflective surfaces that will pick up the mic in shot.
Never position the boom between a light source and a character, as it will cast a shadow of the gear in the image. Adjusting the length of the boom so that it has enough reach, but isn’t overextended, will also save space and keep it out of the way of other crew members, gear and lights.
Boom mics are directional, meaning that they capture whatever sound they are pointed at most prominently. Any sound behind or off to the side of the mic will be recorded much softer. Therefore positioning the boom overhead with the mic facing down towards the speaker is most common.
In a dialogue scene with two people close together the operator may be able to turn the angle of the directional mic toward whoever is speaking at the time without needing to move the position of the actual pole.
If the frame has a lot of headroom it is also possible to boom sound from underneath the frame. However, when recording outside this may sometimes pick up excess aviation noise if any planes pass overhead, so should be avoided unless it’s necessary.
If you’ve ever operated a boom during a long scene you’ll know that holding it in an awkward position gets surprisingly heavy surprisingly quickly. The solution is to either spend more time at the gym or find positions that better mitigate the weight of the mic pole, such as resting it on a shoulder or on the head.
The Most Popular Cinema Lenses (Part 4): Panavision, Tokina, Atlas, Canon
In the fourth part of this series I’ll look at the Panavision G-Series anamorphics, Tokina Vistas, Atlas Orions and the Canon S16 8-64mm zoom lens.
INTRODUCTION
While many think that only the camera is responsible for the look that footage has, the glass that is put in front of the camera has just as great an influence over how a film looks.
In the fourth part of this series I’ll look at some popular lenses which are used in the film production industry and break down the ergonomics of each lens, the format they are compatible with, as well as their all important look, by using footage from movies shot with this glass.
Hopefully this will provide some insight into what kind of jobs and stories each lens is practically and aesthetically suited for. Let’s get started.
PANAVISION G-SERIES
Panavision launched their G-Series lightweight anamorphic lenses that covered a 35mm frame in 2007.
In a past episode we looked at another of Panavision’s anamorphic series of lenses, the Cs, which are probably considered their flagship product from the past - being launched in 1968. The classic, vintage anamorphic look of the Cs is still highly sought after today, despite their relative scarcity and mish-mash ergonomic designs.
The Gs were released by Panavision for DPs that prefer a slightly more updated iteration of the Panavision anamorphic look with easily workable, modern ergonomics.
The older Cs came in different sizes, with different apertures, different close focus capabilities and different front diameters. Whereas the Gs came in more consistent sizes, with more standardised T stops and front diameters. This makes working with the Gs far easier and quicker for camera assistants. For example when changing between G-series lenses the focus and iris gears are positioned almost identical distances apart, meaning the focus motors don’t have to move.
The Gs can be shot wide open at T/2.6 and get the exact same exposure with different lenses, whereas the apertures of the Cs need to be individually tweaked between lens changes. Their standardised front diameters means that clip-on matte boxes can be easily interchanged without swapping out the back, and their more standardised lengths and weights make balancing gimbals after changing lenses easier.
The Gs also have more subtle breathing, which means the image will shift less when the focus is racked.
The update of the Gs also carries over into their optical look. They have a higher contrast, a greater degree of sharpness, aberration control, glare resistance and overall, resolve higher resolution images, while maintaining Panavision’s beautiful anamorphic bokeh and focus falloff.
DPs such as Matthias Koenigswieser enjoy the more consistent and modern anamorphic look of the Gs. On Christopher Robin he combined the C-series and the G-series. He used the more modern Gs when shooting on slightly less sharp 35mm film, and used the softer, more vintage Cs for footage that needed to be shot at a higher digital resolution. In this way they balanced out to provide a consistent look across mediums.
Fun fact, when I camera assisted Matthias on a TV commercial he also opted to use the Gs for a contemporary, sharper looking, anamorphic car shoot.
Overall, Panavision G-series are great for cinematographers who need a lightweight lens that is solidly constructed, quick and easy to work with and desire a slightly updated Panavision anamorphic look that is more optically consistent.
TOKINA VISTA
From an anamorphic lens that covers the 35mm format, to a large format spherical lens, let's take a look at the Tokina Vista primes. This set of lenses features a massive 46.7mm image circle. This means that they cover almost any cinema camera on the market, including full frame sensors and large format sensors like the Red Monstro 8K or the Alexa LF.
They have a fast aperture of T/1.5 across the entire range of focal lengths. This means that when the longer lenses are combined with large format cameras, the depth of field becomes razor thin.
They are very solidly constructed and come in consistent lengths across the range, with the iris and focus gears all being the same distance from the mount. The front diameter is standardised to 114mm.
There are lots of well designated distance measurements on the barrel of the lens which makes focus pulling marks more accurate. Although super robust, their full metal construction and the ample glass that is needed to cover large sensors mean that the lenses are a pretty hefty weight.
The lenses are super sharp and come with modern coatings that give them very few chromatic aberrations. This means their optical qualities are far less vintage and imperfect like other large format lenses such as Arri DNAs. Their modern coatings also mean that the lenses don’t flare massively, but when hit with the right angle of light they will produce a blue, green rainbow flare.
Their look is super modern, ultra-crisp and sharp across the entire width of the frame. Even wide open at T/1.5 the in-focus sweet spot of the lens is very crisp. They also have minimal distortion, even at the widest 18mm focal length.
The Tokina Vistas are a great option for DPs that need an ergonomically designed, fast, super sharp, modern looking spherical lens that resolves high resolution images and covers large format cameras.
ATLAS ORION
The Orion series of anamorphic lenses from Atlas was first unveiled in 2017. After first starting with just a 65mm lens, the set has now been expanded to 7 focal lengths ranging from 25mm to 100mm.
Atlas was started as a small company with the goal to manufacture professional grade anamorphic cinema lenses at an affordable price point. This may not seem all that affordable at first glance, but when compared to purchasing or renting other high end anamorphic glass, the price is significantly reduced.
Since these front anamorphic lenses have been designed recently they feature solid, modern design with a robust housing, well spaced distance markings and a smooth focus gear. They aren’t the smallest, lightest or most compact of anamorphic lenses, particularly when compared to lenses such as Kowas, but they are solid.
All focal lengths, even the 25mm, feature an aperture of T/2. This means they are very fast for anamorphic lenses which typically aren’t as fast as their spherical counterparts. However, when shot wide open at T/2 they do lose some sharpness, with their sweet spot being closer to around T/4.
Another great feature is that the Orions have very good close focus capabilities. This makes it easy to shoot close ups in focus without the use of diopters.
When it comes to their look, I’d say they have subtle vintage characteristics, but overall create a more traditional anamorphic look with good levels of contrast and no crazy focus falloff or distortion.
So if you’re looking for a solid set of modern, fast anamorphic primes that cover a 35mm sensor, with some vintage characteristics and great close focus all at an affordable price point, then the Orions may be the way to go.
CANON S16 8-64mm
So far we’ve looked at lenses that cover Super 35 and large format. Next let’s take a look at a smaller format lens, Canon’s super 16 zoom, the 8-64mm.
Since this lens was designed for super 16mm film it doesn’t cover a lot of today’s modern sensors which are super 35 size or larger. However, this lens is still widely used today as the zoom of choice for 16mm film work, on the Alexa Mini in its S16 mode, or certain micro four thirds digital cameras.
Its 8-64mm range is about the equivalent of a 14.5-115mm lens in super 35 terms. This means that this single lens has lots of flexibility and covers a whole range of conventional prime focal lengths. Despite this long zoom range it has good close focus at 22 inches. This made it a popular workhorse in the 90s, especially for TV work or documentaries shot on 16mm.
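That equivalence comes from the ratio of sensor widths, often called the crop factor. A quick sketch of the arithmetic — exact frame widths vary between cameras and sources, so the figures below are approximations:

```python
# Equivalent focal length scales with the ratio of sensor widths (crop factor).
# These widths are typical approximations; real formats vary slightly.
SUPER_16_WIDTH_MM = 12.52
SUPER_35_WIDTH_MM = 22.6

def equivalent_focal_length(focal_mm: float, from_width: float, to_width: float) -> float:
    """Scale a focal length so the field of view matches on a different format."""
    return focal_mm * to_width / from_width

wide = equivalent_focal_length(8, SUPER_16_WIDTH_MM, SUPER_35_WIDTH_MM)
long = equivalent_focal_length(64, SUPER_16_WIDTH_MM, SUPER_35_WIDTH_MM)
print(f"8-64mm on Super 16 frames like {wide:.1f}-{long:.1f}mm on Super 35")
```

The same ratio works in reverse: divide a Super 35 focal length by the crop factor to find the Super 16 lens that gives the matching field of view.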
For such a long zoom range it’s pretty compact and lightweight but it's built like a tank. The focus rotation is only about 180 degrees which makes it a good pairing for solo documentary operators. It has some distance markings on the barrel of the lens, with probably not as many distances as most focus pullers would like, but enough to get by.
With an aperture of T/2.4 the lens is fairly fast for a zoom. When shot wide open the image does tend to get a little softer and ‘dreamier’, as if a diffusion filter has been added. But if it’s stopped down just a bit to around T/2.8 the lens sharpens up.
For a vintage zoom it's amazingly sharp which you usually want when shooting 16mm film which is a lower fidelity medium. However it isn’t overly sharp or too clinical with its vintage lens coating.
Its solid construction, long zoom range, fast aperture, great close focus and sharp but slightly vintage look make the Canon 8-64mm a great choice for DPs looking for a 16mm zoom.
Cinematography Style: Conrad Hall
In this episode I’ll look at what Conrad Hall had to say about his philosophy on photography and show some of the gear which he used in order to cultivate his photographic vision.
INTRODUCTION
If you were to Google ‘who are the best cinematographers of all time?’, it won’t take long to stumble upon the name Conrad Hall. Through ten Academy Award nominations and three wins, the strength of his career speaks for itself.
His photography is characterised by neutral colour palettes, inventive uses of hard light, reflections and character focused framing which all culminated into a style which he called ‘magic naturalism’. In this episode I’ll look at what Conrad Hall had to say about his philosophy on photography and show some of the gear which he used in order to cultivate his photographic vision.
BACKGROUND
Hall was born in 1926 in French Polynesia and in his mid teens began attending boarding school in California. After graduating he signed up for a degree in journalism at USC; however, that didn’t last long.
“Boy am I lucky that I got a D+ in journalism and had to change my major.”
He switched to the cinema program and began learning the basics of filmmaking, a relatively new art form to study at the time.
Working on a camera crew in Hollywood back then required being a member of the International Photographers Guild, which left Hall, as a non-member, without a job. To work around this, he and some of his classmates created their own independent production company and produced a film called Running Target, which Hall shot. This gained him membership to the guild; however, due to regulations he wasn’t allowed to be credited as cinematographer on the film, but rather as a visual consultant, even though he shot the entire film.
As a member of the guild he then worked his way up the ranks, from camera assistant, to camera operator until he eventually got a chance to photograph the feature film Morituri as the director of photography.
Hall’s career went on to span many decades, from the 1950s to the early 2000s, during which time he worked with a host of esteemed directors which included: Richard Brooks, Stuart Rosenberg, John Huston, Steven Zaillian and Sam Mendes.
PHILOSOPHY
When Hall was asked how he decided where to point the camera, he is reported to have said, “I point it at the story.”
To him, the story was always the starting point for determining his photographic decisions and the ultimate target he aimed for. Although the look of his photography changed between projects depending on the story and director he was working with, he carried over some philosophical concepts throughout his career.
“I’ve never been somebody to get a movie to look absolutely perfect… Mine are always sort of flawed somehow or other. And in a way I don’t mind that because it’s not about perfection it’s about the overall feeling of the thing.”
His light wasn’t always perfectly soft, perfectly shaped and didn’t always have a perfect contrast ratio between light and shadow, his framing wasn't always perfectly symmetrical. By not always aiming for a perfectly beautiful image, much of his work carries a feeling of naturalism. He often incorporated interesting flourishes as well, such as using reflections, interesting hard shadows and atmospheric texture such as smoke or rain.
He called this magic naturalism: shooting things as they are, while at the same time incorporating stylistic touches that heightened the atmosphere of the story.
“I’m one of those guys who doesn’t do a lot of augmenting. But who knows how to take the accident and turn it into something wonderful, magical. I look for that. I thrive on it. I feed on it. I don’t invent stuff. It invents itself and then I notice it and use it dramatically.”
Hall was a master of observing unintentional magical moments, then using the photographic tools he had to emphasise them. Whether that was zooming into the reflection of a chain gang trapped in the sunglasses of a prison guard. Throwing hard light against a tree to create ominously moving shadows made by the wind. Or positioning a character who was a murderer so that the accidental reflection made by the rain on the window made it look like he was crying.
When combined with an otherwise largely naturalistic look, these stylised little moments of ‘happy accidents’ elevated the story in a magical way.
Due to the length of his career, he started photographing films in black and white, but moved on to colour as it replaced black and white as the dominant medium. A thread carried through most of his colour films is an earthy, neutral colour palette. Many of his films used lots of browns, greens, whites and greys, with strong, pure black shadows. The colour was rarely strongly saturated or vivid across the frame.
This meant that when a strong colour was used, like the famous use of red in American Beauty for example, it really stood out against the rest of the film’s neutral tones.
GEAR
“It’s as complex a language as music. A Piano’s got 88 keys and you can use them in any complex way you want to. We got the sun and light. Is there anything more complex than light?”
As we mentioned, being open to ‘happy accidents’ is an important part of his cinematic philosophy, especially when it comes to light. For that reason he liked coming up with the majority of his lighting on the day, although for large spaces some basic pre-lighting work was sometimes necessary.
“I don’t like to figure things out ahead of time before the actors do”
For example, on Road To Perdition, many of the large interiors were sets constructed in a studio. During prep, a rigging crew of 10 people spent 8 weeks installing a collection of greenbeds and scaffolding overhead in the studio. To this scaffolding his team rigged 30 10K fresnels and 60 5Ks, all wired to dimmers and used to light the backgrounds outside the windows, essentially acting as ambient sunlight.
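To get a sense of the scale of that rig, here’s a rough back-of-the-envelope calculation. The fixture counts and wattages come from the description above; nothing here is from an actual production document.

```python
# Illustrative power tally for the overhead tungsten rig described above.
# 10K = 10,000 watts per fresnel; 5K = 5,000 watts per fixture.
fresnels_10k = 30 * 10_000  # thirty 10K fresnels
fresnels_5k = 60 * 5_000    # sixty 5Ks

total_watts = fresnels_10k + fresnels_5k
print(total_watts)  # 600000 -> 600 kW of tungsten light at full power
```

That’s roughly 600 kW before practicals, which is why the dimmer control mattered: the whole rig could be balanced from the floor rather than re-rigged shot to shot.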
It also meant that all his lights were out of the sets and wouldn’t get in the way of the actors’ blocking or the framing. The sets were also electrified so that practical light sources, such as lamps, could be plugged in and used. Much of his lighting was done with tungsten-balanced lights.
With these fixtures rigged in place he could then come in on the day and position the lights as he desired. A lot of the light was hard and undiffused, something he often did in many of his films. This resulted in strong lines of shadow.
He regularly created shape by breaking up the light with interesting textures or used parts of the set to shape the shadows which were cast.
When lighting interiors he would also use what he called ‘room tone’, where he bounced smaller fixtures like a 1K into the ceiling to provide a soft ambient base light to a room. This filled in the contrast from the hard light a bit. Because the light is soft and bounced it’s not very directional which means it’s difficult to tell where exactly it comes from. In this way it provides an overall lift to the space in a natural way. Once this base ‘room tone’ was in place he could then work on lighting the characters. Often hitting them from the side or behind with a hard source.
He used this same hard backlight to bring out textural components like smoke or rain.
When it came to selecting lenses he liked using a wide collection of prime focal lengths from 27mm to 150mm. Unlike the recent trend of using wide angle lenses for close ups, Hall took a more traditional approach. He used wide angle lenses for wide shots and longer focal lengths for close ups that threw the background out of focus.
However he did sometimes use long lenses, like Panavision’s 50-500mm zoom, creatively for wide shots to increase the feeling of heat waves with distortion and represent characters as more wispy yet menacing.
He liked using a shallow depth of field, usually setting the aperture of his lenses between T/1.9 and T/2.5. This gave the photography an emotional dimension and clearly showed the focus (literally) of the shot.
Hall mainly used Panavision cameras and lenses. He shot on 35mm film before the advent of digital cinematography with cameras such as the Panavision Platinum. He especially liked using Panavision Primo lenses for their look, reliability and wide range of focal lengths.
To achieve the colour he liked he used fine grain tungsten Kodak film stocks for interiors as well as exterior daylight scenes. For example, he used the more modern Kodak Vision 200T for his later work, and Eastman EXR 100T for his older work. A lot of the classic 60s and 70s-feeling muted, neutral colour came from his use of Eastman’s famous 100T 5254 colour negative film.
For some films, such as Butch Cassidy, he wanted the colour to be even more muted and to drain out the clichéd blue so often used for ‘western skies’. To do this he radically overexposed the film, then got the film laboratory to compensate for the overexposure in the print. This further washed out the colour and turned the blue a softer, lighter shade.
CONCLUSION
“Those are the kind of films that I like to get a hold of and don’t often get a chance to do. Stuff that, like, goes on forever about some basic and important human condition that is bigger than all of us and will go on forever no matter what era it’s set in.”
Conrad Hall’s selection of films that he photographed is a reflection of his style as a whole. Telling simple, natural stories that represent something bigger through his injection of magical moments.
One thing that he always tried to do was to tell stories so well that if the sound was turned off the audience would still understand the story just based on the images. His love for the medium, his powers of observation and ability to translate stories using whatever magic naturally occurs on set is what has made him one of the greatest of all time.
4 Reasons Movies Shouldn't Be Watched On Laptops
Let's remind ourselves why going to the cinema is still superior to streaming movies from home by going over 4 reasons we should still make the effort to get out of the house and go to the cinema.
INTRODUCTION
According to a recent study, only 14% of adults stated they preferred viewing a new movie in the cinema, while 36% preferred streaming it from a device at home. While this trend may have been expedited by the pandemic, I think it’s a trend that’s on the rise regardless. Fewer and fewer people are going to cinemas, and I think that’s kind of sad.
And I mean, look, I get it. I’m guilty of it too. Convenience outweighs viewing experience. 80% of my cinematic diet is probably consumed at home from a TV or a laptop rather than on the big screen.
Before this video becomes too much of a lamentation about the death of cinema, I’d like to flip it to remind us why going to the cinema is still the superior experience by going over 4 reasons we should still make the effort to get out of the house and go to the movies.
CONSISTENCY
One of the biggest issues of watching movies from home comes from the inconsistency of the image.
Filmmakers spend years developing their craft, putting blood, sweat and tears into lighting, testing for the perfect lenses and tweaking nuances of colour on a calibrated monitor in the grade, only to have 60% or so of the audience watch the final product on a smartphone with a cracked screen in a bright room full of reflections.
Cinema is a medium which is all about refining and tweaking the details in order to create a lasting artwork. A lot of this is undone by watching the final product in a suboptimal viewing environment.
I mean yes, you get the gist of the visuals, but it’s kind of like listening to an album that was carefully, meticulously written, recorded, mixed and mastered while on a noisy airplane with the cheapest pair of headphones sold at the 7-Eleven. You can kind of make out most of the lyrics and melody, but all the sonic nuance that the artists spent their time and energy creating is lost.
Most movie complexes use high-end digital cinema projectors that are DCI compliant, tested to output a high standard of image quality in a dark, light-free environment. This means that the viewing experience at different cinemas around the world will be almost identical. Colour and contrast will be consistent, and resolution is standardised to either 2K or 4K.
This differs from viewing a movie at home. Different screens made by different manufacturers have different resolutions, will display colour and contrast inconsistently at varying levels of brightness, and don’t have to conform to any compliance standards.
Using different media players may also affect the colour and luminance information. For example, a pet peeve of mine is that Apple’s QuickTime Player shifts the gamma curve and plays video files with different contrast from the original file.
Also, unless you are viewing a movie at night with all the lights in the house off, there will be excess ambient light that may cause reflections or dilute the brightness of the image.
THE AUDIENCE EFFECT
One of the most noticeable impacts of viewing a movie at home is that it shifts the experience from being a communal one to being an individual one.
The idea of cinema was born out of creating a medium which could be shared by an audience. I’d say that communal viewing heightens the effect that a film has on us. Whatever emotion the filmmakers impart to the audience is heightened when we share it as a group. For horror films you can hear the audience gasp, for comedies laughter rings out, and for compelling dramas you can almost feel a communal silent focus take hold.
I think part of this reaction comes from it being an uninterrupted viewing experience. Streaming sites are set up in a way to ease and encourage the process of watching films in little segments. Watch for 5 minutes. Pause and make something to eat. Watch for another 10 minutes while you simultaneously browse your phone. Skip forward past a scene you get bored of. Then come back the next day and find the movie paused right where you left off so that you can begin this fragmented viewing process again.
Filmmakers work extremely hard to design each film as a continuous, cohesive experience that suspends your disbelief and envelops you in the world of the story. Breaking down the medium by stopping and starting it destroys a movie’s ability to take hold of you.
The cinema is so important because it forces you to view a film as it was intended to be viewed, as a single, uninterrupted experience.
Sometimes filmmakers want to test your patience and use more drawn out scenes to support their point of view of the story. Sitting through a movie from beginning to end, even if you don’t care for the film, will at least give you a complete idea of what the filmmaker was intending to do.
Plus, in today’s world where everything is so sped up and our attention spans have become shorter than ever before, I think turning off your phone and watching a complete film from start to finish is an important mental exercise we should all regularly perform.
SOUND
The sound that you hear in a cinema is far more immersive than that from a laptop, phone or TV. This is because of surround sound. Consumer display products, like a laptop, typically have a single speaker built into the device that emanates sound from one source or direction.
Cinemas have surround sound which uses multiple speakers in multiple positions to provide sound that is more immersive and which surrounds you 360 degrees.
Like high-end cinema projectors, cinema sound is also standardised. The global standard is Dolby Digital, which provides an audio mix with multiple channels, such as Dolby Digital 5.1. This provides three front channels sent to separate speakers: centre, left and right, which deliver clean dialogue and placement of on-screen sounds. Two surround channels, typically placed on the sides and behind the audience, provide a fuller, 360-degree listening experience. A low-frequency channel that provides bass effects, with about a tenth of the bandwidth of the other channels, makes up the final .1.
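The channel layout described above can be sketched as a simple data structure. The channel names below follow common convention and are illustrative; this is not an excerpt from the Dolby specification.

```python
# A sketch of the Dolby Digital 5.1 channel layout described above.
channels_5_1 = {
    "L":   "front left",
    "C":   "front centre (dialogue)",
    "R":   "front right",
    "Ls":  "left surround (side/rear)",
    "Rs":  "right surround (side/rear)",
    "LFE": "low-frequency effects (the '.1')",
}

# The "5" counts the full-range channels; the LFE channel is the ".1"
# because of its limited bandwidth.
full_range = [name for name in channels_5_1 if name != "LFE"]
print(f"{len(full_range)}.1")  # prints: 5.1
```

Seen this way, the naming convention is just channel arithmetic: five full-bandwidth speakers plus one band-limited effects channel.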
The cinema is therefore set up to provide a more captivating sonic experience that places you in the centre of the action and better draws you into the world of the movie.
THE ALTAR OF CINEMA
The final reason to go to the cinema is less of a practical one and more of a conceptual one, but is arguably the most important. This may seem a bit over the top, and hopefully no one takes offence, but I think a comparison can be made between cinema and religion in the way that they are presented.
In most religions it is of course possible to practice from home without interacting with others through meditation or prayer. However all major religions have physical spaces which bring communities together: temples, churches, mosques. Often these spaces are large, impressively built and feature significant iconography.
I think as humans we are drawn to spaces, and get some kind of greater, more significant experience from coming together as a collective in a space that is designed and devoted to that experience.
Standing in the queue for popcorn, buying tickets, sitting amongst a group of people, watching the trailers - it’s almost ritualistic and builds up a level of excitement and reverence for the film we’re about to watch. An image which is projected onto a massive screen has to be taken more seriously than one on a smartphone.
The issue with having a continuous, never-ending supply of content to stream at home on a laptop is that it diminishes the importance of the medium. It makes movies more mundane and everyday. Taking the time to visit the cinema builds anticipation and makes it more of an experience and an event.
CONCLUSION
So much effort goes into making movies as works of art. I think they should be appreciated as such and not given the same gravity as, say, this YouTube video. They are different mediums. The smaller the screen becomes, the more that watching a film turns into an individual experience rather than the group experience it was designed to be.
Going to the movies may be less practical than just bingeing the latest releases on a laptop, but the experience of going to a cinema elevates movies into the unique medium that they are.