Why Most Movies Are Shot On Arri Cameras
There is one particular brand of digital cinema camera that is far and away the most selected for high-end productions. Let's take a look at why cinematographers choose to shoot on the Arri Alexa.
INTRODUCTION
“I think digital cameras…they’re all tools. It depends on the project. You choose a different camera like you used to choose a different film stock.” - Roger Deakins, Cinematographer
You hear cinematographers claim all the time that a camera is just a tool - one of many that can be selected from their cinematic toolbox. However, if we look at the statistics, there is one particular brand of digital cinema camera that is far and away the most selected.
From the 2022 Best Cinematography Oscar nominees, four out of five productions used Arri digital cameras. Out of the Best Picture nominees that number was six out of ten.
You may think that this is just coincidence and we need a larger data sample size. Well then, from the 2021 Best Cinematography nominations four out of five used Arri. And the 2021 Best Picture nominees? Five out of eight.
If you keep going back, a clear pattern emerges. Most films these days are shot on Arri digital cameras. So, based on my own experience of working with these cinema cameras in the industry, I'll explain the four main reasons why most cinematographers on high-end productions choose to shoot on the Arri Alexa.
HISTORY
“The Alexa is my digital camera of choice. It has been since it came out.” - Ben Davis, Cinematographer
Someone might say it's as simple as Arri cameras producing the best-looking image. But there's more to it than that.
To understand why Arri’s digital cameras are so popular we need to understand how the movie industry operated before digital when all productions were shot on film.
Due to the prohibitively high cost of film gear and cameras, they needed to be rented by production companies on a daily or weekly basis. Two dominant manufacturers emerged to produce this niche rental equipment: Panavision and Arri. A key difference between them is that Arri sells its equipment to third party rental houses or individuals, while Panavision exclusively rents out the gear it produces.
Each constructed their own camera system that had some differences, such as Panavision cameras using a PV lens mount and Arri cameras using a PL mount. However these cameras were all built around a standardised way of working that accepted most third party gear accessories, such as using 19mm rods to mount a mattebox. This meant that crew with different gear accessories could jump from a Panavision to an Arri system on different jobs without needing separate kits.
When digital began taking over from film, Panavision and Arri needed to come up with a digital alternative to their film cameras that could be interchangeable with existing lenses and gear accessories.
Over the years, many working cinematographers had built relationships with these companies and had a track record of exclusively using their gear. So when film changed over to digital they naturally were drawn to what these two companies had to offer.
Panavision produced the Genesis, which used some of Sony's digital imaging technology and had a 35mm-sized sensor. After early operational issues were fixed and the Genesis began seeing some initial use, it was quickly overshadowed by the release of Arri's competing camera, the Alexa.
The quality of the Alexa's image, its usability, its simple ProRes direct-to-edit workflow, and the fact that it could be bought outright by individuals and companies around the world meant that the Alexa took off, leaving the Genesis in the dust.
COMPATIBILITY
“It was kind of scary for me because…until then all my movies had been on film…Of course for me it was no doubt that if I was going digital it was going to be Alexa…I knew the Alexa would be the camera…that looked more the way I used to work with film.” - Natasha Braier, Cinematographer
When it came to the transition from film to digital cameras, Arri tried to make the leap as smooth as possible.
The Alexa was designed to be compatible with existing lenses and film equipment. Importantly, the user experience was also modelled on the way film was shot. The camera had a recommended native EI, like a film stock, and a simple on-camera menu screen built around the same few settings available on film cameras, such as shutter angle.
Other menu systems of competitors like the Red One were a bit more convoluted and had more requirements such as needing to do ‘black shading’ to recalibrate the black balance of the camera.
This meant the Red’s menu was more technical, like a computer, which I think appealed less to many experienced cinematographers who were used to working on film cameras that had limited settings. The Arri menu was a far easier transition.
Initially the Red also had a more complicated RAW workflow than the Arri’s ProRes one.
Over time, Arri added more Alexa cameras to their line up featuring different body sizes and formats all based on the Alev sensor. This meant that cinematographers could choose between mini cameras, large format cameras, studio cameras, or even 65mm cameras and maintain the same Alexa look and compatibility in whatever format they needed.
Arri accessories, such as their wireless follow focus, are also compatible with their cameras. It's easier for camera assistants to work with both an Arri camera and Arri accessories - kind of like having a MacBook and an iPhone from Apple, rather than a MacBook and an Android phone.
Using Arri accessories on a Red is of course possible, but it limits some features such as changing settings or playing back takes remotely from the focus handset, and requires additional elements such as an R/S cable to run the camera.
Overall Arri’s simplicity and compatibility won out amongst cinematographers making the jump from shooting on film to shooting digitally.
LOOK
“I think the Alexa at the moment is the best camera out there…I thought that the image quality just in terms of its resolution and just that tiny little bit of movement from the pixels moving or whatever. The Alexa just has a little bit of life to it and I think if you go too far the image becomes lifeless. So I like that bit of texture it has.” - Roger Deakins, Cinematographer
We now get into probably the biggest reason most cinematographers love the Alexa: its look.
The Alexa is favoured for its excellent, flattering skin tones, colour that feels filmic, and the way it resolves detail, balancing high resolution with an organic texture.
This is due to two factors: the camera’s sensor and its image processing ability.
The Alev CMOS Bayer sensor found in the Alexa has a photosite count that balances image sharpness with high dynamic range and low noise. It has an optical low pass filter that blocks aliasing artefacts, and an IR and UV filter which avoids strange colour casts while leaving enough of the red spectrum intact to deliver pleasing skin tones.
The image processing of the Alexa was designed by Arri colour scientists who had developed the ARRISCAN film scanner and ARRILASER film recorder, and who were able to render colour in a very natural way.
While companies like Red pursued high resolutions, Arri took their time and focused largely on colour science - which to many cinematographers was, and still is, more important since most projects still get finished at a 2K resolution.
RELIABILITY
“I also bought the very first Alexa Classic you know when it came out and I go you know I’ll be fine if it’s useable for three years and it ended up being a functioning tool and I still use it…the longevity of these products has been amazing.” - Phedon Papamichael, Cinematographer
Finally, the Alexa's durability and reliability are renowned across the industry.
As cinema cameras are designed to be rented out and used regularly, over many years, in the extreme outdoor conditions that movies are shot in, this is an important factor.
While most high end cinema cameras now have a high degree of reliability, during the early development of digital cinema cameras there were many horror stories of cameras breaking down. The Red One had a reputation for being temperamental and overheating, while the Alexa was a solid workhorse with incredible reliability.
Although these reliability issues have since been ironed out, in the early days I think they made some people nervous to shoot on a Red and gave those cameras a bit of a stigma - time on a film set is extremely valuable, and waiting for a camera to cool down before you can reboot it wasn't very appealing.
I’ve worked with Alexas that are many years old on beaches, in deserts, in extreme temperatures and never encountered any issues. Arri’s track record of robustness, reliability and the longevity of their cameras remains to this day.
CONCLUSION
Overall, I'd say these four factors - Arri's historical legacy in the film industry, the compatibility and ease of use of their products, the all-important look, and their reputation for reliability - are what has made them the default choice for most cinematographers working today.
So much so that, as a camera assistant, when you work on any camera other than an Alexa it's seen as an exception to the norm. I don't see this trend changing any time soon, especially once they release their much-anticipated Super 35 4K camera.
Why Some Shots In Movies Feel Different
Ever notice how some movies totally wrap you up in the world of a character to the point of it being claustrophobic and uncomfortable, while other movies make you feel more like you’re just observing events in their lives unfold in a more detached way? Much of this tone and feeling is a result of the filmmaker’s choice of shot sizes.
INTRODUCTION
Ever notice how some movies totally wrap you up in the world of a character to the point of it being claustrophobic and uncomfortable, while other movies make you feel more like you’re just observing events in their lives unfold in a more detached way?
Much of this tone and feeling is a result of the filmmaker’s choice of shot sizes.
To better understand the impact that different shots have on an audience I’ll first go over the basic shot sizes that are used by filmmakers and then dive into the effect that different types of shots, and how they are edited together, have on an audience.
SHOT SIZES
Before we get into their psychology we need to understand the basics. Shot size refers to the camera's field of view: the width of the frame relative to the subject and how much space they occupy within it.
Different shot sizes can be achieved by physically moving the camera closer to or further away from the subject, or by using different focal lengths. The lower the focal length number, the wider the field of view.
So, let’s familiarise ourselves with the vocabulary that filmmakers use to refer to the width of a frame. This vocabulary helps crew members to quickly communicate their vision and is based on conventions which have been established over many years.
Starting on the widest end of the spectrum we have an extreme long shot or extreme wide shot. This is where the subject or character is totally visible and only takes up a tiny fraction of the total frame. They are used to provide a vastness and scope to the location or landscape of the story.
Due to this they are often used as establishing shots - the first shot that begins a scene and sets the context and broader space that the scene takes place in.
Moving in a bit we come to the long shot or wide shot. Like an extreme wide, the subject is shown from head to toe; unlike an extreme wide, however, the character now occupies more space in the frame. They are the main focus of the shot now, rather than the landscape.
Wides are also commonly used as establishing shots and to show the full body actions of a character.
Next up, we push in further on the character into what is called a medium shot. This is where the bottom of the frame starts from above the waist and the top of the frame includes their head. Because we are closer to the subject we can now read their facial expression and performance more clearly, yet we are also wide enough to capture some of their upper body language and actions.
There are also a few variations of this shot with different names. A medium long, three quarter or cowboy shot is slightly wider than a medium shot, usually starting from the knee or thigh region. The cowboy shot derived its name from its regular use in western films: the slightly wider frame let the audience see both the actor's face and the guns slung around their waist.
The close up tightly frames the face of the character so that they take up almost all the space in the shot.
The bottom of the frame usually sits just below the chin at around the shoulder level and the top of the head is either included, or framed out - which is sometimes called giving the actor a haircut. There are many different degrees of width that a close up can be shot in, depending on how intimate the filmmaker needs the shot to feel.
Finally we can push in even closer to an extreme close up. This is a shot that is so tight that all we see is a detail or single feature of the face, such as the eyes. Extreme close ups can also be used to photograph objects that hold value to the story, such as text.
An extreme close up also goes by the name of an Italian shot, due to its regular use by Sergio Leone in many of his Italian Western films.
THE EFFECTS OF DIFFERENT SHOTS
When you pick up a camera and decide to shoot something, the shot size that you choose will first and foremost be determined by what you choose to include in the frame and what you choose to leave out.
When someone asks you to take a photo of them on a phone, do you get right up in their face and take a close up, or back far away to an extreme wide shot? You probably wouldn’t do either. Because the information that you need to include is them and perhaps some of the background. Most people would take a photo with a frame somewhere in between those two extremes.
For action scenes, by contrast, we tend to stick to wider shot sizes for the simple reason that we need to be able to see the overall action in order to know what is going on. And if we cut to a close up of a specific detail, chances are the filmmaker wants you to notice that piece of information.
So, information is the one key effect that choosing a shot has. The other important consideration is the emotion or feeling that comes from framing a shot in a different way.
While there aren’t any definitive rules set in stone that apply to every single film ever made when it comes to interpreting emotion from a shot size, I think a broad rule can be generally applied.
In life, our proximity to a person defines our emotional relationship to them. The closer we are to someone the more intimate our connection to them is, and the further apart we are the more observational and emotionally distant we are to them.
Being face to face with a partner has a different emotional feeling than watching the actions of someone you don’t know from across the room. The same principle can be applied to shot sizes.
The more of the frame a character takes up, the more intimate and personal our connection to them feels. So having a close up of a character means the audience will unconsciously feel a greater sense of connection towards the character in that moment. It’s as if the filmmaker is saying, ‘Make sure you notice this detail or emotion. It’s very important.’
Viewing an entire movie shot in wides, on the other hand, distances the audience emotionally from the character and their actions, making it feel like we are passively watching them rather than being transported directly into their head and thoughts.
Now the reality is that most films are shot with, and include, a variety of shot sizes. This is so that different pieces of performances can be chopped together continuously and without jump cuts. Having different shot sizes to work with also allows the editor to control the pacing and emotional arc of the cut by cutting to different shot sizes that come with different emotional connotations.
For example, a textbook scene will start with a wide establishing shot of the location, then cut to a wide of the characters in the scene. As we get to know those characters we'll cut in closer to a medium and go back and forth on mediums as they exchange general dialogue. Then, as what the characters are saying, and how they are saying it, becomes more important and intimate, the editor will start to cut to close ups.
By cutting from wider shots to progressively tighter shots, the scene is able to begin by establishing the information and space of the location, and then slowly shift the audience’s perception from a more general observation of characters to building up a more personal connection with them as we get to know them.
While this is the general rule, shot sizes can be manipulated in other ways for effect. For example, Son of Saul uses close ups of the lead character for almost the entire film. This emotionally puts you in the shoes of that character and makes the space of the film more claustrophobic and confusing as we never cut wide enough to establish the space.
An opposite approach can be found in a film like Memoria, where we stay wide for most of the film. This presents the actions to us as something to be observed from afar in a more objective way.
Editors can also play against convention by flipping the idea of starting wide and cutting in closer.
The opening to The Deathly Hallows does this, starting on a bold opening statement. We cut from an extreme close up, to a close up, to a medium shot, to an extreme wide. This creates an emotional arc that moves from extremely intimate to more detached, and controls the flow of information, providing context and establishing more of the world each time we cut wider.
CONCLUSION
In shot sizes, filmmakers hold a very valuable tool in their hands.
Like a puppet master they can use the size of a shot to manipulate what the audience does or doesn’t know and, perhaps more importantly, to manipulate the very emotions of the audience and the relationship they have with the characters on screen.
6 Basic Camera Settings You Need To Know
Let’s examine 6 of the most important camera variables or settings that can be changed: EI, shutter angle, aperture, ND filters, white balance and frame rate.
INTRODUCTION
For anyone who wants to take photography or cinematography more seriously, the first step is to distance yourself from the dreaded ‘auto’ setting on a camera, move the knob over to ‘manual’ and start to understand the basic camera variables or settings that change the way that an image is captured.
Professional image makers choose to manually manipulate these settings to maintain complete control over how an image looks and not leave those creative decisions down to the whims of automatic camera software.
In this video I'll examine 6 of the most important camera variables that can be changed. These settings affect the image in different ways and fall into three categories: exposure settings, colour settings and motion settings. Each of these 6 variables has technical value, controlling how an image looks, as well as creative value, changing the effect, mood or feeling of an image.
EXPOSURE SETTINGS
Exposure refers to how dark or light an image is. This is determined by the amount of light that goes through a camera's lens and hits the sensor of the camera - where the image is recorded.
A dark image with too little light is underexposed, a bright image with too much light is overexposed, and an image with the right amount of light is correctly exposed. A camera has four variables that can be changed to alter exposure: EI, shutter angle, aperture and neutral density filters.
Let’s start with the EI, or exposure index setting - a good base setting to start with. This can be referred to using different metrics such as ISO, ASA, gain or EI depending on the camera, but the concept is the same. It’s a measurement of a film or camera sensor’s sensitivity to light.
The lower the value the less sensitive it is to light and the darker an image will be. Raising this value means the sensor is more sensitive to light and the brightness of an image will increase.
Most professional digital cinema cameras have what is called a ‘base’ or ‘native’ EI setting where the sensor performs best and has the most dynamic range and lowest noise. For example the Alexa sensor has a native EI of 800.
While EI can be changed on digital cameras, when using film its speed or sensitivity to light is set at a fixed level, such as 50 ASA, and cannot be altered without changing to a different kind of film stock.
The next exposure setting we can manipulate is the shutter angle or shutter speed.
A shutter is a solid layer in front of the sensor that opens and closes rapidly. When open it lets in light; when closed it blocks light. The longer the shutter stays open, the more light it lets in and the brighter the image; the shorter it stays open, the less light it lets in and the darker the image.
Cinema cameras use shutter angle, a measurement in degrees. A larger shutter angle means a larger portion of the rotating circular shutter is open and more light is let in, while a smaller shutter angle, with a smaller opening, lets in less light.
Consumer and still photography cameras instead use shutter speed, expressed as a fraction of a second, such as 1/50th - a measurement of how long the shutter is open for. A faster speed such as 1/250th of a second means the shutter is open for a shorter time and less light is let in, whereas a slower speed such as 1/25th of a second keeps the shutter open for longer and lets in more light, resulting in a brighter image.
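The two conventions are linked by simple arithmetic: the shutter is open for the angle's fraction of a full 360-degree rotation, and each rotation lasts one frame interval. Here is a minimal sketch of that conversion (the function name is my own, for illustration only):

```python
def shutter_speed_seconds(frame_rate: float, shutter_angle: float) -> float:
    """Exposure time per frame for a rotary shutter.

    The shutter is open for (shutter_angle / 360) of each frame
    interval, and each frame interval lasts 1 / frame_rate seconds.
    """
    return (shutter_angle / 360.0) / frame_rate

# A 180-degree shutter at 24 fps exposes each frame for 1/48 s,
# which is why it is usually quoted as roughly "1/50th of a second".
print(shutter_speed_seconds(24, 180))
```

At 25 fps the same 180-degree shutter gives exactly 1/50th of a second, which is why the two numbers are used interchangeably in practice.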
With these two settings done, we now move to the lens where we can set the aperture, iris or stop.
This is the size of the hole at the back of the lens that allows light to pass through it. Iris blades can either be expanded to open up the hole and let in more light, or contracted to make the hole that light passes through smaller.
On cinema lenses this is done manually by adjusting the barrel of the lens and on modern digital stills cameras it is usually adjusted via a button or scroll wheel on the camera which changes the iris of the lens internally. The aperture is either measured as a T-stop on cinema lenses or as an F-stop on stills lenses.
Whatever measurement is used, the lower the stop number the more light will be let through and the brighter an image will be. So a lens with a stop of T/2 has a large aperture opening and will let in much more light, while a lens with a stop of T/8 has a smaller opening and will let in less light.
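Stop numbers follow a square law: the light passed is proportional to 1 over the stop squared, so each full stop multiplies the stop number by roughly the square root of 2. A quick sketch of that relationship (the helper is my own, not a standard formula name):

```python
import math

def stops_difference(wide_stop: float, narrow_stop: float) -> float:
    """Stops of light lost when closing the aperture from wide_stop
    down to narrow_stop. Light transmitted is proportional to
    1 / stop**2, and each stop halves the light."""
    return math.log2((narrow_stop / wide_stop) ** 2)

# Closing down from T2 to T8 loses 4 stops: T8 passes 1/16 the light.
print(stops_difference(2, 8))
```

This is also why the familiar stop series (2, 2.8, 4, 5.6, 8...) advances by a factor of about 1.4 per step.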
These three settings - ISO, shutter speed and aperture - are foundational to exposing footage and together are called the 'exposure triangle'.
In photography these three settings are regularly adjusted individually to find the right exposure; in cinematography, however, these choices are more often made up front and the settings only tweaked for their photographic effect.
For example in cinema, usually the ISO will be set to its native level, such as 800, the shutter will be set to 180 degrees or 1/50th of a second to ensure motion or movement feels ‘normal’, then the stop of the lens will be set depending on how much of the background the cinematographer wants in focus.
Opening up the aperture to a low number like T/1.3 means a shallow depth of field with much of the image out of focus, whereas stopping down to about T/8 will mean more of the image is in focus.
So if this is the case then how else do cinematographers adjust the brightness of an image?
They do it by manipulating the strength of the lighting and with the 4th exposure variable: neutral density, or ND, filters. These are pieces of darkened glass, placed in front of the sensor or lens, that decrease the amount of light let in without affecting the colour or character of the image.
In filmmaking, each ND filter is labelled with a number that shows how much light it blocks: every 0.3 of density equals one stop. So ND 0.3 blocks 1 stop of light and ND 0.9 takes away 3 stops.
Many modern cinema cameras have ND filters built into the camera which can be adjusted internally via a setting. ND filters can also be used as physical glass filters that are mounted onto the front of the lens using a tray in a mattebox, or with a screw in filter on stills lenses.
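The 0.3-per-stop rule comes from the fact that ND density is a base-10 logarithm of transmission, and 10 to the power 0.3 is almost exactly 2. A small sketch, with helper names of my own:

```python
def nd_stops(density: float) -> float:
    """Approximate stops of light an ND filter blocks
    (0.3 of optical density is roughly 1 stop)."""
    return density / 0.3

def nd_transmission(density: float) -> float:
    """Fraction of light transmitted; density is a base-10 log."""
    return 10 ** (-density)

print(nd_stops(0.9))         # 3 stops
print(nd_transmission(0.9))  # ~0.126, i.e. about 1/8 of the light
```

So an ND 0.9 on the lens passes roughly one eighth of the incoming light, matching the halve-per-stop intuition: 1/2 × 1/2 × 1/2.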
COLOUR SETTINGS
Now that we know the 4 variable settings that we can use to adjust the brightness of an image in camera, let's look at another very important setting related to colour - white balance.
White balance, or colour temperature, is measured in Kelvin and changes how warm or cool an image looks.
The two most common white balance settings are 3,200K (or tungsten) and 5,600K (or daylight). This is because when you set the camera’s white balance to 3,200K and light an actor with a warmer, tungsten light the colour will appear neutral - not overly cool or warm.
Likewise when you set the camera to 5,600K and shoot with a cooler daylight fixture or outside in natural sunlight the image will also appear neutral.
This means that the lower you set the Kelvin value of the white balance, the cooler an image will appear. So if you shoot outside in natural sunlight with the camera set to 3,200K, the image will look blue. Conversely, if you shoot under tungsten light with the camera set to 5,600K, the image will look warm.
As well as having these two preset colour temperatures, most modern cameras also allow you to pick from a range of colour temperatures on the Kelvin scale and even have an auto white balance setting which automatically picks a Kelvin value to give the image a neutral colour balance.
It should also be noted that like with EI, when shooting on film the colour temperature is fixed to either daylight or tungsten and cannot be changed without using a different film stock.
MOTION SETTINGS
Finally, let's take a look at a camera setting that only applies to moving images - frame rate. To understand what frame rate is we need to think of film not as a video clip, but rather as a series of individual images.
When shooting on film, 24 still pictures are captured every second. Each of these pictures is called a frame. To create the illusion of a moving image these pictures are then projected back at a speed of 24 frames per second. You can think of it kind of like leafing through still images in a flip book at a speed of 24 pages every second.
Therefore, recording a frame rate of 24, or 25, frames per second with a camera produces the illusion of motion at a speed which is the same as that which we experience in real life.
Frame rate can also be used to exaggerate motion for effect by keeping the same playback ‘base’ frame rate of 24 frames per second and adjusting the frame rate setting that the camera captures.
For example, if we want slow motion we can set the camera to record 48 frames per second and then play it back at 24 frames per second. This results in twice as many frames, and therefore motion that appears at half the speed of real life.
Something important to note is that frame rate also affects exposure. Doubling the frame rate - for example from 24 to 48 frames per second - means that the camera loses a stop of light, and the image will therefore be darker.
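At a fixed shutter angle, each frame's exposure time scales with 1 over the frame rate, so the light lost follows a simple log2 relationship. A minimal sketch (the function name is mine, for illustration):

```python
import math

def stops_lost(base_fps: float, new_fps: float) -> float:
    """Stops of light lost when raising the frame rate at a fixed
    shutter angle: each frame's exposure time scales with 1 / fps,
    and each halving of exposure time costs one stop."""
    return math.log2(new_fps / base_fps)

# 24 -> 48 fps costs one stop; 24 -> 96 fps costs two.
print(stops_lost(24, 48), stops_lost(24, 96))
```

This is why high-speed shooting demands so much light: a cinematographer ramping to 96 fps has to compensate with two extra stops of lighting, aperture or EI.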
CONCLUSION
So, there we go: EI, shutter angle, aperture, ND, white balance and frame rate - six camera variables that every photographer or cinematographer needs to know.
If this all seems like too much technical information, the easiest way to practically get this information in your head is to find a digital camera and start experimenting with settings by shooting.
The more you practice with a camera, the more all of this information will start to become second nature. Until you get to a point where you can manipulate all of these settings unconsciously to capture that imaginative image that you see in your head.
How Paul Thomas Anderson Shoots A Film At 3 Budget Levels
Let's take a look at three films made at three different budget levels from director Paul Thomas Anderson in order to get a sense of the trajectory of his career, his approach to filmmaking and how some of his methods of production have both remained the same and slowly shifted throughout his career.
INTRODUCTION
Compelling, flawed characters. Ensemble casts. Masterfully chaotic stories rooted in universal themes. Visual innovation. Technical competence. An overall strong vision and auteur-like control. These are some of the characteristics that, in my mind, make Paul Thomas Anderson one of the best, if not the best, directors of the last 20 or so years.
Before we get started I think it is important to note that usually in this series I tend to feature directors who have undergone a greater change in the level of budget that they work with.
With the exception of his early work, Anderson has mainly stuck to producing work around the $25 to $40 million range and has never ventured into the realm of studio blockbusters. Nevertheless, let’s take a look at three projects which have been made at increasing budget levels: Hard Eight, Boogie Nights and Licorice Pizza.
In doing this I hope to give a sense of the trajectory of his career, his approach to filmmaking and how some of his methods of production have both remained the same and slowly shifted throughout his career.
HARD EIGHT
“I was way too young to be given the keys to the car I think. I wrote it because I had to because it just came out.” - Paul Thomas Anderson
Anderson’s interest in making films began in his childhood in the San Fernando Valley and continued throughout his teenage years. He would write, direct and then film his ideas for shorts with his father’s Betamax video camera. He attended Santa Monica College but quickly became disillusioned with film school when he felt his ideas and experimentation were discouraged and filmmaking was turned into homework or a chore.
Instead he started working as a production assistant on sets in LA and managed to cobble together $10,000 from a combination of money for college, gambling winnings and his girlfriend’s credit card to produce the short film Cigarettes & Coffee.
He managed to cast Philip Baker Hall, an actor he greatly admired due to his work on Secret Honor - a film made by one of his greatest influences, director Robert Altman.
“Yeah it was based on stuff. I’d been working in Reno. I’d spent some time up in Reno and I was coming off experiences there of watching old guys. I loved this actor named Phillip Baker Hall, still love him and I heard his voice as the character. I just started writing and that’s what came out.” - Paul Thomas Anderson
He would continue this writing process throughout his career. Many of the films he would write were based on life experiences he had and set in locations that he would frequent or had lived in.
He took these experiences and places and created narrative arcs and settings with them. At the same time he often filled in the characters based on actors that he wanted to work with and wrote the roles with certain actors in mind.
Cigarettes & Coffee did very well, getting into the Sundance shorts programme. When Anderson decided to turn it into a feature-length film, he was also accepted into the Sundance programme to develop it.
It was there that he secured funding for the feature version, titled Sydney, through Rysher Entertainment all while in his early 20s. He was so young that some crew members on the set initially mistook him for a production assistant instead of the director.
“You know I just bluffed my way through directing. You gotta understand that at that time probably based on the success of Pulp Fiction and a couple other small independent films there was a lot of cash floating around from these cable companies. So if you could make a movie for under $2 million they could kinda sell it off piece by piece with just enough genre elements and a couple cast names and you could just go make your movie.” - Paul Thomas Anderson
So, with an independently funded low budget of $2 million, he set out to make the film, squeezing production into a narrow 28-day window.
He cast the film by securing some well-known actors on the tight budget and shot it entirely on location.
Due to the tight schedule they had loads to shoot, particularly the casino scenes, which had to be squeezed into night shoots running from 10pm to around 8am the next morning.
To shoot a lot in the small window it helped that Anderson always had a very clear idea, visually, of how he wanted to capture the film, and no time was wasted with extraneous shots or deliberation.
To execute the cinematography he hired Robert Elswit, who at the time was probably the biggest name on the crew. They quickly found that they complemented each other and shared similar visual sensibilities.
“Paul doesn’t need a lot of help in certain areas. I understand his taste, maybe so it’s never a surprise. I can anticipate what he wants to do for the most part. He always has a visual style before he starts. Always. I mean it changes to some extent but it’s completely thought out. Nobody is more prepared. Nobody has really thought through pictorial style as completely as Paul.” - Robert Elswit, Cinematographer
This style included explorative camera movement - often done on a Steadicam - and slow dolly tracking. Elswit lit with moderate contrast ratios, exposed the actors well and used hard light in a naturalistic way.
Even though both loved the look of anamorphic lenses, the producers prohibited them for budgetary reasons. As a compromise they shot Super 35 Kodak film stock with Panavision cameras and lenses in a 2.39:1 aspect ratio, one Anderson would often return to in his later films.
Rather than giving loads of direction to performances, or ‘manipulation’ as he called it, he tried to instil the feeling of what he wanted to the actors before production and cast all the parts exactly as he wanted them. As the cliche goes - most of acting is about casting.
When it came to editing he worked with a similar methodology. He doesn’t like cutting everything up too much and instead prefers to keep the performances intact and pull from limited takes.
When he submitted his first cut of the film, which ran two and a half hours, friction erupted between him and the production company over final cut. Rysher Entertainment cut it down, changed the music and titles, and even renamed the film Hard Eight.
As a final compromise, the company said they would release Anderson’s version of the film if he came up with the funds to finish it himself. So he used all the money from a deal he had recently signed for his next film, Boogie Nights, to finance post production and cut it the way he wanted, agreeing in return to give up his original title, Sydney.
Paul Thomas Anderson used the modest budget to secure a solid cast of big name actors to draw in an audience, produced the relatively small-scope story over a limited 28-day production window, saved money on production design and by shooting Super 35 with an experienced DP, and eventually won the creative battle for final cut.
BOOGIE NIGHTS
“I went into my next situation thinking that the lesson I learned was to be paranoid, protective and don’t trust anyone. Fortunately I got to work with a great studio and a guy named Mike De Luca who was able to see what I’d gone through and said ‘No, no. Trust me and put your faith in me.” - Paul Thomas Anderson
Let’s backtrack a bit. Anderson first came up with the idea for his second film Boogie Nights when he was 18. He wrote and directed ‘The Dirk Diggler Story’, a 30 minute mockumentary about the golden age of porn.
“After I made the short film I wrote as a sort of full length documentary taking a kind of Spinal Tap approach, you know. But by the time I’d finished that, that format had kind of been worn out and done many times. I just kinda figured the way to do this is to go nuts and just make it straight narrative. I eventually had a shooting script of 186 pages.” - Paul Thomas Anderson
The eventual script looked at the rise and fall of a character in the 1970s porn scene and examined the idea of addiction, ego, surrogate families and communities.
Like Hard Eight, he wrote the script with certain actors in mind, including performers he had worked with on Hard Eight, and set it in the San Fernando Valley, the area he had grown up in and knew well. It featured an ensemble cast, inspired by the work of Robert Altman.
“Casting and writing are kind of the same thing. Because I write parts for actors that are my friends or actors that I don’t know that I really want to work with.” - Paul Thomas Anderson
While Anderson was in post production on his first film and buzz began building around the script, New Line Cinema came on board to produce it with a $15 million budget and the promise of a more hands-off approach.
As it was a lengthy script, larger in scope and requiring the casting of many well known actors, even the increased budget was stretched a bit thin. After their positive experience on Hard Eight, Elswit was again hired as DP.
Elswit remarked that from the first location scout Anderson would outline the kind of shots he wanted. That detailed level of specificity helped them to save time and money, since it was a lengthy, ambitious film for its budget level.
This vision was also important when coordinating and communicating some of the complex long takes that Anderson had in mind. The most famous of these is probably the opening shot of the film, in which a Steadicam operator started on a ride-on crane that boomed down, let the operator step off, and then tracked characters into an interior, introducing the audience to the space and world of the film within the first three minutes.
“These long, complicated tracking shots are really fun to do. I think the actors love them. Movie acting is sort of so pieced up and chopped up. Very rarely is action called and 3 or four minutes later their scene happens. It’s just kind of fun for them to really act something through and let it breathe. Let it happen.” - Paul Thomas Anderson
Due to the arduous nature of the shot, the main Steadicam operator, Andy Shuttleworth, had a backup Steadicam operator on hand, as this single shot was scheduled to take an entire night.
Elswit lit the exterior scene with two strong, hard backlights and some smaller units meant to mimic street lights, which metered at a stop of T2.8. Inside the club his team rigged the lowest budget 70s-style disco lights they could find overhead to keep all film lights out of the shot. This interior was brighter, at T4.
To maintain an even exposure across the different lighting levels Elswit used a wireless iris motor to slowly adjust the aperture remotely, going from T2.8 outside to T4 as the camera moved inside.
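To put rough numbers on that ramp: T-stops follow a square-root-of-two series, so each full stop multiplies the T-number by about 1.414. The exposure difference between two T-stops can therefore be computed directly. This is illustrative arithmetic only, not anything Elswit's crew would have needed on set:

```python
import math

def stops_between(t_open, t_closed):
    """Exposure difference in stops between two T-stops.

    Each stop halves the light; the T-number scales with
    the square root of two per stop, hence the factor of 2.
    """
    return 2 * math.log2(t_closed / t_open)

# The ramp from T2.8 outside to T4 inside is about one stop
print(round(stops_between(2.8, 4.0), 2))  # ~1.03 stops
```

In other words, the club interior metered roughly one stop brighter than the street exterior, and the iris motor smoothly absorbed that one-stop difference during the move.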
This time they had the budget to shoot with anamorphic lenses. They used Panavision C-series glass and shot pretty much the entire film with three lenses: a 40mm and a 50mm for wider frames and a 75mm for close-ups.
Anderson disliked heavy film grain so they shot on Eastman 100T 35mm film stock - the slowest practical speed stock they could find.
Like on Hard Eight, Elswit liked to observe the natural light and then augment it with additional fixtures. To do this he would take stills on slide film during location recces; its limited dynamic range clearly showed what the natural light was doing.
He’d then come in and accentuate the natural light by, for example, using large tungsten lights through windows for day interiors to mimic sunlight.
Overall, the budget went towards a large ensemble cast, re-creating the film's 1970s period settings, and a longer production schedule covering the long script's many scenes, which were shot innovatively with more extensive technical gear.
LICORICE PIZZA
After a career of producing almost non-stop critically acclaimed work Anderson turned to the 70s and, again, the San Fernando Valley for his next idea.
“I had a story that wasn’t exactly mine but that paralleled mine. My relationship with Gary Goetzman, who I don’t know if many of you know is a producer. Gary worked in the Valley. He was a child actor. When that didn’t really work out he started a waterbed business. The stories he told was an opportunity to enter into a world that I remember very well.” - Paul Thomas Anderson
Again, his script pulled from his own experiences, in a setting he was familiar with, with dialogue and characters written for specific actors, or first time actors, that he had in mind.
Having worked many times with Philip Seymour Hoffman in the past, Anderson turned to his son to play the role. Although it was his first film, he gave a more realistic, understated performance than Anderson saw in any of the other castings.
This was paired with an on-screen chemistry with another first time actor, Alana Haim, for whom Anderson had shot music videos in the past.
The long screenplay with its many scenes meant he needed a budget of around $40 million - which was supplied by MGM.
In the build up to the film Anderson decided to shoot lots of tests - partly to find a look for the film and partly to see if his two leads had enough on screen chemistry for the movie to work. This was a luxury that the higher budget afforded him - compared to Hard Eight that had to be shot in 28 days.
During these tests they also looked at different lenses and pushing and pulling different film stocks until they settled on the look they were happy with.
After parting ways with Elswit following many films together, Anderson developed an unusual way of working without a dedicated cinematographer.
He and his key technical crew members, such as gaffer Michael Bauman, camera operator Colin Anderson and key grip Jeff Kunkel, all put their skills into a giant pot and shot projects without a director of photography as a department head.
This worked due to the director’s technical prowess and track record with his experienced team of collaborators. This was first done on Phantom Thread, which they shot in the UK and repeated on various music videos.
However, when it came to shooting in the US they needed to have an officially credited DP due to union requirements. So, Anderson and Bauman shared the official credit of cinematographer while they continued working in the same collaborative style as before, with Anderson providing a visual direction and his key crew offering their input and technical execution.
“Because we’re shooting in California you’re kind of required to have someone listed as the cinematographer versus when we were overseas…The workflow is a very collaborative environment. He and I kinda worked very closely with Colin Anderson who is the camera operator. You know, they’ll formulate a shot, the two of them will, and he and I will have done preliminary scouting and talk about the lighting and then on the day we’ll execute what the plan is.” - Michael Bauman, Gaffer & Cinematographer
Like many of his films, it was mainly shot on vintage C-series anamorphic lenses from the 70s. After extensive tests they chose a set of Cs which included three different 50mm lenses, each with its own characteristics, picking between them depending on the situation or shot.
Anderson has always been passionate about not only shooting on film but even screening the dailies, the raw footage, projected using 35mm.
“One of the things that we do is that we do film dailies. We watch dailies while we are shooting. On set we have a space that we work out of that we can project film. It’s me, it’s the camera department, the core team of the camera department basically department heads would come in and out. We use that process to figure out takes we’re going to use.” - Andy Jurgensen, Editor
Overall, Licorice Pizza’s larger budget offered the director more time and resources to fine-tune his vision through extensive camera tests, location scouting and tests with actors before stepping onto set. The result was a final film of expansive shots with vast period-correct backgrounds, shot with a curated selection of technical gear, stunts, and an ensemble cast that included some big name performers.
CONCLUSION
Certain aspects of Paul Thomas Anderson’s way of working as a director have remained consistent throughout his career: writing scripts based on his personal experiences, with ensemble casts that are ratcheted up by chaotic events; shooting on 35mm film, often with anamorphic lenses; working with a small, consistent crew; focusing largely on casting and then letting actors do their thing; and creating innovative visual languages based on camera movement.
However, the more established he has become, the more he has also been able to take his time to create the films, with more extended production schedules and more time for testing and finding the look before production begins.
After the departure of Elswit, his methodology has also shifted away from the traditional route of working with a credited cinematographer, to a collaborative working style where he leans on the expertise of his crew department heads.
Despite these changes, his films always have a recognisable tone and style that ties them together, regardless of the genre, script or subject matter of the film.
Cinematography Style: Barry Ackroyd
Barry Ackroyd is a cinematographer who plays to his strengths. Over his career he’s developed an instantly recognisable style to his photography that is based around a vérité, documentary-esque search for truth and capturing realism. In this episode of Cinematography Style I’m going to take a look at the renowned work of Barry Ackroyd by going over his philosophical ideas on cinematography and outlining the gear that he uses to execute his vision.
INTRODUCTION
Barry Ackroyd is a cinematographer who plays to his strengths. Over his career he’s developed an instantly recognisable style to his photography that is based around a vérité, documentary-esque search for truth and capturing realism.
He works with multiple on-the-ground, handheld, reactive cameras that use bold punch-in zooms, and has been hired by directors such as Ken Loach and Paul Greengrass who highly value a sense of realism and heightened naturalism in their films.
So, in this episode of Cinematography Style I’m going to take a look at the renowned work of Barry Ackroyd by going over his philosophical ideas on cinematography and outlining the gear that he uses to execute his vision.
BACKGROUND
“I’m a cinematographer who was brought up in documentaries in Britain on small budgets.”
Ackroyd’s initial plans to become a sculptor changed while he was studying Fine Arts at Portsmouth Polytechnic after he discovered the medium of 16mm film.
He began working as a television cameraman in the 1980s, mainly shooting documentaries. It was there that he first encountered director Ken Loach. After working on a couple of documentaries together, Ackroyd was offered an opportunity to shoot Riff-Raff for Loach - his first feature length fiction film.
He continued to shoot numerous fiction films and documentaries for Loach during this period, culminating in The Wind That Shakes The Barley which won the Palme d’Or at Cannes Film Festival. Following this success he began working on other fiction projects for various well known directors such as: Paul Greengrass, Kathryn Bigelow and Adam McKay.
PHILOSOPHY
“Sometimes it’s better just to play to your strengths rather than to try to diversify too much…That was a choice I made, to play to my strengths.”
One of those strengths is a look rooted in a documentary style of working, informed by his early work on TV docs. Those documentaries usually relied on operating the camera handheld from the shoulder in order to record the necessary moments as they happened live. In the real world, events or moments often only happen once, so you need an easily mobile camera to observe and capture them.
This is the opposite of fiction filmmaking, where events and scenes can be played out multiple times, and are more often than not photographed in a carefully curated, composed visual style. Rather than going the usual fiction cinematic route, Ackroyd took documentary conventions and ways of working and applied them to fiction filmmaking.
For example, he prefers always shooting movies on real locations whenever possible, over shooting them on a constructed set or in a soundstage - even if that real location is a ship on the ocean.
Ackroyd tends to steer away from setting things up too perfectly and instead leans towards a look where capturing a version of reality is far more important than capturing a ‘perfect’ image.
“I think if you look at my work I’m always trying to push what I’ve done before…and actually I push it towards imperfection…There’s a kind of state that you get into where you’re just in tune with what’s happening in front of the camera.”
To capture images realistically, honestly and with as few barriers as possible he relies on working with multiple camera operators and puts a lot of trust in his crew members. He gives his crew lots of credit on set and in interviews, from the focus puller to the sound recordist, and maintains the importance of teamwork and a group effort in creating a film.
“I used to say that in documentaries the best shot that you get in documentaries is out of focus and underlit and looks rubbish. You know that it had to be in the film because it was absolutely right at the time…I think that’s what you’re striving for, you know. Not to overwhelm people with the beauty. Not to fall in love with the landscape…But to get the picture that…you’re involved with it.”
An example of how he seeks authenticity through imperfection can be found in his approach to blocking scenes with directors and actors. Usually actors rehearse a scene on set and then marks are put down on the floor to indicate the exact positions actors must hit in order to be perfectly lit, perfectly framed and perfectly angled for the shot.
Ackroyd prefers not to mark actors. He sets up any lights he needs either overhead or outside the set so that the actors have the freedom to move around as they like when they play out the scene. Since they don’t have to worry about hitting specific marks, he finds that the actors loosen up more, which injects a realist spontaneity into how their performances are captured.
Sometimes this leads to technical imperfections like moments that are out of focus or frames that aren’t classically composed. But it also injects an energy into the images which is undeniable.
GEAR
“You know I like to get physically involved. We ran around with the cameras. We had four or five cameras at times…In any one setup you’re trying to talk to all the guys, see what they’ve done, see what the next shot should be and give, you know, support and advice.”
As we mentioned, Ackroyd likes shooting with multiple handheld cameras. This allows his operators to quickly react and capture details or moments of performance. It also provides the director and the editor with multiple angles and perspectives which they can cut to in order to build up the intensity and pacing in a scene.
Directors who he has repeatedly collaborated with like Paul Greengrass and Kathryn Bigelow are known for their preference for quick cutting. Ackroyd’s style provides them with the high number of angles that are needed to work in this way.
One of the most important camera tools he uses is focus. He describes focus as being the best cinematic tool, even better than a dolly, crane or tripod, because focus mimics what we naturally do with our eyes and can be used to shift the attention of the audience to a particular part of the frame. He isn’t overly strict with his focus pullers and in fact prefers the natural, more organic method where people drift in and out of focus over every single shot having perfectly timed, measured and calculated focus pulls.
Another important tool in his toolbox is his use of zoom lenses. Again this goes against traditional fiction cinematography principles which ascribe a greater value to prime lenses over zooms - which most documentaries are shot with. He uses quick punch-in zooms as a tool to direct the focus of the audience in the moment. For example if a line of dialogue or an energetic moment of performance is particularly important his operators may push into it with a quick zoom for emphasis.
His choice of camera gear is a bit of a mishmash. In the same film he may use different formats, such as digital, 35mm and 16mm film, with different prime and, of course, zoom lenses. For example, Captain Phillips involved shooting aerial shots digitally, while sequences in the fishing village and on the skiff were shot in Super 16, before switching to 35mm film once the characters boarded the large shipping vessel.
He likes the texture of film and has often used the grainier 16mm to complement his look. He famously used Super 16 to support the raw, on-the-ground documentary aesthetic of The Hurt Locker.
“Well then I thought it has to be Super 16. We have to get back to the basics. Get down to the lenses you can carry and run with and will give you this fantastic range of wide shots and big close ups…The first thing everybody said was that, ‘well, the quality is not going to be good.’ Well, nobody has criticised the quality of the film. They’ve only praised it.”
He has a preference for Fujifilm stock as it fares well in high contrast lighting situations. When shooting on film he would sometimes purposefully underexpose the negative and then bring up the levels later in the DI in order to introduce more grain to the image.
Ackroyd liked to combine 250D and 500T Fujifilm stocks when shooting Super 16 or 35mm. However, after Fujifilm’s stocks were discontinued he transitioned to shooting on Kodak film or with digital cameras, mainly the Arri Alexa Mini.
On Detroit he used the Arri Alexa Mini in Super 16 mode and shot with Super 16 lenses to introduce noise and grain to the image and get a Super 16 feel, which was further amped up in the grade, all while maintaining the benefits of a digital production.
The Aaton XTR is his go-to Super 16 camera, so much so that he owns one. He has used various 35mm cameras such as the Aaton Penelope, the Arriflex 235, the Moviecam Compact and the Arricam LT. Some of his favourite Super 35 zooms are the Angenieux Optimo 15-40mm and, in particular, the 28-76mm, both of which are light enough to be operated handheld and provide a nice zoom range that he can use to punch in with.
He’s also used the Angenieux Optimo 24-290mm, sometimes with a doubler when he needs a longer zoom. It’s too heavy to be used handheld but he has used it with a monopod to aid in operating the huge chunk of a lens and still preserve a handheld feel. Some other zooms he has used include a rehoused Nikon 80-200mm and the Canon 10.6-180mm Super 16 zoom.
Although he prefers zooms he often carries a set of primes which have a faster stop and can be used in low light situations such as Zeiss Super Speeds or Cooke S4s.
Due to the lack of blocking or focus marks, he usually gives his focus pullers a generous, deep stop to work with of around T5.6 and a half.
To further support his look based on realism and documentary, he lights in a very naturalistic manner. He tries to refrain from lighting exteriors altogether and for interiors adds motivated touches of artificial light when he needs to balance the exposure in a scene. A lighting tool he likes to use for this is the single Kino Flo tube, which can easily be rigged overhead or out of sight to provide a low level fill to a scene.
CONCLUSION
Barry Ackroyd’s cinematography is more about deconstructing photography than it is about trying to produce a perfectly beautiful image.
To him imperfections are a signal of authenticity and an expression of realism rather than a flaw. Breaking down an image can’t be done competently without a great degree of skill and knowledge.
His films aren’t created by just picking up a bunch of cameras and pointing them in the general direction of the action; rather they are made through deliberate thought and the cultivation of a style that emits as much intensity, feeling of reality and truth as possible.
Does Sensor Size Matter?
Since there are loads of different cameras with loads of different formats and sensor sizes out there to choose from, in this video I’ll try to simplify it a bit by going over the five most common motion picture formats and discussing the effect that different sensor sizes have on an image.
INTRODUCTION
The sensor or film plane of a camera is the area that light hits to record an image. The size of this area can vary a lot depending on the camera, with each sensor size or format having a subtly different look.
Since there are loads of different cameras with loads of different formats and sensor sizes out there to choose from, in this video I’ll try to simplify it a bit by going over the five most common motion picture formats and discussing the effect that different sensor sizes have on an image.
5 MOTION PICTURE FORMATS
The size of a video camera's film plane or sensor ranges all the way from the minuscule one-third-inch sensor found in smartphones or old camcorders up to the massive 70.4 by 52.6mm 15-perf Imax film negative. But, rather than going over every single sensor in existence, I’m going to take a look at five formats which are far and above the most popular sizes used in film production today and have been standardised throughout film history.
While there are smaller sizes like 8mm film, and sizes in between like the Blackmagic 4K’s Four Thirds sensor, these are used far less frequently in professional film production and are outliers rather than standards. I’ll also only be looking at motion picture formats, so won’t be going over photographic image sizes such as 6x6 medium format.
The smallest regularly used format is Super 16. The film’s smaller size of around 12.5 by 7.4mm makes it a cheaper option than the larger gauge 35mm, as less physical film stock is required.
Due to this it was often used in the past to capture lower budget productions. Now that digital has overtaken film, Super 16 is mainly chosen for its optical capabilities. Its lower resolution look and prominent film grain means that it is often used today to evoke a rough, documentary-esque feeling of nostalgia.
Some digital cameras, such as the original Blackmagic Pocket Cinema Camera, have a sensor that covers a similar area to Super 16, and cameras such as the Arri Alexa Mini have specialised recording modes which sample only a Super 16 sized area of the sensor.
Moving up, the next, and by far the most common, format is Super 35. This format is based on the size of 35mm motion picture film and covers an approximate area of 21.9 by 18.6mm. The name 35mm refers to the total width of the film strip, including the perforated edges on either side of the negative area.
Depending on the budget, aspect ratio and lenses, different amounts of vertical space, measured in perforations per frame, can be exposed. The frame can be cropped to use less film stock or to extract a widescreen image when using spherical lenses. Shooting with anamorphic lenses, which optically squeeze the image, uses the entire area of the negative or sensor; the image is then de-squeezed at a later stage to arrive at a 2.39:1 aspect ratio.
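The de-squeeze arithmetic is simple: an anamorphic lens multiplies the effective width of whatever negative area it projects onto. As a hedged illustration, using an approximate 35mm anamorphic camera aperture of 21.95 by 18.6mm (assumed figures, rounded) and a standard 2x squeeze:

```python
def desqueezed_aspect(width_mm, height_mm, squeeze_factor):
    """Aspect ratio after optically un-squeezing an anamorphic image."""
    return (width_mm * squeeze_factor) / height_mm

# A 2x squeeze on a roughly 1.18:1 negative yields ~2.36:1,
# which is cropped/projected to the 2.39:1 widescreen standard
print(round(desqueezed_aspect(21.95, 18.6, 2.0), 2))
```

The exact numbers depend on the camera aperture in use, but the principle holds: a near-square negative plus a 2x squeeze is what produces the very wide 2.39:1 frame.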
Many digital cinema camera sensors are modelled on this size, with some minor size variations depending on the camera, such as the Arri Alexa Mini, the Red Dragon S35 and the Sony F65. Since this format is the most popular in cinema, most cinema lenses are designed to cover a Super 35 size sensor. Meaning this format has the widest selection of cinema glass available on the market.
Stepping up from 35mm we get to what is called a large format or full-frame sensor. This size is modelled on still photography DSLR cameras with a 35mm image sensor format, such as the Canon 5D, and is larger than Super 35. It’s also around the same size as 8-perf VistaVision film.
Although digital sensors differ a bit depending on the camera, it is usually about 36 by 24mm. Some cameras with this sensor size include the Alexa LF, the Sony Venice and the Canon C700 FF.
This large format is a middle ground between Super 35 and the next format up - 65mm.
Originally, this format used 65mm gauge film, around 3.5 times as large as standard 35mm, which measured 52.6 by 23mm per frame using 5 perforations, with a widescreen aspect ratio of 2.2:1. The Alexa 65 has a digital sensor that matches 65mm film and is a viable digital version of this format.
Finally, the largest motion picture format you can shoot is Imax film. With an enormous 15 perforations per frame, an Imax frame covers a 70.4 by 52.6mm image area.
Due to its enormous negative size and the large, specialised cameras required to shoot it, this format is prohibitively expensive and out of the budget range of most productions. But, it has seen a bit of a resurgence in recent years on high budget blockbusters from directors such as Christopher Nolan who champion the super high fidelity film format.
THE EFFECTS OF SENSOR SIZES
With these five formats in mind, let’s examine some of the effects and differences between them. There are a few things that choosing a format or sensor size affects.
The most noticeable optical effect is that different formats have different fields of view. What this means is that if you put the same 35mm lens on a Super 16, Super 35 and a large format sensor camera, the smaller the sensor is the tighter the image that is recorded will appear.
So, with the same lens, the field of view on a large format camera will be much wider than on a Super 16 camera, where the framing will appear tighter. And since the field of view is wider, larger formats also render a different feeling of depth and perspective.
Because of this difference, the sensor determines the range of focal length lenses that need to be used on the camera. To compensate for the field of view differences, smaller formats like Super 16 need to use wider angle lenses to get to an image that sees the same amount of information, while larger formats need to use longer lenses for that same frame.
For example, to get the same field of view from a 35mm lens on a Super 35 sensor, a Super 16 camera needs to use a 17.5mm focal length and a large format, full-frame camera needs to use a 50mm focal length.
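That equivalence is just a ratio of sensor widths. A minimal sketch of the arithmetic, using approximate sensor widths (assumed, rounded values; real sensors vary slightly by camera):

```python
# Approximate horizontal sensor/negative widths in mm (assumed values)
SENSOR_WIDTH_MM = {
    "Super 16": 12.5,
    "Super 35": 24.9,      # approx. full-aperture width
    "Full Frame": 36.0,
    "65mm (5-perf)": 52.6,
}

def equivalent_focal(focal_mm, from_format, to_format):
    """Focal length on `to_format` that matches the horizontal
    field of view of `focal_mm` on `from_format`."""
    crop = SENSOR_WIDTH_MM[to_format] / SENSOR_WIDTH_MM[from_format]
    return focal_mm * crop

# A 35mm lens on Super 35 frames roughly like a 17.5mm on Super 16
# and roughly like a 50mm on a full-frame camera
print(round(equivalent_focal(35, "Super 35", "Super 16"), 1))   # ~17.6
print(round(equivalent_focal(35, "Super 35", "Full Frame"), 1)) # ~50.6
```

The same function works in either direction, which is why format choice and lens package are always decided together.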
Since focal length affects depth of field, this is another knock-on effect of choosing a format. Longer focal lengths produce a shallower depth of field, the area of the image which is in focus. Full-frame cameras that use longer focal lengths will therefore have a shallower depth of field. This means that the larger the format, the more the background will fall out of focus and the more the subject will be separated from it.
This is helpful for creating a greater feeling of depth for wide shots which people often perceive as looking more ‘cinematic’.
One downside is that the job of the 1st AC, keeping focus consistently sharp, becomes far more difficult. For this reason smaller formats such as Super 16 are far more forgiving for focus pullers: they have a deeper depth of field, so more of the image is in focus and the margin for error is not as harsh.
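The gap between formats can be sketched with the standard hyperfocal-distance approximations. The circle-of-confusion values below are assumptions, scaled roughly with sensor width, so treat the outputs as illustrative comparisons rather than focus-chart numbers:

```python
def depth_of_field_mm(focal_mm, t_stop, subject_mm, coc_mm):
    """Approximate total depth of field via the hyperfocal-distance formulas."""
    hyperfocal = focal_mm ** 2 / (t_stop * coc_mm) + focal_mm
    near = subject_mm * (hyperfocal - focal_mm) / (hyperfocal + subject_mm - 2 * focal_mm)
    if subject_mm >= hyperfocal:
        return float("inf")  # far limit reaches infinity
    far = subject_mm * (hyperfocal - focal_mm) / (hyperfocal - subject_mm)
    return far - near

# Equivalent framing of a subject 3m away at T2.8:
# Super 16 (17.5mm lens, assumed CoC ~0.0125mm)
# vs full frame (50mm lens, assumed CoC ~0.036mm)
s16 = depth_of_field_mm(17.5, 2.8, 3000, 0.0125)
ff = depth_of_field_mm(50, 2.8, 3000, 0.036)
print(round(s16), round(ff))  # Super 16 holds several times more in focus
```

Under these assumptions the Super 16 setup keeps roughly three times as much depth in focus as the full-frame one for the same framing, which is exactly the forgiveness the focus puller gains.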
The grain and resolution that an image has is also affected by the size of the format. The smaller the format is, the more noticeable the grain or noise texture will usually be, and the larger the sensor is the finer the grain will appear and the greater clarity and resolution it often has.
Sometimes cinematographers deliberately shoot smaller gauge formats like Super 16 to create a more textured image, while others prefer larger formats like 65mm for its super clean, sharp, low noise look.
So those are the main optical effects of choosing a format.
Smaller formats require wider focal lengths, have a deep depth of field, have more grain and will overall feel like they are a bit flatter.
Larger formats require longer focal lengths, have a shallower depth of field, less grain, greater resolution and clarity and overall have a more three-dimensional look with an increased feeling of depth.
There are also the all-important practical implications to be considered. Generally speaking the larger the format, the larger the form factor of the camera will be to house it and the more expensive it is to shoot on.
This calculation may be different when comparing the costs of digital and film, but when comparing all the digital formats, renting the cameras and lenses for 65mm will be more expensive than a Super 35 camera. Likewise, when comparing film formats, 16mm is vastly cheaper than IMAX.
So broadly speaking, smaller formats tend to be more budget friendly and come in a smaller housed package.
DOES SENSOR SIZE MATTER?
Coming back to the question of whether sensor size matters, I don’t think any one sensor is necessarily better than another. But the effects that they produce are certainly different.
Filmmakers that want an image that immerses an audience in a crystal clear, highly detailed, wide vista with a shallow depth of field will probably elect to shoot on a larger format.
Whereas those who require a more textural, nostalgic or rougher feeling photography with less separation between the subject and the background may be drawn to smaller gauge formats.
As always, the choice of what gear is most suitable comes down to the needs of the project and the type of cinematic tone and photographic style you are trying to capture.
Using Colour To Tell A Story In Film
Let’s examine this idea of colour by going through an introduction to colour theory, look at how filmmakers can create a specific colour palette for their footage and check out some examples of how colour has been used to aid the telling of different stories.
INTRODUCTION
Cinematography is all about light.
Light is a complex thing. It can be shaped, it can come in different qualities, different strengths and, importantly, it can take the form of different colours.
So, let’s examine this idea of colour by going through an introduction to colour theory, look at how filmmakers can create a specific colour palette for their footage and check out some examples of how colour has been used to aid the telling of different stories.
WHAT IS COLOUR THEORY?
Colour theory is a set of guidelines for colour mixing and the visual effects that using different colours has on an audience.
There are many different approaches to colour theory, ranging from ideas as far back as Aristotle, through Isaac Newton's experiments with prisms, to more contemporary studies of colour. But let's just take a look at some basic ideas and see how they can be applied to film.
When different wavelengths of light hit objects with different physical properties, we perceive a colour, which we put into a category and ascribe a name to.
Primary colours are a group of colours that can be mixed to form a range of other colours. In film these are often, but not always, used sparingly in a frame. A splash of red in an otherwise green landscape stands out and draws the eye.
An important part of colour theory in the visual arts is knowing complementary colours. When two of these colours are combined they make white, grey or black. When the spectrum of colours is placed on a colour wheel, complementary colours always take up positions opposite each other.
When two complementary colours are placed next to each other they create the strongest contrast for those two colours and are generally viewed as visually pleasing. Cinematographers often combine complementary colours for effect and to create increased contrast and separation between two planes in an image. For example, placing a character lit with an orange, tungsten light against a blue-ish teal background creates a greater feeling of separation and depth than if both the character and the background were similar shades of orange.
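In additive (RGB) terms, the complement relationship is simple arithmetic: two complements sum to white. A toy sketch of the idea, using 8-bit values:

```python
def complement(rgb):
    """Additive (RGB) complement: the two colours sum to white.
    A toy 8-bit model of the colour-wheel relationship described above."""
    r, g, b = rgb
    return (255 - r, 255 - g, 255 - b)

orange = (255, 128, 0)
print(complement(orange))  # (0, 127, 255) - a teal/blue, opposite on the wheel
```

This is why the orange-skin-against-teal-background pairing works: the two colours sit at maximum chromatic distance from each other.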
When it comes to the psychology of using colour, cinematographers generally fall into two camps - or somewhere in the middle. Some cinematographers such as Vittorio Storaro think that certain colours carry an innate, specific psychological meaning.
“Changing the colour temperature of a single light, changes completely the emotion that you have in your mind. I didn’t know at the time the meaning of the colour blue. It means freedom.” - Vittorio Storaro
Other filmmakers rely more on instinct and what feels best when lighting or creating a colour palette for a film. The psychology of colour can change depending on the context and background of the audience.
As well as being a means of representing and expressing different emotions, deliberate and repeated uses of colour can also be used by filmmakers as a motif to represent themes or ideas.
Another important part of colour theory is warm and cool colours. The Kelvin scale is a way of measuring the warmth of light, with lower Kelvin values being warmer and higher Kelvin values being cooler.
Warm and cool colours can have different psychological effects on an audience and can also be used to represent different physical, atmospheric conditions. Using warmer colours can be used to emphasise the feeling of physical heat in a story, while inversely cooler colours can be used to make the setting of a story feel cold or damp.
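The Kelvin scale described above can be sketched as a simple lookup. The values here are typical figures I've assumed for common sources, not exact measurements:

```python
# Rough colour temperatures of common sources in Kelvin
# (typical assumed values - real sources vary)
SOURCES_KELVIN = {
    "candle": 1900,
    "tungsten": 3200,
    "hmi_daylight": 5600,
    "overcast_sky": 6500,
}

def is_warmer(source_a, source_b):
    """Lower Kelvin reads as warmer (more orange) light."""
    return SOURCES_KELVIN[source_a] < SOURCES_KELVIN[source_b]

print(is_warmer("tungsten", "hmi_daylight"))  # True: 3,200K tungsten is warmer
```

Note the inversion that trips people up: a *higher* temperature number means a *cooler*-looking light.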
CREATING A COLOUR PALETTE
Now that we have a basic framework of colour theory to work with, let's look at the different ways that filmmakers can make a colour palette for a movie. Colour palettes in film can be created using three tools: production design and costume, lighting and in the colour grade.
The set and the clothing that the characters are dressed in is always the starting point for creating a colour palette. In pre-production, directors will usually meet with the production designer and come up with a plan for the look of the set. They might give the art director a limited palette of colours to work with, or decide on specific tones for key props. The art team will then go in and dress the set by doing things such as painting the walls a different colour and bringing in pieces of furniture, curtains and household items that conform to that palette.
Since characters are usually the focus of scenes and we often view them up close, choosing a colour for their costume will also have a significant impact on the overall palette. This may be a bold primary colour that makes them stand out in the frame, or something more neutral that makes them blend into the set.
With a set to work with, the next step in creating a movie’s colour palette is with lighting.
Traditionally, film lighting is based around the colour temperature of a light which as we mentioned could be warm, such as a 3,200K tungsten light or cool, such as a 5,600K HMI. On top of this, cinematographers can also choose to introduce a tint to get to other colours. This can be done the old school way by placing different coloured gels in front of lights, or the modern way by changing the hue or tint of LEDs.
DPs can either flood the entire image with monochromatic coloured light, or, as is more common, light different pockets of the image with different colour temperatures or hues. In the same way that we create contrast by having different areas of light and shadow in an image, we can create contrast by having different areas of coloured light.
Once the colour from the set and the lighting has been baked into the footage, we move into post-production, where it's possible to fine tune this colour in the grade.
An image contains different levels of red, green and blue light. A colourist, often with the guidance of a director or cinematographer, uses grading software like Baselight or DaVinci Resolve to manipulate the levels of red, green and blue in an image.
They can change the RGB of specific values of light, like introducing blue into the shadows, or adding magenta to the highlights. They can also create power windows, to change the RGB values in a specific area of the frame, or key certain colours so that they can be individually adjusted. There are other significant adjustments they can make to colour such as determining the saturation or the overall intensity of the colour that the image has.
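The "blue into the shadows" adjustment mentioned above boils down to conditionally shifting one channel based on luminance. A deliberately simplified 8-bit sketch — real grading tools work on log or linear data with smooth luma masks, not a hard threshold:

```python
def add_blue_to_shadows(pixel, amount=20, threshold=80):
    """Toy 8-bit grade: push blue into pixels darker than `threshold`.
    `amount` and `threshold` are arbitrary illustration values."""
    r, g, b = pixel
    # Rec.709 luma weights - green dominates perceived brightness
    luma = 0.2126 * r + 0.7152 * g + 0.0722 * b
    if luma < threshold:
        b = min(255, b + amount)
    return (r, g, b)

print(add_blue_to_shadows((30, 30, 30)))     # shadow pixel: blue lifted
print(add_blue_to_shadows((200, 200, 200)))  # highlight pixel: unchanged
```

Power windows and keys are the same idea with a spatial or colour-based mask deciding which pixels the adjustment touches.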
USING COLOUR TO TELL A STORY
“It’s a show about teenagers. Why not make a show for the teenagers that looks like how they imagine themselves. It’s not based on reality but mostly on how they perceive reality. I think colour comes into that pretty obviously.” - Marcell Rév
When coming up with a concept for the lighting in Euphoria, instead of assigning very specific psychological ideas to colour, Marcell Rév used colour more generally as a way to elevate scenes from reality.
He wanted to put the audience in the emotionally exaggerated minds of some of the characters and elevate the level of the emotions that were happening on screen. In the same way that the often reckless actions of the characters continuously ratcheted up the level of tension in the story, so too did the exaggerated, brash, coloured lighting.
To increase the potency of the visuals he often played with a limited palette of complementary colours. He avoided using a wide palette of colours, as it would become too visually scattered and decrease the potency of the colours that he did use.
Along with his gaffer he picked out gels, mainly light amber gels which he used with tungsten lights and cyan 30 or 60 gels which he used with daylight HMIs. They also used LED Skypanels, which they could quickly dial specific colour tints into.
“That light…that colour bouncing off the screen and arriving at us we don’t see it only with the eyes, we see it with the entire body…because light is energy. I’m sending some vibrations to you, to the camera, to the film…unconsciously.” - Vittorio Storaro
When photographing Apocalypse Now, Vittorio Storaro was very deliberate about his use of colour. He wanted the colours to be so strong and saturated that the world on film almost became surrealistic.
He wasn’t happy with Kodak’s 5247 100T film stock at the time, so he got the film laboratory to flash the negative to get the level of contrast and saturation which he was happy with.
In the jungle scenes he didn’t want to portray the location naturally. He sometimes used filters to add a monochromatic palette which was more aggressive, to increase the tension.
“I can use artificial colour in conflict with the colour of nature. I was using the symbolic way that the American army was using to indicate to the helicopter…They were using primary and complementary colours. I was using those kinds of smoke colours to create this conflict.” - Vittorio Storaro
He also described how the most important colour in the film was black, particularly in the silhouetted scenes with Kurtz. He felt black represented the unconscious and was most appropriate for scenes where the audience was trying to discover the true meaning of Kurtz, with small slivers of light, or truth, emerging from the depths of the unconscious.
What A Boom Operator Does On Set: Crew Breakdown
In this Crew Breakdown video I’ll go over the position in the sound department of the boom operator, to break down what they do, their average day on set and some tips which they use to be the best in their field.
INTRODUCTION
In this series I go behind the scenes and look at some of the different crew positions on movie sets and what each of these jobs entails. If you’ve ever watched any behind the scenes videos on filmmaking you’ve probably seen this person, holding this contraption.
In this Crew Breakdown video I’ll go over the position in the sound department of the boom operator, to break down what they do, their average day on set and some tips which they use to be the best in their field.
ROLE
The boom operator, boom swinger or first assistant sound is responsible for placing the microphone on a set in order to capture dialogue from the actors or any necessary sounds in a scene.
They do this by connecting a boom mic, or directional microphone, to a boom pole. The mic is then connected either with an XLR cable or wirelessly to a sound mixer where the sound intensity is adjusted to the correct level.
On feature films this mixing is done separately by the sound recordist who heads the department, and is responsible for recording all the audio and delegating the positioning of the mic to the boom operator. However, for low budget features, TV shoots, documentaries or commercials, the role of the sound recordist and the boom swinger is sometimes performed simultaneously by one person.
To get the best possible sound and capture dialogue clearly, the microphone usually needs to be placed as close as possible to the actors. Since film frames have quite a lot of width to them and see a lot of the location, the best way to get the microphone close to the action without it entering the shot is to attach it to a boom pole with the mic angled downwards, then use the length of the boom held overhead to position the microphone directly above the actors, just outside the top of the frame.
For stationary shots without camera movement this involves finding a position for the boom and holding it throughout the take. Sometimes for long documentary interviews this can be done with the help of a stand. However, for shots which involve camera movement or actors that are moving and talking, the boom operator is tasked with performing a kind of dance. They need to move tighter or wider as the camera does, always fighting to get the mic as close as possible while making sure it never dips into the frame and enters the shot. If this happens during a take the DP, director or 1st AD will often call out ‘boom’.
While the act of operating the boom mic during filming is their primary responsibility, there are also some other tasks that boom swingers need to perform.
Dialogue is usually captured by two different types of microphones, the boom mic, as we mentioned, as well as lapel or lav microphones. These are small microphones which are strapped directly onto the actors with a clip or with an adhesive tape. They are usually positioned under clothing near the chest or throat area so that they will pick up intimate sound but be unseen by the camera.
These microphones are attached to a transmitter. This wirelessly transmits the recorded sound to a receiver that is then connected to a sound mixer where the audio feed is recorded. The boom operator is usually responsible for attaching this lav microphone to the necessary actors.
It’s best practice to always inform and explain how you’ll be attaching the mic to the actor, as it can be a bit invasive. Experienced actors are aware of this but it’s still professional to ask their permission before touching them or putting on the lav mic.
The boom operator also assists the sound recordist with any necessary technical builds or changes, such as switching out batteries or attaching a Lockit box to the camera for synchronising timecode.
AVERAGE DAY ON SET
After arriving on set the boom operator will track down a copy of the shooting schedule and sides: a list of what scenes are planned for the day and the pages from the script with dialogue for those scenes. They'll read the sides to see what dialogue needs to be recorded and which actors need to be mic'd up. They'll replace the necessary batteries and make sure everything is charged up and ready to go.
The boom operator or the sound recordist may test that their audio feed is getting transmitted to VT and hand out a pair of headphones and receiver to the director for them to monitor the sound during takes.
The boom operator will mic up any actors in the scene that have dialogue and prep their boom setup. This may involve changing their gear, such as adding a windshield - a blimp or furry 'deadcat' cover - to cut out wind noise if they are recording a scene outdoors.
They’ll find out the lens that the camera is shooting with, or take a peek at the monitor to see how wide the frame is and how close they can position the boom without getting in the shot.
Once ready to record a take, the 1st AD will call 'roll sound' and the sound recordist will begin recording, then the cameras will roll. The boom swinger will then move the mic over so that it's directed at the 2nd AC, who will announce the information on the clapper board and then give it a clap so that the editor has a point where they can sync the sound with the video. The boom operator will then quickly position the boom above the actors and be ready to record dialogue.
They always wear headphones while recording which can help them to position the boom further away, closer or at a different angle to get the best possible sound.
When recording sound the default rule is to usually capture whatever audio is present on screen.
So, for wider shots where multiple actors are in a frame the boom swinger may alternate and move the boom closer to whoever is speaking, positioning the mic back and forth as the actors exchange dialogue. Then for close ups where only a single actor is on screen they will usually focus only on recording sound for that actor alone. When the camera switches to a close up of the next actor then their full sound will get recorded. By doing this they will then have clear dialogue for both actors which the editor can use at their discretion.
Sometimes if the boom was unable to capture a certain sound effect or a certain line of dialogue during a scene then they will pull the actor aside between setups and record what is known as wild sound - sound that isn’t recorded with any specific video footage but which may be synchronised or used later.
They may also need to record ‘room tone’, a quiet ambient recording of the space without any dialogue. This can be placed as a layer underneath the dialogue in the edit to make the cuts more natural and provide a background noise to the scene.
In this case the 1st AD will make an announcement to the set, the crew will awkwardly freeze so as not to make any noise and the mic will record about a minute of quiet, ambient sound.
TIPS
Since any footage where the boom dips into shot will be unusable one of the most valuable skills for boom operators is to know how wide a frame is. If you’re starting out, it’s useful to look at the framing on the monitor and find what is called an ‘edge of frame’. This could be a marker on the set which indicates what is in or out of the shot.
As boom operators become more experienced they’ll begin to learn focal length sizes and be able to place the boom without needing to look at a frame. For example, if they know how wide a 35mm lens is then they can imagine its field of view and be sure to stay out of it.
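The mental picture of a lens's coverage that experienced boom operators build can be expressed as simple trigonometry. A sketch, assuming an approximate Super 35 sensor width:

```python
import math

def horizontal_fov_deg(focal_mm, sensor_width_mm=24.9):
    """Horizontal angle of view for a lens.
    Default width is an assumed typical Super 35 value."""
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_mm)))

print(round(horizontal_fov_deg(35), 1))  # ≈39 degrees on Super 35
print(round(horizontal_fov_deg(18), 1))  # wider lens: a much bigger wedge to stay out of
```

An 18mm sees a wedge nearly twice as wide as a 35mm, which is why boom operators always check the focal length before finding their position.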
As space on a set can sometimes be limited, it is useful to find the best position to stand before shooting begins. When finding a position it’s important to be mindful of how the camera and actors will move and to identify any lights that may cast shadows or reflective surfaces that will pick up the mic in shot.
Always make sure to never position the boom between a light source and a character, as it will cast a shadow of the gear in the image. Adjusting the length of the boom so that it has enough reach, but isn’t overextended will also save space and mean that it stays out of the way of other crew members, gear or lights.
Boom mics are directional, meaning that they capture whatever sound they are pointed at most prominently. Any sound behind or off to the side of the mic will be recorded much softer. Therefore positioning the boom overhead with the mic facing down towards the speaker is most common.
In a dialogue scene with two people close together the operator may be able to turn the angle of the directional mic toward whoever is speaking at the time without needing to move the position of the actual pole.
If the frame has a lot of headroom it is also possible to boom sound from underneath the frame. However, when recording outside this may sometimes pick up excess aviation noise if any planes pass overhead, so should be avoided unless it’s necessary.
If you’ve ever operated a boom during a long scene you’ll know that holding it in an awkward position gets surprisingly heavy surprisingly quickly. The solution is to either spend more time at the gym or find positions that better mitigate the weight of the mic pole, such as resting it on a shoulder or on the head.
The Most Popular Cinema Lenses (Part 4): Panavision, Tokina, Atlas, Canon
In the fourth part of this series I’ll look at the Panavision G-Series anamorphics, Tokina Vistas, Atlas Orions and the Canon S16 8-64mm zoom lens.
INTRODUCTION
While many think that only the camera is responsible for the look that footage has, the glass that is put in front of the camera has just as great an influence over how a film looks.
In the fourth part of this series I’ll look at some popular lenses which are used in the film production industry and break down the ergonomics of each lens, the format they are compatible with, as well as their all important look, by using footage from movies shot with this glass.
Hopefully this will provide some insight into what kind of jobs and stories each lens is practically and aesthetically suited for. Let’s get started.
PANAVISION G-SERIES
Panavision launched their G-Series lightweight anamorphic lenses that covered a 35mm frame in 2007.
In a past episode we looked at another of Panavision's anamorphic series, the Cs. Launched in 1968, they are probably considered the flagship of Panavision's vintage glass, and the classic anamorphic look of the Cs is still highly sought after today despite their relative scarcity and mish-mash ergonomic designs.
The Gs were released by Panavision for DPs that prefer a slightly more updated iteration of the Panavision anamorphic look with easily workable, modern ergonomics.
The older Cs came in different sizes, with different apertures, close focus capabilities and front diameters. The Gs, by contrast, come in more consistent sizes, with more standardised T stops and front diameters. This makes working with the Gs far easier and quicker for camera assistants. For example, when changing between G-series lenses the focus and iris gears sit almost identical distances apart, meaning the focus motors barely have to move.
The Gs can be shot wide open at T/2.6 and deliver the exact same exposure across different lenses, whereas the apertures of the Cs need to be individually tweaked between lens changes. Their standardised front diameters mean that clip-on matte boxes can be easily interchanged without swapping out the back, and their more consistent lengths and weights make rebalancing gimbals after a lens change easier.
The Gs also have more subtle breathing, which means the image will shift less when the focus is racked.
The update of the Gs also carries over into their optical look. They have a higher contrast, a greater degree of sharpness, aberration control, glare resistance and overall, resolve higher resolution images, while maintaining Panavision’s beautiful anamorphic bokeh and focus falloff.
DPs such as Matthias Koenigswieser enjoy the more consistent and modern anamorphic look of the Gs. On Christopher Robin he combined the C-series and the G-series. He used the more modern Gs when shooting on slightly less sharp 35mm film, and used the softer, more vintage Cs for footage that needed to be shot at a higher digital resolution. In this way they balanced out to provide a consistent look across mediums.
Fun fact, when I camera assisted Matthias on a TV commercial he also opted to use the Gs for a contemporary, sharper looking, anamorphic car shoot.
Overall, Panavision G-series are great for cinematographers who need a lightweight lens that is solidly constructed, quick and easy to work with and desire a slightly updated Panavision anamorphic look that is more optically consistent.
TOKINA VISTA
From an anamorphic lens that covers the 35mm format, to a large format spherical lens, let's take a look at the Tokina Vista primes. This set of lenses features a massive 46.7mm image circle. This means that they cover almost any cinema camera on the market, including full frame sensors and large format sensors like the Red Monstro 8K or the Alexa LF.
They have a fast aperture of T/1.5 across the entire range of focal lengths. This means that when the longer lenses are combined with large format cameras, the depth of field becomes razor thin.
They are very solidly constructed and come in consistent lengths across the range, with the iris and focus gears all being the same distance from the mount. The front diameter is standardised to 114mm.
There are lots of well designated distance measurements on the barrel of the lens which makes focus pulling marks more accurate. Although super robust, their full metal construction and the ample glass that is needed to cover large sensors mean that the lenses are a pretty hefty weight.
The lenses are super sharp and come with modern coatings that give them very few chromatic aberrations. This means their optical qualities are far less vintage and imperfect like other large format lenses such as Arri DNAs. Their modern coatings also mean that the lenses don’t flare massively, but when hit with the right angle of light they will produce a blue, green rainbow flare.
Their look is super modern, ultra-crisp and sharp across the entire width of the frame. Even wide open at T/1.5 the sweet spot of the lens is very crisp. They also have minimal distortion, even at the widest 18mm focal length.
The Tokina Vistas are a great option for DPs that need an ergonomically designed, fast, super sharp, modern looking spherical lens that resolves high resolution images and covers large format cameras.
ATLAS ORION
The Orion series of anamorphic lenses from Atlas was first unveiled in 2017. Starting with just a 65mm lens, the set has since been expanded to seven focal lengths ranging from 25mm to 100mm.
Atlas was started as a small company with the goal to manufacture professional grade anamorphic cinema lenses at an affordable price point. This may not seem all that affordable at first glance, but when compared to purchasing or renting other high end anamorphic glass, the price is significantly reduced.
Since these front anamorphic lenses have been designed recently they feature solid, modern design with a robust housing, well spaced distance markings and a smooth focus gear. They aren’t the smallest, lightest or most compact of anamorphic lenses, particularly when compared to lenses such as Kowas, but they are solid.
All focal lengths, even the 25mm, feature an aperture of T/2. This makes them very fast for anamorphic lenses, which typically aren't as fast as their spherical counterparts. However, when shot wide open at T/2 they do lose some sharpness, with their sweet spot being closer to around T/4.
Another great feature is that the Orions have very good close focus capabilities. This makes it easy to shoot close ups in focus without the use of diopters.
When it comes to their look, I’d say they have subtle vintage characteristics, but overall create a more traditional anamorphic look with good levels of contrast and no crazy focus falloff or distortion.
So if you’re looking for a solid set of modern, fast anamorphic primes that cover a 35mm sensor, with some vintage characteristics and great close focus all at an affordable price point, then the Orions may be the way to go.
CANON S16 8-64mm
So far we’ve looked at lenses that cover Super 35 and large format. Next let’s take a look at a smaller format lens, Canon’s super 16 zoom, the 8-64mm.
Since this lens was designed for Super 16mm film it doesn’t cover a lot of today’s modern sensors, which are Super 35 size or larger. However, this lens is still widely used today as the zoom of choice for 16mm film work, on the Alexa Mini in its S16 mode, or on certain Micro Four Thirds digital cameras.
Its 8-64mm range is about the equivalent of a 14.5-115mm lens on Super 35. This means that this single lens has lots of flexibility and covers a whole range of conventional prime focal lengths. Despite this long zoom range it has good close focus at 22 inches. This made it a popular workhorse in the 90s, especially for TV work or documentaries shot in 16mm.
For such a long zoom range it’s pretty compact and lightweight but it's built like a tank. The focus rotation is only about 180 degrees which makes it a good pairing for solo documentary operators. It has some distance markings on the barrel of the lens, with probably not as many distances as most focus pullers would like, but enough to get by.
With an aperture of T/2.4 the lens is fairly fast for a zoom. When shot wide open the image does tend to get a little softer and ‘dreamier’, like a diffusion filter has been added. But if it’s stopped down just a bit to around T/2.8 the lens sharpens up.
For a vintage zoom it's amazingly sharp which you usually want when shooting 16mm film which is a lower fidelity medium. However it isn’t overly sharp or too clinical with its vintage lens coating.
Its solid construction, long zoom range, fast aperture, great close focus and sharp but slightly vintage look make the Canon 8-64mm a great choice for DPs looking for a 16mm zoom.
Cinematography Style: Conrad Hall
In this episode I’ll look at what Conrad Hall had to say about his philosophy on photography and show some of the gear which he used in order to cultivate his photographic vision.
INTRODUCTION
If you were to Google ‘who are the best cinematographers of all time?’, it won’t take long to stumble upon the name Conrad Hall. Through ten Academy Award nominations and three wins, the strength of his career speaks for itself.
His photography is characterised by neutral colour palettes, inventive uses of hard light, reflections and character focused framing which all culminated into a style which he called ‘magic naturalism’. In this episode I’ll look at what Conrad Hall had to say about his philosophy on photography and show some of the gear which he used in order to cultivate his photographic vision.
BACKGROUND
Hall was born in 1926 in French Polynesia and in his mid teens began attending boarding school in California. After graduating he signed up for a degree in journalism at USC, however that didn’t last long.
“Boy am I lucky that I got a D+ in journalism and had to change my major.”
He switched to the cinema program and began learning the basics of filmmaking, a relatively new art form to study at the time.
Working on a camera crew in Hollywood back then required membership in the International Photographers Guild, and joining the guild required professional credits - a catch-22 that left him without a job. To work around this, Hall and some of his classmates created their own independent production company and produced a film called Running Target, which Hall shot. This gained him membership to the guild, however due to regulations he wasn’t allowed to be credited as cinematographer on the film, but rather as a visual consultant, even though he shot the entire film.
As a member of the guild he then worked his way up the ranks, from camera assistant, to camera operator until he eventually got a chance to photograph the feature film Morituri as the director of photography.
Hall’s career went on to span many decades, from the 1950s to the early 2000s, during which time he worked with a host of esteemed directors which included: Richard Brooks, Stuart Rosenberg, John Huston, Steven Zaillian and Sam Mendes.
PHILOSOPHY
When Hall was asked how he decided where to point the camera, he is reported to have said, “I point it at the story.”
To him, the story was always the starting point for determining his photographic decisions and the ultimate target he aimed for. Although the look of his photography changed between projects depending on the story and director he was working with, he carried over some philosophical concepts throughout his career.
“I’ve never been somebody to get a movie to look absolutely perfect… Mine are always sort of flawed somehow or other. And in a way I don’t mind that because it’s not about perfection it’s about the overall feeling of the thing.”
His light wasn’t always perfectly soft or perfectly shaped, didn’t always have a perfect contrast ratio between light and shadow, and his framing wasn’t always perfectly symmetrical. By not always aiming for a perfectly beautiful image, much of his work carries a feeling of naturalism. He often incorporated interesting flourishes as well, such as reflections, hard shadows and atmospheric texture like smoke or rain.
He called this magic naturalism: shooting things as they are, while at the same time incorporating stylistic touches that heightened the atmosphere of the story.
“I’m one of those guys who doesn’t do a lot of augmenting. But who knows how to take the accident and turn it into something wonderful, magical. I look for that. I thrive on it. I feed on it. I don’t invent stuff. It invents itself and then I notice it and use it dramatically.”
Hall was a master of observing unintentional magical moments, then using the photographic tools he had to emphasise them. Whether that was zooming into the reflection of a chain gang trapped in the sunglasses of a prison guard, throwing hard light against a tree to create ominously moving shadows in the wind, or positioning a character who was a murderer so that the accidental reflection made by the rain on the window made it look like he was crying.
When combined with an otherwise largely naturalistic look, these stylised little moments of ‘happy accidents’ elevated the story in a magical way.
Due to the length of his career, he started out photographing films in black and white, then of course moved on to colour as it replaced black and white as the dominant medium. A thread that he carried through most of his colour films is an earthy, neutral colour palette. Many of his films used lots of browns, greens, whites and greys, with strong, pure black shadows. The colour was rarely strongly saturated or vivid across the frame.
This meant that when a strong colour was used, like the famous use of red in American Beauty for example, it really stood out against the rest of the film’s neutral tones.
GEAR
“It’s as complex a language as music. A piano’s got 88 keys and you can use them in any complex way you want to. We got the sun and light. Is there anything more complex than light?”
As we mentioned, being open to ‘happy accidents’ was an important part of his cinematic philosophy, especially when it came to light. For that reason he liked coming up with the majority of his lighting on the day, although for large spaces some basic pre-lighting work was sometimes necessary.
“I don’t like to figure things out ahead of time before the actors do”
For example, many of the large interiors on Road To Perdition were sets constructed in a studio. During prep, a rigging crew of 10 spent 8 weeks installing a collection of greenbeds and scaffolding overhead in the studio. To this scaffolding his team rigged 30 10K fresnels and 60 5Ks, all wired to dimmers and used to light the backgrounds outside the windows, essentially acting as ambient sunlight.
It also meant that all his lights were out of the sets and wouldn’t get in the way of the actor’s blocking or framing. The sets were also electrified so that practical light sources, such as lamps, could be plugged in and used. Much of his lighting was done with tungsten balanced lights.
With these fixtures rigged in place he could then come in on the day and position the lights as he desired. A lot of the light was hard and undiffused, something he often did in many of his films. This resulted in strong lines of shadow.
He regularly created shape by breaking up the light with interesting textures or used parts of the set to shape the shadows which were cast.
When lighting interiors he would also use what he called ‘room tone’, where he bounced smaller fixtures like a 1K into the ceiling to provide a soft ambient base light to a room. This filled in the contrast from the hard light a bit. Because the light is soft and bounced it’s not very directional which means it’s difficult to tell where exactly it comes from. In this way it provides an overall lift to the space in a natural way. Once this base ‘room tone’ was in place he could then work on lighting the characters. Often hitting them from the side or behind with a hard source.
He used this same hard backlight to bring out textural components like smoke or rain.
When it came to selecting lenses he liked using a wide collection of prime focal lengths, from 27mm to 150mm. Unlike the recent trend of using wide angle lenses for close ups, Hall took a more traditional approach: he used wide angle lenses for wide shots and longer focal lengths for close ups that threw the background out of focus.
However he did sometimes use long lenses, like Panavision’s 50-500mm zoom, creatively for wide shots to increase the feeling of heat waves with distortion and represent characters as more wispy yet menacing.
He liked using a shallow depth of field, usually setting the aperture of his lenses between T/1.9 and T/2.5. This gave the photography an emotional dimension and clearly showed the focus (literally) of the shot.
Hall mainly used Panavision cameras and lenses. He shot on 35mm film before the advent of digital cinematography with cameras such as the Panavision Platinum. He especially liked using Panavision Primo lenses for their look, reliability and wide range of focal lengths.
To achieve the colour he liked he used fine grain tungsten Kodak film stocks for interiors as well as exterior daylight scenes. For example, he used the more modern Kodak Vision 200T for his later work, and Eastman EXR 100T for his older work. A lot of the classic 60s and 70s-feeling muted, neutral colour came from his use of Eastman’s famous 100T 5254 colour negative film.
For some films, such as Butch Cassidy, he wanted the colour to be even more muted and to take out the clichéd blue so often used in ‘western skies’. To do this he radically overexposed the film, then got the laboratory to compensate for the overexposure in the print. This further washed out the colour and turned the blue a softer, lighter shade.
CONCLUSION
“Those are the kind of films that I like to get a hold of and don’t often get a chance to do. Stuff that, like, goes on forever about some basic and important human condition that is bigger than all of us and will go on forever no matter what era it’s set in.”
The films Conrad Hall chose to photograph are a reflection of his style as a whole: simple, natural stories that come to represent something bigger through his injection of magical moments.
One thing he always tried to do was tell stories so well that, if the sound were turned off, the audience would still understand the story just from the images. His love for the medium, his powers of observation and his ability to translate stories using whatever magic naturally occurred on set are what made him one of the greatest of all time.
4 Reasons Movies Shouldn't Be Watched On Laptops
Let's remind ourselves why going to the cinema is still superior to streaming movies from home by going over 4 reasons we should still make the effort to get out of the house and go to the cinema.
INTRODUCTION
According to a recent study, only 14% of adults said they preferred viewing a new movie in the cinema, while 36% preferred streaming it on a device at home. While this trend may have been expedited by the pandemic, I think it’s one that’s on the rise regardless. Fewer and fewer people are going to cinemas, and I think that’s kind of sad.
And I mean, look, I get it. I’m guilty of it too. Convenience outweighs viewing experience. 80% of my cinematic diet is probably consumed at home from a TV or a laptop rather than on the big screen.
Before this video becomes too much of a lamentation about the death of cinema, I’d like to flip it to remind us why going to the cinema is still the superior experience by going over 4 reasons we should still make the effort to get out of the house and go to the movies.
CONSISTENCY
One of the biggest issues of watching movies from home comes from the inconsistency of the image.
Filmmakers spend years developing their craft, putting blood, sweat and tears into lighting, testing for the perfect lenses and tweaking nuances of colour on a calibrated monitor in the grade. Only for 60% or so of the audience to watch the final product on a smartphone with a cracked screen in a bright room full of reflections.
Cinema is a medium that is all about refining and tweaking details in order to create a lasting artwork. A lot of this is undone by watching the final product in a suboptimal viewing environment.
I mean yes, you get the gist of the visuals, but it’s kind of like taking an album that was carefully, meticulously written, recorded, mixed and mastered, and listening to it on a noisy airplane with the cheapest pair of headphones sold at the 7-Eleven. You can kind of make out most of the lyrics and melody, but all the sonic nuance the artists spent their time and energy creating is lost.
Most movie complexes use high-end, DCI-compliant digital cinema projectors, tested to output images of a consistently high standard, in a dark environment free of stray light. This means the viewing experience at different cinemas around the world will be almost identical. Colour and contrast will be consistent, and resolution is standardised to either 2K or 4K.
This differs from viewing a movie at home. Different screens made by different manufacturers have different resolutions, display colour and contrast inconsistently at varying levels of brightness, and don’t have to conform to any compliance standards.
Using different media players may also affect colour and luminance. For example, a pet peeve of mine is that Apple’s QuickTime Player shifts the gamma curve and plays video files with different contrast from the original file.
Also, unless you are viewing a movie at night with all the lights in the house off, there will be excess ambient light that may cause reflections or dilute the brightness of the image.
THE AUDIENCE EFFECT
One of the most noticeable impacts that viewing a movie at home has, is that it shifts the experience from being a communal one to being an individual one.
The idea of cinema was born out of creating a medium which could be shared by an audience. I’d say that communal viewing heightens the effect that a film has on us. Whatever emotion the filmmakers impart to the audience is heightened when we share it as a group. For horror films you can hear the audience gasp, for comedies laughter rings out, and for compelling dramas you can almost feel a communal silent focus take hold.
I think part of this reaction comes from it being an uninterrupted viewing experience. Streaming sites are set up in a way to ease and encourage the process of watching films in little segments. Watch for 5 minutes. Pause and make something to eat. Watch for another 10 minutes while you simultaneously browse your phone. Skip forward past a scene you get bored of. Then come back the next day and find the movie paused right where you left off so that you can begin this fragmented viewing process again.
Filmmakers work extremely hard to design each film as a continuous, cohesive experience that suspends your disbelief and envelops you in the world of the story. Breaking down the medium by stopping and starting it destroys a movie’s ability to take hold of you.
The cinema is so important because it forces you to view a film as it was intended to be viewed, as a single, uninterrupted experience.
Sometimes filmmakers want to test your patience and use more drawn out scenes to support their point of view of the story. Sitting through a movie from beginning to end, even if you don’t care for the film, will at least give you a complete idea of what the filmmaker was intending to do.
Plus, in today’s world where everything is so sped up and our attention spans have become shorter than ever before, I think turning off your phone and watching a complete film from start to finish is an important mental exercise we should all regularly perform.
SOUND
The sound that you hear in a cinema is far more immersive than that from a laptop, phone or TV. This is because of surround sound. Consumer display products, like a laptop, typically have a single speaker built into the device that emanates sound from one source or direction.
Cinemas have surround sound which uses multiple speakers in multiple positions to provide sound that is more immersive and which surrounds you 360 degrees.
Like high-end cinema projectors, the sound is also standardised. The global standard is Dolby Digital, which provides an audio mix with multiple channels, such as Dolby Digital 5.1. This provides three front channels sent to separate speakers: centre, left and right, which deliver clean dialogue and placement of on-screen sounds. Two surround channels, typically placed to the sides of and behind the audience, provide a fuller, 360 degree listening experience. A low frequency effects channel, with about a tenth of the bandwidth of the other channels, provides bass and makes up the final .1.
The cinema is therefore set up to provide a more captivating sonic experience that places you in the centre of the action and better draws you into the world of the movie.
THE ALTAR OF CINEMA
The final reason to go to the cinema is less of a practical one and more of a conceptual one, but is arguably the most important. This may seem a bit over the top, and hopefully no one takes offence, but I think a comparison can be made between cinema and religion in the way that they are presented.
In most religions it is of course possible to practice from home without interacting with others through meditation or prayer. However all major religions have physical spaces which bring communities together: temples, churches, mosques. Often these spaces are large, impressively built and feature significant iconography.
I think as humans we are drawn to spaces, and get some kind of greater, more significant experience from coming together as a collective in a space that is designed and devoted to that experience.
Standing in the queue for popcorn, buying tickets, sitting amongst a group of people, watching the trailers - it’s almost ritualistic and builds up a level of excitement and reverence for the film we’re about to watch. An image which is projected onto a massive screen has to be taken more seriously than one on a smartphone.
The issue with having a continuous never ending supply of content to stream at home on a laptop is that it diminishes the importance of the medium. It makes movies more mundane and everyday. Taking the time to visit the cinema builds anticipation and makes it more of an experience and an event.
CONCLUSION
So much effort goes into making movies as works of art, and I think they should be appreciated as such, with more gravity than, say, this YouTube video. They are different mediums. The smaller the screen becomes, the more watching a film turns into an individual experience rather than the group experience it was designed to be.
Going to the movies may be less practical than just bingeing the latest releases on a laptop, but the experience of going to a cinema elevates movies into the unique medium that they are.
How Denis Villeneuve Shoots A Film At 3 Budget Levels
Great directors are capable of creating and maintaining very deliberate cinematic tones. This is true of Denis Villeneuve.
INTRODUCTION
Great directors are capable of creating and maintaining very deliberate cinematic tones. This is true of Denis Villeneuve. His films are thrilling, dramatic and at times epic in both tone and scope, yet also provoke subtle political, ethical and philosophical questions that provide substance to action.
His career has wound a path from lower budget productions all the way to directing some of the largest blockbusters in the world.
In this video I’ll look at three of his films at three increasing budget levels, the low budget August 32nd on Earth, the medium budget Sicario and the high budget Dune to unveil the formation of his style and identity as a director.
AUGUST 32ND ON EARTH
The Canadian filmmaker’s interest in movies was piqued as a child. He began making short films when he was in high school, where he also developed an early love of science fiction. After leaving school he began studying science but later changed his focus to film when he moved to the University of Quebec.
After winning some awards he began working with the National Film Board of Canada where he established a working relationship with producer Roger Frappier who developed films by emerging directors.
The NFB funded his first 30 minute short film which showed a lot of promise. Frappier then produced Cosmos, a collection of six different shorts made by six young directors, which included Villeneuve as well as his future collaborator André Turpin. It was a critical success.
Following this Villeneuve wrote a screenplay with a contained story about a woman who is thrust into an existential crisis after surviving a car accident. Frappier came on board to produce the film under his production company Max Films.
André Turpin was brought on board to serve as the cinematographer on the film. This collaboration established a trait which would continue throughout his later movies - an openness to letting DPs bring their own photographic sensibilities to the project, while at the same time always firmly maintaining his own strong perspective on the script.
To August 32nd On Earth, Turpin brought his preference for strong, saturated 35mm Kodak colour, very soft side light, character focused framing and use of sharp lenses with a shallow depth of field. This was complemented by Villeneuve’s preferences for using subjective framing with lots of close ups and motivated, smooth camera moves from a tripod, dolly or Steadicam.
Although the film is a mature, cinematically grounded and more realistic production, it also has a dreamlike tone with moments of experimentation, some of which seems to have been inspired by his love of French New Wave Films, such as Breathless.
From the philosophical walk-and-talks, to the numerous jump cuts, and even the main character's short haircut, Breathless seems to be a clear influence. And if you think these are just coincidences, there’s even a shot featuring a poster of Jean Seberg, who starred in Breathless. While the influence of French New Wave filmmaking is strong, it’s not overpowering.
Villeneuve took parts of the style that worked effectively for a low budget film, such as a subjective focus on very few characters, and parts that suited his story, such as the experimental editing used to visualise the character’s post-accident haze, and combined them with his own sensibilities for realism, mature drama, cinematic control and isolated desert locations (which cropped up in much of his later work).
August 32nd established his strong voice as a director, his ability to maintain a consistent cinematic tone, openness to collaboration and his stylistic sensibilities.
He made his first low budget film by writing a simple story with few moving parts, using experimental cutting to avoid showing expensive set pieces like the car accident, and instead devoted his budget to creating a deliberate, cinematic camera language.
SICARIO ($30 Million)
August 32nd got into the Cannes Film Festival, premiering in the Un Certain Regard section, which he followed with a string of medium budget Canadian films.
In 2013 it was announced that Villeneuve would direct Sicario, an action thriller on the Mexican border. He was drawn to the philosophical concept of the border, an imaginary line which divides two extremes, as well as examining the idea of western influence and how it is exerted by first world nations.
At a medium-high $30 million budget it was a step up from his prior Canadian films in the $6 million range. However, the script involved many large, expensive set pieces and complex action sequences, which meant the budget, relative to what needed to be shot, wasn’t huge. While Villeneuve had written or co-written the screenplays for his early projects, Sicario was penned by Taylor Sheridan.
“The research I did after, as I was prepping the movie, just confirmed what was written in the script… I wanted to embrace Mexico. To see scenes from the victim’s point of view…try to create authenticity in front of the camera and not fall into cliches.”
To capture an authentic, naturalistic vision he turned to famed cinematographer Roger Deakins who he’d worked with before on Prisoners.
They storyboarded many of the sequences as a team during the process of location scouting in pre-production. This nailed down the photographic style they wanted and also allowed them to work quickly and effectively when shooting complex action sequences that needed to be pieced together.
This helped decrease shooting time in the tight schedule. Villeneuve’s clear vision for the shots he needed to get also saved time. For example, after shooting a master of a confrontation scene, Deakins asked if he should move the camera closer to get singles of each character. Villeneuve declined, knowing that he would use the master shot as a single long take in the edit…which he did. Not shooting extraneous close ups saved the production around three hours.
In his trademark style, Deakins shot many of the scenes from an Aerocrane jib arm with a Power Pod Classic remote head, a combination he’s used for over 20 years.
This allows him to quickly and easily move the camera on any axis, making it useful not only for smooth moves, but also for quickly repositioning the frame, allowing for a more organic working style and time saving setup.
“I mean the challenge of the photography of any film is sustaining the look and the atmosphere and not breaking out of that.”
One challenge when shooting in Villeneuve’s favourite location, the desert, was controlling the natural light. Deakins did this by working with assistant director Don Sparks to break down and schedule each exterior shot for a specific time when the angle of sunlight was right. If the sun went behind clouds they had a separate list of shots they could get, such as car interiors or close ups, which were easier to light.
Another way of exercising control of the lighting and the location was shooting certain interiors in a studio. To free up space for camera moves and to keep the light as motivated and as natural as possible he set up all his lights outside the set - 6 T12 fresnels pushing hard, sourcey light through windows and 65 2K space lights to provide ambience outside those windows.
He recorded on ArriRaw with the Alexa XT using Master Prime lenses - usually the 32, 35 and 40mm, occasionally pulling out the 27mm for wides.
“The overall approach to the film was this personal perspective. We’re either with Emily, or with Benicio, you know. So we took all that to say well we’ll do this whole night sequence from the perspective of the night vision system.”
To do this a special adapter was used on the Alexa to increase its sensitivity to light. He then lit the scene with a low power single source bounced from high up to mimic realistic moonlight and keep the audience immersed.
The much larger scope of Sicario was therefore pulled off on a $30 million budget by: carefully planning the complex action sequences in pre-production to save time and money, casting famous leads who drew audiences to the cinema, shooting some interiors in a studio for increased control and exteriors on location to wrap the audience in a feeling of authenticity, and controlling the score, sound design and pacing in the edit to provide a consistently thrilling tone.
DUNE ($165 Million)
After Sicario’s critical and commercial success Villeneuve turned to a project he’d dreamt about making since he was a teenager - Dune - based on the sci fi novel by Frank Herbert.
“I felt in love spontaneously with it…There’s something about the journey of the main character…This feeling of isolation. The way he was struggling with the burden of his heritage. Family heritage, genetic heritage, political heritage.”
With this thematic backing Villeneuve took on this sci fi story of epic proportions with a large studio budget of around $165 million. Since a large part of the undertaking was based on creating his imaginings of the world of Dune, he teamed up with his regular production designer Patrice Vermette and experienced cinematographer Greig Fraser. Together they worked with the extensive conceptual art and storyboards to bring the story to life. Since the way in which the sets were constructed would have an impact on the lighting, Fraser had many pre-production meetings with Vermette about light.
“The main character in the movie for me is nature. I wanted the movie to look as naturalistic and as real as possible. To do so we used most of the time natural light.”
On Arrakis buildings are constructed from rock with few openings to save its occupants from the oppressive heat. So instead of using direct light, the interior lighting is soft and bounced. To create this Fraser and his gaffer rigged Chroma-Q Studio Force II LED light strips to simulate the ambient softness of bounced sunlight. For close ups where they needed more punch he used LED Digital Sputnik DS6 fixtures.
To create depth Fraser constantly broke up spaces by using areas of light and shadow in different planes of the image. To bring out the incredible heat and harshness on the desert planet, Fraser used hard natural light from the sun which he cut into sections of sharp shadow in interesting ways.
Generally in cinematography, the larger a space is, the more expensive and labour-intensive it is to light. This sequence was no exception.
In a massive undertaking, Fraser’s grip and rigging team put up gigantic sections of fabric gobo over the set’s ceiling to block the sunlight creatively and give the space a sense of ominous depth. They then had a precise window, between 10:45 and 11:10am, when the angle of the sun would be perfect for shooting the scene.
They photographed Dune on large format with the Alexa LF and Mini LF on large format spherical Panavision H-series lenses to render the taller 1.43:1 Imax sequences and Panavision Ultra Vista 1.65x anamorphic lenses for the 2.39:1 shots.
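As a rough check of how those formats relate, here is a small illustrative calculation using the published Alexa LF open-gate sensor dimensions. Exact extraction areas and delivery crops vary by production, so treat the numbers as approximations rather than the production's actual pipeline.

```python
# Approximate aspect-ratio maths for a 1.65x anamorphic squeeze
# on the Alexa LF's open-gate sensor (published dimensions).
SENSOR_W_MM = 36.70
SENSOR_H_MM = 25.54
SQUEEZE = 1.65  # Ultra Vista anamorphic squeeze factor

native_ratio = SENSOR_W_MM / SENSOR_H_MM  # close to IMAX's taller 1.43:1
desqueezed = native_ratio * SQUEEZE       # close to the 2.39:1 'scope frame

print(round(native_ratio, 2))  # 1.44
print(round(desqueezed, 2))    # 2.37
```

The small gap between 2.37 and the 2.39:1 delivery standard is absorbed by a slight crop, which is why a 1.65x squeeze pairs so neatly with a large format sensor of roughly these proportions.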
“I wanted the sky to be a vivid white. A very harsh sky. To bring kind of a violence to the desert - a harshness to it.”
To do this Fraser got his colourist Dave Cole to create a LUT for the camera in pre-production that pulled out the blue in the image and rolled off the overexposure.
The final finishing of the movie in the grade involved an interesting process. Fraser felt the look of the film should be more on the digital side, with the slightest hint of film.
To do this they took the graded digital files and did a laser-recording-film-out, recording the digital image onto Kodak 5254 print film. This film was then scanned and converted back to digital files. The result was a final file with just a tiny hint of film grain and subtly organic film artefacts.
When it came to sound Villeneuve brought composer Hans Zimmer into the room with the sound design team, so that the two were married together to create the ultimate immersive experience.
Villeneuve successfully grounded Dune’s fantastical world with his trademark realism and used the massive budget to fund a long shoot with a big crew, enormous technical setups and set construction, access to any gear the team needed, and extensive VFX post-production work.
CONCLUSION
Villeneuve’s films are strung together by a thrilling subject matter with political and philosophical themes told in a grounded, realist visual style…and, well, the desert. He’s drawn to scripts that both immerse the audience in a riveting world and pose subtle thematic questions.
Throughout his career he has worked in a collaborative way with different in demand DPs who each imparted touches of their own style on the stories. However, his films are always very much his own and supported by his vision.
Villeneuve’s ability to control the tone of his films using every filmmaking element, from the script to the camera work, the edit and the music, is what has elevated his work to its critical and commercial heights.
How Movies Are Shot On Film In The Digital Era
In this video I thought I’d do a bit of a deep dive into why some productions still choose to shoot on film over using digital cameras, and outline the whole process of how film is shot, from pre-production and production all the way to its post-production workflow.
INTRODUCTION
Although there was a time when many thought that shooting on motion picture film stock would quickly die out after the launch of high quality digital cinema cameras like the Arri Alexa in 2010, film still persists. In fact in recent years it's seen a bit of a resurgence.
In this video I thought I’d do a bit of a deep dive into the topic. So sit tight while I go over why some productions still choose to shoot on film over using digital cameras, and outline the whole process of how film is shot, from pre-production and production all the way to its post-production workflow.
WHY SHOOT ON FILM?
Before going over how film is shot I think it’s important to understand why it’s shot.
On the surface digital has many apparent advantages. It’s often cheaper. It’s possible to roll for much longer. There’s less room for exposure or development errors. You can view the rushes immediately. The list goes on.
When it comes to listing the pros of shooting on film, it usually comes down to two factors: the look and the way of working. My favourite cinematographer, Sayombhu Mukdeeprom, sums it up simply: “It’s a better practical experience and aesthetic choice.”
Nowadays it is possible to recreate most of the colour and texture of film with digital footage in the colour grade, so that it’s perhaps a 95% match (or whatever number you want to use). However, I’ve yet to see the highlights and natural sharpness of a digital image manipulated in a way that is 100% indistinguishable from film, particularly 16mm. And if you have the budget to shoot on film, and that’s the look you’re after, why shoot digitally and then spend a load of effort in post trying to achieve a look and texture that film gives you out of the box?
Having spent time on both film and digital sets, I can attest that there is a marked difference in the vibe between them. Because you’re shooting on an expensive, finite commodity, everyone on set is far more focused when the film starts rolling through the camera.
The build up to shooting on film is also more deliberate. Shots are carefully planned, movements and performances are rehearsed, and only a limited number of takes are shot. This contrasts with the so-called ‘spray and pray’ method that sometimes happens when shooting digitally.
So for filmmakers that value both the aesthetic and more disciplined on set manner of working that film provides - shooting in 16mm or 35mm remains a viable choice.
PRE-PRODUCTION
Now that you’ve done the maths on the viability of the costs of motion picture film and chosen it as your working medium, how is it practically shot?
It all starts in pre-production.
Before arriving on set the director of photography will either conduct camera tests with various film stocks, or use their prior shooting experience to select a stock, or a few stocks, that are correct for the project. Today this means choosing from Kodak’s range, as Kodak is the only remaining manufacturer of motion picture film stock in the world. The cinematographer will base this decision on three factors: the ASA (the sensitivity of the stock), the colour balance (daylight or tungsten) and the look.
They’ll look at how each stock captures colour (each is subtly different) and at the amount of grain and texture it has. Stocks with a lower ASA, like 50D, will have very fine grain, while higher ASA stocks, like 500T, will have more noticeable texture. Daylight stocks, rated around 5500K, have colour that is balanced to look normal in daylight. Tungsten stocks, around 3200K, have colour that is balanced to look normal under artificial tungsten light.
It is possible to shoot tungsten stocks in daylight and either add a warm 85 filter to correct the colour temperature, or shoot without a filter and correct the colour balance in the grade in post production.
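This kind of colour-temperature correction can be reasoned about with simple mired arithmetic. Here is a hedged sketch of the maths; the filter shift values are assumptions taken from standard Wratten filter tables (+131 mired for an 85B, roughly +112 for a plain 85), not figures from this article:

```python
# Colour temperature corrections are usually calculated in mireds
# (micro reciprocal degrees): mired = 1,000,000 / kelvin.
def apply_mired_shift(source_kelvin: float, mired_shift: float) -> float:
    """Return the colour temperature seen through a filter of the given mired shift."""
    mireds = 1_000_000 / source_kelvin
    return 1_000_000 / (mireds + mired_shift)

# Tungsten-balanced (3200K) stock shot in 5500K daylight with an 85B filter
# (+131 mired is an assumed, standard table value):
corrected = apply_mired_shift(5500, 131)
print(round(corrected))  # → 3197, i.e. close to the stock's 3200K balance
```

Working in mireds rather than kelvin is the standard convention because a filter shifts colour temperature by a fixed mired amount regardless of the source, whereas the equivalent kelvin shift changes with the starting temperature.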
Some cinematographers may choose multiple stocks, for example 250D for exteriors and 500T for interiors and night scenes, while others may choose to photograph an entire project with a single stock. It’s on them to estimate how many rolls of film stock will be needed, which the production team will then go about sourcing.
Short form projects like commercials will usually order all the film upfront, while longer feature projects will often keep ordering new film as they go. Excess film stock can often be returned to the supplier as long as it hasn’t been opened and loaded into a magazine. However it’s still best practice to estimate the required amount of film as accurately as possible: you don’t want to over-order and lose money on stock that can’t be returned, but you also never want to run out and be unable to shoot. It’s the job of the camera team to determine how much stock needs to be ordered and pass that information on to production, who will order it.
PRODUCTION
With film stock in hand, or in the back of the camera truck, it’s now time to load it and start shooting. This is either done by a dedicated camera loader, especially when dealing with multiple cameras, or done by the 2nd assistant camera.
Since film captures an image by being exposed to light for a fraction of a second, it’s of the utmost importance that the raw stock is never exposed to any light before that. If a film can pops open in the light for even a second, the film inside will no longer be usable. That’s a good chunk of money down the drain.
The 2nd uses a light-free film changing tent and loads the stock from the film can into the camera magazine completely in the dark. Once the film is in the magazine and completely sealed, the assistant labels the magazine using tape: red tape for tungsten film or blue tape for daylight film. With a Sharpie they’ll write down information like the roll number, what stock it is, the code that identifies the stock, how many feet of film are usable and any necessary developing instructions. The 2nd then hands the mag to the focus puller, who laces it onto the camera, ready to shoot.
Before rolling, DPs meter how much light there is with a light meter and set their exposure.
Nowadays they often carry their own digital stills camera and double check their exposure with it. For example, if shooting 500 ASA film with a 180 degree shutter they set the digital camera ISO to 500, the shutter to 1/50 and manipulate their aperture until they find an exposure they are happy with.
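This stills-camera trick relies on the standard relationship between frame rate, shutter angle and exposure time. A minimal sketch of the arithmetic, assuming the usual sync-sound frame rates of 24 or 25 fps:

```python
# Exposure time per frame = 1 / (fps * 360 / shutter_angle).
# A 180-degree shutter therefore exposes each frame for half the frame interval.
def exposure_time(fps: float, shutter_angle: float) -> float:
    """Return the exposure time in seconds for one frame."""
    return 1.0 / (fps * (360.0 / shutter_angle))

t = exposure_time(24, 180)
print(f"1/{round(1 / t)} s")  # → "1/48 s"
```

At 24 fps with a 180 degree shutter this works out to 1/48 of a second, which is why 1/50 on a stills camera is close enough for checking exposure; at 25 fps the match is exact.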
The aperture of the film camera is then set and any necessary ND filters added or subtracted to cut down or increase the amount of light that enters the camera.
After each scene or shot is completed the assistant director will announce ‘check the gate’. The focus puller does this by taking off the lens and examining the film gate to ensure it is clean. Any dirt or hairs on the gate means the shot will be ruined. If the gate is clean the 1st AC announces ‘good gate’ and the production moves on to the next shot. It’s the job of the 2nd AC to consistently check the film counter to know when the magazine will run out.
Once all the film in a mag has been shot it is removed and carefully unloaded in the tent by the 2nd. They put it back in its can, seal it securely with tape and place the tape label from the magazine onto the can.
At the end of the day they will make a camera report, stating all the rolls that were shot along with all the necessary information. Of the three copies of the report, one goes with the raw footage to the film lab to be processed, one goes to the production team for their records and one backup copy is kept by the 2nd.
POST-PRODUCTION
Once the film arrives at a lab, such as Cinelab in the UK or Fotokem in the US, the first step is to develop it. The film is passed through a combination of chemicals, which sets the image on the film as a negative. Once developed, the film can be handled in light without concern.
In order to edit the footage it needs to be converted to a digital format so that it can be worked with in the same way as files from a digital camera. To do this the film is either scanned or goes through a telecine.
For a telecine, as the film passes through a machine it is captured and recorded as a video file in real time - usually in HD. A scan is slower, more expensive and records much higher fidelity video files. The most common modern industry scanner is the Scanity HDR.
Each roll of film is put onto the spools of the scanner and motors run the film past a gate. At the gate each individual frame that was shot is scanned at either 2K or 4K resolution and saved as a digital DPX file. The Scanity is capable of scanning up to 15 individual frames every second.
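At that scanning rate, working out how long a roll takes to scan is simple arithmetic. A rough sketch, assuming standard 4-perf 35mm (which runs at 16 frames per foot, a figure not stated in this article):

```python
# 4-perf 35mm film contains 16 frames per foot (an assumed standard figure).
def scan_minutes(roll_feet: int, frames_per_foot: int = 16, scan_fps: float = 15.0) -> float:
    """Return the approximate minutes needed to scan one roll frame by frame."""
    total_frames = roll_feet * frames_per_foot
    return total_frames / scan_fps / 60

# A standard 400 ft roll (around 4.4 minutes of screen time at 24 fps):
print(round(scan_minutes(400), 1))  # → 7.1
```

So a scan runs somewhat slower than real time: about 7 minutes of machine time to digitise a roll that holds under four and a half minutes of footage.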
These DPX files are uncompressed and lossless with very high dynamic range. This means they are similar to RAW files that are captured by some high end cinema cameras like ArriRaw or Redcode Raw and retain a huge amount of colour information.
Due to the high quality of the files they are fairly large. About 24 minutes of footage can be stored on 1 terabyte.
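Exactly how many minutes fit on a terabyte depends on the scan resolution, bit depth and how the pixels are packed, which is presumably why quoted figures vary. As a back-of-envelope sketch under assumed parameters (10-bit RGB DPX packing each pixel into a 32-bit word, i.e. 4 bytes per pixel, scanned footage at 24 fps):

```python
# Uncompressed DPX: every frame occupies width * height * bytes_per_pixel on disk.
def minutes_per_terabyte(width: int, height: int, fps: int = 24, bytes_per_pixel: int = 4) -> float:
    """Return roughly how many minutes of footage fit in one terabyte (10^12 bytes)."""
    frame_bytes = width * height * bytes_per_pixel
    return 1e12 / (frame_bytes * fps) / 60

# 2K full-aperture scan (2048 x 1556):
print(round(minutes_per_terabyte(2048, 1556)))  # → 54
# 4K full-aperture scan (4096 x 3112):
print(round(minutes_per_terabyte(4096, 3112)))  # → 14
```

A ballpark figure like the roughly 24 minutes quoted above lands between these two results, depending on the exact resolution and bit depth of the scan.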
These files are then worked with in a digital post production workflow similar to how digital RAW files would be worked with. Once edited, those lossless files will be graded by a colourist, who will find the desired levels of saturation and contrast and correct any colour balances that are off.
Finally, the finished, graded footage along with the final sound mix will be converted into a DCP, a digital cinema package, basically a hard drive, which is used to digitally project the final film in cinemas.
Occasionally a film-out is done where the final DPX files are converted back to a film print, which is projected in cinemas the old school way - with light.
Why High Resolution Isn't Always A Good Thing
Let’s talk a bit about what resolution actually is and why I think high resolution isn’t always a good thing.
INTRODUCTION
What is it with this recent obsession with high resolution images? From gaming, to smartphone cameras, to what we talk about on this channel, films and cinematography. Why is the highest peak of photography associated with high resolution?
If you’ve ever worked professionally with cameras, the first thing that people like to ask is: does it shoot 4K?
Maybe part of this is based on our continuous pursuit of technological advancement. We tend to think that newer, bigger, sharper, faster, is always better. Well I think this isn’t always true. Particularly when it comes to art. So let’s talk a bit about what resolution actually is and why I think high resolution isn’t always a good thing.
WHAT IS RESOLUTION?
Some quick background on resolution. It refers to the amount of detail that an image holds. This can be measured in different ways but in the world of video and digital cinematography it is measured in pixels - tiny elements which record light.
Each pixel records a measurement of light and converts that data to a colour. I like to think of pixels like bricks in a building. With each brick painted in a different colour.
When you have a small wall made up of very few bricks, the image will appear more blocky or low resolution, whereas if you have a massive skyscraper with a ton of bricks, an image will appear clearer with greater detail.
If you set this YouTube video to 240p it will be low res and blocky. If you set it to 1080p it will be higher res with greater detail. Most digital cinema cameras use Bayer sensors, which capture light through a pattern of red, green and blue pixels. Thousands of these red, green and blue pixels are combined to create a representation of reality.
But enough with all the technical talk. Why does resolution matter? Surely the more detail that a camera can capture the better the image is?
Well, this is where I disagree. Just because an image can be recorded in 8K and resolve extreme detail it doesn’t mean that it is always appropriate to do so.
WHY HIGH RESOLUTION ISN’T ALWAYS A GOOD THING
In the world of art, painting photo-realistic images that are super sharp is one method of expression. You also get other painters that paint with broader, more abstract strokes that express feeling rather than only focusing on creating the highest fidelity image that perfectly represents reality.
Both are valid forms of expression.
In the same way, some filmmakers may prefer to tell their stories with less refined brushstrokes using a medium with a lower resolution that captures less detail like 1080p or even 16mm film as opposed to an 8K digital camera.
High or low resolution should be seen as a tool rather than something which is mandated. An image being captured in a higher resolution doesn’t make it inherently better. Resolving huge amounts of detail through high res capture means that things like skin will reveal every pore and blemish. Whereas resolving less detail gets rid of these unflattering flaws in a natural way.
It’s like when you meet someone in real life. Your eyes don’t fixate on the fine details of a person’s skin. They focus on the face as a whole. Photographing faces at a slightly lower resolution removes the focus on micro details. I think that the way in which cinema is viewed now also makes super high resolution images a little bit off-putting.
The way in which films are consumed by audiences is undeniably changing. We’ve gone from sitting way back in a cinema viewing projections on a large screen to watching content up close on laptops or phones. The larger the image projected and the further away you are from it, the more subtle the level of sharpness becomes. So when you watch Netflix on your laptop a few inches from your face the sharpness of the image will feel magnified.
I think the close viewing of high resolution video on high resolution screens results in images which are overly sharp and a bit less…romantic. Perhaps this is just personal preference but, aesthetically, I find that super crisp digital cinematography can come off as feeling more video-y. More like broadcast TV on a 4K screen.
And actually, I don’t think I’m alone in this. Not amongst cinematographers anyway.
I’ve camera assisted on loads of shoots, and I’ve witnessed an overwhelming percentage of those DOPs pair high resolution digital cinema cameras with either diffusion filters, like an ⅛ or a ¼ Black Pro Mist or a Glimmer Glass, or pair them with vintage lenses. This is to take some of the sharpness and edge off of the high resolution digital sensor. Too much sharpness just feels artificial and unnatural.
Far fewer cinematographers pair high resolution digital cameras with modern high resolution lenses, like Leica Summilux-Cs, Aluras or Master Primes, without any filtration. And most of the time this only happens where the client or studio demands that the product must be very sharp.
I think this pursuit of maximum resolution and clarity follows the same pursuit of perfectly sanitised, idealised images which are created for many contemporary mainstream Hollywood movies.
For example, my filmmaker friend and I always joke about the fact that most featured extras in the background, or actors with smaller roles, in Hollywood films these days are cast with super good looking young models instead of the average, everyday folk who would be present in older movies. Like, come on, is this really what experienced scientists look like?
In the same way I think an overly sharp presentation of reality creates a cinematic world that is, photographically too perfect.
Finally, an important consideration when choosing gear is not only the creative or photographic look it has but also its economic and practical implications. This isn’t really a consideration for high budget films, but for lower budget projects higher resolution cameras are more expensive to shoot with: more storage space on hard drives is required and more processing power is needed to edit and deal with that footage in post production.
CONCLUSION
Now I’m not saying that all films should be shot on 8mm film or at 720p. I think that for most digital projects, shooting and finishing them in a standard 1080p or 2K format is probably enough resolution to yield a sharp enough, but not overly sharp result.
However my main point is that sometimes 2K doesn’t feel right. Sometimes 8K is correct for capturing the project. Sometimes a 2K scan of 16mm film is correct. Some films should be finished in 4K.
Certain stories may benefit from capturing extreme details, giving images a hyper sharp, artificial digital look, or benefit from the extra pixels needed for intensive visual effects work. The choice of resolution should always be a practical and artistic choice that is motivated by the story and not just a default decision that is mandated or enforced.
Just because technology can do something, doesn’t mean it’s always right.
Cinematography Style: Maryse Alberti
In today’s episode I’ll give some background to Alberti’s career, go over her philosophy on cinematography and the gear that she has used in order to translate her vision to the screen.
INTRODUCTION
Maryse Alberti may not be as well known by mainstream audiences as some of the other cinematographers I’ve featured in this series, but the strength of her career as a cinematographer speaks for itself.
She has a prolific track record in both documentary and fiction filmmaking, often choosing films that deal with real subject matter, true to life characters and situations that are interesting and elevated but grounded in reality.
In today’s episode I’ll give some background to Alberti’s career, go over her philosophy on cinematography and the gear that she has used in order to translate her vision to the screen.
BACKGROUND
“I grew up in the south of France. I didn’t have a TV, I didn’t see a TV until I was 12 years old…I just fell in love with movies when I came to the States because I stayed with people who had a TV in every room.”
After moving to the US in 1973, Alberti developed a career in capturing images, starting out by working as a still photographer in a field which isn't exactly the traditional roadmap to a career as a feature film DP.
“I ended up on the x-rated movie set where I was the still photographer…When I started to work on x-rated movies I started to meet people. The crews in New York were young people out of Columbia or NYU. It was kind of the training ground, one of the training grounds…Since I didn’t do film school that was kind of my film school.”
After starting out making film industry contacts in the X-rated world she then got involved in shooting documentaries. Her break as a cinematographer came when she shot H-2 Worker, which won the Jury Prize for best documentary and the award for best cinematography at Sundance and launched her career as a DOP.
Throughout the years she has worked as cinematographer in both the documentary and the feature world, for many esteemed directors such as: Todd Haynes, Stephanie Black, Martin Scorsese, Ryan Coogler and Ron Howard.
PHILOSOPHY
Alberti’s career mix of documentary and fiction work has resulted in a style of working based on realism and cinema vérité.
Cinema vérité or observational filmmaking is a documentary style which attempts to capture reality in a truthful manner, by observing reality and trying to draw attention away from the presence of the camera. Although paradoxically, some argue that the very presence of a camera alters how reality is captured.
Either way, to blend into the background as much as possible, this style is often characterised by a minimal gear footprint. The very act of using less gear will impact the look of how a film is captured.
“From a cinematographer’s point of view you learn to work with very simple tools and very few people…Docs is another way of working. It’s more instinctual, it’s less intellectual.”
However, Alberti still recommends thinking about the subject of the documentary and basing the photography on the story.
Therefore, although I’d argue that a portion of her photography can be characterised by a vérité look, her style does of course change depending on the nature of the story.
A key difference between her work on documentaries and features comes from the level of intentionality. Long form work is more of an intellectual process with lots of prior reflection on creating a visual language, which is then executed by exercising and maintaining ultimate photographic control. Whereas in documentaries there is more scope to embrace improvisation and capture moments as they play out in real time.
For most documentary interviews Alberti will arrive at a location without seeing it beforehand, whereas when working on features she’ll usually have up to 8 weeks of prep time to scout locations and discuss production design with the director.
An example of how her taste for a natural, vérité look transfers over to her fiction work can be found in The Wrestler.
“I mean the whole film has a very naturalistic look. When I went to look at locations and went to look at a wrestling match I tried to make it work for the drama of the film. To keep it as real as possible. So that you felt you were in a real place.”
She did this by using natural looking lighting and motivated, handheld camera movement - skills which she had developed during her work in documentaries.
GEAR
“In general do I prefer film?...It depends on the story...Some stories are best told in the digital world. In documentary I think it’s a question of economics.”
When it comes to selecting gear for a project, she of course considers which equipment can achieve the desired look, but, perhaps equally as important, is the practical side of the gear selection.
When she started her career, shooting on film was the only viable option for attaining a decent quality image. A lot of her early documentary work was shot in 16mm, a cheaper medium than 35mm, which suited the lower budgets and higher shooting ratios of documentary. She mainly shot with Aaton cameras, such as the lightweight Aaton LTR 54. Even though 16mm was cheaper, it was still a costly process to photograph a documentary.
“When we did H-2 Worker…we went to Jamaica with 5 rolls of film because we didn’t have any money. You had to be very careful of the questions you asked and when you rolled.”
At approximately 11 minutes of run time per roll, this meant they had less than an hour of footage which they could shoot. Compare that with today, when a single interview may run longer than an hour.
Working digitally now allows filmmakers to be far more free about when they roll the camera and allows directors to have a conversation in interviews rather than asking very specific questions in an economical way. Alberti therefore prefers the practicality of digital over film when shooting documentary: its economic benefits, ability to roll for extended periods and smaller size outweigh the look of film.
She now uses cameras such as the Canon C300, the Sony Venice or variations of the Alexa for feature films, with different lenses like Hawk V-Lite anamorphics, Master Primes, Cooke S4s, or Angenieux zooms.
Although it is dependent on the story and subject matter of the project, much of her work has featured extensive handheld camera movement which is motivated by the movement of the characters. Perhaps this is due to directors wanting to work with her for her experience in producing quality handheld work in a vérité style.
She has operated the camera herself, but for larger feature films which require a more intensive focus on lighting, she has delegated the handheld work to camera operators. As a lot of the movement tracks the movement of the characters, it can make scenes feel a bit more ‘real’, like the actions of the actors are being observed rather than deliberately performed in multiple takes.
Alberti’s lighting does occasionally differ between projects depending on the type of story, but a lot of her lighting tries to be as naturalistic as possible, so naturalistic that the audience doesn’t even notice that the space is lit.
She does this by only supplementing the sources of light that are already present in the location. For example if sunlight is already coming through a window she may place a film light, like an HMI, outside that window to mimic the same direction and quality of the natural light. This is particularly necessary in fiction where consistent lighting conditions are required throughout a scene - which may be shot over the course of half a day.
Where possible she’ll place lights out of sight so that they can shoot a scene 360 degrees without being limited by lighting gear. She also uses textiles and diffusion gels to soften the quality of the natural or artificial light.
For interiors she’ll sometimes place practical lights in a location or use additional lights overhead like Mac 2000s to give the room a bright enough exposure to shoot in or to balance the brightness of different levels of illumination.
CONCLUSION
If I had to sum up her style, I’d say that Maryse Alberti is a cinematographer whose work in fiction is an extension of her documentary work.
Many of the characteristics of cinema vérité, such as a handheld camera and naturalistic lighting are carried over onto the feature films which she shoots but are executed on long form jobs in a more considered, deliberate and controlled manner than her more improvisational documentary camera work.
Her ability to capture a realistic feeling portrait of reality has contributed to her being an incredibly influential DP in both the world of fiction and documentary alike.
4 Camera Moves Every Filmmaker Needs To Know
Each choice made by cinematographers or directors should be a deliberate one that is responsible for visually communicating information or an emotional tone. In this video I’ll look at four common types of camera movement, go over how they are technically achieved, with what gear, and uncover how each can be used to communicate different emotional tones.
INTRODUCTION
The way in which the camera moves isn’t an arbitrary choice made by filmmakers, or, at least, it shouldn’t be. Each choice made by cinematographers or directors should be a deliberate one that is responsible for visually communicating information or an emotional tone.
From early on in cinema, people worked out that the camera presents a point of view and that moving the position of the camera in different ways during a shot can have different effects on how that shot is perceived by audiences. The way in which information on screen is presented, and in what order that information is presented, can also be controlled by the motion of the camera.
So today I thought I’d look at four common types of camera movement, go over how they are technically achieved, and with what gear, and uncover how each can be used to communicate different emotional tones.
PAN & TILT
Let’s start with the most basic and easiest to achieve camera movement - the pan and tilt.
Panning directs the angle of the camera on a horizontal axis. From right to left or from left to right. Tilting the camera moves it on a vertical axis, angling it upwards or downwards.
These movements are most often done on a tripod head, which can pan or tilt the camera in a smooth motion without shake. However, other types of gear can be used to pan or tilt, such as: a stabilised remote head like a Libra, whipping a gimbal up or down or controlling its motion remotely, using the motion of a Steadicam, or even panning or tilting the camera handheld.
Both a pan and a tilt are usually used in combination to achieve what I’d call motivated camera movement. This is where the camera’s motion mimics that of the motion on screen.
For example, if a character moves around during a scene the operator may pan or tilt with them so that they remain the focus and do not leave the shot or ‘break frame’ as we say. By following the motion, the camera takes on a more subjective visual language that is more focused on a specific individual and their actions. As opposed to a wide locked off frame that doesn’t move and is more observational and objective.
The easiest way of quickly communicating which character in the story is most important in a scene is to follow their movement by panning or tilting with them.
Panning and tilting can also be used to reveal important information to the audience. For example the camera may start on a character and then tilt down onto an object. Tilting down to this object is a way of directing the audience’s eye to an important detail or piece of information in the story and saying ‘Look at this. Pay attention to it. It will be important later.’
The speed at which the camera tilts or pans will also create different tones.
A slow pan over a landscape may be used to build a sense of anticipation or gradually reveal the magnitude of the space. Whereas a quick whip pan makes a shot feel much more dynamic and is used to inject energy into a scene in a way that is more stylised.
PUSH IN & PULL OUT
A push-in physically moves the camera closer to its subject, usually at a gradual speed. The opposite is a pull-out where the camera steadily moves further away from its subject. So for push ins the shot size will go from wider to tighter and for pull outs the shot size will go from tighter to wider.
Although these moves can be done handheld, they are more commonly done with rigs that keep the motion smooth, such as a dolly, a slider, a Technocrane or a Steadicam.
The slower and smoother the movement, the more natural and subtle the emotional effect. The faster the motion, the more abrupt, stylised and impactful it becomes.
For me, slowly pushing in on a character, especially during an important moment where we move into a character’s close up, makes me get inside that character's head. The camera is literally drawing you into their world. This movement makes you concentrate more on what the character is talking or thinking about. Often this move is used when characters are dealing with some kind of internal conflict or during a pivotal moment in the story.
The pull-out works in an almost inverse way. Instead of pushing in closer to the mind of the character, we pull away from them and become increasingly detached. This move can therefore be done to isolate a character on screen and introduce a sense of loneliness.
Another function this move has is to reveal a space or information. Starting in a close up and then pulling backwards will slowly reveal more of the location to the audience, better contextualising the character within their space.
Since the push in and pull out are not motivated by the movement of the character, it is more of a stylistic choice and is therefore in danger of losing its impact if it is overused or continuously done for every close up.
TRACK
A tracking shot kind of speaks for itself. It’s what I’d call a move where the camera physically moves through a space from a start to an end position - often tracking the movement of its subject.
Usually this is done with a dolly by laying a line of tracks and then pushing the dolly along those tracks on a straight axis, sometimes maintaining the same distance between a subject and the camera. Track positioning can also be more diagonal, where the camera tracks sideways but also gradually closer or further away from its subject.
This move can be done on a Steadicam, especially for sequences composed of longer takes with different axes of movement, or where the terrain changes gradient and placing tracks becomes cumbersome. Tracking shots done from directly behind or in front of a character are also commonly done with a Steadicam or without tracks on a dolly on a smooth, even floor.
Like with panning and tilting, this movement can be motivated, based on the movement of the characters.
For example, characters walking from right to left can be followed by tracking in the same direction. Again, this increases subjectivity, shows you what the main focus of the shot is and puts you in the literal footsteps of the characters.
Sometimes filmmakers use a counter track, where the dolly moves in the opposite direction to the subject. Usually this is done in a swift move to increase the energy and tempo of a shot. As the camera moves against the motion of the subject, it decreases the length of the take so is usually inserted as a quick cut within a sequence. For this reason, cars are often shot with counter moves from a Russian Arm, which increases the feeling of motion and speed.
Tracking through a space alongside a character in a longer take also gradually expands the scope of a location and introduces the audience to a space as we are exposed to new backgrounds as the camera moves.
BOOM
Booming refers to moving the camera up or down on a vertical axis.
Boom shots are usually associated with camera cranes which are used to lift or drop a camera using an arm. But for more limited moves they are also commonly done with a dolly, which has a smaller hydraulic arm. These two methods are popular for their stability and smoothness of movement and easy control. Some other gear used for boom shots may include a drone, a spidercam or rig using a pulley system, or a Towercam.
Booming up can be used to reveal more information using a single shot. For example, it could boom from an object, point A, up to a character, point B. This is a way of pointing out to the audience that the object at point A may be important or hold significance to the story. It creates a link between the two points.
Even in the case of the cliche example of characters driving off into the sunset on an open road, point A starts on the characters in a car which then booms up to point B, the open road. This move therefore creates a link between the characters and the open road, which may represent possibilities, freedom, or hope.
As with the push in, booming up and down is often not motivated by movement and should be used sparingly, since overuse minimises its impact.
Also, in the same way as a tracking shot, booming can reveal more of a landscape or setting and is therefore often used to uncover the space as either an establishing shot at the beginning of the scene or as a closing shot at the end of a scene.
CONCLUSION
There we have it. Four types of basic moves which can be used to control how information in a movie is presented.
When interpreting and coming up with camera movement, context matters. The same move made to capture different stories in different contexts, at a different pace, in a different manner or with different gear may change the effect and meaning that move has on an audience.
So, when you’re planning your shots ask yourself these questions: What is the focus of the scene? What information do we need to present? In what order? Whose perspective is the story being told from? Should the movement be motivated? Or does the camera need to move at all?
These four moves are also just the tip of the iceberg. Some directors like combining some, or even all, of the above moves into a single shot if it serves the telling of the story. Because, really, how the camera in a film should move is only limited by budget, the three dimensions and our imagination.
What The Metaverse Means For The Future Of Cinema
In this video I’m going to do some speculating and take you through what the metaverse is and the potential impact I think it may have on the future of cinema and on visual communication.
INTRODUCTION
Visual communication as an industry has rapidly expanded over the past few decades. This is partly due to the internet providing more platforms for visual art to be viewed and interacted with, as well as increasing access to technology tearing down obstacles in the way of producing art.
Just over 10 years ago, making a documentary required large, expensive, clunky broadcast cameras, costly film stock, or low-fidelity DV cameras. Once it was eventually made you then had to find a TV broadcaster willing to screen it, and if you were lucky enough to sell it, you’d need even more luck just to break even on your production costs.
Now, people can pick up a consumer mirrorless camera, or even a phone, and get an amazing image right out of the box, then distribute the final film any number of ways online.
But what does this have to do with the metaverse?
Well, in a similar way that inexpensive digital cameras and the internet transformed the possibilities of documentary filmmaking, I think the metaverse could also have an enormous effect on how films and visual media are made, distributed and interacted with in the future.
In this video I’m going to do some speculating and take you through what the metaverse is and the potential impact I think it may have on the future of cinema and on visual communication.
WHAT IS THE METAVERSE?
On the 28th of October 2021, Facebook announced their intention to devote a huge amount of resources towards creating their version of the metaverse, signaling their intent by even renaming their holding company Meta.
Whether this bodes well or poorly for the future, one of the biggest companies in the world throwing all their chips into the metaverse pot is significant.
So what exactly is the metaverse?
“You’re going to be able to bring things from the physical world into the metaverse. Almost any type of media that can be represented digitally: photos, video, art, music, movies, books, games, you name it.” - Mark Zuckerberg
The metaverse is a space created on the internet which uses 3-D virtual environments. While it is still in its infancy, the metaverse involves integration between virtual and physical spaces. People interacting in this environment will be able to create an avatar or character that represents them, place that avatar in a virtual space, manipulate it with hardware like VR tools and effectively live a life in this space that includes consuming a variety of art forms and visual entertainment - including films.
The metaverse that Meta is currently developing will likely use VR hardware, such as the Oculus headset (owned by...you guessed it...Meta), to allow users to explore the online space and interact with user generated content.
There’s definitely the possibility for filmmaking to exist and be incorporated into this future online world. But also, I think the core skill of filmmaking, which is visual communication, is already being used in developing the metaverse, whether through virtual reality, augmented reality or gaming.
WHAT THE METAVERSE MEANS FOR FILMMAKING
So how will the metaverse change the way that movies are produced?
To understand this I think we need to know the four main categories that largely determine the cost of producing a film: sets, actors, crew and gear. The metaverse holds the potential to remove or reshape all of these cost barriers.
Let’s start with sets. In the Metaverse, with a little bit of programming, you can create whatever location you want. In real life you may not be able to block off three avenues in New York to shoot your student film, but in the Metaverse any location you can imagine could become a reality.
Secondly, actors could be replaced with avatars representing any form. Or, actors could still be captured in real life and then placed within a 3-D virtual environment.
Third, crew. The only crew you’ll need are people to capture any live action footage and a team of programmers to do the post production digital grunt work. The hundreds of on set crew members needed for larger productions will be greatly reduced since, well...there won’t be sets.
And fourth, gear. Far more minimal camera and lighting gear will be needed to capture live action. Rather than lighting an entire space, now all that needs to be lit is a character and a green screen. Expensive gear that was once used for the bulk of capturing the footage will now be replaced by computers.
So it may seem that all of these prohibitive boundaries that there once were to make a movie will now dissolve and anyone will be able to produce a blockbuster from the comfort of their own home.
I think this yields interesting opportunities. Just as cheaper digital cameras, editing software and an increase in distribution platforms had an impact on how documentaries are made, I think this jump in metaverse technology has the potential to yield similar possibilities in visual communication.
However, I also can’t help but be a bit sceptical.
While certain live action aspects of filmmaking, such as sets and actors, may move into the virtual space, it won’t exactly be cheap to make movies. I think celebrity actors will continue to be in demand for their ability to attract an audience and will continue to be paid premium rates whether their performance is in the real or virtual world.
I also think that many of the costs saved on crew, gear and locations will just be re-allocated to hiring a large team of programmers and designers to create the virtual movie - similar to how large budget games are produced.
In the end, when it comes to mass entertainment I still think the same players will dominate. The people who are going to be able to produce the highest-end films will still be the production companies with the largest budget, greatest resources and marketing power.
To remain on the cutting edge of technology, to employ the most talented filmmakers or artists and to promote the end product will always take a lot of money - whether in the real world or the metaverse.
While I think the metaverse and virtual reality filmmaking has many exciting possibilities and may change the landscape of independent filmmaking through creative user generated content, I think that the space of mass entertainment will continue to be dominated by the production companies that are able to spend the most money.
WHY DOES IT MATTER?
So why does it matter to those who are working, studying or interested in film and what impact will it have on them?
Although what I’m suggesting is hypothetical, we can already find practical examples of film production companies working in this virtual space. Visual effects companies such as Digital Domain, co-founded by James Cameron, are increasingly producing work, such as characters, in the VR and AR space.
While the transition for those who occupy roles in the visual effects and post production side of the film industry is relatively straightforward, what does it mean for other crew members who are used to plying their trade in a two dimensional world - like a cinematographer, for instance?
As we transition into this new virtual space there will be a period where capturing the real world is incorporated with visual effects work. This is a job that cinematographers are already performing: almost every film produced nowadays includes some degree of visual effects work incorporated with live action cinematography. Combining traditional photographic skills for capturing images with more conceptual skills is already a necessity for most DPs.
For example, Bradford Young was tasked with combining these skills when shooting Arrival.
“It was on us to determine the tenor of the visual effects. The visual effects aren’t going to determine how we make the film. We make the film and the visual effects come into play later.” - Bradford Young, Cinematographer
On Arrival the creative team decided on a set of rules when filming the live action, such as keeping the focus on the character in the foreground.
“We never threw focus or rarely threw focus to effects or a CG element. You know, we always kept it in the foreground. If we had four or five added helicopters we wouldn’t throw focus there and say ‘Hey, this is real!’...The film is not about that. The film is about what is happening in front of us.” - Bradford Young, Cinematographer
To me it would be sad to see sit-down cinema as we know it disappear in the metaverse (never mind the potential negative social effects the metaverse might have on the population at large). But one thing we can never escape from is that art is always changing.
Cinematographers of the future will be faced with tools for creating in the virtual world that may have been impossible before in the physical 2-D realm.
For example, even now with visual effects it is possible for cinematographers to shape light in a way that would have been otherwise impractical without digital help.
“We get out in these situations where we have a long walk and talk. Because of the environment that we’re in and because of the tools we have...people don’t walk with a 12x12 negative fill the whole walk. But when we do visual effects, we forget that it’s a visual effect, you’re lighting it so you can do whatever you want.” - Bradford Young, Cinematographer
The norms of how traditional creative systems are to be adapted are still being formulated, so being at the forefront of them as a creator is an exciting prospect.
CONCLUSION
I guess I’d sum up this piece by concluding that although the metaverse is still in its infancy, I think it’s indisputable that eventually filmmaking, and many other forms of entertainment, will continue to move into an increasingly virtual, online space.
As things become more and more virtual, filmmakers will need to adapt their skills from being more practical to being more conceptual. This process may be slow and take many many decades, but I have a feeling it may happen faster than we think.
The metaverse may open up interesting new possibilities for expression, but I think that the mainstream entertainment space will still be dominated by mass media companies that can spend the most. Bearing in mind that these are all predictions I think that there are a couple of things which most creatives should do to stay abreast of this changing visual world:
One. Stay informed and up to date on technological advancements.
Two. Continue honing and building your conceptual eye for visual communication and storytelling.
Because while the demand for your ability to physically photograph stories may dissolve over time, what has always been important, throughout the evolution of art from its earliest form up to what we have now, is the perspective of the artist. Having a strong artistic perspective and experienced eye for storytelling will ensure you’ll always have a job in whatever medium film, or visual storytelling, ends up being.
How Darren Aronofsky Shoots A Film At 3 Budget Levels
As I do in this series of videos, I’ll take a look at 3 different films made by Darren Aronofsky at 3 increasing budget levels: the low budget Pi, the medium budget The Wrestler, and the high budget Noah, to identify commonalities in his filmmaking and how his style has progressed throughout his career.
INTRODUCTION
The films that Darren Aronofsky makes occupy an interesting space. They straddle the line between experimental and realist, between mainstream and independent, between classical biblical allegories and contemporary tales.
However, what most of his films have in common is a strong emphasis on character and use of perspective to make the audience feel like you’re taking a journey in the shoes of those characters, not just observing their story from afar as an outsider.
As I do in this series of videos, I’ll take a look at 3 different films made by Darren Aronofsky at 3 increasing budget levels: the low budget Pi, the medium budget The Wrestler, and the high budget Noah, to identify commonalities in his filmmaking and how his style has progressed throughout his career.
PI - $60,000
Aronofsky’s introduction to filmmaking came from studying social anthropology and filmmaking at Harvard, graduating in 1991. His thesis short film for the programme, Supermarket Sweep, starred his friend and actor Sean Gullette. It was well received and won him a spot as a finalist at the 1991 Student Academy Awards. He went on to get his masters in directing from the AFI Conservatory, where he met and began working with his classmate in the cinematography programme, Matthew Libatique.
When it came to writing Pi, like with many other low budget films, he decided to focus on a single character. This idea of doing a portrait character study was born out of the verite documentaries he would make in film school which focused on the story of one person.
The experimental, psychological horror film was set in only a few locations, with the primary one being inside a small apartment.
The movie was financed through an early version of what I guess you could call crowdfunding. Aronofsky and his producer Eric Watson went around asking every friend, relative and acquaintance to give them $100 to fund their movie. Eventually they were able to raise $60,000 which, along with a host of other favours, was used to make the film.
Some of those favours included getting the crew to work for deferred pay by granting them shares in the film which would pay out once the film was sold, paying the actors $75 a day and getting a free warehouse which they could use to build their studio set.
Around $24,000 of the budget went towards buying and developing 16mm film stock, and much of the remaining funds were reserved for post production. This left very little money for gear rental, production design or locations on the 28-day shoot.
However, Libatique, who would photograph the film, was granted enough to rent an Aaton XTR 16mm camera and three lenses, plus a free, although small, tungsten lighting package to work with. He chose the XTR for its lightness, which helped with the ample handheld work, along with its ability to shoot single frames, which they used for the stop motion board game scene. He got two 16mm Canon zooms, an 8-64mm and an 11.5-138mm, and one Angénieux 5.9mm wide prime lens.
To support a surrealistic look that Libatique termed ‘low-fi stylisation’ Aronofsky decided to shoot Pi in black and white.
“Darren wanted to shoot Pi in black-and-white for both aesthetic and budgetary reasons. He wanted the most contrasty black-and-white possible, with really white whites and really black blacks.” - Matthew Libatique, Cinematographer
To achieve this look Libatique decided to use reversal film, Eastman Tri-X 200 and Plus-X 50 for daylight scenes, which has high contrast but less dynamic range than negative film. The latitude, the difference between the lightest and darkest parts of the image, was so small that he only had about 3 stops before the highlights started blowing out.
That narrow latitude is difficult to comprehend when compared with modern digital cameras like the Alexa, which can handle more than 14 stops of dynamic range.
Libatique’s lighting and metering of exposure had to be extremely precise as being even half a stop too bright might mean losing all detail. On top of that he used a yellow filter to further increase the contrast and get rich blacks.
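To put those numbers in perspective, dynamic range in stops is base-2 arithmetic: each extra stop doubles the contrast ratio between the brightest and darkest values the medium can record. A quick sketch (the helper name is my own, for illustration):

```python
def stops_to_contrast_ratio(stops: float) -> float:
    """Each photographic stop doubles the light, so the ratio is 2^stops."""
    return 2 ** stops

# ~3 usable stops on high-contrast reversal film
print(stops_to_contrast_ratio(3))    # 8
# 14+ stops on a modern digital cinema camera like the Alexa
print(stops_to_contrast_ratio(14))   # 16384
```

An 8:1 ratio versus a 16384:1 ratio makes clear why half a stop of metering error on reversal stock could wipe out all highlight detail.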
Their philosophy behind the look of the film was to create a subjective perspective that put the audience in the shoes of the protagonist. They did this by shooting with a single camera, shooting over the protagonist’s shoulder and moving it in a motivated way. So when the character moved, the camera followed.
To increase this personal perspective they also used a macro lens at times to capture close details in an abstract way that also represented the character’s gaze.
A final example of this subjective perspective can be found in the stylised use of mounting a camera directly onto the actor’s body. Kind of like vlogging, before the concept of vlogging existed. This gave a personal, up close, subjective perspective that mimicked the increasingly manic movement of the character.
They rigged a still photography tripod to a weight belt that was attached to the actor and mounted Aronofsky's own 16mm Bolex camera with a 10mm lens to the tripod. He altered the frame rates, overcranking for close ups and undercranking the camera at 12fps for POV shots, to show the character's increasing dissociation from the real world.
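The speed effect of over- and undercranking is simple ratio arithmetic: film is captured at one rate but projected at a fixed rate (conventionally 24fps), so motion speeds up or slows down by the ratio of the two. A minimal sketch (function name is illustrative):

```python
def apparent_speed(capture_fps: float, projection_fps: float = 24.0) -> float:
    """Playback speed multiplier for footage shot at capture_fps
    and projected at projection_fps."""
    return projection_fps / capture_fps

# Undercranking at 12fps doubles the speed of motion (jittery, manic)
print(apparent_speed(12))   # 2.0
# Overcranking at 48fps halves it (slow motion)
print(apparent_speed(48))   # 0.5
```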
Aronofsky spent the majority of the low budget on sound in post production, where he was able to find additional funding, as he knew that without a strong sound design and mix the film would fall flat. He was able to get a score from Clint Mansell who, like the crew, worked for a deferred fee.
He was therefore able to pull off Pi on an incredibly low budget by: writing a story with limited locations, characters and no large set pieces, getting crew to work for deferred pay, pulling lots of favours, and using a small gear package to create a vividly experimental, subjective, surrealist look.
THE WRESTLER - $6 Million
After winning the Directing Award at the Sundance Film Festival for Pi and selling it to distributor Artisan Entertainment for more than a million dollars, Aronofsky kick-started his feature film career.
Following the box office flop of the bigger-budget The Fountain, Aronofsky picked a lower budget script for his next film, a realistic dramatic portrayal of an aging wrestler, written by Robert D. Siegel. He raised a budget of $6 million to make the movie.
After Nicolas Cage initially expressed interest in the role, it was eventually granted to Mickey Rourke.
Although Rourke admired Aronofsky’s work and wanted to make a film with him, he wasn’t overly happy about the script as he felt that some of the dialogue didn’t accurately portray how his character would realistically talk. Therefore he, along with Aronofsky, re-worked much of the dialogue in the script until they were happy.
Due to the free way that Rourke liked to work, apparently around 40% of the final film was improvised and initially unscripted.
“I tried to approach the film as free as possible. I didn’t go onto set as I usually do with very specific notes and shot lists. I tried to be open every morning to what Mickey was going to bring and then try and figure out after I saw that the best way of capturing it.” - Darren Aronofsky
For example, most of the wrestling scenes were scheduled during real wrestling matches. The crew would wait till about halfway through a match and then bring Rourke into the ring and shoot a bit, using the real energy from the crowd who turned up.
As it was very physically demanding Rourke would then leave the ring, re-gather his energy and come back to shoot a bit more. During these breaks the real wrestlers would keep the crowd entertained while Rourke recovered and the cameras were reloaded with new film stock.
To capture this free way of working, Aronofsky devised a style and approach which both supported how he wanted to tell the story and which was practical.
There’s not much realism in the world of wrestling, which is all about over-the-top performance; however, the life of the main character in The Wrestler is all too painfully real. So Aronofsky decided to create a film grounded in cinema verite, which followed his protagonist, literally, with an up close and intimate handheld camera. Again he took on a more subjective perspective, however this time one far more centered on realism.
To create this look he hired cinematographer Maryse Alberti who had a track record in both fiction and documentary work.
They shot it on Super 16mm, which suited both the modest budget, as it is cheaper to shoot than 35mm, but the grain from 16mm was also reminiscent of the verite, documentary look that they were going for.
To create the look for this realistic portrait, Alberti shot almost entirely with natural light, mainly using whatever practical lighting was already in the locations. She would sometimes bring in a couple of lights or tweak them slightly in order to achieve exposure but otherwise left the lighting alone whenever possible.
The only exception was the final match, which was a built set. In this she mimicked the lighting setups of many of the other matches which they had already shot - based around using overhead lights and lighting the four corners of the ring.
Since most of the movie was assembled from long takes, photographed from the shoulder on a handheld camera, she chose the Arri 416 for her camera operator Peter Nolan.
She paired the camera with a set of Zeiss Ultra 16 prime lenses and two Angenieux Optimo zooms, a 15-40mm Lightweight and a 28-76mm.
Due to the length of the takes, Peter Nolan came up with some interesting techniques for operating the camera. One involved strapping an apple box to his waist so that when he sat down with the camera during a take he could rest his elbows on the apple box and hold the camera steady.
Sometimes these long takes required plenty of choreography and involved grips holding up flags at various points to block out lights from casting shadows of the camera.
So Aronofsky in some ways maintained his perspective of shooting the film in a subjective way, yet moved away from experimentation and more into realism.
The Wrestler’s higher budget allowed Aronofsky to hire a cast of well known actors for this performance heavy drama and pay all the cast and crew fair rates, yet they saved money by shooting on 16mm, in a rough, verite, documentary style which allowed them to work on real locations, without any large production design, grip or electrical setups.
NOAH - $160 Million
The Wrestler proved to be both a critical and financial success.
A few years later he turned to producing a story of huge scope which had interested him since he was a child: the biblical story of Noah. True to his style, Aronofsky adapted Noah to the screen by straying from the brief source material and including a more surrealistic, allegorical story, which visualised and presented themes through exaggerated characters and images.
Producing such a large scope script, with its epic set pieces, required a hefty estimated budget of around $160 million. Aronofsky turned to his regular DP Matthew Libatique to shoot the film.
“We were handheld on Noah, but it wasn’t like we were floating from character to character in a vérité style. I think we’ve matured as filmmakers and can focus on what’s important, which is subjectivity and storytelling.” - Matthew Libatique, Cinematographer
But, like on The Wrestler, Aronofsky wanted to be able to move the camera in a way that was very fluid and natural, but also in a way that was very controlled. Therefore Libatique mainly used Arricam LT cameras, which were light for handheld work yet also tough enough to handle working outdoors in the elements for extended periods without breaking.
With them he selected Zeiss Ultra Primes, mainly sticking to 3 focal lengths, a wide 24mm, a medium 50mm and a long 85mm.
This time he shot on 35mm, a format with greater clarity and less grain, more suitable for an epic. Libatique shot in the higher resolution 4-perf format for any shots that required post production special effects, and in 3-perf for regular scenes.
Although most of the film was shot handheld with a single camera from a more subjective perspective, certain scenes, such as the large flood scene, were shot with four cameras, two on Chapman Hydrascope cranes and two on the ground, to more quickly cover the many shots needed in this expensive set piece.
The magical exteriors were mainly filmed on location in Iceland.
When it came to lighting characters in those exteriors not much was done except for trying to block scenes so that the actors could be backlit by the sun. Libatique likes to keep things as naturalistic as possible so avoids lighting exteriors whenever he can, only using a muslin bounce occasionally when he needed more fill.
As Libatique says: “Fighting nature to mimic nature is a large undertaking.”
However some interiors and night scenes involved enormous setups. For example, to cover the battle scene at night his team hung 18 daylight balanced helium balloons from condors. Then, two 100-ton cranes each carried 100-foot rain bars, and another 100-ton crane carried an 80-foot rain bar, with two 32K balloons on each rain bar.
Another huge setup was the Ark set, which was constructed in three levels in a studio in New York. Lighting such a big space came at a cost.
For day scenes the rigging grips built a giant white ceiling bounce, made up of smaller UltraBounce surfaces. Bouncing into it were twenty 20K units, which they rigged on each side, underslung on the truss, along with 25 Mole-Richardson 12-lights.
Once production was wrapped, 14 months of post-production work began. During this time Aronofsky tasked Industrial Light & Magic with extensive VFX work including creating 99% of the animals in the film, dropping in background plates, like mountains or trees, and of course creating the mythical elements such as The Watchers.
As with all of Aronofsky’s films dating back to Pi, a score was composed by Clint Mansell.
Noah was therefore produced on a blockbuster budget, which was needed to create massive production design builds, enormous grip and lighting setups, a cast of stars and enormous set pieces which required over a year of innovative visual effects work.
CONCLUSION
Darren Aronofsky’s filmography covers an interesting range all the way from low budget independently financed films up to large studio blockbusters.
Despite this large growth in scale, his preference for visualising themes and presenting them through characters using a subjective perspective has carried over throughout.
While the maturity of his filmmaking might have grown, it maintains elements of original experimentation and an eye for the surreal that he’s had since his earliest foray into cinema.
The 3 Basics Of Cinematography
I think the most important duties of a director of photography, or DP, can best be distilled into 3 basic elements: exposure, lighting and camera positioning and movement. Let's take a look at these 3 aspects of cinematography to show why they are crucial in order to fulfil the DP's overarching function of building and capturing the look of a film.
INTRODUCTION
As you can probably gather from the name of this channel, I usually make videos that skip over some of the basics and make content that is a bit more, well, in depth. But since I’ve had some requests in the comments to make a video that goes over the basics of cinematography I thought I’d do just that.
As the role that the cinematographer takes on is a fairly technical and complex one, it’s a bit tricky to distill all the nuanced things that they do into a single YouTube video. However, I think the most important duties of a director of photography or DP can best be distilled into 3 basic elements: exposure, lighting and camera positioning and movement.
These three elements align with the three departments on a film set which the DP manages: the camera, lighting and grip departments. To be a cinematographer you need to be able to control all three of these elements and manipulate them in order to capture a visual style which suits the story being told.
So let's focus on each one of these departments, or aspects of cinematography, to show why they are crucial in order to fulfil the DP's overarching function of building and capturing the look of a film.
CAMERA
Let’s start with a fairly necessary feature of cinematography, the camera.
To capture an image, light passes through a glass lens and hits the film plane, which may house a digital sensor or a film stock. How the footage will look is determined by the amount of light that hits the film plane and how sensitively the digital sensor or the film stock absorbs that light.
This is what we call exposure. It refers to the amount of light that reaches the film plane. Letting in more light results in a brighter exposure, while letting in less light means a darker exposure. One of the most important parts of a cinematographer's job is measuring light and ensuring the correct exposure is achieved. Exposure is a tool that DPs can use to create an image that reflects the intended tone of the story.
A simple example can be found in comedies versus horror films. Typically comedies have a brightly exposed image which reflects the light, comedic tone of the story. While horror films often have a darker exposure which sets a broodier, scarier psychological tone.
To control exposure with the camera, the cinematographer can adjust three different variables: the shutter, the aperture and the ISO or film speed.
Motion picture cameras usually use a rotary disk shutter: a semi-circular disk that spins in front of the film gate. When the solid section of the disk passes the film gate, light is blocked. As it turns, the open section allows light to hit the film plane.
Since the shutter takes up a portion of a circle, in film cameras it is measured in degrees, referred to as the shutter angle. Adjusting the shutter angle changes the amount of time that the film is exposed to light. Making the angle smaller, like 45°, means the rotating shutter blocks more light, so the exposure will be darker. Inversely, making the shutter angle bigger, like 270°, lets more light through, so the exposure will be brighter.
The shutter angle also controls the secondary function of motion blur. A smaller shutter angle will constrict motion blur and make footage feel choppier, while a larger shutter angle will increase motion blur.
Conventionally the shutter is therefore kept at 180°, which makes movement feel ‘normal’ to the human eye.
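The relationship between shutter angle, frame rate and exposure time is simple arithmetic: the fraction of the circle that is open, divided by the frame rate. A short sketch of the standard formula (function name is illustrative):

```python
def exposure_time(shutter_angle_deg: float, fps: float) -> float:
    """Per-frame exposure time in seconds: (angle / 360) * (1 / fps)."""
    return (shutter_angle_deg / 360.0) / fps

# The conventional 180° shutter at 24fps gives the classic 1/48s exposure
print(exposure_time(180, 24))   # ~0.0208 s, i.e. 1/48
# A tight 45° shutter exposes each frame for only 1/192s: darker, choppier
print(exposure_time(45, 24))    # ~0.0052 s
# A wide 270° shutter exposes for 1/32s: brighter, more motion blur
print(exposure_time(270, 24))   # 0.03125 s
```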
Cinematographers therefore usually use the other two variables, aperture and film speed, to control exposure.
Aperture is the hole in the lens that can be opened or closed to let in more or less light. On cinema lenses this value is measured as a T-stop, and on still photography lenses as an f-stop. Opening up the aperture, shooting at a smaller stop number, means that the hole is large and more light is let in. Closing down the aperture, shooting at a higher stop number, makes the hole smaller and lets in less light.
Finally, the ISO or film speed refers to how sensitively the digital sensor or film stock responds to light. A low film speed such as 50 ISO is less sensitive, so produces a darker image, while a high film speed such as 800 ISO is more sensitive, so produces a brighter image.
So to expose an image which is not too bright and not too dark, cinematographers manipulate this ‘exposure triangle’ of shutter angle, aperture and film speed until the footage is correctly exposed.
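Because all three variables trade light in the same currency of stops, a change in one can be offset by an equal and opposite change in another. A rough sketch of that bookkeeping (the baseline values and helper name are my own illustrative assumptions, not an industry formula):

```python
import math

def exposure_offset_stops(shutter_angle: float, f_stop: float, iso: float,
                          base_angle: float = 180.0, base_f: float = 2.0,
                          base_iso: float = 800.0) -> float:
    """Exposure difference in stops relative to a baseline setting.

    Doubling the shutter angle or the ISO adds one stop; each full
    f-stop change (a factor of sqrt(2) in the f-number) adds or
    removes one stop.
    """
    angle_stops = math.log2(shutter_angle / base_angle)
    aperture_stops = 2 * math.log2(base_f / f_stop)
    iso_stops = math.log2(iso / base_iso)
    return angle_stops + aperture_stops + iso_stops

# Halving the shutter angle (-1 stop) offset by doubling the ISO (+1 stop):
print(exposure_offset_stops(90, 2.0, 1600))   # 0.0 -> exposure unchanged
```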
LIGHTING
Now that we know how to manipulate the brightness of an image by manipulating the amount of light that is let into the camera, cinematographers must focus on how the light that is let in actually looks.
Film lighting is usually done with continuous light sources. This is where the output of an electrical light remains fixed and consistent, unlike in stills where a flash, a brief burst of light, is often used.
There are two primary functions of lighting in film: first, creating a look for the film which is visually appealing and which suits the tone and supports the storytelling of the content which is being filmed, and second, maintaining a consistent source of light so that the exposure does not fluctuate unevenly during shooting.
A mistake I made when I got my hands on a camera and first started shooting short films was not focusing enough on lighting. Beginner cinematographers, myself included, often think creating a beautiful image comes from having a nice camera to shoot on. That it’s all about having the newest, fanciest gear.
However the most important part of cinematography and creating an aesthetically appealing image is rather to focus on how the image is lit.
There are no hard and fast rules when it comes to lighting, but familiarising yourself with some basic lighting ideas, and even just being aware of how the light hits a subject and affects the way a shot will look, is the most important first step.
The second reason that lighting is important is that if you are shooting only using natural light, it has a tendency to change throughout the day.
For example the sun may begin by shining through a window and then move overhead, throwing the interior into shadow. But, having a strong light source placed outside a window that replicates the sun's light will minimise any changes that might prevent different shots in a scene from matching up.
Maintaining consistent lighting and cinematic continuity between different shots photographed at different times is an important part of being a DP.
GRIPS
Now that the exposure is under control and we have lit a scene in a consistent way that supports the visual style of the story that is being told, the third most important basic element of cinematography is the placement of the camera and how it moves.
Some directors may leave a lot of this to the DP. Others may like to dictate exactly how the camera moves or a shot is framed. Regardless of the creative input, it is the job of the cinematographer to technically execute those choices.
When it comes to camera placement there are some basic guidelines which should be followed, such as the 180-degree rule.
This states that the camera should be kept on one side of an imaginary axis when shooting a scene that cuts between two characters, so that the first character always stays frame right of the second. However, on occasion, filmmakers have intentionally broken this rule for effect.
How each shot in a movie is framed and executed should be a conscious choice by the cinematographer and director, as different movements will have different effects on an audience. Just as different songs can make you feel different emotions, so too can camera movement. But perhaps in a more subtle way.
For example, a manically operated handheld camera has a different feeling to a buttery smooth tracking shot.
Therefore, when it comes to placing and moving the camera, cinematographers need to be knowledgeable about both the psychological effect that camera movement and framing has on an audience, and an understanding of various grip rigs and gear that can be used to create those moves.
CONCLUSION
So there you have it: exposure, lighting and camera placement and movement. Three basics that every DP consciously controls.
As I mentioned at the beginning, these three things only touch on the fundamentals of cinematography. There are a whole host of other decisions that DPs have to make on a minute by minute basis on set.
But, when you’re starting out, be sure to remember these three basic elements and focus on using them to bend the visual tone of the cinematography into a shape that best suits the story.
What A Key Grip Does On Set: Crew Breakdown
In this Crew Breakdown episode I’ll go over the head of department position in the grips department, key grip, a job which not everyone is familiar with but which greatly contributes to the look of many movies.
INTRODUCTION
In this series I go behind the scenes and look at some of the different crew positions on movie sets - the people who go about physically constructing each and every shot that happens on screen.
In this episode I’ll break down what key grips do, their average day on set and some tips which they use to be the best in their field.
ROLE
The key grip heads the department and is responsible for planning, supervising and delegating instructions to other grips in their crew.
But what exactly do grips do?
Well, whenever you see an impressive shot in a film where the camera moves in an innovative, amazing way, a grip is behind it. Grips plan and execute the movement and positioning of the camera as described or delegated by the director and cinematographer.
To do this they rely on constructing and operating a variety of rigs that the camera is attached to. Some examples of these include a dolly, a crane, a ladder pod, a Technocrane, a tripod or hard mounting the camera onto objects like a car, a motorbike or even a shopping cart.
The key grip’s second duty is rigging.
Their capacity in this regard changes depending on the working style of the country. For example in the US grips are expected to place and secure lighting stands, whereas in other countries, like South Africa where I work, the lighting department handles this.
However in almost all locations grips are expected to help out with the more complex rigging of lighting equipment. For example, securing a polecat or pole between two walls so that lights can be placed high in the ceiling. Or, rigging large scrims or blackouts, such as 20x20 foot sized frames, above actors using heavy duty stands, trusses or attaching them to a Manitou.
The third and final broad category of their work is determining the safety of a setup.
For complicated shots, difficult rigging or placing crew or cast on film equipment, the key grip is consulted on determining the safety of the setups and those involved. Having an experienced grip on the crew is important in this regard to ensure that, most importantly, there are no injuries, or damage to property, sets or gear.
This is why rental companies insist that an experienced grip be present whenever heavy duty equipment, such as a Technocrane, is rented or used on set.
Overall the key grip’s role is a broad one: carefully planning, consulting with creative crew members like the director and DP, and then practically executing their vision with a variety of rigs in a safe and responsible manner.
AVERAGE DAY ON SET
The key grip joins a project before the on set shoot begins.
In pre-production they will discuss the shots that are needed with the DP. Sometimes this will be done with the help of storyboards, or even a pre-vis animation if the filmmakers are very specific about the kinds of moves they need.
Some shoots are relatively straightforward and may only require basic handheld shots, while others may require complicated, tricky camera movement, custom grip rig builds or large amounts of lighting rigging that may require more planning.
Before shooting they will also go on a recce to the shooting locations with the director, cinematographer and other relevant crew members. Here they will physically plan out, measure and assess the grip logistics - such as whether a dolly can fit through a door, or what length of pipe they will need to rig to the ceiling.
The key grip will therefore arrive on set usually with a solid idea of what lies ahead for the day. They will get a shooting schedule from the 1st AD and start preparing for the first shot. This could be a dolly shot.
In this case the DP will use a viewfinder, the camera, or their eye to determine a position and the start and end points of the move. The key grip will mark these points and call on their team to bring in the correct lengths of track.
They will place the track, level it then bring in the dolly and lift it onto the track.
It should be noted that depending on the country or size of the shoot there may be a dedicated dolly grip who pushes the camera on the dolly. In some countries, such as mine however, this is usually also done by the key grip.
The grip department is made up of the key grip, who consults directly with the DP, leads the department and makes the important decisions, and the best boy grip, who manages things like gear inventory and oversees a team of grips which will fluctuate depending on the size and needs of the shoot.
The key grip and their team will continue setting up ways to position and move the camera throughout the day, while also assisting the lighting team with any necessary rigging work.
Some other duties they may be called on to perform include operating a gimbal and ensuring the safety of the operator during handheld work. This is done by gripping, excuse the pun, a part of the operator’s body, such as their waist, and moving with them as they operate, making sure they don’t trip, lose their balance or crash into any obstacles.
TIPS
Although I’ve never worked as a grip myself, I have observed a few tricks that some of the best use on set.
Since many grip setups, such as laying and leveling track or building a crane, are very time consuming, the best grips are always a few steps ahead: they have their team use whatever gear is available to start on the next setup in advance, while the key grip stays with the DP close to the camera.
For example, when shooting in a house they may start off with locked off dolly shots and then move on to a tracking shot outside the house. A good grip will consult with the DP beforehand to get a position for the tracking shot, then have their team lay and level track so that everything is set up by the time the company moves on to the next shot.
Finding ways to save time will ingratiate you with the DP and director, as it gives them more time to shoot.
Another way of saving time is to get camera moves right on the first attempt. To do this it’s useful to use tape to make start and end marks next to the dolly wheels. That way the dolly grip knows exactly where to start and stop each move and will never waste time by over or undershooting it.
Having a tape measure or, ideally, a laser measure also allows you to quickly take measurements, such as what pole length to use or how many lengths of track are required.
A last tip is to always have the right tools for the job. This is why key grips often have their own grip truck which is fully kitted out - a process that happens through years of acquiring tools that are necessary on set.
This usually includes a full set of apple boxes, packing and wedges. These useful pieces of wood are always required on set for everything from leveling a track, to propping up a product or, most importantly, serving as a seat for the director.
Grips should be prepared for everything: a jigsaw to cut bits of wooden board, clamps, WD-40 to lubricate moving parts, rolls of gaffer tape for making marks, and a selection of wrenches, allen keys and other tools for adjusting mechanical parts.
Actually scratch all that…I’ve just remembered the real best tip that any grip can follow. Get padded seats for your apple boxes. Have padded apple boxes and the crew will worship you!