Cinematography Style: Barry Ackroyd
Barry Ackroyd is a cinematographer who plays to his strengths. Over his career he’s developed an instantly recognisable style to his photography that is based around a vérité, documentary-esque search for truth and capturing realism. In this episode of Cinematography Style I’m going to take a look at the renowned work of Barry Ackroyd by going over his philosophical ideas on cinematography and outlining the gear that he uses to execute his vision.
INTRODUCTION
Barry Ackroyd is a cinematographer who plays to his strengths. Over his career he’s developed an instantly recognisable style to his photography that is based around a vérité, documentary-esque search for truth and capturing realism.
He works with multiple on-the-ground, handheld, reactive cameras, uses bold punch-in zooms, and has been hired by directors such as Ken Loach and Paul Greengrass who highly value a sense of realism and heightened naturalism in their films.
So, in this episode of Cinematography Style I’m going to take a look at the renowned work of Barry Ackroyd by going over his philosophical ideas on cinematography and outlining the gear that he uses to execute his vision.
BACKGROUND
“I’m a cinematographer who was brought up in documentaries in Britain on small budgets.”
Ackroyd’s initial plans to become a sculptor changed while he was studying Fine Arts at Portsmouth Polytechnic after he discovered the medium of 16mm film.
He began working as a television cameraman in the 1980s, mainly shooting documentaries. It was there that he first encountered director Ken Loach. After working on a couple of documentaries together, Ackroyd was offered an opportunity to shoot Riff-Raff for Loach - his first feature length fiction film.
He continued to shoot numerous fiction films and documentaries for Loach during this period, culminating in The Wind That Shakes The Barley, which won the Palme d’Or at the Cannes Film Festival. Following this success he began working on other fiction projects for various well known directors such as Paul Greengrass, Kathryn Bigelow and Adam McKay.
PHILOSOPHY
“Sometimes it’s better just to play to your strengths rather than to try to diversify too much…That was a choice I made, to play to my strengths.”
One of those strengths is a look rooted in a documentary style of working, which was informed by his early work on TV docs. Those documentaries usually relied on operating the camera handheld from the shoulder in order to record the necessary moments as they happened live. In the real world, events or moments often happen only once, so you need an easily mobile camera to observe and capture them.
This is the opposite of fiction filmmaking, where events and scenes can be played out multiple times, and are more often than not photographed in a carefully curated, composed visual style. Rather than going the usual fiction cinematic route, Ackroyd took documentary conventions and ways of working and applied them to fiction filmmaking.
For example, he prefers shooting movies on real locations whenever possible, rather than on a constructed set or soundstage - even if that real location is a ship on the ocean.
Ackroyd tends to steer away from setting things up too perfectly and instead leans towards a look where capturing a version of reality is far more important than capturing a ‘perfect’ image.
“I think if you look at my work I’m always trying to push what I’ve done before…and actually I push it towards imperfection…There’s a kind of state that you get into where you’re just in tune with what’s happening in front of the camera.”
To capture images realistically, honestly and with as few barriers as possible he relies on working with multiple camera operators and puts a lot of trust in his crew members. He gives his crew lots of credit on set and in interviews, from the focus puller to the sound recordist, and maintains the importance of teamwork and a group effort in creating a film.
“I used to say that in documentaries the best shot that you get in documentaries is out of focus and underlit and looks rubbish. You know that it had to be in the film because it was absolutely right at the time…I think that’s what you’re striving for, you know. Not to overwhelm people with the beauty. Not to fall in love with the landscape…But to get the picture that…you’re involved with it.”
An example of how he seeks authenticity through imperfections can be found in his approach to blocking scenes with directors and actors. Usually actors rehearse a scene on set and then marks are put down on the floor to indicate the exact positions they must hit in order to be perfectly lit, perfectly framed and perfectly angled for the shot.
Ackroyd prefers not to mark actors. He sets up any lights he needs either overhead or outside the set so that the actors have the freedom to move around as they like when they play out the scene. Since they don’t have to worry about hitting specific marks, he finds that the actors loosen up more, which injects a realist spontaneity into how their performances are captured.
Sometimes this leads to technical imperfections like moments that are out of focus or frames that aren’t classically composed. But it also injects an energy into the images which is undeniable.
GEAR
“You know I like to get physically involved. We ran around with the cameras. We had four or five cameras at times…In any one setup you’re trying to talk to all the guys, see what they’ve done, see what the next shot should be and give, you know, support and advice.”
As we mentioned, Ackroyd likes shooting with multiple handheld cameras. This allows his operators to quickly react and capture details or moments of performance. It also provides the director and the editor with multiple angles and perspectives which they can cut to in order to build up the intensity and pacing in a scene.
Directors who he has repeatedly collaborated with like Paul Greengrass and Kathryn Bigelow are known for their preference for quick cutting. Ackroyd’s style provides them with the high number of angles that are needed to work in this way.
One of the most important camera tools he uses is focus. He describes focus as being the best cinematic tool, even better than a dolly, crane or tripod, because focus mimics what we naturally do with our eyes and can be used to shift the attention of the audience to a particular part of the frame. He isn’t overly strict with his focus pullers and in fact prefers a natural, more organic approach, where people drift in and out of focus, over having every single shot feature perfectly timed, measured and calculated focus pulls.
Another important tool in his toolbox is his use of zoom lenses. Again this goes against traditional fiction cinematography principles which ascribe a greater value to prime lenses over zooms - which most documentaries are shot with. He uses quick punch-in zooms as a tool to direct the focus of the audience in the moment. For example if a line of dialogue or an energetic moment of performance is particularly important his operators may push into it with a quick zoom for emphasis.
His choice of camera gear is a bit of a mishmash. In the same film he may use different formats, such as digital, 35mm and 16mm film, with different prime and, of course, zoom lenses. For example, on Captain Phillips the aerial shots were captured digitally, the sequences in the fishing village and on the skiff were shot in Super 16, and the production then switched to 35mm film once the characters boarded the large shipping vessel.
He likes the texture of film and has often used the higher grain 16mm to complement his look. He famously used Super 16 to support the raw, on-the-ground documentary aesthetic on The Hurt Locker.
“Well then I thought it has to be Super 16. We have to get back to the basics. Get down to the lenses you can carry and run with and will give you this fantastic range of wide shots and big close ups…The first thing everybody said was that, ‘well, the quality is not going to be good.’ Well, nobody has criticised the quality of the film. They’ve only praised it.”
He has a preference for Fujifilm stock as it fares well in high contrast lighting situations. When shooting on film he would sometimes purposefully underexpose the negative and then bring up the levels later in the DI in order to introduce more grain to the image.
Ackroyd liked to combine 250D and 500T Fujifilm stocks when shooting Super 16 or 35mm. However, after Fujifilm discontinued its motion picture stocks he transitioned to shooting on Kodak film or with digital cameras - mainly the Arri Alexa Mini.
On Detroit he used the Arri Alexa Mini in Super 16 mode and shot with Super 16 lenses to introduce noise and grain to the image and get a Super 16 feel, which was further amped up in the grade, all while maintaining the benefits of a digital production.
The Aaton XTR is his go-to Super 16 camera, so much so that he owns one. He has used different 35mm cameras such as the Aaton Penelope, the Arriflex 235, the Moviecam Compact and the Arricam LT. Some of his favourite Super 35 zooms are the Angenieux Optimo 15-40mm and, in particular, the 28-76mm, which are both light enough to be handheld and provide a nice zoom range that he can use to punch in with.
He’s also used the Angenieux Optimo 24-290mm, sometimes with a doubler when he needs a longer zoom. It’s too heavy to be used handheld but he has used it with a monopod to aid in operating the huge chunk of a lens and still preserve a handheld feel. Some other zooms he has used include a rehoused Nikon 80-200mm and the Canon 10.6-180mm Super 16 zoom.
Although he prefers zooms he often carries a set of primes which have a faster stop and can be used in low light situations such as Zeiss Super Speeds or Cooke S4s.
Due to the lack of blocking or focus marks, he usually gives his focus pullers a generous, deep stop of around T/5.6 and a half to work with.
To further support his look based on realism and documentary, he lights in a very naturalistic manner. He tries to refrain from lighting exteriors altogether, and for interiors adds motivated touches of artificial light when he needs to balance the exposure in a scene. A lighting tool he likes to use for this is the single Kino Flo tube, which can easily be rigged overhead or out of sight to provide a low level fill to a scene.
CONCLUSION
Barry Ackroyd’s cinematography is more about deconstructing photography than it is about trying to produce a perfectly beautiful image.
To him imperfections are a signal of authenticity and an expression of realism rather than a flaw. Breaking down an image can’t be done competently without a great degree of skill and knowledge.
His films aren’t created by just picking up a bunch of cameras and pointing them in the general direction of the action, but are rather made through deliberate thought and the cultivation of a style that emits as much intensity, feeling of reality and truth as possible.
Does Sensor Size Matter?
Since there are loads of different cameras with loads of different formats and sensor sizes out there to choose from, in this video I’ll try to simplify it a bit by going over the five most common motion picture formats and discussing the effect that different sensor sizes have on an image.
INTRODUCTION
The sensor or film plane of a camera is the area that light hits to record an image. The size of this area can vary a lot depending on the camera, with each sensor size or format having a subtly different look.
Since there are loads of different cameras with loads of different formats and sensor sizes out there to choose from, in this video I’ll try to simplify it a bit by going over the five most common motion picture formats and discussing the effect that different sensor sizes have on an image.
5 MOTION PICTURE FORMATS
The size of a video camera's film plane or sensor ranges all the way from the minuscule one third inch sensor found in smartphones or old camcorders up to the massive 70x52mm 15-perf Imax film negative. But, rather than going over every single sensor in existence, I’m going to take a look at five formats which are far and away the most popular sizes used in film production today and have been standardised throughout film history.
While there are smaller sizes like 8mm film, or sizes in between like the Blackmagic 4K’s Four Thirds sensor, these are used far less frequently in professional film production and are outliers rather than standards. I’ll also only be looking at video formats, so won’t be going over any photographic image sizes such as 6x6 medium format.
The smallest regularly used format is Super 16. The film’s smaller size of around 12.5 by 7.4mm makes it a cheaper option than the larger gauge 35mm, as less physical film stock is required.
Due to this it was often used in the past to capture lower budget productions. Now that digital has overtaken film, Super 16 is mainly chosen for its optical capabilities. Its lower resolution look and prominent film grain means that it is often used today to evoke a rough, documentary-esque feeling of nostalgia.
Some digital cameras, such as the original Blackmagic Pocket Cinema Camera, have a sensor that covers a similar area to Super 16, and cameras such as the Arri Alexa Mini have specialised recording modes which only sample a Super 16 sized area of the sensor.
Moving up, the next, and by far the most common format is Super 35. This format is based on the size of 35mm motion picture film and covers an approximate area of 24.9 by 18.7mm. The name 35mm refers to the total width of the film strip, including the perforated edges on either side of the negative area.
Depending on the budget, aspect ratio and lenses, different amounts of vertical space, measured in perforations per frame, can be shot. The frame can be cropped to use less film stock or to extract a widescreen image when using spherical lenses. Shooting with anamorphic lenses, which optically squeeze the image, typically uses the full height of the negative or sensor, with the image de-squeezed at a later stage to get to a 2.39:1 aspect ratio.
Many digital cinema camera sensors are modelled on this size, with some minor variations depending on the camera, such as the Arri Alexa Mini, the Red Dragon S35 and the Sony F65. Since this format is the most popular in cinema, most cinema lenses are designed to cover a Super 35 sized sensor, meaning this format has the widest selection of cinema glass available on the market.
Stepping up from Super 35 we get to what is called a large format or full-frame sensor. This size is modelled on still photography DSLR cameras with a 35mm image sensor format, such as the Canon 5D, and is larger than Super 35. It’s also around the same size as 8-perf VistaVision film.
Although digital sensors differ a bit depending on the camera, it is usually about 36 by 24mm. Some cameras with this sensor size include the Alexa LF, the Sony Venice and the Canon C700 FF.
This large format is a middle ground between Super 35 and the next format up - 65mm.
Originally, this format was based on 65mm gauge film, which was around 3.5 times as large as standard 35mm and measured 52.6 by 23mm across 5 perforations per frame, with a widescreen aspect ratio of 2.2:1. The Alexa 65 has a digital sensor that matches 65mm film and is a viable digital version of this format.
Finally, the largest motion picture format that you can shoot is Imax film. With an enormous 15 perforations per frame, an Imax frame covers a 70.4 by 52.6mm image area.
Due to its enormous negative size and the large, specialised cameras required to shoot it, this format is prohibitively expensive and out of the budget range of most productions. But, it has seen a bit of a resurgence in recent years on high budget blockbusters from directors such as Christopher Nolan who champion the super high fidelity film format.
THE EFFECTS OF SENSOR SIZES
With these five formats in mind, let’s examine some of the effects and differences between them. There are a few things that choosing a format or sensor size affects.
The most noticeable optical effect is that different formats have different fields of view. What this means is that if you put the same 35mm lens on a Super 16, Super 35 and a large format sensor camera, the smaller the sensor is the tighter the image that is recorded will appear.
So the field of view on a large format camera will be much wider than on a Super 16 camera, which will appear tighter. Since the field of view is wider, larger formats also have a different feeling of depth and perspective.
Because of this difference, the format determines the range of focal lengths that need to be used on the camera. To compensate for the field of view differences, smaller formats like Super 16 need to use wider angle lenses to capture the same amount of the scene, while larger formats need to use longer lenses for that same frame.
For example, to get the same field of view from a 35mm lens on a Super 35 sensor, a Super 16 camera needs to use a 17.5mm focal length and a large format, full-frame camera needs to use a 50mm focal length.
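To make that crop factor arithmetic concrete, here is a minimal Python sketch (my own illustration, not something from the video) that scales a focal length by the ratio of sensor widths. The width figures are rounded approximations, and matching fields of view by horizontal width rather than diagonal is an assumption.

```python
# Approximate sensor widths in millimetres (rounded; real gate sizes vary by camera).
SENSOR_WIDTH_MM = {
    "super16": 12.5,
    "super35": 24.9,
    "full_frame": 36.0,
}

def equivalent_focal_length(focal_mm, from_format, to_format):
    """Focal length on `to_format` giving roughly the same horizontal
    field of view as `focal_mm` does on `from_format`."""
    return focal_mm * SENSOR_WIDTH_MM[to_format] / SENSOR_WIDTH_MM[from_format]

# Matching the field of view of a 35mm lens on Super 35:
print(round(equivalent_focal_length(35, "super35", "super16"), 1))     # ~17.6mm
print(round(equivalent_focal_length(35, "super35", "full_frame"), 1))  # ~50.6mm
```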
Since focal length affects depth of field, this is another knock-on effect of choosing a format. Longer focal lengths have a shallower depth of field - that is, a smaller area of the image which is in focus. Full-frame cameras that use longer focal lengths for the same framing will therefore have a shallower depth of field. This means that the larger the format, the more the background will be out of focus and the more the subject will be separated from the background.
This is helpful for creating a greater feeling of depth for wide shots which people often perceive as looking more ‘cinematic’.
One negative effect of this is that the job of the 1st AC to keep the focus consistently sharp becomes far more difficult. For this reason smaller formats such as Super 16 are far more forgiving to focus pullers, as they have a deeper depth of field where more of the image is in focus, so the margin for error is not as harsh.
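The same relationship can be sketched with the standard thin-lens depth of field formulas. The sketch below is an illustration, not a production DoF calculator: it compares approximate total depth of field at the same framing and the same T-stop, and the sensor widths and circle-of-confusion values are common rules of thumb rather than exact standards.

```python
import math

# Format: (approx. sensor width in mm, typical circle of confusion in mm)
FORMATS = {
    "super16":    (12.5, 0.013),
    "super35":    (24.9, 0.025),
    "full_frame": (36.0, 0.029),
}

def total_dof_m(focal_mm, t_stop, subject_m, coc_mm):
    """Approximate total depth of field (far limit minus near limit) in metres,
    using the thin-lens hyperfocal distance formula."""
    s = subject_m * 1000.0  # work in millimetres
    hyperfocal = focal_mm ** 2 / (t_stop * coc_mm) + focal_mm
    near = s * (hyperfocal - focal_mm) / (hyperfocal + s - 2 * focal_mm)
    far = s * (hyperfocal - focal_mm) / (hyperfocal - s) if s < hyperfocal else math.inf
    return (far - near) / 1000.0

# Same framing on every format: a "35mm on Super 35" field of view, subject at 3m, T2.8.
base_focal, base_width = 35.0, FORMATS["super35"][0]
for name, (width, coc) in FORMATS.items():
    focal = base_focal * width / base_width  # equivalent focal length for this format
    print(f"{name}: ~{total_dof_m(focal, 2.8, 3.0, coc):.2f}m in focus")
    # super16 ~2.4m, super35 ~1.0m, full_frame ~0.6m: bigger format, thinner focus.
```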
The grain and resolution that an image has is also affected by the size of the format. The smaller the format is, the more noticeable the grain or noise texture will usually be, and the larger the sensor is the finer the grain will appear and the greater clarity and resolution it often has.
Sometimes cinematographers deliberately shoot smaller gauge formats like Super 16 to create a more textured image, while others prefer larger formats like 65mm for its super clean, sharp, low noise look.
So those are the main optical effects of choosing a format.
Smaller formats require wider focal lengths, have a deeper depth of field, have more grain and will overall feel a bit flatter.
Larger formats require longer focal lengths, have a shallower depth of field, less grain, greater resolution and clarity and overall have a more three-dimensional look with an increased feeling of depth.
There are also the all-important practical implications to be considered. Generally speaking the larger the format, the larger the form factor of the camera will be to house it and the more expensive it is to shoot on.
This calculation may be different when comparing the costs of digital and film, but when comparing all the digital formats, renting the cameras and lenses for 65mm will be more expensive than a Super 35 camera. Likewise, when comparing film formats 16mm is vastly cheaper than Imax.
So broadly speaking, smaller formats tend to be more budget friendly and come in a smaller housed package.
DOES SENSOR SIZE MATTER?
Coming back to the question of whether sensor size matters, I don’t think any one sensor is necessarily better than another. But the effects that they produce are certainly different.
Filmmakers that want an image that immerses an audience in a crystal clear, highly detailed, wide vista with a shallow depth of field will probably elect to shoot on a larger format.
Whereas those who require a more textural, nostalgic or rougher feeling photography with less separation between the subject and the background may be drawn to smaller gauge formats.
As always, the choice of what gear is most suitable comes down to the needs of the project and the type of cinematic tone and photographic style you are trying to capture.
Using Colour To Tell A Story In Film
Let’s examine this idea of colour by going through an introduction to colour theory, looking at how filmmakers can create a specific colour palette for their footage and checking out some examples of how colour has been used to aid the telling of different stories.
INTRODUCTION
Cinematography is all about light.
Light is a complex thing. It can be shaped, it can come in different qualities, different strengths and, importantly, it can take the form of different colours.
So, let’s examine this idea of colour by going through an introduction to colour theory, looking at how filmmakers can create a specific colour palette for their footage and checking out some examples of how colour has been used to aid the telling of different stories.
WHAT IS COLOUR THEORY?
Colour theory is a set of guidelines for colour mixing and the visual effects that using different colours has on an audience.
There are many different approaches to colour theory, ranging from ideas as far back as Aristotle’s time up to later studies on colour such as those by Isaac Newton. But let's just take a look at some basic ideas and see how they can be applied to film.
When light hits objects with different physical properties, different wavelengths are reflected back to us and we perceive a colour, which we put into a category and ascribe a name to.
Primary colours are a group of colours that can be mixed to form a range of other colours. In film these are often, but not always, used sparingly in a frame. A splash of red in an otherwise green landscape stands out and draws the eye.
An important part of colour theory in the visual arts is knowing complementary colours. When two of these colours are combined they make white, grey or black. When the spectrum of colours is placed on a colour wheel, complementary colours always take up positions opposite each other.
When two complementary colours are placed next to each other they create the strongest contrast for those two colours and are generally viewed as visually pleasing. Cinematographers often combine complementary colours for effect and to create increased contrast and separation between two planes in an image. For example, placing a character lit with an orange, tungsten light against a blue-ish teal background creates a greater feeling of separation and depth than if both the character and the background were similar shades of orange.
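As a rough illustration of the 'opposite on the colour wheel' idea (my own sketch, assuming a simple additive RGB/HLS model rather than anything a colourist would actually use), rotating a colour’s hue by 180 degrees lands on its complement - a warm tungsten orange maps to a blue-ish teal.

```python
import colorsys

def complement(r, g, b):
    """Complementary colour of an 8-bit RGB value: rotate the hue 180 degrees
    while keeping lightness and saturation the same."""
    h, l, s = colorsys.rgb_to_hls(r / 255, g / 255, b / 255)
    cr, cg, cb = colorsys.hls_to_rgb((h + 0.5) % 1.0, l, s)
    return round(cr * 255), round(cg * 255), round(cb * 255)

warm_tungsten_orange = (255, 166, 87)
print(complement(*warm_tungsten_orange))  # roughly (87, 176, 255): a blue-ish teal
```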
When it comes to the psychology of using colour, cinematographers generally fall into two camps - or somewhere in the middle. Some cinematographers such as Vittorio Storaro think that certain colours carry an innate, specific psychological meaning.
“Changing the colour temperature of a single light, changes completely the emotion that you have in your mind. I didn’t know at the time the meaning of the colour blue. It means freedom.” - Vittorio Storaro
Other filmmakers rely more on instinct and what feels best when lighting or creating a colour palette for a film. The psychology of colour can change depending on the context and background of the audience.
As well as being a means of representing and expressing different emotions, deliberate and repeated uses of colour can also be used by filmmakers as a motif to represent themes or ideas.
Another important part of colour theory is warm and cool colours. The Kelvin scale is a way of measuring the warmth of light, with lower Kelvin values being warmer and higher Kelvin values being cooler.
Warm and cool colours can have different psychological effects on an audience and can also be used to represent different physical, atmospheric conditions. Warmer colours can be used to emphasise the feeling of physical heat in a story, while inversely cooler colours can be used to make the setting of a story feel cold or damp.
CREATING A COLOUR PALETTE
Now that we have a basic framework of colour theory to work with, let's look at the different ways that filmmakers can make a colour palette for a movie. Colour palettes in film can be created using three tools: production design and costume, lighting and in the colour grade.
The set and the clothing that the characters are dressed in are always the starting point for creating a colour palette. In pre-production, directors will usually meet with the production designer and come up with a plan for the look of the set. They might limit the art director to certain colours they need to work with, or decide on specific tones for key props. The art team will then go in and dress the set by doing things such as painting the walls a different colour and bringing in pieces of furniture, curtains and household items that conform to that palette.
Since characters are usually the focus of scenes and we often view them up close, choosing a colour for their costume will also have a significant impact on the overall palette. This may be a bold primary colour that makes them stand out in the frame, or something more neutral that makes them blend into the set.
With a set to work with, the next step in creating a movie’s colour palette is with lighting.
Traditionally, film lighting is based around the colour temperature of a light which, as we mentioned, can be warm, such as a 3,200K tungsten light, or cool, such as a 5,600K HMI. On top of this, cinematographers can also choose to introduce a tint to get to other colours. This can be done the old school way by placing different coloured gels in front of lights, or the modern way by changing the hue or tint of LEDs.
DPs can either flood the entire image with monochromatic coloured light, or, as is more common, light different pockets of the image with different colour temperatures or hues. In the same way that we create contrast by having different areas of light and shadow in an image, we can create contrast by having different areas of coloured light.
Once the colour from the set and the lighting has been baked into the footage, we move into post-production where it’s possible to fine tune this colour in the grade.
An image contains different levels of red, green and blue light. A colourist, often with the guidance of a director or cinematographer, uses grading software like Baselight or Da Vinci Resolve to manipulate the levels of red, green and blue in an image.
They can adjust the RGB balance within specific tonal ranges, like introducing blue into the shadows, or adding magenta to the highlights. They can also create power windows to change the RGB values in a specific area of the frame, or key certain colours so that they can be individually adjusted. There are other significant adjustments they can make too, such as saturation - the overall intensity of the colour in the image.
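As a toy example of the 'blue into the shadows' idea (this is not how Resolve or Baselight work internally, just a sketch of the principle), the adjustment below is weighted by how dark each pixel is, so the highlights are left mostly alone.

```python
def add_blue_to_shadows(pixel, strength=0.15):
    """pixel is an (r, g, b) tuple of floats in the 0..1 range."""
    r, g, b = pixel
    luma = 0.2126 * r + 0.7152 * g + 0.0722 * b  # Rec. 709 luminance weights
    shadow_weight = (1.0 - luma) ** 2            # ~1 in deep shadow, ~0 in highlights
    b = min(1.0, b + strength * shadow_weight)
    return (r, g, b)

print(add_blue_to_shadows((0.05, 0.05, 0.05)))  # dark pixel: blue clearly lifted
print(add_blue_to_shadows((0.90, 0.90, 0.90)))  # bright pixel: barely changes
```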
USING COLOUR TO TELL A STORY
“It’s a show about teenagers. Why not make a show for the teenagers that looks like how they imagine themselves. It’s not based on reality but mostly on how they perceive reality. I think colour comes into that pretty obviously.” - Marcell Rév
When coming up with a concept for the lighting in Euphoria, instead of assigning very specific psychological ideas to colour, Marcell Rév used colour more generally as a way to elevate scenes from reality.
He wanted to put the audience in the emotionally exaggerated minds of some of the characters and elevate the level of the emotions that were happening on screen. In the same way that the often reckless actions of the characters continuously ratcheted up the level of tension in the story, so too did the exaggerated, brash, coloured lighting.
To increase the potency of the visuals he often played with a limited palette of complementary colours. He avoided using a wide palette of colours, as it would become too visually scattered and decrease the potency of the colours that he did use.
Along with his gaffer he picked out gels, mainly light amber gels which he used with tungsten lights and cyan 30 or 60 gels which he used with daylight HMIs. They also used LED Skypanels, which they could quickly dial specific colour tints into.
“That light…that colour bouncing off the screen and arriving at us we don’t see it only with the eyes, we see it with the entire body…because light is energy. I’m sending some vibrations to you, to the camera, to the film…unconsciously.” - Vittorio Storaro
When photographing Apocalypse Now, Vittorio Storaro was very deliberate about his use of colour. He wanted the colours to be so strong and saturated that the world on film almost became surrealistic.
He wasn’t happy with Kodak’s 5247 100T film stock at the time, so he got the film laboratory to flash the negative to get the level of contrast and saturation which he was happy with.
In the jungle scenes he didn’t want to portray the location naturally. He sometimes used filters to add a monochromatic palette which was more aggressive, to increase the tension.
“I can use artificial colour in conflict with the colour of nature. I was using the symbolic way that the American army was using to indicate to the helicopter…They were using primary and complementary colours. I was using those kinds of smoke colours to create this conflict.” - Vittorio Storaro
He also described how the most important colour in the film was black, particularly in the silhouetted scenes with Kurtz. He felt black represented the unconscious and was most appropriate for scenes where the audience was trying to discover the true meaning of Kurtz, with small slivers of light, or truth, emerging from the depths of the unconscious.
What A Boom Operator Does On Set: Crew Breakdown
In this Crew Breakdown video I’ll go over the position in the sound department of the boom operator, to break down what they do, their average day on set and some tips which they use to be the best in their field.
INTRODUCTION
In this series I go behind the scenes and look at some of the different crew positions on movie sets and what each of these jobs entails. If you’ve ever watched any behind the scenes videos on filmmaking you’ve probably seen this person, holding this contraption.
In this Crew Breakdown video I’ll go over the position in the sound department of the boom operator, to break down what they do, their average day on set and some tips which they use to be the best in their field.
ROLE
The boom operator, boom swinger or first assistant sound is responsible for placing the microphone on a set in order to capture dialogue from the actors or any necessary sounds in a scene.
They do this by connecting a boom mic, or directional microphone, to a boom pole. The mic is then connected either with an XLR cable or wirelessly to a sound mixer where the sound intensity is adjusted to the correct level.
On feature films this mixing is done separately by the sound recordist, who heads the department, is responsible for recording all the audio and delegates the positioning of the mic to the boom operator. However, on low budget features, TV shoots, documentaries or commercials, the roles of the sound recordist and the boom swinger are sometimes performed simultaneously by one person.
To get the best possible sound and capture dialogue clearly, the microphone usually needs to be placed as close as possible to the actors. Since film frames are quite wide and see a lot of the location, the best way to get the microphone close to the action without it entering the shot is to attach it to a boom pole with the mic angled downwards, then use the length of the boom held overhead to position the microphone directly above the actors and just outside the top of the frame.
For stationary shots without camera movement this involves finding a position for the boom and holding it throughout the take. Sometimes for long documentary interviews this can be done with the help of a stand. However, for shots which involve camera movement or actors that are moving and talking, the boom operator is tasked with performing a kind of dance. They need to move tighter or wider as the camera does, always fighting to get the mic as close as possible while making sure it never dips into the frame and enters the shot. If this happens during a take the DP, director or 1st AD will often call out ‘boom’.
While the act of operating the boom mic during filming is their primary responsibility, there are also some other tasks that boom swingers need to perform.
Dialogue is usually captured by two different types of microphones, the boom mic, as we mentioned, as well as lapel or lav microphones. These are small microphones which are strapped directly onto the actors with a clip or with an adhesive tape. They are usually positioned under clothing near the chest or throat area so that they will pick up intimate sound but be unseen by the camera.
These microphones are attached to a transmitter. This wirelessly transmits the recorded sound to a receiver that is then connected to a sound mixer where the audio feed is recorded. The boom operator is usually responsible for attaching this lav microphone to the necessary actors.
It’s best practice to always inform the actor and explain how you’ll be attaching the mic, as it can be a bit invasive. Experienced actors are aware of this, but it’s still professional to ask their permission before touching them or putting on the lav mic.
The boom operator also assists the sound recordist with any necessary technical builds or changes, such as switching out batteries or attaching a Lockit box to the camera for synchronising timecode.
AVERAGE DAY ON SET
After arriving on set the boom operator will track down a copy of the shooting schedule and sides. These list the scenes planned for the day and the pages from the script with dialogue for those scenes. They’ll read the sides to see what dialogue needs to be recorded and which actors need to be mic-ed up. They’ll replace the necessary batteries and make sure everything is charged up and ready to go.
The boom operator or the sound recordist may test that their audio feed is getting transmitted to VT and hand out a pair of headphones and receiver to the director for them to monitor the sound during takes.
The boom operator will mic up any actors in the scene that have dialogue and prep their boom setup. This may involve changing their gear, such as using a blimp, or a ‘deadcat’ as it’s called, to cut out wind noise if they are recording a scene outdoors.
They’ll find out the lens that the camera is shooting with, or take a peek at the monitor to see how wide the frame is and how close they can position the boom without getting in the shot.
Once ready to record a take, the 1st AD will call roll sound and the sound recordist will begin recording, then the cameras will roll. The boom swinger will then move the mic over so that it’s directed at the 2nd AC, who will announce the information on the clapper board and then give it a clap so that the editor has a point where they can sync the sound with the video. The boom operator will then quickly position the boom above the actors and be ready to begin recording dialogue.
They always wear headphones while recording which can help them to position the boom further away, closer or at a different angle to get the best possible sound.
When recording sound the default rule is to usually capture whatever audio is present on screen.
So, for wider shots where multiple actors are in a frame the boom swinger may alternate and move the boom closer to whoever is speaking, positioning the mic back and forth as the actors exchange dialogue. Then for close ups where only a single actor is on screen they will usually focus only on recording sound for that actor alone. When the camera switches to a close up of the next actor then their full sound will get recorded. By doing this they will then have clear dialogue for both actors which the editor can use at their discretion.
Sometimes if the boom was unable to capture a certain sound effect or a certain line of dialogue during a scene then they will pull the actor aside between setups and record what is known as wild sound - sound that isn’t recorded with any specific video footage but which may be synchronised or used later.
They may also need to record ‘room tone’, a quiet ambient recording of the space without any dialogue. This can be placed as a layer underneath the dialogue in the edit to make the cuts more natural and provide a background noise to the scene.
In this case the 1st AD will make an announcement to the set, the crew will awkwardly freeze so as not to make any noise and the mic will record about a minute of quiet, ambient sound.
TIPS
Since any footage where the boom dips into shot will be unusable, one of the most valuable skills for boom operators is knowing how wide a frame is. If you’re starting out, it’s useful to look at the framing on the monitor and find what is called an ‘edge of frame’. This could be a marker on the set which indicates what is in or out of the shot.
As boom operators become more experienced they’ll begin to learn focal length sizes and be able to place the boom without needing to look at a frame. For example, if they know how wide a 35mm lens is then they can imagine its field of view and be sure to stay out of it.
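That 'imagine its field of view' trick boils down to a bit of trigonometry. Here is a small sketch (my own illustration, using an approximate Super 35 sensor width) of the horizontal angle of view for a given focal length - the wider the angle, the harder it is to keep the boom out of frame.

```python
import math

def horizontal_fov_degrees(focal_mm, sensor_width_mm=24.9):
    """Horizontal angle of view for a lens on a sensor of the given width."""
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_mm)))

print(round(horizontal_fov_degrees(35), 1))  # ~39 degrees on Super 35
print(round(horizontal_fov_degrees(18), 1))  # ~69 degrees: much wider, much less room
```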
As space on a set can sometimes be limited, it is useful to find the best position to stand before shooting begins. When finding a position it’s important to be mindful of how the camera and actors will move and to identify any lights that may cast shadows or reflective surfaces that will pick up the mic in shot.
Always make sure to never position the boom between a light source and a character, as it will cast a shadow of the gear in the image. Adjusting the length of the boom so that it has enough reach, but isn’t overextended will also save space and mean that it stays out of the way of other crew members, gear or lights.
Boom mics are directional, meaning that they capture whatever sound they are pointed at most prominently. Any sound behind or off to the side of the mic will be recorded much softer. Therefore positioning the boom overhead with the mic facing down towards the speaker is most common.
In a dialogue scene with two people close together the operator may be able to turn the angle of the directional mic toward whoever is speaking at the time without needing to move the position of the actual pole.
If the frame has a lot of headroom it is also possible to boom sound from underneath the frame. However, when recording outside this may sometimes pick up excess aviation noise if any planes pass overhead, so should be avoided unless it’s necessary.
If you’ve ever operated a boom during a long scene you’ll know that holding it in an awkward position gets surprisingly heavy surprisingly quickly. The solution is to either spend more time at the gym or find positions that better mitigate the weight of the mic pole, such as resting it on a shoulder or on the head.
The Most Popular Cinema Lenses (Part 4): Panavision, Tokina, Atlas, Canon
In the fourth part of this series I’ll look at the Panavision G-Series anamorphics, Tokina Vistas, Atlas Orions and the Canon S16 8-64mm zoom lens.
INTRODUCTION
While many think that only the camera is responsible for the look that footage has, the glass that is put in front of the camera has just as great an influence over how a film looks.
In the fourth part of this series I’ll look at some popular lenses which are used in the film production industry and break down the ergonomics of each lens, the format they are compatible with, as well as their all important look, by using footage from movies shot with this glass.
Hopefully this will provide some insight into what kind of jobs and stories each lens is practically and aesthetically suited for. Let’s get started.
PANAVISION G-SERIES
In 2007, Panavision launched their G-Series lightweight anamorphic lenses, which cover a 35mm frame.
In a past episode we looked at another of Panavision’s anamorphic series, the Cs, which were launched in 1968 and are probably considered their flagship product from the past. The classic, vintage anamorphic look of the Cs is still highly sought after today, despite their relative scarcity and mish-mash ergonomic designs.
The Gs were released by Panavision for DPs that prefer a slightly more updated iteration of the Panavision anamorphic look with easily workable, modern ergonomics.
The older Cs came in different sizes, with different apertures, different close focus capabilities and different front diameters. Whereas the Gs came in more consistent sizes, with more standardised T stops and front diameters. This makes working with the Gs far easier and quicker for camera assistants. For example when changing between G-series lenses the focus and iris gears are positioned almost identical distances apart, meaning the focus motors don’t have to move.
The Gs can be shot wide open at T/2.6 and get the exact same exposure across different lenses, whereas the apertures of the Cs need to be individually tweaked between lens changes. Their standardised front diameters mean that clip-on matte boxes can be easily interchanged without swapping out the back, and their more standardised lengths and weights make balancing gimbals after changing lenses easier.
The Gs also have more subtle breathing, which means the image will shift less when the focus is racked.
The update of the Gs also carries over into their optical look. They have higher contrast, a greater degree of sharpness, better aberration control and glare resistance and, overall, resolve higher resolution images, while maintaining Panavision’s beautiful anamorphic bokeh and focus falloff.
DPs such as Matthias Koenigswieser enjoy the more consistent and modern anamorphic look of the Gs. On Christopher Robin he combined the C-series and the G-series. He used the more modern Gs when shooting on slightly less sharp 35mm film, and used the softer, more vintage Cs for footage that needed to be shot at a higher digital resolution. In this way they balanced out to provide a consistent look across mediums.
Fun fact, when I camera assisted Matthias on a TV commercial he also opted to use the Gs for a contemporary, sharper looking, anamorphic car shoot.
Overall, Panavision G-series are great for cinematographers who need a lightweight lens that is solidly constructed, quick and easy to work with and desire a slightly updated Panavision anamorphic look that is more optically consistent.
TOKINA VISTA
From an anamorphic lens that covers the 35mm format, to a large format spherical lens, let's take a look at the Tokina Vista primes. This set of lenses features a massive 46.7mm image circle. This means that they cover almost any cinema camera on the market, including full frame sensors and large format sensors like the Red Monstro 8K or the Alexa LF.
They have a fast aperture of T/1.5 across the entire range of focal lengths. This means that when the longer lenses are combined with large format cameras, the depth of field becomes razor thin.
They are very solidly constructed and come in consistent lengths across the range, with the iris and focus gears all being the same distance from the mount. The front diameter is standardised to 114mm.
There are lots of clearly marked distance measurements on the barrel of the lens, which makes focus marks more accurate. Although super robust, their full metal construction and the ample glass needed to cover large sensors mean the lenses are pretty hefty.
The lenses are super sharp and come with modern coatings that give them very few chromatic aberrations. This means their optical qualities are far less vintage and imperfect than other large format lenses such as the Arri DNAs. Their modern coatings also mean that the lenses don’t flare massively, but when hit with the right angle of light they will produce a blue-green rainbow flare.
Their look is super modern, ultra-crisp and sharp across the entire width of the frame. Even wide open at T/1.5 the sweet spot of the lens in focus is very crisp. They also have minimal distortion even at the widest 18mm focal length.
The Tokina Vistas are a great option for DPs that need an ergonomically designed, fast, super sharp, modern looking spherical lens that resolves high resolution images and covers large format cameras.
ATLAS ORION
The Orion series of anamorphic lenses from Atlas was first unveiled in 2017. After first starting with just a 65mm lens, the set has now been expanded to 7 focal lengths ranging from 25mm to 100mm.
Atlas was started as a small company with the goal to manufacture professional grade anamorphic cinema lenses at an affordable price point. This may not seem all that affordable at first glance, but when compared to purchasing or renting other high end anamorphic glass, the price is significantly reduced.
Since these front anamorphic lenses have been designed recently they feature solid, modern design with a robust housing, well spaced distance markings and a smooth focus gear. They aren’t the smallest, lightest or most compact of anamorphic lenses, particularly when compared to lenses such as Kowas, but they are solid.
All focal lengths, even the 25mm, feature an aperture of T/2. This means they are very fast for anamorphic lenses which typically aren’t as fast as their spherical counterparts. However, when shot wide open at T/2 they do lose some sharpness, with their sweet spot being closer to around T/4.
Another great feature is that the Orions have very good close focus capabilities. This makes it easy to shoot close ups in focus without the use of diopters.
When it comes to their look, I’d say they have subtle vintage characteristics, but overall create a more traditional anamorphic look with good levels of contrast and no crazy focus falloff or distortion.
So if you’re looking for a solid set of modern, fast anamorphic primes that cover a 35mm sensor, with some vintage characteristics and great close focus, all at an affordable price point, then the Orions may be the way to go.
CANON S16 8-64mm
So far we’ve looked at lenses that cover Super 35 and large format. Next let’s take a look at a smaller format lens, Canon’s Super 16 zoom, the 8-64mm.
Since this lens was designed for Super 16mm film it doesn’t cover a lot of today’s modern sensors, which are Super 35 size or larger. However, this lens is still widely used today as the zoom of choice for 16mm film work, on the Alexa Mini in its S16 mode, or on certain Micro Four Thirds digital cameras.
Its 8-64mm range is roughly equivalent to a 14.5-115mm lens in Super 35 terms. This means that this single lens has lots of flexibility and covers a whole range of conventional prime focal lengths. Despite this long zoom range it has good close focus at 22”. This meant that it was a popular workhorse in the 90s, especially for TV work or documentaries shot in 16mm.
For such a long zoom range it’s pretty compact and lightweight but it's built like a tank. The focus rotation is only about 180 degrees which makes it a good pairing for solo documentary operators. It has some distance markings on the barrel of the lens, with probably not as many distances as most focus pullers would like, but enough to get by.
With an aperture of T/2.4 the lens is fairly fast for a zoom. When shot wide open the image does tend to get a little softer and ‘dreamier’, like a diffusion filter has been added. But if it’s stopped down just a bit to around T/2.8 the lens sharpens up.
For a vintage zoom it's amazingly sharp, which you usually want when shooting 16mm film - a lower fidelity medium. However, thanks to its vintage lens coatings, it isn’t overly sharp or too clinical.
Its solid construction, long zoom range, fast aperture, great close focus and sharp but slightly vintage look make the Canon 8-64mm a great choice for DPs looking for a 16mm zoom.
Cinematography Style: Conrad Hall
In this episode I’ll look at what Conrad Hall had to say about his philosophy on photography and show some of the gear which he used in order to cultivate his photographic vision.
INTRODUCTION
If you were to Google ‘who are the best cinematographers of all time?’, it won’t take long to stumble upon the name Conrad Hall. Through ten Academy Award nominations and three wins, the strength of his career speaks for itself.
His photography is characterised by neutral colour palettes, inventive uses of hard light, reflections and character focused framing which all culminated into a style which he called ‘magic naturalism’. In this episode I’ll look at what Conrad Hall had to say about his philosophy on photography and show some of the gear which he used in order to cultivate his photographic vision.
BACKGROUND
Hall was born in 1926 in French Polynesia and in his mid teens began attending boarding school in California. After graduating he signed up for a degree in journalism at USC, however that didn’t last long.
“Boy am I lucky that I got a D+ in journalism and had to change my major.”
He switched to the cinema program and began learning the basics of filmmaking, a relatively new art form to study at the time.
To work on a camera crew in Hollywood back then you had to be a member of the International Photographers Guild, which left him without a job. To work around this, Hall and some of his classmates created their own independent production company and produced a film called Running Target, which Hall shot. This gained him membership to the guild. However, due to regulations he wasn’t allowed to be credited as cinematographer on the film, but rather as a visual consultant, even though he shot the entire film.
As a member of the guild he then worked his way up the ranks, from camera assistant, to camera operator until he eventually got a chance to photograph the feature film Morituri as the director of photography.
Hall’s career went on to span many decades, from the 1950s to the early 2000s, during which time he worked with a host of esteemed directors which included: Richard Brooks, Stuart Rosenberg, John Huston, Steven Zaillian and Sam Mendes.
PHILOSOPHY
When Hall was asked how he decided where to point the camera, he is reported to have said, “I point it at the story.”
To him, the story was always the starting point for determining his photographic decisions and the ultimate target he aimed for. Although the look of his photography changed between projects depending on the story and director he was working with, he carried over some philosophical concepts throughout his career.
“I’ve never been somebody to get a movie to look absolutely perfect… Mine are always sort of flawed somehow or other. And in a way I don’t mind that because it’s not about perfection it’s about the overall feeling of the thing.”
His light wasn’t always perfectly soft or perfectly shaped, and didn’t always have a perfect contrast ratio between light and shadow; his framing wasn't always perfectly symmetrical. By not always aiming for a perfectly beautiful image, much of his work carries a feeling of naturalism. He often incorporated interesting flourishes as well, such as reflections, interesting hard shadows and atmospheric texture like smoke or rain.
He called this magic naturalism: shooting things as they are, while at the same time incorporating stylistic touches that heightened the atmosphere of the story.
“I’m one of those guys who doesn’t do a lot of augmenting. But who knows how to take the accident and turn it into something wonderful, magical. I look for that. I thrive on it. I feed on it. I don’t invent stuff. It invents itself and then I notice it and use it dramatically.”
Hall was a master of observing unintentional magical moments, then using the photographic tools he had to emphasise them. Whether that was zooming into the reflection of a chain gang trapped in the sunglasses of a prison guard. Throwing hard light against a tree to create ominously moving shadows made by the wind. Or positioning a character who was a murderer so that the accidental reflection made by the rain on the window made it look like he was crying.
When combined with an otherwise largely naturalistic look, these stylised little moments of ‘happy accidents’ elevated the story in a magical way.
Due to the length of his career, he started photographing films in black and white but of course moved on to colour as it replaced black and white as the dominant medium. A thread he carried through most of his colour films was an earthy, neutral colour palette. Many of his films used lots of browns, greens, whites and greys, with strong, pure black shadows. The colour was rarely strongly saturated or vivid across the frame.
This meant that when a strong colour was used, like the famous use of red in American Beauty for example, it really stood out against the rest of the film’s neutral tones.
GEAR
“It’s as complex a language as music. A Piano’s got 88 keys and you can use them in any complex way you want to. We got the sun and light. Is there anything more complex than light?”
As we mentioned, being open to ‘happy accidents’ is an important part of his cinematic philosophy, especially when it comes to light. For that reason he liked coming up with the majority of his lighting on the day, although sometimes for large spaces some basic pre-lighting work was necessary.
“I don’t like to figure things out ahead of time before the actors do”
For example, on Road To Perdition, many of the large interiors were sets constructed in a studio. During prep he got a rigging crew of 10 people to install a collection of greenbeds and scaffolding overhead in the studio over 8 weeks. To this scaffolding his team rigged 30 10K fresnels and 60 5Ks, all wired to dimmers and used to light the backgrounds outside the windows - basically acting as ambient sunlight.
It also meant that all his lights were out of the sets and wouldn’t get in the way of the actor’s blocking or framing. The sets were also electrified so that practical light sources, such as lamps, could be plugged in and used. Much of his lighting was done with tungsten balanced lights.
With these fixtures rigged in place he could then come in on the day and position the lights as he desired. A lot of the light was hard and undiffused, something he often did in many of his films. This resulted in strong lines of shadow.
He regularly created shape by breaking up the light with interesting textures or used parts of the set to shape the shadows which were cast.
When lighting interiors he would also use what he called ‘room tone’, where he bounced smaller fixtures like a 1K into the ceiling to provide a soft ambient base light to a room. This filled in the contrast from the hard light a bit. Because the light is soft and bounced it’s not very directional, which means it’s difficult to tell exactly where it comes from. In this way it provides an overall lift to the space in a natural way. Once this base ‘room tone’ was in place he could then work on lighting the characters, often hitting them from the side or behind with a hard source.
He used this same hard backlight to bring out textural components like smoke or rain.
When it came to selecting lenses he liked using a wide collection of prime focal lengths, from 27mm to 150mm. Unlike the recent trend of using wide angle lenses for close ups, Hall took a more traditional approach. He used wide angle lenses for wide shots and longer focal lengths for close ups that threw the background out of focus.
However he did sometimes use long lenses, like Panavision’s 50-500mm zoom, creatively for wide shots to increase the feeling of heat waves with distortion and represent characters as more wispy yet menacing.
He liked using a shallow depth of field, usually setting the aperture of his lenses between T/1.9 and T/2.5. This gave the photography an emotional dimension and clearly showed the focus (literally) of the shot.
Hall mainly used Panavision cameras and lenses. He shot on 35mm film before the advent of digital cinematography with cameras such as the Panavision Platinum. He especially liked using Panavision Primo lenses for their look, reliability and wide range of focal lengths.
To achieve the colour he liked he used fine grain tungsten Kodak film stocks for interiors as well as exterior daylight scenes. For example, he used the more modern Kodak Vision 200T for his later work, and Eastman EXR 100T for his older work. A lot of the classic 60s and 70s-feeling muted, neutral colour came from his use of Eastman’s famous 100T 5254 colour negative film.
For some films, such as Butch Cassidy, he wanted the colour to be even more muted and to pull out the cliche blue often used in ‘western skies’. To do this he radically overexposed the film, then got the film laboratory to compensate for the overexposure in the print. This further washed out the colour and turned the blue a softer, lighter shade.
CONCLUSION
“Those are the kind of films that I like to get a hold of and don’t often get a chance to do. Stuff that, like, goes on forever about some basic and important human condition that is bigger than all of us and will go on forever no matter what era it’s set in.”
Conrad Hall’s selection of the films he photographed is a reflection of his style as a whole: telling simple, natural stories that represent something bigger through his injection of magical moments.
One thing that he always tried to do was to tell stories so well that if the sound was turned off the audience would still understand the story just based on the images. His love for the medium, his powers of observation and ability to translate stories using whatever magic naturally occurs on set is what has made him one of the greatest of all time.
4 Reasons Movies Shouldn't Be Watched On Laptops
Let's remind ourselves why going to the cinema is still superior to streaming movies from home by going over 4 reasons we should still make the effort to get out of the house and go to the cinema.
INTRODUCTION
According to a recent study, only 14% of adults stated they preferred viewing a new movie in the cinema, while 36% preferred streaming it from a device at home. While this trend may have been expedited by the pandemic, I think it’s a trend that’s on the rise regardless. Fewer and fewer people are going to cinemas anymore and I think that’s kind of sad.
And I mean, look, I get it. I’m guilty of it too. Convenience outweighs viewing experience. 80% of my cinematic diet is probably consumed at home from a TV or a laptop rather than on the big screen.
Before this video becomes too much of a lamentation about the death of cinema, I’d like to flip it to remind us why going to the cinema is still the superior experience by going over 4 reasons we should still make the effort to get out of the house and go to the movies.
CONSISTENCY
One of the biggest issues of watching movies from home comes from the inconsistency of the image.
Filmmakers spend years developing their craft, putting blood, sweat and tears into lighting, testing for the perfect lenses and tweaking nuances of colour on a calibrated monitor in the grade. Only to have 60%, or so, of the audience watch the final product from a smartphone with a cracked screen in a bright room full of reflections.
Cinema is a medium which is all about refining and tweaking the details in order to create a lasting art work. A lot of this is undone by watching the final product in a sub optimal viewing environment.
I mean yes, you get the gist of the visuals, but it’s kind of like listening to an album that was carefully, meticulously written, recorded, mixed and mastered - on a noisy airplane with the cheapest pair of headphones sold in the 7-Eleven. You can kind of make out most of the lyrics and melody but all the sonic nuance that the artists spent their time and energy creating is lost.
Most movie complexes use high end digital cinema projectors that are DCI compliant, tested to output a high standard of quality images, in a dark, light free environment. This means that the viewing experience at different cinemas around the world will be almost identical. Colour and contrast will be consistent and resolution is standardised to either 2K or 4K.
This differs from viewing a movie at home. Different screens made by different manufacturers have different resolutions, display colour and contrast inconsistently at varying levels of brightness and don’t have to conform to any compliance standards.
Using different media players may also affect the colour and luminance information. For example a pet peeve of mine is that Apple’s QuickTime Player shifts the gamma curve and plays video files with different contrast from the original file.
Also, unless you are viewing a movie at night with all the lights in the house off, there will be excess ambient light that may cause reflections or dilute the brightness of the image.
THE AUDIENCE EFFECT
One of the most noticeable impacts that viewing a movie at home has, is that it shifts the experience from being a communal one to being an individual one.
The idea of cinema was born out of creating a medium which could be shared by an audience. I’d say that communal viewing heightens the effect that a film has on us. Whatever emotion the filmmakers impart to the audience is heightened when we share it as a group. For horror films you can hear the audience gasp, for comedies laughter rings out, and for compelling dramas you can almost feel a communal silent focus take hold.
I think part of this reaction comes from it being an uninterrupted viewing experience. Streaming sites are set up in a way to ease and encourage the process of watching films in little segments. Watch for 5 minutes. Pause and make something to eat. Watch for another 10 minutes while you simultaneously browse your phone. Skip forward past a scene you get bored of. Then come back the next day and find the movie paused right where you left off so that you can begin this fragmented viewing process again.
Filmmakers work extremely hard to design each film as a continuous, cohesive experience that suspends your disbelief and envelops you in the world of the story. Breaking down the medium by stopping and starting it destroys a movie’s ability to take hold of you.
The cinema is so important because it forces you to view a film as it was intended to be viewed, as a single, uninterrupted experience.
Sometimes filmmakers want to test your patience and use more drawn out scenes to support their point of view of the story. Sitting through a movie from beginning to end, even if you don’t care for the film, will at least give you a complete idea of what the filmmaker was intending to do.
Plus, in today’s world where everything is so sped up and our attention spans have become shorter than ever before, I think turning off your phone and watching a complete film from start to finish is an important mental exercise we should all regularly perform.
SOUND
The sound that you hear in a cinema is far more immersive than that from a laptop, phone or TV. This is because of surround sound. Consumer display products, like a laptop, typically have a single speaker built into the device that emanates sound from one source or direction.
Cinemas have surround sound which uses multiple speakers in multiple positions to provide sound that is more immersive and which surrounds you 360 degrees.
Like high end cinema projectors, the sound is also standardised. The global standard is Dolby Digital, which provides an audio mix with multiple channels, such as Dolby Digital 5.1. This provides 3 front channels sent to separate speakers - a centre, left and right - that deliver clean dialogue and placement of on screen sounds. Two surround channels are typically placed to the sides and behind the audience to provide a fuller, 360 degree listening experience. A low frequency channel that provides bass effects, with about a tenth of the bandwidth of the other channels, makes up the final .1.
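To visualise that layout, here’s a small illustrative sketch of a 5.1 channel map as a plain data structure - the naming and descriptions are my own shorthand, not any official Dolby specification or API:

```python
# Illustrative 5.1 channel layout as described above (informal shorthand only).
dolby_5_1 = {
    "L":   "front left - on-screen sounds and music",
    "C":   "front centre - primarily dialogue",
    "R":   "front right - on-screen sounds and music",
    "Ls":  "left surround - ambience beside/behind the audience",
    "Rs":  "right surround - ambience beside/behind the audience",
    "LFE": "low-frequency effects (the '.1') - bass, roughly a tenth the bandwidth of a full channel",
}

full_range = [ch for ch in dolby_5_1 if ch != "LFE"]
print(f"{len(full_range)} full-range channels + 1 LFE channel = 5.1")
```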
The cinema is therefore set up to provide a more captivating sonic experience that places you in the centre of the action and better draws you into the world of the movie.
THE ALTAR OF CINEMA
The final reason to go to the cinema is less of a practical one and more of a conceptual one, but is arguably the most important. This may seem a bit over the top, and hopefully no one takes offence, but I think a comparison can be made between cinema and religion in the way that they are presented.
In most religions it is of course possible to practice from home without interacting with others through meditation or prayer. However all major religions have physical spaces which bring communities together: temples, churches, mosques. Often these spaces are large, impressively built and feature significant iconography.
I think as humans we are drawn to spaces, and get some kind of greater, more significant experience from coming together as a collective in a space that is designed and devoted to that experience.
Standing in the queue for popcorn, buying tickets, sitting amongst a group of people, watching the trailers - it’s almost ritualistic and builds up a level of excitement and reverence for the film we’re about to watch. An image which is projected onto a massive screen has to be taken more seriously than one on a smartphone.
The issue with having a continuous never ending supply of content to stream at home on a laptop is that it diminishes the importance of the medium. It makes movies more mundane and everyday. Taking the time to visit the cinema builds anticipation and makes it more of an experience and an event.
CONCLUSION
So much effort goes into making movies as works of art. I think they should be appreciated as such and not treated with the same casual weight as, say, this YouTube video. They are different mediums. The smaller the screen becomes, the more watching a film turns into an individual experience rather than the group experience it was designed to be.
Going to the movies may be less practical than just bingeing the latest releases on a laptop, but the experience of going to a cinema elevates movies into the unique medium that they are.
How Denis Villeneuve Shoots A Film At 3 Budget Levels
Great directors are capable of creating and maintaining very deliberate cinematic tones. This is true of Denis Villeneuve.
INTRODUCTION
Great directors are capable of creating and maintaining very deliberate cinematic tones. This is true of Denis Villeneuve. His films are thrilling, dramatic and at times epic in both tone and scope, yet also provoke subtle political, ethical and philosophical questions that provide substance to action.
His career has wound a path from lower budget productions all the way to directing some of the largest blockbusters in the world.
In this video I’ll look at three of his films at three increasing budget levels, the low budget August 32nd on Earth, the medium budget Sicario and the high budget Dune to unveil the formation of his style and identity as a director.
AUGUST 32ND ON EARTH
The Canadian filmmaker’s interest in movies was piqued as a child. He began making short films when he was in high school, where he also developed an early love of science fiction. After leaving school he began studying science but later changed his focus to film when he moved to the University of Quebec.
After winning some awards he began working with the National Film Board of Canada where he established a working relationship with producer Roger Frappier who developed films by emerging directors.
The NFB funded his first 30 minute short film which showed a lot of promise. Frappier then produced Cosmos, a collection of six different shorts made by six young directors, which included Villeneuve as well as his future collaborator André Turpin. It was a critical success.
Following this Villeneuve wrote a screenplay with a contained story about a woman who is thrust into an existential crisis after surviving a car accident. Frappier came on board to produce the film under his production company Max Films.
André Turpin was brought on board to serve as the cinematographer on the film. This collaboration established a trait which would continue throughout his later movies - an openness to letting DPs bring their own photographic sensibilities to the project, while at the same time always firmly maintaining his own strong perspective on the script.
To August 32nd On Earth, Turpin brought his preference for strong, saturated 35mm Kodak colour, very soft side light, character focused framing and use of sharp lenses with a shallow depth of field. This was complemented by Villeneuve’s preferences for using subjective framing with lots of close ups and motivated, smooth camera moves from a tripod, dolly or Steadicam.
Although the film is a mature, cinematically grounded and more realistic production, it also has a dreamlike tone with moments of experimentation, some of which seems to have been inspired by his love of French New Wave Films, such as Breathless.
From the philosophical walk and talks, to the numerous jump cuts and even the main character's short haircut - Breathless seems to be a clear influence. And if you think maybe these are just coincidences, there’s even a shot with a poster of Seberg who starred in Breathless. While the influence of French New Wave filmmaking is strong, it’s not overpowering.
Villeneuve took parts of the style that worked effectively for a low budget film, such as a subjective focus on very few characters, and parts that suited his story, such as the experimental editing to visualise the character's post accident haze, and combined them with his own sensibilities for realism, mature drama, cinematic control, and isolated desert locations (which cropped up in much of his later work).
August 32nd established his strong voice as a director, his ability to maintain a consistent cinematic tone, openness to collaboration and his stylistic sensibilities.
He made his first low budget film by writing a simple story with few moving parts, using experimental cutting to avoid showing expensive set pieces like the car accident, and instead devoted his budget to creating a deliberate, cinematic camera language.
SICARIO ($30 Million)
August 32nd got into the Cannes Film Festival and premiered in the Un Certain Regard section, and he followed it with a string of medium budget Canadian films.
In 2013 it was announced that Villeneuve would direct Sicario, an action thriller on the Mexican border. He was drawn to the philosophical concept of the border, an imaginary line which divides two extremes, as well as examining the idea of western influence and how it is exerted by first world nations.
At a medium-high $30 million budget it was a step up from his prior Canadian films in the $6 Million range. However, the script involved many large, expensive set pieces and complex action sequences which meant the budget, relative to what needed to be shot, wasn’t huge. After writing or co-writing the screenplays for his early projects, Sicario was penned by Taylor Sheridan.
“The research I did after, as I was prepping the movie, just confirmed what was written in the script… I wanted to embrace Mexico. To see scenes from the victim’s point of view…try to create authenticity in front of the camera and not fall into cliches.”
To capture an authentic, naturalistic vision he turned to famed cinematographer Roger Deakins who he’d worked with before on Prisoners.
They storyboarded many of the sequences as a team during the process of location scouting in pre-production. This nailed down the photographic style they wanted and also allowed them to work quickly and effectively when shooting complex action sequences that needed to be pieced together.
This helped decrease shooting time in the tight schedule. Villeneuve’s clear vision for the shots he needed to get also saved time. For example, after shooting a master of a confrontation scene, Deakins asked if he should move the camera closer to get singles of each character. Villeneuve declined, knowing that he would use the master shot as a single long take in the edit…which he did. Not shooting extraneous close ups saved the production around three hours.
In his trademark style, Deakins shot many of the scenes from an Aerocrane jib arm with a Power Pod Classic remote head, a combination he’s used for over 20 years.
This allows him to quickly and easily move the camera on any axis, making it useful not only for smooth moves, but also for quickly repositioning the frame, allowing for a more organic working style and time saving setup.
“I mean the challenge of the photography of any film is sustaining the look and the atmosphere and not breaking out of that.”
One challenge when shooting out in Villeneuve’s favourite location, the desert, was controlling the natural light. Deakins did this by working with assistant director Don Sparks to break down and schedule each exterior shot for a specific time when the angle of the sunlight was right. If the sun went away or behind clouds they had a separate list of shots they could get, such as car interiors or close ups, which were easier to light.
Another way of exercising control of the lighting and the location was shooting certain interiors in a studio. To free up space for camera moves and to keep the light as motivated and as natural as possible he set up all his lights outside the set - 6 T12 fresnels pushing hard, sourcey light through windows and 65 2K space lights to provide ambience outside those windows.
He recorded on ArriRaw with the Alexa XT using Master Prime lenses - usually the 32, 35 and 40mm, occasionally pulling out the 27mm for wides.
“The overall approach to the film was this personal perspective. We’re either with Emily, or with Benicio, you know. So we took all that to say well we’ll do this whole night sequence from the perspective of the night vision system.”
To do this a special adapter was used on the Alexa to increase its sensitivity to light. He then lit the scene with a low power single source bounced from high up to mimic realistic moonlight and keep the audience immersed.
The much larger scope Sicario was therefore pulled off with a $30 million budget by: carefully planning out the complex action sequences in pre production to save time and money, casting famous leads that drew audiences to the cinema, shooting some interiors in a studio for increased control and exteriors on location to wrap the audience up in a feeling of authenticity and controlling the score, sound design and pacing in the edit to provide a consistently thrilling tone.
DUNE ($165 Million)
After Sicario’s critical and commercial success Villeneuve turned to a project he’d dreamt about making since he was a teenager - Dune - based on the sci fi novel by Frank Herbert.
“I felt in love spontaneously with it…There’s something about the journey of the main character…This feeling of isolation. The way he was struggling with the burden of his heritage. Family heritage, genetic heritage, political heritage.”
With this thematic backing Villeneuve took on this sci fi story of epic proportions with a large studio budget of around $165 million. Since a large part of the undertaking was based on creating his imaginings of the world of Dune, he teamed up with his regular production designer Patrice Vermette and experienced cinematographer Greig Fraser. Together they worked with the extensive conceptual art and storyboards to bring the story to life. Since the way in which the sets were constructed would have an impact on the lighting, Fraser had many pre-production meetings with Vermette about light.
“The main character in the movie for me is nature. I wanted the movie to look as naturalistic and as real as possible. To do so we used most of the time natural light.”
On Arrakis buildings are constructed from rock with few openings to save its occupants from the oppressive heat. So instead of using direct light, the interior lighting is soft and bounced. To create this Fraser and his gaffer rigged Chroma-Q Studio Force II LED light strips to simulate the ambient softness of bounced sunlight. For close ups where they needed more punch he used LED Digital Sputnik DS6 fixtures.
To create depth Fraser constantly broke up spaces by using areas of light and shadow in different planes of the image. To bring out the incredible heat and harshness on the desert planet, Fraser used hard natural light from the sun which he cut into sections of sharp shadow in interesting ways.
Generally in cinematography, the larger a space is the more expensive and labour-intensive it is to light. This sequence was no exception.
In a massive undertaking, Fraser’s grip and rigging team put up gigantic sections of fabric gobo over the set’s ceiling to creatively block the sunlight and create a sense of ominous depth to the space. They then had a precise window to shoot the scene between 10:45 and 11:10am where the angle of the sun would be perfect.
They photographed Dune in large format with the Alexa LF and Mini LF, using spherical Panavision H-series lenses to render the taller 1.43:1 IMAX sequences and Panavision Ultra Vista 1.65x anamorphic lenses for the 2.39:1 shots.
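A quick bit of arithmetic shows how those two aspect ratios fall out of the same sensor. The sensor dimensions below are my own approximate figures for the Alexa LF open gate, not values stated here, and are used purely for illustration:

```python
# Rough aspect-ratio arithmetic for the two Dune formats described above.
# Sensor dimensions are assumed (approx. Alexa LF open gate), for illustration only.
sensor_w_mm, sensor_h_mm = 36.70, 25.54

spherical_ratio = sensor_w_mm / sensor_h_mm    # H-series, no squeeze
anamorphic_ratio = spherical_ratio * 1.65      # Ultra Vista 1.65x desqueeze

print(f"Spherical:  {spherical_ratio:.2f}:1 (close to the 1.43:1 IMAX frame)")
print(f"Anamorphic: {anamorphic_ratio:.2f}:1 (close to the 2.39:1 widescreen frame)")
```

In other words, the spherical lenses frame something close to the tall IMAX ratio natively, while the 1.65x squeeze of the Ultra Vistas stretches that same sensor area out to roughly the 2.39:1 widescreen frame.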
“I wanted the sky to be a vivid white. A very harsh sky. To bring kind of a violence to the desert - a harshness to it.”
To do this Fraser got his colourist Dave Cole to create a LUT for the camera in pre-production that pulled out the blue in the image and rolled off the overexposure.
The final finishing of the movie in the grade involved an interesting process. Fraser felt the look of the film should be more on the digital side, with the slightest hint of film.
To do this they took the graded digital files and did a film-out via laser recording, exposing the digital image onto Kodak 5254 film. This film was then scanned and converted back to digital files. The result was a final file with just a tiny hint of film grain and subtly organic film artefacts.
When it came to sound Villeneuve brought composer Hans Zimmer into the room with the sound design team, so that the two were married together to create the ultimate immersive experience.
Villeneuve successfully grounded Dune’s fantastical world with his trademark realism and used the massive budget to pull off a long shoot with a big crew, enormous technical setups and set construction, access to whatever gear they needed and extensive VFX work in post-production.
CONCLUSION
Villeneuve’s films are strung together by thrilling subject matter with political and philosophical themes, told in a grounded, realist visual style…and, well, the desert. He’s drawn to scripts that both immerse the audience in a riveting world and pose subtle thematic questions.
Throughout his career he has worked in a collaborative way with different in demand DPs who each imparted touches of their own style on the stories. However, his films are always very much his own and supported by his vision.
Villeneuve’s ability to control the tone of his films using every filmmaking element, from the script to the camera work, the edit and the music, is what has elevated his work to its critical and commercial heights.
How Movies Are Shot On Film In The Digital Era
In this video I thought I’d do a bit of a deep dive into why some productions still choose to shoot on film over using digital cameras and outline the whole process of how film is shot, from pre-production and production all the way to its post-production workflow.
INTRODUCTION
Although there was a time when many thought that shooting on motion picture film stock would quickly die out after the launch of high quality digital cinema cameras like the Arri Alexa in 2010, film still persists. In fact in recent years it's seen a bit of a resurgence.
In this video I thought I’d do a bit of a deep dive into the topic. So sit tight while I go over why some productions still choose to shoot on film over using digital cameras and outline the whole process of how film is shot, from pre-production and production all the way to its post-production workflow.
WHY SHOOT ON FILM?
Before going over how film is shot I think it’s important to understand why it’s shot.
On the surface digital has many apparent advantages. It’s often cheaper. It’s possible to roll for much longer. There’s less room for exposure or development errors. You can view the rushes immediately. The list goes on.
When it comes to listing the pros of shooting on film it usually comes down to two factors: the look and the way of working. My favourite cinematographer, Sayombhu Mukdeeprom, sums it up simply: “It’s a better practical experience and aesthetic choice.”
Nowadays it is possible to recreate most of the colour and texture of film with digital footage in the colour grade, so that it’s perhaps a 95% match (or whatever number you want to use). However, I’ve yet to see the highlights and natural sharpness of a digital image manipulated in a way that is 100% indistinguishable from film, particularly 16mm. And if you have the budget to shoot on film, and that’s the look that you are after, why shoot digitally then spend a load of effort in post trying to achieve a look and texture that film gives you out of the box?
Having spent time on both film and, of course, digital sets, I can attest that there is a marked difference in the vibe. Because you’re shooting on an expensive and limited commodity, everyone on set is far more focused when the film starts rolling through the camera.
Also the build up to shooting on film is more focused. Shots are carefully planned, movements and performances rehearsed and only a limited amount of takes are shot. This contrasts with the so-called ‘spray and pray’ method that sometimes happens when shooting digitally.
So for filmmakers that value both the aesthetic and more disciplined on set manner of working that film provides - shooting in 16mm or 35mm remains a viable choice.
PRE-PRODUCTION
Now that you’ve done the maths on the viability of the costs of motion picture film and chosen it as your working medium, how is it practically shot?
It all starts in pre-production.
Before arriving on set the director of photography will either conduct camera tests with various film stocks, or use their prior shooting experience to select a stock or a few stocks that are right for the project. Today this means choosing from Kodak’s range - they are the only remaining manufacturer of motion picture film stock in the world. The cinematographer will base this decision on three factors: the ASA, or sensitivity of the stock, the colour balance (daylight or tungsten) and the look.
They’ll look at how each stock captures colour, as each is subtly different, and at the amount of grain and texture it has. Stocks with a lower ASA, like 50D, will have very fine grain, while higher ASA stocks, like 500T, will have more noticeable texture. Daylight stocks, rated around 5500K, have colour that is balanced to look normal in daylight. Tungsten stocks, around 3200K, have colour that is balanced to look normal under artificial tungsten light.
It is possible to shoot tungsten stocks in daylight and either add a warm 85 filter to correct the colour temperature, or shoot without a filter and correct the colour balance in the grade in post production.
Some cinematographers may choose multiple stocks, for example 250D for exteriors and 500T for interiors and night scenes, while others may choose to photograph an entire project with a single stock. It’s on them to estimate how many rolls of film stock will be needed, which the production team will then go about sourcing.
Short form projects like commercials will usually order all the film upfront, while longer feature projects will often keep ordering new film as they go. Often you can return excess film stock to the supplier as long as it hasn’t been opened and loaded into a magazine. However it’s still best practice to estimate the amount of film needed as accurately as possible. You don’t want to order too much stock and lose money because it can’t be returned, but you also never want to run out of stock and be unable to shoot. It’s the job of the camera team to determine how much stock needs to be ordered and pass that information on to production, who will order it.
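As a rough sketch of how that estimate might be made, here’s a back-of-the-envelope calculation. The runtime, shooting ratio and roll length are made-up example values, and the 90 feet per minute figure assumes 35mm 4-perf running at 24fps:

```python
import math

# Back-of-the-envelope film stock estimate. All input values below are
# illustrative assumptions, not figures from any real production.
FEET_PER_MIN_35MM = 90          # 35mm 4-perf at 24fps (16 frames per foot)

def rolls_needed(runtime_min, shooting_ratio, roll_length_ft=1000):
    """Estimate how many rolls of 35mm to order for a given finished runtime."""
    footage_min = runtime_min * shooting_ratio        # total minutes to be exposed
    footage_ft = footage_min * FEET_PER_MIN_35MM
    return math.ceil(footage_ft / roll_length_ft)

# e.g. a 100-minute feature shot at a fairly disciplined 8:1 shooting ratio
print(rolls_needed(runtime_min=100, shooting_ratio=8))   # -> 72 rolls of 1000ft
```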
PRODUCTION
With film stock in hand, or in the back of the camera truck, it’s now time to load it and start shooting. This is either done by a dedicated camera loader, especially when dealing with multiple cameras, or done by the 2nd assistant camera.
Since film captures an image by being exposed to light for a fraction of a second, it’s of the utmost importance that the raw stock is never exposed to any light. If a film can pops open for even a second, the film inside will no longer be usable. That’s a good chunk of money down the drain.
The 2nd uses a light-tight film changing tent and loads the stock from the film can into the camera magazine completely in the dark. Once the film is in the magazine and completely sealed, the assistant then labels the magazine using tape.
Red tape for tungsten film or blue tape for daylight film. With a Sharpie they’ll write down information like the roll number, what stock it is, the code that identifies the stock, how many feet of film are usable and any necessary developing instructions. The 2nd then hands the mag to the focus puller, who laces it onto the camera, ready to shoot.
Before rolling, DPs meter how much light there is with a light meter and set their exposure.
Nowadays they often carry their own digital stills camera and double check their exposure with it. For example, if shooting 500 ASA film with a 180 degree shutter they set the digital camera ISO to 500, the shutter to 1/50 and manipulate their aperture until they find an exposure they are happy with.
The aperture of the film camera is then set and any necessary ND filters added or subtracted to cut down or increase the amount of light that enters the camera.
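The maths behind that stills-camera trick is straightforward: a rotary shutter’s exposure time is 1 / (frame rate x 360 / shutter angle), which at 24fps and 180 degrees works out to roughly 1/48 of a second - hence the nearest stills setting of 1/50. A minimal sketch:

```python
def shutter_speed(fps=24, shutter_angle=180):
    """Equivalent exposure time for a rotary film shutter."""
    return 1 / (fps * 360 / shutter_angle)

t = shutter_speed(24, 180)
print(f"24fps @ 180 degrees = 1/{round(1 / t)}s")   # 1/48s, close to a stills camera's 1/50

# So to preview 500 ASA film on a digital stills camera (as described above):
preview = {"ISO": 500, "shutter": "1/50", "aperture": "adjust until the exposure looks right"}
print(preview)
```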
After each scene or shot is completed the assistant director will announce ‘check the gate’. The focus puller does this by taking off the lens and examining the film gate to ensure it is clean. Any dirt or hairs on the gate means the shot will be ruined. If the gate is clean the 1st AC announces ‘good gate’ and the production moves on to the next shot. It’s the job of the 2nd AC to consistently check the film counter to know when the magazine will run out.
Once all the film in a mag has been shot it is removed and carefully unloaded in the tent by the 2nd. They put it back in its can, seal it securely with tape and place the tape label from the magazine onto the can.
At the end of the day they will make a camera report, stating all the rolls that were shot with all the necessary information. From the 3 copies of the report 1 copy goes with the raw footage to the film lab to be processed, 1 goes to the production team for their records and 1 backup copy is kept by the 2nd.
POST-PRODUCTION
Once the film arrives at a lab, such as Cinelab in the UK or Fotokem in the US, the first thing that needs to happen is to develop it. The film is passed through a combination of chemicals. This sets the image on the film as a negative. Once developed, the film can now be handled in light without concern.
In order to edit the footage it needs to be converted to a digital format so that it can be worked with in the same way as files from a digital camera. To do this the film is either scanned or goes through a telecine.
For a telecine, as the film passes through a machine it is captured and recorded as a video file in real time - usually in HD. A scan is slower, more expensive and records much higher fidelity video files. The most common modern industry scanner is the Scanity HDR.
Each roll of film is put onto the spools of the scanner and motors run the film past a gate. At the gate each individual frame of film which was shot is scanned at either 2K or 4K resolution and saved as a digital DPX file. It is capable of scanning up to 15 individual frames every second.
These DPX files are uncompressed and lossless with very high dynamic range. This means they are similar to the RAW files captured by some high end cinema cameras, such as ArriRaw or Redcode RAW, and retain a huge amount of colour information.
Due to the high quality of the files they are fairly large. About 24 minutes of footage can be stored on 1 terabyte.
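For a rough sense of where figures like that come from, here’s a simple storage estimate for uncompressed 10-bit DPX frames. The scan dimensions and the decimal-terabyte assumption are mine, and real files carry extra header and padding overhead, so the ~24 minute figure quoted above will sit somewhere within this range depending on the exact scan size:

```python
# Rough storage estimate for uncompressed 10-bit RGB DPX scans.
# Frame sizes are idealised (no headers/padding); treat the results as ballpark only.
BYTES_PER_PIXEL = 4          # 10-bit RGB packed into 32-bit words
FPS = 24
TERABYTE = 1e12              # decimal terabyte, as drive manufacturers count it

def minutes_per_terabyte(width, height):
    frame_bytes = width * height * BYTES_PER_PIXEL
    seconds = TERABYTE / (frame_bytes * FPS)
    return seconds / 60

for label, (w, h) in {"2K full aperture": (2048, 1556),
                      "4K full aperture": (4096, 3112)}.items():
    print(f"{label}: ~{minutes_per_terabyte(w, h):.0f} min of footage per TB")
# ~54 min per TB at 2K, ~14 min per TB at 4K
```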
These files are then worked with in a digital post production workflow similar to how digital RAW files would be worked with. Once edited, those lossless files will be graded by a colourist, who will find the desired levels of saturation and contrast and correct any colour balances that are off.
Finally, the finished, graded footage along with the final sound mix will be converted into a DCP, a digital cinema package, basically a hard drive, which is used to digitally project the final film in cinemas.
Occasionally a film-out is done where the final DPX files are converted back to a film print, which is projected in cinemas the old school way - with light.
Why High Resolution Isn't Always A Good Thing
Let’s talk a bit about what resolution actually is and why I think high resolution isn’t always a good thing.
INTRODUCTION
What is it with this recent obsession with high resolution images? From gaming, to smartphone cameras, to what we talk about on this channel, films and cinematography. Why is the highest peak of photography associated with high resolution?
If you’ve ever worked professionally with cameras, the first thing that people like to ask is: does it shoot 4K?
Maybe part of this is based on our continuous pursuit of technological advancement. We tend to think that newer, bigger, sharper, faster, is always better. Well I think this isn’t always true. Particularly when it comes to art. So let’s talk a bit about what resolution actually is and why I think high resolution isn’t always a good thing.
WHAT IS RESOLUTION?
Some quick background on resolution. It refers to the amount of detail that an image holds. This can be measured in different ways but in the world of video and digital cinematography it is measured in pixels - tiny elements which record light.
Each pixel records a measurement of light and converts that data to a colour. I like to think of pixels like bricks in a building. With each brick painted in a different colour.
When you have a small wall made up of very few bricks, the image will appear more blocky or low resolution, whereas if you have a massive skyscraper with a ton of bricks, an image will appear clearer with greater detail.
If you set this YouTube video to 240p it will be low res and blocky. If you set it to 1080p it will be higher res with greater detail. Most digital cinema cameras use Bayer sensors to capture light with a pattern of red, green and blue pixels. Millions of these red, green and blue pixels are combined to create a representation of reality.
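To put some numbers on the brick analogy, here’s a quick pixel-count comparison. The 426x240 width for ‘240p’ is a nominal 16:9 figure I’ve assumed; the rest are standard video resolutions:

```python
# Pixel counts for a few common video resolutions (width x height).
resolutions = {
    "240p":   (426, 240),
    "1080p":  (1920, 1080),
    "4K UHD": (3840, 2160),
    "8K UHD": (7680, 4320),
}

for name, (w, h) in resolutions.items():
    print(f"{name:>7}: {w * h:>10,} pixels (~{w * h / 1e6:.1f} megapixels)")
# 240p is ~0.1 MP while 8K is ~33.2 MP - the same 'wall' built from vastly more 'bricks'
```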
But enough with all the technical talk. Why does resolution matter? Surely the more detail that a camera can capture the better the image is?
Well, this is where I disagree. Just because an image can be recorded in 8K and resolve extreme detail it doesn’t mean that it is always appropriate to do so.
WHY HIGH RESOLUTION ISN’T ALWAYS A GOOD THING
In the world of art, painting photo-realistic images that are super sharp is one method of expression. You also get other painters that paint with broader, more abstract strokes that express feeling rather than only focusing on creating the highest fidelity image that perfectly represents reality.
Both are valid forms of expression.
In the same way, some filmmakers may prefer to tell their stories with less refined brushstrokes using a medium with a lower resolution that captures less detail like 1080p or even 16mm film as opposed to an 8K digital camera.
High or low resolution should be seen as a tool rather than something which is mandated. An image being captured in a higher resolution doesn’t make it inherently better. Resolving huge amounts of detail through high res capture means that things like skin will reveal every pore and blemish. Whereas resolving less detail gets rid of these unflattering flaws in a natural way.
It’s like when you meet someone in real life. Your eyes don’t fixate on the fine details of a person’s skin. They focus on the face as a whole. Photographing faces at a slightly lower resolution removes the focus on micro details. I think that the way in which cinema is viewed now also makes super high resolution images a little bit off-putting.
The way in which films are consumed by audiences is undeniably changing. We’ve gone from sitting way back in a cinema viewing projections on a large screen to watching content up close on laptops or phones. The larger the image projected and the further away you are from it, the more subtle the level of sharpness becomes. So when you watch Netflix on your laptop a few inches from your face the sharpness of the image will feel magnified.
I think the close viewing of high resolution video on high resolution screens results in images which are overly sharp and a bit less…romantic. Perhaps this is just personal preference but, aesthetically, I find that super crisp digital cinematography can come off as feeling more video-y. More like broadcast TV on a 4K screen.
And actually, I don’t think I’m alone in this. Not amongst cinematographers anyway.
I’ve camera assisted on loads of shoots, and I’ve witnessed an overwhelming percentage of those DOPs pair high resolution digital cinema cameras with either diffusion filters, like an ⅛ or a ¼ Black Pro Mist or a Glimmer Glass, or pair them with vintage lenses. This is to take some of the sharpness and edge off of the high resolution digital sensor. Too much sharpness just feels artificial and unnatural.
Far fewer cinematographers pair high resolution digital cameras with modern high resolution lenses like Leica Summilux-Cs, the Alura or Master Primes without any filtration. And most of the time this is where the client or studio demands that the product must be very sharp.
I think this pursuit of maximum resolution and clarity follows the same pursuit of perfectly sanitised, idealised images which are created for many contemporary mainstream Hollywood movies.
For example, a filmmaker friend and I always joke about the fact that most featured extras in the background, or actors with smaller roles, in Hollywood films these days are cast as super good looking young models, instead of the average, everyday folk who would be present in older movies. Like, come on, are these really what experienced scientists look like?
In the same way I think an overly sharp presentation of reality creates a cinematic world that is photographically too perfect.
Finally, an important consideration when choosing gear is not only the creative or photographic look it has, but also its economic and practical implications. This isn’t really a consideration for high budget films, but for lower budget projects, higher resolution cameras are more expensive to shoot with. More storage space on hard drives is required and more processing power is needed to edit and deal with that footage in post production.
CONCLUSION
Now I’m not saying that all films should be shot on 8mm film or at 720p. I think that for most digital projects, shooting and finishing them in a standard 1080p or 2K format is probably enough resolution to yield a sharp enough, but not overly sharp result.
However my main point is that sometimes 2K doesn’t feel right. Sometimes 8K is correct for capturing the project. Sometimes a 2K scan of 16mm film is correct. Some films should be finished in 4K.
Certain stories may benefit from capturing extreme details, giving images a hyper sharp, artificial digital look, or benefit from the extra pixels needed for intensive visual effects work. The choice of resolution should always be a practical and artistic choice that is motivated by the story and not just a default decision that is mandated or enforced.
Just because technology can do something, doesn’t mean it’s always right.
Cinematography Style: Maryse Alberti
In today’s episode I’ll give some background to Alberti’s career, go over her philosophy on cinematography and the gear that she has used in order to translate her vision to the screen.
INTRODUCTION
Maryse Alberti may not be as well known by mainstream audiences as some other cinematographers which I’ve featured in this series, but the strength of her career as a cinematographer speaks for itself.
She has a prolific track record in both documentary and fiction filmmaking, often choosing films that deal with real subject matter, true to life characters and situations that are interesting and elevated but grounded in reality.
In today’s episode I’ll give some background to Alberti’s career, go over her philosophy on cinematography and the gear that she has used in order to translate her vision to the screen.
BACKGROUND
“I grew up in the south of France. I didn’t have a TV, I didn’t see a TV until I was 12 years old…I just fell in love with movies when I came to the States because I stayed with people who had a TV in every room.”
After moving to the US in 1973, Alberti developed a career in capturing images, starting out by working as a still photographer in a field which isn't exactly the traditional roadmap to a career as a feature film DP.
“I ended up on the x-rated movie set where I was the still photographer…When I started to work on x-rated movies I started to meet people. The crews in New York were young people out of Columbia or NYU. It was kind of the training ground, one of the training grounds…Since I didn’t do film school that was kind of my film school.”
After starting out making film industry contacts in the x-rated world she then got involved in shooting documentaries. Her break as a cinematographer came when she shot H-2 Worker, which won the Jury Prize for best documentary and best cinematography at Sundance and launched her career as a DOP.
Throughout the years she has worked as cinematographer in both the documentary and the feature world, for many esteemed directors such as: Todd Haynes, Stephanie Black, Martin Scorsese, Ryan Coogler and Ron Howard.
PHILOSOPHY
Alberti’s career mix of documentary and fiction work has resulted in a style of working based on realism and cinema vérité.
Cinema vérité or observational filmmaking is a documentary style which attempts to capture reality in a truthful manner, by observing reality and trying to draw attention away from the presence of the camera. Although paradoxically, some argue that the very presence of a camera alters how reality is captured.
Either way, to blend into the background as much as possible, this style is often characterised by a minimal gear footprint. The very act of using less gear will impact the look of how a film is captured.
“From a cinematographers point of view you learn to work with very simple tools and very few people…Docs is another way of working. It’s more instinctual, it’s less intellectual.”
However, Alberti still recommends thinking about the subject of the documentary and basing the photography on the story.
Therefore, although I’d argue that a portion of her photography can be characterised by a vérité look, her style does of course change depending on the nature of the story.
A key difference between her work on documentaries and features comes from the level of intentionality. Long form work is more of an intellectual process with lots of prior reflection on creating a visual language, which is then executed by exercising and maintaining ultimate photographic control. Whereas in documentaries there is more scope to embrace improvisation and capture moments as they play out in real time.
For most documentary interviews Alberti will arrive at a location without seeing it beforehand, whereas when working on features she’ll usually have up to 8 weeks of prep time to scout locations and discuss production design with the director.
An example of how her taste for a natural, vérité look transfers over to her fiction work can be found in The Wrestler.
“I mean the whole film has a very naturalistic look. When I went to look at locations and went to look at a wrestling match I tried to make it work for the drama of the film. To keep it as real as possible. So that you felt you were in a real place.”
She did this by using natural looking lighting and motivated, handheld camera movement - skills which she had developed during her work in documentaries.
GEAR
“In general do I prefer film?...It depends on the story...Some stories are best told in the digital world. In documentary I think it’s a question of economics.”
When it comes to selecting gear for a project, she of course considers which equipment can achieve the desired look, but, perhaps equally as important, is the practical side of the gear selection.
When she started her career, shooting on film was the only viable option for attaining a decent quality image. A lot of her early documentary work was shot on 16mm because it was a cheaper medium than 35mm, which suited the lower budgets and higher shooting ratios of documentary work. She mainly shot with Aaton cameras, such as the lightweight Aaton LTR 54. Even though 16mm was cheaper, it was still a costly process to photograph a documentary.
“When we did H-2 Worker…we went to Jamaica with 5 rolls of film because we didn’t have any money. You had to be very careful of the questions you asked and when you rolled.”
At approximately 11 minutes of run time per roll, this meant they had less than an hour of footage they could shoot - compared to today, where a single interview may run longer than an hour.
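That figure checks out with a bit of arithmetic. Assuming standard 400ft rolls of 16mm (my assumption; the stock runs at 40 frames per foot), the maths looks like this:

```python
# Rough runtime arithmetic for the 5 rolls of 16mm mentioned above.
# The 400ft roll length is an assumption; 16mm film has 40 frames per foot.
FRAMES_PER_FOOT_16MM = 40
FPS = 24
ROLL_LENGTH_FT = 400

minutes_per_roll = ROLL_LENGTH_FT * FRAMES_PER_FOOT_16MM / FPS / 60
print(f"One roll: ~{minutes_per_roll:.1f} minutes")          # ~11.1 minutes
print(f"Five rolls: ~{5 * minutes_per_roll:.0f} minutes")    # ~56 minutes, under an hour
```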
Working digitally now allows filmmakers to be far more free about when they roll the camera and allows directors to have a conversation in interviews rather than asking very specific questions in an economical way. Alberti therefore prefers the practicality of digital over film when shooting documentary. Its economic benefits, ability to roll for extended periods and smaller size outweigh the look of film.
She now uses cameras such as the Canon C300, the Sony Venice or variations of the Alexa for feature films, with different lenses like Hawk V-Lite anamorphics, Master Primes, Cooke S4s, or Angenieux zooms.
Although it is dependent on the story and subject matter of the project, much of her work has featured extensive handheld camera movement which is motivated by the movement of the characters. Perhaps this is due to directors wanting to work with her for her experience in producing quality handheld work in a vérité style.
She has operated the camera herself, but for larger feature films which require a more intensive focus on lighting, she has delegated the handheld work to camera operators. As a lot of the movement tracks the movement of the characters, it can make scenes feel a bit more ‘real’, like the actions of the actors are being observed rather than deliberately performed in multiple takes.
Alberti’s lighting does occasionally differ between projects depending on the type of story, but a lot of her lighting tries to be as naturalistic as possible, so naturalistic that the audience doesn’t even notice that the space is lit.
She does this by only supplementing the sources of light that are already present in the location. For example if sunlight is already coming through a window she may place a film light, like an HMI, outside that window to mimic the same direction and quality of the natural light. This is particularly necessary in fiction where consistent lighting conditions are required throughout a scene - which may be shot over the course of half a day.
Where possible she’ll place lights out of sight so that they can shoot a scene 360 degrees without being limited by lighting gear. She also uses textiles and diffusion gels to soften the quality of the natural or artificial light.
For interiors she’ll sometimes place practical lights in a location or use additional lights overhead like Mac 2000s to give the room a bright enough exposure to shoot in or to balance the brightness of different levels of illumination.
CONCLUSION
If I had to sum up her style, I’d say that Maryse Alberti is a cinematographer whose work in fiction is an extension of her documentary work.
Many of the characteristics of cinema vérité, such as a handheld camera and naturalistic lighting are carried over onto the feature films which she shoots but are executed on long form jobs in a more considered, deliberate and controlled manner than her more improvisational documentary camera work.
Her ability to capture a realistic feeling portrait of reality has contributed to her being an incredibly influential DP in both the world of fiction and documentary alike.
4 Camera Moves Every Filmmaker Needs To Know
Each choice made by cinematographers or directors should be a deliberate one that is responsible for visually communicating information or an emotional tone. In this video I’ll look at four common types of camera movement, go over how they are technically achieved, with what gear, and uncover how each can be used to communicate different emotional tones.
INTRODUCTION
The way in which the camera moves isn’t an arbitrary choice made by filmmakers, or, at least, it shouldn’t be. Each choice made by cinematographers or directors should be a deliberate one that is responsible for visually communicating information or an emotional tone.
From early on in cinema, people worked out that the camera presents a point of view and that moving the position of the camera in different ways during a shot can have different effects on how that shot is perceived by audiences. The way in which information on screen is presented, and in what order that information is presented, can also be controlled by the motion of the camera.
So today I thought I’d look at four common types of camera movement, go over how they are technically achieved, and with what gear, and uncover how each can be used to communicate different emotional tones.
PAN & TILT
Let’s start with the most basic and easiest to achieve camera movement - the pan and tilt.
Panning directs the angle of the camera on a horizontal axis. From right to left or from left to right. Tilting the camera moves it on a vertical axis, angling it upwards or downwards.
These movements are most often done on a tripod head, which can pan or tilt the camera in a smooth motion without shake. However, other types of gear can be used to pan or tilt, such as: a stabilised remote head like a Libra, whipping a gimbal up or down or controlling its motion remotely, using the motion of a Steadicam, or even panning or tilting the camera handheld.
Both a pan and a tilt are usually used in combination to achieve what I’d call motivated camera movement. This is where the camera’s motion mimics that of the motion on screen.
For example, if a character moves around during a scene the operator may pan or tilt with them so that they remain the focus and do not leave the shot or ‘break frame’ as we say. By following the motion, the camera takes on a more subjective visual language that is more focused on a specific individual and their actions. As opposed to a wide locked off frame that doesn’t move and is more observational and objective.
The easiest way of quickly communicating which character in the story is most important in a scene is to follow their movement by panning or tilting with them.
Panning and tilting can also be used to reveal important information to the audience. For example the camera may start on a character and then tilt down onto an object. Tilting down to this object is a way of directing the audience’s eye to an important detail or piece of information in the story and saying ‘Look at this. Pay attention to it. It will be important later.’
The speed at which the camera tilts or pans will also create different tones.
A slow pan over a landscape may be used to build a sense of anticipation or gradually reveal the magnitude of the space. Whereas a quick whip pan makes a shot feel much more dynamic and is used to inject energy into a scene in a way that is more stylised.
PUSH IN & PULL OUT
A push-in physically moves the camera closer to its subject, usually at a gradual speed. The opposite is a pull-out where the camera steadily moves further away from its subject. So for push ins the shot size will go from wider to tighter and for pull outs the shot size will go from tighter to wider.
Although these moves can be done handheld, they are more commonly done with rigs that keep the motion smooth, such as a dolly, a slider, a Technocrane or a Steadicam.
The slower and smoother the movement, the more natural and subtle the emotional effect. The faster the motion, the more abrupt, stylised and impactful it becomes.
For me, slowly pushing in on a character, especially during an important moment where we move into a character’s close up, makes me get inside that character's head. The camera is literally drawing you into their world. This movement makes you concentrate more on what the character is talking or thinking about. Often this move is used when characters are dealing with some kind of internal conflict or during a pivotal moment in the story.
The pull-out works in an almost inverse way. Instead of pushing in closer to the mind of the character, we pull away from them and become increasingly detached. This move can therefore be done to isolate a character on screen and introduce a sense of loneliness.
Another function this move has is to reveal a space or information. Starting in a close up and then pulling backwards will slowly reveal more of the location to the audience, better contextualising the character within their space.
Since the push in and pull out are not motivated by the movement of the character, it is more of a stylistic choice and is therefore in danger of losing its impact if it is overused or continuously done for every close up.
TRACK
A tracking shot kind of speaks for itself. It’s what I’d call a move where the camera physically moves through a space from a start to an end position - often tracking the movement of its subject.
Usually this is done with a dolly by laying a line of tracks and then pushing the dolly along those tracks on a straight axis, sometimes maintaining the same distance between a subject and the camera. Track positioning can also be more diagonal, where the camera tracks sideways but also gradually closer or further away from its subject.
This move can be done on a Steadicam, especially for sequences composed of longer takes with different axes of movement, or where the terrain changes gradient and placing tracks becomes cumbersome. Tracking shots done from directly behind or in front of a character are also commonly done with a Steadicam or without tracks on a dolly on a smooth, even floor.
Like with panning and tilting, this movement can be motivated, based on the movement of the characters.
For example, characters walking from right to left can be followed by tracking in the same direction. Again, this increases subjectivity, shows you what the main focus of the shot is and puts you in the literal footsteps of the characters.
Sometimes filmmakers use a counter track, where the dolly moves in the opposite direction to the subject. Usually this is done in a swift move to increase the energy and tempo of a shot. As the camera moves against the motion of the subject, it decreases the length of the take so is usually inserted as a quick cut within a sequence. For this reason, cars are often shot with counter moves from a Russian Arm, which increases the feeling of motion and speed.
Tracking through a space alongside a character in a longer take also gradually expands the scope of a location and introduces the audience to a space as we are exposed to new backgrounds as the camera moves.
BOOM
Booming refers to moving the camera up or down on a vertical axis.
Boom shots are usually associated with camera cranes which are used to lift or drop a camera using an arm. But for more limited moves they are also commonly done with a dolly, which has a smaller hydraulic arm. These two methods are popular for their stability and smoothness of movement and easy control. Some other gear used for boom shots may include a drone, a spidercam or rig using a pulley system, or a Towercam.
Booming up can be used to reveal more information using a single shot. For example, it could boom from an object, point A, up to a character, point B. This is a way of pointing out to the audience that the object at point A may be important or hold significance to the story. It creates a link between the two points.
Even in the case of the cliche example of characters driving off into the sunset on an open road, point A starts on the characters in a car which then booms up to point B, the open road. This move therefore creates a link between the characters and the open road, which may represent possibilities, freedom, or hope.
As with the push in, booming up and down is often not motivated by movement and should be used sparingly to avoid overuse and minimising its impact.
Also, in the same way as a tracking shot, booming can reveal more of a landscape or setting and is therefore often used to uncover the space as either an establishing shot at the beginning of the scene or as a closing shot at the end of a scene.
CONCLUSION
There we have it. Four types of basic moves which can be used to control how information in a movie is presented.
When interpreting and coming up with camera movement, context matters. The same move made to capture different stories in different contexts, at a different pace, in a different manner and with different gear may change the effect and meaning that the move has on an audience.
So, when you’re planning your shots ask yourself these questions: What is the focus of the scene? What information do we need to present? In what order? Whose perspective is the story being told from? Should the movement be motivated? Or does the camera need to move at all?
These four moves are also just the tip of the iceberg. Some directors like combining some, or even all, of the above moves into a single shot if it serves the telling of the story. Because, really, how the camera in a film should move is only limited by budget, the three dimensions and our imagination.
What The Metaverse Means For The Future Of Cinema
In this video I’m going to do some speculating and take you through what the metaverse is and the potential impact I think it may have on the future of cinema and on visual communication.
INTRODUCTION
Visual communication as an industry has rapidly expanded over the past few decades. This is partly due to the internet providing more platforms for visual art to be viewed and interacted with, as well as increasing access to technology tearing down obstacles in the way of producing art.
Just over 10 years ago, if you wanted to make a documentary it required using large, expensive, clunky broadcast cameras, costly film stock, or low fidelity DV cameras. Once it was eventually made you then had to find a TV broadcaster willing to screen it, and if you were lucky enough to sell it, you’d need even more luck just to break even on the cost of producing it.
Now, people can pick up a consumer mirrorless camera, or even a phone, and get an amazing image right out of the box, then distribute the final film any number of ways online.
But what does this have to do with the metaverse?
Well, in a similar way that inexpensive digital cameras and the internet transformed the possibilities of documentary filmmaking, I think the metaverse could also have an enormous effect on how films and visual media are made, distributed and interacted with in the future.
In this video I’m going to do some speculating and take you through what the metaverse is and the potential impact I think it may have on the future of cinema and on visual communication.
WHAT IS THE METAVERSE?
On the 28th of October 2021, Facebook announced their intention to devote a huge amount of resources towards creating their version of the metaverse, signaling their intent by even renaming their holding company Meta.
Whether this bodes well or poorly for the future, one of the biggest companies in the world throwing all their chips into the metaverse pot is significant.
So what exactly is the metaverse?
“You’re going to be able to bring things from the physical world into the metaverse. Almost any type of media that can be represented digitally: photos, video, art, music, movies, books, games, you name it.” - Mark Zuckerberg
The metaverse is a space created on the internet which uses 3-D virtual environments. While it is still in its infancy, the metaverse involves integration between virtual and physical spaces. People interacting in this environment will be able to create an avatar or character that represents them, place that avatar in a virtual space, manipulate it with hardware like VR tools and effectively live a life in this space that includes consuming a variety of art forms and visual entertainment - including films.
The metaverse that Meta is currently developing will likely use a motion capture system, such as the Oculus (owned by...you guessed it...Meta), to allow players to explore the online space and interact with user generated content.
There’s definitely the possibility for filmmaking to exist and be incorporated into this future online world. But also, I think the core skill of filmmaking, which is visual communication, is already being used in developing the metaverse, whether through virtual reality, augmented reality or gaming.
WHAT THE METAVERSE MEANS FOR FILMMAKING
So how will the metaverse change the way that movies are produced?
To understand this I think we need to know the four main categories that largely determine the cost of producing a film: sets, actors, crew and gear. The metaverse holds the potential to remove or reshape every one of these.
Let’s start with sets. In the metaverse, with a little bit of programming, you can create whatever location you want. In real life you may not be able to block off three avenues in New York to shoot your student film, but in the metaverse any location you can imagine could become a reality.
Secondly, actors could be replaced with avatars representing any form. Or, actors could still be captured in real life and then placed within a 3-D virtual environment.
Third, crew. The only crew you’ll need are people to capture any live action footage and a team of programmers to do the post production digital grunt work. The hundreds of on set crew members needed for larger productions will be greatly reduced since, well...there won’t be sets.
And fourth, gear. Far more minimal camera and lighting gear will be needed to capture live action. Rather than lighting an entire space, now all that needs to be lit is a character and a green screen. Expensive gear that was once used for the bulk of capturing the footage will now be replaced by computers.
So it may seem that all of the prohibitive boundaries that once stood in the way of making a movie will now dissolve, and anyone will be able to produce a blockbuster from the comfort of their own home.
I think this yields interesting opportunities. Just as cheaper digital cameras, editing software and an increase in distribution platforms had an impact on how documentaries are made, I think this jump in metaverse technology has the potential to yield similar possibilities in visual communication.
However, I can’t help but be a bit sceptical.
While certain live action aspects of filmmaking, such as sets and actors, may move into the virtual space, it won’t exactly be cheap to make movies. I think celebrity actors will continue to be in demand for their ability to attract an audience and will continue to be paid premium rates whether their performance is in the real or virtual world.
I also think that many of the costs saved on crew, gear and locations will just be re-allocated to hiring a large team of programmers and designers to create the virtual movie - similar to how large budget games are produced.
In the end, when it comes to mass entertainment I still think the same players will dominate. The people who are going to be able to produce the highest-end films will still be the production companies with the largest budget, greatest resources and marketing power.
To remain on the cutting edge of technology, to employ the most talented filmmakers or artists and to promote the end product will always take a lot of money - whether in the real world or the metaverse.
While I think the metaverse and virtual reality filmmaking has many exciting possibilities and may change the landscape of independent filmmaking through creative user generated content, I think that the space of mass entertainment will continue to be dominated by the production companies that are able to spend the most money.
WHY DOES IT MATTER?
So why does it matter to those who are working, studying or interested in film and what impact will it have on them?
Although what I’m suggesting is hypothetical, we can already find practical examples of film production companies working in this virtual space. Visual effects companies such as Digital Domain, co-founded by James Cameron, are producing more and more work, such as digital characters, in the VR and AR space.
While the transition for those who occupy roles in the visual effects and post production side of the film industry is relatively straightforward, what does it mean for other crew members who are used to plying their trade in a two dimensional world - like a cinematographer, for instance?
As we transition into this new virtual space there will be a period where capturing the real world will be incorporated with visual effects work. This is actually a job that cinematographers are already performing. Almost every film that is produced nowadays includes some degree of visual effects work incorporated with live action cinematography. Combining traditional photographic skills for capturing images, along with more conceptual skills is already a necessity for most DPs.
For example, Bradford Young was tasked with combining these skills when shooting Arrival.
“It was on us to determine the tenor of the visual effects. The visual effects aren’t going to determine how we make the film. We make the film and the visual effects come into play later.” - Bradford Young, Cinematographer
On Arrival the creative team decided on a set of rules when filming the live action, such as keeping the focus on the character in the foreground.
“We never threw focus or rarely threw focus to effects or a CG element. You know, we always kept it in the foreground. If we had four or five added helicopters we wouldn’t throw focus there and say ‘Hey, this is real!’...The film is not about that. The film is about what is happening in front of us.” - Bradford Young, Cinematographer
To me it would be sad to see sit-down cinema as we know it disappear in the metaverse (never mind the potential negative social effects the metaverse might have on the population at large). But one thing we can never escape from is that art is always changing.
Cinematographers of the future will be faced with tools for creating in the virtual world that may have been impossible before in the physical 2-D realm.
For example, even now with visual effects it is possible for cinematographers to shape light in a way that would have been otherwise impractical without digital help.
“We get out in these situations where we have a long walk and talk. Because of the environment that we’re in and because of the tools we have...people don’t walk with a 12x12 negative fill the whole walk. But when we do visual effects, we forget that it’s a visual effect, you’re lighting it so you can do whatever you want.” - Bradford Young, Cinematographer
The norms of how traditional creative systems are to be adapted are still being formulated, so being at the forefront of them as a creator is an exciting prospect.
CONCLUSION
I guess I’d sum up this piece by concluding that although the metaverse is still in its infancy, I think it’s indisputable that filmmaking, and many other forms of entertainment, will continue to move into an increasingly virtual, online space.
As things become more and more virtual, filmmakers will need to adapt their skills from being more practical to being more conceptual. This process may be slow and take many, many decades, but I have a feeling it may happen faster than we think.
The metaverse may open up interesting new possibilities for expression, but I think that the mainstream entertainment space will still be dominated by mass media companies that can spend the most. Bearing in mind that these are all predictions I think that there are a couple of things which most creatives should do to stay abreast of this changing visual world:
One. Stay informed and up to date on technological advancements.
Two. Continue honing and building your conceptual eye for visual communication and storytelling.
Because while the demand for your ability to physically photograph stories may dissolve over time, what has always been important, throughout the evolution of art from its earliest form up to what we have now, is the perspective of the artist. Having a strong artistic perspective and experienced eye for storytelling will ensure you’ll always have a job in whatever medium film, or visual storytelling, ends up being.
How Darren Aronofsky Shoots A Film At 3 Budget Levels
As I do in this series of videos, I’ll take a look at 3 different films made by Darren Aronofsky at 3 increasing budget levels: the low budget Pi, the medium budget The Wrestler, and the high budget Noah, to identify commonalities in his filmmaking and how his style has progressed throughout his career.
INTRODUCTION
The films that Darren Aronofsky makes occupy an interesting space. They straddle the line between experimental and realist, between mainstream and independent, between classical biblical allegories and contemporary tales.
However, what most of his films have in common is a strong emphasis on character and a use of perspective that makes the audience feel like they’re taking a journey in the shoes of those characters, not just observing their story from afar as outsiders.
As I do in this series of videos, I’ll take a look at 3 different films made by Darren Aronofsky at 3 increasing budget levels: the low budget Pi, the medium budget The Wrestler, and the high budget Noah, to identify commonalities in his filmmaking and how his style has progressed throughout his career.
PI - $60,000
Aronofsky’s introduction to filmmaking came from studying social anthropology and filmmaking at Harvard in 1991. His thesis short film for the programme, Supermarket Sweep, starred his friend and actor Sean Gullette. It was well received and won him a spot as a finalist at the 1991 Student Academy Awards. He went on to get his masters in directing from the AFI Conservatory, where he met and began working with his classmate in the cinematography programme, Matthew Libatique.
When it came to writing Pi, like with many other low budget films, he decided to focus on a single character. This idea of doing a portrait character study was born out of the verite documentaries he would make in film school which focused on the story of one person.
The experimental, psychological horror film was set in only a few locations, with the primary one being inside a small apartment.
The movie was financed through an early version of what I guess you could call crowdfunding. Aronofsky and his producer Eric Watson went around asking every friend, relative and acquaintance to give them $100 to fund their movie. Eventually they were able to raise $60,000 which, along with a host of other favours, was used to make the film.
Some of those favours included getting the crew to work for deferred pay by granting them shares in the film which would pay out once the film was sold, paying the actors $75 a day and getting a free warehouse which they could use to build their studio set.
Around $24,000 of the budget went towards the cost of buying and developing 16mm film stock and much of the remaining funds were reserved for post production. This left very little money for gear rental, production design or locations on the 28 day shoot.
However, Libatique, who would photograph the film, was granted enough to rent an Aaton XTR 16mm camera and three lenses, along with a free, albeit small, tungsten lighting package to work with. He chose the XTR for its lightness, which helped with the ample handheld work, along with its ability to shoot single frames, which they used for the stop motion board game scene. He got two 16mm Canon zooms, an 8-64mm and an 11.5-138mm, and one Angénieux 5.9mm wide prime lens.
To support a surrealistic look that Libatique termed ‘low-fi stylisation’, Aronofsky decided to shoot Pi in black and white.
“Darren wanted to shoot Pi in black-and-white for both aesthetic and budgetary reasons. He wanted the most contrasty black-and-white possible, with really white whites and really black blacks.” - Matthew Libatique, Cinematographer
To achieve this look Libatique decided on using reversal film, Eastman Tri-X 200 and Plus-X 50 for daylight scenes, which have high contrast but less dynamic range than negative film. The latitude, the difference between the lightest and darkest part of the image, was so small that he only had about 3 stops before the highlights started blowing.
That’s difficult to comprehend when compared to modern digital cameras like the Alexa, which can handle more than 14 stops of dynamic range.
Libatique’s lighting and metering of exposure had to be extremely precise as being even half a stop too bright might mean losing all detail. On top of that he used a yellow filter to further increase the contrast and get rich blacks.
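To put those latitude figures in perspective, each stop represents a doubling of light, so the usable contrast range grows exponentially with the number of stops. Here's a minimal sketch of that relationship, using the approximate stop counts mentioned above rather than measured specs:

```python
# Rough sketch: how much scene contrast a given latitude can hold.
# Each additional stop doubles the amount of light, so the usable
# brightest-to-darkest ratio grows exponentially with the stop count.

def contrast_ratio(stops: float) -> float:
    """Approximate brightest:darkest ratio a given latitude can record."""
    return 2 ** stops

for label, stops in [("Reversal stock (approx.)", 3), ("Modern Alexa (approx.)", 14)]:
    print(f"{label}: {stops} stops ≈ {contrast_ratio(stops):,.0f}:1")

# Reversal stock (approx.): 3 stops ≈ 8:1
# Modern Alexa (approx.): 14 stops ≈ 16,384:1
```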
Their philosophy behind the look of the film was to create a subjective perspective that put the audience in the shoes of the protagonist. They did this by shooting with a single camera, shooting over the protagonist’s shoulder and moving it in a motivated way. So when the character moved, the camera followed.
To increase this personal perspective they also used a macro lens at times to capture close details in an abstract way that also represented the character’s gaze.
A final example of this subjective perspective can be found in the stylised use of mounting a camera directly onto the actor’s body. Kind of like vlogging, before the concept of vlogging existed. This gave a personal, up close, subjective perspective that mimicked the increasingly manic movement of the character.
They rigged a still photography tripod to a weight belt that was attached to the actor and mounted Aronofsky's own 16mm Bolex camera with a 10mm lens to the tripod. He altered the frame rates, overcranking the close ups and undercranking the camera at 12fps for the POV shots, to show the character’s increasing dissociation from the real world.
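If the frame rate maths is unfamiliar: footage shot at one frame rate but projected at the standard 24fps plays back faster or slower by the ratio between the two rates. A minimal sketch, assuming 24fps playback:

```python
# Sketch: apparent playback speed of footage shot at a given frame rate
# when projected at the standard 24fps.

PROJECTION_FPS = 24

def apparent_speed(shooting_fps: float) -> float:
    """How many times faster (>1) or slower (<1) the action appears on screen."""
    return PROJECTION_FPS / shooting_fps

print(apparent_speed(12))  # undercranked: 2.0 -> action plays back twice as fast
print(apparent_speed(48))  # overcranked: 0.5 -> half speed, i.e. slow motion
```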
Aronofsky spent the majority of the low budget on sound in post production, where he was able to find additional funding, as he knew that without a strong sound design and mix the film would fall flat. He was able to get a score from Clint Mansell who, like the crew, worked for a deferred fee.
He was therefore able to pull off Pi on an incredibly low budget by: writing a story with limited locations, characters and no large set pieces, getting crew to work for deferred pay, pulling lots of favours, and using a small gear package to create a vividly experimental, subjective, surrealist look.
THE WRESTLER - $6 Million
After winning the Directing Award at Sundance Film Festival for Pi and selling it to distributor Artisan Entertainment for more than a million dollars, Aronofsky kick started his feature film career.
Following the box office flop of the bigger-budget The Fountain, Aronofsky picked a lower budget script for his next film, a realistic dramatic portrayal of an aging wrestler written by Robert D. Siegel. He raised a budget of $6 million to make the movie.
After Nicolas Cage initially expressed interest in the role, it was eventually granted to Mickey Rourke.
Although Rourke admired Aronofsky’s work and wanted to make a film with him, he wasn’t overly happy about the script as he felt that some of the dialogue didn’t accurately portray how his character would realistically talk. Therefore he, along with Aronofsky, re-worked much of the dialogue in the script until they were happy.
Due to the free way that Rourke liked to work, apparently around 40% of the final film was improvised and initially unscripted.
“I tried to approach the film as free as possible. I didn’t go onto set as I usually do with very specific notes and shot lists. I tried to be open every morning to what Mickey was going to bring and then try and figure out after I saw that the best way of capturing it.” - Darren Aronofsky
For example, most of the wrestling scenes were scheduled during real wrestling matches. The crew would wait till about halfway through a match and then bring Rourke into the ring and shoot a bit, using the real energy from the crowd who turned up.
As it was very physically demanding Rourke would then leave the ring, re-gather his energy and come back to shoot a bit more. During these breaks the real wrestlers would keep the crowd entertained while Rourke recovered and the cameras were reloaded with new film stock.
To capture this free way of working, Aronofsky devised a style and approach which both supported how he wanted to tell the story and which was practical.
There’s not much realism in the world of wrestling, which is all about over the top performance; the life of the main character in The Wrestler, however, is all too painfully real. So Aronofsky decided to create a film grounded in cinéma vérité, which followed his protagonist, literally, with an up close and intimate handheld camera. Again he took on a more subjective perspective, though this time one that was far more centered around realism.
To create this look he hired cinematographer Maryse Alberti who had a track record in both fiction and documentary work.
They shot it on Super 16mm, which suited the modest budget, as it is cheaper to shoot than 35mm, while the grain from 16mm was also reminiscent of the vérité, documentary look that they were going for.
To create the look for this realistic portrait, Alberti shot almost entirely with natural light, mainly using whatever practical lighting was already in the locations. She would sometimes bring in a couple of lights or tweak them slightly in order to achieve exposure but otherwise left the lighting alone whenever possible.
The only exception was the final match, which was a built set. In this she mimicked the lighting setups of many of the other matches which they had already shot - based around using overhead lights and lighting the four corners of the ring.
Since most of the movie was assembled from long handheld takes, photographed from the shoulder, she chose the Arri 416 for her camera operator Peter Nolan.
She paired the camera with a set of Zeiss Ultra 16 prime lenses and two Angenieux Optimo zooms, a 15-40mm Lightweight and a 28-76mm.
Due to the length of the takes, Peter Nolan came up with some interesting techniques for operating the camera. One involved strapping an apple box to his waist so that when he sat down with the camera during a take he could rest his elbows on the box and hold the camera steady.
Sometimes these long takes required plenty of choreography and involved grips holding up flags at various points to block out lights from casting shadows of the camera.
So Aronofsky in some ways maintained his perspective of shooting the film in a subjective way, yet moved away from experimentation and more into realism.
The Wrestler’s higher budget allowed Aronofsky to hire a cast of well known actors for this performance heavy drama and pay all the cast and crew fair rates, while still saving money by shooting on 16mm in a rough, vérité, documentary style which allowed them to work on real locations, without any large production design, grip or electrical setups.
NOAH - $160 Million
The Wrestler proved to be both a critical and financial success.
A few years later he turned to producing a story of huge scope which he had been interested in since he was a child: the biblical story of Noah. True to his style, Aronofsky adapted Noah to the screen by straying from the brief source material and including a more surrealistic, allegorical story, which visualised and presented themes through exaggerated characters and images.
Producing such a large scope script, with its epic set pieces, required a hefty estimated budget of around $160 million. Aronofsky turned to his regular DP Matthew Libatique to shoot the film.
“We were handheld on Noah, but it wasn’t like we were floating from character to character in a vérité style. I think we’ve matured as filmmakers and can focus on what’s important, which is subjectivity and storytelling.” - Matthew Libatique, Cinematographer
But, like on The Wrestler, Aronofsky wanted to be able to move the camera in a way that was very fluid and natural, but also in a way that was very controlled. Therefore Libatique mainly used Arricam LT cameras, which were light for handheld work yet also tough enough to handle working outdoors in the elements for extended periods without breaking.
With them he selected Zeiss Ultra Primes, mainly sticking to 3 focal lengths, a wide 24mm, a medium 50mm and a long 85mm.
This time he shot on 35mm, a format with greater clarity and less grain, more suitable for an epic. Libatique shot in the higher resolution 4-perf format for any shots that required post production special effects, and in 3-perf for regular scenes.
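For a rough sense of that difference, here's a sketch comparing the exposed negative per frame. The gate dimensions are approximate published figures assumed for illustration, not numbers from the production:

```python
# Sketch: approximate 35mm negative area per frame for 4-perf vs 3-perf.
# Widths/heights are approximate Super 35 gate sizes (assumed for illustration).

GATES_MM = {
    "4-perf": (24.9, 18.7),
    "3-perf": (24.9, 13.9),
}

for name, (w, h) in GATES_MM.items():
    print(f"{name}: {w} x {h} mm ≈ {w * h:.0f} mm² of negative per frame")

# 4-perf exposes roughly a third more negative per frame (and uses a third
# more film stock), which is why it was reserved for the effects-heavy shots.
```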
Although most of the film was shot handheld with a single camera from a more subjective perspective, certain scenes, such as the large flood scene, were shot with four cameras, two on Chapman Hydrascope cranes and two on the ground, to more quickly cover the many shots needed in this expensive set piece.
The magical exteriors were mainly filmed on location in Iceland.
When it came to lighting characters in those exteriors not much was done except for trying to block scenes so that the actors could be backlit by the sun. Libatique likes to keep things as naturalistic as possible so avoids lighting exteriors whenever he can, only using a muslin bounce occasionally when he needed more fill.
As Libatique says: “Fighting nature to mimic nature is a large undertaking.”
However some interiors and night scenes involved enormous setups. For example, to cover the battle scene at night his team hung 18 daylight balanced helium balloons from condors. Then, two 100-ton cranes each carried 100-foot rain bars, and another 100-ton crane carried an 80-foot rain bar, with two 32K balloons on each rain bar.
Another huge setup was the Ark set, which was constructed in three levels in a studio in New York. Lighting such a big space came at a cost.
For day scenes the rigging grip built a giant white ceiling bounce, made up of smaller UltraBounce surfaces. Bouncing into it were 20 20Ks, which they rigged on each side, underslung on the truss, and also 25 Mole-Richardson 12-lights.
Once production was wrapped, 14 months of post-production work began. During this time Aronofsky tasked Industrial Light & Magic with extensive VFX work including creating 99% of the animals in the film, dropping in background plates, like mountains or trees, and of course creating the mythical elements such as The Watchers.
As with all of Aronofsky’s films dating back to Pi, a score was composed by Clint Mansell.
Noah was therefore produced on a blockbuster budget, which was needed to create massive production design builds, enormous grip and lighting setups, a cast of stars and epic set pieces which required over a year of innovative visual effects work.
CONCLUSION
Darren Aronofsky’s filmography covers an interesting range all the way from low budget independently financed films up to large studio blockbusters.
Despite this large growth in scale, his preference for visualising themes and presenting them through characters using a subjective perspective has carried over throughout.
While the maturity of his filmmaking might have grown, it maintains elements of original experimentation and an eye for the surreal that he’s had since his earliest foray into cinema.
The 3 Basics Of Cinematography
I think the most important duties of a director of photography or DP can best be distilled into 3 basic elements: exposure, lighting, and camera positioning and movement. Let's take a look at these 3 aspects of cinematography to show why they are crucial in order to fulfil the DP's overarching function of building and capturing the look of a film.
INTRODUCTION
As you can probably gather from the name of this channel, I usually make videos that skip over some of the basics and make content that is a bit more, well, in depth. But since I’ve had some requests in the comments to make a video that goes over the basics of cinematography I thought I’d do just that.
As the role that the cinematographer takes on is a fairly technical and complex one, it’s a bit tricky to distill all the nuanced things that they do into a single YouTube video. However, I think the most important duties of a director of photography or DP can best be distilled into 3 basic elements: exposure, lighting and camera positioning and movement.
These three elements align with the three departments on a film set which the DP manages: the camera, lighting and grip departments. To be a cinematographer you need to be able to control all three of these elements and manipulate them in order to capture a visual style which suits the story being told.
So let's focus on each one of these departments, or aspects of cinematography, to show why they are crucial in order to fulfil the DP's overarching function of building and capturing the look of a film.
CAMERA
Let’s start with a fairly necessary feature of cinematography, the camera.
To capture an image, light passes through a glass lens and hits the film plane, which houses either a digital sensor or a film stock. How the footage will look is determined by the amount of light that hits the film plane and by how sensitively the digital sensor or film stock absorbs that light.
This is what we call exposure. It refers to the amount of light that reaches the film plane. Letting in more light will result in a brighter exposure, while letting in less light will mean a darker exposure. One of the most important parts of a cinematographer's job is measuring and ensuring that the correct exposure is achieved. Exposure is a tool that DPs can use to create an image that reflects the right tone for the story.
A simple example can be found in comedies versus horror films. Typically comedies have a brightly exposed image which reflects the light, comedic tone of the story. While horror films often have a darker exposure which sets a broodier, scarier psychological tone.
To control exposure with the camera, the cinematographer can adjust three different variables: the shutter, the aperture and the ISO or film speed.
Motion picture cameras usually use a rotary disk shutter. This is a semi-circular disk that spins in front of the film gate. When the solid section of the disk passes the film gate, light is blocked. As it turns, the open section allows light to hit the film plane.
Since the shutter takes up a percentage of a circle in film cameras it is measured in degrees, which is referred to as the shutter angle. Adjusting the shutter angle affects the amount of time that the film is exposed to light. So, making the angle smaller, like 45°, means that the rotating shutter will block more light and the exposure will be darker. Conversely, making the shutter angle bigger, like 270°, means that more light will be let through and the exposure will be brighter.
The shutter angle also controls the secondary function of motion blur. A smaller shutter angle will constrict motion blur and make footage feel choppier, while a larger shutter angle will increase motion blur.
Conventionally the shutter is therefore kept at 180°, which makes movement feel ‘normal’ to the human eye.
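As a minimal worked example of how shutter angle translates into exposure time (assuming a 24fps frame rate):

```python
# Sketch: converting shutter angle to the exposure time of each frame.
# exposure time = (shutter angle / 360) * (1 / frame rate)

def exposure_time(shutter_angle: float, fps: float = 24) -> float:
    """Seconds that each frame is exposed to light."""
    return (shutter_angle / 360) * (1 / fps)

for angle in (45, 180, 270):
    print(f"{angle:>3}° at 24fps -> 1/{round(1 / exposure_time(angle))} of a second")

# 45°  -> 1/192s (darker, choppier motion)
# 180° -> 1/48s  (the conventional 'normal' look)
# 270° -> 1/32s  (brighter, more motion blur)
```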
Cinematographers therefore usually use the other two variables, aperture and film speed, to control exposure.
Aperture is the hole in the lens that can be opened or closed to let in more or less light. On cinema lenses this value is measured as a T-stop and as an f-stop on still photography lenses. Opening the aperture, shooting with a smaller stop number, means that the hole is large and more light is let in. Closing down the aperture, shooting at a deep stop, makes the hole smaller and lets in less light.
Finally, the ISO or film speed refers to how sensitively the digital sensor or film stock responds to light. A low film speed such as 50 ISO is less sensitive and will produce a darker image, while a high film speed such as 800 ISO is more sensitive and will produce a brighter image.
So to expose an image which is not too bright and not too dark, cinematographers will manipulate this ‘exposure triangle’ of shutter angle, aperture and film speed until the footage has the correctly exposed look.
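As a rough sketch of how those trade-offs balance out, here's the same idea expressed in stops, where +1 stop doubles the light and -1 stop halves it. The reference values of T4, 180° and 800 ISO are arbitrary starting points chosen for illustration:

```python
# Sketch: expressing shutter angle, aperture and ISO changes in stops so
# they can be traded off against each other. +1 stop doubles the light.
import math

def stops_from_iso(iso: float, base_iso: float = 800) -> float:
    return math.log2(iso / base_iso)

def stops_from_shutter(angle: float, base_angle: float = 180) -> float:
    return math.log2(angle / base_angle)

def stops_from_aperture(t_stop: float, base_t: float = 4.0) -> float:
    # Light is proportional to 1/T², so each full stop multiplies the T-number by √2.
    return 2 * math.log2(base_t / t_stop)

# Example: opening up from T4 to T2.8 (+1 stop) while halving the ISO (-1 stop)
# leaves the overall exposure effectively unchanged.
net = stops_from_aperture(2.8) + stops_from_iso(400) + stops_from_shutter(180)
print(f"Net change: {net:+.1f} stops")
```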
LIGHTING
Now that we know how to control the brightness of an image by adjusting the amount of light the camera lets in, cinematographers must focus on how that light actually looks.
Film lighting is usually done with continuous light sources. This is where the output of an electrical light remains fixed and consistent, unlike in stills where a flash, a brief burst of light, is often used.
There are two primary functions of lighting in film: first, creating a look for the film which is visually appealing and which suits the tone and supports the storytelling of the content which is being filmed, and second, maintaining a consistent source of light so that the exposure does not fluctuate unevenly during shooting.
A mistake I made when I got my hands on a camera and first started shooting short films was not focusing enough on lighting. Beginner cinematographers, myself included, often think creating a beautiful image comes from having a nice camera to shoot on. That it’s all about having the newest, fanciest gear.
However, the most important part of cinematography, and of creating an aesthetically appealing image, is how that image is lit.
There are no hard and fast rules when it comes to lighting, but familiarising yourself with some basic lighting ideas, and even just being aware of how the light hits a subject and affects the way a shot will look, is the most important first step.
The second reason that lighting is important is consistency: if you are shooting using only natural light, it has a tendency to change throughout the day.
For example the sun may begin by shining through a window and then move overhead, throwing the interior into shadow. But, having a strong light source placed outside a window that replicates the sun's light will minimise any changes that might prevent different shots in a scene from matching up.
Maintaining consistent lighting and cinematic continuity between different shots photographed at different times is an important part of being a DP.
GRIPS
Now that the exposure is under control and we have lit a scene in a consistent way that supports the visual style of the story that is being told, the third most important basic element of cinematography is the placement of the camera and how it moves.
Some directors may leave a lot of this to the DP. Others may like to dictate exactly how the camera moves or a shot is framed. Regardless of the creative input, it is the job of the cinematographer to technically execute those choices.
When it comes to camera placement there are some basic guidelines which should be followed, such as the 180-degree rule.
This states that the camera should be kept on one side of an imaginary axis when shooting a scene that cuts between two characters, so that each character stays on a consistent side of the frame - the first character always frame right of the second, for example. However, on occasion, filmmakers have intentionally broken this rule for effect.
How each shot in a movie is framed and executed should be a conscious choice by the cinematographer and director, as different movements will have different effects on an audience. Just as different songs can make you feel different emotions, so too can camera movement. But perhaps in a more subtle way.
For example, a manically operated handheld camera has a different feeling to a buttery smooth tracking shot.
Therefore, when it comes to placing and moving the camera, cinematographers need both a knowledge of the psychological effect that camera movement and framing have on an audience, and an understanding of the various grip rigs and gear that can be used to create those moves.
CONCLUSION
So there you have it: exposure, lighting and camera placement and movement. Three basics that every DP consciously controls.
As I mentioned at the beginning, these three things only touch on the fundamentals of cinematography. There are a whole host of other decisions that DPs have to make on a minute by minute basis on set.
But, when you’re starting out, be sure to remember these three basic elements and focus on using them to bend the visual tone of the cinematography into a shape that best suits the story.
What A Key Grip Does On Set: Crew Breakdown
In this Crew Breakdown episode I’ll go over the head of department position in the grips department, key grip, a job which not everyone is familiar with but which greatly contributes to the look of many movies.
INTRODUCTION
In this series I go behind the scenes and look at some of the different crew positions on movie sets - the people who go about physically constructing each and every shot that happens on screen.
In this Crew Breakdown episode I’ll go over the head of department position in the grips department, key grip, a job which not everyone is familiar with but which greatly contributes to the look of many movies. I’ll break down what they do, their average day on set and some tips which they use to be the best in their field.
ROLE
The key grip heads the department and is responsible for planning, supervising and delegating instructions to other grips in their crew.
But what exactly do grips do?
Well, whenever you see an impressive shot in a film where the camera moves in an innovative, amazing way, a grip is behind it. So grips are in charge of planning and orchestrating camera movement and the positioning of the camera which is described or delegated by the director and cinematographer.
To do this they rely on constructing and operating a variety of rigs that the camera is attached to. Some examples of these include a dolly, a crane, a ladder pod, a Technocrane, a tripod or hard mounting the camera onto objects like a car, a motorbike or even a shopping cart.
The second duty of a key grip is rigging.
Their capacity in this regard changes depending on the working style of the country. For example in the US grips are expected to place and secure lighting stands, whereas in other countries, like South Africa where I work, the lighting department handles this.
However in almost all locations grips are expected to help out with the more complex rigging of lighting equipment. For example, securing a polecat or pole between two walls so that lights can be placed high in the ceiling. Or, rigging large scrims or blackouts, such as 20x20 foot sized frames, above actors using heavy duty stands, trusses or attaching them to a Manitou.
The third and final broad category of their work is determining the safety of a setup.
For complicated shots, difficult rigging or placing crew or cast on film equipment, the key grip is consulted on determining the safety of the setups and those involved. Having an experienced grip on the crew is important in this regard to ensure that, most importantly, there are no injuries, or damage to property, sets or gear.
This is why rental companies insist that an experienced grip be present whenever heavy duty equipment, such as a Technocrane, is rented or used on set.
Overall the role of the key grip is a broad one: carefully planning, consulting with creative crew members like the director and DP, and then practically executing their vision with a variety of rigs in a safe and responsible manner.
AVERAGE DAY ON SET
The key grip joins a project before the on set shoot begins.
In pre-production they will discuss with the DP the shots that need to be captured. Sometimes this will be done with the help of storyboards or even a pre-vis animation if the filmmakers are very specific about the kind of moves they need to do.
Some shoots are relatively straightforward and may only require basic handheld shots, while others may require complicated, tricky camera movement, custom grip rig builds or large amounts of lighting rigging that may require more planning.
Before shooting they will also go on a recce to the shooting locations with the director, cinematographer and other relevant crew members. Here they will physically plan out, measure and assess the grip logistics - such as whether a dolly can fit through a door, or what length of pipe they will need to rig to the ceiling.
The key grip will therefore arrive on set usually with a solid idea of what lies ahead for the day. They will get a shooting schedule from the 1st AD and start preparing for the first shot. This could be a dolly shot.
In this case the DP will use a viewfinder, the camera, or their eye to determine a position and start and end point for the move. The key grip will mark these points and call on his team to bring in the correct lengths of track.
They will place the track, level it, then bring in the dolly and lift it onto the track.
It should be noted that depending on the country or size of the shoot there may be a dedicated dolly grip who pushes the camera on the dolly. In some countries, such as mine however, this is usually also done by the key grip.
The grip department is made up of a key grip, who consults directly with the DP, leads the department and makes the important decisions. Next is the best boy grip, who manages things like gear inventory and oversees a team of grips, which will fluctuate in size depending on the needs of the shoot.
The key grip and their team will continue setting up ways to position and move the camera throughout the day, while also assisting the lighting team with any necessary rigging work.
Some other duties they are called on to perform may include operating a gimbal and ensuring the safety of the operator when doing handheld work. This is done by gripping, excuse the pun, a part of the operator’s body, such as their waist, and moving with them as they operate, making sure they don’t trip, lose their balance or crash into any obstacles.
TIPS
Although I’ve never worked as a grip myself, I have observed a few tricks that some of the best use on set.
Since many grip setups, such as laying and leveling track or building a crane, are very time consuming, the best grips stay a few steps ahead: they get their team to use whatever gear is available to start on the next setup before it is needed, while the key grip stays with the DP close to the camera.
For example, when shooting in a house they may start off with locked off dolly shots then move onto a tracking shot outside the house. A good grip will consult with the DP beforehand to get a position for the tracking shot and then get his team to lay and level tracks so that they are all set up when they move onto the next shot.
Finding ways to save time will ingratiate you with the DP and director as it gives them more time to shoot.
Another way of saving time is to always try and get camera moves as correct as possible every time. To do this it’s useful to use tape and make start and end marks next to the dolly wheels. That way the dolly grip knows exactly where to start and stop each move and will never waste time by over or under shooting it.
Also having a tape measure or, ideally, a laser measurer will allow you to quickly determine measurements, such as what pole length to use, or how many track lengths are required.
A last tip is to always have the right tools for the job. This is why key grips often have their own grip truck which is fully kitted out - a process that happens through years of acquiring tools that are necessary on set.
This usually includes a full set of apple boxes, packing and wedges. These useful pieces of wood are always required on set for everything from leveling a track, to propping up a product or, most importantly, serving as a seat for the director.
Grips should be prepared for everything: from having a jigsaw to cut bits of wooden board, clamps, WD-40 to lubricate moving parts, rolls of gaffer tape for making marks, and a selection of wrenches, allen keys and other tools which may be needed to adjust mechanical parts.
Actually scratch all that…I’ve just remembered the real best tip that any grip can follow. Get padded seats for your apple boxes. Have padded apple boxes and the crew will worship you!
Cinematography Style: Chung-hoon Chung
In this episode I’ll look at Chung-hoon Chung’s philosophy on photography and show some of the gear which he uses to execute his vision.
INTRODUCTION
Chung Chung-hoon is a cinematographer whose work doesn’t abide by one particular aesthetic. His photography is flexible and balanced differently for each project.
With that said, much of his work, especially with director Park Chan-wook, has taken place in the thriller or horror genres. Therefore, he does tend to treat these stories with a slightly darker eye and cinematic tone which is slightly surrealistic and elevated from reality.
In this episode I’ll look at Chung Chung-hoon’s philosophy on photography and show some of the gear which he uses to execute his vision.
BACKGROUND
Chung’s entry into the world of cinema didn’t start behind the camera. It started at the age of 5 when he was cast as the lead in a show which became a huge hit in his home country of South Korea. This acting career went on to last 15 years until he began studying at Dongguk University in Seoul where he directed several short films. He also served as his own cinematographer on these projects.
Chung began his official career as a feature film director of photography at 25, which he has continued doing ever since.
His biggest breakthrough came after Park Chan-wook’s editor put his name forward to shoot Park's next film Oldboy; Chung was already a fan of Park's prior film Sympathy For Mr. Vengeance. At first he hung up when Park’s production team called him as he thought it was a prank. An hour later he got a call from Park himself. He won the job without Park ever having seen any of his prior work.
Oldboy raised his profile as a DP in Korea and internationally and began a long working relationship with Park. Other directors he’s worked with include Edgar Wright, with whom he recently collaborated, Andy Muschietti and Alfonso Gomez-Rejon.
PHILOSOPHY
“From my viewpoint as a cinematographer, the creation is not about obsessing on making new things that no one has created before but to combine things that fit the film with the right balance.”
Getting this ‘right balance’ is one of the reasons why he does not maintain the same stylistic flairs across projects. To him, the right balance is attained by having a deep appreciation and understanding of the story. He credits his early work as an actor as giving him a solid grasp on storytelling.
“The reason I keep mentioning the drama of a piece is because only by understanding the characters and what they go through can I provide and capture all the movement I want and need to capture.”
“For example, in Oldboy I saw the overall movement as the chasing of the characters and as such there is a lot of long lens and zoom lens work, shots taken from behind and less handheld camerawork, more fluid I guess I would say. I believe if I understand the characters I can accurately build up relevant, sympathetic scenes within whatever overall genre is required.”
After gaining sufficient understanding of the story and characters, when working with Park, they’ll then set out to storyboard the film. Although he says that probably about 40% of the storyboarded shots change, he likes having them as a base to start from.
Since they’ve worked together on so many projects, Park is readily accepting of lots of his ideas. Chung credits Park as being a good director for his ability to discern and accept, or reject, ideas which he puts forward. This is what gives the films their overall style and tone.
When it comes to translating these ideas to the screen, he prefers to shoot with a single camera whenever possible, as it is quicker to light and means he never has to sacrifice any angles or lights that may have to be moved due to the placement of a second camera.
With that said he has used multiple cameras, on films such as It, for the purposes of capturing performances more easily or to speed up production.
GEAR
Although many aspects of his cinematography, like the way the camera moves, the colour palette, the grade and the focal length selection, change depending on the story and the whims of the director, there are still some elements of his style which are present throughout his filmography.
For example, many of his movies are photographed in a 2.35 aspect ratio.
In many films he uses a hard backlight with a soft fill. In It this was done by using tungsten Vari-Lite VL1000s rigged to the ceiling to provide backlight from any angle and then filling in the space by rigging 17 Kino Flo Image 85s with daylight tubes to a pipe above the set. The Kino Flos were fitted with egg crates so that the light from above didn’t spill too much onto the walls, which maintained deeper shadows and more contrast.
He’s used a similar technique, with different lights in different positions, in his other films dating back to his early work: backlighting characters with harder lights and filling in with something soft and diffused to retain detail in the shadows.
In some films his lighting is more naturalistic and in others it’s a bit more exaggerated, depending on what’s best for the story - although Chung has mentioned that keeping things looking fairly natural and real is a good way of making horror films look and feel scarier.
For interiors he often employs a subtle use of colour, particularly a mix of off-green and tungsten light, which he has done in many films to bring in a slightly sickly feeling of dis-ease.
He may use atmospheric smoke to give shape to the light.
When it comes to choosing a camera his previous work was captured on 35mm film but after the digital revolution he has mainly stuck to shooting with digital cinema cameras. He almost exclusively uses Arri cameras: like the 435 or Arricam ST for film and the Alexa XT or the Mini for digital.
He pairs these with a mix of spherical and anamorphic lenses. When shooting film he preferred sharper lenses such as Zeiss Ultra Primes or Zeiss Master Primes, but now that he shoots digitally he prefers lenses which are a little bit softer or that have vintage characteristics, such as Hawk V-Lite Vintage 74s or Panavision G-Series anamorphics. Chung carries Angenieux Optimo zooms on most films, usually the 24-290mm.
When working with the Alexa now, he uses a colour accurate monitor and a remote iris control on the camera so that he can ride the exposure to levels that he is happy with by wirelessly changing the iris.
Chung works with a Rec 709 LUT which may be tweaked live on set by his DIT based on his feedback. His main focus is on getting the contrast right on set and has said that he usually leaves the overall colour in the grade down to the director and their preferences.
For example, a lot of his work with Park is quite punchy and saturated with strong contrast and deep shadows, while on Me and Earl and the Dying Girl the grade is more washed out and desaturated.
He isn’t the biggest fan of the steadicam and often prefers to use a dolly and tracks for camera moves with greater stability and control.
CONCLUSION
From comedies to thrillers to horror films, Chung Chung-hoon is familiar with multiple genres. His greatest ability is being able to read the correct cinematic tone for the story and characters and adapt his technical choices to get the right balance.
Sometimes the right balance is manic and stylised. Sometimes the right balance is natural and real. Sometimes it’s dark and moody.
Chung is able to separate himself from any one aesthetic and craft a cinematic language which is always right for the story.
The Most Popular Cinema Cameras (Part 3)
In the third part of this video I’ll go over a few more of the most popular cinema cameras which have been used in the film and commercials industry. I’ll look at a variety of formats and go over their ergonomics as well as the look which each can create.
INTRODUCTION
The camera is an important tool in the cinematographer’s toolbox. Different cameras offer different mediums and influence the look and capabilities of how footage is shot.
In the third part of this video I’ll go over a few more of the most popular cinema cameras which have been used in the film and commercials industry. I’ll look at a variety of formats and go over their ergonomics as well as the look which each can create.
IMAX MSM 9802
Let’s start with a niche but pretty incredible format - Imax - in particular the Imax MSM 9802. Almost all film cameras in other formats, such as 16mm or 35mm, use a vertical pulldown, where the film leaves the magazine, passes through the gate from top to bottom and re-enters the exposed side of the magazine.
Imax is different.
To get the greatest possible exposure area of the negative, Imax cameras use 65mm film which leaves the magazine, passes through the film gate horizontally and then re-enters the magazine. 35mm usually exposes an area which is 3 or 4 perforations long; Imax exposes 15 horizontal perforations for each frame of footage.
This technique produces a negative that is around nine times larger than the 35mm format, and three times larger than regular 70mm film - making it the largest, highest resolution motion picture film format available.
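For a rough sense of scale, here's a sketch comparing approximate exposed frame areas. The dimensions are commonly published gate sizes assumed for illustration, and the exact ratio depends on which 35mm aperture you compare against:

```python
# Sketch: approximate exposed negative area per frame for the formats above.
# Dimensions are approximate published gate sizes (assumed for illustration).

FRAMES_MM = {
    "IMAX 15-perf 65mm": (70.4, 52.6),
    "5-perf 65mm":       (52.6, 23.0),
    "35mm 4-perf":       (24.9, 18.7),
}

imax_w, imax_h = FRAMES_MM["IMAX 15-perf 65mm"]
imax_area = imax_w * imax_h

for name, (w, h) in FRAMES_MM.items():
    area = w * h
    print(f"{name}: ≈{area:,.0f} mm², {area / imax_area:.0%} of the IMAX frame")

# Depending on which 35mm aperture is used for the comparison, the IMAX frame
# works out to roughly eight to ten times the area of a 35mm 4-perf frame.
```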
The camera uses 65mm film stock which is then printed to 70mm for screening, with the extra 5mm used for the camera’s audio tracks. It can also be scanned and converted to digital files usually at high resolutions like 4K, 6K or 8K.
It has incredible fidelity, contrast and no trace of grain with an estimated horizontal resolution of around 12K.
But this monster of a format does come with a few limitations, one of the biggest of which is price. A single 1000’ roll, which lasts for a measly 200 seconds of shooting time, costs almost $1,300.
Renting an MSM 9802 will cost about $16,000 a week. Factor in development costs and you’re probably looking at the most expensive motion picture format in the world to shoot - roughly four times the cost of 35mm.
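Using just the two figures quoted above, here's a quick back-of-the-envelope sketch of what the stock alone costs per second of shooting (excluding processing, scanning and the camera rental):

```python
# Sketch: rough IMAX stock cost per second, using the figures quoted above.
roll_cost_usd = 1300   # approximate cost of one 1000' roll
roll_runtime_s = 200   # approximate shooting time per roll at 24fps

per_second = roll_cost_usd / roll_runtime_s
print(f"≈ ${per_second:.2f} per second, or about ${per_second * 60:,.0f} per minute of footage")
# ≈ $6.50 per second, or about $390 per minute of footage
```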
Its outrageous price tag, as well as its large, ungainly ergonomic design, which is needed to house the 65mm stock that passes through the camera, means that it is usually used for specific set pieces in films and is always shot very sparingly and deliberately.
This isn’t exactly what you’d call a run and gun camera.
The cameras come with their own rulebook and standard protocol dictates that a specific Imax trained DOP is hired to shoot with the camera. However, on The Dark Knight, Christopher Nolan and Wally Pfister tore up the rulebook and decided to shoot it without the aid of a dedicated Imax DP.
They broke Imax protocol, which stipulates maintaining an enormous amount of headroom to avoid the audience craning their necks in order to see the top of the image in most Imax theaters. Instead of putting the central crosshair on top of the head, they placed it on the eyes for close ups.
Due to the enormous resolution of the negative, small errors like make up flaws or camera tape on the floor can be picked up far more easily.
Imax cameras need to be used with large format lenses that will cover the 65mm negative without vignetting. For example, Hasselblad medium format stills lenses are a common choice. The huge Imax negative, combined with the longer focal lengths which need to be used, creates an extremely shallow depth of field, making the focus puller’s job very challenging.
The MSM is the lightest Imax camera and is therefore preferred for rigs that require a smaller camera build, such as stabilised heads or even a ridiculously heavy steadicam build.
Although these bulky, hefty cameras were designed to be shot in studio mode from a dolly, tripod or stabilised head, convention has occasionally been broken and cinematographers like Hoyte Van Hoytema have even shot with them handheld.
Imax is therefore mainly used by productions with a large budget, who are aware of its ergonomic difficulties, challenging focus and short roll time but which require a 65mm film image with the highest possible fidelity available on the market.
ARRI AMIRA
In past episodes I’ve looked at the Alexa Mini and the Mini LF. Well, the Amira is another camera in Arri’s lineup, but for a slightly different market. Although it has the same Super 35 ALEV III CMOS sensor as the Mini, it is used less frequently in feature films.
Ergonomically, what makes it different is that the Amira is set up for single operator use, meaning it is ready to shoot without having to be built or configured much. It has great ergonomics for handheld work right out of the box, making it perfect for productions where the operator shoots from the shoulder, such as documentaries, TV shows, or independent films that work with smaller crews and require an easy to operate setup.
Its dust-proof, splash-proof, sealed-off electronics make it a hardy camera and a good pick for the challenging environments of documentary shoots, while its sensor still is able to produce the classic Arri digital look. This makes it possible to easily cut Amira footage alongside shots taken with other Arri cameras.
For example, on Nomadland cinematographer Joshua James Richards paired the Amira with floating gimbal footage shot on the smaller Alexa Mini, which cuts together seamlessly.
Its ergonomics reflect that it’s designed for use by solo operators. It has three internal FSND filters, and the switch for them is well placed just under the lens mount, making it easy for operators who are pulling their own focus or changing their iris to quickly adjust filters on the fly.
It also comes with three XLR inputs so that operators can record sync sound straight into the camera to cut down on post production synchronising work and be able to mix their own sound quickly and easily on the camera - which is often a necessity for documentaries.
The rest of its accessories, such as the top handle and viewfinder menu, are solid, easy to use and pretty standardised across the Arri range. The Amira can shoot up to 4K UHD in ProRes, or in ARRIRAW at 2.8K with a license. It also has the interesting option to use fewer photosites and shoot in a Super 16 mode with 16mm lenses.
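As a rough illustration of what that Super 16 windowed mode means optically, here’s a small sketch; the gate widths below are commonly quoted approximations rather than Arri’s exact sensor specifications.

```python
import math

# Approximate active widths of the two formats (assumed, commonly quoted values).
SUPER_35_WIDTH_MM = 24.9
SUPER_16_WIDTH_MM = 12.4

def crop_factor() -> float:
    """How much tighter the same lens frames on the S16 window vs full Super 35."""
    return SUPER_35_WIDTH_MM / SUPER_16_WIDTH_MM

def horizontal_fov_deg(focal_length_mm: float, sensor_width_mm: float) -> float:
    """Horizontal angle of view for a simple rectilinear lens."""
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_length_mm)))

if __name__ == "__main__":
    print(f"Crop factor, S35 -> S16 window: ~{crop_factor():.1f}x")
    # A 12mm lens on the S16 window frames roughly like a 24mm on Super 35
    print(f"12mm on the S16 window: {horizontal_fov_deg(12, SUPER_16_WIDTH_MM):.0f} degrees")
    print(f"24mm on Super 35:       {horizontal_fov_deg(24, SUPER_35_WIDTH_MM):.0f} degrees")
```

In other words, a 12mm lens on the windowed sensor frames roughly like a 24mm would on the full Super 35 area, which is why 16mm glass, with its smaller image circle, still makes sense in this mode.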
Overall, the Amira’s larger form factor meant that the Mini became Arri’s most popular cinema camera, due to its greater modularity and its ability to be more easily configured into multiple lightweight rigs and forms, such as a gimbal build or rigs in tight spaces.
However, its larger size and operator focused ergonomics make it perfect for solo shooters who require the Alexa look.
ARRICAM
The Arricam, introduced in 2000, is Arri's most modern, flagship 35mm sync-sound camera line, which replaced the large Arriflex 535.
There are two variations: the Arricam Lite or LT and the Arricam Studio or ST.
The Arricam is a competitor of Panavision’s Millennium XL line, and the two share the title of most used modern 35mm camera today. Like the Millennium it can shoot 3 or 4 perf 35mm film and comes with an array of updated conveniences, such as: a clear, modern viewfinder, LDS, a low operating noise of 20 decibels or less for 4 perf and an updated video tap system.
The Arricam Integrated Video System outputs a 1080p video feed that is colour accurate and removes the grain and texture from the ground glass. This means focus pullers, cinematographers, directors or any other crew member or client can watch a fairly accurate video approximation of what the final 35mm shot will look like once developed.
On large enough productions an Arricam ST and an LT may be carried for different applications. But if there is only the option to rent a single camera, the LT is often the go-to, as it is a more modular camera that can be configured into a greater variety of builds.
A LT Magazine Adapter can be used to mount a 1000’ studio magazine onto the Arricam LT for the ability to roll for longer than the 400’ film magazines it comes with.
The ST usually comes with a 1000’ mag, and, as its name suggests, is better for studio builds where the camera is on a rig where weight and size don’t matter like a tripod or dolly.
An advantage of the ST is its ability to shoot at higher frame rates up to 60FPS, while the LT tops out at 40FPS.
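To make the perf and magazine maths concrete, here’s a quick sketch using the standard figure of 64 perforations per foot of 35mm film; the mag lengths and frame rates are just the common values mentioned above.

```python
# Runtime of 35mm magazines at different pulldowns and frame rates.
PERFS_PER_FOOT = 64  # standard figure for 35mm film

def runtime_minutes(mag_length_ft: int, perfs_per_frame: int, fps: float) -> float:
    """How long a magazine lasts at a given pulldown and frame rate."""
    frames_per_foot = PERFS_PER_FOOT / perfs_per_frame
    feet_per_minute = fps * 60 / frames_per_foot
    return mag_length_ft / feet_per_minute

if __name__ == "__main__":
    for mag in (400, 1000):
        for perfs in (4, 3):
            t = runtime_minutes(mag, perfs, 24)
            print(f"{mag}' mag, {perfs}-perf @ 24fps: {t:.1f} min")
    # High frame rates eat stock quickly - e.g. at the ST's 60fps ceiling:
    print(f"1000' mag, 4-perf @ 60fps: {runtime_minutes(1000, 4, 60):.1f} min")
```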
The Arricam is therefore a great modern 35mm camera that comes with all the necessary bells and whistles for modern shoots using film.
CANON 1014 XL-S
Finally, for something different let’s look at a camera that shoots in a format which isn’t as popular, but which still gets plenty of use to this day - Super 8.
When it comes to modern Super 8 cameras, the Canon 1014 XL-S is top drawer. Although 8mm isn’t the most common format, it is used for its lower fidelity look, or in combination with other formats on mixed medium shoots for artistic or creative effect.
Its tiny negative, very high grain levels and lack of resolution and sharpness mean that it is rarely used as the sole medium on a project, although it has been used that way on some music videos, commercials and no budget films in the past.
Due to its historic use as a low-cost medium to capture so-called ‘home video footage’ by regular people and families, it has associations with nostalgia.
The Canon 1014 is easy to operate but also comes with about all the additional extras possible for people who are more serious about shooting Super 8. It can shoot both sync sound or silent 8mm film, which is easily loaded into the camera by inserting a cartridge without the need to load it in the dark.
The camera comes with a hot shoe mount on top which can hold a microphone and record sound through a 3.5mm mini jack.
The camera comes with a 6.5-65mm f/1.4 zoom lens which has a 72mm front thread for screw in filters. Although I’ve also stuck standard 4x5.6” filters straight onto the lens before with gaffer tape.
Its ease of use and its many capabilities, such as shooting at various frame rates, including a slow-mo setting of around 36FPS, manual aperture control and automatic ISO setting read from the 8mm cartridges, make it one of the most popular Super 8 cameras for professional use today.
Why Does It Always Rain In David Fincher's Films?
After watching a few Fincher films I realised that one way he presents a cinematic world which is an exaggerated version of real life is through the weather.
INTRODUCTION
Cinema is an enhanced version of reality.
Even the most naturalistic ‘based on a true story’ movie bends the real world for the sake of more engaging cinema. For example, every film uses lighting to amplify reality for emotional effect. Without it, stories would lack visual appeal and appear bland.
Every movie carefully selects a costume for its characters and uses a deliberately designed set to present information in the frame. If the strict and complete truth was always presented on screen it would diminish the emotional and thematic statements which the filmmaker is trying to make.
After watching a few Fincher films I realised that one way he presents a cinematic world which is an exaggerated version of real life is through the weather.
SEVEN
In Seven it’s always raining. From scene to scene to scene. In fact, in just about the only scene where it isn’t pouring with rain, the sun is baking down sweltering warm light with harsh shadows. So why is the weather so often so unrelenting in Fincher’s Seven? Well, I think he uses weather in the same way that cinematographers use light or that costume designers use fabric. It’s a textural, tonal tool that unconsciously amplifies emotion and lifts the story out of mundaneness.
The challenge for any artist has always been how to take a form, in this case a series of projected 2D images, and make the viewer experience a real, life-like, emotional reaction akin to how they feel in the real world.
A word that Fincher himself has used in interviews is ‘visceral’ - defined as relating to deep inward feelings rather than to the intellect. Rain produces deep inward feelings - damp, dirty, cold, wet, chilled to the bone. None of these associations are particularly pleasant. Trudging through rain without an umbrella feels harsh. This is why it’s perfect for the grisly, harsh story of tracking down a serial killer.
Weather reflects the emotional tone of the story.
Take this scene. Reading the screenplay that Fincher was tasked with turning into a movie reveals nothing about it pouring with rain.
The description for this scene reads: “The trunk of Somerset’s car contains a homicide kit...Somerset takes out rubber gloves and pockets them, closing the trunk. He walks to join Mills and Officer Davis, a uniformed cop.”
It’s a very factual description which only talks about actions, not tone, nor the weather. Fincher could have set it on a regular sunny day with the sounds of birds chirping in the background. It would have saved on the expense of hiring a bunch of rain machines and the set up time it took.
Instead he opens the sequence with the unscripted shot of Mills standing cold, drenched, hunched, shifting uncomfortably holding two takeaway coffees. The rain beats down. The presence of rain adds a very different psychological feeling than it would if Mills was standing around, hands in his pockets on a blissful summer day. It puts the characters, and us as the audience on edge, setting the tone for the grisly, harsh scene which is about to come.
When asked about the film, Fincher noted that Seven was never meant to be a movie which used violence for the sake of violence. Rather it was meant to capture a psychologically violent tone that gets the audience to imagine inhumanity.
Creating a tense, visceral tone, with things like weather, that hangs around like a mist the entire movie is arguably more effective at creating a feeling of inhumanity than simply showing something gruesome happen. The build up is more terrifying than the pay off.
THE GIRL WITH THE DRAGON TATTOO
With all this analysis and psychological guesswork out the way let’s turn to a quote from Fincher about the rain in Seven: “The movie cost $15 million below the line. We wanted to shoot in Oakland. Beautiful clapboard houses. But we didn’t have enough time. So it’s all downtown LA. The reason it rains all of the time is that we only had Brad Pitt for 55 days, with no contingency. So we did it to stay on schedule, because we knew that if it ever really rained we would have been f-----.”
So I guess that theory goes out the window. Maybe the only role that weather serves is a practical one to maintain continuity when time is limited. But maybe not. I still think his choice of using the rain is one that has a tonal impact on the story.
Seven isn’t the only movie of his which features extreme weather conditions. In The Girl With The Dragon Tattoo, from the lighting down to the colour grade, it always feels cold. This bone chilling cold seeps into the story, which also deals with inhumanity and violence, in a visceral way. Indeed, many scenes take place in the snow.
While you may think that maybe this time Fincher brought in a bunch of snow machines to avoid any weather problems again, the opposite in fact was the case.
“Well, it informed [the story]. You needed to feel it. You know, a lot of the snow and a lot of the weather effects are CG, and I didn’t know if I’d have dared to make it as bold and crazy. I would have thought it was over the top had I not lived through it.”
In many scenes, CGI is used to amplify the weather and feeling of cold in the story. Creating that thing that Fincher is so good at doing - an overall, visceral tone.
Because weather is difficult to predict accurately and rarely consistent from day to day, CGI is a way of reliably controlling how much of the elements filmmakers would like to add to their sets.
Like many of Fincher’s characters, who are drawn to details, he himself is a filmmaker to whom details matter. Using visual effects to do it in post production affords him maximum control.
Although imbuing the film with cold throughout certainly puts you in the dark headspace of the characters, I think that not all weather symbolises the same emotions. Just as colour usually has symbolic meaning which is not necessarily tied down, so does weather. It’s raining in both of these scenes but the emotional tone differs pretty significantly between them.
THE SOCIAL NETWORK
Even in an arguably less visceral more conventional drama like The Social Network weather is used in subtle ways at key moments in the story to alter emotion.
For example, after the protagonist gets dumped by his girlfriend, he steps outside the pub and takes a walk around the campus. The ground is damp. It has just rained. This subtle touch adds to the dampness of his mood. It was done using a technique called a ‘wet down’, where hoses are used to spray a textured surface, such as tar or paving, prior to shooting.
This both creates highlights on the ground from backlight, breaking up darker parts of the image, and, importantly, adds an atmospheric touch.
Or, when one of the characters finds out that his idea has been stolen he goes outside into the cold night and crosses a bridge littered with patches of snow. This little detail is a decision clearly made by the director to create a cold, desolate feeling.
Or in the build up to a big argument scene in the movie one of the characters arrives and, you guessed it, it’s raining again.
Now I’m not arguing that all weather in films that you see is always intentional. There are definitely situations where you have to just shoot in whatever weather you have. But I would argue that Fincher, and many other filmmakers out there, are as deliberate as possible about how they use weather. They try to control it as much as they can.
Fincher said in an interview that filmmakers usually only get about 60% of what they want to film on set - and that’s on a good day.
He makes up for this gap with special effects using rain machines, fake snow or CGI weather to control the tone of the story.
So the next time that you’re writing a screenplay, directing or even watching a film, take a second to think about what the weather is doing and how it is used to enhance reality and sculpt a very specific filmic tone and visceral feeling.
What A Focus Puller Does On Set: Crew Breakdown
Since the camera department is what I’m most familiar with, having worked as a camera assistant in the film industry myself, I’ll break down the role of the 1st AC or Focus Puller, to uncover what they do, their average day on set and some tips which they use to elevate their game.
INTRODUCTION
In this series I go behind the scenes and look at some of the different crew positions on movie sets and what each of these jobs entails.
Since the camera department is what I’m most familiar with, having worked as a camera assistant in the film industry myself for years, I’ll break down the role of the 1st AC or Focus Puller, to uncover what they do, their average day on set and some tips which they use to elevate their game.
ROLE
A Focus Puller, or 1st Assistant Camera, is responsible for maintaining the camera lens’ optical focus by manipulating the focus gear on the barrel of the lens. Cinema lenses come with distance markings, which are usually displayed in feet and inches, but sometimes also in metric units depending on the country.
Pulling focus is the act of using these distance markings to set the focus distance on the lens so that it matches the distance between the subject and the camera’s focal plane or sensor. For example, if an actor is 6 feet away from the focal plane and the 1st AC sets the lens to 6 feet, then the actor will be sharp and in focus.
Focus can also be judged by eye, by referring to sharpness on a monitor.
Usually the focus will shift during a shot, for example when an actor moves closer towards the camera or if the camera moves closer towards its subject. It is the job of the 1st to time the focus pull so that the distance of the focus is always correct throughout the movement.
In most cases cinematographers will hand pick their 1st AC, based on their track record, prior relationship, or a recommendation from a trusted source. The technical act of capturing the image lies largely in their hands, so DPs are careful about their selection.
Contrary to what the name may suggest, pulling focus is far from being the only job that the focus puller performs. Some other responsibilities include: building the camera and configuring it depending on how it will be rigged, troubleshooting any technical camera glitches, setting the exposure which is dictated by the cinematographer, changing lenses, providing technical advice about camera gear to the DP and being responsible for the overall running of the camera.
Although focus pulling is largely a technical pursuit, it does require an artistic eye and an emotional sensitivity to the story. Focus pullers choose, literally, where to direct the focus of the audience’s eye, anchoring their attention to a particular part of the frame. Often the focus is on the actor, but not always.
Who should be focused on in a two shot featuring two cast members? Should the focus change, or rack, between them? Do we stay on one of them the whole time? When should the focus shift? These are all decisions usually made instinctively by the 1st, sometimes with feedback or suggestions from the DP between takes.
The first thing that many non-film people ask when the role of the focus puller is explained is: Why not just use autofocus? My smartphone can do it. Why can’t a tremendously expensive cinema camera do it?
There are several reasons.
First, autofocus can’t read human emotion or nuance. An autofocus motor will set focus based on what it’s told by software. ‘This thing is in the middle, focus on that!’ However, as mentioned, sometimes the language of cinema will require focus to drift or sharpen or defocus unconventional parts of the image for artistic effect. People can make reactive, creative decisions about focus better than software can.
Second, autofocus is not always smooth. Over the years audiences have become familiar with the more organic way that focus pullers change the focus using a human touch. Autofocus can sometimes feel overly jerky, too quick and precise, or laggy. Although new technology in this field, like the Preston Light Ranger 2, is rapidly advancing the capabilities of autofocus.
Third, it’s faster. Rather than having to programme a specific area in the frame for the focus to hit, or struggle with any technical glitches, 1sts can do it instantly.
And finally, this way of working has been established as the industry standard over decades. High end cinema cameras, accessories and even the way shoots are managed and structured (such as having a rehearsal before shooting) are all designed with focus pullers in mind.
AVERAGE DAY ON SET
Before the day of the shoot, a focus puller does a gear check. They will get a list of camera gear which the cinematographer has requested including lenses, the camera and accessories.
During the gear check they will assemble the camera, shoot lens tests using a chart and make sure that all the gear they have is functioning normally. If there are any issues with the gear they will consult with the rental house to fix them or source a replacement. They will also liaise with other crew members on the shoot, such as the DIT, VT, sound recordist and grip, to ensure that their gear works properly with the camera build. For example, if there’s a crane shot that the grip has planned, the 1st may need to source an extended power cable or build the camera with an extra iris motor so that exposure can be altered remotely.
Once this is done they will mark up the lenses. This is a process where the distance markings on the lens are synced by either programming or manually transcribing the distances onto the focus rings which they’ll be using.
There are two kinds of follow focus systems which can be used. A traditional follow focus or a wireless follow focus. Both use similar principles but a wireless follow focus has become the standard preference nowadays.
The focus puller mounts the follow focus or wireless motors onto camera rods positioned under or above the lens. The follow focus gears or motor gears are aligned with the gear teeth on the cinema lens. The follow focus is then turned, either physically or using wireless motors and a handset, to change the focus distance on the lens. For each lens, the distance markings should match the markings on the focus ring. So that as the 1st turns the ring, the distance on the ring matches the focus distance on the lens. If you’re really working with no budget, have a super old school cinema lens or a stills lens without a focus gear, or don’t have time to attach and calibrate a follow focus, the focus can also be adjusted by directly turning the barrel of the lens.
The two industry standard wireless follow focus brands are Preston and Arri. But other brands such as RT Motion or Tilta can also be used.
On the day of shooting the focus puller will chat with the DP about what configuration the camera is needed in for the first setup and what focal length they would like to start on. They’ll then assemble the camera, put on a lens, a mattebox and any necessary filters.
If working wirelessly they’ll build their focus setup which will include a wireless handset, a monitor and a receiver which will get fed a live wireless image transmitted from the camera.
The camera will then be placed on a rig, such as a dolly, or handed over to the grips for more complex rigging or handheld work. The first shot is lined up.
The cinematographer will then work out what exposure they want and communicate that to the 1st. There are different ways of doing this. For example the DP may ask for a shooting stop, such as T/2, and then expect the focus puller to add or subtract the adequate ND filtration to achieve that stop throughout. The cinematographer may ask them to change exposure by adjusting digital camera settings such as EI. Or, if working on film, the DP may use their light meter to measure the amount of light, such as T/11 at 250ASA, and then expect the focus puller to calculate the correct amount of ND filters to add so that the lens may be shot wide open.
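As a rough sketch of that last calculation, using the standard rule of 0.3 of ND optical density per stop; the metered and shooting stops here are hypothetical example values.

```python
import math

# The meter says one stop, the DP wants to shoot at another, and the
# focus puller makes up the difference with ND filtration.
# Example values (T/11 metered, shooting wide open at T/1.4) are assumptions.

def stops_difference(metered_t: float, shooting_t: float) -> float:
    """How many stops brighter the exposure gets by opening from metered_t to shooting_t."""
    return 2 * math.log2(metered_t / shooting_t)

def nd_density_needed(metered_t: float, shooting_t: float) -> float:
    """ND optical density required to compensate (0.3 of density per stop)."""
    return 0.3 * stops_difference(metered_t, shooting_t)

if __name__ == "__main__":
    stops = stops_difference(11, 1.4)
    print(f"Opening from T/11 to T/1.4 adds ~{stops:.1f} stops of light")
    print(f"ND density to compensate: ~{nd_density_needed(11, 1.4):.1f} (i.e. an ND 1.8)")
```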
Once a shot has been lined up and everything is in place, the 1st assistant director will call for a rehearsal. The focus puller will use this time to practice and make any marks that they need to, usually coordinating with the actor’s marks that the 2nd AC will put down.
For example, they may see that the actor starts the shot at 10 feet from the camera and ends the shot at 5 feet. They then know what distance to start at and what distance to end on and can concentrate on getting the timing in the middle right as they roll the focus from 10 feet to 5 feet in time with the actor’s movement.
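Here’s a toy sketch of that timing problem; the move duration and the assumption of a perfectly constant walking speed are hypothetical, since real pulls are done by feel rather than by formula.

```python
# The actor walks from a 10' start mark to a 5' end mark over a few seconds,
# and the focus ring has to land on the right distance at each moment.

def focus_distance_ft(start_ft: float, end_ft: float, move_duration_s: float, t: float) -> float:
    """Where the lens should be focused t seconds into a constant-speed move."""
    t = max(0.0, min(t, move_duration_s))   # clamp to the duration of the move
    progress = t / move_duration_s          # 0.0 at the start mark, 1.0 at the end mark
    return start_ft + (end_ft - start_ft) * progress

if __name__ == "__main__":
    for t in (0, 1, 2, 3, 4):
        d = focus_distance_ft(10, 5, 4, t)
        print(f"{t}s into the move: focus at {d:.2f}'")
```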
Working off marks and distances like this is the traditional way of preparing for a focus pull. It was essential when ACs were working on 35mm film, before the assistance of an HD monitor feed, as they had no optical reference to see if they were sharp and had to judge focus distance purely by eye.
As you can imagine this is an extremely difficult skill that took years of training and practice to master.
Now, with HD, and even 4K digital video feeds, focus pullers have the option to either: judge focus purely from looking at an image on a monitor, use a combination of judging distance, using marks and going off a monitor, or, rarely, pull focus completely old school without a monitor.
Film also tended to be slightly more forgiving than digital. This is partly because DPs working on film would usually give focus pullers a workable stop, for example T/4 instead of shooting wide open at T/1.3, and partly because the focus rolloff on film is more organic and forgiving, with a more graduated transition between the sharpest point of focus and what is out of focus. So, for example, if you are out by 6 inches on the focus on 35mm it may still look sharp enough.
However, because of the high resolution of digital cameras and the precise, modern optics they are paired with, if the focus is not completely on point it will probably look soft.
This makes using a monitor to judge, or check, focus an important tool in the digital AC’s arsenal.
Nowadays, because of digital cameras not requiring expensive film stock for each take, rehearsals are becoming increasingly rare and the 1st AD may ask ‘to just shoot one and see what happens’ without a rehearsal.
When ‘roll camera’ is announced the 1st AC will roll. If there’s sync sound, they will pull focus to the clapper board and then rack back to the actor or subject once it has been clapped, ready to pull focus for the scene. As the scene plays out, the blocking may change or the actor may miss their mark or improvise. It’s the 1st AC’s duty to be able to adjust their pulling accordingly.
After completing a shot the focus puller may be asked to change the lens, filter or rebuild the camera, for a different configuration such as a Steadicam shot.
When shooting on film the focus puller is also required to change and lace the new magazine. They’ll check the film gate before moving on to a new shot to make sure that there isn’t any dirt or dust which would ruin the take.
TIPS
The cliche of the grumpy focus puller may have a grain of truth to it. This comes from it being a very stressful job on set.
If too high a percentage of their pulling is out of focus and their performance isn’t up to scratch - or if the film negative in fact gets scratched - they stand the risk of getting fired.
So let’s go over some tips to avoid that from happening.
Every focus puller should know how to use marks and judge distance. An old trick is to carry a tape measure around with you, continually calculate distances in your head, then take out the tape to measure and see if you are correct.
Taking marks is a great way of laying a foundation so that you know where you stand. One way is to use start and end marks as mentioned, another is to use physical marks on the set for in-between distances. For example, if an actor walks up to a table halfway through a shot, the focus puller will measure the distance from the camera to the edge of the table so that they have an in between point.
These marks can either be mentally noted, physically marked on the focus ring with a marker or with what are known as arrows - pointed markers cut out of gaffer tape - or marked using different finger positions on the focus ring.
The only time focus pullers might deliberately set the lens to a distance that doesn’t match either subject is when they split the focus. This is when they are required to keep two objects sharp which are at different distances from the lens. In this case the focus is set somewhere between the two distances, until a middle ground is found where they are both acceptably sharp.
Always test to make sure everything is working properly and you are happy with your camera build before lining up a shot. No offence to the assistant directors out there but when they say “Just bring out the camera so we can have a look and you can build it later”...they rarely mean it.
Since camera technicians are dealing with electronics and software, which have a tendency to bug out from time to time, they need to be good at solving technical issues very quickly. When troubleshooting, always be methodical. For example, if a monitor isn’t working, first check the battery, then try a new SDI cable, then go to the menu to make sure the camera is outputting a feed, or power cycle the transmitter. Following steps logically and calmly will solve the problem as quickly as possible.
The job is largely psychological. You have to always be confident and calm. The more you begin to panic the worse and less accurate your pulling will get.
It also helps to know what variables will make your job more difficult or easier and what tools you need to assist you. Longer focal lengths, shooting the lens wide open, focusing on objects very close to the lens or using large format cameras will make the depth of field more shallow.
So if the DP decides to use a 150mm lens at T/2 with the actor running straight at the camera you need to control the situation because getting pin sharp focus will be extremely difficult. For example: you could ask the AD for a moment to take marks, you could use a focus bug for a distance indication or you could kindly ask the DP if they could stop down a bit to increase the depth of field.
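To put rough numbers on that scenario, here’s a sketch using the standard depth of field approximation; the circle of confusion and the subject distances are assumed values, and real lenses will vary.

```python
# Why a 150mm at T/2 with an actor running at camera is such a nightmare.
# Standard thin-lens depth-of-field approximations; the 0.025mm circle of
# confusion is an assumed value often used for Super 35.

def depth_of_field_mm(focal_mm: float, t_stop: float, subject_mm: float,
                      coc_mm: float = 0.025):
    """Return (near limit, far limit, total depth) of acceptable focus in mm."""
    hyperfocal = focal_mm ** 2 / (t_stop * coc_mm) + focal_mm
    near = subject_mm * (hyperfocal - focal_mm) / (hyperfocal + subject_mm - 2 * focal_mm)
    if subject_mm >= hyperfocal:
        return near, float("inf"), float("inf")
    far = subject_mm * (hyperfocal - focal_mm) / (hyperfocal - subject_mm)
    return near, far, far - near

if __name__ == "__main__":
    FT = 304.8  # millimetres per foot
    for feet in (20, 10, 6):
        near, far, total = depth_of_field_mm(150, 2.0, feet * FT)
        print(f"Actor at {feet}': roughly {total / 25.4:.1f} inches of usable depth "
              f"({near / FT:.2f}' to {far / FT:.2f}')")
```

By the time the actor is 6 feet away the usable depth is well under an inch, and since depth of field grows roughly in proportion to the T-stop, even one stop of extra depth from stopping down buys you real margin.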
A final way to make life easier is to be surrounded by a strong team.
Having a competent 2nd AC that ensures the camera is always running, gear is well organised, reloads happen smoothly, actors are always marked and the clapperboard is always put in the correct place, will decrease your workload and allow you to focus on the most important part of the job - the focus.