
How Greta Gerwig Shoots A Film At 3 Budget Levels


INTRODUCTION

Every filmmaking choice is made on its artistic merits, but it is also shaped by budget. This leads to some movies being loose, improvisational, realist and character focused, while others create their own worlds: larger than life and precisely manicured.

To show this, let’s look at three films made by director Greta Gerwig at three wildly different budgets - the no budget Nights And Weekends, the mid-range budget Lady Bird, and the Hollywood blockbuster Barbie - to see how her approach to filmmaking shifted along with her filmmaking resources.


NIGHTS AND WEEKENDS - $15,000

Gerwig’s first interaction with director Joe Swanberg came when she had a minor role in his movie called LOL.

This began a creative partnership that resulted in her writing and starring in another of his low budget films. These micro budget, independent movies of the early and mid 2000s were grouped together as part of a new filmmaking movement dubbed Mumblecore.

They are distinguished by a heavy focus on naturalism, characters, dialogue and improvisation, and a lack of a traditional cinematic style or look.

Swanberg and Gerwig’s next collaboration, Nights and Weekends - which they co-wrote, co-directed and starred in - fell into this Mumblecore category.

“It’s about a long distance couple who have some problems and it’s kind of tracked over a year of their relationship. And it was shot over 18 months.”  - Greta Gerwig

Most mainstream movies are heavily reliant on a structured plot, with characters that bring that story to life. Mumblecore films flip that convention by focusing on characters and their dialogue as the core of the film - with the plot being quite minimal and a byproduct of the actions of the characters.

Although this focus on the relationship of characters over plot had been done before in films such as Before Sunrise, normally it was accompanied by a more traditional filmmaking aesthetic. 

Because Nights and Weekends had almost no budget, they dispensed with these cinematic conventions and worked extremely loosely - almost like a documentary - bringing on Matthias Grunsky, a documentary and indie cinematographer, to shoot the first half of the movie.

“We didn’t have a script we had, like, an outline of certain scenes that we needed to shoot to tell the story and roughly maybe had an idea of what’s going to happen on the day. 

Greta and Joe would basically go for a coffee. Before they left I would talk with them about roughly where they will be, let’s say the apartment. So I would light that apartment by myself, not knowing exactly what’s going to happen and then they would come back and we would start rolling: very improvised because no one really knew what was going to happen.” - Matthias Grunsky, Cinematographer 

In filmmaking some of your major expenses are normally actors, crew, locations, production design, gear rental and post production. 

Shooting in this super improvised, naturalistic and loose style meant that they could limit the crew to only a cinematographer and their producer who recorded sound, the cast to basically two actors, the locations to borrowed apartments or public spaces and the gear to a single handheld HD video camera.

This low budget methodology imposed the naturalistic, documentary feel of the film - and, vice versa, this super character focused, realistic style suited the methodology.

Audiences for films like these were usually very niche, which made them difficult to sell or make money from. As a result, Nights and Weekends was crafted for a film festival run - to garner critical and festival acclaim - rather than for a more commercially focused, widespread cinema release.

LADY BIRD - $10 Million

Following this experience, Gerwig shifted her focus to acting for a number of years, before she returned to write and direct her first solo feature - Lady Bird.

“The working title for the movie was Mothers And Daughters. To me that was the core of it and I feel like every scene I wanted to keep going back to that theme even if it literally wasn’t between a mother and a daughter. I wanted to keep exploring that because I think your relationship with your parents is how you first understand love.”  - Greta Gerwig

Lady Bird was written in a far more structured way, conforming to a genre (the coming-of-age drama) in a way that her much looser, more improvised first film never did.

The screenplay had much more mainstream appeal and could be better marketed to a cinema going audience, so it was able to raise a mid-level indie film budget of $10 million from IAC Films.

This budget meant that the movie could be shot and produced with a much stronger visual style than her more haphazard looking, no budget first film.

“The biggest thing I think a director can do is create almost a bubble of magic safety for their actors and for their department heads and hold a perimeter so that they feel safe to play and bring their whole selves and also with the actors that they feel that they own those roles.”  - Greta Gerwig

One of the department heads she brought on to shoot the film was cinematographer Sam Levy, who this time had a few more resources to work with than just improvised dialogue and an HD camera from 2006.

The visual direction or perimeter that she gave Levy to work within was that the movie should feel like a memory. 

To try and evoke this visually, Levy chose to shoot on an Alexa Mini, and, instead of shooting in the sharper, higher fidelity 3.4K ARRIRAW format, he decided to shoot in the lower resolution 2K format.

On top of this, they wanted to use grain and texture to evoke more of an organic feeling of memory. But, instead of just adding a film grain emulation in the colour grade, as is normal, they added digital video noise that they pulled from the Alexa.

They did this by recording a black signal on the Alexa, pulling the noise from that recording, and overlaying it on top of the image. Levy and his colourist Joseph Bicknel could then amplify or reduce the noise, especially in the midtones - the same idea as bumping the gain on older ENG cameras up to 12 or 16 dB to create more video noise and texture.
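The underlying idea is simple enough to sketch. Below is a rough illustration in Python with numpy, assuming an 8-bit image and a ‘noise plate’ recorded with the lens capped. The midtone weighting here is my own assumption for illustration, not the exact recipe used on the film.

```python
import numpy as np

def overlay_camera_noise(image, noise_plate, gain=1.0):
    """Overlay sensor noise pulled from a black frame onto an image.

    `noise_plate` is a frame of the camera's own noise (recorded with
    the lens capped), centred around zero and added back on top of the
    picture so it contributes texture rather than brightness.
    """
    noise = noise_plate.astype(np.float32) - noise_plate.mean()

    img = image.astype(np.float32) / 255.0
    # Weight the noise towards the midtones: strongest at 0.5, fading
    # out in the deep shadows and bright highlights (an assumption).
    midtone_weight = 1.0 - np.abs(img - 0.5) * 2.0
    out = img + gain * (noise / 255.0) * midtone_weight
    return (np.clip(out, 0.0, 1.0) * 255.0).astype(np.uint8)
```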

Another feeling Gerwig wanted was that the camera’s perspective shouldn’t be intensely subjective, super tight and handheld, nor should it be extremely wide, objective and detached. She wanted it to feel like the audience was an arm’s length away from the characters.

They created this sense of distance - a proscenium in the frame - by playing out most of the scenes in stable medium shots, close ups that weren’t too tight, and wide shots that weren’t too far away.

Using these techniques, along with a more traditional filmmaking methodology, Lady Bird presented a much more cinematic look that elevated the story beyond realism, rather than the rough, purely naturalistic feeling of Nights And Weekends.

After the movie was finished, A24 bought the worldwide rights to distribute it. It turned out to be a very successful acquisition: the film was a critical success and made close to $80 million at the box office against its $10 million budget.


BARBIE - $145 Million

“I wanted it to be obviously dazzling and hilarious and this kind of imaginative cornucopia of crazy Barbiness, but I also felt like actually in that state of, kind of, being in that sugar high is the place where you have the opportunity to be even more emotional because people aren’t really expecting it. So in a way, by design, it allows you to get at things that are harder because you have the cover of a glitter parade.” - Greta Gerwig

Gerwig partnered with the iconic global toy brand Barbie to write and direct her first studio feature at an enormous, blockbuster level budget.

Visually, she drew a lot of influence from Technicolor musicals with their vivid, colourful palettes and hand painted backdrops.

They therefore decided to film the portions of the movie set in Barbieland on a studio soundstage, where they built giant Barbie sets and erected enormous painted backdrops - at such a large scale that they caused an international shortage of pink paint. The size of these practical set builds was a massive financial undertaking.

Gerwig hired cinematographer Rodrigo Prieto to collaborate on lighting this artificial world and executing these sequences practically.

“I wanted to do practical builds for everything and I also wanted to use, anytime I could use, whatever film technique from like 1959 was. So I had to build the entire thing in miniature then shoot the miniature, then composite that into the image. I spoke with Rodrigo Prieto, Sarah Greenwood and Jacqueline Durran (the DP, set designer and costume designer). I talked to them for a year before we were even in like prep, prep.” - Greta Gerwig 

The first lighting decision they made was that Barbieland should always be sunny. To create a sense of separation between the characters and the flat painted backgrounds, Prieto decided to create a beautiful backlit look. His team placed enormous artificial sun sources called Soft Suns in each corner of the set, and had one roaming sun that he could move around on a lift.

This meant that whichever direction a character was facing, he could turn on a sun source positioned behind them and give the scene a backlight.

The spill from these lights was so strong that it would hit the set and bounce back to fill in the actors. The problem was that, because almost the entire set was pink, this bounced fill light became very magenta, which turned the faces of the actors pink.

So, to kill some of this magenta while still maintaining some bounce, he draped the parts of the set that weren’t being photographed in a grey material. This returned a natural bounced light without a tint.

Other visual decisions that made Barbieland more exaggerated and theatrical included frontal, symmetrical framing that placed Barbie in the literal centre of each shot, a LUT that emulated the rich, saturated colours of Technicolor, and wide angle lenses on a large format Alexa 65 to exaggerate and expand the size of the sets in a cartoonish way.

They contrasted this look with photography in the real world that used a more neutral colour palette, a more subdued and desaturated film negative LUT and longer focal length lenses that compressed the background.  

Barbie completed Greta Gerwig’s evolutionary journey as a director from micro budget realism to blockbuster cinematic fantasy: in the subject matter of the characters and story, in the size of her budget and audience, and in how she went about creating the visual world of each film.


3 Basic Camera Settings Every Cinematographer Should Know


EXPOSURE TRIANGLE


The first step to being a good cinematographer is to understand how images are created. 

You need two things: a photosensitive surface, like a camera sensor or a piece of film, and light. How much light hits the sensor and the sensitivity of that sensor determines how an image looks. This is called exposure.  

If too much light is used the image will be too bright, and if too little light is used then you won’t be able to see anything at all.   

Therefore, cinematography is all about controlling light.

In this video I’ll take you through three of the most important camera settings that will allow you to control the amount of light that is let into the camera: ISO, shutter and aperture. This is sometimes called the exposure triangle.  

If you’ve ever used a consumer camera you may know that there is an auto mode that will adjust all of these settings for you. So, you may be thinking, why not just use that?

I’d say there are two main reasons why changing the settings yourself, called manual exposure, is the preferred method for anyone who wants to take image making seriously.

Number one. When you put the camera in auto mode you are basically giving it permission to constantly change and adjust various settings at will to get you to an evenly exposed image. 

The problem with this is that we will often be able to see the video footage changing from dark to light as it adjusts settings until it finds a level it is happy with. This can be very distracting and breaks the illusion of filmmaking, taking the audience out of the immersion in the story as they say to themselves, “OK, this was obviously shot on a camera.”

Number two. Cinematographers want to be able to precisely control how they want the image to look and not leave that decision up to the whims of a camera. They may want a comedy to look light and bright, a drama to look dark and moody, have a deep depth of field where everything in the image is sharp, or make the background soft, blurry and dreamy.


ISO

ISO, ASA and EI are all terms that refer to the sensitivity of a camera’s sensor, or a film stock, to light. The higher the ISO number, the more sensitive the sensor is and the brighter the image will be. Inversely, the lower the ISO, the less sensitive it is and the darker the image.

Therefore, it’s common to set the camera to a lower ISO when shooting outdoors in bright sunlight, and a higher ISO when shooting in darker conditions at night.

However, higher ISOs come with a tradeoff. The more you increase this setting, the more digital noise - those little dancing, grainy specks - you will see.

Most digital cinema camera manufacturers will recommend shooting at a base ISO or a native ISO. Although this differs for each manufacturer, base ISOs usually offer the best trade off between a high quality, low grain image and a good dynamic range.

For example, most Arri cameras have a native ISO of 800. Some cameras, such as the Sony Venice 2, also have what are called dual base ISOs: a lower native ISO of 800 and a more sensitive native ISO of 3,200.
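A useful mental model is that each doubling of the ISO brightens the image by one stop. A quick sketch in Python (the helper below is illustrative, not part of any camera SDK):

```python
import math

def iso_offset_in_stops(iso: float, base_iso: float = 800) -> float:
    """Exposure offset, in stops, of a given ISO relative to a base ISO.

    Each doubling of ISO adds one stop of brightness - and, on most
    digital cameras, more visible noise.
    """
    return math.log2(iso / base_iso)

print(iso_offset_in_stops(3200))  # +2.0 stops brighter than ISO 800
print(iso_offset_in_stops(400))   # -1.0 stop darker
```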

So, when setting up your camera to shoot each shot you need to pick an ISO based on two factors. Firstly, consider how much noise or grain you want the image to have, and secondly decide how sensitive you need your camera to be to shoot in the lighting conditions.

And, if you’re still in doubt, it’s probably best to set and leave the camera at whatever base ISO the manufacturer recommends - keeping the noise as clean as possible - and adjust other settings instead.


SHUTTER

Instead of changing the sensitivity of the sensor, another way of altering exposure is by controlling how much light hits the sensor. This can be done with the shutter: a mechanism that can open to expose the sensor to light, or close to prevent light from entering the camera.

This setting determines the duration for which a camera’s sensor is exposed to light. 

In photography this is measured in shutter speed: the time a single frame is exposed to light, by quickly opening then closing a rectangular blind shutter. Usually this is measured in fractions of a second, like 1/100.

Cinema cameras need to shoot multiple frames within one second, for example 24 frames. Therefore, they need a differently designed shutter: one shaped like a circle that can spin. 

More or less light can be let through depending on what angle is cut out of the circle. For example, 180 degrees may be cut out, so that each frame is exposed to light for half the time by the open shutter and not exposed to light when the closed half passes the gate.

Or, a 45 degree angle can be used, where only a narrow sliver of the shutter is open. This means that each frame is exposed to light for 1/8th of the time. Less light means the footage will be darker, and it also makes motion feel more choppy.

A 180 degree shutter angle is most commonly used because it gives the footage a normal feeling of motion that is not too jerky or too fluid.

Modern digital cinema cameras don’t have a physical shutter that opens and closes, but they use the same principle by telling the camera’s sensor when to collect light and when not to.

If using a camera with a shutter speed setting, a good rule of thumb is to set it to be double the frame rate - to get a normal feeling of motion. So if shooting at 25 frames per second, we will set the shutter speed to 1/50. Then if the camera increases its frame rate to 50 frames per second, we will double that and set our shutter speed to 1/100.
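Shutter angle and shutter speed convert directly between each other, which is where that rule of thumb comes from. A small Python sketch (the function name is mine):

```python
def shutter_speed_from_angle(fps: float, shutter_angle: float = 180.0) -> float:
    """Exposure time per frame, in seconds, for a given shutter angle.

    A 180 degree shutter exposes each frame for half its duration -
    which is exactly the 'double the frame rate' rule of thumb.
    """
    return (shutter_angle / 360.0) / fps

print(1 / shutter_speed_from_angle(25, 180))  # 50  -> set 1/50
print(1 / shutter_speed_from_angle(50, 180))  # 100 -> set 1/100
print(1 / shutter_speed_from_angle(25, 45))   # 200 -> darker, choppier motion
```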


APERTURE

Moving even further away from the sensor, we get to our final important camera setting that controls exposure: aperture.

This is the size of the opening at the front of the lens through which light passes to get to the sensor. The size of the hole is controlled by iris blades narrowing or opening and is measured either in f-stops for photography lenses or t-stops for cinema lenses.  

The wider this opening is, the more light is let through and the lower the stop value is. On the other hand, the narrower the iris is, the less light it lets through and the higher the stop number is.

Each stop represents a doubling, or halving of the amount of light that hits the sensor. So, to make the image brighter we need to decrease the stop number by doubling the light let through - for example from 4 to 2.8. Or to make the footage darker and cut down on half the light let through we can increase the stop from 2.8 to 4.
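Because light transmission scales with the inverse square of the stop number, this doubling and halving falls straight out of the maths. A quick Python sketch (names are mine):

```python
def exposure_ratio(stop_from: float, stop_to: float) -> float:
    """How many times more (or less) light passes after a stop change.

    Light transmission scales with the inverse square of the stop
    number, so each full stop (a factor of ~1.414 in the number)
    doubles or halves the light.
    """
    return (stop_from / stop_to) ** 2

print(exposure_ratio(4, 2.8))    # ~2.0 -> twice the light
print(exposure_ratio(2.8, 4))    # ~0.5 -> half the light
print(exposure_ratio(5.6, 1.4))  # 16.0 -> four stops more light
```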

The aperture setting is a very effective way of changing how bright or dark an image is. However, aperture also has another, secondary effect on the image. 

The wider the iris is, and the lower the stop number, the shallower the depth of field will become. In other words a low stop number means that more of the background will be out of focus.

Since cinematographers often like to shoot with a shallow depth of field that makes the background look soft and focuses on a single subject, the camera is often set to a low stop number, such as 2 or 2.8.



THE FINAL SETTING

But, this runs us into a snag. We’ve set the ISO to the base of 800, we’ve set the shutter angle to get normal motion at 180 degrees or 1/50th of a second, and we’ve set our aperture at 2 to get a shallow depth of field. However, the image is still too bright.

How do we make it darker and get it to the level of exposure that we want without changing our other settings? We can use what are called ND, or neutral density, filters. 

These are pieces of tinted glass which are either placed in front of the lens, or inside the camera behind the lens, that decrease how much light is let into the camera without changing the colour of the image. 

Adding a stronger tinted ND filter will take away more light, while a weaker strength filter will take away less light. Each stop of light taken away is represented by a value of 0.3.
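ND strength is a logarithmic density: transmission is 10 to the power of minus the density, so every 0.3 of density halves the light - one stop. A small Python sketch (function names are mine):

```python
import math

def nd_stops(density: float) -> float:
    """Stops of light an ND filter removes (0.3 of density = 1 stop)."""
    return density / 0.3

def nd_transmission(density: float) -> float:
    """Fraction of light an ND filter lets through: 10 ** (-density)."""
    return 10 ** (-density)

print(nd_stops(0.9))                                  # 3.0 stops
print(round(nd_transmission(0.9), 3))                 # ~0.126 -> about 1/8 of the light
print(round(math.log2(1 / nd_transmission(0.9)), 2))  # ~3 stops, which agrees
```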

So, most cinematographers will set the camera with a consistent ISO (to keep the same grain), shutter (to keep the same feeling of motion) and aperture (to keep a shallow depth of field) for most shots, then use ND filters to cut down on how much light enters the camera.


Why Movies DON'T Need To Be Shot With Expensive Cameras

In the wake of the announcement that The Creator was shot on a Sony FX3, let’s examine how and why it is now possible to shoot high budget films on prosumer cameras.

A BRIEF HISTORY OF FILMMAKING TECHNOLOGY

Throughout the history of cinema, filmmaking has been inextricably linked to technology. 

Each technical innovation in film gear has unlocked new opportunities for how stories can be told. Early silent films used intertitles as a way of expressing speech without sound. The introduction of sync sound brought spoken dialogue to the screen. Colour photography elevated cinema from a more impressionistic black and white into a more realistic world. Smaller handheld cameras allowed filmmakers to move shots organically, like a fly on the wall.

With each and every advance in technology came a new stylistic application.  

One of the largest technical leaps forward came when digital cinema cameras, for the most part, replaced cameras that used 35mm film. Before this, dating back to the start of cinema, movies had been shot on celluloid. 

The widespread adoption of digital cinema cameras on industry film sets happened in the early 2010s after Arri introduced the original Alexa. 

This camera was groundbreaking as it offered filmmakers a digital medium that had great colour reproduction and dynamic range - which, for the first time, rivalled the quality and look of 35mm film. But let’s backtrack a little bit.


GAP BETWEEN PROFESSIONAL & CONSUMER CAMERAS

Before digital cinema cameras, there was an enormous gap between the quality of the image that consumer video cameras recorded onto tape - the kind the average person could buy in a store - and the 35mm film captured on professional industry cameras. Renting these high end cameras, and buying and processing 35mm film, came at a significant price.

It costs a large amount of money to make a high end movie: you have to pay a big premium for well known actors, hire a large technical crew for weeks or even months at a time, and pay for the construction of large sets. These costs usually far outweigh the gear rental.

Therefore, although it was expensive to rent a high end cinema camera, within the scope of an already expensive production it still made financial sense to capture the best image possible - one that could be sold and presented to an audience as more of a spectacle.

The thing is, as time went on and digital cinema cameras replaced film cameras, that enormous gap in image quality between cheap consumer video cameras and expensive high end digital cinema cameras, got exponentially smaller and smaller.  

This trend followed a prediction made by Gordon Moore in the 1960s, who observed that the number of transistors on a chip - and with it, computing power - would roughly double every two years. Not only would this allow equipment built on computing power, like digital cinema cameras, to get exponentially better, it would also get increasingly smaller and cheaper on a relative basis.

This has meant that in the last few years, manufacturers have been able to produce video cameras aimed at the consumer or prosumer market which are affordable, small, yet produce images that are incredibly high quality.

Whereas before the quality of video from a high end camera was many multiples better than that of consumer cameras, now the difference between them is much more difficult to notice for the average viewer - if they notice at all.

THE CREATOR USING A SONY FX3

In this changing technical landscape, news emerged that I think may go down in history in the same way that previous technologies altered the course of filmmaking. It was announced that The Creator - a high budget, sci-fi, Imax distributed blockbuster - would be shot on a Sony FX3: a prosumer video camera.

“For a long time the difference between shooting on film and what you might call consumer or prosumer, the stuff you might buy in a store is getting narrower and narrower. And that gap is getting indistinguishable to some extent.”

“We shot The Creator on a new camera, it’s a Sony camera, it's called an FX3. Pretty much the whole movie is shot on this.”  - Gareth Edwards, Director of The Creator

Shooting with the FX3 allowed them to work with smaller crews and therefore film in 80 real shooting locations, rather than spending truckloads of money on building large sets in a studio and shooting against green screens.

Then in post they basically took the real location footage and used VFX to paint the sci-fi landscapes that they wanted on top. The director, Gareth Edwards, claims this method of working turned a $300 million film into an $80 million movie. 

Shooting with a prosumer camera isn’t new to Edwards as his first film, Monsters, was shot on a Sony EX3 video camera.  

Because so much can be achieved in the grade and created in post production nowadays, you want the footage to be as malleable as possible. What you need from a camera can sort of be boiled down into three main categories: resolution, dynamic range and the bit depth of the colour.

If we take these specs from the $3,900 full frame Sony FX3 and the $75,000 Arri Alexa Mini LF, you can see how small the technical gap between them is. The FX3 can record in 4K resolution, while the Mini LF can do 4.5K. In terms of dynamic range, Sony reports 15 plus stops, while Arri claims 14 plus stops. When it comes to bit depth, the FX3 records 10-bit 4:2:2 S-Log3 internally, whereas the Arri can record 12-bit ProRes 4444 XQ in Log C.

While of course the Arri does outperform visually, especially in the colour department, the point remains that the gap between them is pretty slim when comparing a prosumer and a professional camera - and seems to be closing more and more every year. 

Also, when you have millions of dollars to polish the footage in post production on top of that, the difference in quality between the two cameras becomes even more negligible.


ADVANTAGES OF PROSUMER CAMERAS

So, what are some of the advantages of using a smaller prosumer camera like an FX3?

Firstly, the size of the little Sony means that the grips gear needed to move the camera cinematically, and the supporting accessories, are greatly reduced - in cost, size and crew.

In the BTS, you can see them working with what looks to be a modified Ronin gimbal rig with handles. The operators can use this to adopt a very flexible shooting style, run and throw the camera around, getting stable, smooth movement - while also eliminating the time, manpower and cost of needing a dolly or a Steadicam that would be paired with a high end cinema camera. 

Or, in other shots you can see them using a mini jib arm with the Ronin as a stabilised head. Again, saving hugely on the time, cost and crew needed to run a high budget industry alternative like a Technocrane.

Working with a little camera also downsizes the accessories needed. For example the BTS shows the focus puller using a low cost, little Tilta Nucleus instead of the larger follow focus systems from Arri and Preston that are normally paired with high end cameras.

The FX3 also has the ability to boost the base ISO to 12,800 and shoot in incredibly low light conditions - whereas the Alexa is usually kept at 800 ISO. 

“The Sony FX3 for example: amazing prosumer camera. I’m about to use that on a film that I’m shooting in Thailand. And it’s so ridiculous that it’s so small and it can [record in] such a high ISO for such a small camera.” - Greig Fraser, Co-Cinematographer of The Creator

This high ISO allowed them to work in a very flexible style. For exterior night scenes they could discard the large, high output lighting fixtures that normal cinema cameras require. 

Instead they used battery powered LED carpet lights rigged to a boom pole, which the best boy could carry alongside the talent, handholding the source to give enough illumination.

“The actors had this freedom. If suddenly the shot looked better over here Nancie would suddenly move around with the light and Oren would be talking to Nancie through a little microphone. And we were just really fluid, organic. We’d shoot for like 25 minutes at a time. It was, like, a very liberated experience and it looks like a much bigger film than it cost because of that reason.” - Gareth Edwards, Director of The Creator

SO, WHY DOES HOLLYWOOD STILL USE HIGH END CAMERAS?

Does this spell the end of the Alexa forever and mean that all future Hollywood productions will use the FX3? Well, no, probably not. 

Why are almost all high end movies and series still shot on variations of the Alexa, the Venice and other high end cinema cameras?

For one, many filmmakers and especially cinematographers still like to try and shoot the highest quality image possible - even if the gap between that and a prosumer image is not as big as it used to be.

Secondly, the film industry has been built over many decades with crews, technicians and filmmakers trained to work in a very specific manner. For technicians this means working with production style cameras, grip rigs and high output lighting setups. 

This legacy has also been established on the production side. Studios know what they will get when shooting with an Alexa, and producers are used to budgeting for gear in terms of an Alexa rental fee.

The entire industry is set up around working with high end production cameras and I don’t think that this will change any time soon. 

However, what we may see is that features from these prosumer cameras - such as their high base ISOs and smaller form factors - filter into the higher end cameras, and that this prosumer gear will increasingly be adopted across lower budget projects.

Then, eventually, as Moore’s law continues to take effect, we may find that productions will only need to work with a camera that can easily fit into the hand.


How To Use A Clapperboard The Right Way

Since I used to work as a 2nd AC for a number of years, it was my job to use the slate or clapperboard on professional film sets. Let’s go over how to use a clapperboard the right way.

INTRODUCTION

This little piece of equipment is perhaps the most iconic symbol in filmmaking, but it also has a number of pretty important practical purposes.  

Since I actually used to work as a 2nd AC or loader for a number of years, it was my job to use this little guy on set. 

So I thought I’d track down my old slate and provide a bit of an explainer: what the slate is for, what all the information on the board means, and the ins and outs of how a slate is really used on an industry level set.


WHAT IS A SLATE?

The slate, sticks, or clapperboard are all names for this piece of translucent acrylic attached to a pair of wooden sticks that can go clap.

It has two main uses on a film set: synchronising the sound with the video footage and conveying information. 

While broadcast, documentary or solo shooters may use a camera that records sound and video together as one single file, on an industry film set the picture is recorded on a cinema camera while the sound is recorded separately on a mixer by a dedicated sound recordist.

A slate uses both a visual cue, the action of the sticks coming together, and an audio cue, the sound of the clap, as a reference marker for synchronising them together. Practically this means that editors can find the moment that the sticks collide, and the moment that the clap is heard, and use that as a starting point for both clips.

Now the video and the sound are in sync.

The second use of a slate is to give the editor information. All of the numbers, letters or even symbols that are written on the acrylic section of the slate in a black whiteboard marker are a way of passing on information to the editor.

Each shot in a movie has a unique series of numbers that identifies it, and places it within the order of the story. Properly labelling each take and having a system to find them is important because there may be hundreds or even thousands of different takes on a production. 

Slate information allows the editor to both know which scene in the screenplay the shot fits into and gives each shot a label so that it can be talked about and identified by the editor, as well as correspond to any notes from the script supervisor or camera report. 

On top of this, the 2nd AC may write additional information on the board, such as to notify the editor that VFX work is intended to be added to the shot or to convey the camera settings that were used. 

There are two types of slates that can be used: a smart slate, which has a timecode display running on it, and a regular old clapperboard or dummy slate, which can only use a clap to sync the sound. The timecode on a smart slate can be synchronised with both the camera and the audio mixer.

How this works is that the audio recording device has a kind of digital clock that outputs an exact time in hours, minutes, seconds and frames - called timecode. The sound person will then ‘jam’ the timecode by connecting a timecode cable between the audio recorder and the smart slate. This sets the timecode on the slate to be exactly the same - frame perfect - as the time on the audio recorder. The same is then done to the cinema camera, so that all devices are running the same timecode as the audio recorder.
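Because timecode is just a frame count dressed up as a clock, syncing becomes simple arithmetic once everything is jammed to the same source. A minimal sketch in Python, assuming non-drop-frame timecode (function names are mine):

```python
def timecode_to_frames(tc: str, fps: int = 25) -> int:
    """Convert 'HH:MM:SS:FF' timecode into an absolute frame count."""
    hh, mm, ss, ff = (int(part) for part in tc.split(":"))
    return ((hh * 60 + mm) * 60 + ss) * fps + ff

def sync_offset(camera_tc: str, audio_tc: str, fps: int = 25) -> int:
    """Frames to shift the camera clip so it lines up with the audio."""
    return timecode_to_frames(audio_tc, fps) - timecode_to_frames(camera_tc, fps)

# Two clips jammed from the same clock: the difference between their
# start timecodes is all the editing software needs to sync them.
print(sync_offset("10:04:12:08", "10:04:10:00"))  # -58 frames
```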

This means that every time the smart slate is opened on camera it displays a running timecode that can be synchronised perfectly to the same timecode as the audio recorder - thus syncing the footage - with the physical clap being a backup way of syncing the sound.  

SLATE INFORMATION

When a screenplay is written, each scene will be given a number. The first scene is 1, the second scene is 2, etc. However, during the process of shooting, scripts are almost never shot chronologically. 

So, to avoid confusion and to stay on top of what scene is being shot and when, it is labelled on the board with a scene number. 

When the editor gets the footage they can then group all of the shots with the same scene number into the same bin.

Next we’ll move on to the slate number. Most scenes will be shot with multiple camera angles and lenses to cover the action. Every time the camera moves on to a new shot size or angle, or switches lenses, this is called a new slate, or slating up.

There are two systems for representing this: the US system and the UK system. The US system is probably the most common and uses letters to represent a slate.

For example, the first board for the wide shot on scene 26 will just say 26, with no letter. When the camera moves onto the second shot, a close up, the board will slate up to 26A, then when a third shot is filmed, the board will slate up again to 26B.

When speaking about slates we use the NATO phonetic alphabet: 26 alfa, 26 bravo, 26 charlie. Although sometimes 2nd ACs can also come up with creative names that start with the first letter of the slate.

You also usually skip the letters I and O, so that they aren’t confused for a one and a zero. 
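For illustration, here’s how that lettering scheme could be generated in Python - skipping I and O, and wrapping to double letters after Z, which is a common convention although productions vary:

```python
import string

# Letters used in the US slating system: the alphabet minus I and O,
# which read too much like 1 and 0 on a board.
SLATE_LETTERS = [c for c in string.ascii_uppercase if c not in "IO"]

def slate_name(scene: int, setup: int) -> str:
    """Label for the nth camera setup of a scene (setup 0 has no letter)."""
    if setup == 0:
        return str(scene)
    idx, letters = setup - 1, ""
    while True:
        letters = SLATE_LETTERS[idx % len(SLATE_LETTERS)] + letters
        idx = idx // len(SLATE_LETTERS) - 1
        if idx < 0:
            return f"{scene}{letters}"

print(slate_name(26, 0))   # 26
print(slate_name(26, 1))   # 26A
print(slate_name(26, 2))   # 26B
print(slate_name(26, 25))  # 26AA (after 26Z)
```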

If shooting a VFX shot, this can either be written on the board or a V can be added before the scene number. The same can be done with an X to show that this was shot by a second unit, or with an R to indicate a scene that is being reshot. Different productions and different script supervisors may also use different letters or methods for communicating these cases. 

The UK system uses numbers instead of letters. So the first shot of the production will be slate one, the next shot will be slate two, and this slate count keeps going up with each new shot setup or lens change until the production ends. 

This means that you don’t always have to announce the scene number, only the slate number - although the scene number will usually also be written on the board. 

Then, for each slate setup, usually multiple takes will also be recorded. This starts at one for the first take and keeps going up. Then when you change to a new slate you start over with take one again. So, an example of a typical board may read 26A, take three.

At the front we also have the roll number. This comprises a letter followed by a series of numbers. Each letter represents which camera shot the footage. If there is only one camera it will be called A cam, and this letter goes up for every extra camera that is used. So if it’s a multicam shoot with one main camera, one Steadicam and one action camera, the main camera will be A cam, the Steadicam will be B cam, and the action camera will be C cam.

Then each card that goes through the camera and gets transferred to a hard drive is labelled with a series of numbers. So the first card shot by A cam will be A001, and the 112th card shot by B cam will be B112. If shooting on film, the same principle applies to each new roll of film that is used: the 8th roll of film shot by A cam will be A008.

All of this information about the roll number, scene, slate and take will be recorded by the script supervisor along with any extra notes, such as what takes are good, what lenses were used for each shot, or what takes may have had lapses in continuity. 

The board is also labelled with fixed information like the production company, title of the production, director’s name, cinematographer’s name and the date. Because this information doesn’t change much, it is usually either done using a printed label from a label maker, or handwritten with a permanent marker on tape.  

HOW TO USE A SLATE?

When 2nd ACs do a slate they need to be as quick and as accurate as possible. 

If you put the board into shot too early, it will block the framing and view of the camera operator. If you leave the board in shot too long while the camera is rolling then you are wasting valuable card recording time or film stock that is running through the camera.

What 2nd ACs do is carefully listen to the instructions of the 1st assistant director.

They will call ‘roll sound’, to which the sound recordist will press the red button and say ‘sound speed’. The instruction ‘roll sound’ is a good cue that it’s time for the 2nd AC to insert the board into the frame. 

This gives the 1st AC a moment to pull the focus to the board so that it is sharp and can be read by the editor. Next the AD will say ‘roll camera’. 

Once the 1st AC is ready they will press record on the camera and say ‘speed’ or ‘rolling’. Because the board is already in frame and sharply in focus when the camera rolls it means that the first frame that the editor will see for each take will be the slate. This helps the editor immediately identify the shot number without needing to search through the takes for it.

Once the 2nd AC hears the focus puller say ‘speed’ they can announce the board. If using the US system this means saying the scene and slate letter followed by the take: for example, ‘26 bravo take three’. 

The focus pullers I’ve worked for like to then say ‘mark’ or ‘mark it’ to confirm the camera is rolling, the board is sharp and it can be clapped. Then, as soon as they’ve clapped the board, the 2nd AC will get out of the shot and duck for cover as quickly as possible.

Running through the whole process, it’ll sound like this: 

‘Roll sound.’

‘Sound speed’

‘Roll camera’

‘Rolling’

‘26 alfa take three’

‘Mark’

*clap*

In most cases you want the clap to be quite loud so that it is clearly picked up by the boom and lapel microphones, but not to be deafening. Loaders also modify the strength of the clap depending on how close it is to the actors. 

If shooting a tight close up and the board needs to be right in front of the actor’s face, it’s best to notify the boom operator ahead of rolling that you’ll do a soft clap. They will then get the mic close to the board and you can do a nice gentle clap in front of the actor’s face that doesn’t put them off. 

For tight shots, some loaders carry a cute little insert slate which is small enough to be fully read in a close up.

If they fumble the clap or it is too soft to be picked up, the 2nd AC will then loudly announce ‘second clap’ and hit it again.

When positioning the slate in frame, you want it to be as readable as possible - not too small and far away or too large and close to the camera. The rule of thumb when shooting with a Super35 sensor is that the focal length of the lens should be equivalent to how many feet away from the camera the slate is. So, if using a 50mm lens, the board should be 5 feet away. If using a 100mm lens the board should be 10 feet away.
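That rule of thumb is easy to express. A tiny sketch in Python (the helper is mine, and it’s only a starting point - see the adjustments below):

```python
def slate_distance_feet(focal_length_mm: float) -> float:
    """Rule-of-thumb slate distance on a Super35 sensor.

    The board sits roughly one foot from the camera for every 10mm of
    focal length: 50mm -> 5ft, 100mm -> 10ft.
    """
    return focal_length_mm / 10.0

print(slate_distance_feet(50))   # 5.0
print(slate_distance_feet(100))  # 10.0
```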

If shooting with anamorphic lenses or with large format cameras that have a wider field of view, the board can be placed a bit closer to the camera than this rule. Good loaders are able to place the board in frame without looking at a monitor based on knowing the focal length of the lens alone.      

There are three different sound variations that can be done, all of which are signified a different way with the clapper board. The most common is sync sound, where the sound recorded is intended to be synchronised with the recorded footage. To convey sync sound, the clapperboard must always enter the shot with the sticks open, showing the editor that it’s meant to be clapped. 

This is often incorrectly done in pop culture references where the board starts closed, then opens in shot and claps.

Wild sound means that sound will be recorded for the take, however this sound does not have to be synchronised with the footage. This is sometimes done when recording ambient noise or sound effects.

Loaders will write ‘wild’ on the board and this time either enter the frame with the board closed, or with their fingers holding the board between the sticks - this will allow the timecode on a smart slate to run, but indicates that the board will not be clapped.  

Then there is MOS - which signifies that no sound will be rolled on this take. Again, a big MOS will be written on the board and the board will enter and exit frame closed - without a clap. 

If multiple takes in a row will be recorded within a single continuous clip, then the loader may write ‘series’ on the board or put an arrow next to the take number.

Next we have the end board. This is used when the 2nd AC is not quick enough to fetch their board and complete a front slate after ‘roll sound’ has been called, or when no front board has been requested - such as when working with animals or children that may be spooked by a loud clap.

In this case, after ‘roll sound’ is announced, the 2nd AC will say ‘end board’ loudly at the beginning of the take. The scene will then play out in full. Just as ‘cut’ is called to end the take, the 2nd AC needs to quickly and loudly announce ‘end board’ again. They will then quickly put the board in front of the camera with the sticks open, but in an upside down position. They will announce the board - ‘26 alfa take three, end board’ - give it a clap, then spin the board so that the editor can read it.

Putting a board in upside down means that the board signifies whatever was previously shot. For example, if no slate was done on a take, then the board can be placed in frame upside down with the sticks closed, a new clip can be rolled, then the board is spun to the correct orientation. This signifies that this board is the label for the previous take.

If shooting with multiple cameras that can all be synced with one board, then the roll number of each camera should be written on the slate, such as A008 and B007. In this case the loader will say ‘alfa, bravo common mark’ before clapping.

If multiple cameras are being used and there is a 2nd AC for each camera, then each loader will announce their camera, in alphabetical order before clapping. For example, ‘26 apple take three alfa mark’, ‘bravo mark’.  

Finally, if a director decides that they only want to record a small portion of the scene, such as a specific line, then P/U will be added next to the take which stands for pick up. This will be announced as ‘26 alfa take three pick up’.


How LUTs Can Elevate Your Cinematography


INTRODUCTION

If you’ve ever shot something in log and looked at that footage straight from the card, you’ll know that it looks extremely flat and visually unappealing. But don’t panic.

This is because that log footage needs to be processed through a specific colour workflow in order to elevate how it looks. Part of this workflow involves using what is called a LUT. 

If you’ve at all dived into the YouTube rabbit hole of LUTs, you may have been left a little confused, with the added expectation that I will start trying to sell you my special LUT pack straight after the intro…Don’t worry I won’t. 

Instead, I’ll try to provide an overview to clearly explain the colour workflow process, what LUTs are, and how they can be used to improve the look of your footage.

WHAT IS A LUT?

The reason that cinematographers choose to shoot in RAW or with a flat colour profile is that it offers the most colour flexibility in post production, with the widest dynamic range.

Shooting with a colour look already applied or baked into the footage that comes out of the camera leaves minimal room for colour correction to be made or a different look applied to the footage later.

Shooting in a flat colour profile, on the other hand, means that you can later alter the colour and exposure to a much greater degree, manipulate the image, and easily make corrections - like changing the white balance or the exposure - without the footage falling apart.

This is all well and good, but what does this have to do with LUTs and what are they?

LUT stands for ‘lookup table’ and is a way of adjusting the colour and tones in an image. The way I like to imagine a LUT in my brain is in terms of layers.

At the bottom layer we have the raw footage that is recorded by the camera. When you download the footage from the card onto a hard drive this is what you will get. As we mentioned, when working with cinema cameras, this is usually recorded in a flat, desaturated colour profile.

A LUT is an extra layer that can be applied on top of the bottom layer. This LUT transforms each pixel’s colour value to give the footage a new look. Different LUTs can be created that output different looks: such as a more standard, natural look, a warm, romantic look, or a look that tries to emulate a film stock.
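The name is literal: a lookup table maps each input value to a stored output value. Here’s a toy illustration in Python of a 1D LUT (real grading LUTs are usually 3D, looking up R, G and B together, but the principle is the same):

```python
import numpy as np

def apply_1d_lut(channel: np.ndarray, lut: np.ndarray) -> np.ndarray:
    """Apply a 1D LUT to one colour channel (values in 0-1).

    Each input level is looked up against the table's sample points,
    with linear interpolation in between.
    """
    positions = np.linspace(0.0, 1.0, num=len(lut))
    return np.interp(channel, positions, lut)

# A toy 5-point LUT that lifts the midtones - a crude "look".
toy_lut = np.array([0.0, 0.3, 0.6, 0.85, 1.0])
footage = np.array([0.0, 0.25, 0.5, 0.75, 1.0])
print(apply_1d_lut(footage, toy_lut))  # [0.   0.3  0.6  0.85 1.  ]
```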

The reason that I like to think of the raw footage and the LUT as separate layers, is because when using a cinema camera, the LUT is not baked into, or combined with the raw footage. Rather the flat footage is recorded onto the card, while the LUT exists as a separate file that can be applied to the footage or deselected at will.

Because the raw footage is so flat it is difficult to judge things like exposure or colour temperature by using it as a reference on a monitor. To get around this, cinema cameras can apply a LUT as a reference on top of the raw footage that the camera will record, so that the cinematographer can better imagine the final image.

If this same LUT is later applied on top of the flat, recorded footage during the colour grade in software such as DaVinci Resolve, then the image will look the same as it was viewed on set with the same reference LUT.

Alternatively, different types of LUTs, other than the reference LUT that was used for exposure on set, can also be chosen and applied on top of the raw footage in the grade. 

If there is a colourist, they may choose to either use a LUT as a starting point for the grade and then make additional colour corrections on top of that, or they may prefer to start from scratch and build a new look during the grade. 

3 WORKFLOW LEVELS

Before I discuss the way in which LUTs are typically used by filmmakers in the industry on movies, series and TV commercials - I think it’s important to address the common colour workflows that are used across three different budget levels: from solo shooter-level, to mid-level, to industry-level. 

Starting at the solo shooter level, such as wedding videographers - many people within this bracket have their own cameras and also edit and grade the footage that they shoot.

Having the ability to completely control how you want the colour in your image to look at this stage is fantastic - as you can deliver the grade exactly as you imagine it. 

However, there’s sometimes a bit of a misconception that a LUT is this magical colour-thing that can be downloaded online then thrown on top of your footage like a filter when you’re editing to make your footage ‘cinematic’.

While this sometimes works, the issue with applying a LUT after you’ve already shot all the footage comes down to intention and control. What you want to be doing is using that same LUT that you’ll use in the colour grade to also monitor your colour and exposure as you’re shooting.

That way you will be able to correctly expose and light the footage in a way that suits the LUT, rather than shooting footage, applying a LUT in the edit and then seeing that you’ve underexposed, overexposed, or lit with an undesirable white balance. 

You want to shoot the footage to suit the LUT, not fix the footage in the grade after the fact.

Once you start shooting more mid-level content, such as for broadcast TV, you may find that although you shoot the footage, that footage will now get handed over to an editor, and sometimes go through an online edit, which will be done quickly and which you often have no input in.

The next time you see the footage after you’ve shot it is usually when it is broadcast. In my experience this can sometimes go OK, and other times go disastrously wrong - especially if the online editor just throws a random LUT over everything.

Therefore, what I’ve started doing to regain control over the look of what I’ve shot is to expose with a LUT that I’ve created in Resolve, get as close as possible on set to the final look that I want, then hand that same LUT file over to the editor to use as the base look. They can then make small colour corrections if necessary - which saves them time while preserving the look that you want.

Finally, at the high-end industry level, particularly on long form jobs, cinematographers often regain most of that control of the colour back. This is because there is now money to spend on a proper colourist, who can help get the footage to the look that you and the director imagine. 

INDUSTRY COLOUR WORKFLOW

Before filmmakers transitioned to using digital cinema cameras, productions were shot, processed and printed on film. It was the job of the cinematographer to choose which film stock worked best for the job and how that film stock should be processed, colour timed and printed at the lab. This all determined the ‘look’ of the footage.

After the digital intermediate and digital cameras were introduced as the norm, some of this control of the colour and ‘look’ of the footage was taken away from cinematographers - especially if they weren’t allowed to participate in the colour grade.

In recent years many cinematographers have tried to regain control of the look by using a workflow with LUTs that treats them more like you would a film stock back in the day - by exposing with the LUT on set rather than plonking a look onto the footage in post production. 

That way they can get as close to the final look of what they want while they are shooting. They can do this by getting a colourist to create a custom LUT for the film before shooting begins. 

“The process for me in prep is how close I can get things ready so that when we are shooting we’re delivering the best product. You know, we start talking about colour - a lookup table, a LUT. You know, I believe the future is creating the strongest negative you can on set.” - Andrew Wehde, Cinematographer

Earlier we talked about the idea of a LUT being like an extra layer that’s applied on top to enhance colour. There are, however, a few more nuances to the colour workflow.

Before adding a look or a LUT, the flat files that come straight from the camera need to be converted, through colour processing, to the correct colour space. The most common colour space is Rec 709. This adds saturation and contrast so that the colour looks normal or realistic.

In grading software this is often called doing a colour space transform by inputting the colour space of the camera files - such as Arri Log C - and then choosing the output colour space that you want - such as Rec 709.
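Conceptually the transform is just two steps: undo the camera’s log curve back to linear light, then encode for the display standard. A simplified Python sketch - the Rec 709 curve below is the standard one, but the log decode is a stand-in placeholder, not Arri’s actual Log C maths, and a real transform also includes a gamut conversion matrix:

```python
import numpy as np

def rec709_oetf(linear: np.ndarray) -> np.ndarray:
    """The standard Rec 709 transfer curve: linear light -> display signal."""
    return np.where(linear < 0.018,
                    4.5 * linear,
                    1.099 * np.power(linear, 0.45) - 0.099)

def colour_space_transform(log_footage: np.ndarray, decode_log) -> np.ndarray:
    """Decode the camera's log curve to linear light, then encode Rec 709.

    `decode_log` stands in for the camera-specific maths (e.g. Arri's
    published Log C formula) - a placeholder, not the real curve.
    """
    linear = decode_log(log_footage)
    return np.clip(rec709_oetf(linear), 0.0, 1.0)

# Example with a made-up decode - NOT a real camera curve:
print(colour_space_transform(np.array([0.1, 0.4, 0.7]),
                             decode_log=lambda v: v ** 2.0))
```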

Now that we have the footage in the correct colour space we can add a LUT layer or a look to the footage. On professional cinema cameras this can be done with either a 3D LUT or what is called a CDL - a colour decision list. 

A CDL is basically a way of tweaking the colour on set as you shoot, by doing things like adding a tint, or controlling how much colour is in the shadows, midtones or highlights. This is usually done using live grading tools on a DIT cart.  
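Unlike a LUT, a CDL is a tiny set of numbers rather than a table, and the maths behind it is standardised by the ASC: per channel, out = (in × slope + offset) ^ power, followed by a saturation adjustment. A compact Python sketch:

```python
import numpy as np

# Rec 709 luma weights, used for the CDL's saturation step.
LUMA = np.array([0.2126, 0.7152, 0.0722])

def apply_cdl(rgb, slope, offset, power, saturation=1.0):
    """Apply an ASC CDL to an array of RGB values (shape (..., 3), 0-1)."""
    rgb = np.asarray(rgb, dtype=np.float64)
    # Per-channel grade: (in * slope + offset) ** power.
    graded = np.clip(rgb * slope + offset, 0.0, None) ** power
    # Saturation: push each channel away from (or towards) the luma.
    luma = (graded * LUMA).sum(axis=-1, keepdims=True)
    return luma + saturation * (graded - luma)

# A toy on-set tweak: warm the image slightly and lift saturation.
pixel = [[0.4, 0.4, 0.4]]
print(apply_cdl(pixel, slope=[1.05, 1.0, 0.95], offset=[0.0, 0.0, 0.01],
                power=[1.0, 1.0, 1.0], saturation=1.2))
```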

“For about two years now I’ve been working on on set colour and trying to deliver my on set dailies to be as close to final as possible. So I’m doing a large amount of on set adjustments. I have a panel so that I can do my highlight and shadow control, I can do full individual colour channels for high, medium, low and I have tint adjustments. I‘m basically pushing the CDL as far as possible. The Bear season two, what you see on TV is my delivered CDL. That’s the first time I’ve proven I can do final colour on set with minor adjustments.” - Andrew Wehde, Cinematographer

His DIT can then create proxies using the look he’s created on set, which are used for editing and passed all the way down the post production pipeline - retaining his look.

Other methods I’ve seen cinematographers use without live grading tools on set are to either expose with a specific LUT that their DIT will use to create the proxies, or to have their DIT grade the dailies on set with software like Resolve before creating proxies with that look.

Sometimes the DIT will carry an iPad which they can export graded stills to that the DP can view, get feedback on and approve on set before the proxies with that look are created. 

Whatever method is used, I think it’s good to at least have some idea about the kind of colour look you want to create before you start shooting. I personally really like this new trend of DPs trying their best to maintain as much control over the look of the colour that they can by using a CDL or a LUT - both when exposing the footage on set as well as when finishing it during the grade.


Cinematography Style: Rodrigo Prieto


INTRODUCTION

If you put together a list of some of your favourite working directors of the last two decades, there’s a decent chance that Rodrigo Prieto has shot for one of them: Martin Scorsese, Spike Lee, Alejandro Iñárritu, Greta Gerwig, Oliver Stone…the list goes on.

Although his cinematography spans decades, it often contains a deliberate use of rich saturated colours, a widescreen aspect ratio and visually bold decisions, which are always focused on presenting a subjective view of the character to the audience.

So, in this episode, let’s dive into how Prieto’s philosophy on image making affects the camera and pick out some of the film gear he uses to create his masterful images.   


PHILOSOPHY

“I try to make the cinematography subjective. And that’s in every movie, really, I do. I try to make the audience, as much as possible, be in the perspective of the character. That is not only literally the camera angle being in the POV of a character. It’s more making the camera behave, and the lighting, and everything about it - the texture and the feel of the movie - behave like the main person we’re following.” - Rodrigo Prieto

The idea of creating images that put the viewer into the shoes of the protagonist is one of the underlying philosophies in his work. For example, how do we visually convey a character’s progression from a naive, straight laced graduate to an excessive, risk taking, paranoid white collar criminal?

The camera could start by moving with a smooth, steady motion, under a well exposed light, and later shift to a rough, raw, handheld aesthetic with harder light and stronger shadows.

Or, if we take another story, how do we visually present a series of interweaving timelines and narratives so that the audience doesn’t get too confused?

You could do it by using a different colour cast for each different character arc. Using more sickly, cooler tones for a man in need of medical care, and a much warmer palette for a man trying to hold his life together through his faith.

Or, how do you put the audience in the shoes of a disorientated warrior having a surreal, near death experience in the middle of a battle?

How about by radically shifting colour out of the bounds of reality.

You could pick apart each and every one of the films he shoots in this way and uncover a philosophical rationale behind the often bold visual decision making that supports the story.

It’s more about creating a feeling and a tone that is representative of a character’s state of mind than it is about shooting from the physical POV or perspective of the protagonist.

Each director he works for brings a different artistic sensibility, however the camera’s subjectivity is always present.

“Every director is completely different. For example, Ang Lee is very precise and also very methodical. And he likes to pick the focal length of the lens. And he talks to me about the framing and composition. He’ll look through a viewfinder and tell me to have that corner of that window in frame and this and that, you know. Oliver Stone thrives in chaos. So every time I say, ‘Oliver we have this problem with the camera’, there’s a bump in the dolly, whatever, he’d say ‘Rodrigo, perfect is the enemy of good.’ And Scorsese is maybe a mix of both. He’s very precise in the shot listing he designs and he has a very good idea of the energy he needs the camera to have. But he also leaves space for improvisation by the actors and for new ideas to come.” - Rodrigo Prieto

Being able to adapt to how different directors work is an important skill. Cinematographers should be able to offer their technical skills and practical advice on how to achieve a desired look or an unconventional shot, as well as light each scene.

Some directors like to maintain more control over how each shot is composed, while other directors may just describe a tone or feeling that they want to achieve and then leave room for the DP to offer their own ideas and suggestions as to how to achieve it.

When working with directors that like to maintain creative control over small details, it helps to build up trust in their visual sensibilities and accept the challenge of focusing on the lighting and technical execution of the shots.

Sometimes it may also be necessary to surrender control of minor visual details in order to capture better performances. 

“The performances were the essential thing in this film. So, you know, I had to compromise sometimes the possibilities of the lighting to be sure that we captured the performances of these amazing actors.” - Rodrigo Prieto

On The Irishman, this meant embracing the decision to use multiple cameras to cover dialogue scenes - which allowed them to get greater coverage of the performances.

The reason this may compromise cinematic choices is that the more pieces of gear you place within a set, the more you limit the kinds of angles you can shoot, or the space in which you can place lights without them getting blocked or appearing in a shot.

To further complicate things, they had to use an interesting but cumbersome rig that actually accommodated three cinema cameras in order to de-age characters.

GEAR

This three-camera rig had a Red Helium at its centre, which captured the actual shot and frame that they would use, recording high res 8K files that could later be used for VFX work.

Two special infrared-sensitive Alexa Minis were then placed alongside the capture camera as ‘witness cameras’, together with an infrared ring light that neutralised shadows - light which could only be picked up by the Minis and wouldn’t show up on the recorded Red image.

These reference clips could then be combined with the recorded clips, using AI and machine learning powered by NVIDIA GPUs, to de-age the actors.

Prieto got his camera and grip team to reconfigure this large rig and make it more modular, so that the ‘witness cameras’ could be moved around to sit either alongside the main camera or at the top and bottom. This allowed them to use this hefty setup on a variety of grip rigs.

Prieto’s photographic decisions are often bold, and use colour expressively. Across his career he has manipulated colour in different ways as the technology has progressed. He’s done it photochemically with film, by using a combination of film and the digital intermediate, as well as with digital cameras and the colour grade.

Let’s compare some of the techniques he’s used - starting with film.

The most common way of shooting on film is to use a colour negative stock and expose and develop it normally. However, Prieto has often experimented with different stocks and development methods.

For example, on Alexander he used the rare Kodak Ektachrome 2443 EIR stock. Sometimes called Aerochrome, it is sensitive to infrared light and displays colour in unusual and often unpredictable ways: turning greens to red, purple or blue. He rated this stock at 125 ISO and used an ND0.3 and a Yellow No. 12 filter to make the effects of infrared light more intense.

Another technique he used in many films, such as Amores Perros, is a type of development called bleach bypass processing. During the processing of film in a lab, the step of bleaching the film is skipped. This results in a black and white layer that is overlaid on top of the colour image, which reduces the saturation of colour but increases both the contrast and the film grain - creating a raw, gritty look.

Instead of doing this technique photochemically on Babel, he did it in the digital intermediate. In other words he processed the film normally, then added a half bleach bypass look to the film in the colour grade. 

This allowed him to control the intensity of the look, creating colour that was somewhere in between a bleach bypass and regular development.
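As an aside, the basic idea of a digital bleach bypass is simple enough to sketch in code: build a black and white layer from the image and blend it back over the colour, with a strength control standing in for the ‘half’ bypass. This is only a rough approximation of the general technique, not Prieto’s actual pipeline.

```python
import numpy as np

def bleach_bypass(img, strength=0.5):
    """Rough digital bleach bypass emulation. img is a float RGB array
    in [0, 1] with shape (H, W, 3). strength=1.0 approximates a full
    bypass; 0.5 something like a 'half' bypass."""
    luma = img @ np.array([0.2126, 0.7152, 0.0722])  # Rec.709 luminance
    luma = luma[..., None]                            # shape (H, W, 1)
    # 'Overlay' blend of the B&W layer onto the colour image:
    # deepens shadows, lifts highlights, strips out saturation.
    overlay = np.where(img < 0.5,
                       2 * img * luma,
                       1 - 2 * (1 - img) * (1 - luma))
    return (1 - strength) * img + strength * overlay

# e.g. frame = bleach_bypass(frame, strength=0.5)  # a 'half' bypass in the DI
```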

As the technology has shifted more towards digital, he’s been able to do everything digitally instead of photochemically: from picking the look of a stock to choosing a development method, all within the grade.   

On The Irishman, he chose to differentiate the time periods by applying different film emulation LUTs to both the digital and film footage from different eras: a Kodachrome look for the 50s, an Ektachrome look for the 60s and a bleach bypass development look for the 70s onward.

You can see how foliage looks different across these looks: including deeper shades of blue and stronger contrast in the shadows for the 50s, a bit of a warmer look in the 60s, and a very desaturated but high contrast look from the 70s onward.

He’s used many lenses over the years, but has often reverted to shooting in a widescreen format with anamorphic glass, such as the Hawk V-Lites, the Arri Master Anamorphics or Panavision G-Series. 

Prieto also likes using Arri cameras, whether that is the Arricam ST or LT when shooting on film, or on variations of the Alexa when shooting digitally. 

Another example of how he uses colour can be found in how he lights night interiors and exteriors. He often uses quite a classic technique of creating alternate planes of colour in different parts of the image. Specifically, he likes to create pockets of warm light indoors and then place cooler, blue sources of light outside of windows. This creates increased colour contrast and more depth in the frame.

CONCLUSION

Although he often paints with lots of colour and quite strong visual techniques, it is always done deliberately. Prieto uses the technical tools at his disposal to craft images that create a mood that mimics that of the main protagonist.

Whether that’s through his use of colour, lighting or camera movement.

The cinematography remains subjective and deliberate in a way that subtly or sometimes not so subtly helps to support the story.

Gray Kotze

Cinematic Lighting Vs Natural Lighting

In this video I’ll show you two different lighting setups for two different looks and compare how you can get away with using only natural light, or how you can elevate that look a bit more by supplementing natural light in a motivated way.

INTRODUCTION

You may think that cinematography would be all about using a camera. However, the most important part of a cinematographer’s job is actually lighting. 

Scenes are lit to create a look that tonally suits the story, to provide a consistent source of illumination that doesn’t change or affect continuity, and to give the camera enough light to properly expose.

In this video I’ll show you two different lighting setups for two different looks and compare how you can get away with using only natural light, or how you can elevate that look a bit more by supplementing natural light in a motivated way. 


MOTIVATED LIGHTING

Lighting can take two forms. It can be more expressionist and exaggerated, to completely elevate the footage out of the real world. Or it can be naturalistic, where, although artificial lights are used, they are used more subtly in a motivated way to keep the story within the bounds of realism.

Today we’ll focus on creating a naturalistic look by using motivated lighting. What exactly is that?

Motivated lighting involves first examining the natural light sources that are present in the space and then placing additional artificial film lights to supplement the natural light.

Or, sometimes, if a source doesn’t exist, cinematographers will create an imaginary motivation for it in their head (outside of the frame of the shot) and then add artificial light using that idea. 

There are three things to consider when lighting in this way: the direction of the light, the quality of the light and the colour of the light.    

Let’s keep these three factors in mind when we go about creating two different looks: a brighter illuminated high key look and a more shadowy low key look.



HIGH KEY - NATURAL 

Let’s start by showing how we can create a high key look - without many shadows on our subject - using only the natural available light.

When only using ambient light in a space, it’s very important to be aware of what the natural light is doing. 

I chose to shoot the natural light look at a specific time in the morning when the sun was still low enough in the sky that it would angle in through the main window in the space. I checked the forecast beforehand and made sure it was a sunny day. Light scouting, weather observation and scheduling are very important when dealing with only natural light.

Next we need to think about direction. In this room the main source of light comes from a large window on the side and a smaller window from the back. 

Another important part of natural lighting is how you position the subject. Rather than placing her so that she is directly in front of the window and the light source is totally front on and flat, I’ve positioned her so that she is side lit by the sun coming through the window.

Also, placing the main source of light directly behind the camera is normally not a good idea as it may cast the shadow of the camera onto the subject. 

This positioning of the subject means the natural light comes through and creates contrast on one side of the face. Also this little window provides a small backlight which separates her from the background.

Now that direction is sorted we can focus on the quality of the light. I’ve used a muslin curtain to diffuse the intensity of the light, softening any shadows, and reducing the brightness of the illumination outside the window. 

When setting the exposure level for a high key look I’ve focused on the illumination of the skin by adjusting exposure - in this case with an ND filter - until I’m happy with the level of light on the face. This may mean that the area by the window blows out a little bit - or turns to pure white - which isn’t ideal, but we can’t control that. Not without lights anyway.

Finally, the colour of our light is that of the natural sun - which also can’t be changed. One reason I usually don’t turn on any overhead house lights when using natural light is that mixing warmer artificial ceiling bulbs with natural daylight may throw off how colour is recorded.

So there we go, a high key look using only natural light.





HIGH KEY - CINEMATIC 

One reason that DPs still use lights even when creating a naturalistic look is because of things like this curtain in the background. It’s a bit blown out. In other words, the natural light from outside is much too bright and turns to white, lost information. This is not pleasing to the eye.

So to create a better look I will start by setting the exposure of the camera so that it is balanced to the light in the background by only looking at this window. Now it’s not blowing out, however, it’s much too dark to be a high key look. 

So, we need to add light. Let’s start by thinking about direction. 

Our strongest source of light is coming from the window - we’ll call this our key. Then some of that lighting from the window is coming inside and bouncing back as a soft ambient source - we’ll call this our fill. Then, finally, ambient light from that little window is hitting the back of her head - we’ll call that our backlight.

Using three light sources in this way is called three point lighting.

Now that we’ve identified where the light is coming from, let’s add film lights that mimic the direction of the natural sources. 

With our lights on hand, let’s think about the quality of the light that we want. Because the sunlight coming through the big window is strongest we’ll put our biggest light there - a Nanlite Forza 500B II.

The sunlight coming through the window has been diffused by the curtain and is nice and soft, so we’ll do the same and add a softbox, with a layer of stronger diffusion in front of it to soften it as much as possible. I’ve also added an egg crate grid to it which controls the spread of the light, focusing it more directly on our subject and preventing it from spilling everywhere.

Next, we’ll take our second strongest light, a Forza 60B, and use it to recreate some of the natural ambient fill light. This we’ll also diffuse and make nice and soft by using a lantern. This creates more of a soft spread of light. As you can see here it hits the little plant on the table. This mimics the spread and quality of natural ambient sunlight bouncing off a wall. 

Finally I rigged a little tube light on an extended c-stand arm as a backlight. This ever so slightly adds to the feel of the light coming from the back window. 

Now, for our third variable: colour. To me, the brightness of high key lighting feels like it would go well with a warm, morning look, so I cranked all the colour temps on my lights to 5,000 Kelvin - which is just a bit warmer than normal sunlight. 

The 500B also comes with a cool new feature of being able to adjust the amount of magenta or green tint to the light. So I added a bit of magenta which to my eye helps give a warmth to the skin tones.
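As a side note, colour temperature shifts like this are often reasoned about in mireds - one million divided by the Kelvin value - because equal mired shifts read as roughly equal colour changes. A quick sketch, assuming around 5,600K for ‘normal’ daylight:

```python
# Mired shift between nominal daylight (~5600K) and the 5000K used here.
def mired(kelvin):
    return 1_000_000 / kelvin

shift = mired(5000) - mired(5600)
print(round(shift))  # ~21 mired warmer - roughly an eighth-CTO's worth
```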

And there we have it. A high key look - this time with added artificial lighting that should still feel quite natural.




LOW KEY - NATURAL 

Let’s take away those lights and create a new low key look with only natural light. 

Rather than being bright like the high key look, low key lighting accentuates shadows and darker tones and usually has an overall higher contrast between areas of light and shadow.

Since we’re not allowed to use any lights we’ll keep the same positioning, quality and colour of light as before. However, we are going to change our overall exposure. 

To prevent those window highlights from blowing out like they did for the high key look, we’ll lower the exposure using an ND filter on the front of the lens, until we can still read information on the window and it’s not blown out.
This leaves the frame looking much moodier than before, even though the only change made was in exposure, not in lighting.
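For reference, ND filters are rated by optical density, and every 0.3 of density cuts one stop of light. The arithmetic is simple enough to sketch (the densities below are just common examples):

```python
import math

# Each 0.3 of optical density halves the light (10 ** 0.3 is roughly 2).
def nd_stops(density):
    return density / math.log10(2)

for d in (0.3, 0.6, 0.9, 1.2):
    print(f"ND{d}: cuts about {nd_stops(d):.1f} stops")
# ND0.3 ~1 stop, ND0.6 ~2 stops, ND0.9 ~3 stops, ND1.2 ~4 stops
```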

This creates plenty of shadows across the frame, which may work as a nice look for some stories, but may be a bit too dark for others.






LOW KEY - CINEMATIC 

So, let’s see if we can’t find a middle ground between the very shadowy low key, natural light look and the high key look - by introducing some film lights.

We’ll use almost the same placement for our key light as before. But this time, instead of being more in front of the character, we’ll bring it around a little bit more until it’s right on the edge of the frame and is lighting more from the side. 

This will create just a little bit more contrast, as less light will fall on the side of her face nearest to the camera.

We’ll go ahead and turn on the same backlight as before. However, this time, we’ll leave our fill light off. 

If you compare the high key lighting shot that uses a fill light and the low key shot without one you’ll see that not illuminating the one side of her face creates a gentle shadow on the side that favours the camera - therefore creating more contrast.

Because I’ve moved the key light around, there is less light that spills on the wall behind her, which also makes it feel like more of a low key look.

On top of this, there is a slight difference in colour. Because the low key look is a bit moodier, I cooled down the colour temperature on my fixtures from 5,000K to 6,000K.

So there we go. A low key look that was achieved with motivated lighting, by simply eliminating the fill.      






ADVANTAGES OF USING ARTIFICIAL LIGHT

Four different looks: two created without any lights and two created using artificial sources. Lighting is always subjective and should change depending on the nature of the story you are telling. 

This look may be better suited for commercial applications, while this look works for a film with more dramatic content.

But besides the look, what other advantages does using lights provide? Perhaps most importantly using lights creates a consistent look, which will hold up for much longer periods of shooting. 

If it takes a couple of hours to shoot a scene using only natural light, the look of the ambience may have completely shifted as clouds came over, or the sun got flagged by a building. This means that the consistency and continuity when cutting to different shots will be off.

Using film lights means that even if the natural light changes, the artificial light should maintain the continuity of the look, which means you will be able to shoot for longer.

Also, relying purely on natural light means you have limited to no control over the look of the image. For this video I could pick a day and a specific time where I knew we would get strong sunlight, but that isn’t always the case. If you need an interior to look warm and sunny, but it rains that day and you don’t have any lights, then there’s not much you can do.

Gray Kotze

2-Perf vs 3-Perf vs 4-Perf: 35mm Film Formats Explained

By far the most popular film format is 35mm. But what you may not know is that there is then a further choice to be made between three formats: 2-perf, 3-perf or 4-perf. But what is a perf, and how does it affect both the budget and how the footage looks?

INTRODUCTION

The starting point when it comes to choosing which format to shoot a movie on is between digital and film. If film is selected, by far the most popular film format is 35mm. But what you may not know is that there is then a further choice to be made between three formats: 2-perf, 3-perf or 4-perf. But what is a perf, and how does it affect both the budget and how the footage looks? Let’s find out.


WHAT ARE PERFS?

The manner in which a piece of 35mm film is exposed is determined by the negative pulldown, which is described in perfs. ‘Perfs’ is short for perforations: the little holes that you see in the film, which span the length of each individual frame.

These holes align with sprockets, which turn in order to mechanically pass an unspooling roll of film vertically through the camera. The film is exposed when it is hit by light which is let through the lens. 

35mm film frames can be shot with either a 4-perf, 3-perf or 2-perf pulldown. As the width of a 35mm frame is standardised to 24.9mm, the number of perfs only affects the height of the frame that is recorded - with 2-perf capturing the skinniest surface area and 4-perf capturing the tallest.

Exposing a larger area of film to light is kind of like the digital equivalent of recording at a higher resolution - the larger the area, the more clarity and fidelity the image will have. However, the larger the exposure area, the more film needs to be used and the more you will need to pay for film stock and development. So perfs affect both the cost of shooting as well as the quality or fidelity of the image.

The motion picture camera that is used must be specifically set to record frames with a certain number of perfs by adjusting the speed at which the film runs through the camera as well as the height of the gate that lets through light. Most cameras can record either 4-perf or 3-perf, while only specific cameras can record 2-perf frames.

There are two different steps to the traditional film pipeline: capturing images on film with a camera, and projecting those images by passing light through a film print.

Image capture can happen on either 4, 3 or 2-perf, however 35mm film projectors are set to work with a 4-perf film print. This means that if you capture film in 2 or 3-perf, you would still need to print the final 35mm projection roll in 4-perf frames.

However, now that digital projection has taken over, it’s possible to capture 35mm in either 2, 3 or 4-perf, scan the film negative and then work with the scan in the same way as a digital file - which can later be sent out to cinemas that use digital projectors, or uploaded by online distributors and streamed digitally.

4-PERF

In the late 1800s and early 1900s when motion picture film technology was undergoing development, 4-perf 35mm film capture and projection emerged as the industry standard. This produced a tall aspect ratio of 1.33:1. 

4-perf offers the largest exposure area of 35mm film at 18.7mm tall. Because more surface area is used the film grain will be smaller and the image will be of a higher quality. 

This large surface area also allows lots of possibilities for aspect ratios. When shooting Super35 with normal spherical lenses the frame can be used for taller aspect ratios like 1.33:1, or the top and bottom can be cropped to get widescreen aspect ratios like 1.85:1 or 2.40:1.

Before digital, this crop would have been done by printing the final film to a different ratio with a letterbox, or by using a narrower plate that chopped off the top and bottom of the frame when projecting. Now this can be done by scanning the negative and using software to crop the image.

4-perf can also be used with anamorphic lenses. These lenses squeeze the image by a factor of 2, to around a 1.2:1 aspect ratio, so that it is captured as a tall, compressed film frame. It is then later de-squeezed by a factor of 2 to get it to a widescreen 2.40:1 aspect ratio.

Because this method uses such a large portion of the tall 4-perf frame, anamorphic negatives have a higher fidelity and low amount of grain.

Another advantage of 4-perf is that when shooting Super35, the extra recorded area on the top and bottom of the image, that will be cropped out, can help with VFX work, such as tracking.

A disadvantage of 4-perf is that more film must run through the camera per second, which makes it noisier. It also uses the most film out of the 35mm formats, which means more money must be spent on buying and developing film stock.

It also means that a 400’ roll of film will only be able to record for a mere 4 minutes and 26 seconds before a new roll must be loaded into the camera.


3-PERF

In the 80s, cinematographer Rune Ericson collaborated with Panavision to produce the first 3-perf mechanism for 35mm cinema cameras. 

Shooting each frame 3 perforations tall as opposed to 4 produced a shorter frame with a height of 13.9mm and an approximate aspect ratio of 16:9.

When shot with spherical lenses this negative could easily be ever so slightly cropped to get to a 1.85:1 aspect ratio, or more cropped to get to a 2.40:1 aspect ratio. 

Because of the lack of height of the frame, 3-perf wasn’t suitable for using 2x anamorphic lenses, as it would require too much of the frame width to be cropped and therefore go against the point of getting a high quality anamorphic image. Therefore, 3-perf is best when used with spherical lenses.

However, it is possible to use the much less common 1.3x anamorphic lenses with 3-perf film, as they squeeze a 16:9 size negative into a widescreen 2.40:1 aspect ratio.

Due to the decrease in recording surface area, grain will be slightly more prominent in the image than when using 4-perf.

The main reasons for shooting 3-perf rather than 4-perf are financial and practical. 3-Perf uses 25% less film - which means a 25% reduction in the cost of both film stock and processing of the film at a lab. 

It also means that the camera can record for 33% longer than 4-perf. So a 400’ roll gives a total run time of 5 minutes and 55 seconds before the camera needs to be reloaded. This is practically useful especially when shooting during golden hour or in situations where taking the time to reload a camera might mean missing a shot.

2-PERF

2-Perf, first called Techniscope, gained popularity in the 60s when it was used to shoot lots of Spaghetti Westerns. These movies were often done on quite low budgets, yet wanted a wide 2.40:1 aspect ratio to frame the characters in sweeping landscapes.

2-Perf does this by further cutting down on the vertical recording height of the negative, taking it to 9.35mm, creating a native widescreen aspect ratio.

At the same time, this reduction in frame size also equates to a reduction in the amount of film that needs to be used. Since it is about half the height of 4-perf, about 50% can be saved on purchasing film stock and processing film. Therefore 2-perf was a great solution to both save money and create a widescreen aspect ratio.

It also basically doubles the recording time of each roll, allowing you to get 8 minutes and 53 seconds with 400’ of film. This means that it’s possible to either roll for longer takes, or that many more short takes can fit on the roll before needing to reload the camera. 
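All of the run times and native ratios quoted in this article fall out of one piece of arithmetic: 35mm film has 64 perforations per foot, so the perf count per frame sets both the frames per foot and the frame height. Here’s a rough back-of-the-envelope sketch in Python, using the approximate gate heights mentioned above:

```python
# Back-of-the-envelope maths behind the quoted roll times and ratios.
PERFS_PER_FOOT = 64            # standard for 35mm film
GATE_WIDTH = 24.9              # mm
GATE_HEIGHT = {4: 18.7, 3: 13.9, 2: 9.35}  # approx. image height in mm

def roll_seconds(perfs, feet=400, fps=24):
    frames = feet * PERFS_PER_FOOT / perfs
    return frames / fps

for perfs, height in GATE_HEIGHT.items():
    secs = roll_seconds(perfs)
    print(f"{perfs}-perf: native ratio ~{GATE_WIDTH / height:.2f}:1, "
          f"400' roll = {int(secs // 60)}m {int(secs % 60)}s")
# 4-perf ~1.33:1 and 4m26s, 3-perf ~1.79:1 and 5m55s, 2-perf ~2.66:1 and 8m53s
# (the ~2.66:1 2-perf gate is trimmed slightly for a 2.40:1 release)
```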

Because it is so skinny and lacks height it’s not possible to use this format with anamorphic lenses - not that you would need to since you get the same aspect ratio by using spherical lenses. 

It’s also only really suitable for this aspect ratio, as getting a taller ratio would require cropping into the image far too much, which would significantly increase the prominence of the film grain.

Although it has the same ratio as anamorphic, it has a different look. Because the surface area is much smaller than 4-perf, the grain shows up as much more prominent. 

In the modern era, where film stocks have become much finer grained and cleaner looking, some cinematographers like using 2-perf to deliberately bring out more filmic texture and make the footage feel a bit more gritty.

I’d say 2-perf 35mm is basically a middle ground between a cleaner 4-perf 35mm look and a grainier 16mm gauge stock.

CONCLUSION

How many perfs you choose to shoot on has an effect on a number of factors. 

4-Perf records onto a greater surface area, which looks cleaner with less grain, can be used with both anamorphic and spherical lenses, and has room to crop to different aspect ratios.

However, this comes at a higher cost, with a camera that makes more noise and very short roll times. 

On the other hand, 2 and 3-perf use less of the negative, which makes the image a bit grainier, aren’t compatible with 2x anamorphic lenses, and limit the number of taller aspect ratios you can choose from. But they’re much cheaper and the camera can roll for longer.

In this way, the choice of 35mm film format is another technical decision filmmakers can make that affects both the look and feeling of the image, as well as providing certain technical limitations and advantages.

Gray Kotze

What Directors Do Vs What Cinematographers Do

How much of the look of each film is created by the director and how much is the look influenced by the cinematographer?

INTRODUCTION

In modern cinema the authorship of a movie is always attributed to the director. And much of a movie is made up of how the visual information is presented in shots. 

However, most directors don’t directly operate a camera, pick out the camera gear or determine how each scene is lit. This is usually overseen by the cinematographer, otherwise called the director of photography. 

This begs the question: how much of the look of each film is created by the director and how much is the look influenced by the cinematographer? The answer is…well, it depends.

Some directors like Stanley Kubrick were famous for having a large hand in the cinematography choices - from framing and shot selection all the way to picking out what individual lenses would be used. 

While other directors may be far more concerned with working on the script and the performance of the actors, and leave many of the photographic choices up to the DP.

Normally though, the answer is somewhere in between these two extremes.

VISUAL LANGUAGE

In order to determine the authorship of a film’s look, it helps to define all the individual elements and creative choices which go into creating a visual language.

Each frame is the product of a compilation of choices. This includes: what shot size is used, how the shot is angled and framed, how the actors are blocked within that frame, the arrangement of the production design and what is placed in front of the camera, the choice of medium and aspect ratio, how the camera moves, the choice of lens, how it is lit, how it is graded, and how the shots are placed next to each other and paced through the editing.

There are no doubt other creative choices that also go into creating a visual language, but these are some of the main ones to think about.

Although some directors and some cinematographers may have a hand in guiding each one of those choices, many of these decisions are controlled more strongly by either the director or the DP. 

CREW STRUCTURE

The decision making process on a film set is similar in many ways to how a company operates. It is headed by the director, the CEO, who manages an overall vision and direction, and has to make lots of small decisions quickly to manage the project of making a film.

Below the director are other ‘executives’, who also have a large impact on the film, but who occupy a more specialised role. For example the producer, or CFO, who focuses more on the niche of the finances. 

Or the cinematographer, the CTO, who is responsible for overseeing how technology is used to capture the film.

Then there are loads of other department heads that occupy leadership roles that are increasingly specialised: like the production manager, or the focus puller. 

This analogy isn’t perfect but you get the idea. So, let’s unpack this a bit further by breaking down what a director does versus what a cinematographer does and which visual decisions each is usually responsible for.     

   

WHAT A DIRECTOR DOES VS. WHAT A DP DOES

Creating shots and shot sizes is hugely important in establishing the look. Typically directors and cinematographers collaborate on this, but I’d say more often than not directors have a stronger say, especially in the more structured world of TV commercials - where each shot is storyboarded ahead of shooting.

On larger studio series or films where shooting time is quite expensive, many directors will create a storyboard in pre-production, which will be passed on to the DP when they come onboard.

Even on less expensive movies directors often like to use this technique to express their vision, keep to schedule and not overshoot a lot of coverage. For example, the Coen brothers are known for using storyboards and being quite particular about each frame which is shot.

However, other directors, such as Steve McQueen, prefer to work in a more collaborative fashion, coming up with shots with the DP and choosing how they want to cover scenes once they are in the location with the actors. 

Choosing whether to move the camera and how to do so is built into this decision about creating shots. Often directors will determine what kind of camera moves they would like to build into the shots, such as a push in, or lateral tracking motion. 

The cinematographer will then take those ideas and work out the best way to practically execute those moves: whether that be with a gimbal, a Steadicam, a dolly or handheld on a rickshaw.

In other words taking the overall tonal direction and making it happen practically.  

Which lens - particularly which focal length - is chosen has an effect on how the shot looks. This is an area where the cinematographer usually has more control than the director.

However, some directors may like to lean into using particular lenses for a trademark look, for example the Safdies have often used long, telephoto lenses on their films, which helps elevate the tense, voyeuristic tone. 

While in other cases the cinematographer may bring a look to the table based on their lens selection, such as Emmanuel Lubezki’s work, which is known for using extremely wide angle lenses close up to characters. He’s used this technique in different films, working for different directors.     

Blocking, or how actors are placed or moved within a scene, is a visual component that is also entirely determined by the director in most cases. They will work with the actors and walk through the scene, while the cinematographer watches and thinks about camera placement. 

Occasionally DPs may provide suggestions to the director if they think that certain movements or positionings may not work visually - but more often than not they will try to work with whatever blocking the director puts forth.

Another part of the process which is mainly controlled by the director is the production and costume design - which is done in collaboration with the art director and costume designer. When pitching a film or commercial, a director’s treatment will often include direction about the kinds of locations, colour palettes and costume which they envision.

However, some directors may also be open to collaboration with the cinematographer, particularly when it comes to crafting a colour palette.

The palette can also be influenced by lighting. This is a factor controlled almost entirely by cinematographers, and is probably the biggest stylistic part of the look that they bring to the table.

The easiest way to see this is to look at the work of directors who have worked with different cinematographers on different projects.

These are all night scenes in films by the same director: Luca Guadagnino. Two of them were shot by cinematographer Yorick Le Saux, which feature toppy lighting, a darker exposure and a more muted, darker palette. 

The other two were shot by cinematographer Sayombhu Mukdeeprom and feature a more vibrant, earthy palette, a brighter, side key light and hanging practical bulbs in the background.

Or how about these films from Quentin Tarantino. Two were shot by Andrzej Sekuła and are lit with hard light from cooler HMIs through windows. These are cut in the background to have different areas of hard light and shadow.

While the other two were lit by cinematographer Robert Richardson, which have more warmth in the skin tones, and are cooler in the shadows. Both use his table spotlight technique: where he fires a hard light rigged in the ceiling into the table, which then bounces a softer warmer light onto the actor’s faces. 

 Again, same director, but subtly different looks from different DPs. 

However, occasionally directors will communicate a specific lighting style across multiple films to the different DPs that they work with. For example, Terrence Malick’s exclusive use of natural light and emphasis on filming in golden hour.

The choice of medium is one that is probably equally contributed to by directors and cinematographers. By this I mean the choice of whether to shoot digitally or on film, in large format or Super35, with spherical or anamorphic lenses.

These overarching decisions about medium are usually made by the DP and director based on their artistic and practical merits. The further technical nuances of that choice, such as which large format camera to shoot on, or which anamorphic lens to use will then almost always be made by the cinematographer.

Choosing the visual language of how shots are juxtaposed and paced in the edit is almost 100% done by the director and editor. The only input a DP may have in this regard is when they provide guidance about shooting a scene in a very specific way during production - such as using a long take, or shooting with very limited coverage - which leaves the director minimal cutting options in the edit.

Once the final cut enters the grade in post production, on average I’d say the director has slightly more control than the DP. But, not always. Some DPs like to expose and shoot digitally on set with a specially built LUT. This LUT is later used as the basis of the look in the grade. 

Some cinematographers also push to always be present in the grade, as how the footage is shaped in post production hugely contributes to how a film looks.

A good example of this is how the Coen brothers work with two different cinematographers: Roger Deakins and Bruno Delbonnel. 

Whether working digitally with a LUT, or with film in the DI, Deakins tends to favour a more saturated, vibrant, contrasty, look with warmer skin tones and deeper, darker shadows.

While Delbonnel is known for crafting a specific look in post with his film negative that is lower in saturation, cooler in both the highlights and the shadows, and quite often introduces heavy layers of diffusion on top of the image to give it more of a dreamy look.

CONCLUSION

Ultimately, the creation of the images is a balancing act which is dependent on the input of multiple collaborators - from the director to the DP to the production designer. 

Directors tend towards providing more conceptual guidance about how a movie looks, while cinematographers are generally more about taking those ideas and visually executing them by working with technical crew and equipment.

A DP working for a good director, shooting a good story, will make their work look better. And as a director you want someone who will help you to enhance and photographically bring your vision to life.

Regardless of who does what, the most important thing is to find great collaborators and be open to at least hearing what ideas they bring to the table.

Gray Kotze

5 Techniques For Framing Cinematic Shots

Filmmakers compose and visually present information within a frame. Let’s go over five different techniques which may help you frame with more of a cinematic eye and tell stories using only images.

INTRODUCTION

Filmmakers compose and visually present each shot within a frame. Frames can be of wide expanses, close up details, symmetrically balanced or even off kilter.

It’s also probably the easiest cinematography skill to practise, as all you need is to be able to snap away on a camera - including the one on the back of your phone. But what is it that makes a good frame?

In this video, let’s go over five different techniques which may help you frame with more of a cinematic eye and tell stories using only images. 



1 - USING THE BACKGROUND 

What we choose to include or exclude from each shot is a deliberate choice that any image maker should be aware of. 

Most shots, whether in cinematography or photography, can be broken down into two components: the subject which the eye is most drawn to and the background, which the subject is placed against.

When filmmakers run through, or block, a scene with actors, one of the factors that they use to decide on the placement of the camera, and therefore the frame, is what background they want to place the subject against. 

The background does a few things. First and foremost it’s a way of conveying information within a shot. An isolated, tighter shot of a character against a white wall background includes limited information for the audience. While shooting a character in front of a wider, carefully dressed set with windows reveals several layers of information. This person is sitting in a shabby dressing room, so perhaps he’s a performer. 

The highrise building outside suggests that it’s in a city. He’s interacting with another character, but because he is in sharp focus, the camera is suggesting that the man sitting is the main subject in the scene.    

For more untraditional, atmospheric filmmakers, who let stories play out slowly without much exposition, how they present each frame is especially important for the audience to glean as much information about the characters and their environment as they can.

A background can either be flat or have depth. This depends on the distance between the subject of the shot in the foreground and the objects in the background.

Normally, shots which have more depth to them are considered a bit more cinematic - as they create more separation between the foreground and the background and therefore a greater feeling of dimensionality, and more of an illusion of reality.

Like this shot which places a wall of highrises far off in the distance, rendering the looming city with a sense of scope while at the same time isolating the character.

This is also why documentary filmmakers often try to pull their interview subjects away from walls or backgrounds, both to render them more out of focus and to create more depth in the frame.

 

2 - LENS SELECTION

Each frame is not only affected by the direction in which the camera is pointed, but also by the focal length of the lens that is chosen.

The focal length is the degree of magnification that a lens has and is denoted in millimetres. An easy way to start thinking about focal lengths is by breaking them into three camps: wide angle lenses, medium lenses and telephoto lenses.

There aren’t any official millimetre categories when it comes to grouping focal lengths, but for Super 35 spherical lenses I generally think of wide angle lenses as being somewhere between 16mm and 25mm, medium focal lengths as around 35mm to 65mm, and telephoto lenses as approximately 75mm or longer.

Not only do wide, medium and telephoto lenses provide different levels of magnification, but they also change how the background of a frame is rendered.

The wider the focal length, the more the frame will be distorted and stretched and therefore the more background you will see. Whereas the longer the focal length, the more the frame will be compressed and the less background you will see.

Therefore when framing a close up shot of a subject it’s important to consider whether you want to use a super wide angle lens, with the camera physically closer to the actor, that displays more information in the background. 

Or, whether you want to frame using a telephoto lens, with the camera further away from the actor, and show less of the background with a shallow depth of field.
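One way to make that wide-versus-telephoto trade-off concrete is the horizontal angle of view, which follows directly from focal length and gate width. A small sketch, assuming a roughly 24.9mm-wide Super 35 frame and simple spherical lenses:

```python
import math

def horizontal_fov(focal_mm, gate_width_mm=24.9):
    """Horizontal angle of view for a spherical lens on Super 35."""
    return math.degrees(2 * math.atan(gate_width_mm / (2 * focal_mm)))

for f in (18, 35, 50, 85):
    print(f"{f}mm: ~{horizontal_fov(f):.0f} degrees")
# 18mm ~69, 35mm ~39, 50mm ~28, 85mm ~17 degrees -
# the longer the lens, the narrower the slice of background you see
```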


3 - FRAMING GUIDELINES

Although there is technically no right and wrong when it comes to framing, there are a few aesthetic and cinematic conventions or guidelines which have been widely adopted in filmmaking over the years.

One of the foundational framing guidelines is called the rule of thirds. This principle suggests dividing the frame into nine segments made up of two evenly spaced vertical lines and two evenly spaced horizontal lines.

You then place the strongest visual elements in the frame, like the subject, along these lines or at the intersections of these lines.

Probably the easiest example to show this is by framing the horizon. Usually cinematographers will either frame a landscape so that the sky portion occupies the top third of the frame and the earth portion occupies the bottom two thirds, or they will flip it and place the earth portion on the bottom third and the sky on the top two thirds. 
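Since the grid is pure geometry, you can compute where the thirds lines fall for any frame size - a tiny illustrative sketch:

```python
# Rule-of-thirds grid line positions for a given frame size.
def thirds_lines(width, height):
    xs = (round(width / 3), round(2 * width / 3))
    ys = (round(height / 3), round(2 * height / 3))
    return xs, ys

print(thirds_lines(1920, 1080))
# ((640, 1280), (360, 720)) - strong elements sit on these
# lines or at their four intersections
```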

Another convention is the idea of leading lines. These are where objects in a frame are lined up compositionally so that they create an invisible path which guides the audience’s gaze towards a specific part of the frame.

These lines can be created in symmetrical frames by finding elements that lead to a centralised point, like a doorway where a character is standing.  

Filmmakers can also create a frame within a frame by composing the subject within a shape, like a mirror, a door or a window to create a more formal perspective.  


4 - SHOT SIZE & ANGLE

One of the most important decisions when it comes to framing is how wide or tight you want your shot to be.

As we hinted at earlier, wider shots are great at establishing the environment around characters and familiarising the audience with the geography of the film’s world. 

While tighter shots, where the subject occupies a large area within the frame, can be used to punch in and highlight details: whether those are an important object in the story or to better read and empathise with the emotion on someone’s face.

I’ve made a whole video on shot sizes before, which I’ll link below, but I think the biggest takeaway from it is the idea that, in life, our proximity to a person defines our emotional relationship to them.

Therefore, the tighter we frame a shot on a character, the more intimate it feels, while wider compositions feel more emotionally neutral and observational.

At the same time, the angle at which we point the camera at a subject also has a large effect on how frames can be perceived by audiences. 

Most shots in filmmaking are taken from a neutral, or medium angle, where the camera is positioned roughly at the eye level height of a character without any significant vertical tilt.

This approximates the viewer’s own eye level and creates a level of empathy and identification with characters. It also conveys a sense of normalcy and realism as it’s not visually jarring.

Low angles, where the camera is positioned at a height below the character's eye line and angled upward, create more of an uneven emotional connection, and are often used to make characters feel more powerful, physically larger, dominant, imposing and stronger.

While high angles, shot from a tall position with the camera tilted down, tend to have the inverse effect of creating a sense of vulnerability, diminished size or weakness.

    

5 - BALANCE

Shots that are thought of as cinematic generally have a degree of balance to them. However, this balance can also be deliberately thrown off and subverted for effect.

A balanced frame is one where no part of the image carries an overwhelming visual weight that throws off the rest of the frame.

One way to think about this is in terms of negative space, empty areas in the frame without significant visual elements, and positive space, parts of the frame that draw the eye towards a focal point.

Filmmakers can create a symmetrical balance by centrally framing a subject and then equally weighting areas of negative space against the middle area of positive space.

Or they can frame shots with asymmetrical balance by placing the main subject in the image off-centre and then weighting the other side of the image with elements of negative space.

Other visual aspects like colour or areas of light and shadow can also be layered to either achieve symmetrical or asymmetrical balance within a shot.

When it comes to framing a dialogue scene between two characters, a common technique is to use a tik-tok or shot-reverse-shot pattern: where each shot is taken from the same side of the 180 degree line, in a way that may place the characters on opposite sides of the frame.

This introduces another two framing concepts: breathing room and headroom. 

Breathing room is the amount of negative space between the subject and the edge of the frame. Traditionally this space is left open in front of characters to give a sense of normalcy. Unless filmmakers want to deliberately introduce a bit more uncertainty or tension by leaving characters with limited or no breathing space.

Headroom is the amount of space above a subject's head. This can either be traditionally framed so that there is some negative space above the character, or the subjects can be given a haircut, so that they have no headroom and the top of their scalp is framed out - which may make the shot feel a bit tighter, more intimate and even tense.

CONCLUSION

There’s no magic tool that will allow you to create perfectly cinematic frames. Probably because there’s not really such a thing as a perfectly cinematic frame. Some movies may need uglier, off kilter shots, while others may benefit from a more precise, symmetrical form.

It always comes down to forming a perspective on a story that you then translate into a look. Every brain will do this differently and interpret how the world is framed in different ways. But hopefully the next time you are practising snapping photos or composing your next shot, you will have some of these techniques in the back of your mind that you can use to manipulate how you want your images to look. 

Gray Kotze

Cinematography Style: Shabier Kirchner

This video will unpack Shabier Kirchner's work as a cinematographer a bit further by going over how he got his start in the industry, looking at some of his thoughts and philosophies on filmmaking and breaking down some of the gear he’s used to create images.

INTRODUCTION

After getting his first big breakthrough working with director Steve McQueen, Shabier Kirchner has emerged as a prominent, self-taught cinematographer in the world of festival and indie films. 

His photography mixes expressive but not heavy handed lighting, different formats, big close ups, handheld movement and naturalism to create an often dense, grainy, filmic look that evokes unconscious feelings from the audience.  

This video will unpack his work as a cinematographer a bit further by going over how he got his start in the industry, looking at some of his thoughts and philosophies on filmmaking and breaking down some of the gear he’s used to create images.  

BACKGROUND

“Images were always something that I was surrounded by. I was just immediately attracted to drawing and eventually photography as a way of expressing how I was feeling. In a way that I couldn’t really do with words or I couldn’t do with writing.”

Although the Antiguan born cinematographer struggled a bit in school, he developed an early love of photography. This was amplified by his dad who also loved photography, so much so that he had his own darkroom.

Here, Kirchner got to experiment with film and learn the basics of exposure and development. As he got older he began shooting a lot of what he was surrounded by, such as surfing and skateboarding. He slowly built a reel, which he would later use to apply for a Masters in cinematography at the NFTS film school in London.

He made it to the final round of selection, but wasn’t accepted. A job then took him to New York, where he managed to work as a trainee on a movie. The DP on that movie, Thomas Scott Stanton, then gave him the amazing opportunity to shoot 2nd unit cinematography.

After that he settled in New York, working on commercials, music videos, short films and indie projects for the next eight years.

One day he got an unexpected call from Sean Bobbitt, Steve McQueen’s regular cinematographer. Since Bobbitt sometimes worked with NFTS, he assumed it was in regard to his earlier application to the film school, however, it was something far better. 

Bobbitt was looking to introduce a new cinematographer to Steve McQueen, as he would be unavailable to shoot an upcoming series McQueen was working on called Small Axe. This sparked another opportunity in Kirchner’s career.

PHILOSOPHY

By breaking down the choices that cinematographers make, my videos may make it seem like cinematography is a very analytical process. But often it’s not at all. Many DPs internalise their years of practice and formal and technical training, then use that to unconsciously make decisions which dictate the visual language of a film based on what feels best. 

“Like, everything about this project I felt was done on a very unconscious level. It’s something that now looking back on it I feel that there is a lot of value to operating on your impulses and operating on your emotions. Things that you feel that you may not be able to quite put into words.”

This doesn’t mean that he doesn’t prepare. Depending on the project he may put together a collection of images from photographers that he likes, or conduct research through conversations with the relevant people or communities that the story takes place in. While at the same time shaping a perspective based on his own experiences. 

And, of course, he may put together a shotlist. For example, during prep on the first episode of Small Axe, he compiled comprehensive lists of shots for covering scenes - with as many as three different alternatives per shot.

The director, McQueen, dismissed this approach, preferring to work off conversations about the story and characters, which ultimately led them to finding the right visual approach for each episode in the series.

Kirchner also drew from the wall full of period reference photos that the production designer had gathered. This gave everyone a sense of direction of the look, which also fed into his spirit for open collaboration with the crew.

“I want everybody to have read the material. I want everybody to feel what it is that we’re trying to achieve. That, you know, everybody had agency. I think that’s a really important thing. And when you feel that happening throughout a whole crew, the reverberation of that is, like, intoxicating.”

This collaborative environment that he encourages also extends to some of the gear decisions that are made by his technical crew. 

GEAR

Fostering an environment on set where everyone, including the technical crew, is on the same page about the look helped when it came to selecting gear and designing lighting setups.

“I wouldn’t call myself the most technical of people and I’m, like, totally OK with that because I have so much trust in people like Ian and his crew. To go and be able to describe a feeling or describe an aesthetic or a quality of light and have someone like Ian take that and interpret it in a way that is achievable is really great. Here’s a photograph, you know, Eggleston took at night and this is the light and it looks like we’re underwater. What do you think?”

This led to a member of the lighting team proposing to the gaffer that they use ETC Source Four fixtures to create different pools of overhead light. These spotlights, often used in stage lighting, create crisp beams that can be spotted - meaning the spread of the light can be controlled - and they can be dimmed.

They were also easy enough to rig, as top down lamps, from the highest windows of the street facing apartments.

They were all gelled blue-green to mimic the silvery mercury vapour street lights of that era, creating multiple, controllable top down pools of bluish light reminiscent of Kirchner’s reference photo.

When lighting, he often uses contrasting colour temperatures and fixtures, to create different pops of colour across the frame.

For example, in this interior he used super thin LED Lightmats which could be velcroed to the ceiling, diffused with an off-colour fabric and gelled with leaf-green, steel-green or yellow in different areas to break up the modern, perfect feel of LED light.

This overhead ambience lifted the light levels of the entire space, which was further accentuated by practical tungsten wall sconces to create a warm look. This warm interior light was offset by the cooler Source Four street lights that were rigged outside. 

Even for more traditional day interior scenes, which are often lit through windows with stronger, daylight balanced HMIs, he may add little pops of tungsten practicals in the background to contrast the cooler daylight feel with a homely warmth. 

“I have so much love for celluloid. I just think that there is something very special to it. The way it treats skin. But I also think that the process in which we work with film, as well. There’s a lot of value in that. To be able to think, like, in an economical way and not just sort of spin the camera and roll and roll and roll. To, sort of, really trust what you’re doing as well.”

When it comes to choosing a medium, he does love the look of shooting on film, but will also choose digital cameras like the Alexa Mini or a Sony Venice, depending on his practical needs or the needs of the story.

A great example is the five part series Small Axe, where each episode was shot on a different medium. He used the cleaner, wider perspective of the large format digital Sony Venice for one episode, and the digital Super 35 Alexa Mini for another, for its ability to roll long takes of up to 45 minutes.

Grainier 16mm film was used to bring out a 1970s aesthetic, the textural, widescreen 2-perf 35mm look was used to frame wider aspect ratio shots of a community, and the more stabilised, structured, taller aspect ratio of 3-perf 35mm was used for another episode.

Each choice of format brought a different look that better suited the story of each episode.

When shooting on film he used 500T stock from Kodak, 5219 for 35mm and 7219 for 16mm. This stock has a beautiful, higher textural grain to it, and - being rated at 500 ASA - is fast enough to practically use it for both day and night scenes. He’s even push processed this film at times to create even more grain.

Kirchner shoots this tungsten balanced film without using any correction filters - even when shooting in daylight - preferring to correct the colour temperature in the grade rather than in camera.

Like his choice of formats, how he chooses lenses also depends on the kind of look he is after for the story. For example, he’s used the Cooke Speed Panchro 2s for their soft vintage roll off and warmth, the PVintage range from Panavision for their smooth, fast aperture, romantic look, and the Zeiss Master Primes for their modern, cooler, sharper rendering of detail - which helped capture 16mm with a higher fidelity look.

Although the type of camera movement he uses does depend on the director and the story they’re telling, his camera motion often has a rougher, handmade feeling to it. 

Whether through his regular use of handheld, or by choosing not to stabilise bumps with software in post production - instead embracing the little imperfections that come from a human operated crane movement in a sweeping shot across a courtroom.

CONCLUSION

“I took some wild chances on things that I didn’t really believe that I could do but I just did it anyway and I failed terribly. But if I could go back again and do it all again I’d do it the exact same way because failing is success. I’ve learnt the most from things that I didn’t succeed at 100%.” 

Gray Kotze

Grip Rigs For Cinematic Camera Movement (Part 2)

To move a cinema camera in different ways requires different types of mechanical rigs. In this video let’s go over some of the common, interesting and even unusual rigs that are used in the film industry to create motion.

INTRODUCTION

There are many reasons to move the camera in filmmaking. It can be used to reveal more of a space and establish the geography of a scene. It can elevate action in fight sequences, evoke an emotion or a tone, or even provide an unusual perspective on a scene.

To move a cinema camera in different ways requires different types of mechanical rigs. In this video let’s go over some of the common, interesting and even unusual rigs that are used in the film industry to create motion.   

BOLT

The Bolt is a specialised robotic arm rig, which is designed to move the camera at extremely high speeds, extremely precisely. It is built by Mark Roberts Motion Control and is the go to robotic arm for industry level film work. 

So, how does it work? This cinebot has a 6-axis robotic arm - which means it has 6 different points where the arm can swivel, rotate, pan, tilt and roll the camera. This arm is attached to a heavy base which is designed to slide along a track - which comes in 3 metre sections - giving it an additional lateral movement axis.

This total of 7-axes of movement means that it can move the camera in very complex ways, almost anywhere within a confined area. What makes the Bolt special is that it comes with software called Flair that is used to program each move that it makes, frame by frame. 

Once a move is programmed it can be saved and repeated as many times as necessary in frame perfect passes. In other words it can perform the exact same motion multiple times, so that each pass records exactly the same image, even when broken down frame by frame.
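
To get a feel for why a programmed move is perfectly repeatable, here’s a small Python sketch of the underlying idea: the camera’s position is computed deterministically for every frame from a handful of keyframes, so running the move twice produces identical per-frame values. This is purely illustrative - it is not Flair’s actual software, and the keyframe values are made up.

```python
# Illustrative only - not Flair. A move is defined by sparse keyframes of
# (frame, (track_metres, pan_degrees)) and expanded into one value per frame.

def interpolate_move(keyframes, total_frames):
    """Linearly interpolate a value tuple for every frame from keyframes."""
    frames = []
    for f in range(total_frames):
        # Find the pair of keyframes that surround frame f.
        (f0, v0), (f1, v1) = next(
            (pair for pair in zip(keyframes, keyframes[1:])
             if pair[0][0] <= f <= pair[1][0]),
            (keyframes[-2], keyframes[-1]),
        )
        t = (f - f0) / (f1 - f0)  # progress between the two keyframes, 0..1
        frames.append(tuple(a + t * (b - a) for a, b in zip(v0, v1)))
    return frames

# A 2 second move at 24fps: track from 0 to 3m while panning 0 to 45 degrees.
keys = [(0, (0.0, 0.0)), (47, (3.0, 45.0))]
pass_one = interpolate_move(keys, 48)
pass_two = interpolate_move(keys, 48)
assert pass_one == pass_two  # every pass is frame for frame identical
```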

This allows filmmakers to record multiple plate shots of the same take - where they can record different details in different parts of the frame multiple times, then layer different sections of each plate on top of each other in post production.

For example, this is a shot from a commercial that I camera assisted on a few years ago. The Bolt could be used to record two passes: one plate shot of the boy drinking orange juice, and another plate with a dog being cued to jump by an animal wrangler.

In post, the animal wrangler could be cropped out and the motion of the dog jumping overlaid on top of the shot of the boy, so that it looked like it was recorded in a single take. This is made easy by the Bolt’s frame perfect, repeatable, programmed camera moves.

The Bolt is often combined with a high frame rate camera, like a Phantom, to shoot slow motion because the Bolt can move at extremely high speeds. When shooting slow motion, everything, including camera motion, gets slowed down. This means that to shoot extreme slow mo and still get a normal tracking movement, the camera needs to move at a much faster speed than normal.
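
If you want to put rough numbers on that, the arithmetic is simple: the slowdown factor is the capture frame rate divided by the playback frame rate, and the rig has to move that many times faster on the day. A quick sketch, using made up figures:

```python
# How fast must the rig travel for a tracking move to read as normal
# speed once the slow motion footage is played back?

capture_fps = 1000    # e.g. a Phantom running at high speed
playback_fps = 25     # delivery frame rate
onscreen_speed = 0.5  # desired apparent camera speed in metres per second

slowdown = capture_fps / playback_fps    # motion plays back 40x slower
rig_speed = onscreen_speed * slowdown    # speed required during the take

print(f"{slowdown:.0f}x slowdown -> the rig must travel {rig_speed:.0f} m/s")
# 20 m/s - fast enough to be genuinely dangerous, as covered below.
```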

It can also be used to get super fast camera motion when shooting with the camera at a normal frame rate.

It’s actually a bit scary how fast this heavy chunk of metal can move. That’s why the Bolt operators will usually either cordon off the area that the arm moves in or give a stern warning to cast and crew not to go anywhere near the arm unless the operators give permission - because if this thing were to hit anything, or anyone, at high speed it’d be super dangerous, if not fatal.

For this reason, camera assistants will usually strip the camera of its monitor, mattebox, eyepiece and any additional weight that could offset balance, upset smooth movement, or fly off while the arm moves and stops at extreme speeds.

Another use case for the Bolt is to program it to do very specific, macro moves. Using the Flair software and a special focus motor, the focus distance can also be programmed for each frame - since pulling focus at these extreme speeds manually is very difficult, if not impossible. 

This means it can repeat moves in macro shots, get multiple plates, all while maintaining perfect preprogrammed focus.

Although you can do incredible things with the Bolt, it’s usually reserved for specialised, pre-planned shots, as it’s both expensive to rent and time consuming to move around and program.


TOWERCAM

Another piece of equipment which is designed for a very niche type of camera movement is the Towercam. This is a telescoping camera column which is designed to get completely vertical, booming camera motion. It is remote controlled by an operator near the base of the rig.

Unlike a Technocrane, which is more of an angled telescoping arm, the Towercam is an arm that moves completely vertically and can either be rigged from the ground or rigged from above and telescope up and down.

Although the hydraulic arm of a dolly can also be used to do vertical up and down moves, its range is much more limited - to around 1 metre of vertical boom movement. There are different versions of the Towercam, but the XL can extend the height of the camera to almost 10 metres.

This is a great tool for getting large, symmetrical, up and down moves - which is why Robert Yeoman often uses it when shooting with Wes Anderson, who loves himself some symmetry: a dolly for horizontal tracking moves and a Towercam for vertical ones.

But it can also be rigged with a remote head, which allows an operator on the ground to pan and tilt the camera while it moves vertically - great for a shot tracking an actor up a flight of spiralling stairs.

It can also be used for doing fast vertical moves, capturing live events, nature documentaries, or any other application where straight, vertical motion is required.       


3-AXIS GIMBAL

Let’s move on to a piece of gear that most people may know: a 3-axis gimbal. This is a stabilisation device that is used to keep the camera steady while capturing fluid motion. 

3-axis refers to the three motorised points on the rig that counteract any unwanted camera shake. These axes control the tilt, sometimes called pitch, which is the vertical up and down rotation; the pan, sometimes called yaw, which is the horizontal left and right rotation; and the roll, which is the side to side rotation around the lens axis.

By stabilising the camera across these three points, these motors minimise shakes and vibrations, and give the camera movement a smooth, floating, fluid feeling - very similar to what a Steadicam does. 
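
As a toy illustration of what those motors are doing (not any real gimbal’s firmware), each axis can be thought of as a little feedback loop: measure the camera’s angle, compare it to the angle being held, and drive the motor against the error.

```python
# Toy feedback loop per axis - not real gimbal firmware.

KP = 0.8  # proportional gain: how hard the motor pushes against an error

def stabilise_step(measured, target):
    """Return a motor correction for each axis, proportional to the error
    between the measured camera angle and the angle being held."""
    return {axis: KP * (target[axis] - measured[axis]) for axis in measured}

# The operator's footsteps knock the camera 2 degrees in roll, 1 in tilt.
measured = {"pan": 0.0, "tilt": 1.0, "roll": 2.0}
target = {"pan": 0.0, "tilt": 0.0, "roll": 0.0}
print(stabilise_step(measured, target))
# {'pan': 0.0, 'tilt': -0.8, 'roll': -1.6} - the motors push back toward level
```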

Probably the two most popular gimbals on the market for industry level film production are currently the DJI Ronin 2 and the Freefly Movi Pro. 

There are many reasons these stabilising devices have soared in popularity in recent years. For one, they make it very quick to set up and adjust a shot with stable movement - much quicker than setting up and levelling a dolly and tracks. 

For another, their relatively light weight makes it easy to shoot in hard to access locations, where bringing other heavier gear like a crane or a dolly isn’t possible.

They are also flexible in their applications. For example, they can double up as a stabilised head - which can be operated remotely with camera wheels or a joystick - or they can be operated in a handheld configuration using the natural drift of the motors. 

Whereas in the past it may have been necessary to hire a Steadicam, a Steadicam operator, a stabilised Libra head and a Libra head technician, productions can now get away with hiring one gimbal technician with a gimbal - which can be switched between different rigs, including a drone.

Their flexibility also extends to how they can be operated. For example, shooting from a dolly on a track locks you into one lateral line that you can shoot from, while shooting with a gimbal is like shooting with a stabilised handheld rig. It can go anywhere your feet go, which makes it easy to adjust to the movements of actors changing position during a take. 

However, there are a few reasons why some DPs don’t like using them much. 

The camera needs to be balanced so that no undue strain is applied to one of the motors. Sometimes this is easier said than done, especially when using a heavier camera build or longer lenses. Any time a lens is changed it needs to be rebalanced. If a certain weight is exceeded it may not be possible to balance it at all.

Even when accompanied by a gimbal technician, these machines can give technical issues. In my experience, working with a Steadicam and working with a gimbal is like night and day. A Steadicam is simply engineered and almost never runs into technical issues, whereas gimbals almost always need to be tweaked by assistants to deal with technical glitches that pop up.

Also, unless the gimbal is being operated by a second operator with wheels or a joystick, it can be difficult to get precise framing, as the motors create a natural drift of movement as you move or stop, which can be hard to predict.

A last advantage of gimbals is that they can sometimes be combined with other grip rigs for a special move. For example, when I assisted on this show, the key grip came up with an overhead sliding rail system with a bungee cord that the gimbal could be attached to. This took some of the strain of operating the heavy camera away and made it quick and easy to shoot fight scenes.

Or there are special shots like this, which combine a gimbal with the final rig we’ll talk about: a Spydercam.

SPYDERCAM

A Spydercam is a cable suspended rig that allows for smooth, controlled aerial movement. 

Setting one up begins by rigging several high tensile strength cables which need to be securely anchored above the shooting area so that the cables don’t touch the ground. 

A camera rig with a stabilised remote head is then suspended from those cables, and its position - both laterally and vertically - is controlled by motorised winches connected to the suspension cables.

This way the physical position of the overhead camera can be changed by an operator, called the pilot, using software while at the same time another camera operator can pan, tilt or roll the camera to change the frame that it sees.

The Spydercam was regularly used in films before the widespread use of commercial drones, to get controlled, smooth aerial footage within confined spaces, which couldn’t be shot by using a helicopter.

For example, aerial plate shots which swing or fall through a high-rise city, into which they could later drop shots of characters.

Even after drones became popular production tools, there are still applications for the Spydercam - most notably for indoor scenes or scenes shot in a studio that require controlled aerial camera moves.

For example, the shot mentioned before used a Spydercam with a specially designed release. This allowed a stabilised gimbal to float from one floor to the next, which the camera operator could then grab on the second floor and begin shooting with in a handheld configuration.

Another reason filmmakers may want to use a Spydercam is for a shot which starts close to actors. Having a drone start this close to the ground, objects or actors would cause visible wind on screen and would also be a bit dangerous. Instead, a precisely controllable Spydercam can be used to start on a nice, tight shot, which is then brought all the way up to a massive aerial wide by winching it up the cables.

Gray Kotze

How To Shoot Action: John Wick 4

Let’s break down how cameras, lights and grips were used to elevate action scenes in John Wick: Chapter 4.

INTRODUCTION

The first thing that I thought about while watching the fight scenes in the new John Wick movie, was another fight scene in Oldboy. And it’s not only because of the great action choreography.

Both movies use cinematography in a very deliberate way to enhance these fight scenes. Through the use of wide shot sizes, vividly saturated lighting and a smoothly moving camera.

So, let’s break down how cameras, lights and grips were used to elevate action scenes in John Wick: Chapter 4.


APPROACHES TO SHOOTING ACTION

Watching this movie got me thinking about the different ways that fight scenes can be presented by filmmakers. I think a large part of how we as an audience interpret or get involved in these scenes comes down to the kind of visual style in the cinematography.

There are basically two directions that filmmakers can go in when shooting action: one is more realistic, immersive and first-person, the other approach is more elevated, observational and third-person.

John Wick very much falls into the second camp. The fight scenes are larger than life and something of a spectacle. Somewhere in between semi-reality and a comic book world. 

This look is in a large part created by three aspects of the cinematography: the shot sizes used, the lighting and the camera movement. Lots of slower cutting wide shots are used. The lighting is hard and vividly colourful. And the camera moves are smooth.

Compare this to a fight scene in The Bourne Identity. There are lots of quick cutting close ups. The lighting is softer, more motivated and naturalistic. And the camera moves are rough and handheld.  

To me, the Bourne way of shooting feels much more like the audience is being forced into the realistic perspective of the main character.

While the John Wick way of shooting feels like the audience is being invited to sit back and enjoy a heightened, action packed spectacle.

Of course, this isn’t a hard and fast rule that has to apply to each and every fight scene in a movie. For example, Oldboy has one long-take fight scene that feels almost like you are watching a video game from a wide, third-person perspective.

And, it has another fight scene where the perspective shifts to first person, includes POV shots, and covers the scene with close ups, quicker cuts and a rougher camera movement.


SHOT SIZE

Part of what makes the fight scenes in John Wick impressive from a technical perspective is that they often play out in wider shot sizes, with slower cutting. This relies on the performances of the stunts by the actors to be at a high level, rather than getting stunt doubles to do the action and using lots of quick cutting to break each stunt down shot by shot.

Cinematographer Dan Laustsen enhanced this feeling of width through his camera selection.

“I think that is the goal or the key. To shoot it wide and beautiful. As wide as we can.” - Dan Laustsen

The previous John Wick movies had been shot on Super 35 cameras with Master Anamorphic lenses. One way to pull off an even wider look is to shoot on a camera with a larger sensor - the Arri Alexa LF. 

The bigger the sensor, the wider the field of view it is able to capture. However, they ran into a snag: previously, you could either shoot with a large format camera and spherical lenses, then crop to widescreen in post, or shoot with a Super 35 size sensor and anamorphic lenses that natively create a widescreen aspect ratio.

To overcome this issue, they turned to Arri, who had recently developed a set of anamorphic lenses which were able to cover large format sensors. These lenses are called the Arri Large Format Anamorphics, or ALFAs. 

This gave them the combination of a large format wide field of view and the anamorphic look - all in one. These were used to shoot 95% of the movie.
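
To put rough numbers on how much wider a large format sensor sees, the horizontal angle of view works out as 2 x atan(sensor width / (2 x focal length)). A quick sketch, using approximate sensor widths that are my ballpark assumptions rather than production specs:

```python
import math

def horizontal_fov_deg(sensor_width_mm, focal_length_mm):
    """Horizontal angle of view for a given sensor width and focal length."""
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_length_mm)))

# Approximate sensor widths in mm - ballpark figures, not production specs.
for name, width in [("Super 35 (approx. 24.9mm)", 24.9),
                    ("Large format (approx. 36.7mm)", 36.7)]:
    print(f"{name}: {horizontal_fov_deg(width, 32):.1f} degrees on a 32mm lens")

# Super 35: ~42.5 degrees, large format: ~59.7 degrees - the same lens
# sees dramatically wider on the bigger sensor.
```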

Since the widest ALFA lens is a 32mm, the remaining 5% was mainly made up of super wide angle shots using spherical Signature Prime lenses - probably with a 12mm. You can tell which are the spherical lens shots in the movie because of their almost fish eye look and round bokeh, while the ALFAs have oval bokeh. 

“We also shoot wide so Keanu could do what Keanu had to do. And then getting into the close ups for whatever reason super fast because Keanu is running out of speed, of course, because he’s not a stunt guy.” - Dan Laustsen

So, they mainly used wide focal length lenses to capture the stunt scenes in two or three full takes of wide shots, then long focal length lenses mainly for close ups during dialogue scenes - without really using many focal lengths in between.


LIGHTING

Another large part of what elevates the John Wick movies from reality, and places them in an exaggerated, comic book like world, is the lighting.

Laustsen is known for lighting with exaggerated colours, often mixing steel blue, cyan, or green tones, with warmer light. In fact it was partly his use of colour in Crimson Peak that won him the job to shoot the franchise with the director.

“I think he loved Crimson Peak because he loved the colours. We talked a lot about colour. He really likes colour. You know, he loved this heavy world.”  - Dan Laustsen

This coloured lighting, that lifted the story out of the normal, realistic world, was created in two ways. Firstly, with practical lighting and secondly by using film lights.

Practical lights are sources that are included in the frame of the shot. These were normally built into the sets themselves using different coloured strips of LEDs. Or, in this movie, they were created by rigging hundreds of RGB Astera Titan Tubes across the set, placed inside fluorescent housings.

These fixtures could then be wirelessly controlled through a DMX board to tweak their brightness or colour.

Often these tubes would be set to a colour that contrasted with the colour of the key light. For example if the key was set to a cyan-green, then the practical tubes could be set to an orangey-red. This not only creates a vivid, exaggerated, colourful palette, but it also separates the background from the foreground.

These practicals also served an extremely important practical purpose - they could be seen in the shot. This is especially necessary for the kind of super wide shot sizes they were shooting, where the camera saw everything. Like this scene, where they rigged these practicals next to the canal for an incredible 2km. 

When you’re shooting in such an enormous space at night, there is little to no place to hide film lights anywhere near the actors without them being seen.

“You know, that is my goal with lighting all the time. I want to have the light as far away as possible. Of course if you do a close up of something I have light close to the camera but normally all my light is far, far, far away.” - Dan Laustsen

The further that you place a light source from an actor, the less effect that light will have. The only way to counteract this is by having stronger sources of light that compensate for the light lost through distance. This means either hiring higher output sources, or increasing the amount of sources - which, either way, is an expensive endeavour.
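
This is the inverse square law at work: light falls off with the square of the distance, so every doubling of the throw costs you two stops. A quick sketch of the numbers:

```python
# Inverse square law: illuminance is proportional to output / distance^2.

def output_multiplier(distance_ratio):
    """How much stronger a source must be to hold the same exposure after
    being moved distance_ratio times further from the subject."""
    return distance_ratio ** 2

for ratio in [2, 4, 10]:
    print(f"{ratio}x further away -> needs {output_multiplier(ratio)}x the output")
# 2x -> 4x, 4x -> 16x, 10x -> 100x: why lighting huge night exteriors
# from far away gets expensive so quickly.
```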

The wider you want your night exterior shots to be, the more difficult and expensive it becomes to light.

They also shot basically everything at night - even the day interiors and a scene that was meant to take place at dawn. This is because it was the only way to control the level of ambient sunlight and maintain the kind of strong, hard, consistent light for long periods of time without competing with sunlight - which would have overpowered the film lights.

His team either lit the scenes with very high output, hard sources, like T12 or T24 tungsten units. Or they used loads - and by loads I mean like 85 units - of lower output, RGB LED Creamsource Vortex 8s. These gave an even spread of RGB light and were also water resistant - which was very necessary for certain scenes with lots of water.    


CAMERA MOVEMENT

The camera movement generally tracks the action of the choreography with slow, fluid, smooth movement. 

The technical team pulled out almost every piece of gear they could to do this in a variety of different situations: from a drone, to a Spidercam, to a Technocrane. For example, this long take, top-down God’s eye view shot meant rigging a line with a Spidercam, that could move the camera precisely above the large set. 

However, the two main tools that they used to move the camera for the fight sequences were a Steadicam and a dolly - with A-cam on dolly and B-cam on a Steadicam AR.

The AR allows operators to quickly change from high mode to low mode, while keeping the camera completely stabilised. 

Shooting from a Steadicam meant that they could get smooth, flowing movement that followed the action, but which also allowed the operator to be flexible and adjust their movements to the movements of the actors. 

Because the fight scenes were not easy on the actors, they didn’t want to have to do more takes than were necessary. Therefore, the whole movie was shot at quite a deep stop of T/4, which gave the focus puller a little bit of leeway for the difficult, fast moving action scenes.
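
To get a feel for how much leeway that deeper stop buys, here’s a rough depth of field sketch using the common approximation DoF ~ 2 x N x c x s^2 / f^2. The numbers are my assumptions (a 32mm lens, a 3 metre subject, a 0.03mm circle of confusion, and the T-stop treated as the f-number), not the production’s actual figures:

```python
# Rough depth of field: DoF ~ 2 * N * c * s^2 / f^2 (valid when the subject
# distance is much greater than the focal length). Example values assumed.

def approx_dof_m(f_number, focal_mm, subject_m, coc_mm=0.03):
    s_mm = subject_m * 1000
    return (2 * f_number * coc_mm * s_mm ** 2 / focal_mm ** 2) / 1000

for stop in (1.4, 2.0, 4.0):
    print(f"T/{stop}: ~{approx_dof_m(stop, 32, 3):.2f}m of usable focus")
# T/1.4: ~0.74m, T/2: ~1.05m, T/4: ~2.11m - stopping down to T/4 roughly
# doubles the focus puller's margin compared to T/2.
```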

Because of the speed of the scenes, the 1st AC also often chose to pull focus by judging distance by eye, since it’s easier to pre-empt where the actors - and therefore the focus - need to be when you are watching their movements rather than a monitor.

CONCLUSION

John Wick 4 is a great example of how action and the world of the story can be made to feel more elevated due to the cinematic choices made by the filmmakers.

All three elements - wide shots, vibrant, colourful lighting and fluid movement - combined to make the action scenes in John Wick feel like an entertaining, extravagant spectacle.

Gray Kotze

The Most Popular Cinema Lenses (Part 6): Zeiss, DZOFilm, Leica, Angénieux

In part six of this series, I’ll go over four more popular cinema lenses that are used in the high end film industry, break down the images they produce, their practical design and what kinds of filmmaking they are useful for.

INTRODUCTION

The choice of which cinema lens to select for your shoot is a pretty important one. Each lens varies not only in a practical and ergonomic sense, but also in the kind of look that they produce. 

So, in part six of this series, I’ll go over four more popular cinema lenses that are used in the high end film industry, break down the images they produce, their practical design and what kinds of filmmaking they are useful for.



DZOFILM VESPID RETRO

Starting with something a bit unusual, the Vespid Retros are a limited edition set of seven prime lenses. They are modelled on the same optical and mechanical construction as the original Vespid prime set that preceded them.

They come in a pretty standard set of primes, ranging from 16mm to 125mm focal lengths with a stop of T/2.1, except for the 16mm which has a T/2.8 aperture.

The Retros cover a 46mm image circle, which means they can cover all of the recent Vista Vision, Full Frame and LF digital cinema sensors without vignetting.
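
A quick way to sanity check coverage claims like this is to compare the image circle to the sensor’s diagonal, since a lens covers a sensor when its image circle is at least as large as that diagonal. The sensor dimensions below are approximate published figures, so treat them as ballpark numbers:

```python
import math

def sensor_diagonal_mm(width_mm, height_mm):
    """A lens covers a sensor if its image circle exceeds this diagonal."""
    return math.hypot(width_mm, height_mm)

image_circle_mm = 46.0  # the Vespid Retros' quoted coverage

# Approximate sensor dimensions in mm - ballpark figures only.
for name, (w, h) in {"Full frame (36 x 24)": (36.0, 24.0),
                     "Alexa LF open gate (36.70 x 25.54)": (36.70, 25.54)}.items():
    diag = sensor_diagonal_mm(w, h)
    verdict = "covered" if image_circle_mm >= diag else "vignettes"
    print(f"{name}: {diag:.1f}mm diagonal -> {verdict}")
```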

Despite this coverage, they come in a very compact, lightweight form factor, which makes them a good choice for cinema rigs that need balancing, such as gimbals. They come with a standardised 80mm front diameter across the range of primes, which makes it easy to quickly swap out a mattebox when changing lenses. All the lenses, except the 16mm also feature a 77mm filter thread which can be used to attach screw-in filters.

They come with a PL mount, which can also be swapped out for a Canon EF mount. And, just like the original Vespid primes, they have all the modern features that make them quick and easy for camera assistants to work with.

So what makes them different from the original Vespids? Well they come with a bit of a party trick. The glass is finished with a golden coating. This coating produces warm, amber flares that haze up the lens when you point it at a direct light source. They resolve fine details, but also offer a subtle softness. All of this combines into what they call a ‘retro’ or ‘vintage’ look. 

This warm, dreamy look is good for more experimental applications, like shooting dream or flashback sequences, or for capturing films where you want to play with light and texture.

For example, I recently rented these primes out on a music video shoot, which I lit with lots of backlight in order to get maximum flares for a more dreamy, cosmic musical performance.

And, well, they’re silver, which looks kind of cool. 

ANGENIEUX OPTIMO 15-40MM & 28-76MM

Now, let’s move away from primes and take a look at a pair of popular zoom lenses.

The Angenieux Optimo 15-40mm and 28-76mm are a pair of lightweight zooms with a 2.7x zoom ratio. This means that the longest focal length is roughly 2.7 times the widest focal length in the zoom range (40 ÷ 15 ≈ 2.7, and 76 ÷ 28 ≈ 2.7).

The reason I pair these two zooms together is because they are constructed to an identical weight and size, and are often rented as a pair as interchangeable wide angle zooms that can cover from wide to medium focal lengths, while maintaining a high degree of optical quality and an aperture of T/2.6.

With an image circle of around 31mm, these lenses are designed to cover ever so slightly wider than Super 35 size sensors, but they won’t cover the full width of large format sensors, like the Alexa Mini LF.

At 1.92kg these zooms are lightweight enough to be used on a Steadicam, a gimbal or handheld on the shoulder. Zooming and focusing happen internally in these lenses, which means they have a constant lens volume.

In other words the lens barrel doesn’t extend and twist as it might with some vintage lenses. This means a balanced, stabilised rig like a gimbal will remain balanced even if the lens is ‘zoomed’ during a take.

In terms of the look, they have a good degree of contrast and no vintage softness or aberrations. Basically it’s a good, sharp, clean zoom. The Optimo range retains homogeneity of colour, contrast and resolution across the range. This makes it easy to use multiple zooms in the range on the same shoot, without worrying about optical differences in quality or colour.

On the wide 15mm end the lens also has minimal distortion.

As mentioned, these lenses are perfect for anyone who needs a wide angle zoom on a Steadicam or gimbal, to make quick and easy adjustments to focal length or to zoom the lens during a shot.

They are also used by solo operators in documentary situations - for example, I recently used these lenses in a handheld build with an Amira to shoot a doccie-style commercial. Zooms also save time swapping out lenses, and are often carried on long form jobs to accompany a set of primes.    

ZEISS STANDARD SPEED

The Zeiss Standard Speeds, sometimes also called the Zeiss T/2.1s, are a set of lightweight primes. They were originally manufactured somewhere between the late 60s and mid 70s by Carl Zeiss for Arri to be paired with their Arriflex cameras.

They originally came in five focal lengths: 16mm, 24mm, 32mm, 50mm and 85mm. However, later Mk II versions of these lenses in the 1980s added additional focal lengths.

The Mk II versions of this lens are the ones most commonly found at rental houses and feature a similar build and coating to the faster T/1.3 Zeiss Super Speeds. For this reason they are often used by rental houses to ‘build out’ or add extra focal lengths to a Super Speed lens set - especially on the longer end with the 100mm and 135mm focal lengths, but also for DPs that like focal lengths in between the traditional ones, like a 40mm.

A mechanical difference between the Super Speeds and the Standard Speeds is that the actual glass inside the housing of the T/2.1s is pretty tiny - around the size of the glass in another vintage prime set from the same era, the Cooke Speed Panchros - while the glass used for the Super Speeds takes up the entire housing.

Regardless, the T/2.1s are still able to cover Super 35 size sensors without vignetting. Because of their small glass elements, these lenses are very light, yet still solidly constructed and easily usable by camera assistants - as long as they have been well maintained and calibrated by the rental house. 

What I love about these lenses is the look. It is a very comparable look to the Super Speeds as, from what I can gather, the T/2.1 Mk I and Mk II sets both have the same T* multicoating on the glass as the Super Speeds.

Because of the older optical design of these lenses they have some vintage characteristics. For example, when shot with a wide open aperture they have lower contrast than modern lenses, like the Cooke S4s or Master Primes. This gives the images more of a washed out feel.

These old lenses have a simple, symmetrical lens design which produces beautiful bokeh which is slightly swirling. When I’ve shot with them I’ve found the Standard Speeds, especially the 40mm, to have really beautiful, creamy, slightly off kilter, oval bokeh - which I actually prefer to the Super Speeds. Despite their age, these primes are still quite sharp when you hit the focus sweet spot.

So, if you’re looking for compact cinema prime lenses that don’t have to have a super fast aperture, with a vintage, slightly low contrast look, and very pretty bokeh, then the Standard Speeds are an often overlooked choice which are worth considering.

LEICA SUMMICRON-C

Moving from the past back to the present, let’s take a look at a modern set of spherical primes - the Leica Summicron-Cs. 

In the same way that the Zeiss Standard Speeds are a slower aperture version of the Zeiss Super Speeds, the Summicron-Cs are a slower aperture version of the much more expensive Summilux-Cs. However, at T/2 they are still fast enough for most shoots. 

They come in a set of 11 focal lengths from a 15mm up to a 135mm. 

With a 36mm image circle the Summicrons cover an enlarged Super 35 size sensor, but won’t cover most full frame or large format cameras without vignetting.

One of the big advantages of the Summicrons is their build. For modern cinema lenses with a fast stop they are very compact and lightweight, but still have a long focus throw. This means they have lots of well spaced distance markings that 1st ACs can use to more accurately program focus distances into a handset and pull focus.

The 18mm to 100mm focal lengths are also all the same physical length: exactly 101mm. This small, standardised sizing makes them perfect for rigs that need balancing, like drones, gimbals or Steadicams.

The images that these lenses create are pretty typical of sharp, modern cinema glass. They have authentic colour rendition, good contrast, even wide open, and render fine details with crisp, sharp edges. This look is consistent across the entire set.

Therefore these lenses are well suited to productions shooting on a Super 35 camera, that want a reliable, versatile set of standardised, compact lenses that produce a crisp, consistent, modern look with a fast aperture. 

And for those that need all of the above but want to shoot in very low light conditions, or are looking for an even shallower depth of field, the T/1.4 Summilux-Cs are also a viable choice.

Gray Kotze

FPV Drones Are Changing Filmmaking

How do FPV drones work and what makes them so special? Let’s find out.

INTRODUCTION

First, there were helicopters. Then there were drones. And now, there are FPV drones.

These little, flying, cinema cams are now able to pull off shots that could have only been dreamt of decades ago, or even a couple years ago. But how do these FPV drones work and what makes them so special? Let’s find out.

WHAT IS AN FPV DRONE?

One of the main differences between a regular drone built for filmmaking and an FPV drone is in the name. FPV stands for First-Person View. This means the drone provides a real-time video feed from its onboard camera to a display - usually in the form of goggles worn by the pilot.

This allows the pilot to see a front-on perspective of exactly what the drone sees, as if they were sitting inside the drone and flying it. This immersive experience provides a sense of flight and enables more precise control and navigation of the drone.

FPV drones were primarily designed for sport, where pilots race each other to see who can manoeuvre their drone the most nimbly and quickly.

Because of this they are usually way smaller than the big production drones, like the Freefly Alta 8 or Alta X, which are designed to carry heavy payload cinema packages on a gimbal like a Movi-Pro or DJI-Ronin.

An FPV drone’s smaller frame and lighter weight make it far more agile and fast - able to fit through small gaps, all with enormous acceleration.


FPV DRONE SETUP

Because cinema FPV drones are a bit of a niche product, they are often custom built by their pilots. However, it’s also possible to buy some that come assembled straight out of the box.

Let’s break down what a cinema FPV drone may look like. 

The frame is what the camera sits on and is made up of a body - that supports the camera and battery - and arms - that hold the motors that allow it to fly.

Usually frames are constructed from carbon fibre as it is a lightweight material that is also durable. When it comes to building a rig, the name of the game is to keep weight to a minimum.

An example of a frame that may be used for a cinema FPV drone setup is the Hevi Frame from Shendrones. This is a coaxial octocopter: it has four arms, each carrying a coaxial pair of motors that turn two stacked rotors.

To this frame an FPV camera is attached at the front of the drone, along with a transmitter that sends the video signal from the FPV camera, via antennas, to a headset the pilot wears - such as DJI FPV goggles.

Inside this headset the pilot can also set frame lines that mimic what the cinema camera is recording, which are usually cropped in from the wider FPV camera view.

This FPV camera is built to be as lightweight as possible and is only used for monitoring and transmission purposes so that the pilot can see where they are flying. It’s not used for recording.

For the actual video recording you’ll need a cinema grade camera, which is as lightweight as possible. A Red Komodo is currently the most popular. 

This is due to the tradeoff between its light, tiny body and its ability to shoot in a 6K resolution - which gives room for image shake to be stabilised in post - its RAW codec with a high dynamic range, and its global shutter, which eliminates rolling shutter artefacts, like skewing, when filming objects moving at a high speed. As rough arithmetic, delivering a 4K (4,096 pixel wide) frame from the Komodo’s roughly 6,144 pixel wide image leaves about a 1.5x linear margin to crop into when stabilising.

When using the Komodo like a crash cam, to shoot in dangerous situations where there’s a possibility that the camera may get damaged, it’s useful that it has no buffering in writing data to the card. 

In other words, the camera transfers the footage to the CFast card immediately as it shoots, without a delay. So even if the camera crashes and you are able to pull the card out, all the data should be preserved up to the moment of the crash.

The Komodo will usually be paired with a wide angle, compact, lightweight cinema lens, such as a Laowa Zero-D Cine 9mm T/2.9 or 15mm T/2.1. If shooting in bright conditions, an ND filter can be screwed into the thread of the cinema lens.

Because no focus motor is used, the focus distance on the lens will be set at a fixed distance, normally at, or just off of, infinity. 
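
The reason a fixed focus works at all is hyperfocal distance: with a wide lens at a deep stop, almost everything is acceptably sharp. As a rough sketch, using the standard formula H = f^2 / (N x c) + f and assumed numbers (the 9mm lens, a stop of T/8 treated as the f-number, and a 0.03mm circle of confusion):

```python
# Hyperfocal distance: focused at infinity, everything beyond H is
# acceptably sharp. H = f^2 / (N * c) + f. Example values are assumptions.

def hyperfocal_m(focal_mm, f_number, coc_mm=0.03):
    return (focal_mm ** 2 / (f_number * coc_mm) + focal_mm) / 1000

h = hyperfocal_m(9, 8)
print(f"9mm at T/8: everything beyond ~{h:.2f}m is acceptably sharp")  # ~0.35m
```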

A LiPo battery will be strapped to the bottom of the rig. The longer you want to be able to fly the drone, the bigger the battery has to be. So it comes down to a tradeoff: a lighter, more manoeuvrable drone, or a heavier, less manoeuvrable one that can fly for longer.

A 4,500mAh battery may give the pilot somewhere between three and five minutes of flight time, depending on how aggressively they fly.
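
As rough, assumed numbers: a 6S pack with a nominal voltage of around 22.2V holding 4,500mAh stores roughly 100Wh of energy. If a loaded cinema FPV rig averages somewhere between 1.2 and 2kW in aggressive flight, that works out to about three to five minutes in the air - which is why pilots plan each shot so carefully before taking off.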


HOW ARE FPV DRONES CHANGING FILMMAKING?

Aerial shots have two noticeable attributes, one, the perspective they provide, and two, the kind of camera movement they achieve. 

Let’s start with perspective. Aerial shots were first done by shooting from helicopters, which provided filmmakers with a bird’s eye view perspective that could give context to a landscape and provide a sense of scope. For this reason, helicopter shots were often used as a way of establishing a location, before cutting into a scene at that location.

Because of the safety precautions necessary when flying a large vehicle like a helicopter, filmmakers were usually limited to shooting these establishing shots as wides, far from any buildings or the ground.

When drones started taking over, they usually adopted this same, wide, bird’s eye view perspective.

Even though some stunt helicopter pilots are incredibly skilled and can do amazing manoeuvres, their size is still a limiting factor.

FPV drones are different. Because of their much smaller size, and increased manoeuvrability, these drones can physically get much closer to objects or to the ground than ever before. Flying closer to objects with a wide angle lens at a high speed makes aerial establishing shots incredibly dynamic.

“You’d usually have wide interstitials of the location as you move from location to location. Ant and I are just bored by that convention and we wanted to do something more kinetic and more dynamic that would whip you from location to location and just keep the movie moving forward at a breakneck pace.” - Joe Russo, Director

One of these dynamic establishing shots even involved blending an FPV shot that wove through a hospital entrance with a Steadicam shot that started the scene on the actors, using a quick panning movement to hide an invisible cut.

This kept the intensity of the pacing high, while at the same time clearly establishing the space that the scene is unfolding in.

This shot also shows off the second noticeable attribute of aerial footage - movement.

Although helicopters are able to pick up speed, and create dynamic footage, FPV drones are different in that they can make turns and accelerate far quicker, while also using their small size to fit through gaps that wouldn’t have been possible before.

Shooting with a wide lens, close to action has a very different feel to shooting action from far away on a telephoto lens. The physically closer you are to the action the more visceral, kinetic and immersive it is.  

“Drone work is really boring in movies and I wanted to hip it up. I wanted the energy because this is a very intense movie and it just keeps that motion going. And I’m kinda known for how I move my cameras.” - Michael Bay, Director  

So, being able to film aerial footage, in both interiors or exteriors, at high speeds, incredibly close to actors and objects, allows filmmakers to get highly dynamic footage, without using visual effects. This is all possible thanks to the combination of FPV drones and incredibly compact digital cinema cameras.

Although this technology unleashes incredible possibilities, it still has quite a few limitations. They mainly revolve around the weight issue. 

To be as lightweight as possible, these little drones normally can’t carry larger cine lenses, a mattebox, or 4x4 or 4x5.65 inch filters to control exposure or create a filter effect.

This also means that when shooting day exteriors, much of the exposure control happens by stopping the lens down to a deep stop. A deep stop is necessary anyway, since these drones usually can’t take the weight of - or power - a focus motor.

With focus fixed, wide lenses with a deep stop need to be used, so shooting objects up close with a shallow depth of field isn’t possible.

Filmmakers are also limited to very small flight times before they need to change the battery. This makes it impractical to roll for long takes. 

Also, unlike remote heads on helicopters, or larger drones that use a gimbal system, the camera cannot be panned, tilted or rolled by an operator. Instead the pilot is the camera operator and needs to position the front of the drone precisely where they want to shoot.

Finally, there’s also the risk factor. Since a lot of these manoeuvres are incredibly fast it can be dangerous, not only for actors, but for the camera. In fact I remember hearing a story a few years ago about a shoot that was using the first Red Komodo in South Africa on an FPV drone for a commercial. The pilot was skimming it over the ocean at a high speed until, suddenly, it wasn’t just above the water, but under it. And that was the end of the first Komodo in the country.

CONCLUSION

Cinematographers often talk about film equipment, like lights, cameras and grip rigs as tools, which they can choose from to chisel each story into the correct shape. An FPV drone is another tool now available to filmmakers, which allows them to get action packed, dynamic aerial movement like never before.

Gray Kotze

How Cinematographers Use Hard Vs. Soft Light

In this video I’ll explain what hard and soft light is and how each can be used in different applications to impact the visual storytelling of a film. Then I’ll go over some film equipment and techniques that are commonly used to alter the quality of the light.

INTRODUCTION

Cinematography at its core is all about being able to manipulate and control lighting to tell a story. Light can be cool or warm, have a high output or a low output and, importantly, be either soft or hard.

In this video I’ll explain what hard and soft light is and how each can be used in different applications to impact the visual storytelling of a film. Then I’ll go over some film equipment and techniques that are commonly used to alter the quality of the light.

HARD VS SOFT LIGHT

Before we get into talking about how different qualities of light can be used and are created, we need to know what the difference is between hard and soft light.

Hard light refers to a type of illumination that creates distinct and sharp shadows with defined edges. In other words, the area where the shadow meets the light has a very small, if any, gradient. 

Hard light is usually achieved by using a direct light source, such as a bare light bulb or direct sunlight, that casts a focused beam of light onto the subject. 

Soft light on the other hand creates diffused and gentle shadows.

So, unlike hard light, which has little to no gradient where shadows and light meet, soft light has a gradual transition between light and dark areas. Soft light is achieved by using a more diffused or indirect light source, such as sunlight that passes through a sheer curtain, which spreads light evenly over the subject.

Although I’m talking about the quality of light in quite defined terms, it’s important to remember that the softness or hardness of light occurs on a spectrum. 

Usually cinematographers will use a combination of both hard and soft light either in different scenarios or even within the same shot. How light is used is dependent on the context of the location and story that is being told.

HOW IS SOFT LIGHT USED?

Generally, it’s more difficult for cinematographers to create softer light - which we’ll get to a bit later. So what are some reasons and situations that might make them want to do it?

Well, firstly, using softer light is seen as a more flattering way of lighting actors. Because of its gentler gradient of shadows and decreased harshness, it can help to smooth out skin texture and highlights in a more subtle and appealing way. 

If you compare this to a harder light source, you’ll see that harsh shadows and strong direct light accentuates textures, wrinkles and skin imperfections, whereas softer light smooths out faces in a gentler way.

For this reason, soft light has historically been used for beauty commercials as a way to smooth out the skin of the on screen talent. Back in the days of the Golden Age of Hollywood, it was also common to use soft, flattering light on the lead actress, while using a harder light with a more rugged feel for the lead actor.

Another, more modern reason that cinematographers use soft light now is because it can feel more organic, natural and realistic. Of course natural sunlight is not always soft. If you’re shooting outside, in direct sun, the source will be very hard.

However, if you turn off all the lights in a room, and observe how the natural light from the sun hits a space, there’s a good chance the light will be soft. 

This is because, unless it comes directly through a window, sunlight usually bounces and reflects off different surfaces within a room - and, as we’ll find later, bounced light is soft.

Also, if there are clearly defined shadows, such as on a wall, it is clear that artificial light, such as a placed film light, is being used and it’s easy to tell which direction it’s coming from. If instead the shadows are more gentle, it may give a feeling that the light is natural and coming from a motivated source other than a film light.

Finally, soft light also creates a tonal atmosphere or a feeling. Because it’s literally softer and lighter, it can be used to create a sense of calmness, lightness or tranquillity in scenes that have a more peaceful, comedic, romantic or introspective tone.


HOW IS HARD LIGHT USED?

Hard light can be used to create a more dramatic and higher contrast look. Because it creates less of a gradient in shadows, hard sources of light can be good for stories that need to be played out in lower key lighting with lots of shadow and contrast.

This can be particularly effective in creating a sense of tension or danger in scenes with a lot of action or suspense, or to make characters seem more imposing or dangerous. As we mentioned, it is less flattering on the skin, so can be used to portray characters as more rugged and less perfect.

In an exterior, hard light can also be used to accentuate the strength of the natural elements, by making the sun’s rays feel hot and strong. 

A harder backlight can also be placed behind characters in a scene to separate them from the background so that they stand out. 

Because it is easier to create contrast with hard light it’s also a good tool for cinematographers that like to illuminate different pockets of light and shadow within the same frame. 

These pockets of shadow can be used either to disguise details or the identity of a character within a shot, or be used to create a more stylised visual interest. 

Hard light even has associations with certain genres, such as film noir, which has extensively used it to elevate stories from realism and evoke a darker, ominous mood by creating carefully shaped, textured shadows - such as the classic office blind shadow.  

EQUIPMENT FOR SOFT LIGHT

There are two main techniques that can be used to transform hard light to soft light: diffusion and bounce.

To diffuse light, a type of translucent material needs to be placed between the light source and the subject the light will hit.

There are many different types of diffusion, but perhaps the most basic are diffusion gels - which come in rolls of white polyester plastic that can be cut into a size that can be directly clipped onto the light or a diffusion frame in front of the light.

A popular diffusion gel is a 216 White Diffusion from Lee Filters. This greatly spreads and softens the light source, but also takes away a stop and a half of light. The same is true of any diffusion: while softening the light source it also reduces the amount of light that hits the subject. The more you soften light, the dimmer it will be.
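
Since each stop halves the light, it’s easy to work out how much actually gets through any piece of diffusion. A quick sketch (the muslin figure below is my rough assumption):

```python
# Each stop of light lost halves the output: fraction = 0.5 ** stops.

def transmission(stops_lost):
    """Fraction of the light that makes it through the diffusion."""
    return 0.5 ** stops_lost

print(f"216 (~1.5 stops): {transmission(1.5):.0%} gets through")        # ~35%
print(f"Grid cloth 1 (1 stop): {transmission(1):.0%}")                  # 50%
print(f"Heavy muslin (~2.5 stops, assumed): {transmission(2.5):.0%}")   # ~18%
```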

Therefore, DPs that want very soft light may need to use high output fixtures in order to have a bright enough light to expose properly. 

There are many other types of diffusion gels available which spread and soften the light in different ways, such as a Frost, Brushed Silk, Opal or Hampshire Frost. Some gels like a white diffusion gel also come in half, quarter and eighth strengths if DPs need diffusion with less softness that lets through more light.   

Another way to diffuse is to use a textile rather than a plastic gel. This is a cloth material that can be tied onto frames of various sizes, such as 8x8, 12x12 or 20x20.

In my experience there are three main fabrics which are used for diffusion. Silk is a very sheer, natural diffusion fabric and comes in different strength gradients. It offers a light to medium diffusion of light.

Grid cloth is a popular, waterproof fabric that comes in various strengths, such as 1/2, 1 or 2. Each number refers to how many stops of light it will take away, so a grid cloth 1 will reduce the strength of your light by 50%.

Finally, there is muslin. This is a much thicker textile which comes in either white, called bleached muslin, or a warmer cream, called unbleached muslin. Because of its thickness it takes away a lot of light when used as diffusion, which means it requires very strong, high output film lights to be used.

Other equipment, such as a softbox, uses this same idea of placing a diffusion substance in front of the light, but attaches it directly onto the front of the fixture.

Diffusion can be used in front of artificial fixtures, but it can also be placed between a subject and the biggest light source there is - the sun. Typically this will be done with one of the textiles mentioned on a metal frame such as a 12x12.

The other way to soften a light source is to bounce it. A light is placed so that it faces away from the subject, towards a surface that bounces the light back onto the subject. Like with diffusion, bouncing light can greatly diminish the strength of the source - so you need a higher output light to compensate. The further a bounce is from the subject the weaker its light will be - and because softness comes from the apparent size of the source, a large bounce kept close to the subject will give the softest light.

Again, light is usually bounced off a textile, such as muslin. Other surfaces, such as a poly board or even the walls inside a house, can also be used as a bounce. Just remember that whatever colour the bounce is will affect the colour of the bounced light.

Finally, a book light takes this idea to the extreme, by first bouncing a light and then diffusing the light that comes from the bounce. This makes it super soft, but also means that very high output lights are needed to compensate for the large amount of light that is lost.  

EQUIPMENT FOR HARD LIGHT

Creating hard light is much simpler. The easiest way to get hard light is simply to point a direct source of light straight at the subject without diffusing or bouncing it. This could be direct sunlight or a direct film light. The smaller and more direct the source is, the harder and more contrasty the light will appear.

Also, minimising the number of light sources is a good way of creating hard, high contrast light with more shadows. The more light sources that are added to a space, the more the light will get evened out.

A cheaper way of getting strong, hard light, without putting up big film lights, is to use a mirror. Mirrors can be used to directly angle a hard beam of light at a subject, without diffusing that light. 

You can also position practical sources which you see in the shot, such as a bare lightbulb. As long as these don’t have diffusion they will create a hard source of light. 

Gray Kotze

Editing Transitions Every Filmmaker Should Know

In this video I’ll go over a few types of edits, so that whether you’re a cinematographer, a director, an editor or an audience member, you’ll have a better idea of how and why films get assembled the way they do.

INTRODUCTION

If shooting a movie is like growing produce and gathering a bunch of different ingredients together, then editing is where those ingredients are organised, selected and prepared in a specific way to create a meal.

From simple cuts, to graphic matches, to whatever is happening here. But we’ll get to that later. 

There are many different editing techniques and types of transitions that can be used to put stories together. 

In this video I’ll go over a few types of edits, so that whether you’re a cinematographer, a director, an editor or an audience member, you’ll have a better idea of how and why films get assembled the way they do.  


CUT

Let’s start by looking at a basic cut and then examining some of the creative ways it can be used. 

A cut is simply the act of taking a portion of a video clip and joining it with another clip using editing software, resulting in a seamless transition from one shot to another.

Cuts are often used to create a sense of continuity or, alternatively, to establish a clear change in time or location. The speed at which a series of cuts happens can also unconsciously affect the way that the audience interprets the film: whether it feels more hurried and panicked, or more slow and realistic. 

In a dialogue scene, cuts can be used to switch between the person speaking and the person listening, maintaining the flow of the conversation. This is known as a shot reverse shot.


SHOT REVERSE SHOT

Often, cinematographers shoot these shots from the same side of an invisible line - following what’s called the 180 degree rule - and use framing to balance each shot with its reverse shot.

So if character A appears on the left of frame for a close up and character B’s shoulder is on the right of frame, this will get switched for the reverse, where character B appears on the right of frame and character A’s shoulder is on the left of frame.

SHOT SIZES

Editors can also cut in or out of different sized shots. For example from a medium shot to a medium long shot. Cutting from tight to wide could be used as a way of giving more context or information to the audience. For example, here we cut wider as the character’s attention drifts to the bag on her lap.

Cutting from wide to tight focuses the information that the audience is given. For example, cutting from a wide to a tight close up shows which character is most important and allows us to more clearly read their reaction or how they are processing a moment.

INSERT

A variation of this cut is called an insert. This cut shows a close-up of a specific detail or object within a scene. It is usually a brief shot that is inserted into the sequence to provide a visual emphasis or to provide additional information to the audience. It’s a way for the editor to say, “this thing is important.”

All of the shot transitions we’ve discussed so far have been continuous, meaning they portray the scene as unfolding in real time, and try to uphold the suspension of disbelief.


JUMP CUT

The jump cut breaks continuity in a jarring and abrupt way.

Jump cuts are created by removing a portion of the footage from the middle of a shot, causing the subject to appear to "jump" from one position to another within the same sized frame. This effect can be used to create a feeling of unease or tension, as it disrupts the viewer's sense of time and continuity. 

However, in other contexts, such as vlogs, where this cut is made as a practical way of quickly removing mistakes or filler content, the jump cut is more whimsical in tone and conveys the passing of time.  
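To make the mechanics concrete, here is a minimal sketch of building a jump cut with the open-source MoviePy library (1.x API); the filenames and timings are hypothetical. Removing a middle portion of a clip and butting the remaining halves together is all a jump cut is.

```python
from moviepy.editor import VideoFileClip, concatenate_videoclips

# Hypothetical source clip of a subject talking to camera.
clip = VideoFileClip("interview.mp4")

part_a = clip.subclip(0, 12)   # keep the first 12 seconds
part_b = clip.subclip(15)      # drop 12s-15s, keep the rest

# Joining the two halves makes the subject "jump" within the same frame.
jump_cut = concatenate_videoclips([part_a, part_b])
jump_cut.write_videofile("interview_jumpcut.mp4")
```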


CROSS CUTTING

To amp up tension and create a feeling of suspense, some filmmakers may choose to alternate between shots that take place in two or more different locations - that would otherwise be separate scenes unto themselves. 

This is called cross cutting. It is a way of showing different actions or events happening simultaneously or in close proximity to each other. A traditional example of this would be cutting between the perspectives of different characters in different locations during action sequences to provide a contrast between overlapping scenes while simultaneously injecting a faster energy into the cut. 

Anticipation would be reduced if each scene was allowed to play out in full, before cutting to the next scene. 
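As a toy illustration of the pattern - a sketch with hypothetical shot labels, not taken from any real edit - cross cutting is essentially interleaving the shots of two scenes rather than playing them back to back:

```python
# Two scenes, each broken into shots (hypothetical labels).
heist = ["crack safe", "grab diamonds", "alarm trips"]
police = ["radio call", "cars dispatched", "sirens arrive"]

# Played sequentially, tension resolves one scene at a time.
sequential = heist + police

# Cross cut: alternate shots so both events unfold in parallel.
cross_cut = [shot for pair in zip(heist, police) for shot in pair]
print(cross_cut)
# ['crack safe', 'radio call', 'grab diamonds', 'cars dispatched',
#  'alarm trips', 'sirens arrive']
```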

Or a more unconventional example is how it is used here to intercut between scenes with the same characters, happening at the same time, in different parallel universes.


MATCH ON ACTION

Not only is this transition an example of cross cutting, but it's also what we call a match on action or graphic match cut.

This is used to create a visual connection between two shots that share a visual similarity in shape, colour, texture, or movement.

In this case, it cuts from one close up to another, where the actor occupies the same central position in the frame and the same space. This is a technique that The Daniels used often in this movie.

These shots can also be a bit subtler and based more on a thematic connection or on movement. A famous example is in 2001: A Space Odyssey, where we start on the thematic height of technology in a pre-human world - a bone used as a club - and cut forward millions of years to a similarly shaped spaceship - the new height of technology.

But in order to make this kind of cut in the edit, it usually needs to be planned for and shot during production. There are a few ways of doing this. You could use a screengrab from the first shot and use that to approximately line up the actor in the same position for the second shot. 

Or, you can use what is called mix and overlay. This is performed by the VT operator on the set using software such as QTake. They take a freeze frame from the first shot, reduce its opacity, and overlay it on top of a live image of what the camera is seeing on a monitor. The camera can then be lined up until it matches the framing of the first shot.

If you don’t have a VT operator and need to do an exact match, the old school way of doing it is to take a see through sheet of plastic, overlay it on a monitor and trace the outline of whatever needs to be matched with a pen. Then when you need to shoot the second shot, you can take this stencil and match the frame to the traced outline.

CONTRAST CUT

So far we’ve only talked about the visual side of making editing transitions, however sound can also play a major role in edits.

A contrast cut can be created by placing shots which are noticeably different next to each other, whether through differences in colour, tone, texture, shape, movement, or sound.   

Here, a series of cuts get louder and louder until a contrast cut is made to a far quieter shot. This supports the moment that the character goes from a constantly growing agitation and non-stop frenetic life, to a breakdown moment of reflection.

J & L-CUTS

Another way that sound can be used in transitions is by extending it, so that it overlaps the two shots.

When the audio from the next shot is heard before the video actually cuts to that shot, it's called a J-cut, which forms a J shape on the timeline. In this example from Apocalypse Now, the character starts to hear the building sound of a helicopter starting up before we cut to it.

The opposite is called an L-cut, where the sound from the first shot extends into the next shot. Again, this time the choppy sound of helicopter blades bleeds into shots of a soldier alone in a room. 
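Put another way, the picture cut and the sound cut simply land at different points. A small sketch of the idea, assuming a single shared timeline measured in seconds (the class and function names are mine, for illustration):

```python
from dataclasses import dataclass

@dataclass
class Transition:
    video_cut: float  # timeline second where the picture changes
    audio_cut: float  # timeline second where the sound changes

def describe(t: Transition) -> str:
    if t.audio_cut < t.video_cut:
        return "J-cut: the next shot's audio leads the picture"
    if t.audio_cut > t.video_cut:
        return "L-cut: the outgoing audio bleeds over the new picture"
    return "straight cut: picture and sound change together"

print(describe(Transition(video_cut=10.0, audio_cut=8.0)))   # J-cut
print(describe(Transition(video_cut=10.0, audio_cut=12.5)))  # L-cut
```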

Throughout this movie, scenes and shots bleed into each other, blending time and the events together in a dissociative way.


DISSOLVE

Rather than using normal cuts, the editor often used a series of dissolves to enhance this feeling. 

This is where one shot gradually fades out while the next shot gradually fades in, overlapping briefly in the process. This creates a blending effect where the two shots appear to merge or dissolve into each other.

Dissolves are often used to indicate the passage of time, to suggest a change in location or mood, or to create a dream-like or nostalgic effect.

This sequence dissolves three different shots over each other: a shot of helicopters, a shot of a soldier and a shot of a fan. The fan and the helicopter blades also provide a subtle graphic match. The effect is a dreamy one, that, at the same time, hints that these past memories of combat haunt the soldier.

Dissolves can also be used sequentially, one after another, to convey a passing of time.
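Under the hood, a dissolve is just an opacity blend between the two overlapping shots. Here is a minimal sketch with NumPy, using hypothetical frames; the same weighted blend is also what the mix and overlay tool performs on set:

```python
import numpy as np

def dissolve_frame(shot_a: np.ndarray, shot_b: np.ndarray, t: float) -> np.ndarray:
    """Blend one frame of each shot; t runs from 0.0 to 1.0 across the transition."""
    return ((1.0 - t) * shot_a + t * shot_b).astype(np.uint8)

# Hypothetical 1080p frames: a bright outgoing shot and a dark incoming one.
outgoing = np.full((1080, 1920, 3), 200, dtype=np.uint8)
incoming = np.full((1080, 1920, 3), 40, dtype=np.uint8)

midpoint = dissolve_frame(outgoing, incoming, 0.5)  # both shots at half opacity

# A fade to black is the same blend, with black standing in for the second shot.
fading = dissolve_frame(outgoing, np.zeros_like(outgoing), 0.8)
```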

FADE TO BLACK

When a shot is dissolved against a black screen, this is called a fade, or a fade to black. This effect is often used to indicate the end of a scene, to create a sense of closure, or to suggest the passage of time - kind of like the end of a chapter in a book.

In this case a fade to black is used to end a turning point in the screenplay. When the image cuts back in from a black screen, it is clear that a portion of time has passed and we are viewing the final epilogue. 

WIPE

Occasionally filmmakers will use a more stylised transition, like a wipe. 

This is where one shot is replaced by another shot in a distinctive manner that involves a visual element "wiping" across the screen, gradually revealing the new shot. The visual element can be a simple line, a geometric shape, or any other kind of graphic element. 

If we break this transition down frame by frame we see that a wipe is combined with a dissolve and a graphic match to magically transport a character from one location to another. If we play it back at full speed it creates the illusion that this happens in one shot.


HIDDEN CUT

This is a type of hidden cut, where the transition happens on a moment of movement to disguise that an edit took place. 

If we go all the way back to the example from the beginning of the video and break it down - we’ll see another hidden cut, or in fact six hidden cuts.

Each cut happens on a moment of fast movement which is a continuation of the movement from the prior shot. When these clips are organised and played back at full speed, each cut is magically disguised.


Cinematography Style: Bill Pope

In this episode of cinematography style I’ll go over how Bill Pope broke into the world of film, look at his philosophy and ideas on cinematography and go over some of the gear and equipment that he uses to execute his work.

INTRODUCTION

After starting out by shooting music videos in the 80s, Bill Pope progressed into being a highly experienced feature film DP responsible for photographing many high budget Hollywood movies.

At the core of his work, he focuses on collaboration and on using the structure that directors provide to fill in the story's tone visually.

In this episode of cinematography style I’ll go over how Bill Pope broke into the world of film, look at his philosophy and ideas on cinematography and go over some of the gear and equipment that he uses to execute his work. 


BACKGROUND

After finishing graduate school at NYU, Bill Pope didn't stay in New York for long before deciding to make the move out to California, where there was far more work available in the film industry.

“I just kept hammering at it. I’d just shoot for free. I went over to AFI and volunteered to shoot movies for free and I did. I shot many. You fall in with a group of people and I would always just shoot for free. And I worked for free for years. During the day I would be a PA and then I’d take time off to shoot whatever was necessary.”

While working as a camera assistant on a Roger Corman movie, he even used the camera and spare short ends - the unused film stock left at the end of a reel - to shoot a short film in his garage on weekends.

One of the connections he built during this time was with a friend whose production company made music videos with budgets between $2,000 and $5,000. Later, when MTV kicked off and producers were looking for DPs with music videos on their reel, Pope's name came up.

During this time he spent about six to eight years shooting music videos for various well known artists, before director Sam Raimi persuaded Paramount to let Pope shoot his first feature: the superhero movie Darkman.

From there he became a regular features DP, who over the years worked for many other well known directors such as The Wachowskis, Edgar Wright, Jon Favreau and Joe Cornish.   


PHILOSOPHY

“I would say to a young cinematographer the most important thing you have is relationships. And the earlier you form them the better you’re going to be. You can have all the talent in the world but if you don’t know anybody you’re not going to go anywhere.”

Not only does building relationships with directors have the most obvious and most important effect of allowing you to get work, but it also has other downstream effects. Working for a range of different directors exposes cinematographers to many different situations and many different ways of working. It also allows you to pick up on various strengths and weaknesses that directors may have, which will strengthen your skillset as a DP.

“We don’t make the framework. We bring the atmosphere. We bring the thread. We bring the tinsel but they bring the tree.”

Directors will first approach Pope with a meeting to talk about a project or by sending a screenplay, and then a bit later by sending through a shot list or storyboard.

Although he has compiled shot lists for certain directors and certain projects, he usually prefers to be a sounding board for the ideas, shots or storyboards that the directors put forth, and then gives his unfiltered feedback to them.

“You’re just bouncing things back and forth. In a way, I’m the first person to see the movie. So, I’m the first audience member and I just say to directors, “When I start, look I can be really annoying, because I’m going to say what is in my head and I’m not going to filter it too much. I’m gonna be direct feedback to you because I feel like that’s my job.” 

This may involve him proposing anything visual, such as an alternative shot, tone, angle, lens, costume palette or art direction to the directors that he feels better conveys the story. These ideas can either be rejected, accepted, or through collaboration, morphed into a new idea.

This process happens both during prep, as well as on the day of shooting, where he needs to be adaptive and quick on his feet to make loads of quick decisions about how to best tell the story. 

This is especially necessary when working for a director such as Edgar Wright who likes to cut quickly to a variety of angles - which makes the shooting schedule very tight. 

Making good decisions, but also making them quickly is an important quality for a DP to have. 

Using his analogy, the director brings the script and structure - the tree - and then they both decide what ornaments, like the tone, shots or palette, they can add to better accentuate it.


GEAR

Since he started his career in the 80s, long before the first viable digital cinema cameras were released around the early 2010s, much of what he has shot was done using film. Although his recent projects have mainly been shot digitally on versions of the Alexa, he does occasionally still get opportunities to work photochemically.

“It’s just a discipline. You know the mag is going to last 10 minutes long. The slate comes in fast. Everyone is on their toes. It becomes sort of flaccid in the digital world. They’re on their phone, they’re like whatever. On film it’s a complete old fashioned discipline. It’s gotta be in a can. It’s gotta be split up. It’s gotta go to the lab. It’s gotta be returned. Everything’s got a deadline to it and a precision to it that people step up to.”

On top of film changing the set dynamics, he also values it for its look - which renders images with an elevated realism. On Baby Driver he made the decision to shoot the vast majority of the movie on a single film stock, the more sensitive Kodak 500T. He did this in order to preserve a similar continuity of grain across the entire movie. The more sensitive a film stock is to light the more prominent the grain will be - so a 50D stock will have finer grain than a 500T stock.

However, he did occasionally use the less sensitive Kodak 250D for certain car mount shots, where there wasn’t enough room to add a mattebox to the front of the lens, which, if shooting with a 500 speed stock, would be needed to hold ND filters to cut down on light.  

“Ordinarily I was shooting it all with 5219. Just so that I have enough speed to deal with polarizers and overcranking.”

Polariser filters are pieces of glass that only pass light waves oscillating in a certain orientation while blocking others. Practically, this filter can be used either to introduce more contrast to darker tones, most commonly in the sky, or to reduce or manage reflections - for example, reducing the reflections seen when shooting through a car window.

However, this filter also reduces the amount of light let into the camera by around one stop. Therefore, Pope chose a faster film stock that was more sensitive to light to counteract this.

He also mentions ‘overcranking’. This means shooting at a frame rate which is faster than the regular 24 frames per second most commonly used. If the frame rate is doubled to 48 frames per second, it creates slow motion, but it also reduces exposure by 1 stop. 

So, again, choosing a more sensitive stock helped counteract that.

This overcranking was taken to the extreme on The Matrix, which was also shot using sensitive 500T for interiors. Because many of the fight scenes had super slow motion shots in them, Pope had to light these sets so that the camera could shoot at up to 300 frames per second. This was done on a high speed Photosonics film camera. 

300 frames per second requires about 4 stops more light to shoot at than regular 24 frames per second. This means that to get to his shooting stop on the lens of T/2.8 he had to light spaces at least 4 stops brighter, at T/11.
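These figures check out if you run the numbers. Here is a quick sketch of the stop arithmetic (the helper names are mine, not from the video): exposure falls in proportion to frame rate, and each stop scales the T-number by the square root of two. The roughly one stop eaten by a polariser would simply add on top of this.

```python
from math import log2

def stops_lost_overcranking(fps: float, base_fps: float = 24) -> float:
    """Stops of light lost when shooting at fps instead of base_fps."""
    return log2(fps / base_fps)

def stops_between(wide_t: float, closed_t: float) -> float:
    """Stops between two T-stops; each stop multiplies the T-number by sqrt(2)."""
    return 2 * log2(closed_t / wide_t)

print(stops_lost_overcranking(48))   # 1.0 stop - regular slow motion
print(stops_lost_overcranking(300))  # ~3.64, i.e. roughly 4 stops
print(stops_between(2.8, 11))        # ~3.94 - the T/2.8 to T/11 gap
```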

This is fine when shooting exteriors lit by strong sunshine, but is far more difficult when shooting the many interior sets or night scenes - which required many, high output film lights.

For the lobby scene which was shot with high frame rates, the idea was to add an ambient top light to the space and then key the scene with a side light. This sounds simple in principle, but getting to this super bright level of light, required doing this on a massive scale. 

His team rigged 1,000 tungsten par cans to the roof for ambience and about nine high output tungsten dino lights that side lit the scene through 12x12 grid cloths that softened the light.   

It also meant that any practical lights that could be seen in the shot had to be custom built using Pyrex, and specially fireproofed, so that the extremely high output, hot sources of light didn't start any fires.

While most shots in The Matrix were achieved on practical sets, there was also some visual trickery, such as the iconic bullet dodge shot. Rather than trying to physically move the camera around the actor at extreme speed while shooting in extreme slow motion, Pope's team instead created a rig that mounted many stills cameras around the actor in a green screen studio.

They could then use an image from each camera sequentially, so that it gave the illusion that the camera was moving. This shot of the actor was then cleaned up and dropped into an environment, to achieve the famous circular slow motion shot. 
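Conceptually, the effect maps each output frame to the next camera along the arc, so time can be frozen or slowed while the viewpoint keeps moving. A toy sketch of that mapping, with illustrative numbers rather than the production's actual specs:

```python
def virtual_orbit(num_cameras: int = 120, output_fps: int = 24,
                  orbit_seconds: float = 5.0) -> list[int]:
    """Map each output frame to a stills camera index along the rig's arc."""
    total_frames = int(output_fps * orbit_seconds)
    return [round(f * (num_cameras - 1) / (total_frames - 1))
            for f in range(total_frames)]

frame_to_camera = virtual_orbit()
# Frame 0 uses camera 0 and the last frame uses the final camera, so the
# subject hangs frozen mid-dodge while the viewpoint sweeps around them.
```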

One of the metaphorical ornaments that was hung on The Wachowskis' metaphorical tree was how they used colour. The real world in the film was devoid of sunlight, so it was always lit to a cool, bluish colour temperature, while the artificial, computer generated world of the Matrix had a sickly green tint to it, inspired by the colour of computer cursors of the time.

When working on these larger action based movies that require getting a lot of different shots, he’ll often shoot with multiple cameras to maximise the coverage he can get.

This means hiring crew, such as camera operators and dolly grips, who will technically collaborate with him to frame the images and influence the camera movement.

“Any director likes to have other filmmakers around them. You don’t want a piece of meat who isn’t going to give you an opinion. You want people who are smart and who are contributing all the time. I mean, I always consider operators and dolly grips to be part actor. They have to be able to interact with the actors themselves and feel the drama because the actors are never going to do it twice the same way. You have to feel it. You have to be one with them.”

The movies he shoots often use smooth dolly movement to tell stories by doing things like pulling out from an important object to reveal context, or very slowly pushing in with the camera to elevate an emotional scene or experience the character is undergoing.

Each camera move is another little ornament for the tree.

He's used both anamorphic and spherical lenses, but does have a tendency to shoot grander action stories in the anamorphic format, often on Panavision anamorphics such as the G-series, T-series or older C-series lenses.

When deciding where to place multiple cameras, it’s useful to have a viewfinder or pentafinder to more accurately find the frame before placing the heavy cinema cameras in place or laying dolly tracks. 

There are a few photos of him using what looks to be an interesting viewfinder from Kish Optics, which has a video tap inside that can send an image to a little monitor. This makes it easy for both himself and the director to evaluate and find the right frame.


CONCLUSION

The diversity of the films that he has shot makes it tricky to pin Pope down to one particular photographic style. Rather, he works in a close collaborative relationship with each director in a different way, acting as a sounding board for their ideas and providing his own perspective on how best to elevate and execute each story visually, using his deep experience and knowledge of cinematography.


How James Cameron Shoots A Film At 3 Budget Levels

In this episode I’ll look at three movies shot at three increasing budget levels by James Cameron - Piranha II, The Terminator and Avatar: The Way Of Water - to uncover the similarities between them and how he built his career and style as a director.

INTRODUCTION

There are few directors who are as well known by mainstream audiences as James Cameron. After all, he's directed some of the biggest Hollywood blockbusters of all time.

In this episode I’ll look at three movies shot at three increasing budget levels by James Cameron - Piranha II, The Terminator and Avatar: The Way Of Water - to uncover the similarities between them and how he built his career and style as a director.


PIRANHA II - $600,000

“I started working in special effects and I did a little art direction. In fact I got my first directing gig which was Piranha II as a result of Galaxy of Terror. I was doing second unit directing on that film. And they needed a cutaway to a dismembered arm lying on the ground with worms crawling around it. So I have them put a piece of zip-cord, strip it down and lay it underneath the arm and we make it wet so that it’ll conduct and there’s an electrician around behind the set. And the only cue I could think of was action - to tell him to throw the switch. 

What I didn’t know was that these two producers who were looking for a director for Piranha II had come in behind me and they were looking over the setup. And I said, ‘Alright, roll camera’ and they rolled, ‘And action!’ and the electrician behind plugged in and the worms all started to move I said ‘that’s good that’s a cut’. He unplugs. They didn’t see him. Worms stop moving. I turn to them and they say, ‘Come with us we’d like to talk to you about a film we’re making.’”

As the title suggests, Piranha II was the sequel to Piranha, which was basically a B-movie rip off of Jaws that replaced the shark with genetically modified flying piranhas that could somehow survive out of water. The low budget sequel was taken on by an Italian producer, Ovidio G. Assonitis, known for making B-horror movies.

James Cameron, who had built up a career as a production designer and special effects director, was initially brought onto the project to do the special effects, but after the first director on the project was quickly fired due to a creative disagreement with Assonitis, Cameron was promoted to director of the movie.

Filming of the exteriors took place in Jamaica, with some interior scenes filmed in a soundstage in Rome.

Due to the low budget of $600,000, and an allegedly controlling Italian producer, almost the whole crew was made up of Italians, including the cinematographer, whose name was spelled incorrectly in the credits. This quickly made communication difficult for Cameron, as apparently most of the crew could not understand English.

As the story goes, Cameron was eventually fired from the shoot as director due to constant interference from, and disagreements with, the producer. After firing two different directors, the producer finished directing the rest of the B-movie himself. However, as Assonitis felt that the movie needed an anglicised director name in order to sell, Cameron's name was kept in the credits.

Cameron was also allegedly not allowed to view rushes of the footage he shot, and was kept out of the editing process - although he would later buy back the footage from the distributor so that he could recut, re-score and release the originally intended version for home video.

Although Piranha II will probably not go down in history as a cinematic masterpiece, the low budget horror movie still contains a few directorial stylistic threads that would continue throughout his career. 

One, his fascination with the ocean and setting films in or around water. 

Two, telling stories in a way that draws the audience into the magical world of cinema where realism is replaced by spectacle and plausibility is replaced with a sprinkle of cinematic magic.

Three, his fascination with using special effects and pushing the boundaries of whatever technology he has at his disposal.

And, finally, presenting larger than life stories with lots of action and set pieces, which are told through the eyes of a focused cast of main characters.


THE TERMINATOR - $6 Million

In the wake of the release of Piranha II, James Cameron fell ill and had a fever dream one night about a metallic, robotic torso. This sparked the idea for the sci-fi slasher script he would write next, about a cyborg assassin sent back in time to kill someone. He called it The Terminator.

“The first film that I directed that I got fired off of was called Piranha II and I think it was about $600,000. And the second film was The Terminator and that was $6 million.”

Although it was a substantial jump from his first low budget flick, $6 million was still considered a fairly limited budget to tell the kind of action heavy, ambitious sci-fi movie he had written.

“It’s not really about machines from the future. It’s sort of about our relationship with technology. But it’s really kind of about our human potential for dehumanising ourselves. You see cops as examples. And then, you know, the other idea is not to trust technology and even not to trust the fabric of reality.”

Cameron employed Adam Greenberg as the cinematographer on the project, who worked with a relatively small camera and lighting package - without any expensive, elaborate gear setups.

The Terminator was shot on Eastman 250T film stock, with an Arri 35 BL for dialogue scenes and an Arriflex 35 III for MOS, action or insert shots that didn't need synchronised sound.

Since the script featured lots of night scenes, Greenberg chose a set of spherical prime lenses with a fast aperture that could let in lots of light - the Zeiss Super Speeds. Also, based on the classic 80s halation that you can see around the highlights, it’s likely that he used a diffusion filter, such as a Tiffen Low Con. 

When it comes to action, Cameron edits it tighter and more coherently by creating little pockets of quick cuts that group different shots of the same action or idea together, rather than just cutting back and forth between various unrelated shots.

“I think it’s a very analytical process. I remember literally having a discussion with the editor and saying, ‘Look, in this battle sequence every single time we’ve improved the battle we’ve done it by putting like with like, by putting the attack with the attack, the retreat with the retreat and grouping things that way.’”

The cars in the chase sequences were never allowed to drive faster than 40 miles per hour. So, Greenberg helped to create a faster sense of motion with light. He placed film lights with operators on moving vehicles next to the shooting vehicle. The operators could quickly pan and move the source around to create lighting and reflections that gave the illusion that the streetlights were passing by quicker than they actually were. 

Another analytical way of creating a menacing tone for the Terminator was to often shoot him from a low angle, while Sarah Connor was shot from a more comfortable, familiar, neutral angle.

Overall he also used a large amount of backlight - placing hard light sources behind the actors in almost every night scene, generally using softer sources for Sarah Connor and harder backlight for other characters. And don't forget about the 80s hair light. This created a more silhouetted, shadowy look, which both helped to hide some of the visual effects and created the moody noir feel that they were after.

Another way of masking special effects was through a combination of clever angles and editing. Cameron's philosophy was that if you supply the audience with a shot A and a shot B, the audience will quickly fill in shot C using their own imagination. A good example of this is the famous eye extraction scene. By showing shot A, a profile shot of the blade lifted up to an eye, and shot B, blood dripping into water, the audience fills in the gruesome shot C - the cyborg cutting out its own eye - by themselves.

AVATAR: THE WAY OF WATER - $400 Million

After the huge financial success of Avatar in 2009, a sci-fi epic which pushed the technological boundaries of motion capture and CGI, Cameron promised a series of sequels.

It took over a decade of development and production to create the first in a series of follow ups - which were all filmed simultaneously.

Avatar combines live action capture, as well as motion capture that is later used to create computer generated characters, in computer generated environments. As I mentioned before, a trademark of Cameron’s work has been how he pushes the technical limits of gear and technology. He has shown an interest in visual effects his whole career, so much so that he started his own VFX company, Digital Domain, in the 1990s.

Since Avatar uses an unusual process to make the movie, let’s break down the full workflow of how it was prepped, shot and then finished in post.

Cameron selected his old cinematographer Russell Carpenter, who shot Titanic, to photograph the film. The photographic process started in 2018, when Carpenter oversaw the virtual lighting of the CGI sequences using a programme called Gazebo, developed by the visual effects house Weta FX. He pushed for lighting with different colour nuances for different virtual spaces. This virtual lighting was also valuable as he would have to match it in the next step of the process: shooting live action scenes with real human characters.

These scenes were filmed in 3-D, using a 3-D rig designed by one of Cameron’s companies, with two Sony Venice cameras and Fujinon zoom lenses. They had to frame for two different aspect ratios during shooting. The taller 1.85:1 was used for 3-D presentation, while the wider 2.39:1 ratio was used for regular 2-D screenings.

They also made the unusual decision to shoot at 48 frames per second, instead of the more regular 24, which decreased the amount of motion blur. To compensate for the one stop loss of light that comes from shooting at a higher frame rate, they shot using the Venice’s higher ISO base of 2,500.    

Once the live action portions were done they turned to capturing the virtual performances using motion capture technology.

“So, it’s a set without a set. Your coral reef is going to be some pieces of tubing which are screwed together to create contact points for the actors. So if they’re pulling themselves through coral we don’t build the coral. We build that which they need to touch and then we warp reality to fit their hand contacts later.”

Motion capture doesn't really use cameras in the traditional sense. Instead, actors are put in skintight suits that have reference markers or sensors on them. A number of cameras, or sensors, are then placed around them to capture how the actor moves in 3-D space.

This data is then later used in post production to give a digital character the same movement as the actor on set.
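As a rough illustration - a sketch of the general idea, not the production's actual pipeline - the captured data is essentially a stream of per-frame marker positions that post production maps onto the matching joints of a digital character, rescaling where the character's proportions differ from the actor's:

```python
# Hypothetical captured data: per-frame 3-D positions (metres) for one marker.
wrist_marker = {
    0: (0.00, 1.60, 0.00),
    1: (0.02, 1.61, 0.00),
    2: (0.05, 1.62, 0.01),
}

# Retarget onto a taller digital character by scaling the skeleton,
# e.g. a 1.8m actor driving a 3m character (illustrative numbers only).
SCALE = 3.0 / 1.8

character_wrist = {
    frame: tuple(axis * SCALE for axis in position)
    for frame, position in wrist_marker.items()
}
```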

For the underwater sequences with digital characters, Cameron played out scenes with his actors in mocap suits in a custom built pool in a studio. Actually shooting underwater gave a more realistic sense of motion but came with some challenges. They had to construct underwater housings for the mocap cameras, and capture at a near-ultraviolet wavelength of light, which would better record motion through water. The pool was lit by rigging 60 SkyPanel S60s over the water. They then placed little plastic beads on the water's surface, which both diffused the light and minimised reflections underwater - reflections which would cause false data.

After all the live action and motion capture footage had been acquired they could then plug the footage back into the pre-designed lighting and virtual environments. From there an intensive post production team was used to build and finish the characters, movements and environments in the film.


How Cinema Cameras On Movies Are Built

This video is a template for how to put all the pieces of a cinema camera together, as well as a guide to what camera accessories are commonly used and rented on industry level jobs.

INTRODUCTION

In one of the early videos on my channel I created a guide for how cinema cameras are built and all the accessories that are usually attached to them on an industry level production.

Since that video has been removed due to copyright issues, I thought I'd update it and create a new guide that can act as a template for how to put all the pieces of a camera together, as well as a guide to what camera accessories are commonly used and rented on industry level jobs.

Before we begin, just a reminder that camera technology and gear gets updated often, and although the gear shown in this video may soon be outdated, the types of accessories shown are pretty standardised across the industry and should at least remain relevant in concept in the future.      


BASE PLATE

Before we begin adding accessories, we first need to be able to stabilise the camera so that it can be worked on. This means mounting the bottom of the camera to a base plate which can be fitted to a tripod head.

For a standard studio build, a sliding dovetail plate is screwed onto the tripod head. Attaching the camera to the dovetail requires a bridge plate fixed to the camera's base, such as an Arri BP-8.

Bridge plates are designed with two holes that can accept bars or rods which come in either 19mm or 15mm sizes. These rods are used to mount other accessories which we’ll get to later. 

The release on the side of the bridge plate is moved to lock and unlock the plate from sliding. Once unlocked, the camera is able to slide into the dovetail base. Sliding the camera is also a quick way of balancing the weight of the camera on the tripod head so that it is not too front heavy or too back heavy.     

Base plates and bridge plates on industry level cinema cameras are standardised to the same design and size so that the different cameras, from an Alexa to a Venice to a Red, can be securely mounted onto other rigs, such as a car mount, or a stabilised head, without having to change the mounting accessories on the camera.  


LENS

Next, we can put on a lens. The most common cinema mount found at the back of the lens is a PL, or positive lock, mount. Although there are also PV mount lenses offered by Panavision, as well as the large format LPL mount, the principle for attaching a lens to the camera is the same.

For a camera to accept a lens the mount at the back of the lens needs to be compatible with the mount on the camera. 

Line up the notch on the lens mount with the locating pin, found at 45 degrees, and gently insert the lens into the mount. Once the lens is in, turn the PL release clockwise to lock the lens in place. To remove a lens, hold it in place and then turn the release counter-clockwise until the lens is loose and can be taken out.


MATTEBOX

To the front of the lens we'll attach a mattebox. This has two main purposes: to flag light from hitting the lens, which reduces flares, and to hold filters.

There are two types of matte boxes: clip-on matte boxes, which are usually smaller, take 4” or 4x5.6” filters and can be clamped straight onto the front of a lens, and rail mounted matte boxes, which are usually larger, take 6” filters, and need to be slid on and clamped to 19mm or, sometimes, 15mm rods.

For clip-on matte boxes it's important to make sure that the back of the mattebox is the same size as the front diameter of the lens that you will attach it to. For example, if you're using Zeiss Super Speeds with an 80mm front diameter, then your mattebox back must also be 80mm wide.

To attach a matte box we loosen the release, push it onto the lens and then tighten or clamp the release until it holds securely onto the front of the barrel.
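That matching rule is simple enough to express as a lookup. Here is a small sketch; the front diameters listed are illustrative rather than a definitive spec sheet:

```python
# Illustrative front diameters (mm) for a couple of common lens sets.
LENS_FRONT_DIAMETER_MM = {
    "Zeiss Super Speed": 80,
    "Cooke S4": 110,
}

def clip_on_mattebox_fits(lens: str, mattebox_back_mm: int) -> bool:
    """A clip-on mattebox only clamps on if its back matches the lens front."""
    return LENS_FRONT_DIAMETER_MM[lens] == mattebox_back_mm

print(clip_on_mattebox_fits("Zeiss Super Speed", 80))  # True
print(clip_on_mattebox_fits("Cooke S4", 80))           # False - needs a 110mm back
```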


POWER

Before we go any further we'll need power for both the camera and any accessories that need to be run. Cinema cameras typically offer two methods of powering: on-board power, where batteries are mounted onto the back of the camera, and block power, where the camera is tethered to a large battery with a cable.

On-board batteries are usually used for handheld camera builds or other builds where the camera needs to move a lot and be mobile, while block batteries are often used for studio setups, such as when the camera is on a tripod or a dolly - which has a platform where the battery can rest.

Running off block power also enables on-board batteries to be replaced, or hot swapped, without powering down the camera.

An on-board battery plate for V-mount, gold mount or B-mount batteries can be slid onto the 15mm rods at the back of the camera. These batteries don't power the camera for as long as cabled block batteries do, but they allow the camera to move untethered.


MONITOR

Now that the camera has powered up and we can see an image, let's attach a monitor onto the camera that the DP or operator can use to assess exposure and framing.

Using an articulating arm, such as a Noga arm, we’ll screw one side into the base of the monitor and the other side into a quarter inch mounting point on the camera, such as into the handle or the top plate.

There are two cables that need to be plugged in to get the monitor working. The first is a power cable, such as a D-Tap or a two pin, which we’ll plug into a port on the battery plate, into the battery itself or into a port on the camera. It’s also possible to power monitors with their own batteries. 

The second cable is an SDI which provides a video feed from the camera to the monitor. This is plugged from an SDI output on the camera into an SDI input on the monitor.  


TRANSMITTER

Occasionally, directors and focus pullers like to stand next to the monitor on camera, but more often than not they like to have a transmitted video feed to their own monitor.

To set this up we'll attach a wireless transmitter, such as a Bolt 500, to the camera. Again, this requires two cables: a power cable and an SDI video feed.

This Bolt will then transmit a live video feed wirelessly from the camera to a receiver. The signal is either received by a VT operator at their station, where it is output to different monitors, or directly cabled to a director's monitor, or paired with a sidekick receiver that goes into a focus puller's monitor - so that they can see a live feed of what the camera is recording while remaining physically separated from it.


FOLLOW FOCUS

Next we’ll need to be able to change the focus on the lens.

There are two types of gear we can use: a standard follow focus, that needs to be operated by standing next to the camera, or a wireless follow focus which is operated remotely using a handset.

The 1st AC will attach a wireless follow focus motor, such as one from Arri or Preston, by mounting it onto a rod. This could be top mounted on a short rod, or bottom mounted onto one of the 15mm or 19mm rods on the bridge plate. The teeth of the focus motor need to be aligned with the gears on the barrel of the lens.

You then need to plug a power cable from the camera to the motor. Sometimes an additional MDR box may need to be mounted on the camera, which the motor will be plugged into. The focus puller can then alter the focus of the lens remotely by turning the knob on the handset. 

FOCUS ASSIST

A modern tool that some 1st ACs use is a focus assist device, like a CineTape, which sends out an ultrasonic signal to measure the distance between the camera and a subject. This allows the 1st AC to more accurately judge and pull focus.

This tool is especially useful when shooting on film, where focus needs to be judged more by distance than by looking at an image on a monitor.

The focus assist is mounted using an articulating arm. It usually comes with a box that needs to be mounted onto the camera and plugged into a power source, as well as the focus assist sensor itself. The sensor then wirelessly sends a distance readout to the box, as well as to the focus puller's handset.

SOUND

Finally, sound equipment may be attached. This could either be a small box, such as a Tentacle Sync, which is plugged into the timecode port and used to maintain the same timecode between the audio recorder and the camera, or a device which feeds the audio signal from a mixer into the camera through an input such as a 6-pin Lemo.
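Timecode itself is just a running frame count formatted as hours, minutes, seconds and frames; two devices that share a start point and frame rate stay locked together. A minimal sketch of that conversion, assuming a 24 fps project (the function name is mine, for illustration):

```python
def frames_to_timecode(frame_count: int, fps: int = 24) -> str:
    """Convert a running frame count to HH:MM:SS:FF timecode."""
    frames = frame_count % fps
    total_seconds = frame_count // fps
    seconds = total_seconds % 60
    minutes = (total_seconds // 60) % 60
    hours = total_seconds // 3600
    return f"{hours:02d}:{minutes:02d}:{seconds:02d}:{frames:02d}"

# A camera and an audio recorder counting from the same start point
# will read identical timecode at any given moment.
print(frames_to_timecode(100_000))  # 01:09:26:16
```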


CONCLUSION

Cinema cameras are modular by design and therefore capable of being configured into an array of different shapes and sizes depending on what rigs will be used, or how the DP would like to operate the camera.

If you’re new to cinema cameras and don’t have much hands-on experience, the easiest way to practise assembling cameras is at a gear rental house. Reaching out to them to intern there or asking them if you could pop in and job shadow a bit will allow you to see various cameras in the flesh and get experience on how to assemble them.

It'll also help you to better learn the gear and be able to put together a camera gear list of what you need to rent for a shoot - so that, hopefully, when you see a cinema camera on your next job, you'll have some experience and knowledge of how to go about setting one up.
