Shading Cameras

Ever see a church video production where they cut from one camera with a reddish tint to another with a greenish tint?  Or a low-budget production where one camera had good exposure and another was too dark or blown out?  How about one camera with good detail and another that was too sharp, or so soft it looked like they were shooting with a swatch of pantyhose affixed to the lens?

When designing a multi-camera production system to record your church’s services for DVD or internet video-on-demand distribution, live Imag projection or live web-streaming, one of the most important things to be deliberate about is planning in the capability to remotely shade (aka “paint”) cameras from your control booth.

In a live multi-camera application (as opposed to single-camera announcements, roll-ins, testimonials, missions videos, etc.), shading cameras live in real time is essential. Unfortunately, not all cameras offer this functionality, and churches sometimes purchase cameras manufactured and intended for other applications and shooting styles, (mis)using them for live multi-camera production instead. The bad news is that unless the camera choice is made intentionally, churches often don’t find out what functionality they are missing until it’s too late and the cameras are already purchased.

“Shading” (aka “painting”) means being able to adjust, live and in real time, the lens aperture, the camera’s ISO setting (aka “master gain”), shutter, red/blue color gains, master pedestal, red/blue peds, flares and gammas, detail and, if possible, the filter wheel settings of the camera, all remotely from your video control booth or other engineering area of the building.  The person doing this chore might be given the title video operator, shader, V1, video engineer or DIT (digital imaging technician), but no matter the title, they are all essentially performing the same task.  In smaller production or church environments, the video switcher operator or robotic camera operator may wear two hats and also shade cameras.
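To make that parameter list concrete, here is a minimal sketch in Python of the controls a shader is juggling on every camera. It is purely illustrative; the names, defaults and value ranges are my assumptions, not any manufacturer’s actual RCP protocol.

```python
# Illustrative only: a hypothetical model of the paint parameters a shader
# adjusts from an RCP. Names and ranges are assumptions for the sketch.
from dataclasses import dataclass

@dataclass
class PaintSettings:
    iris_f_stop: float = 4.0   # lens aperture
    master_gain_db: int = 0    # "ISO" / master gain, in dB
    shutter: str = "1/60"      # shutter speed
    red_gain: int = 0          # red/blue color gains (white balance trim)
    blue_gain: int = 0
    master_ped: int = 0        # master pedestal (overall black level)
    red_ped: int = 0           # per-channel pedestals ("peds")
    blue_ped: int = 0
    flare: int = 0             # flare compensation
    gamma: float = 0.45        # gamma curve
    detail: int = 0            # edge enhancement ("detail")
    nd_filter: int = 1         # filter wheel position

# One PaintSettings per camera; the shader's job is to keep every
# camera's picture matched while the show is live.
show_cameras = {f"CAM{n}": PaintSettings() for n in range(1, 5)}
```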

The piece of equipment that makes all these adjustments possible is a camera CCU (camera control unit). A camera CCU system should allow you to power the camera, transmit HD-SDI video from the camera to the control booth, transmit HD-SDI video from the control booth back to the camera (return video), transmit SD or HD video from the control booth back to the camera (prompter), transmit stereo audio to and from the camera (especially if an ENG/EFP model), and allow duplex communications via a headset, plus video gen-lock and timecode jam-sync. Phew!

Ideally, all this functionality would be transmitted on ONE CABLE going to the camera. The cable type is typically SMPTE fiber optic or Triax copper. If power is not being fed to the camera via the camera cable, another option is TAC fiber (one or more runs, typically two).

 

Hitachi SK-HD1000

There are cameras designed specifically for live multi-camera production with all these features and capabilities.  Some include Sony’s HDC1700, HDC2000, HDC2400, HDC2500, HDC2550, HDC3300, HSC100, HSC300 and HXCD70. From Panasonic, there is the AKHC3500 and the AKHC3800. From Hitachi there is the SK-HD2200, SK-HD1500, SK-HD1200, SK-HD1000, SK-HD6000 and SK-HD5000. There are other manufacturers and models as well, but the one thing all these cameras have in common is SMPTE fiber or Triax camera connections, a CCU (camera control unit) and an RCP (remote control panel). These cameras are sometimes described as “studio cameras”; however, you will also find them on multi-camera remotes at stadiums, arenas and concert halls.

When one fully understands all the functionality these cameras have, it becomes clear why, for example, Blackmagic’s “Studio Camera” is not a “studio camera” at all.

The second class of cameras is ENG/EFP “news gathering” cameras. The defining feature of this class is that they always have internal recording capability, but what we care about is that they also have a remote paint box spigot and remote lens control capability. ENG/EFP-class cameras require a bundle of cables harnessed together, or a fiber optic “transceiver” (like a Telecast/Miranda/Grass Valley Copperhead 3200), to give them all the functionality of the “studio cameras” described above. ENG/EFP cameras have multi-pin lens cable ports (typically a 12-pin on the front of the camera, to control the iris aperture motor of a removable lens) AND a multi-pin remote paint box port (for Sony, typically an 8-pin on the back of the camera). It is to this rear paint box port (along with audio, video, timecode and gen-lock going to their respective spigots) that the bundle of cables OR the fiber transceiver connects, making the ENG/EFP camera act as a studio camera.

Sony PMW350

Examples of cameras in this class are Sony’s PDW700, PDWF800, PMW300, PMW320 and PMW350; Panasonic’s AJHPX2000, AGHPX500, AGHPX370 and AGHPX255; and JVC’s GY-HM790, GY-HM750 and GY-HM710.

OK, so now we have three ways to skin the multi-camera shading cat: (1) buy SMPTE- or Triax-tethered studio cameras, (2) buy ENG/EFP cameras and use a bundle of cables, or (3) buy ENG/EFP cameras and fiber transceivers to get all the camera functions on one SMPTE, TAC2 or Triax cable.

Note that there are some internal-recording cameras similar to what I describe above as ENG/EFP, which have a multi-pin remote paint box port but not a multi-pin lens cable port (like Sony’s PMWF3). These should be avoided, as most lenses will not allow remote aperture control with them (unless you add a lot of expensive equipment).

The last class of cameras is similar to the ENG/EFP cameras insofar as they have internal recording capability; however, this last class does not have multi-pin remote paint box spigots. The lack of this feature makes it all but impossible (without spending a lot of money and enduring a lot of grief) to get the shading and CCU capability described above.

handicams

Examples of cameras in this class are Sony’s HXRNX30, NEXFS700, NEXFS100, Panasonic’s AGHMC40, AGAC7, JVC’s GYHM150, (all) Blackmagic cameras, (all) RED cameras, (all) GoPro cameras and non-internal-recording dome-style PTZ cameras.

Unfortunately, for owners of this last class of camera (whether in the form factor of a handicam, a dome PTZ or a digital cinema camera), you are out of luck when it comes to cost-effectively controlling aperture and shading colorimetry in a live multi-camera environment.

The Shader’s Rig:

Now, assuming you obtain cameras in the first or second class described above, the following is a typical shading set-up and signal flow:

Shading terminal signal flow (annotated)

  • All cameras (actually all sources) are fed to video distribution amplifiers (or a large router) before the input signals proceed to other areas of the control booth.
  • After the DAs (or router), one signal from each source goes to the production switcher, a second signal from each camera source goes to monitors or a multiviewer (if not built into the switcher) in the control room, a third signal goes to camera repeat monitors (or a multiviewer) at the shader’s area (typically not in the control room) and a fourth signal goes to a 6×1 or 12×1 router panel (or, in the case of the large router above, a remote “single-destination panel”).
  • The output of the 6×1 or 12×1 router panel (or “single-destination panel”) then feeds…

1) A Waveform monitor/Vectorscope, and then loops through to…

2) A Color Critical monitor

All of the above should be in the highest-quality signal format your system can produce (for example, 1080 HD in a 1080 HD-SDI system).
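If it helps to see the whole fan-out in one place, here is a rough sketch of the signal flow above written as Python pseudocode. The four-camera count and all of the names are illustrative assumptions, not a wiring standard.

```python
# A rough restatement of the signal flow described above, assuming a
# four-camera system. All names are illustrative.
CAMERAS = ["CAM1", "CAM2", "CAM3", "CAM4"]

# Each source is distributed (via DAs or a large router) to four destinations:
distribution = {
    cam: [
        "production switcher",        # feed 1
        "control room multiviewer",   # feed 2
        "shader repeat monitors",     # feed 3 (shader's area)
        "6x1/12x1 router panel",      # feed 4 (shader's source select)
    ]
    for cam in CAMERAS
}

def shader_select(source: str) -> list[str]:
    """The shader punches up one source on the router panel; its output
    feeds the waveform monitor/vectorscope, which then loops through to
    the color critical monitor."""
    assert source in CAMERAS, "panel can only select a system source"
    return ["waveform monitor / vectorscope", "color critical monitor"]

print(shader_select("CAM2"))  # e.g., comparing CAM2 against the reference camera
```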

The shader’s physical location ideally should not be in the control room, as it is important that the shader’s area be very dark (to accurately shade blacks with a color critical monitor), and this conflicts with production people in the control room who need some light to read run-downs or scripts, see their stopwatch or dial a phone keypad.

Now, some people (typically with film-school experience) might think: why not just control the camera’s iris remotely and forget about all those other settings?  Are they really needed anyway?

I would respond: in a live multi-camera environment, what benefit is controlling the iris if the colors do not match? If the master pedestals (blacks) are different? If the flare and gamma curves are different? Controlling only the iris is like trying to match and blend three projectors using nothing but their brightness controls, ignoring geometry, sizing and color controls; or mixing sound for a choir using only the channel faders on your desk, without adjusting each mic’s position on the stage, input gains, EQ, compression, gates and reverb.
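A minimal numeric sketch makes the point. Assuming a deliberately simplified transfer model (out = (scene × exposure)^gamma + pedestal, which is not any real camera’s processing), two cameras with identical iris settings still render the same gray card and the same blacks differently if their gammas and peds differ:

```python
# Toy model: why iris alone can't match two cameras.
def camera_out(scene: float, exposure: float, gamma: float, ped: float) -> float:
    """Scene luminance (0..1) -> output code value (0..1), simplified."""
    return min(1.0, (scene * exposure) ** gamma + ped)

scene = 0.18  # an 18% gray card seen by both cameras

# Cameras A and B are "matched" by iris only (identical exposure)...
a = camera_out(scene, exposure=1.0, gamma=0.45, ped=0.00)
b = camera_out(scene, exposure=1.0, gamma=0.60, ped=0.03)
print(f"CAM A mid-gray: {a:.3f}   CAM B mid-gray: {b:.3f}")  # ~0.46 vs ~0.39

# ...and no iris setting on CAM B can fix its lifted blacks:
print(f"CAM B black: {camera_out(0.0, 1.0, 0.60, 0.03):.3f}")  # 0.030, not 0.000
```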

Adjusting iris only in a multi-camera shoot is really a “film-style” concept. It is seen on professional shoots (with digital cinema cameras like Arri Alexas, REDs, Canon C300s, etc.) where the recordings are RAW and all the footage will go through a digital intermediate color correction pass before the audience views it. It is not a live-video process and is not appropriate for church applications, because our Imag and webstream audiences are watching live (not after the event has been edited and color corrected), and even when churches edit, most record in compressed formats like H.264 or ProRes, not RAW.

In a broad sense, there are two (maybe more) schools of thought regarding live multi-camera shading:

(1) Manually shade all cameras pre-show and then touch nothing but iris in-show. This approach is not far different from performing an “auto white-balance” on each camera pre-show, though hopefully much more accurate and effective at dealing with camera-to-camera variances, which often occur even when all the cameras are the same manufacturer and model. Or,

(2) Treat shading cameras live somewhat the way a feature film colorist treats a digital intermediate pass during post on a film project. (Hey, wait a minute!  Aren’t you contradicting yourself?  What about all those “film-school” comments?)

A feature film colorist will not just match cameras (if some scenes were shot multi-camera or on a variety of stocks), but will also use colorimetry to set an aesthetic look and feel that changes scene by scene to evoke a wanted emotional response from the audience.  The difference is that, as opposed to doing that in post at our leisure, we must do it LIVE, which requires the right remote control capability in our camera choices.

Some examples of live-live or as-live shows that use the second approach (shading to create an aesthetic feel) are the Sound of Music, Live! special which aired on NBC last December, the Victoria’s Secret Fashion Show, the YouTube Awards, the music performance series Pepsi Smash (no longer in production) and the Bon Jovi clip below.

Notice on this Bon Jovi video, about 0:35 seconds in, the band goes black & white while the upstage set, the house and the audience remain in color. This was done live!

However, even on productions that have one constant “look” throughout the show, adjustments to gains, peds, flares and gammas may be needed during the course of the taping. Talent occasionally positioned near an LED screen, or a song lit with un-corrected 5600K moving lights or follow spots, may require a filter wheel or colorimetry adjustment in the camera, live. Scenes shot day-for-night, as on Sound of Music Live, require ped (black level) adjustments (and could potentially be pushed to favor blues in shading).  A female performer may require less detail whereas a male may require more.  Varying amounts of daylight/sunset, as in the two Judy Collins videos below, also require song-by-song live colorimetry changes:

No different than shot composition and movement, shading is a tool we have at our disposal to better communicate the story we are attempting to tell.  Some people will intentionally decide not to use the tools available to them, but the worst thing we can do is be ignorant of what tools are available, how to utilize them and the associated costs & benefits.

Photo Attribution: David Tames on Flickr

  • Would love to know more about how the Bon Jovi video was produced. Is this done with lighting colors?

    • Hi Larry, in short, the switcher’s chroma-key capability is used. The band is lit with one color, for example green, the audience is lit normally, and the switcher is told to de-saturate (make black & white) anything that is green.
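For the curious, here is a purely illustrative per-pixel sketch in Python of what the switcher is doing. A real switcher does this in hardware, and the threshold and “greenness” measure below are my assumptions, not the actual show’s settings.

```python
# Illustrative per-pixel version of the chroma-key desaturation trick.
# RGB values are 0..1 floats; threshold and measure are assumed.
def desaturate_keyed(r: float, g: float, b: float) -> tuple[float, float, float]:
    greenness = g - max(r, b)           # crude measure of how green a pixel is
    if greenness > 0.2:                 # pixel is lit green: kill its color
        y = 0.299 * r + 0.587 * g + 0.114 * b  # Rec. 601 luma
        return (y, y, y)                # band goes black & white
    return (r, g, b)                    # normally lit audience stays in color

print(desaturate_keyed(0.2, 0.8, 0.2))  # green-lit band member -> gray
print(desaturate_keyed(0.8, 0.5, 0.4))  # audience pixel -> unchanged
```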

      • Interesting effect – thanks for the response!