Easy to Make ‘Awful’ 3D, World Cup-Savvy Sony Engineers Say
LONDON -- “It’s very easy to make awful 3D,” but “making good 3D is hard,” said Jonathan White, director of sales at Sony’s Professional and Broadcast Division, Thursday at the U.K. launch of Sony’s 3D consumer product line. The Sony engineers who designed the 3D capture equipment being used to shoot the World Cup in South Africa were on hand in a side room to showcase that equipment. They also showed us how the gear can be used to upconvert 2D to 3D “on the fly” when there are not enough 3D cameras to capture vital action.
Twenty-five World Cup matches in five South African cities will be broadcast in 3D, using camera rigs built by Sony, the engineers said. All the while, Sony’s MPE-200 3D processor box will continually check for optical effects that could fatigue viewers, they said. Rather than clamping two cameras side by side, Sony points one HDC-1500 camera horizontally straight at the pitch through a half-silvered mirror, while another camera points vertically upward at the same mirror and captures a reflected view of the pitch, they said. “This lets us use professional cameras, which are too large to clamp together at normal interaxial eye spacing,” said Paul Cameron of Sony Professional. “We can also increase or decrease the interaxial spacing. Most of the cost of the rig is in the mirror array, which must pass exactly 50 percent of the light in each direction. The other advantage of a mirror system is that the cameras can be easily separated after a 3D shoot and used for normal 2D filming.”
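One practical consequence of the mirror design, not spelled out at the demo but inherent to any beam-splitter rig, is that the camera shooting via the mirror captures a mirror image that must be flipped back before the two feeds can be treated as a stereo pair. Below is a minimal sketch of that step in Python with NumPy; the flip axis, frame size and eye assignment are illustrative assumptions, not details Sony gave.

```python
import numpy as np

def prepare_mirror_rig_pair(direct_frame: np.ndarray, reflected_frame: np.ndarray):
    """Pair the two feeds from a beam-splitter (mirror) rig.

    direct_frame    -- camera shooting straight through the half-silvered mirror
    reflected_frame -- camera shooting the mirror's reflection, which arrives as
                       a mirror image and must be flipped back; whether that flip
                       is horizontal or vertical depends on how the second camera
                       is mounted, so axis=1 (horizontal) is only an assumption.
    Which corrected feed serves as the left or right eye also depends on the rig.
    """
    corrected = np.flip(reflected_frame, axis=1)
    return direct_frame, corrected

# Toy 1080p frames standing in for the two HDC-1500 feeds.
direct = np.zeros((1080, 1920, 3), dtype=np.uint8)
reflected = np.zeros((1080, 1920, 3), dtype=np.uint8)
eye_a, eye_b = prepare_mirror_rig_pair(direct, reflected)
```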
The camera rig and MPE-200 processor box are used with a passive-polarizing TV, made by Sony for professional use with low-cost glasses. Sony’s consumer 3D TVs use active-shutter technology. We noted that there were plenty of passive glasses lying around the demo area, whereas the active-shutter glasses for the demonstrations of Sony’s consumer sets were in shorter supply. The MPE-200 processor has its own analyzer screens, which show in monochrome how the left- and right-eye images coincide at the flat (zero parallax) screen plane and overlap in one direction to create (positive parallax) depth behind the screen, and in the other direction to create (negative parallax) action in front of the screen.
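The overlap the analyzer screens make visible follows the standard stereoscopic convention: a feature drawn at the same horizontal position for both eyes sits at the screen plane, a feature whose right-eye position lies further right appears behind the screen, and one whose right-eye position lies further left appears in front of it. A rough sketch of that classification, assuming disparity is measured in pixels as the right-eye position minus the left-eye position:

```python
def classify_parallax(x_left: float, x_right: float, tolerance_px: float = 0.5) -> str:
    """Say where a matched feature appears relative to the screen plane.

    Convention assumed: disparity d = x_right - x_left, in pixels.
      |d| <= tolerance -> zero parallax, at the screen plane
      d > 0            -> positive parallax, behind the screen
      d < 0            -> negative parallax, in front of the screen
    """
    d = x_right - x_left
    if abs(d) <= tolerance_px:
        return "screen plane (zero parallax)"
    return "behind screen (positive parallax)" if d > 0 else "in front of screen (negative parallax)"

print(classify_parallax(960.0, 960.0))   # at the screen plane
print(classify_parallax(960.0, 972.0))   # behind the screen
print(classify_parallax(960.0, 948.0))   # in front of the screen
```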
The box has a “depth budget” adjustment to limit the overlap and prevent excessive, fatiguing effects -- especially negative parallax in front of the screen. “Most of the time the depth is behind the screen, because it looks more natural and doesn’t make people feel their eyeballs are being pulled out of their sockets,” explained Cameron. “The processor also puts black guide borders down the edges of the screen to prevent the unpleasant effect you get with negative parallax when an object in front of the screen moves off the screen to one side and the other eye loses sight of it first -- which is not what the brain expects.”
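Sony did not disclose how the MPE-200 enforces its depth budget, but the principle can be sketched as a check plus a mask: disparities are kept within limits expressed as a fraction of picture width, with a tighter allowance in front of the screen than behind it, and black side bars hide edge violations. The limits, the eye/edge assignment of the bars and the function names below are illustrative assumptions, not Sony’s settings.

```python
import numpy as np

def within_depth_budget(disparities_px: np.ndarray, width_px: int,
                        max_behind_pct: float = 2.0,
                        max_in_front_pct: float = 1.0) -> bool:
    """Check a shot's disparities (right minus left, in pixels) against a budget.

    The percentage limits are placeholders; the allowance in front of the screen
    (negative parallax) is assumed tighter than behind it, matching the comment
    that most depth is kept behind the screen.
    """
    max_behind = width_px * max_behind_pct / 100.0
    max_in_front = width_px * max_in_front_pct / 100.0
    return bool(disparities_px.max() <= max_behind and
                -disparities_px.min() <= max_in_front)

def add_guide_borders(left_eye: np.ndarray, right_eye: np.ndarray, bar_px: int):
    """Add black side bars (a 'floating window') to hide edge violations.

    With the d = x_right - x_left convention, masking the left edge of the
    left-eye image and the right edge of the right-eye image pulls the frame
    edges perceptually toward the viewer; the exact eye/edge assignment is an
    assumption here, not a documented MPE-200 behaviour.
    """
    left_eye, right_eye = left_eye.copy(), right_eye.copy()
    left_eye[:, :bar_px] = 0
    right_eye[:, right_eye.shape[1] - bar_px:] = 0
    return left_eye, right_eye
```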
The processor can be manually set, with a game-style roller-ball controller, to process 2D into mildly upconverted 3D, the engineers said. To do this, the 2D feed is cloned to produce two identical images, which are then slightly separated -- by around 1 percent of the image width, or around 10 pixels -- to create a little artificial depth behind the screen, they said. Then, when there is no true 3D shot of vital action available, near a goal for example, the upconverted shot can be cut into the 3D feed, they said. “If it is only a brief insert, between true 3D shots, you don’t notice the reduced depth,” said Cameron.
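In outline, the upconversion amounts to shifting a cloned frame sideways by a fixed small amount so that everything takes on the same mild positive parallax behind the screen. A minimal sketch of that idea follows, assuming an offset of about 1 percent of picture width as quoted; how the vacated edge is filled and which eye receives the shifted copy are illustrative choices, not Sony’s method.

```python
import numpy as np

def upconvert_2d_frame(frame: np.ndarray, shift_pct: float = 1.0):
    """Clone a 2D frame into a flat stereo pair with mild depth behind the screen.

    The right-eye copy is shifted right by roughly shift_pct of the frame width,
    so every pixel gets the same small positive parallax. The vacated strip on
    the left is padded with black here; a real chain would more likely crop or
    scale, so that choice is illustrative only.
    """
    width = frame.shape[1]
    shift_px = max(1, int(round(width * shift_pct / 100.0)))
    left_eye = frame
    right_eye = np.zeros_like(frame)
    right_eye[:, shift_px:] = frame[:, :width - shift_px]
    return left_eye, right_eye

# Example: a 1080p frame shifted by about 1 percent of its width.
frame = np.zeros((1080, 1920, 3), dtype=np.uint8)
left_eye, right_eye = upconvert_2d_frame(frame, shift_pct=1.0)
```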
In demonstrating its 3D LCD TV lineup at the U.K. launch, Sony confirmed that the six sets had to be carefully spaced and angled around the large room to avoid infrared interference among them. In the demos, we also noticed a repeat of what we have seen in past showcases -- a marked loss of 3D effect when the active screens were viewed from the side. The loss was even more pronounced when the viewer’s head was tilted only slightly.