User:DaliaHashim1287/sandbox
2D to Stereo 3D Conversion
Stereoscopic 3D is the latest and most powerful trend in the cinema, television, Blu-ray, internet streaming, video-on-demand, advertising and broadcasting markets, with increasing potential over the coming decades [1]. In particular, 3D display technology is maturing and entering professional and consumer markets [2]. However, the mass market for 3D entertainment is hindered from achieving its full potential, as shown by the clear gap between the availability of 3D-ready equipment and the lack of 3D content. Hence the success of the 3D entertainment market depends heavily on both a wide availability of 3D content and the quality of that content [3].
The Human Visual System and the Perception of Depth
Figure 1: The human visual system [Source: www.scheffel.og.bw.schule.de]
The monocular and binocular field of view
When capturing a 3D environment with a standard 2D camera, the depth information is lost to a great extent. Humans have two eyes that capture their environment from two slightly different perspectives. The normal distance between the human eyes ranges from 63 mm to 65 mm. The human brain processes the visual information and generates a stereoscopic depth perception.
Figure 1 shows a cross sectional area of the head with the visual cortex, the visual nerve system and the monocular and binocular field of view. The left monocular field of view is mapped to the right part of the visual cortex and vice versa. On the other hand, the left and right binocular fields of view, which are responsible for the stereoscopic depth perception, are mapped to the same side of the visual cortex.
Perception of depth
Depth perception arises from a variety of depth cues. The human brain can decode two kinds of depth cues: monocular and binocular [4]. Monocular depth cues include perspective, aerial perspective, relative size, familiar size, light and shading, texture gradient, blur, depth from motion, motion parallax, interposition and accommodation. The binocular depth cues are vergence and stereopsis, also collectively referred to as retinal disparity. More generally speaking, monocular cues provide depth information when viewing a scene with only one eye, while binocular cues provide depth information when viewing a scene with both eyes by exploiting differences between the images perceived on the two retinas.
Conflicts of Depth Cues
Conflicts of depth cues cause visual discomfort, which leads to eyestrain, headache and nausea [5][6]. This is comparable to the phenomenon of "sea sickness" often experienced on a ship in rough water, which occurs when the equilibrium organ within the inner ear detects vigorous motion that does not match the visual cues perceived inside the ship. The confusion between what the brain expects and what it actually sees makes people sick. T. P. Piantanida calls this phenomenon the barfogenic zone [7].
A similar problem arises from conflicting depth cues, although not all conflicting depth cues cause the same degree of visual discomfort. Some of these conflicts are described in the following subsections. A summary of the conflicts mentioned here is given in Table 1.
Vergence vs. Accommodation
Stereoscopic 3D is an illusion that tricks the brain into believing a displayed scene is 3D although it is not. This is done by providing each eye with a slightly different perspective of the same scene. Differences between the images usually occur horizontally and are referred to as parallax or disparity. This parallax causes the observer’s eyes to point towards an object appearing behind the screen or in front of the screen, depending on whether the object’s parallax is positive or negative, respectively. This aspect of stereoscopic 3D is known as vergence. In nature the eye’s focal length adapts to the distance of the 3D object under observation. This is known as accommodation, which in itself provides a secondary depth cue. When viewing stereoscopic content, however, the observer’s eyes cannot focus onto the position of the 3D object as indicated by the vergence depth cue; they must focus onto the screen plane regardless. This creates the intrinsic conflict between vergence and accommodation when watching stereoscopic 3D. To a certain degree the brain can tolerate this, but if the conflict between these depth cues becomes too large, eye fatigue or even nausea occurs.
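The vergence geometry described above follows from similar triangles between the eyes and the screen. The sketch below is not from the original text; the function name and the default values (64 mm eye separation, 2 m viewing distance) are illustrative assumptions. It computes the distance at which the eyes converge for a given screen parallax:

```python
def perceived_depth(parallax_m, eye_sep_m=0.064, view_dist_m=2.0):
    """Vergence distance implied by a given screen parallax.

    parallax_m: on-screen separation of the left/right image points,
      in metres (positive = behind the screen, negative = in front).
    Derived from similar triangles: Z = e * D / (e - p).
    """
    if parallax_m >= eye_sep_m:
        return float("inf")  # optical axes parallel or diverging
    return eye_sep_m * view_dist_m / (eye_sep_m - parallax_m)

# Zero parallax: the object lies on the screen plane, so vergence and
# accommodation agree and there is no conflict.
print(perceived_depth(0.0))     # 2.0 m
# Positive parallax: the object appears behind the screen.
print(perceived_depth(0.032))   # 4.0 m
# Negative parallax: the object pops out in front of the screen.
print(perceived_depth(-0.064))  # 1.0 m
```

The accommodation distance stays fixed at the screen plane (2 m here), so the vergence–accommodation conflict grows with the gap between the returned depth and the viewing distance.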
Stereopsis vs. Interposition
Figure 2: Frame violation (left: negative parallax, right: interposition)
Figure 3: Floating window technique
The most critical conflict of depth cues is the discrepancy between the binocular depth cue "stereopsis" and the monocular depth cue "interposition". Such a conflict occurs if, for example, object 1 is partly occluded by object 2 – i.e., object 1 must be behind object 2 (interposition) – but has more negative or less positive parallax than object 2 – i.e., object 1 appears closer than object 2 (stereopsis).
A good example of this conflict of depth cues is framing or frame violation, a typical phenomenon when displaying stereoscopic 3D content on a screen of limited size. If an object has a negative parallax and crosses the image borders, it is truncated. The truncation indicates that the object must lie behind the screen (interposition). However, the negative parallax of the object indicates that the object is in front of the screen, as demonstrated in Figure 2. In post-production this can be corrected using a floating window (also called proscenium arch), which is a virtual shift of the screen plane towards the viewer [8], achieved by adding black borders to the left and/or right side of the left and/or right images as depicted in Figure 3.
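As a hypothetical illustration of the floating window technique, the following sketch blackens the left edge of the left-eye view and the right edge of the right-eye view, which gives the frame edges themselves a negative parallax so the window appears in front of the screen. The function name and the nested-list image representation (0 = black) are invented for this example:

```python
def apply_floating_window(left, right, shift_px):
    """Pull the stereo window toward the viewer by shift_px pixels.

    left/right are greyscale images as nested lists [row][col].
    Blacking out the left edge of the left view and the right edge of
    the right view makes each frame edge appear at negative parallax,
    i.e. in front of the screen plane.
    """
    for row in left:
        for x in range(min(shift_px, len(row))):
            row[x] = 0              # black band, left edge of left eye
    for row in right:
        for x in range(max(0, len(row) - shift_px), len(row)):
            row[x] = 0              # black band, right edge of right eye
    return left, right

L = [[255] * 8 for _ in range(2)]   # tiny all-white test images
R = [[255] * 8 for _ in range(2)]
apply_floating_window(L, R, 2)
print(L[0])  # [0, 0, 255, 255, 255, 255, 255, 255]
print(R[0])  # [255, 255, 255, 255, 255, 255, 0, 0]
```

In practice the band widths on each side are chosen per shot, and may even be animated ("dynamic floating window"), but the principle is the same.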
Stereopsis vs. Blur
Stereoscopic disparities and blur are directly related, and both affect the perception of depth for a given 3D scene. This is why a conflict of stereopsis and image blur will be detected and cause visual discomfort. An excess of blur with respect to a scene’s depth has a miniaturization effect, which can be observed in the tilt-shift effect as illustrated by Held et al. in [9]. They further derive the diameters of blur circles as 1/12 the magnitudes of the disparities, which matches human vision, where the average pupil’s diameter is roughly 1/12 the distance between both eyes (64 mm). Blur created in this way corresponds to natural human viewing and additionally guides the viewer to the plane of minimal blur, which can be a desirable effect in entertainment-based content. A more thorough mathematical description of the relationship between blur and disparities, along with additional examples, can be found in [10][11].
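The 1/12 rule quoted above from Held et al. can be stated as a one-line function; the function name is ours, and the result is in whatever units the disparity is measured in:

```python
def natural_blur_diameter(disparity):
    """Blur-circle diameter matching natural viewing, per the 1/12
    rule: the average pupil (~5.3 mm) is roughly 1/12 of the 64 mm
    interocular distance, so natural blur is ~1/12 of the disparity
    magnitude."""
    return abs(disparity) / 12.0

# A 24-pixel disparity calls for a blur circle of about 2 pixels:
print(natural_blur_diameter(24))  # 2.0
```

Applying more blur than this rule suggests pushes the scene toward the miniaturized, tilt-shift look described above.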
Stereopsis vs. Perspective
Perspective is a strong depth cue. Rail tracks, for example, are known to be parallel, yet in an image they seem to converge with increasing depth and intersect at the vanishing point on the horizon. If the vanishing point has the same parallax as the rail tracks at the bottom of the image, the human visual system gets confused.
3D conflict | 3D live action: how to fix | 3D live action: complexity | High-quality conversion: how to fix | High-quality conversion: complexity | Real-time conversion: how to fix | Real-time conversion: complexity
---|---|---|---|---|---|---
Example | Example | Example | Example | Example | Example | Example
Example | Example | Example | Example | Example | Example | Example
Example | Example | Example | Example | Example | Example | Example
Example | Example | Example | Example | Example | Example | Example
Example | Example | Example | Example | Example | Example | Example
Example | Example | Example | Example | Example | Example | Example
Binocular Rivalry
Physical misalignments between the left and right stereo images lead to conflicting images [12], also called binocular rivalry. This mostly occurs when shooting directly in 3D, e.g. with a stereo rig, or when displaying 3D with an imperfect projection system. Binocular rivalry heavily affects the visual perception depending on the strength of the misalignments.
All of the following misalignments lead to binocular rivalry and cause eyestrain, headache or nausea (see Table 2).
In any 3D project one has to take care of both the consistency of the depth cues and the avoidance of binocular rivalry. Table 3 summarizes how these conflicts can be avoided or fixed. Additionally, the table indicates the level of complexity if conflicts have to be fixed in post-production. Finally, the table compares 3D live action with a stereo camera rig, high-quality 2D-to-3D conversion and real-time conversion.
Binocular Rivalry | Characteristics | Reasons |
---|---|---|
Vertical misalignment | Improper vertical alignment of left and right images | Left and right camera or lens not properly matched; Projection system not properly matched |
Luminance/colorimetry | Left or right image is lighter or darker and/ or of different hue | Left and right camera not properly matched; Beam splitter diffraction |
Reflections, Flares, Polarization | Reflections on objects not matching left and right view | Different camera positions/angles; Beam splitter polarization/mirror rig
Contamination | Dust, water, dirt or other particles in one of the images | Lenses or mirror not properly cleaned; Bad environmental conditions |
Depth of field | Focus of left and right camera not properly matched | Different aperture settings; Focal length of left and right camera not properly matched |
Pseudo 3D | Left and right image are swapped | Camera cables are mixed up; Left and right image are swapped when displaying on a 3D device or wrong naming of left and right view |
Partial pseudo 3D | Parts (e.g. layers) of left and right image are swapped | Composition error in post |
Synchronization | Left and right images are not properly synchronized | Cameras are not synchronized; Editing errors in post |
Hyperconvergence | Objects are too close to the viewer's eyes --> single binocular vision no longer possible | Too much negative parallax; Improper camera settings (e.g. too large baseline) |
Hyperdivergence | Objects appear too far behind the screen --> divergence of the viewer's eyes | Too much positive parallax; Improper camera settings |
Depth mismatch | Improper depth composition (objects are in the wrong depth position) | Composition error in post |
Visual mismatch | Objects that are just visible in one image | Composition error in post |
Ghosting | Double images (the left-eye view leaks through to the right-eye view and vice versa) | Improper separation of left and right images by the 3D glasses; Refresh rate of the 3D device |
Header text | Header text | Header text | Header text |
---|---|---|---|
Example | Example | Example | Example |
Example | Example | Example | Example |
Example | Example | Example | Example |
Example | Example | Example | Example |
Example | Example | Example | Example |
Example | Example | Example | Example |
Example | Example | Example | Example |
Example | Example | Example | Example |
Example | Example | Example | Example |
Example | Example | Example | Example |
Example | Example | Example | Example |
Example | Example | Example | Example |
Example | Example | Example | Example |
Example | Example | Example | Example |
Example | Example | Example | Example |
Shooting conditions on the set
As already indicated in Table 3, binocular rivalry mostly occurs while shooting with a stereo camera rig. This section gives a brief overview of some practical issues when shooting 3D live action footage.
First of all, the two stereo cameras are physically not identical due to construction tolerances; there are always small variations between the two lenses, sensors, etc. Additionally, synchronization, alignment, zooming and converging of the two cameras are difficult to handle. Under some conditions on the set, shooting with a stereo rig is much too complex or even impossible. When capturing a landscape from a helicopter's point of view, for example, the baseline between the two cameras needs to be increased drastically to achieve the desired depth effect. However, either it is not possible to sufficiently increase the baseline, or, if it is, the previously mentioned issues such as alignment and synchronization become even more difficult to handle.
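The baseline problem can be made concrete with the common "1/30 rule" of stereo photography, a heuristic not mentioned in the text itself: the interaxial distance should be roughly 1/30 of the distance to the nearest object in frame. A minimal sketch under that assumption:

```python
def rule_of_thumb_baseline(nearest_distance_m):
    """Common '1/30 rule' heuristic for stereo rigs (an assumption of
    this example, not taken from the article): the interaxial baseline
    is about 1/30 of the distance to the nearest object in frame."""
    return nearest_distance_m / 30.0

# A close-up subject needs a baseline near natural eye separation:
print(rule_of_thumb_baseline(1.8))    # ~0.06 m
# A distant landscape from a helicopter needs a huge baseline,
# which is impractical on a single rigid rig:
print(rule_of_thumb_baseline(600.0))  # 20.0 m
```

This is why aerial or landscape shots are frequent candidates for 2D-to-3D conversion rather than native stereo capture.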
Mirror rigs, which are widely used especially for close-ups, by nature produce errors such as differing polarization, reflections, flares and contamination. These errors have to be fixed in post. The worse the shooting conditions, the more likely binocular rivalry becomes. When shooting in a desert, for example, it is almost impossible to keep the mirror of a stereo mirror rig clean, since sand particles in the air obstruct or may even scratch the mirror.
Specific 2D-to-3D conversion problems
Binocular rivalry is not the main issue when converting footage from 2D to 3D; rather, conversion becomes complex and extremely difficult for shots that contain semi-transparencies, reflections, water, rain, fire or explosions.
The masking of raindrops during a rotoscoping or keying process, for example, is extremely labour-intensive if done accurately. Furthermore, precise 3D modelling of the masked rain is almost impossible. Thus, it is more efficient to reproduce rain with VFX tools and add it to the converted stereo content.
Figure 4: The Native Pixel Parallax (NPP) compared at different resolutions against increasing screen sizes.
Screen Size and Viewing Distance
It is well known that both the screen size and the distance of a viewer to the screen have an impact on the perceived depth [13]. Moreover, the screen size may also have an influence on the occurrence of conflicting depth cues and binocular rivalry. One of the most important errors is generally the easiest to avoid: hyperdivergence. This effect occurs when the parallax exceeds the observer’s eye separation. In nature, distant objects such as the stars in a clear night’s sky cause the optical axes of the observer’s eyes to run parallel, thus making the stars appear at infinity. With stereoscopic 3D, however, it is possible to provide the observer with a parallax greater than the eye separation, causing the optical axes of the observer’s eyes to diverge. Since objects appearing beyond infinity cannot be processed by the brain, this causes nausea within seconds.
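The hyperdivergence limit can be estimated from screen geometry, in the spirit of the Native Pixel Parallax compared in Figure 4. The following sketch (the function name and the sample screen values are illustrative assumptions) computes the largest positive parallax, in pixels, that keeps the physical on-screen parallax within a 64 mm eye separation:

```python
def max_divergence_free_parallax(screen_width_m, h_resolution_px,
                                 eye_sep_m=0.064):
    """Largest positive parallax (in pixels) that stays within the
    viewer's eye separation; any larger parallax forces the viewer's
    optical axes to diverge (hyperdivergence)."""
    pixel_pitch = screen_width_m / h_resolution_px  # metres per pixel
    return int(eye_sep_m / pixel_pitch)

# A 10 m wide cinema screen at 2K leaves very little headroom:
print(max_divergence_free_parallax(10.0, 2048))  # 13 pixels
# The same content on a 1 m wide HD TV tolerates far more:
print(max_divergence_free_parallax(1.0, 1920))   # 122 pixels
```

This is why depth budgets graded for television cannot simply be reused on a cinema screen: the same pixel parallax corresponds to a much larger physical parallax.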
Conclusion
It is a fatal mistake to believe that shooting directly with a stereo camera rig is easier, less expensive, and causes fewer errors. In fact, artefacts resulting from shooting with stereo camera rigs can be greater than those created by a 2D-to-3D conversion process.
In a stereo mirror rig, for example, one camera has to focus through a beam splitter mirror while the second camera captures the reflection off the mirror. This results in binocular rivalry caused by luminance differences, polarization, reflections, flares, contaminations, misalignments, etc. Additionally, it is in the nature of any 3D live shooting that different depths of field, vertical misalignments or synchronization errors occur. In general, most of the errors have to be fixed during post-production, but due to the absence of compositing layers this can be very complex and expensive. Thus, 2D-to-3D conversion techniques are often applied as part of the post-production if the 3D shooting results are inadequate and the regular post-production process would not be financially viable.
Real-time conversion is currently a hot topic in the entertainment industry, especially in the 3D-display market. Due to a lack of 3D content, real-time conversion seems to be a fast-tracked and cost-effective way of entering this market. Nevertheless, most experts believe that real-time conversion is dangerous for the future of 3D due to the resulting “bad” 3D video quality. As depicted in Table 3, many conflicts of depth cues and binocular rivalry issues cannot be solved properly. Thus, eye fatigue, headache, and even nausea are inevitable, and it is impossible to watch an entire 90-minute feature film without such issues. As a result, consumers may get frustrated and choose to avoid 3D in the future. Furthermore, real-time conversion does not provide any additional creative input possibilities or applications essential for the movie industry.
All issues regarding depth cues and binocular rivalry can be managed in a high-quality conversion process if done properly. As a result, visual discomfort can be avoided, which is essential for producing “good 3D”. This high-quality conversion process is labour-intensive and much more expensive than real-time conversion, but compared with 3D live production and its subsequent post-production it is much more cost-efficient in most cases. This applies unless native 3D is produced with great accuracy by properly using stereo camera rigs, but even then significant post-production efforts are still necessary to avoid or reduce artefacts. 3D animation was not addressed in Table 3 because native 3D with no artefacts can be achieved quite easily there; production costs are more a matter of workflow optimization. However, in some CGI productions 2D-to-3D conversion is used in the compositing process because of cost efficiency: if the render layers are available, labour-intensive processes like rotoscoping and clean-plate creation can be reduced substantially.
CGI-productions are still the driver for 3D in the cinema and home entertainment markets with many releases and announcements. Due to the lack of 3D content, 2D-to-3D conversion will dominate the home entertainment market within the next couple of years. However, when native 3D eventually overcomes its aforementioned most critical issues, 3D live action will have bright prospects in the future.
imcube 3D Solutions
Quality & Productivity
For over 10 years, the imcube founders and researchers have been involved in the research of image processing topics related to 3D production, post-production and conversion. Today, 10 researchers and software developers work systematically to invent and develop tools and algorithms that are targeted at improving 3D quality, and making production, post-production and conversion processes easier and faster, thereby considerably reducing time and manual labour. As a result, the costs of all these processes have decreased, while their quality has simultaneously improved.
As a result of these research and development efforts, the features that have been implemented into the imcube conversion services include:
User-Friendliness
imcube Cinema and imcube Home are software frameworks designed exclusively for 2D-to-3D conversion. The operation of the GUI (Graphical User Interface) is intuitive and straightforward. Due to the simplified structure and user-friendliness of our software, operators and conversion artists do not need extensive training.
Rendering Speeds
Rendering takes place on the graphics card using CUDA technology, which speeds up the rendering for the conversion process.
Preview Possibilities
In the imcube software, any change or modification in the depth-grading process directly results in an adjustment of the stereoscopic output. This can be previewed via an additional output window on a 3D display of your choice.
Seamless Cooperation with Other Software
- Import of Mocha roto shapes, with the ability to manipulate the shapes internally.
- Import of 3D tracking data from PFTrack and Boujou.
Extra Features
imcube’s software has several unique 2D-to-3D features:
- Direct (on the fly) pre-visualization of the stereo output during the depth grading process.
- Colored depth maps or entire OpenGL 3D models for visual feedback, to determine whether objects will be placed in front of, on or behind the screen.
- Highlighting of disoccluded regions (gaps) as visual feedback for the conversion artist.
- User-friendly tools for the effective and realistic depth-assignment of objects (object-shaping).
- Easy and intuitive depth-budget adjustment for different screen sizes.
- High-precision forward pixel transformation with high-quality interpolation capability during the depth image-based rendering processes. This is the forward warping technique that guarantees better quality than any other conversion system available.
- Algorithm-driven inpainting solutions based on highly advanced computer vision technology.
- High quality automatic conversion using structure-from-motion techniques for image sequences with certain camera movements.
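The forward-warping and disocclusion-highlighting features listed above can be illustrated with a generic, textbook-style depth-image-based rendering (DIBR) step; this single-scanline sketch is our own simplification, not imcube's actual renderer:

```python
def forward_warp_row(row, disparities):
    """Minimal forward warp for one scanline of a DIBR pass
    (illustrative sketch only; assumes integer disparities).

    Each source pixel is shifted by its disparity; when two pixels
    land on the same target position, the larger disparity (nearer
    object) wins, and unfilled positions stay None, marking the
    disocclusion gaps a later inpainting pass must fill."""
    width = len(row)
    target = [None] * width
    best = [None] * width  # disparity of the pixel currently stored
    for x, (value, d) in enumerate(zip(row, disparities)):
        tx = x + d
        if 0 <= tx < width and (best[tx] is None or d > best[tx]):
            target[tx] = value
            best[tx] = d
    return target

row = ["a", "b", "c", "d"]
disp = [0, 1, 0, 0]  # pixel "b" belongs to a nearer object
print(forward_warp_row(row, disp))  # ['a', None, 'b', 'd']
```

The `None` at index 1 is a disoccluded region: background that was hidden behind "b" in the source view and has no source pixel in the new view, exactly the kind of gap the highlighting and inpainting tools above address.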
Quality Control Tweaking Feature
After the initial conversion has been carried out by the conversion artists, the stereographer or director may wish to tweak the depth of a shot, or adjust single elements such as depth cues within the shot. With nearly every other software package, the entire shot has to be returned to the operator’s workstation for the layers to be separated out and reverted to their original state, and the depth reworked. With imcube, this depth-assignment fine-tuning can be done in real time during the approval phase. This saves an enormous amount of time and ensures superior quality control, essentially adding copious creative flexibility to a highly complex process.
imcube labs GmbH
imcube labs GmbH, in Berlin, Germany, is the R&D center of the imcube 3D Solutions Group, with operations in Germany, China, Hong Kong and the UAE. It is a spin-off of the Communication Systems Group of the Technical University of Berlin and has close ties with the Fraunhofer Heinrich Hertz Institute for image processing. imcube’s founders have been involved in “3D quality” research and development for over 10 years, in the fields of computer vision, 2D-to-stereoscopic-3D conversion and native stereo 3D repair. Recent projects have extended into an alternative hybrid stereoscopic 3D post-production workflow for feature films and into a dedicated 2D-to-stereo-3D conversion workflow for broadcast and internet applications. This includes semi-automated tools for stereographers, such as the imcube 3D Instructor for easy conversion instructions, the imcube 3D QC Player for conversion quality control, and the imcube 3D Daily for quick scribble-based test conversions, pre-visualization and mobile conversions.
Native 3D Services
Native Repair
There are times in which filmmakers working on native stereoscopic projects will run into problems, be it damaged rushes, a missing eye on a shot, a camera fault, or badly aligned images. For these problems imcube offers a Native Repair service, converting 2D assets so as to allow them to fit seamlessly within a native project. The conversion of the chosen 2D images can be specified by a client stereographer, allowing complete creative control.
Native Stereo Optimization
The depth within all stereoscopic images must be precisely placed within a project, both to ensure maximum creative use of 3D and to avoid any visual discomfort to the viewer. imcube offers a full optimization/depth-grading service for both converted and native stereoscopic images.
3D Conversion Services
Conversion from 2D to 3D: Who Needs It?
Filmmakers Need It.
Shooting 3D on a film set is not simple… yet. Even in the latter half of 2012, finding matching lenses is very difficult, keeping them perfectly clean is tricky, lining up 3D shots steals quite a lot of time away from the daily schedule, and any mistakes made on set are expensive and difficult — or indeed impossible — to fix in post-production. Not all of Avatar was shot in 3D, and 50% of Transformers 3 was converted from 2D to stereo 3D.
Editors Need It.
Once you have shot 3D on set, it is almost impossible to manipulate the spatial characteristics of the shots in post-production. This can make an editor’s life extremely difficult, what with backgrounds suddenly “jumping” around in space within the edited scene. None of these problems exist in 2D-to-3D conversion. You can edit the film as you wish, on the equipment of your choice, and convert it later.
Film Producers Need It.
The time difference between a 2D shoot and a 3D shoot is getting smaller with the emergence of good stereo shooting rigs, but 3D shoots are still time-consuming. The number of camera setups per day is reduced when shooting native 3D compared to normal 2D, and certain shots are still impossible to shoot in 3D. It is an excellent fallback for a producer to know that there are high-quality conversion facilities that s/he can use to achieve perfect 3D results.
Film Library Owners Need It.
There is a lack of 3D content out there. Whether you are considering a cinema re-release or your film is headed for the home-entertainment circuit, converting it to 3D will complement it with depth and excitement. When Star Wars was digitally remastered, a global audience greeted the release of the film enthusiastically — even though there were ‘only’ 2 or 3 scenes and a sound remix added! The new release of Star Wars in 3D generated an even better response. Don’t forget, we are talking about a film that was made in 1977! Titanic 3D also made a box-office smash of $343m, incurring production (conversion) costs of only $18m. Since these two successes, the conversion of film classics has become its own category, in which Top Gun and 2012 feature as the latest additions.
Film Audiences Need It.
The number of TV channels that show 3D is growing globally on an unprecedented scale, and the number of cinema theaters that offer digitally projected 3D is increasing faster than expected. These figures are well documented and need to be updated almost daily just to keep track. This proves that audiences are attracted to the spectacle of modern 3D — they love to be immersed in the 3D experience. James Cameron describes the phase in which we presently find ourselves in 3D shooting, post-production and projection as being “just a few months down the road after the Wright Brothers first flew.” But even so, the ‘wow’ effect for audiences is staggeringly exciting. We, too, at imcube are avid members of the audience — we know that the gimmick of seeing spatially into a 3D environment quickly translates into viewer keenness to become immersed in the story, in the content. If we manage to create a holistic experience, in which the audience may be transported into the world being presented before them, and if we manage to materialize this space from a 2D origination, then all of our efforts will have been worthwhile.
Cinema/Theatrical Conversion
Whether we speak of native 3D or of conversion from 2D to stereoscopic 3D, all good 3D must not only be of high technical quality, but all possible visual discomfort must be eliminated by taking into account the aspects of the human binocular vision system. This is essential for the consumer’s 3D experience, as well as for the success of the projection-hardware and entertainment industries involved with this new exciting format.
It is a common misconception that shooting 3D with a stereo camera rig is easier, less expensive and contains fewer errors than converting a film into 3D after first shooting it in 2D. In fact, artefacts resulting from shooting with stereo camera rigs can be greater than those created during a 2D-to-3D conversion process.
In a stereo mirror rig for example, one camera lens has to focus through a beam-splitter mirror (i.e. capturing polarized light), while the second camera lens captures the reflection of the mirror. This type of photography can result in binocular rivalry, which includes luminance differences, polarization, reflections, flares, contaminations, misalignments… etc.
Additionally, it is the nature of any 3D live production that produces different depths of field, vertical misalignments or synchronization errors. In general, most of the errors have to be fixed in post-production. Due to the absence of compositing layers, post-production can be very complex and expensive.
When done properly, all of these issues regarding depth cues and binocular rivalry can be managed in a high-quality 2D-to-3D conversion process. However, high-quality conversion is still extremely labour intensive.
Compared to 3D live production and post-production, 2D-to-3D conversion is generally less expensive. Native 3D can be produced more accurately with stereo camera rigs. However, to avoid or reduce artefacts, significant efforts have to be made in post-production.
3D-TV and Other 3D Displays in the Home Entertainment Market
The success of the 3D home entertainment market depends largely on the amount and quality of the content provided via broadcast, cable, packaged media, download and streaming services, such as IP-TV, and video-on-demand (VoD).
The number of 3D-TV, cable and Internet channels is growing globally each month, and the availability of 3D programming is unable to keep up.
In the same way that library titles have supplemented the theatrical 3D market with the 3D conversion releases of classics like Titanic, Lion King and Star Wars 3D in early 2012, proven library titles are also required to fill the 3D home entertainment pipeline.
The requirements for 3D on small display sizes differ substantially from the requirements for large cinema or IMAX screens. This is reflected in the number of depth layers, the number of segmented objects per frame, the picture resolution and the colour depth of the image. These reductions lead to less manual work in the conversion process, while maintaining the same quality that a viewer would experience in a theatrical environment.
A conversion solution for the home market must bring the quality requirements in line with available budgets and short-term return-on-investment projections.
imcube has developed conversion software and a workflow which is not simply a stripped-down version of a theatrical conversion conduit, but has its own unique features to address the challenges of conversion for the home entertainment market.
imcube Home can be delivered not only in HD, but also in 4K, or even in a higher resolution, in order to address the growing market of ULTRA-HD (4K) for TV displays and broadcasters.
Mobile Devices
The success of the 3D mobile device industry, which includes 3D phones, 3D tablets, and 3D gaming devices, depends heavily on the amount and quality of the content provided via download and streaming services such as video-on-demand platforms and IP-TV.
The quality requirements for 3D on small displays differ substantially from the requirements for large TV displays and cinema screens. This is reflected in the number of depth layers, the number of segmented objects per frame, the picture resolution and the colour depth required. On the other hand, the large quantity of content needed for the 3D web and mobile markets requires a conversion solution that brings the quality requirements in line with available budgets. imcube has designed software and a workflow that can address the demand for high-volume content at cost levels from $500/min upwards.
References
- ^ Bernard Mendiburu, 3D Movie Making – Stereoscopic Digital Cinema from Script to Screen. Focal Press, May 2009, ISBN 0240811372
- ^ Vincent Teulade, 3D Here and Now... a goose that lays a golden egg? PricewaterhouseCoopers, 2010.
- ^ Vincent Teulade, 3D Here and Now... a goose that lays a golden egg? PricewaterhouseCoopers, 2010.
- ^ Lenny Lipton, The CrystalEyes Handbook. San Rafael, CA, USA : StereoGraphics Corp., 1991
- ^ M. Lambooij and W. Ijsselsteijn, Visual Discomfort and Visual Fatigue of Stereoscopic Displays: a Review. Journal of Imaging Science and Technology, 2009, 53(3), pp. 030201-1 to 030201-14.
- ^ Rémi Ronfard and Gabriel Taubin (Eds.), Image and Geometry Processing for 3-D Cinematography. Springer, 2010, ISBN 9783642123917
- ^ Jeff Hecht, The Barfogenic Zone. NewScientist, December, 2010, pp. 42 to 43.
- ^ Bernard Mendiburu, 3D Movie Making – Stereoscopic Digital Cinema from Script to Screen. Focal Press, May 2009, ISBN 0240811372
- ^ R.T. Held et al., Using blur to affect perceived distance and size, ACM Transactions on Graphics, 2010.
- ^ Y.Y. Schechner and N. Kiryati, Depth from defocus vs. stereo: How different really are they?, International Journal of Computer Vision, 2000, pp. 141 to 162.
- ^ D.M. Hoffman and M.S. Banks, Focus information is used to interpret binocular images, Journal of vision, 2010.
- ^ Technicolor, The 3D Issues Poster. Stereoscopy News, January 9, 2011.
- ^ L. Chauvier, K. Murray, S. Parnall, R. Taylor, and J. Walker, Does size matter? The impact of screen size on 3D, IBC, 2010, 2010.
--DaliaHashim1287 (talk) 12:53, 4 February 2013 (UTC)