Electronic Holography: The Newest

Mark Lucente, Stephen A. Benton, Pierre St.-Hilaire
Spatial Imaging Group, Media Laboratory
Massachusetts Institute of Technology
20 Ames Street, Room E15-416
Cambridge, MA 02139, USA

This paper was presented by Mark Lucente at the International Symposium on 3-D Imaging and Holography in Osaka, Japan, Nov. 1994.

NB: This is an HTML version of this paper. Figures, equations, and algebraic variables are not fully rendered, so some imagination is required of the reader. A PostScript version of this paper is also available.


Electro-holography, invented at the MIT Media Laboratory Spatial Imaging Group only five years ago, is a truly three-dimensional real-time digital imaging medium. Recent work in electro-holography or "holovideo" demonstrates that the two crucial technologies - computation and optical modulation - can be scaled up to produce larger, interactive, color holographic images. Synthetic images and images based on real-world scenes are quickly converted into holographic fringe patterns using newly developed "diffraction-specific" computational algorithms. To diffract light to form an image in real time, the display employs a scanned, time-multiplexed acousto-optic modulator, and utilizes parallelism at all stages. Holovideo has numerous potential applications in the fields of visualization, entertainment, and information.


Many visual media exist for the expression and communication of ideas. Electro-holography - also called holovideo - is the newest visual medium. A holovideo display produces three-dimensional (3-D) holographic images electronically in real time. Holovideo is the first visual medium to produce dynamic images that exhibit all of the visual sensations of depth and realism found in physical objects and scenes. Holovideo has numerous potential applications in visualization, entertainment, and information, including education, telepresence, medical imaging, interactive design, and scientific visualization.

The new technical field of electro-holography is essentially the marriage between holography and digital computational technologies. Holography [1] is used to create 3-D images by recording and reproducing optical wavefronts. To reconstruct an image, the recorded interference pattern modulates an illuminating beam of light. The modulated light diffracts and reconstructs a 3-D replica of the wavefront that was scattered from the object scene. Optical wavefront reconstruction makes the image appear to be physically present and tangible.

A holographic fringe pattern ("fringe") diffracts light because its feature size is generally on the order of the wavelength of visible light (about 0.5 micrometer). Because of this microscopic resolution, the fringe pattern contains an enormous amount of information - roughly ten million resolvable features per square millimeter. Early on, researchers [2] began to consider the computation, transmission, and use of holographic fringes to create images that were synthetic and perhaps dynamic. These researchers encountered the fundamental problems inherent to computational holography: both the computation and display of holographic images are difficult due to the large amount of information contained in a fringe pattern. Roughly ten million samples per square millimeter are required to compute a discretized (sampled) fringe that matches the resolution (diffractive power) of an optically made hologram. Computing a ten-billion-sample fringe pattern at a rate of once per second was impossible, and modulating a beam of light with such a fringe pattern was beyond any spatial light modulation technology available at that time. For decades, the enormous sample count of a holographic fringe prohibited and discouraged the pursuit of real-time electronic 3-D holographic imaging.

In 1989, researchers at the MIT Media Laboratory Spatial Imaging Group[3] created the first display system capable of producing real-time 3-D holographic images. The images were small, made possible by information-reduction strategies that lowered the number of fringe samples to only 2 MB - the minimum necessary to create an image the size of a golf ball. A modulation scheme based on time-multiplexing an acousto-optic modulator was used to modulate a beam of light with the 2-MB discretized fringe pattern. Computation of the 2-MB fringe pattern still required several minutes for simple images using traditional methods that imitated the optical creation of holographic fringes. Speed was limited by two factors: (1) the huge number of samples in the discretized fringe, and (2) the complexity of the physical simulation of light propagation used to calculate each sample value. As display size increased, the amount of information increased too, roughly in proportion to the image volume (i.e., the volume occupied by the 3-D image).

In this paper we describe the advances in holographic display systems and in holographic computation that made possible the interactive display of 3-D holographic images. We describe our most recent holovideo display, capable of producing images that occupy a volume approximately equivalent to a 100-mm-edge cube. We describe computational techniques that have increased speeds by over two orders of magnitude compared to traditional approaches.


A computer-generated hologram (CGH) represents a fringe pattern as an array of discrete samples[4]. The sampling rate (or pitch) must be high enough to accurately represent the holographic information. Given a fixed hologram size, the required sample count is simply the hologram width times the number of samples per unit length - the pitch, p. The relationship between the (minimum) sampling pitch p and the angle of diffraction theta is

	p = (4/lambda) sin(theta/2)			(1).

Since a horizontal-parallax-only (HPO) CGH contains only a single vertical perspective (i.e., the viewing zone is vertically limited to a single location), spatial frequencies are low (~10 lp/mm) in the vertical dimension. The vertical image resolution is the number of hololines. (A hololine is a single horizontal line of the fringe). Eliminating vertical parallax reduces CGH information content by at least a factor of 100 by reducing the vertical spatial frequency content from roughly 1000 to roughly 10 lp/mm. Essentially, the 2-D holographic pattern representing an HPO 3-D image can be thought of as a vertical array of 1-D holograms or hololines[5]. Each hololine diffracts light to a single horizontal plane to form image points describing a horizontal slice of the image. Therefore, one hololine should contain contributions only from points that lie on a single horizontal slice of the object.
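As a rough numerical check, the following sketch evaluates Equation 1 and compares full-parallax and HPO sample counts. The 30-degree diffraction angle, the 100-mm hologram size, and the 144 hololines are illustrative assumptions, not the parameters of any particular display:

```python
import math

def samples_per_mm(wavelength_mm, theta_deg):
    # Eq. (1): minimum sampling rate p = (4 / lambda) * sin(theta / 2)
    return (4.0 / wavelength_mm) * math.sin(math.radians(theta_deg) / 2.0)

wavelength = 0.5e-3                      # 0.5 micrometer, expressed in mm
p = samples_per_mm(wavelength, 30.0)     # assumed 30-degree diffraction angle

width_mm = height_mm = 100.0
full_parallax = (p * width_mm) * (p * height_mm)   # samples for a full-parallax CGH
hpo = (p * width_mm) * 144               # HPO: 144 hololines (assumed vertical resolution)

print(f"{p:.0f} samples/mm; HPO reduces the sample count by {full_parallax / hpo:.0f}x")
```

Even with these modest assumed values, eliminating vertical parallax cuts the sample count by three orders of magnitude, which is why HPO was so important to early holovideo.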

Consider the typical holographic set-up in the following illustration. Light scattered from the object, Eo, interferes with reference light, Er. (Optical wavefronts are represented by mutually coherent, identically polarized, spatially varying complex time-harmonic electric field scalars.) The total electric field incident on the hologram is the interference of the light from the entire object and the reference light, Eo + Er. The total interference fringe intensity is

	I_T =  | Eo + Er |^2 				(2).
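Equation 2 can be checked numerically: two mutually coherent plane waves separated by an angle theta interfere to produce fringes at spatial frequency (2/lambda) sin(theta/2), which is why Equation 1 demands 4 sin(theta/2)/lambda samples per unit length (two samples per fringe period). A sketch, with the 20-degree angle and strip width chosen only for illustration:

```python
import numpy as np

wavelength = 0.5e-3                       # mm
theta = np.radians(20.0)                  # assumed angle between Eo and Er
k = 2 * np.pi / wavelength

x = np.linspace(0.0, 0.05, 4096)          # a 50-micrometer strip of the hologram plane
Eo = np.exp(1j * k * np.sin(+theta / 2) * x)   # object beam: tilted plane wave
Er = np.exp(1j * k * np.sin(-theta / 2) * x)   # reference beam: opposite tilt

I_T = np.abs(Eo + Er) ** 2                # Eq. (2): always real and nonnegative

# Measure the dominant fringe frequency and compare with (2/lambda) sin(theta/2)
spectrum = np.abs(np.fft.rfft(I_T - I_T.mean()))
freqs = np.fft.rfftfreq(x.size, d=x[1] - x[0])
measured = freqs[spectrum.argmax()]
expected = (2.0 / wavelength) * np.sin(theta / 2)   # ~695 line pairs per mm
```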

A real-time holographic display uses the computed fringes to modulate a beam of light and produce an image. The heart of a holographic display is the spatial light modulator (SLM) used to modulate light with a computed fringe pattern. Ideally, a holographic SLM must display over 100 gigasamples. Current SLMs, however, can provide up to only 10 megasamples. Examples of SLMs include the flat-panel liquid-crystal display (LCD) and the magneto-optic SLM. These SLMs are capable of displaying a very small CGH pattern in real time. Early researchers employed a magneto-optic SLM[6], an LCD SLM[7], or a deformable mirror device (DMD)[8]. More recent work employed LCDs with higher pixel counts[9,10], but the images were still very small and essentially two-dimensional.

Display Systems

The display systems that we have developed during the past five years use a combination of an acousto-optic modulator (AOM) and a series of lenses and scanning mirrors to assemble a real 3-D holographic image at video frame rates. This time-multiplexed SLM approach is sometimes called the "Scophony geometry" after the early contender for television displays[11]. A partial schematic is shown in the following figure. The viewer sees a real 3-D image located just in front of the output lens of the system. The viewer experiences the depth cue of horizontal motion parallax in this HPO image. Vertical parallax is sacrificed to simplify the display. (This restriction does not limit the display's usefulness in most applications.)

This figure shows a partial schematic diagram (top view) of an MIT holovideo display. The scanning mirror system angularly multiplexes the image of the modulated light. A vertical scanning mirror (not shown) positions each hololine vertically. Electronic control circuits synchronize the scanners with the incoming holographic signal.

Display Progress

In 1989, we demonstrated the first real-time 3-D holographic display[3]. It used a single channel, providing 2 MB of fringe pattern. By 1990, this display had increased to 6 MB by using a 3-channel AOM[12]. In 1991, this 3-channel system was used to create the first full-color images[13]. Three separate 2-MB fringes were computed and fed to the 3-channel AOM. Each channel was illuminated with a primary color of laser light (red, green, and blue). In these early display systems, the image volume was approximately 36 mm wide, 24 mm tall, and 50 mm deep. The size of the viewing zone was approximately 15 degrees. These displays used a spinning, 18-sided polygon scanning mirror to provide horizontal scanning.

In 1993, we demonstrated a 36-MB 18-channel display system[14]. The increased size of the image volume and of the viewing zone created a very convincing 3-D image. In addition to an increase in channel parallelism, this system employed a novel multi-mirror horizontal scanning system composed of six galvanometric scanners. The six 19.5-mm-wide mirrors were scanned in virtually perfect synchrony, together functioning essentially as a single 120-mm-wide Fresnel mirror. A system of 18 2-MB framebuffers provided the 18 signals to a pair of cross-fired AOMs, necessary to provide a bidirectional Scophony geometry that made use of both the forward and the backward mirror scans.

The following table summarizes the important parameters of holovideo display systems developed by the MIT Spatial Imaging Group:

Display creation date		1989		1991		1993
Size of fringe			2 MB		6 MB		36 MB
Number of channels		1		3		18
Viewing zone			15 degrees	15 degrees	30 degrees
Color				red		full color	red
Samples/hololine		32 K		32 K		256 K
Hololine scan rate		2290 kHz	2750 kHz	150 kHz
Total number of hololines	64		64 x 3 		144
Image volume			36x24x50 mm	36x24x50 mm	150x75x150 mm
(width x height x depth)

Development of the 36-MB system demonstrated that the scanned-AOM approach can be scaled up by increasing the degree of parallelism (i.e., the number of channels) in the system. As the sample count of the hologram increases, however, rapid fringe computation becomes more important.

Computation Techniques

Computational holography generally begins with a 3-D numerical description of the object or scene to be reproduced. The most straightforward approach to the computation of holographic fringes resembles 3-D computer graphics ray-tracing. Light from a given point or element of an object contributes a complex wavefront at the hologram plane. Each of these complex wavefronts is summed to calculate the total object wavefront, which is subsequently added to a reference wavefront.
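A minimal sketch of this brute-force summation follows, for one hololine and a hypothetical three-point object. All point positions, amplitudes, and the 10-degree reference angle are made-up illustrative values:

```python
import numpy as np

wavelength = 0.5e-3                       # mm
k = 2 * np.pi / wavelength
x = np.linspace(-2.0, 2.0, 16384)         # samples along one hololine, in mm

# Hypothetical self-luminous object points: (x position, depth z, amplitude)
points = [(-1.0, 20.0, 1.0), (0.5, 25.0, 0.8), (1.5, 30.0, 0.6)]

Eo = np.zeros_like(x, dtype=complex)
for px, pz, amp in points:
    r = np.sqrt((x - px) ** 2 + pz ** 2)  # distance from point to each hologram sample
    Eo += amp * np.exp(1j * k * r) / r    # spherical-wavefront contribution

Er = np.exp(1j * k * np.sin(np.radians(10.0)) * x)  # off-axis plane-wave reference
I_T = np.abs(Eo + Er) ** 2                # Eq. (2), evaluated at every sample
```

Even this toy case evaluates a square root and a complex exponential per object point per sample, which is the computational bottleneck described below.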

Traditionally, computational holography[4] was slow due to two fundamental properties of fringe patterns: (1) the enormous number of samples required to represent microscopic features, and (2) the computational complexity associated with the physical simulation of light propagation and interference. A typical full-parallax hologram 100 mm x 100 mm in size has a sample count (also called space-bandwidth product or SBWP or simply "bandwidth") of over 100 gigasamples of information. A larger image requires a proportionally larger number of samples. Several techniques have been used to reduce information content to a manageable size. The elimination of vertical parallax[15] provided great savings in display complexity and computational requirements[12] without greatly compromising the overall display performance.

Interference-Based Approaches

Traditional holographic computation imitated the interference that occurs when a hologram is produced optically using coherent light. This "interference-based computation" attempted to produce fringe patterns that closely resembled the fringes recorded in optical holography[4]. This seemed logical: since optically produced physical fringes diffracted light to form images, then their computed counterparts should do the same. Indeed, the analytical treatment of optical holography guaranteed that given certain conditions in the recording and reconstruction setups, an image was faithfully reproduced. Also reproduced, however, are unwanted noise components in the diffracted light.

Recalling Equation 2, the expression for total intensity in an interference pattern expands to

	I_T  =  |Eo|^2  +  |Er|^2  +  2 Re{ Eo Er* }	(3).

The total intensity is a real physical light distribution comprising three components: the object self-interference, the reference bias, and the useful fringes.

Bipolar Intensity

Bipolar intensity[15,16] was developed to eliminate the noise inherent to interference-based fringe computation. Simply stated, the bipolar intensity method computes only the term of the total fringe pattern (Equation 3) that actually diffracts useful image light. This leaves only the last term of Equation 3, henceforth called the bipolar fringe intensity. The bipolar intensity term results from the interference between the object wavefront and the reference beam. This fringe pattern contains holographic information sufficient to reconstruct an image. In the bipolar intensity method of computation, the object self-interference and the reference-bias terms are simply excluded. The dc bias of the reference term ensures that a physical fringe pattern contains only positive definite values, as is necessary for a real physical intensity. Computed intensities, however, can be bipolar (i.e., can range both positive and negative), making the dc reference bias unnecessary. If a fringe pattern must be positive definite, a dc offset can be added during the normalization process. Normalization is the numerical process that limits the range of the total fringe pattern, introducing an offset and a scaling factor to tailor the fringe pattern to the requirements of a display system.

As discussed in the references by Lucente[15,16], the expression for the bipolar intensity (in Equation 3) was simplified to involve only real-valued arithmetic, resulting in a computation speed increase of a factor of 2.0. There are many advantages to the use of bipolar intensity computation. There is no object self-interference noise. There is no reference bias - in fact there is no need to specify the reference beam intensity - resulting in a more efficient use of the available dynamic range of the fringe pattern. The most interesting advantage of the bipolar intensity method is that linear summation of elemental fringes is possible, with each elemental fringe representing a single image element. Real-valued summation enables the efficient use of precomputed elemental fringes, an approach which, when implemented on a supercomputer, achieved CGH computation at interactive rates[16].
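A sketch of bipolar-intensity computation for one hololine follows. The point positions, amplitudes, and 10-degree reference angle are illustrative assumptions; each image point contributes a single real cosine, with normalization deferred to the end:

```python
import numpy as np

wavelength = 0.5e-3                       # mm
k = 2 * np.pi / wavelength
x = np.linspace(-2.0, 2.0, 16384)         # one hololine, in mm
sin_ref = np.sin(np.radians(10.0))        # off-axis plane-wave reference angle

points = [(-0.5, 20.0, 1.0), (0.8, 25.0, 0.7)]   # hypothetical (x, z, amplitude)

fringe = np.zeros_like(x)                 # bipolar: free to swing negative
for px, pz, amp in points:
    r = np.sqrt((x - px) ** 2 + pz ** 2)
    # Last term of Eq. (3) only: real arithmetic, no bias, no self-interference
    fringe += 2.0 * amp * np.cos(k * (r - sin_ref * x)) / r

# Normalization: offset and scale the bipolar values into a display range (0..255)
out = 255.0 * (fringe - fringe.min()) / (fringe.max() - fringe.min())
```

Note that the inner loop is entirely real-valued, and the dc offset appears only once, at normalization, rather than as a reference-bias term at every sample.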

Precomputed Elemental Fringes

To increase computing speed, a large array of elemental fringes was precomputed and stored for later access during actual fringe computation. Each precomputed elemental fringe represented the contribution of a single image element located at a discrete 3-D location of the image volume. Linearity allows for the scaling of a given elemental fringe (to represent the desired brightness of an image point) and then summation at each applicable sample in the fringe. The complexity of computation for each image point was reduced to only two operations: one multiplication and one addition. Speed increased by a factor of about 25 when implemented on a Connection Machine Model 2 (CM2) supercomputer with 16K data-parallel processors, and by a factor of over 50 when implemented on a serial computer[16].
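The scale-and-add inner loop can be sketched as follows. The table contents here are random stand-ins; in the actual method each row would hold the precomputed fringe of a unit-brightness point at one discrete grid location:

```python
import numpy as np

N = 4096                                  # samples per hololine
rng = np.random.default_rng(0)

# Precomputed elemental fringes, one per allowed point location (stand-in data)
table = rng.standard_normal((256, N))     # 256 discrete point locations

def accumulate(hololine, location, brightness):
    """One image point costs one multiply and one add per fringe sample."""
    hololine += brightness * table[location]

hololine = np.zeros(N)
for loc, b in [(10, 1.0), (42, 0.5), (200, 0.25)]:   # hypothetical scene points
    accumulate(hololine, loc, b)
```

The per-point work is independent of the physics: all wavefront simulation is paid once, when the table is built.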

Bipolar intensity implemented with precomputed elemental fringes allowed for the first-ever interactive display of holographic images in 1990. Even though the interactively generated images were small, this early work demonstrated the power of linear summation and the possibility of generating a fringe as a linear combination of precomputed fringes. These two concepts provided guidance for the recent development of an entirely new approach to fringe computation: "diffraction-specific computation."

Diffraction-Specific Fringe Computation

Diffraction-specific fringe computation[17] is based on the discretization of space and spatial frequency in the fringe pattern. The architecture of diffraction-specific computation was directed by two primary goals: (1) to produce fringes at a faster rate and (2) to enable holographic encoding schemes to reduce the bandwidth required to display holographic images. Traditional computing methods achieved neither of these goals. Traditional computation imitated the interference occurring in the optical generation of fringes. In contrast, diffraction-specific computation is based on only the diffraction that occurs during the reconstruction of a holographic image. The diffraction-specific approach is a better match to holovideo since the purpose of a real-time holographic display is to generate 3-D images through the modulation and subsequent diffraction of light.

The application of diffraction-specific computation provided a means for encoding fringes to make the most efficient use of computational power and electronic and optical bandwidth[17]. Holographic fringes contain far less usable information than is intimated by a simple measure of bandwidth. The limited acuity of the human visual system (HVS) cannot utilize the extremely high image resolution provided by optical holograms.

Lucente[17] recently reported the development and implementation of diffraction-specific fringe computation and its adjunct holographic encoding schemes (called "fringelet encoding" and "hogel-vector encoding"). These holographic encoding schemes achieved compression ratios of 16 and higher. Holographic encoding adds a predictable amount of image blur. The analysis of diffraction-specific computation revealed an important three-way trade-off between compression ratio, image fidelity, and image depth. The decreased image resolution (increased point spread) that was introduced into holographic images due to encoding was imperceptible to the human visual system under certain conditions. A compression ratio of 16 was achieved (using either encoding method) with an acceptably small loss in image resolution. Total computation time is reduced by a factor of over 100 to less than 7.0 seconds per 36-MB holographic fringe using the fringelet encoding method implemented on a standard serial workstation. Diffraction-specific computation more efficiently matches the information content of holographic fringes to the capabilities of the human visual system.
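While the full encoding schemes are beyond the scope of this summary, the decoding step of hogel-vector encoding can be caricatured as a matrix-vector product: each hologram segment (hogel) is reconstructed as a linear combination of precomputed basis fringes, weighted by the entries of its short hogel vector. The sizes and random contents below are illustrative assumptions only:

```python
import numpy as np

hogel_width = 1024        # samples per hogel
n_components = 64         # hogel-vector length; compression ratio = 1024 / 64 = 16

rng = np.random.default_rng(1)
# Precomputed basis fringes, one per usable diffraction direction (stand-in data)
basis = rng.standard_normal((n_components, hogel_width))

def decode(hogel_vector):
    """Expand a short hogel vector into a full fringe segment (matrix-vector product)."""
    return hogel_vector @ basis

hogel_vector = rng.standard_normal(n_components)   # would be computed from the 3-D scene
fringe = decode(hogel_vector)
ratio = hogel_width / n_components                 # bandwidth displayed vs. transmitted
```

Only the short hogel vectors need to be computed and transmitted; the expansion to full fringe bandwidth is a fixed, simple operation well suited to specialized hardware.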

This figure shows a digital photograph of a 3-D image produced on the MIT holovideo display. The image is an EPX concept car design, courtesy of Honda Research and Development. The 36-MB fringe pattern, computed using diffraction-specific fringe computation, was displayed using the most recent holovideo display system.


The current 36-MB MIT holovideo display system has been used to generate images from many databases. For example, the preceding illustration shows a Honda EPX concept car, modeled using a computer-aided design system. Other data sources include MRI and medical images, images modeled using standard computer-graphics tools, and scientific data such as the double helix structure of a molecule of DNA or the numerically generated electron densities in a semiconductor lattice. Also, images of real scenes were computed by digitizing multiple perspective views of the scene using a video camera and digital frame-grabber. Current research includes improvements to image quality and the implementation of diffraction-specific fringe computation in specialized hardware to achieve the interactive holographic display of large images. Longer term efforts will continue to increase the image size and the computation speed.


Research on the MIT holographic video system has been supported in part by the Defense Advanced Research Projects Agency (DARPA) through the Rome Air Development Center (RADC) of the Air Force Systems Command (under contract No. F30602-89-C-0022) and through the Naval Ordnance Station, Indian Head, Maryland (under contract No. N00174-91-C0117); by the Television of Tomorrow research consortium of the Media Laboratory, MIT; by US West Advanced Technologies Inc.; by Honda Research and Development Co., Ltd.; by NEC Corporation; by International Business Machines Corp.; by General Motors; and by Thinking Machines, Inc.

The "Connection Machine" supercomputer was manufactured by Thinking Machines, Inc., Cambridge, MA, USA.

The authors gratefully acknowledge the support of researchers in the MIT Spatial Imaging Group and in the MIT Media Laboratory: Carlton J. Sparrell, Wendy J. Plesniak, Michael Halle, Shawn Becker, John Watlington, V. Michael Bove, Jr., Brett Granger, Michael Klug, Tinsley Galyean, Ravikanth Pappu, John D. Sutter, Derrick Arias, Jeff Breidenbach. Thanks also to Professor Tomas A. Arias and Dr. Shuguang Zhang.


[1] P. Hariharan. Optical Holography: Principles, Techniques and Applications. Cambridge University Press, Cambridge, 1984.

[2] E. Leith, J. Upatnieks, K. Hildebrand and K. Haines, "Requirements for a wavefront reconstruction television facsimile system," J. SMPTE, vol. 74, pp. 893-896, 1965.

[3] P. St.-Hilaire, S. A. Benton, M. Lucente, M. L. Jepsen, J. Kollin, H. Yoshikawa and J. Underkoffler, "Electronic display system for computational holography," in SPIE Proceedings #1212 Practical Holography IV, (SPIE, Bellingham, WA, 1990), pp. 174-182.

[4] W. J. Dallas, "Computer-Generated Holograms," chapter 6 in B. R. Frieden, editor, The Computer in Optical Research, Topics in Applied Physics, vol. 41, pp. 291-366, Springer-Verlag, New York, 1980.

[5] D. Leseberg and O. Bryngdahl, "Computer-generated rainbow holograms," Applied Optics, vol. 23, #14, pp. 2441-2447, July 1984.

[6] D. Psaltis, E.G. Paek, and S.S. Venkatesh, "Optical image correlation with a binary spatial light modulator," Opt. Eng. vol. 23, #6, pp. 698-704, 1984.

[7] F. Mok, J. Diep, H.-K. Liu and D. Psaltis, "Real-time computer-generated hologram by means of liquid-crystal television spatial light modulator," Opt. Lett., vol. 11 #11, pp. 748-750, Nov. 1986.

[8] J. M. Florence and R. O. Gale, "Coherent optical correlator using a deformable mirror device spatial light modulator in the Fourier plane," Applied Optics, vol. 27, #11, pp. 2091-2093, 1 June 1988.

[9] S. Fukushima, T. Kurokawa, and M. Ohno, "Real-time hologram construction and reconstruction using a high-resolution spatial light modulator," Appl. Phys. Lett., vol. 58 #8, pp. 787-789, Aug. 1991.

[10] N. Hashimoto, S. Morokawa, and K. Kitamura, "Real time holography using the high-resolution LCTV-SLM," in SPIE Proceedings #1461 Practical Holography V, (SPIE, Bellingham, WA, 1991), S.A. Benton, editor, pp. 291-302.

[11] L. M. Myers, "The Scophony system: an analysis of its possibilities," TV and Shortwave World, pp. 201-294, April 1936.

[12] P. St.-Hilaire, M. Lucente, and S. A. Benton, "Synthetic aperture holography: a novel approach to three dimensional displays," Journal of the Optical Society of America A, vol. 9, #11, pp 1969 - 1977, Nov. 1992.

[13] P. St.-Hilaire, S. A. Benton, M. Lucente, and P. M. Hubel, "Color images with the MIT holographic video display," in SPIE Proceedings #1667 Practical Holography VI, (SPIE, Bellingham, WA, 1992), paper #1667-33.

[14] P. St.-Hilaire, S. A. Benton, M. Lucente, J. D. Sutter, and W. J. Plesniak, "Advances in holographic video", in SPIE Proceedings #1914 Practical Holography VII, (SPIE, Bellingham, WA, 1993), paper #1914-27.

[15] Mark Lucente, "Optimization of hologram computation for real-time display," in SPIE Proc. #1667 Practical Holography VI, 1667-04 (SPIE, Bellingham, WA, 1992), S.A. Benton, editor, pp. 32-43.

[16] Mark Lucente, "Interactive computation of holograms using a look-up table," Journal of Electronic Imaging, vol. 2, #1, pp. 28-34, Jan 1993.

[17] Mark Lucente, "Diffraction-Specific Fringe Computation for Electro-Holography," Ph. D. Thesis, Dept. of Electrical Engineering and Computer Science, Massachusetts Institute of Technology, Sept. 1994.

Biographical Note

Mark Lucente worked for five years in the MIT Media Lab Spatial Imaging Group, where he developed the interactive generation of 3-D holographic images. His college degrees (Ph.D., S.M., S.B.) were bestowed upon him by the Department of Electrical Engineering and Computer Science at the Massachusetts Institute of Technology. Currently, he is employed by the MIT Media Laboratory as a Postdoctoral Research Fellow. He is a member of SPIE, Tau Beta Pi, Eta Kappa Nu, and Sigma Xi.

In his work, Dr. Lucente combines a knowledge of optics, spatial light modulation, computation, visual perception, and communication systems to develop electro-holography into a practical medium. His earlier work involved the application of lasers to high-bandwidth optical communication systems and 3-D imaging systems, and to the study of device physics.