Researchers at Bell Labs have long been at the forefront of Coding and Information Theory. 

Claude Shannon, also credited with founding digital circuit design theory and with pioneering work in cryptography, is widely recognized as the founder of Information Theory. In the late 1940s, Richard Hamming devised error-correcting and error-detecting coding concepts that permitted data to be stored, retrieved and transmitted error-free. Among the many others at Bell Labs who have contributed to the field are Manfred Schroeder, who developed the psychoacoustic masking codec that was the basis for the MP3 codec, as well as David Slepian, Neil Sloane, Jessie MacWilliams and, more recently, Amin Shokrollahi and Alexei Ashikhmin, known for their research on algebraic coding theory.

Reconstructed images from a lensless prototype camera using compressive sampling.

Strong coding expertise continues to be a hallmark of the Nokia Bell Labs research program. In source coding, teams are incorporating and extending compressive sensing techniques across a wide range of projects. For image and video applications, for example, Hong Jiang led a team including Razi Haimi-Cohen, Gang Huang and Larry Liu that devised a network-based system for video analysis using compressive sensing. The team's research led to technologies such as a lensless camera that directly captures a compressed representation, and motion and anomaly detection that operate at low bandwidth. Patrice Rondao Alface and others are studying representations suitable for the transmission of spherical and panoramic video.

Emina Soljanin’s research applies coding above the physical layer, including network coding and application-layer coding. These are used, for example, to efficiently broadcast related content to users with diverse requirements over varying channel conditions, or to provide fast access to information in a physically distributed storage system.

Advanced digital signal processing and coding devised by Andreas Leven, Laurent Schmalen, Chongjin Xie, Sebastian Randel, Jeremie Renaudier and others have led to record-breaking achievements in high-speed and high-capacity optical transmission demonstrations.

Compressive Sampling Techniques 

All too frequently, the terms “data” and “information” are used interchangeably, when the intent is generally “big information” rather than “big data.” It is information, not data, that is consumed. Yet networks today are designed to carry data, where the loss of a bit can cause a significant loss of information. In contrast to data networking, every transmitted bit in an information network is an independent representation of the totality of the information; the loss of a bit results in only a small loss of information.

How then do we create bits that represent information rather than just data? Several researchers in Nokia Bell Labs are applying compressive sampling techniques to multimedia information transmission issues.

Compressive sensing is a mathematical tool for representing signals; in the case of multimedia, it compresses video signals into “measurements.” Linear projections onto pseudo-random bases allow a video to be represented with far fewer measurements than it has pixels. Furthermore, compressive sensing of video offers scalability with transmission bandwidth, adaptivity to the application and robustness against noise, which makes it an ideal technology for information networks.
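
As a rough illustration of the idea, the sketch below forms compressive measurements of a vectorized frame with a pseudo-random ±1 sensing matrix. The frame size and measurement count are arbitrary assumptions chosen for the example, not parameters of the systems described here.

import numpy as np

rng = np.random.default_rng(0)

n_pixels = 64 * 64            # pixels in one (assumed) video frame
n_measurements = 512          # far fewer measurements than pixels

# Pseudo-random sensing matrix: each row defines one linear projection.
A = rng.choice([-1.0, 1.0], size=(n_measurements, n_pixels))

frame = rng.random(n_pixels)  # stand-in for a vectorized video frame

# Each measurement mixes contributions from every pixel, so any single
# measurement carries a little information about the whole frame.
measurements = A @ frame
print(measurements.shape)     # (512,) -- the compressed representation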

In one phase of this research, an imaging architecture for making compressive measurements was created without using a lens. The architecture consists of an aperture assembly and a sensor. The aperture assembly is a two-dimensional array of aperture elements, and the transmittance of each element is independently controllable. The sensor is a single detection element. A compressive sensing matrix is implemented by adjusting the transmittance of the individual aperture elements according to the values of the sensing matrix. The device can be used for capturing images in the visible and other spectra, such as infrared or millimeter waves, and in surveillance applications for detecting anomalies or extracting features such as the speed of moving objects. Multiple sensors may be used with a single aperture assembly to capture multi-view images simultaneously. The prototype was built using a transparent monochrome liquid crystal display (LCD) screen and two photovoltaic sensors enclosed in a light-tight box, as illustrated in Figure 1.

Figure 1 - Compressive image acquisition.
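
The capture process itself can be mimicked in a few lines. In the hypothetical simulation below, each row of the sensing matrix is realized as a random on/off transmittance pattern on the aperture array, and the single sensor reading is the sum of the light passing through the open elements. The array size and number of readings are assumptions made for illustration only.

import numpy as np

rng = np.random.default_rng(1)

aperture_shape = (32, 32)              # assumed aperture-array resolution
scene = rng.random(aperture_shape)     # light intensity reaching each element

def capture(transmittance_pattern, scene):
    """One sensor reading: light through the open elements, summed at the detector."""
    return float(np.sum(transmittance_pattern * scene))

n_measurements = 200                   # far fewer than the 32 * 32 = 1024 elements
patterns = rng.integers(0, 2, size=(n_measurements, *aperture_shape)).astype(float)

readings = np.array([capture(p, scene) for p in patterns])
print(readings.shape)                  # (200,) -- no physical image was formed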

Pixel-by-pixel capture typically requires a large number of captures, each resulting from a single open element and containing data associated with only one pixel. In a compressive sampling approach, each capture results from many open elements and contains information about many pixels. No physical image is formed. Instead, based on the collected information (rather than the data), a virtual image is defined mathematically.
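
One common way to define such a virtual image mathematically is sparse recovery. The toy sketch below reconstructs a synthetic sparse scene from its compressive measurements using iterative soft-thresholding (ISTA); for brevity it assumes sparsity in the pixel domain, whereas practical imagers typically assume sparsity in a transform basis such as wavelets, and all parameter values are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(2)

n_pixels, n_measurements = 1024, 300
image = np.zeros(n_pixels)
image[rng.choice(n_pixels, size=30, replace=False)] = rng.random(30)  # sparse scene

A = rng.normal(size=(n_measurements, n_pixels)) / np.sqrt(n_measurements)
y = A @ image                                # the compressive measurements

x = np.zeros(n_pixels)
step = 1.0 / np.linalg.norm(A, 2) ** 2       # gradient step size (1 / Lipschitz constant)
lam = 0.01                                   # sparsity weight (assumed)
for _ in range(500):
    x = x + step * A.T @ (y - A @ x)                          # gradient step on the data fit
    x = np.sign(x) * np.maximum(np.abs(x) - lam * step, 0.0)  # soft threshold

print(np.linalg.norm(x - image) / np.linalg.norm(image))      # relative reconstruction error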

Although the lensless compressive imaging architecture was designed to address needs in camera networks, the resulting reductions in form factor and cost may have far-reaching consequences for new classes of applications. For example, it may find applications in medical devices for embedded transmission of monitoring signals, and it may enable new levels of analysis by integrating image reconstruction and anomaly detection across millions or billions of aperture devices.