DISP - Digital Image and Signal Processing Lab

Responsible: Giancarlo Calvagno

The research activities of the Digital Image and Signal Processing (DISP) Laboratory focus on several aspects of digital signal processing, related to, among others, acquisition, coding, filtering and transmission. In particular, following the most recent trends in these fields, our research mainly concerns digital image and video signals. We conduct our research through both theoretical studies (within the framework of information theory) and computer-aided simulations (C/C++ or MATLAB based).

Ongoing research activities include:

  • Source Coding
  • Lossy/Lossless Image Compression
  • Video Compression
  • Scalable Video Coding
  • Distributed Video Coding
  • Multiple Description Video Coding
  • Cross-Layer Architecture for Video Transmission
  • Image Demosaicing
  • Sparse Signal Representation
  • Image and Video Post-Processing
  • 3D Image Processing


The DISP Laboratory

The laboratory is located within the Department of Information Engineering (DEI), University of Padova, on the second floor of building DEI/A (room #231, corridor on the right). It hosts about eight permanent PC workstations (running both Linux and Windows XP), equipped with recent software tools for signal processing, as well as a 3D display, a Canon XM-1 digital video camera, a Kinect sensor, and other image-capture devices.

Phone contact: +39 049 827 7641


Image and Video Compression

The term "data compression" refers in general to techniques that aim to reduce the amount of data required to represent, store or transmit a digital signal. Compression is said to be "lossless" if perfect reconstruction from the compressed data is possible (as required, for example, for medical images and signals), and "lossy" otherwise. Lossy compression techniques aim to minimize the perceived reconstruction error for a given compression ratio, and are heavily employed for image compression (e.g. by the JPEG and JPEG-2000 standards) and video compression (e.g. by the MPEG-x and H.264/AVC standards).

Beyond high compression ratios, current scenarios of intense digital content dissemination (over the Internet or over wireless/mobile networks) also require scalability, robustness and low complexity. Scalability makes it possible to reconstruct lower-resolution versions of a video from subsets of the compressed data, robustness enables the decoding of corrupted compressed streams, and low complexity allows compression tools to run on power-limited (portable, battery-operated) devices. These requirements can be met by designing cross-layer architectures, which jointly optimize the protocols and devices of the different layers involved in video transmission (source coder, channel coder, queues at the MAC/network layer, transmission power). A possible alternative to cross-layer optimization is the adoption of distributed video coding schemes, which allow the design of low-complexity encoders and increase the robustness of the video stream at the same visual quality and compression ratio as the previous techniques.
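
As a toy illustration of the lossy trade-off described above, the following Python sketch (not one of the standards cited here, just a minimal example under assumed parameters) quantizes the 8x8 DCT coefficients of an image with a uniform step and measures the resulting PSNR: larger steps yield coarser, more compressible coefficients and lower quality.

    # Minimal lossy-compression sketch (illustrative only, not JPEG/H.264):
    # quantize 8x8 DCT blocks with a uniform step and measure the PSNR.
    import numpy as np
    from scipy.fftpack import dct, idct

    def dct2(block):
        return dct(dct(block, axis=0, norm='ortho'), axis=1, norm='ortho')

    def idct2(block):
        return idct(idct(block, axis=0, norm='ortho'), axis=1, norm='ortho')

    def lossy_encode_decode(img, step):
        """Quantize the 8x8 DCT blocks of 'img' with a uniform step and rebuild."""
        h, w = img.shape
        rec = np.zeros((h, w))
        for i in range(0, h - h % 8, 8):
            for j in range(0, w - w % 8, 8):
                coeffs = dct2(img[i:i+8, j:j+8].astype(float))
                q = np.round(coeffs / step)           # quantization: the lossy step
                rec[i:i+8, j:j+8] = idct2(q * step)   # dequantize and invert the DCT
        return rec

    def psnr(ref, rec, peak=255.0):
        mse = np.mean((ref.astype(float) - rec) ** 2)
        return float('inf') if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

    # Usage on a synthetic 8-bit image (assumed test data): a larger step means
    # fewer distinct coefficient values (more compression) and a lower PSNR.
    img = (np.random.rand(64, 64) * 255).astype(np.uint8)
    for step in (4.0, 16.0, 64.0):
        print(step, round(psnr(img, lossy_encode_decode(img, step)), 2))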

In the lab, we are currently working on improved algorithms for H.264/AVC-based video coding, cross-layer techniques for video transmission over IEEE 802.11x and P2P networks (involving the NS2 and OMNeT++ simulators), multiple-description-based robust compression techniques, wavelet-based and H.264/SVC-based algorithms for scalable video coding, and innovative distributed source coding-based solutions for robust and low-complexity video encoding, which can also be employed in scalable and multiview video coding.


Image Demosaicing and Denoising

In digital cameras, the colors of the scene are usually captured by a single CCD or CMOS sensor array that detects only one color (typically red, green or blue) at each pixel. The arrangement of color filters placed over the sensor is called a "Color Filter Array" (CFA). The most popular CFA arrangement, the "Bayer pattern" (named after its inventor), samples the green band on a quincunx grid, while red and blue are sampled on rectangular grids; in this way, the density of the green samples is twice that of the red and blue samples. Because of this subsampling, an interpolation step is required to reconstruct a full-color representation of the image. This process is called "demosaicing" and aims to minimize the introduction of visible artifacts (such as aliasing and zippering). The earliest techniques were based on well-known image interpolation methods (nearest-neighbor replication, bilinear interpolation, cubic spline interpolation), but they were not able to provide good performance. Better results are achieved by techniques that use edge detectors to locate image discontinuities and then perform a directional interpolation, avoiding interpolation across edges. Other techniques exploit the correlation between the high frequencies of the color components to estimate the missing data. Alternative approaches estimate the luminance component from the sampled color values and then use it to reconstruct the full-color image.
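
As a concrete illustration of the CFA sampling and of the simple bilinear interpolation mentioned above, the following Python sketch demosaics an RGGB Bayer pattern; the pattern phase, the kernels and the synthetic test image are illustrative assumptions, and this is not the algorithm developed in the lab.

    # Minimal bilinear demosaicing sketch for an RGGB Bayer pattern
    # (illustrative only; practical methods are edge-directed or
    # frequency/wavelet based, as discussed in the text).
    import numpy as np
    from scipy.ndimage import convolve

    def bayer_masks(h, w):
        """Boolean sampling masks for an RGGB pattern: G on a quincunx grid,
        R and B on rectangular grids (G has twice the density of R or B)."""
        r = np.zeros((h, w), bool); r[0::2, 0::2] = True
        b = np.zeros((h, w), bool); b[1::2, 1::2] = True
        g = ~(r | b)
        return r, g, b

    def demosaic_bilinear(cfa):
        """Reconstruct a full RGB image by bilinear interpolation of each channel."""
        h, w = cfa.shape
        r_m, g_m, b_m = bayer_masks(h, w)
        # Kernels that average the available neighbors of each missing sample.
        k_g  = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]], float) / 4.0
        k_rb = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]], float) / 4.0
        out = np.zeros((h, w, 3))
        for c, (mask, k) in enumerate([(r_m, k_rb), (g_m, k_g), (b_m, k_rb)]):
            out[..., c] = convolve(cfa * mask, k, mode='mirror')
        return out

    # Usage: simulate a CFA acquisition from a synthetic full-color image,
    # then reconstruct the three channels.
    rgb = np.random.rand(64, 64, 3)
    r_m, g_m, b_m = bayer_masks(64, 64)
    cfa = rgb[..., 0] * r_m + rgb[..., 1] * g_m + rgb[..., 2] * b_m
    rec = demosaic_bilinear(cfa)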

In our lab, we designed a new demosaicing algorithm that combines high-quality results with a low computational cost. We are currently investigating new demosaicing methods based on analysis in the wavelet domain, as well as regularization approaches that exploit prior knowledge about natural color images. Moreover, in order to remove the noise introduced by the sensor, we are studying procedures that jointly perform demosaicing and denoising.
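
As a generic, hypothetical illustration of the regularization idea (not the lab's method), the sketch below recovers the missing samples of a 1D signal by minimizing a data-fidelity term plus a quadratic smoothness prior; the prior, the penalty weight and the test signal are assumptions chosen only for demonstration.

    # Generic regularized reconstruction sketch (illustrative assumptions only):
    # fill in missing, noisy samples by solving
    #   min_x  ||m * (x - y)||^2 + lam * ||D x||^2
    # where m is the sampling mask and D the first-difference operator.
    import numpy as np

    def regularized_inpaint(y, mask, lam=1.0):
        """Closed-form solution of the quadratic problem above for a 1D signal."""
        n = y.size
        D = np.diff(np.eye(n), axis=0)                 # (n-1) x n first differences
        A = np.diag(mask.astype(float)) + lam * D.T @ D
        return np.linalg.solve(A, mask * y)

    # Usage: a smooth signal observed on every other sample, with noise.
    n = 100
    t = np.linspace(0.0, 1.0, n)
    clean = np.sin(2.0 * np.pi * t)
    mask = (np.arange(n) % 2 == 0)
    noisy = (clean + 0.1 * np.random.randn(n)) * mask
    estimate = regularized_inpaint(noisy, mask, lam=5.0)
    print(round(float(np.sqrt(np.mean((estimate - clean) ** 2))), 3))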