About me

I am a graduate student at the Massachusetts Institute of Technology (MIT), pursuing a Ph.D. in Electrical Engineering and Computer Science. I work in the Digital Integrated Circuits and Systems Group with Prof. Anantha Chandrakasan. My current research focuses on the design of low-power circuits for computational photography, computer vision and machine learning applications.

I received my S.M. degree in Electrical Engineering and Computer Science from MIT in June 2013. As part of my S.M. thesis, I worked on a reconfigurable bilateral filtering processor for computational photography along with my colleagues Rahul Rithe and Nathan Ickes.

I received my B.Tech. degree in Electrical Engineering from the Indian Institute of Technology (IIT) Delhi in 2011. For my undergraduate thesis, I worked with Prof. Anshul Kumar of the Department of Computer Science and Engineering on a reliable transactional memory architecture for multi-core processors. I was awarded the Institute Silver Medal for the best academic performance in Electrical Engineering in the class of 2011.


Research

My research interests lie in designing energy-efficient circuits and systems for accelerating multimedia applications on mobile devices. As part of my master's thesis, I designed a reconfigurable bilateral filtering processor for computational photography applications such as high dynamic range imaging and low-light image enhancement. For the first part of my Ph.D. thesis, I designed an energy-scalable accelerator for image deblurring. I am currently working on hardware accelerators for motion magnification and stereo vision systems.


An Energy-Scalable Accelerator for Blind Image Deblurring

Camera shake is the leading cause of blur in cell-phone camera images. Removing the blur requires deconvolving the blurred image with a blur kernel that is typically unknown and must be estimated from the blurred image itself. This kernel estimation is computationally intensive and takes several minutes on a CPU, which makes it impractical for mobile devices.
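
As a rough illustration of the problem, the NumPy sketch below (an illustrative model, not the chip's implementation) synthesizes a blurred image as the convolution of a sharp image with a blur kernel, and shows that recovering the sharp image is straightforward only when that kernel is known; blind deblurring must first estimate it.

    import numpy as np

    def blur(image, kernel):
        # Forward model: the observed image is the sharp image convolved
        # with the blur kernel (computed here as a circular convolution
        # in the frequency domain).
        K = np.fft.fft2(kernel, s=image.shape)
        return np.real(np.fft.ifft2(np.fft.fft2(image) * K))

    def deconvolve(blurred, kernel, eps=1e-3):
        # Non-blind deconvolution: if the kernel is known, the sharp image
        # can be recovered with a regularized inverse filter. The hard part
        # of blind deblurring is estimating the kernel in the first place.
        K = np.fft.fft2(kernel, s=blurred.shape)
        B = np.fft.fft2(blurred)
        return np.real(np.fft.ifft2(np.conj(K) * B / (np.abs(K) ** 2 + eps)))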

This work presents the first hardware accelerator for kernel estimation in image deblurring applications. Our approach combines a multi-resolution IRLS deconvolution engine with DFT-based matrix multiplication, a high-throughput image correlator, and a high-speed gradient projection solver based on selective updates. It achieves a 78x reduction in kernel estimation runtime and a 56x reduction in total deblurring time for a 1920 x 1080 image, enabling quick feedback to the user. Configurability in kernel size and number of iterations provides up to 10x energy scalability, allowing the system to trade off runtime against image quality. The test chip, fabricated in 40 nm CMOS, consumes 105 mJ for kernel estimation while running at 83 MHz and 0.9 V, making it suitable for integration into mobile devices.
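
The sketch below outlines, at a very high level, the coarse-to-fine alternating estimation that blind-deblurring pipelines of this kind typically follow: refine a latent sharp image for the current kernel estimate, then update the kernel, working from a downsampled copy of the image up to full resolution. The helper functions are simplified frequency-domain stand-ins for illustration only; they are not the IRLS deconvolution engine or the gradient projection solver implemented on the chip.

    import numpy as np
    from scipy.ndimage import zoom

    def estimate_kernel(blurred, kernel_size=15, levels=4, iters=5, eps=1e-3):
        # Coarse-to-fine blind kernel estimation: at each pyramid level,
        # alternate between refining the latent sharp image for the current
        # kernel and updating the kernel given that latent image. Assumes
        # the image at every pyramid level is larger than the kernel.
        kernel = np.zeros((kernel_size, kernel_size))
        kernel[kernel_size // 2, kernel_size // 2] = 1.0  # start from a delta
        for level in reversed(range(levels)):
            b = zoom(blurred, 0.5 ** level)
            for _ in range(iters):
                latent = _deconvolve(b, kernel, eps)
                kernel = _update_kernel(b, latent, kernel_size, eps)
        return kernel

    def _deconvolve(blurred, kernel, eps):
        # Regularized inverse filter (a stand-in for the IRLS engine).
        K = np.fft.fft2(kernel, s=blurred.shape)
        B = np.fft.fft2(blurred)
        return np.real(np.fft.ifft2(np.conj(K) * B / (np.abs(K) ** 2 + eps)))

    def _update_kernel(blurred, latent, kernel_size, eps):
        # Least-squares kernel fit in the frequency domain, then projection
        # onto non-negative kernels that sum to one (a stand-in for the
        # gradient projection solver).
        L = np.fft.fft2(latent)
        B = np.fft.fft2(blurred)
        k = np.fft.fftshift(np.real(np.fft.ifft2(np.conj(L) * B / (np.abs(L) ** 2 + eps))))
        cy, cx = k.shape[0] // 2, k.shape[1] // 2
        h = kernel_size // 2
        k = np.clip(k[cy - h:cy + h + 1, cx - h:cx + h + 1], 0, None)
        return k / k.sum() if k.sum() > 0 else k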

Publications and Talks

  • P. Raina, M. Tikekar, A. P. Chandrakasan, "An Energy-Scalable Accelerator for Blind Image Deblurring," accepted at the European Solid-State Circuits Conference (ESSCIRC), Sep. 2016.
  • P. Raina, M. Tikekar, A. P. Chandrakasan, "An Energy-Scalable Co-processor for Blind Image Deblurring," presented at IEEE International Solid-State Circuits Conference (ISSCC) Student Research Preview (SRP) Poster Session, Feb. 2016. Selected to receive the 2016 ISSCC Student Research Preview Award.

Reconfigurable Processor for Computational Photography

Computational photography refers to a wide range of image capture and processing techniques that extend the capabilities of digital photography and allow users to take photographs that could not have been taken with a traditional camera. Although the field emerged less than a decade ago, it already encompasses a broad set of techniques, including high dynamic range (HDR) imaging, low-light enhancement, panorama stitching, image deblurring and light field photography. These techniques have so far been implemented in software, which leads to high energy consumption and typically rules out real-time processing.

This work focuses on a hardware accelerator for bilateral filtering, which is commonly used in computational photography applications. The 40 nm CMOS test chip performs HDR imaging, low-light enhancement and glare reduction while operating from 98 MHz at 0.9 V down to 25 MHz at 0.5 V. It processes 13 megapixels/s while consuming 17.8 mW at 98 MHz and 0.9 V, a significant energy reduction compared to previous CPU/GPU implementations, enabling real-time computational photography on mobile devices. A live demonstration was presented at the ISSCC Demo Session. [video]
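
For reference, a brute-force version of the underlying bilateral filter is sketched below in NumPy. This is the textbook per-pixel definition, in which weights fall off with both spatial distance and intensity difference so that edges are preserved while smooth regions are denoised; it is not the chip's optimized hardware implementation.

    import numpy as np

    def bilateral_filter(image, radius=3, sigma_s=2.0, sigma_r=0.1):
        # Grayscale bilateral filter; assumes intensities in [0, 1].
        # Each output pixel is a normalized sum over a (2*radius+1)^2
        # neighborhood, weighted by spatial distance (sigma_s) and
        # intensity difference (sigma_r).
        pad = np.pad(image, radius, mode='edge')
        out = np.zeros_like(image, dtype=np.float64)
        norm = np.zeros_like(image, dtype=np.float64)
        for dy in range(-radius, radius + 1):
            for dx in range(-radius, radius + 1):
                shifted = pad[radius + dy:radius + dy + image.shape[0],
                              radius + dx:radius + dx + image.shape[1]]
                w_spatial = np.exp(-(dx * dx + dy * dy) / (2 * sigma_s ** 2))
                w_range = np.exp(-((shifted - image) ** 2) / (2 * sigma_r ** 2))
                w = w_spatial * w_range
                out += w * shifted
                norm += w
        return out / norm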

Publications and Talks

  • R. Rithe, P. Raina, N. Ickes, S. V. Tenneti, A. P. Chandrakasan, "Reconfigurable Processor for Energy-Efficient Computational Photography," IEEE Journal of Solid-State Circuits (JSSC), vol. 48, no. 11, pp. 2908-2919, Nov. 2013. [paper]
  • R. Rithe, P. Raina, N. Ickes, S. V. Tenneti, A. P. Chandrakasan, "Reconfigurable Processor for Energy-Scalable Computational Photography," IEEE International Solid-State Circuits Conference (ISSCC), pp. 164-165, Feb. 2013. [paper]

In the News

  • Picture Perfect: Quick, efficient chip cleans up common flaws in amateur photographs. (MIT News)
  • Image Processor Makes for Better Photos and Performance. (IEEE The Institute)
  • MIT imaging chip creates natural-looking flash photos. (Engadget)
  • MIT's new chip promises 'professional-looking' photos on your smartphone. (DPReview)
  • Improve your smartphone's photo quality with this chip. (Mashable)

A 3D Vision Processor for a Navigation Device for the Visually Challenged

3D imaging devices, such as stereo and time-of-flight (ToF) cameras, measure distances to the observed points and generate a depth image in which each pixel represents the distance to the corresponding location. The depth image can be converted into a 3D point cloud using simple linear operations. This spatial information provides a detailed understanding of the environment and is currently employed in a wide range of applications, such as human motion capture. However, its characteristics differ from those of conventional color images, so different approaches are needed to extract useful information efficiently.
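
As a concrete example of those linear operations, the sketch below back-projects a depth image into a point cloud with the standard pinhole camera model; fx, fy, cx, cy are the camera's intrinsic parameters (illustrative placeholders, not the actual sensor's calibration).

    import numpy as np

    def depth_to_point_cloud(depth, fx, fy, cx, cy):
        # Back-project a depth image into 3D points using the pinhole model:
        # X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy, Z = depth.
        h, w = depth.shape
        u, v = np.meshgrid(np.arange(w), np.arange(h))
        x = (u - cx) * depth / fx
        y = (v - cy) * depth / fy
        return np.stack([x, y, depth], axis=-1)  # (h, w, 3) array of points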

This chip is a low-power vision processor for such 3D image data. It achieves high energy efficiency through a parallelized, reconfigurable architecture and hardware-oriented algorithmic optimizations. The processor will be used as part of a navigation device for the visually impaired: a handheld or body-worn device designed to detect safe areas and obstacles and provide feedback to the user. We employ a ToF camera as the main sensor because of its small form factor and the relatively low computational complexity of processing its output.
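
Purely for illustration, the sketch below shows one simple way a device like this could flag obstacles from a point cloud: treat points that are close to the user and rise above an assumed ground level as obstructions. This is a hypothetical baseline with made-up parameters, not the algorithm implemented on the processor.

    import numpy as np

    def obstacle_mask(points, ground_y=1.2, max_range=3.0, clearance=0.2):
        # points: (H, W, 3) array of (x, y, z) coordinates in meters, with y
        # increasing downward from the sensor and z pointing forward.
        # Flag points within max_range of the user that sit more than
        # `clearance` above the assumed ground plane at y = ground_y.
        y, z = points[..., 1], points[..., 2]
        in_range = (z > 0) & (z < max_range)
        above_ground = y < (ground_y - clearance)
        return in_range & above_ground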

Publications and Talks

  • D. Jeon, N. Ickes, P. Raina, H. C. Wang, A. P. Chandrakasan, "24.1 A 0.6V 8mW 3D vision processor for a navigation device for the visually impaired," IEEE International Solid-State Circuits Conference (ISSCC), Feb. 2016. [paper]

In the News

  • A virtual “guide dog” for navigation. (MIT News)
  • MIT researchers have developed a ‘virtual guide dog’. (boston.com)
  • Wearable with 3D camera to guide visually impaired. (Pune Mirror)

Contact

MIT Microsystems Technology Laboratory
50 Vassar Street, 38-107
Cambridge, MA 02139
praina@mit.edu