Background: 

Embedded vision is considered a top-tier, fast-growing area. Embedded vision refers to the deployment of visual capabilities on embedded systems for better understanding of 2D/3D visual scenes. It covers a variety of rapidly growing markets and applications, such as Advanced Driver Assistance Systems (ADAS), industrial vision, video surveillance, and robotics. As the industry moves toward ubiquitous vision computing, vision capabilities and visual analysis will become an inherent part of many embedded platforms, making embedded vision a pioneering market in digital signal processing.

Problem Definition: 

While vision computing is already a strong research focus, embedded deployment of vision algorithms is still at a fairly early stage. Vision computing demands extremely high performance coupled with very low power and low cost. Embedded vision platforms must offer extreme compute performance while consuming very little power (often less than 1 W) and still remain sufficiently programmable. These conflicting goals (high performance, low power) impose massive challenges in architecting embedded vision platforms.

Project Roadmap: 

The aim is to architect, design, and implement customized vision processors that deliver power efficiency and performance at the same time. The primary approach is the separation of streaming and algorithm-intrinsic traffic. Streaming traffic is the communication for accessing the data under processing (e.g., image pixels, signal data samples) – the inputs and outputs of the algorithm. Conversely, algorithm-intrinsic traffic is the communication for accessing the data required by the algorithm itself (e.g., filter coefficients or classifier model parameters). This separation allows the algorithm-intrinsic traffic to be customized for a quality/bandwidth tradeoff, reducing traffic in exchange for algorithm quality. Furthermore, the separation is a major step toward efficiently chaining multiple vision algorithms into a complete vision flow. As an example, our Zynq-prototyped solution delivers 40 GOPS at 1.7 W of on-chip power (roughly 23 GOPS/W) for a complete object detection/tracking vision flow.
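
To make the separation concrete, below is a minimal host-side sketch in C, assuming a hypothetical memory-mapped accelerator: algorithm-intrinsic data (e.g., classifier coefficients) is loaded once over a register interface, while per-frame pixel traffic flows over a separate DMA path. The base address, register offsets, and the dma_send_frame() helper are illustrative placeholders, not the actual project hardware.

    /*
     * Hypothetical host-side sketch of traffic separation.
     * Addresses, offsets, and the accelerator itself are
     * illustrative placeholders, not the project hardware.
     */
    #include <stdint.h>
    #include <stddef.h>

    #define ACCEL_BASE       0x43C00000u              /* assumed accelerator base address */
    #define ACCEL_COEFF_MEM  (ACCEL_BASE + 0x1000u)   /* assumed on-chip coefficient RAM  */
    #define ACCEL_CTRL_START (ACCEL_BASE + 0x0000u)   /* assumed control register         */

    static inline void reg_write32(uintptr_t addr, uint32_t val)
    {
        *(volatile uint32_t *)addr = val;
    }

    /* Algorithm-intrinsic traffic: load classifier coefficients once.
     * This path can be customized (quantized, compressed) to trade
     * quality against bandwidth, independently of the pixel stream. */
    void load_algorithm_data(const uint32_t *coeffs, size_t n)
    {
        for (size_t i = 0; i < n; i++)
            reg_write32(ACCEL_COEFF_MEM + 4u * i, coeffs[i]);
    }

    /* Streaming traffic: per-frame pixel data moves over a separate
     * DMA path; dma_send_frame() is a stand-in for the platform's
     * DMA driver call. */
    extern void dma_send_frame(const uint8_t *pixels, size_t bytes);

    void process_frame(const uint8_t *frame, size_t bytes)
    {
        dma_send_frame(frame, bytes);       /* streaming input       */
        reg_write32(ACCEL_CTRL_START, 1u);  /* kick the accelerator  */
    }

Because the two paths never share an interface, the coefficient path can be reshaped for the quality/bandwidth tradeoff without touching the pixel stream.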

Candidate Requirements: 

Candidates need to be familiar with a hardware description language (preferably Verilog HDL) and C/C++. Furthermore, candidates will benefit from experience with Xilinx toolchains (ISE, PlanAhead, or Vivado). Experience with the Zynq platform is a plus.

Learning Opportunities: 

Learning opportunities in this project are many! Overall, accepted candidates will have the chance to work with the Zynq platform and learn HW/SW co-design in a very practical way. DMAs, AXI buses, embedded real-time software development, and HW accelerator design are only a few examples (see the sketch below).
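
As a taste of the embedded software side, here is a minimal polled frame transfer using the Xilinx standalone AXI DMA driver (xaxidma.h), in the style of Xilinx's simple-transfer examples. The device ID, frame size, and buffer alignment are assumptions tied to a particular block design, and the DMA is assumed to be in simple register mode (no scatter-gather).

    /* Minimal polled transfer sketch using the Xilinx standalone
     * AXI DMA driver; device ID and frame size are assumptions. */
    #include "xaxidma.h"
    #include "xparameters.h"
    #include "xil_cache.h"

    #define DMA_DEV_ID   XPAR_AXIDMA_0_DEVICE_ID  /* assumed instance name     */
    #define FRAME_BYTES  (640 * 480)              /* assumed 8-bit VGA frame   */

    static XAxiDma AxiDma;
    static u8 TxBuf[FRAME_BYTES] __attribute__((aligned(64)));
    static u8 RxBuf[FRAME_BYTES] __attribute__((aligned(64)));

    int stream_one_frame(void)
    {
        XAxiDma_Config *Cfg = XAxiDma_LookupConfig(DMA_DEV_ID);
        if (Cfg == NULL || XAxiDma_CfgInitialize(&AxiDma, Cfg) != XST_SUCCESS)
            return XST_FAILURE;

        /* The CPU caches and the DMA engine see memory independently:
         * flush what the DMA will read, invalidate what it will write. */
        Xil_DCacheFlushRange((UINTPTR)TxBuf, FRAME_BYTES);
        Xil_DCacheInvalidateRange((UINTPTR)RxBuf, FRAME_BYTES);

        /* Queue the receive first, then send the frame out. */
        if (XAxiDma_SimpleTransfer(&AxiDma, (UINTPTR)RxBuf, FRAME_BYTES,
                                   XAXIDMA_DEVICE_TO_DMA) != XST_SUCCESS)
            return XST_FAILURE;
        if (XAxiDma_SimpleTransfer(&AxiDma, (UINTPTR)TxBuf, FRAME_BYTES,
                                   XAXIDMA_DMA_TO_DEVICE) != XST_SUCCESS)
            return XST_FAILURE;

        /* Poll until both channels are idle. */
        while (XAxiDma_Busy(&AxiDma, XAXIDMA_DEVICE_TO_DMA) ||
               XAxiDma_Busy(&AxiDma, XAXIDMA_DMA_TO_DEVICE))
            ;

        return XST_SUCCESS;
    }

The cache flush/invalidate calls matter on Zynq: the ARM cores and the DMA engine access DDR independently, so stale cache lines are a classic first bug in HW/SW co-design.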

Related Research: 

Novel Architecture for Streaming Applications
