GPUs, with their massively parallel cores, have emerged as a suitable platform for accelerating parallel programs. Among such programs, computer vision is gaining importance because of its use in surveillance and mobile computing. This project studies the efficiency of GPUs in accelerating computer vision algorithms, examining both high-performance and embedded GPUs. So far, we have implemented and optimized a background subtraction algorithm, Mixture of Gaussians (MoG), on a high-performance GPU (Nvidia Tesla C2075), applying both general and algorithm-specific optimizations to achieve the best performance. We are currently implementing and analyzing MoG on an embedded GPU (Vivante GC2000).
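To make the MoG algorithm concrete, the following is a minimal sketch of one per-pixel Mixture-of-Gaussians update step in NumPy. It is an illustration of the general technique, not the project's GPU implementation: the function name `mog_update`, the parameter names, and the simplified matching/weighting rules are all our own assumptions, and a real GPU version would map the per-pixel work to parallel threads.

```python
import numpy as np

def mog_update(frame, means, variances, weights, lr=0.05, match_thresh=2.5):
    """One per-pixel Mixture-of-Gaussians update step (grayscale, K Gaussians).

    NOTE: hypothetical simplified sketch, not the project's actual code.
    frame:     (H, W) float array of pixel intensities
    means:     (K, H, W) per-pixel Gaussian means
    variances: (K, H, W) per-pixel Gaussian variances
    weights:   (K, H, W) per-pixel mixture weights (sum to 1 along K)
    Returns a boolean (H, W) foreground mask plus updated parameters.
    """
    K = means.shape[0]
    # A pixel matches a Gaussian if it lies within match_thresh std devs.
    dist = np.abs(frame[None, :, :] - means)
    matched = dist < match_thresh * np.sqrt(variances)

    # Grow the weights of matched components, shrink the rest, renormalize.
    weights = (1 - lr) * weights + lr * matched
    weights /= weights.sum(axis=0, keepdims=True)

    # Adapt means and variances only where a component matched.
    rho = lr * matched
    means = (1 - rho) * means + rho * frame[None, :, :]
    variances = (1 - rho) * variances + rho * (frame[None, :, :] - means) ** 2

    # Background = pixel matched at least one high-weight component;
    # everything else is flagged as foreground (moving object).
    background = (matched & (weights > 1.0 / K)).any(axis=0)
    return ~background, means, variances, weights
```

Because every pixel's mixture is updated independently, the algorithm is embarrassingly parallel, which is what makes it a good fit for both high-performance and embedded GPUs.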