Object Detection and Pose Estimation
Single Image 3D Object Detection and Pose Estimation
for Grasping,
M. Zhu, K. Derpanis, Y. Yang, S. Brahmbhatt, M. Zhang, C. Phillips, M. Lecce and K. Daniilidis, International Conference on Robotics and Automation (ICRA), 2014. [pdf]
In this paper, we address the problem of a robot grasping
objects of known 3D shape from their projections in
single images of cluttered scenes.
We present a novel approach for detecting these objects
and estimating their 3D pose from a single image.
Action Recognition and Detection
From Actemes to Action: A Strongly-supervised Representation
for Detailed Action Understanding,
W. Zhang, M. Zhu and K. Derpanis,
International Conference on Computer Vision
(ICCV), 2013.
[pdf]
Human action classification (“What action is present in
the video?”) and detection (“Where and when is a particular
action performed in the video?”) are key tasks
for understanding imagery.
This paper presents a novel approach for analyzing
human actions in non-scripted, unconstrained video
settings based on volumetric, x-y-t, patch classifiers,
termed actemes.
Monocular 3D
Monocular Visual Odometry and Dense 3D Reconstruction
for On-Road Vehicles,
M. Zhu, S. Ramalingam, Y. Taguchi and T. Garass, European Conference on Computer Vision, workshop on Computer Vision in Vehicle Technology (ECCV), 2012. [pdf]
More and more on-road vehicles are equipped with cameras
each day. Accurate ego-motion estimation of a vehicle from
video sequences is a challenging and important problem in
robotics and computer vision. This paper presents a novel
method for estimating the relative motion of a vehicle
from a sequence of images obtained using a single
vehicle-mounted camera.
Literate PR2
Reading text in natural scenes is an easy task for humans
but not for robots. Our PR2 robot, Graspy, however, has become quite good
at it. In this project, Graspy is programmed to detect
and recognize text in an indoor environment.
What's more, Graspy is so excited to have this ability that he
reads out every single word he sees!
literate_pr2 is now an open source ROS package:
http://www.ros.org/wiki/literate_pr2.
To learn more about ROS and the PR2, take a look at
Willow Garage's website.
In the Press
......