
Research Projects

Current projects


Interactive Behavior Learning

Low-level navigation behaviors are difficult to create and tune by hand, and many of the characteristics we would like navigation controllers to have are difficult to parameterize. Instead of hand-crafting behaviors, I am working on driving by example: while a human drives a robot with a remote control, the robot records each situation and the action the human took. At runtime, the robot uses the stored situation-action pairs to drive autonomously.

December 2007:
[ Interim report pdf ]

November 2007:
[ Slides pdf ]
[ Demo mov ] [ Full Video mov ]
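The situation-action idea above can be sketched as a nearest-neighbor lookup. This is a minimal hypothetical illustration, not the project's actual implementation: the feature vectors (two range readings) and the (linear, angular) velocity commands are assumed for the example.

```python
import math

class SituationActionController:
    """Nearest-neighbor lookup over demonstrated situation-action pairs.

    Hypothetical sketch: a 'situation' is a small feature vector (here,
    two range readings) and an 'action' is a (linear, angular) velocity
    command; the real system's features and actions are not specified here.
    """

    def __init__(self):
        self.situations = []  # features recorded while a human drives
        self.actions = []     # the command the human gave in each situation

    def record(self, situation, action):
        self.situations.append(situation)
        self.actions.append(action)

    def act(self, situation):
        # At runtime: return the action paired with the closest stored situation.
        def dist(a, b):
            return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
        best = min(range(len(self.situations)),
                   key=lambda i: dist(situation, self.situations[i]))
        return self.actions[best]

# Teaching phase: the human drives; the robot logs situation-action pairs.
ctrl = SituationActionController()
ctrl.record([1.0, 0.2], [0.5, 0.0])  # clear ahead -> drive straight
ctrl.record([0.2, 1.0], [0.1, 0.8])  # obstacle ahead -> slow down and turn

# Autonomous phase: the current situation selects the most similar memory.
action = ctrl.act([0.25, 0.9])
print(action)  # closest stored situation is the obstacle case -> [0.1, 0.8]
```

A real controller would of course use richer features and interpolate between neighbors, but the retrieval step is the heart of the approach.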


Learning-based visual odometry

A visual odometry system that learns vehicle rotation rates and velocities (almost) directly from what the camera sees. Other visual odometry systems, by contrast, use geometric calculations to determine vehicle motion.

[ ICRA '08 pdf ] [ Video mov ]
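The learning-based idea can be shown in miniature with a scalar regression: fit a map from an image-derived feature straight to a motion quantity, with no geometric ego-motion computation. The feature (mean horizontal optical flow) and the training values below are assumptions for illustration only.

```python
# Training: pair an image-derived feature (here, mean horizontal optical
# flow in pixels/frame - an assumed feature) with the rotation rate
# measured by some other source, e.g. a gyro or wheel odometry.
flow  = [-4.0, -2.0, 0.0, 2.0, 4.0]   # camera feature
omega = [-0.8, -0.4, 0.0, 0.4, 0.8]   # measured rotation rate (rad/s)

# Scalar least squares fit of omega ~= w * flow.
w = sum(f * o for f, o in zip(flow, omega)) / sum(f * f for f in flow)

# Runtime: rotation rate is predicted directly from the camera feature.
print(w * 3.0)  # learned w = 0.2, so a flow of 3.0 px/frame -> 0.6 rad/s
```

The actual system learns a richer mapping, but the contrast with geometric methods is the same: the relation between image motion and vehicle motion is fit from data rather than derived from camera geometry.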

Pointing gesture interpretation

Human low-level vision

Past projects


Mobile Manipulation

A robot arm and gripper, mounted on a Segway RMP200 platform, that serves coffee. It locates a coffee maker, fills a mug from it, and brings the coffee to a customer.

[ Video mov ]


Using RANSAC to Determine Camera Pose

My final project for Computer Vision (CS4495). I used the 4-point algorithm to automatically estimate camera pose.

[ More info ]
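The RANSAC loop itself is generic: repeatedly fit a model to a minimal random sample and keep the model with the most inliers. A sketch follows, demonstrated on 2-D line fitting because a full pose solver is long; in the project, the minimal solver would be the 4-point algorithm and the error a reprojection error. All names and values here are illustrative assumptions.

```python
import random

def ransac(data, fit_minimal, error, sample_size, threshold, iterations=200):
    """Generic RANSAC: fit a model to random minimal samples and keep the
    model that agrees with the most data points (inliers)."""
    best_model, best_inliers = None, []
    for _ in range(iterations):
        model = fit_minimal(random.sample(data, sample_size))
        if model is None:  # degenerate sample, e.g. repeated point
            continue
        inliers = [d for d in data if error(model, d) < threshold]
        if len(inliers) > len(best_inliers):
            best_model, best_inliers = model, inliers
    return best_model, best_inliers

# Toy model: a 2-D line y = m*x + b fit from a minimal sample of 2 points.
def fit_line(pts):
    (x1, y1), (x2, y2) = pts
    if x1 == x2:
        return None
    m = (y2 - y1) / (x2 - x1)
    return m, y1 - m * x1

def line_error(model, pt):
    m, b = model
    return abs(pt[1] - (m * pt[0] + b))

random.seed(0)
points = [(x, 2 * x + 1) for x in range(10)] + [(3.0, 40.0), (7.0, -5.0)]
model, inliers = ransac(points, fit_line, line_error, 2, 0.1)
print(model, len(inliers))  # recovers y = 2x + 1; the 2 outliers are rejected
```

Swapping in a 4-point pose solver and a reprojection-error function turns the same loop into robust camera pose estimation.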

Using Finite-Element Analysis to Compare Jaw Strength in Herichthys minckleyi

I used Finite-Element Analysis (FEA), a computer simulation of mechanical stresses normally used in engineering, to quantify the strength of H. minckleyi jaws. I used Micro Computed Tomography (µ-CT) scans to generate 3D models of the jaws, and wrote software to help analyze the data. I then studied the relationship between the location of stress in the jaws during biting and the differences in jaw shape between individuals that do and do not crush hard prey.
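To make the FEA idea concrete, here is a toy one-dimensional version of what such a simulation computes: displacements of discrete nodes under load, from which element stresses follow. The material values are assumed (steel-like) and have nothing to do with the actual 3D jaw models.

```python
# Toy 1-D finite-element model: a bar of two equal elements, clamped at
# node 0, with an axial force F applied at node 2. Values are assumed;
# the real project used 3-D models built from u-CT scans.
E = 200e9       # Young's modulus (Pa)
A = 1e-4        # cross-sectional area (m^2)
L = 0.5         # element length (m)
F = 1000.0      # applied axial force (N)
k = E * A / L   # axial stiffness of one element

# Equilibrium at the two free nodes gives the linear system
#    2k*u1 - k*u2 = 0
#   -k*u1 + k*u2 = F
# which solves by substitution to:
u1 = F / k
u2 = 2 * F / k

# Element stress = E * strain = E * (elongation / L):
stress1 = E * (u1 - 0.0) / L
stress2 = E * (u2 - u1) / L
print(stress1, stress2)  # both equal F/A, as statics requires for a uniform bar
```

A 3D jaw model is the same computation at scale: many thousands of elements, a stiffness matrix assembled from all of them, and stress maps read off the solved displacements.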

Last modified 18 February 2008 at 12:15 pm by Richard Roberts