Low-level navigation behaviors are difficult to create and tune by hand, and many of the characteristics we would like navigation controllers to have are difficult to parameterize. Instead of hand-crafting behaviors, I am working on driving by example: while a human drives a robot with a remote control, the robot remembers each situation and the action the human took. At runtime, the robot drives autonomously by retrieving the stored situation-action pairs.
[ Interim report pdf ]
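The retrieval step can be sketched as a nearest-neighbor lookup over remembered situation-action pairs. This is a minimal illustration, not the project's actual code; the feature encoding and memory contents below are hypothetical.

```python
import math

def nearest_action(situation, memory):
    """Return the action paired with the stored situation closest
    to the current one (Euclidean distance over feature vectors)."""
    best_action, best_dist = None, math.inf
    for stored_situation, action in memory:
        d = math.dist(situation, stored_situation)
        if d < best_dist:
            best_dist, best_action = d, action
    return best_action

# Hypothetical memory recorded while a human drove:
# (range-sensor readings) -> (forward velocity, turn rate)
memory = [
    ((1.0, 0.2, 1.0), (0.5, 0.0)),   # clear ahead: go straight
    ((0.2, 1.0, 1.0), (0.2, -0.8)),  # obstacle on the left: turn right
    ((1.0, 1.0, 0.2), (0.2, 0.8)),   # obstacle on the right: turn left
]
```

At runtime the robot would call `nearest_action` on each new sensor reading; a real system would use a richer situation encoding and a faster index than a linear scan.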
A visual odometry system that learns vehicle rotation rates and velocity (almost) directly from what the camera sees. In contrast, conventional visual odometry systems recover vehicle motion through geometric calculations.
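One simple way to learn motion from camera data is to regress logged vehicle motion onto image-derived features. This is a hedged sketch, not the project's method: the optical-flow features and training values below are made up for illustration.

```python
import numpy as np

# Hypothetical training data: each row of X is a flow feature vector
# (mean horizontal flow, mean flow magnitude) from consecutive frames;
# each row of y is the (rotation rate, forward velocity) logged while
# a human drove the vehicle.
X = np.array([[0.0, 1.0], [0.5, 1.0], [-0.5, 1.0], [0.0, 2.0]])
y = np.array([[0.0, 1.0], [0.4, 1.0], [-0.4, 1.0], [0.0, 2.0]])

# Fit a linear map W so that X @ W approximates y (ordinary least squares).
W, *_ = np.linalg.lstsq(X, y, rcond=None)

def predict_motion(features):
    """Predict (rotation rate, velocity) from a flow feature vector."""
    return np.asarray(features) @ W
```

A real system would use many more features and a nonlinear model, but the structure is the same: camera-derived features in, motion estimates out.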
A robot arm and manipulator, mounted on a Segway RMP200 platform, that serves coffee: it locates a coffee maker, fills a mug, and brings the coffee to a customer.
[ Video mov ]
My final project for Computer Vision (CS4495). I used the 4-point algorithm to automatically calibrate camera position. [ More info ]
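The 4-point algorithm recovers a 3x3 homography from four point correspondences. A minimal sketch using the direct linear transform (DLT) is below; this is an illustration of the general technique, not the project's actual implementation.

```python
import numpy as np

def homography_4pt(src, dst):
    """Estimate the homography H mapping src points to dst points
    from four correspondences via the direct linear transform (DLT)."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        # Each correspondence contributes two linear constraints on H.
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # H is the null-space vector of A: the right singular vector
    # associated with the smallest singular value.
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def apply_h(H, pt):
    """Map a 2D point through H using homogeneous coordinates."""
    p = H @ np.array([pt[0], pt[1], 1.0])
    return p[:2] / p[2]
```

With the homography in hand, camera position can be recovered by decomposing it against the known scene plane.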
I used Finite-Element Analysis (FEA), a computer simulation of mechanical stresses normally used in engineering, to quantify the strength of H. minckleyi jaws. I used Micro Computed Tomography (µ-CT) scans to generate 3D models of the jaws, and wrote software to help analyze the data. I then studied the relationship between the location of stress in the jaws during biting and the differences in jaw shape between individuals that do and do not crush hard prey.
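FEA solvers report a full stress tensor at each element, which is commonly reduced to a single scalar for comparison; the von Mises equivalent stress is one standard choice. The snippet below is a generic sketch of that reduction (from principal stresses), not the analysis software written for this project.

```python
import math

def von_mises(s1, s2, s3):
    """Von Mises equivalent stress from the three principal stresses,
    a standard scalar summary of a stress state in FEA results."""
    return math.sqrt(((s1 - s2) ** 2 + (s2 - s3) ** 2 + (s3 - s1) ** 2) / 2)
```

Mapping a scalar like this over every element in the jaw mesh gives a stress field whose peaks can be compared across individuals with different jaw shapes.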