Lekha Mohan

I am Lekha Mohan, currently a robotics engineer at Luminar Technologies, Palo Alto, California. I work on core perception algorithms for our custom lidar sensor to enable self-driving capabilities in autonomous vehicles.

Prior to this, I was a research assistant at the Robotics Institute, Carnegie Mellon University, under Abhinav Gupta. I graduated with an M.S. in Robotic Systems Development from the Robotics Institute in 2016, where I worked with several robot platforms on complex robot manipulation.

My research interests revolve around robot manipulation and perception. I am interested in building large-scale robot datasets and in enabling robots to perform complex, comprehensive manipulation skills. I have largely worked at the confluence of perception, robot learning, and robot manipulation, and to an extent on motion planning. My recent research focused on learning from human demonstrations using the Baxter robot platform.

Check out my CV here

Email: lekhawm[AT]gmail.com

Research and Selected Projects

Learning from Human Demonstrations
Lekha Mohan, Lerrel Pinto, Abhinav Gupta

Can Baxter perform complex manipulation actions by watching your demo video? I am currently working on imitation learning with the Baxter robot, focusing on complex robot manipulation. ImageNet immensely accelerated the growth of computer vision models. Inspired by the research impact of ImageNet, and to address the lack of sufficient robot data, I am developing a novel large-scale dataset of 13,000 human-robot demonstrations spanning 20 classes of diverse manipulation actions. These actions include opening bottles, passing objects from one hand to another, and stacking, using objects of various shapes and sizes. I record human demonstrations and the kinesthetic imitation of the same task on the robot. Using this large-scale data, I am trying to jointly learn correspondences between human actions and robot actions in order to develop robust imitation learning frameworks and improve generalization to unseen tasks. We will be releasing this dataset for the benefit of other researchers.
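
A minimal sketch of how such paired human-robot data could be used to jointly learn correspondences, assuming a contrastive (InfoNCE-style) objective; the networks, names, and loss below are illustrative assumptions, not the actual pipeline:

```python
# Hypothetical sketch: embed paired human/robot demo frames into a shared
# space so that frames from the same demonstration land close together.
import torch
import torch.nn as nn
import torch.nn.functional as F

class EmbeddingNet(nn.Module):
    """Maps a demo frame (3x224x224) to a unit-norm embedding."""
    def __init__(self, dim=128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 32, 5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(64, dim)

    def forward(self, x):
        return F.normalize(self.fc(self.conv(x).flatten(1)), dim=1)

human_net, robot_net = EmbeddingNet(), EmbeddingNet()

def correspondence_loss(human_frames, robot_frames):
    """Pull embeddings of true human/robot pairs together and push
    apart the other pairings in the batch (InfoNCE-style)."""
    h = human_net(human_frames)            # (B, D)
    r = robot_net(robot_frames)            # (B, D)
    logits = h @ r.t() / 0.1               # (B, B) similarity matrix
    targets = torch.arange(len(h))         # diagonal entries are true pairs
    return F.cross_entropy(logits, targets)
```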




Human Assistive Robotic Picker - UR5 Platform
Lekha Mohan, Alex Brinkman, Rick Shanor, Abhishek Bhatia, and Feroze Naina (video)

Amazon has automated its warehouses by using robots to move storage shelves. However, humans are still required to pick each object from a shelf bin and place it into the shipping box. Our primary goal was to solve this problem by developing a robot system that can automatically parse a list of items, identify the desired items on a shelf, and pick and place them into the order bin. We collaborated with Maxim Likhachev and the Search Based Planning Lab to compete in the 2016 Amazon Picking Challenge in Leipzig. Our system, the Human Assistive Robotic Picker (HARP), consists of perception, grasping, and planning subsystems. The UR5 robot platform, outfitted with a suction gripper, picks small household objects from the twelve shelf bins.
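
To make the system structure concrete, here is a rough sketch of the top-level pick loop; the `perception`, `planner`, and `gripper` interfaces are hypothetical stand-ins for HARP's actual subsystems:

```python
# Hypothetical top-level loop for an order-picking system: parse the
# order list, localize each item in its shelf bin, then pick and place.
def fulfill_order(order_items, shelf_bins, perception, planner, gripper):
    for item in order_items:
        bin_id = shelf_bins[item]                  # which of the 12 bins
        pose = perception.localize(item, bin_id)   # 6-DOF object pose
        if pose is None:
            continue                               # not found; skip for now
        trajectory = planner.plan_to(pose)         # collision-free UR5 path
        gripper.execute_pick(trajectory)           # suction-cup grasp
        gripper.place_in_order_bin()
```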




Table Clearing Task on the HERB Platform
Lekha Mohan, Gauri Gandhi, Rohith Dasarathi, Keerthana Manivannan

Having robots perform household tasks is a big challenge: the operator/user typically needs to give the robot sequential commands for each subtask. To perform a complicated task in, say, a kitchen environment, we need task planners that can reason over very large sets of states by manipulating partial descriptions, while geometric planners operate on completely detailed geometric specifications of world states. The Home Exploring Robot Butler (HERB) currently uses a sequential task planner for table clearing, which plans the entire task and obtains a feasible global plan before beginning execution. We implemented a task planner that reduces planning time by breaking the high-level task into subgoals, making choices and committing to them, which greatly reduces the length of plans. This approach is suitable for complex tasks where strict optimality is not crucial. The framework is hierarchical planning; its advantages led us to use its key features (fluents, operators, etc.) in a variant that uses a breadth-first approach to plan in the now.
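
The core idea can be sketched in a few lines: search breadth-first only as far as the next subgoal, commit, execute, and repeat. The operator/fluent encoding below is a toy illustration under my own assumptions, not HERB's actual planner:

```python
# Toy breadth-first planner over symbolic fluents, in the spirit of
# "planning in the now": plan only as far as the next subgoal.
from collections import deque, namedtuple

Operator = namedtuple("Operator", ["name", "preconds", "add", "delete"])

def bfs_to_subgoal(state, subgoal, operators):
    """Shortest operator sequence making every fluent in `subgoal` true."""
    frontier = deque([(frozenset(state), [])])
    visited = {frozenset(state)}
    while frontier:
        s, plan = frontier.popleft()
        if subgoal <= s:
            return plan
        for op in operators:
            if op.preconds <= s:                      # operator applicable?
                nxt = frozenset((s - op.delete) | op.add)
                if nxt not in visited:
                    visited.add(nxt)
                    frontier.append((nxt, plan + [op.name]))
    return None

# Toy table-clearing fragment: move the mug from the table to the bin.
ops = [
    Operator("pick(mug)", {"on(mug,table)", "handempty"},
             {"holding(mug)"}, {"on(mug,table)", "handempty"}),
    Operator("place(mug,bin)", {"holding(mug)"},
             {"in(mug,bin)", "handempty"}, {"holding(mug)"}),
]
print(bfs_to_subgoal({"on(mug,table)", "handempty"}, {"in(mug,bin)"}, ops))
# -> ['pick(mug)', 'place(mug,bin)']
```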



Visualization tools for predicting a robot's trajectory
Robotics Intern '16 at 5D Robotics

During this summer internship, I developed algorithms to predict the ground robot's trajectory in order to prevent potential collisions caused by the latency of in-house custom sensors. I integrated my algorithm into an RViz plugin, which was deployed at one of 5D Robotics' industrial facilities.
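
As a rough illustration of the idea (not 5D Robotics' code), latency can be compensated by rolling the last known pose forward with a constant-velocity motion model; the parameter names here are assumptions:

```python
# Sketch: forward-predict a ground robot's pose over the sensor latency
# using a constant-velocity unicycle model, so collision checks and the
# RViz display reflect where the robot actually is, not where it was.
import math

def predict_pose(x, y, theta, v, omega, latency, dt=0.01):
    steps = int(latency / dt)
    for _ in range(steps):
        x += v * math.cos(theta) * dt      # linear velocity v (m/s)
        y += v * math.sin(theta) * dt
        theta += omega * dt                # angular velocity omega (rad/s)
    return x, y, theta

# e.g. compensate 150 ms of latency at 0.5 m/s while turning slightly:
print(predict_pose(0.0, 0.0, 0.0, v=0.5, omega=0.2, latency=0.15))
```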


Direct perception based automation of a car in a racing game
Lekha Mohan, Sai Prabhakar, Aishanou Rait

In vision-based autonomous systems, a direct perception approach for estimating driving affordances was proposed recently. The input image is mapped to a small number of key perception indicators that directly describe the affordance of a road/traffic state for driving, and these indicators enable a simple controller to drive the car autonomously. Our project trained AlexNet from scratch for this mapping. We also explored transfer learning for the same task: starting from a pre-trained AlexNet, we fine-tuned the weights of all the convolutional layers (layers 1-5) and trained all the fully connected layers (layers 6-8) from scratch, to preserve the aspect ratio of the images obtained from the simulator.
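
The layer-wise setup described above can be sketched with torchvision's AlexNet (assuming torchvision ≥ 0.13); the number of affordance indicators and the learning rates are assumptions for illustration:

```python
# Sketch: fine-tune the pretrained conv layers (1-5) and train a fresh
# fully connected head (layers 6-8) that regresses affordance indicators.
import torch
import torch.nn as nn
from torchvision import models

NUM_INDICATORS = 13  # assumed number of affordance outputs

net = models.alexnet(weights="IMAGENET1K_V1")  # pretrained conv weights

# Layers 6-8: replace the classifier with freshly initialized FC layers.
net.classifier = nn.Sequential(
    nn.Dropout(), nn.Linear(256 * 6 * 6, 4096), nn.ReLU(inplace=True),
    nn.Dropout(), nn.Linear(4096, 4096), nn.ReLU(inplace=True),
    nn.Linear(4096, NUM_INDICATORS),
)

# Fine-tune the conv stack with a smaller learning rate than the new head.
optimizer = torch.optim.SGD([
    {"params": net.features.parameters(), "lr": 1e-4},    # layers 1-5
    {"params": net.classifier.parameters(), "lr": 1e-3},  # layers 6-8
], momentum=0.9)
```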


Source stolen from here