My name is Ankur Mohan. The purpose of this page is to give you information about who I am and what I have done over the last ten years. There is a lot of information here, so if you are interested in a specific part of my background, please select from the drop-down list. Otherwise, read on! 🙂
I obtained a bachelor's degree in Electrical Engineering from the Indian Institute of Technology – Bombay in 2000, a master's in Electrical Engineering from the University of Maryland in 2004, and an M.B.A. from Georgetown University in 2013. For those who believe in standardized tests, I scored in the 99.5th percentile on both the GRE (Graduate Record Exam, score: 2270) and the GMAT (Graduate Management Admission Test, score: 760).
My bachelor's and master's research focused on applying computer vision techniques to practical problems. I worked on face detection and recognition, pose estimation from image feature correspondences, structure-from-motion algorithms, and object detection and classification using stereo vision and boosted classifier trees. After graduation, I joined a startup in State College, PA that developed algorithms to infer customer demographics such as age, gender, and ethnicity from images and videos captured by point-of-service cameras in restaurants. Around the middle of 2006, I returned to the Washington, DC area to work on a few exciting DARPA projects with researchers from the University of California – San Diego and the University of Maryland. I'll mention three projects here.

The first dealt with a computer-vision-based 3D handheld mouse that used edge-length ratios and angle semantics to locate a computer screen in cluttered images, and pose estimation techniques to determine the camera's line of sight. The intersection of the camera's line of sight with the computer screen determined the position of the cursor on the screen. The second project was a face detection and recognition system: faces were detected in images using the Viola-Jones face detector, and the person corresponding to each face was then identified from a head-model database by computing the rotation and translation needed to align each head model to the face image, using a variant of the Lucas-Kanade optical flow method.
The third project aimed to develop a system, deployable on mobile devices, that could recognize certain keywords in Arabic documents: image processing techniques segmented each document into lines, words, and characters, and optical character recognition then identified the characters. These projects gave me the opportunity to work with world-renowned researchers in computer vision and image recognition, and I honed my mathematical, analytical, and programming skills.
In 2008, I joined Scaleform Corporation, a startup founded by my college friends Michael Antonov and Brendan Iribe. Scaleform was developing a Flash runtime (named Scaleform GFx) for video games, with the goal of bringing the power and flexibility of Flash technology to common video game platforms such as consoles, PCs, and handheld and mobile devices. We developed the entire Flash player, including the ActionScript virtual machine, a high-performance tessellator, and a cross-platform rendering engine. I developed one of our add-ons, the IME (Input Method Editor), which enabled text input in Asian languages that require additional UIs such as reading windows and candidate lists. I also developed engine integrations with popular third-party 3D engines that constitute the heart of video games.

For those not familiar with how video games are made, here's a quick summary. Video games are highly complex software systems. Many components go into making a game, the most important of which is commonly referred to as the 3D engine: the software that calculates how computer graphics objects look and behave. Other, relatively smaller components solve other problems in game making, such as managing and displaying user interfaces or handling audio and video streams. These components are collectively referred to as "middleware," and Scaleform was one such middleware vendor. Games are made by video game companies that specialize in creating beautiful games with creative gameplay. To avoid reinventing the wheel and save money, these companies typically license the necessary software components from 3D engine makers and middleware companies.
Scaleform GFx became a highly successful product and came to be adopted by most of the leading video game producers in the world. In March 2010, Scaleform was acquired by Autodesk, Inc. and I joined the Autodesk family along with the rest of our team.
At Autodesk, I continued to work as a senior software engineer, helping support new platforms and new engine integrations. While I loved software engineering and programming, I had a keen interest in business and started a part-time MBA at Georgetown University. In parallel with my MBA, I took on more business- and marketing-oriented tasks at work. I launched a bi-weekly webinar series for our customers in which I would cover a particular feature of our product in detail. I also began traveling extensively to evangelize our product at industry conferences and trade shows and to speak with customers. My travels took me all around the world, including multiple trips to Europe, Japan, China, and Korea. In 2012, I was promoted to a product manager role and took on the additional responsibility of handling our OEM business relationships with major third-party vendors such as Nintendo, Epic Games, Intel Corporation, Sony Entertainment, and many others. I led negotiations on a five-year, five-million-dollar contract with Nintendo in Japan, and also led the renewal negotiations for our OEM contracts with Intel Corporation and Crytek of Germany. I continued to travel extensively around the world to promote our products and win new customers.
In 2013, our division was mandated to expand into markets adjacent to gaming that shared the need for the high-performance, cross-platform rendering system our product provided. I led discussions with world-leading companies such as Samsung and LG to understand their workflows and determine how our product could meet their needs. I also worked with medical device makers, smart appliance makers, set-top box manufacturers, and eLearning companies to drive adoption in those markets.
In 2014, we acquired a Swedish game engine maker and our product line expanded to include a game engine. As the product manager for the expanded group, I led the creation and implementation of a documentation system that automatically generated our reference documentation from code comments and correlated the reference docs with instances of code usage in our samples and demos. I also managed a team of six senior engineers (reporting directly to me) to launch a system for customers to create support tickets with specified SLAs (service level agreements), handle customer questions promptly, and evangelize product features through YouTube videos and webinars. In the engineering management role, I set direction and goals for team members, organized weekly team meetings to review progress, conducted performance reviews, and made salary adjustment and bonus decisions.
Around July 2014, I became keenly interested in the exciting, emerging field of small UAVs. I have been interested in flight since I was a kid, and while I did not study aerospace engineering in college, my strong foundation in computer science and electrical engineering enabled me to quickly learn about embedded electronics, brushless motors, RTOSes, flight control systems, and the other components that go into a UAV. I believe the best way to learn about something is to build it yourself, so I decided to build a quadcopter on my own. I wrote a flight control system from scratch that featured a DCM-based sensor fusion system to estimate orientation, rate PID controllers for yaw/pitch/roll, and a SONAR-based altitude PID to control the quadcopter's altitude. I also wrote a wireless command-and-control system to operate the quadcopter from my ground station (a PC), along with a Qt command and diagnostic application that let me adjust PID parameters, visualize sensor measurements in an oscilloscope-like graph view, and record and play back sensor and PID controller outputs through a logging system.
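To give a flavor of what a rate PID controller in a flight control loop does, here is a minimal single-axis sketch in Python. This is purely illustrative (the actual firmware described above was written independently, and the gains and limits here are made-up values): each control cycle, the controller compares the commanded angular rate against the gyro measurement and produces a correction to mix into the motor outputs.

```python
class RatePID:
    """Minimal single-axis rate PID controller (illustrative sketch only;
    gains and output limits are hypothetical, not real flight values)."""

    def __init__(self, kp, ki, kd, output_limit=500.0):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.output_limit = output_limit
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint_dps, measured_dps, dt):
        """Setpoint and measurement are angular rates in degrees/second;
        dt is the elapsed time of the control cycle in seconds."""
        error = setpoint_dps - measured_dps
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt if dt > 0 else 0.0
        self.prev_error = error
        out = (self.kp * error
               + self.ki * self.integral
               + self.kd * derivative)
        # Clamp the output to the motor command range
        return max(-self.output_limit, min(self.output_limit, out))
```

In a real quadcopter, one such controller runs per axis (yaw, pitch, roll), and an outer loop (e.g. the SONAR altitude PID mentioned above) feeds setpoints into the inner rate loops.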
In addition to building the quadcopter, I have been learning and writing about Realtime Kinematic (RTK) GPS systems, aerial photography and surface reconstruction using photogrammetry techniques, and various sensor fusion algorithms.
I have also been a leader in the Washington area sUAV community and have given multiple talks about flight control systems, safe and responsible usage of consumer sUAV systems and computer vision techniques. Information about some of my talks is included below.
“Fantastic 3hr fully detailed presentation by Ankur!! Wow I learned a lot!!! 🙂 thx Ankur for generously sharing your expertise”, Anne V.
Speaker Series: Flying with a 3DRobotics IRIS
Wednesday, Aug 26, 2015, 7:00 PM
Nova Labs, 1916 Isaac Newton Square, Reston, VA
29 Operators Went
Ankur is back to do a talk about his experiences in using the 3DRobotics IRIS! The talk will focus on:
- Setting up the Iris
- First flight – pre-arm checks
- Best practices with using the Iris
- Various sensors on board the Iris
- Communicating with the Iris – Radio/MavLink
- Various flight modes – Alt Hold, Loiter, Circle, Auto etc.
- Various Fai…
“Thank you, Ankur, for a fascinating presentation. Thanks for sharing all the details of your knowledge and experience.”, Stuart Showalter
Tech Talk: Building a Drone
Monday, Apr 27, 2015, 7:00 PM
Nova Labs, 1916 Isaac Newton Square West, Reston, VA
36 Operators Went
Ankur, a member of our DC DUG community and an engineer by training, has been working on making his own drone, including the hardware and the software (flight control system, communication system, and flight control application). He can control the yaw/pitch/roll using an Xbox controller connected to his computer. He has volunteered to give a talk abou…
Over the last couple of months, I have been working on a client-server system for sharing UAV telemetry over a network, so that government agencies and other pilots can see the telemetry being sent by the ground station of a pilot flying a drone. Such a system addresses the problem that small UAVs are invisible to everyone except their pilot, since they communicate only with the pilot; an air traffic controller at an airport, for example, has no way of knowing how many small UAVs are flying in the vicinity. My system solves this problem in a robust, efficient, and scalable manner. Detailed information about the system can be found here:
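The core idea of such a system can be sketched in a few lines of Python. This is not the actual protocol of the system described above; the message format, field names, and port number below are hypothetical, chosen only to illustrate a ground station publishing telemetry samples to a shared server over UDP.

```python
import json
import socket

def encode_telemetry(uav_id, lat, lon, alt_m, heading_deg):
    """Serialize one telemetry sample as a JSON payload.
    (Hypothetical message format for illustration only; the deployed
    system's wire protocol may differ.)"""
    msg = {
        "id": uav_id,        # unique identifier for this UAV
        "lat": lat,          # latitude in decimal degrees
        "lon": lon,          # longitude in decimal degrees
        "alt": alt_m,        # altitude above ground in meters
        "hdg": heading_deg,  # heading in degrees
    }
    return json.dumps(msg).encode("utf-8")

def send_telemetry(payload, server_addr=("127.0.0.1", 14550)):
    """Fire-and-forget UDP send from the ground station to the
    shared telemetry server (address and port are placeholders)."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.sendto(payload, server_addr)
```

On the server side, subscribers such as an air traffic controller's console would receive these samples and render each UAV's position on a map, which is the visibility the paragraph above describes.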