Talks

Fine-grained Recognition using Pose-normalization.

Dr. Ryan Farrell
Brigham Young University
Date: 25 October 2016

Abstract

While humans can readily identify basic-level categories such as table, tiger, or trumpet, recognition of subordinate-level categories within a domain (e.g. species of birds or make/model/year of vehicles) is very difficult and typically requires extensive experience or expertise with the given domain. To date, research efforts to develop computational approaches for the recognition of such subordinate or "fine-grained" categories have largely sought to apply the same techniques used for basic-level recognition, only on a larger scale (more categories). In this talk, the speaker will describe directions currently being pursued in his research lab at BYU to address the specific challenges inherent in fine-grained recognition. The key underlying paradigm is a pose-normalized representation which pairs a domain-level model of geometry with category-specific appearance models. This representation enables objects to be perceived independently of pose, articulation, or viewing angle. Distinguishing features are learned, and recognition is performed, in this pose-normalized space. He will conclude by discussing practical deployment for widespread applications and work on incorporating human domain expertise in computational models.
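
A rough sketch of the pose-normalization idea (my illustration in Python, not the speaker's system; the keypoint detector and the appearance-model templates are assumed to be supplied elsewhere): detected part locations are warped to a canonical frame, and category-specific models are scored in that frame.

    import numpy as np
    import cv2

    # Canonical (pose-normalized) target coordinates for three hypothetical
    # parts, e.g. head / body / tail of a bird, in a 128x128 template.
    CANONICAL_PARTS = np.float32([[64, 20], [64, 64], [64, 110]])

    def pose_normalize(image, keypoints):
        """Warp a grayscale image so detected part keypoints land on the
        canonical part locations. keypoints: 3x2 array from some keypoint
        detector (detection itself is outside this sketch)."""
        A = cv2.getAffineTransform(np.float32(keypoints), CANONICAL_PARTS)
        return cv2.warpAffine(image, A, (128, 128))

    def classify(image, keypoints, appearance_models):
        """Score category-specific appearance templates in the normalized
        frame. appearance_models: dict category -> 128*128 template vector."""
        normalized = pose_normalize(image, keypoints)
        feats = normalized.astype(np.float32).ravel()
        feats /= (np.linalg.norm(feats) + 1e-8)
        return max(appearance_models,
                   key=lambda c: float(feats @ appearance_models[c]))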



Biography

Dr. Ryan Farrell is an Assistant Professor in the Computer Science Department at Brigham Young University in Provo, Utah. His research interests are primarily focused on the challenges inherent in fine-grained recognition but broadly include computer vision, machine learning, and robotics. He recently served as an Area Chair for CVPR 2016 and is currently co-organizing his fourth workshop on Fine-grained Visual Categorization, to be held at CVPR 2017.





Symbiotic Robot Autonomy and Learning.

Prof. Manuela Veloso
Carnegie Mellon University
Date: 22 August 2016

Abstract

We research autonomous mobile robots with a seamless integration of perception, cognition, and action. In this talk, I will first introduce our CoBot service robots and their novel localization and symbiotic autonomy, which enable them to consistently move in our buildings, now for more than 1,000 km. I will then introduce the CoBot robots as novel mobile collectors of vital data about our buildings, and present their data representation, their active data-gathering algorithm, and the particular use of the gathered WiFi data by CoBot. I will further present an overview of multiple human-robot interaction contributions, and detail the use of, and planning for, language-based complex commands. I will then conclude with some philosophical and technical points on my view of the future of autonomous robots in our environments. The presented work is joint with my CORAL research group, and in particular draws on the past PhD theses of Joydeep Biswas, Stephanie Rosenthal, and Richard Wang, and recent work of Vittorio Perera.



Biography

Manuela M. Veloso is the Herbert A. Simon University Professor and Head of the Machine Learning Department in the School of Computer Science at Carnegie Mellon University. Her research is in Artificial Intelligence and Robotics. She founded and directs the CORAL research laboratory, for the study of autonomous agents that Collaborate, Observe, Reason, Act, and Learn (www.cs.cmu.edu/~coral). Professor Veloso is an IEEE Fellow, AAAS Fellow, and AAAI Fellow, and the past President of AAAI and RoboCup. Professor Veloso and her students have worked with a variety of autonomous robots, including mobile service robots and soccer robots. See www.cs.cmu.edu/~mmv for more.





Agile and adaptive soft robots.

Dr. Vishesh Vikas
Tufts University, USA
Date: 19 July 2016

Abstract

Soft materials have the unique properties of conforming to surfaces, altering dimensions, storing elastic energy, and absorbing impacts. These properties make such materials attractive for applications in robot-assisted search & rescue, agricultural robotics, intelligent elder care, rehabilitation robotics, and intelligent wearable clothing. The development of soft material robots poses design and control challenges. The design challenges range from the manipulation of friction for effective locomotion and actuator design to material selection and fabrication techniques. The talk discusses first-of-their-kind 3D-printed soft-body robots, actuated by shape memory alloys or motor-tendon combinations and coupled with shape-dependent or directional friction-manipulation mechanisms, that are the fastest-moving soft material terrestrial robots, with speeds up to 0.55 body lengths/sec. Here, multi-material additive manufacturing is very useful for prototyping, where simulation of non-linear elastic materials is computationally expensive and difficult. These different multi-limb robots are controlled using a novel, data-driven, reinforcement-learning-inspired, model-free control framework that addresses limitations posed by conventional techniques. This framework is based on control primitives and draws analogies from graph theory to mathematically define periodic locomotion gaits (simple cycles), as sketched below. This mathematical representation enables the robot to learn from environment interactions, transition to a new environment, and adapt instantaneously to scenarios like limb loss and loss of actuation.
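
A toy reading of the gaits-as-simple-cycles idea (my own sketch, not the speaker's implementation): treat each control primitive as a node in a directed graph whose edges are physically allowed transitions, enumerate the simple cycles as candidate periodic gaits, and rank them by measured displacement. The primitive names here are hypothetical.

    import networkx as nx

    # Hypothetical control primitives for a two-limb crawler; edges are
    # transitions the hardware can execute.
    G = nx.DiGraph()
    G.add_edges_from([
        ("anchor_front", "contract"), ("contract", "anchor_rear"),
        ("anchor_rear", "extend"), ("extend", "anchor_front"),
        ("contract", "extend"),  # a shortcut transition
    ])

    def displacement(gait):
        """Placeholder for displacement measured from hardware rollouts."""
        return len(gait)  # stand-in metric: prefer longer cycles

    # Every simple cycle is a candidate periodic gait; pick the best one.
    candidate_gaits = list(nx.simple_cycles(G))
    best_gait = max(candidate_gaits, key=displacement)
    print(best_gait)

Replacing the displacement placeholder with scores learned from environment interactions is what would let such a robot re-rank its gaits after losing a limb or an actuator.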



Biography

Dr. Vishesh Vikas is an assistant professor at the University of Alabama, Tuscaloosa. Previously, he was a postdoctoral researcher at the Neuromechanics and Biomimetics Lab (BDL), Tufts University, where he worked on the development of soft material robots capable of terrestrial locomotion. He completed his Ph.D. in Mechanical Engineering at the University of Florida, Gainesville, and his B.Tech at the Indian Institute of Technology Guwahati. His research interests are in the areas of soft robotics, bio-inspired robotics, non-linear control, robot-assisted search and rescue, machine learning, and artificial intelligence.





UAVs: applications for single and multiple vehicles.

Dr. P B Sujit
IIIT Delhi
Date: 11 March 2016

Abstract

UAVs have been used in several applications. Most of these applications are in the military domain; however, the recent availability of open-source avionics and low-cost systems in the market has triggered a new set of civilian applications (including Amazon Prime shipping). A key ingredient essential for these UAVs, operating either as a single entity or as a team, is autonomous decision-making capability. At the single-vehicle level, I will talk about a fast path planner that is several times faster than the current state-of-the-art path planning algorithms, vision-based landing of a quadrotor, and path planning in GPS-denied areas. At the team level, I will talk about fault-tolerant area coverage for multi-agent systems, sketched below. Fault tolerance is essential in team operations; however, this aspect has not received adequate attention. Our algorithm shows how a team of robots detects faults among team members under limited communication-range constraints and redistributes the area to ensure complete area coverage is achieved.
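
A minimal sketch of the redistribution step (my own illustration under simple assumptions, not the talk's algorithm): coverage cells assigned to a failed robot are handed to the nearest surviving teammate, preferring survivors within communication range of the failed robot's last known position.

    import numpy as np

    def redistribute(assignment, positions, alive, comm_range):
        """Reassign coverage cells of failed robots to surviving robots.

        assignment: dict robot_id -> list of 2D cell centers
        positions:  dict robot_id -> 2D position (last known, for failed ones)
        alive:      set of robot ids still operating
        """
        survivors = [r for r in assignment if r in alive]
        for r in list(assignment):
            if r in alive:
                continue
            for cell in assignment[r]:
                # Prefer survivors within comm range of the failed robot
                # (a simplifying assumption in this sketch).
                nearby = [s for s in survivors
                          if np.linalg.norm(positions[s] - positions[r]) <= comm_range]
                pool = nearby or survivors  # fall back if nobody is in range
                best = min(pool, key=lambda s: np.linalg.norm(positions[s] - cell))
                assignment[best].append(cell)
            assignment[r] = []  # failed robot now covers nothing
        return assignment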





Fast Visual Simulation of Complex Multiscale Phenomena.

Ming C. Lin
University of North Carolina at Chapel Hill
Date: 28 December 2015

Abstract

From turbulent fluid flow to chaotic traffic patterns, many phenomena observed in nature and in society show complex emergent behavior on different scales. The modeling and simulation of such phenomena continue to intrigue scientists and researchers across different fields, from the computational sciences, medicine, traffic engineering, and urban planning to the social sciences. Understanding and reproducing the visual appearance and dynamic behavior of such complex phenomena through simulation is valuable for enhancing the realism of virtual scenes, for improving the efficiency of design evaluation, for planning complex procedures, and for training skilled personnel. It is also essential for interactive applications, where it is impossible to manually animate all possible interactions and anticipate all responses beforehand. In this talk, I survey several recent advances that combine macroscopic models of large-scale flows with local representations of intricate behaviors to capture both the aggregate dynamics and the fine-grained details of such phenomena, with significantly accelerated performance on commodity hardware, as well as novel algorithms that integrate physics-based modeling and data-driven synthesis to solve challenging research problems. Example dynamical systems that I will describe using these hybrid techniques include soft tissue modeling, turbulent fluids, granular flows, crowd simulation, traffic visualization, and multimodal interaction. I conclude by discussing some possible future directions.



Biography

Ming C. Lin is currently the John R. & Louise S. Parker Distinguished Professor of Computer Science at the University of North Carolina (UNC), Chapel Hill. She obtained her B.S., M.S., and Ph.D. in Electrical Engineering and Computer Science from the University of California, Berkeley. She has received several honors and awards, including the NSF Young Faculty Career Award in 1995, the Honda Research Initiation Award in 1997, the UNC/IBM Junior Faculty Development Award in 1999, the UNC Hettleman Award for Scholarly Achievements in 2003, the Beverly W. Long Distinguished Professorship (2007-2010), Carolina Women’s Center Faculty Scholar in 2008, UNC WOWS Scholar (2009-2011), the IEEE VGTC Virtual Reality Technical Achievement Award in 2010, and nine best paper awards at international conferences. She is a Fellow of the ACM and IEEE. Her research interests include physically-based modeling, virtual environments, sound rendering, haptics, robotics, and geometric computing. She has (co-)authored more than 250 refereed publications in these areas and co-edited/authored four books. She has served on over 150 program committees of leading conferences and co-chaired dozens of international conferences and workshops. She is the Chair of the IEEE Computer Society (CS) Transactions Operations Committee, a member of the IEEE CS Board of Governors, and a former Editor-in-Chief of IEEE Transactions on Visualization and Computer Graphics (2011-2014). She has also served on several editorial boards, steering committees, and advisory boards of international conferences, as well as government and industrial technical advisory committees.





Aerial Robotics for Dynamic Load Manipulation and Transportation.

Dr. Koushil Sreenath
Carnegie Mellon University
Date: 15 December 2015

Abstract

In this talk, the author discussed why there is a pressing need for aerial load transportation using small unmanned aerial robots. He presented the design of planning and control policies for achieving dynamic aerial manipulation. He discussed a method, inspired by aerial hunting in birds of prey, for object retrieval at high speeds. Further, he showed how a coordinate-free, geometric-mechanics formulation of the dynamics of a quadrotor carrying a suspended payload allows the synthesis of nonlinear geometric controllers with almost-global stability properties for aggressive maneuvers. Finally, he presented the problem of cooperative transportation of a cable-suspended payload using multiple aerial robots, and showed how dynamically feasible trajectories can be designed to handle the hybrid dynamics resulting from the cable tension going to zero (a toy sketch of this mode switch follows below).
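
A toy illustration of the hybrid taut/slack cable dynamics mentioned above (my sketch under point-mass assumptions, not the speaker's geometric formulation): the payload follows pendulum-like dynamics while the cable carries tension, and ballistic free fall once it goes slack.

    import numpy as np

    G = 9.81  # gravity, m/s^2

    def payload_mode(x_quad, v_quad, x_load, v_load, cable_len, tol=1e-6):
        """Decide the hybrid mode of a cable-suspended point-mass payload.

        Returns "taut" when the cable is at full length and not being
        unloaded, else "slack" (payload in ballistic free fall). A real
        controller would also check that the computed tension is positive.
        """
        r = x_load - x_quad
        dist = np.linalg.norm(r)
        if dist < cable_len - tol:
            return "slack"
        # At full length the cable stays taut only if the quad-to-load
        # separation is not shrinking.
        radial_rate = float(np.dot(r, v_load - v_quad)) / max(dist, tol)
        return "taut" if radial_rate >= -tol else "slack"

    def load_accel(mode, tension_dir, tension_mag, m_load):
        """Payload acceleration in each mode (tension_dir: unit vector
        from load toward quadrotor along the cable)."""
        g_vec = np.array([0.0, 0.0, -G])
        if mode == "slack":
            return g_vec                                  # gravity only
        return g_vec + (tension_mag / m_load) * tension_dir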



Biography

Koushil Sreenath is an Assistant Professor in Mechanical Engineering, the Robotics Institute, and Electrical & Computer Engineering at Carnegie Mellon University. He received his M.S. degree in Applied Mathematics and Ph.D. degree in Electrical Engineering: Systems from the University of Michigan, Ann Arbor, MI, in 2011. His research interest lies at the intersection of highly dynamic robotics and applied nonlinear control. His work on dynamic legged locomotion on the bipedal robot MABEL was featured on The Discovery Channel, CNN, ESPN, FOX, and CBS. His work on dynamic aerial manipulation was featured in IEEE Spectrum, New Scientist, and the Huffington Post. His work on adaptive sampling with mobile sensor networks was published as a book entitled Adaptive Sampling with Mobile WSN (IET). He received the best paper award at the Robotics: Science and Systems (RSS) Conference in 2013 and a Google Faculty Research Award in Robotics in 2015.





The Daksh and The Nethra.

Mr. Alok Mukherjee
DRDO, Pune
Date: 4 December 2015

Abstract

The Daksh is a highly popular robot developed by RnD Engineers, Pune, extensively used in search and rescue, bomb defusal, and mine clearance in remote areas. In the same vein, the Nethra is an extremely popular drone developed by Ideaforge (http://www.ideaforge.co.in/home/) and popularized by RnD Engineers; it was used extensively during the Uttarakhand disaster. This informal talk will walk through the development cycle of the Daksh and showcase flagship products that bring together mechanism design, control, perception, electronics, computer vision, and, above all, robust engineering.







An Information Theoretic Framework for Sensor Data Fusion and its Applications in Autonomous Navigation of Vehicles.

Dr. Gaurav Pandey
Indian Institute of Technology Kanpur
Date: 6 November 2015

Abstract

In this talk, the author presented an information-theoretic framework for signal-level multimodal sensor data fusion. In particular, he focused on the fusion of 3D lidar and camera data, which are commonly used perception sensors in mobile robotics. It is well known in the robotics community that having multiple sensors is necessary for robust autonomous navigation. One type of sensor (e.g., camera, lidar, or radar) alone cannot provide robust solutions to problems related to autonomy in vehicles. Therefore, we need multi-modality sensors that are complementary in nature. Most autonomous vehicle platforms are generally equipped with sensors of different modalities. However, despite the fact that these sensors provide complementary information about the surroundings, they are typically used independently. In this talk, he demonstrated how we can exploit the statistical dependence between the data obtained from different modalities in an information-theoretic framework to enhance the robustness of algorithms used for autonomous navigation of vehicles.
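
As a rough sketch of the kind of statistical-dependence measure such a framework can use (my illustration; the talk's exact formulation is not reproduced here), the mutual information between camera grayscale values and lidar reflectivity at co-registered points can be estimated from a joint histogram:

    import numpy as np

    def mutual_information(gray_vals, refl_vals, bins=64):
        """Estimate MI between co-registered camera intensities and lidar
        reflectivity values (both assumed scaled to [0, 255])."""
        joint, _, _ = np.histogram2d(gray_vals, refl_vals,
                                     bins=bins, range=[[0, 256], [0, 256]])
        p_xy = joint / joint.sum()
        p_x = p_xy.sum(axis=1, keepdims=True)   # marginal over reflectivity
        p_y = p_xy.sum(axis=0, keepdims=True)   # marginal over intensity
        mask = p_xy > 0
        return float(np.sum(p_xy[mask] * np.log(p_xy[mask] / (p_x @ p_y)[mask])))

Maximizing such a score over candidate lidar-to-camera transforms is one way this statistical dependence can be exploited, for instance for extrinsic calibration.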



Biography

Gaurav Pandey is an Assistant Professor in the Electrical Engineering Department at IIT Kanpur. Prior to IIT Kanpur, he was a Research Scientist in the Automated Driving group of Ford Motor Company in Dearborn, USA, where he worked on the autonomous vehicle research project. His research focuses on visual perception for mobile robotics using tools from computer vision, machine learning, and information theory. He did his B.Tech at the Indian Institute of Technology, Roorkee, in 2006. Before joining the University of Michigan for his Ph.D., he worked in the vision group of Kritikal Solutions Pvt. Ltd. (KSPL), a student-founded startup out of IIT Delhi, where he worked on various commercial computer vision and image processing projects.





Flying Fast and Low Among Obstacles.

Dr. Srikanth Saripalli
Arizona State University
Date: 21 September 2015

Abstract

Unmanned Aerial Vehicles (UAVs) are currently expected to expand into several civil domains, such as commercial photography, precision agriculture, infrastructure monitoring, and disaster response. However, for UAVs to perform these tasks with full autonomy, they must satisfy multiple requirements: (1) fly in GPS-denied environments, (2) take off and land autonomously, and (3) sense and avoid obstacles. In this talk, the author presented an overview of his algorithms for combining vision with an Inertial Measurement Unit and GPS for accurate state estimation of the vehicle. He described algorithms that combine vision with his low-level controller to perform vision-based autonomous landing. He then described his path planning method, based on RRTs (Rapidly-exploring Random Trees), for obstacle avoidance for UAVs, and demonstrated results from several flight experiments that validate the efficacy of this method (a minimal RRT sketch follows below). He also discussed his recent work on fast (5-10 m/s) GPS-free flight for quadrotors. Finally, he talked about his work on using robotics to enable science: ground robots for planetary exploration, underwater vehicles for water quality measurement, and aerial vehicles for disaster management.
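
For readers unfamiliar with RRTs, here is a minimal 2D sketch of the generic textbook algorithm (not the speaker's planner; the collision check is assumed to be supplied by a map):

    import numpy as np

    def rrt(start, goal, is_free, bounds, step=0.5, iters=2000, goal_tol=0.5):
        """Minimal 2D RRT: grow a tree from start toward random samples.

        is_free(p): point collision check supplied by the environment map.
        bounds: ((xmin, xmax), (ymin, ymax)).
        Returns a path (list of points) or None on failure.
        """
        nodes = [np.asarray(start, float)]
        parent = {0: None}
        goal = np.asarray(goal, float)
        for _ in range(iters):
            sample = np.array([np.random.uniform(*bounds[0]),
                               np.random.uniform(*bounds[1])])
            near = min(range(len(nodes)),
                       key=lambda i: np.linalg.norm(nodes[i] - sample))
            d = sample - nodes[near]
            new = nodes[near] + step * d / (np.linalg.norm(d) + 1e-9)
            if not is_free(new):
                continue  # steer failed: sampled motion hits an obstacle
            nodes.append(new)
            parent[len(nodes) - 1] = near
            if np.linalg.norm(new - goal) < goal_tol:
                path, i = [], len(nodes) - 1  # walk parents back to start
                while i is not None:
                    path.append(nodes[i])
                    i = parent[i]
                return path[::-1]
        return None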



Biography

Dr. Srikanth Saripalli is an Associate Professor in the School of Earth and Space Exploration at Arizona State University. Dr. Saripalli's expertise is in navigation and estimation algorithms for VTOL vehicles (http://robotics.asu.edu/). Before coming to ASU, he was at NASA/JPL, where he developed algorithms for autonomous navigation of aerial robots for the exploration of Titan. Over the years he has developed several aerial systems and has worked extensively on autonomous navigation for aerial vehicles, specifically autonomous landing and GPS-denied estimation and navigation. He was a visiting researcher at CSIRO (Australia), CMU, and UPM (Spain). He is a Senior Member of IEEE and is currently the technical director of the American Helicopter Society, AZ chapter. He is the recipient of the International Young Investigator Award from the Chinese Academy of Sciences and a NASA Group Achievement Award for the development of a testbed for Titan missions.





Multi agent Positional Consensus under Various Information Paradigms.

Dr. Kaushik Das
IIT Bhubaneswar
Date: 18 September 2015

Abstract

This work addresses the problem of positional consensus of multi-agent systems. Positional consensus is achieved when the agents converge to a point. Some applications of this class of problem are in mid-air refueling of aircraft or UAVs, targeting a geographical location, etc. In this research work, several positional consensus algorithms have been developed. They can be categorized into two parts: (i) broadcast-control-based algorithms and (ii) distributed-control-based algorithms.

In the first broadcast-based algorithm, control strategies for a group of agents are developed to achieve positional consensus. The problem is constrained by the requirement that every agent must be given the same control input through a broadcast communication mechanism. Although the control command is computed using state information in a global framework, the control input is implemented by the agents in a local coordinate frame. The mathematical formulation is done in a linear programming framework that is computationally less intensive than earlier proposed methods. Moreover, a random perturbation input is introduced into the control command, which helps achieve reasonable proximity among the agents even for a large number of agents; this was not possible with the existing strategies in the literature. The method is extended to achieve positional consensus at a pre-specified location. A comparison between the LP approach and the existing SOCP-based approach is also presented. Some of the algorithms have been demonstrated successfully on a robotic platform made from LEGO Mindstorms NXT robots.

In the second broadcast-based algorithm, a decentralized algorithm for a group of multiple autonomous agents to achieve positional consensus has been developed using the broadcast concept. Here too, the mathematical formulation uses a linear programming framework. Each agent has some sensing radius and is capable of sensing the position and orientation of other agents within its sensing region. The method is computationally feasible and easy to implement (a toy illustration of the broadcast constraint appears below).

In the case of distributed algorithms, a computationally efficient distributed rendezvous algorithm for a group of autonomous agents has been developed. The algorithm uses a rectilinear decision domain (RDD), as against the circular decision domain assumed in earlier work in the literature. This helps in reducing its computational complexity considerably. An extensive mathematical analysis has been carried out to prove the convergence of the algorithm. This algorithm has also been demonstrated successfully on a robotic platform made from LEGO Mindstorms NXT robots.
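
A toy numerical illustration of the broadcast constraint (my sketch; this uses a least-squares stand-in for the talk's LP formulation): one command is computed from global state and sent to ALL agents, each of which executes it in its own body frame, with a small random perturbation added as the abstract suggests to escape stalls.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 8
    pos = rng.uniform(-10, 10, size=(n, 2))    # agent positions, global frame
    theta = rng.uniform(0, 2 * np.pi, size=n)  # fixed body-frame headings

    def R(t):
        c, s = np.cos(t), np.sin(t)
        return np.array([[c, -s], [s, c]])

    for _ in range(100):
        c = pos.mean(axis=0)  # rendezvous target: current centroid
        # One command for ALL agents (the broadcast constraint), computed
        # centrally: least-squares u minimizing sum ||p_i + R_i u - c||^2.
        u = np.mean([R(t).T @ (c - p) for t, p in zip(theta, pos)], axis=0)
        u += 0.02 * rng.standard_normal(2)  # random perturbation (cf. abstract)
        # Every agent executes the SAME command, each in its own body frame.
        pos = pos + np.array([R(t) @ u for t in theta])

    print("final spread:", np.linalg.norm(pos - pos.mean(axis=0), axis=1).max())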





Robust Simultaneous Localization and Mapping.

Dr. Pratik Agarwal
PhD, University of Freiburg
Date: 13 May 2015

Abstract

Robust localization and mapping are fundamental requirements for enabling mobile robots to autonomously navigate complex environments. It is challenging for a mobile robot to perform both mapping and localization with noisy sensors: the robot must reliably recognize a previously visited place while being robust to noise in sensor measurements and to perceptual aliasing due to repetitive structures in the environment. In this talk, I will describe novel simultaneous localization and mapping (SLAM) algorithms that are robust to place-recognition and sensor errors. The algorithms covered in the talk are robust to gross non-Gaussian errors in sensor measurements and mitigate the effects of poor initialization. Additionally, I believe robots should be able to navigate with already available maps, even if those maps were built for humans. I will outline a novel approach to metric localization that leverages geotagged imagery from Google Street View as an accurate source of global positioning and localizes a robot using only a monocular camera with odometric estimates. I will also present a method that can localize a robot with a hand-drawn sketch map.
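
One concrete mechanism behind such robustness is the scaling rule from the speaker's Dynamic Covariance Scaling paper (mentioned in the biography below); the sketch here shows only that rule, not the full optimizer:

    def dcs_scale(chi2, phi=1.0):
        """Dynamic Covariance Scaling factor for one loop-closure constraint.

        chi2: squared Mahalanobis error of the constraint at the current
        estimate; phi: free parameter. The constraint's residual is scaled
        by s (equivalently, its covariance is inflated by 1/s^2), so grossly
        inconsistent loop closures are smoothly down-weighted rather than
        fully trusted or hard-rejected.
        """
        return min(1.0, 2.0 * phi / (phi + chi2))

    # A consistent constraint keeps full weight; an outlier is squashed.
    print(dcs_scale(0.5), dcs_scale(100.0))   # -> 1.0, ~0.0198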



Biography

Pratik Agarwal obtained his Bachelor's in CSE from Manipal University, his Master's from the University of Michigan, Ann Arbor, and his PhD from the University of Freiburg under Prof. Wolfram Burgard. His paper "Robust Map Optimization using Dynamic Covariance Scaling" was a best paper candidate at ICRA 2013. His research has essentially focused on optimization techniques for robust SLAM. During his B.Tech he interned with RRC in Spring 2010, where he worked on navigation algorithms for soccer-playing robots.





Lifting 3D Manhattan Lines from a Single Image.

Dr. Srikumar Ramalingam
University of California
Date: 10 January 2014

Abstract

In the first part of the talk, I will present a novel and efficient method for reconstructing the 3D arrangement of lines extracted from a single image, using vanishing points, orthogonal structure, and an optimization procedure that considers all plausible connectivity constraints between lines. Line detection identifies a large number of salient lines that intersect or nearly intersect in an image, but relatively few of these apparent junctions correspond to real intersections in the 3D scene. We use linear programming (LP) to identify a minimal set of least-violated connectivity constraints that are sufficient to unambiguously reconstruct the 3D lines. In contrast to prior solutions that primarily focused on well-behaved synthetic line drawings with severely restrictive assumptions, we develop an algorithm that works on real images. The algorithm produces line reconstructions by identifying 95% of the correct connectivity constraints in the York Urban database, with a total computation time of 1 second per image. In the second part of the talk, I will briefly mention my other work in graphical models, robotics, geo-localization, generic camera modeling, and 3D reconstruction.
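
A small sketch of one ingredient such a pipeline needs (my illustration, not the speaker's method): assigning each detected 2D line segment to one of the three Manhattan directions by testing which vanishing point the segment points toward.

    import numpy as np

    def classify_manhattan(segments, vps, cos_thresh=0.95):
        """Assign each 2D segment to a Manhattan axis via vanishing points.

        segments: list of (p1, p2) endpoint pairs in image coordinates.
        vps: three 2D vanishing points, one per Manhattan axis.
        Returns a list of axis indices 0/1/2, or None when ambiguous.
        """
        labels = []
        for p1, p2 in segments:
            p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
            d = p2 - p1
            d /= (np.linalg.norm(d) + 1e-9)
            mid = 0.5 * (p1 + p2)
            scores = []
            for vp in vps:
                to_vp = np.asarray(vp, float) - mid
                to_vp /= (np.linalg.norm(to_vp) + 1e-9)
                scores.append(abs(float(d @ to_vp)))  # |cos| of the angle
            best = int(np.argmax(scores))
            labels.append(best if scores[best] >= cos_thresh else None)
        return labels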



Biography

Srikumar Ramalingam is a Principal Research Scientist at Mitsubishi Electric Research Laboratories (MERL). He received his B.E. from Anna University (Guindy) in India and his M.S. from the University of California (Santa Cruz) in the USA. He received a Marie Curie Fellowship from the European Union to pursue his studies at INRIA Rhone-Alpes (France), and he obtained his PhD in 2007. His thesis on generic imaging models received the INPG best thesis prize and the AFRIF thesis prize (honorable mention) from the French Association for Pattern Recognition. After his PhD, he spent two years in Oxford working as a research associate at Oxford Brookes University, while being an associate member of the Visual Geometry Group at Oxford University. He has published more than 30 papers in flagship conferences such as CVPR, ICCV, SIGGRAPH Asia, and ECCV. He has co-edited journals, coauthored books, given tutorials, and organized workshops on topics such as multi-view geometry and discrete optimization. His research interests are in computer vision, machine learning, and robotics.





Capture and statistical modeling of human performances.

Dr. Kiran Babu Varanasi
MPI, Saarbruecken
Date: 26 March 2013

Abstract

Recent advances in multi-camera imaging and computer vision technologies have enabled us to capture real human performances as 3D mesh sequences. Apart from highly detailed and time-varying 3D geometry, incident illumination and surface reflectance properties can also be captured. In this talk, I will explain how this can be done for a general scene without highly intrusive capture setups, such as mo-cap suits or light stages. The acquired models enable a wide range of applications, including relighting of captured performances and semantically rich video editing. Moreover, the movements of the human body can now be acquired and systematically studied through statistical modeling across a range of subjects. I will conclude my talk with our initial results in this regard and outline future research directions in human-centered design and human-computer interaction.