WCIR 2019

Research in cognitive robotics is concerned with endowing robots with higher-level cognitive functions that enable them to reason, act, and understand the world conceptually, as humans do. For example, such robots must be able to reason about goals and actions, decide when to perceive and what to look for as they autonomously explore their environment, and determine which features matter for making a decision. They should also understand the cognitive states of other agents as required for collaborative task execution, engage in dialogue with other agents when they do not understand instructions, and improve themselves quickly by learning new behaviour automatically from observations of a dynamic environment. In short, cognitive robotics is concerned with integrating AI, perception, reasoning, human-robot interaction, continual learning, and action within a theoretical and implementation framework. Such frameworks will have a big role to play in Service Robotics, Enterprise Robotics, and Industry 4.0, which aims to revolutionize current industrial automation through technologies such as cloud computing, IoT, additive manufacturing, robotics, and Artificial Intelligence.

This workshop aims to bring together researchers involved in all aspects of the theory and implementation of cognitive robots, and to discuss current work and future directions.

3 Speakers
1 Day

Call for Papers

The workshop invites original contributions in the following areas:

  • Cognitive Robotics
  • Cognitive Architectures for Intelligent Robots
  • Neuroscience-inspired Algorithms and Architectures
  • Planning for Cognitive Robots
  • Artificial Intelligence
  • Perception and World Modelling
  • Meta-learning for Robotics
  • Visual Cognition and Computer Vision
  • Natural Language Understanding and Question-Answer Dialogue for Robotics
  • SLAM, Cognitive Navigation, Semantic Goal-based Navigation
  • Human-Robot Interaction
  • Deep Learning in Robotics
  • Reinforcement Learning in Robotics
  • Applications of Cognitive Robots

In addition to theoretical contributions, the workshop encourages researchers and practitioners to submit system and application papers demonstrating the use of cognitive and interactive robotics for real-world problems in warehouses, industry, and other service sectors.

Dates

    Paper Submission: July 31, 2019
    Review: Aug. 10, 2019
    Camera Ready: Aug. 31, 2019

Papers

  • Page length is 8 pages (maximum), including references.
  • Use the IEEE RO-MAN LaTeX / MS Word template.

Paper Submission

Organizers

Here are the diligent people behind the workshop

Brojeshwar Bhowmick
Senior Scientist, TCS Research

Madhava Krishna
Professor, IIIT Hyderabad

Mohan Sridharan
Senior Lecturer, University of Birmingham

Swagat Kumar
Researcher, TCS Research

Balamuralidhar P
Principal Scientist, TCS Research

Arpan Pal
Chief Scientist, TCS Research

Rajesh Sinha
Head of Research & Innovation Program - Smart Machines, TCS Research

Balaraman Ravindran
Professor, IIT Madras

Arun Kumar Singh
Associate Professor, University of Tartu

Rachid Alami
Director of Research, LAAS-CNRS

Ilana Nisky
Senior Lecturer, Ben-Gurion University

Our speakers

Our invited speakers come from top research institutions and companies around the globe, and are leading figures in the topics covered by the workshop. This diverse selection will prove valuable for academic as well as industry researchers and practitioners.

Dinesh Manocha
Professor, University of Maryland at College Park

Motion Planning Technologies for Human-Robot Interaction

Robots are increasingly being used in manufacturing, assembly, warehouse automation, and service industries. However, current robots have limited capabilities when it comes to handling new environments or working next to or with humans. In this talk, we highlight some challenges in developing motion and task planning capabilities that can enable robots to operate autonomously in such environments, including real-time planning algorithms that can also integrate with current sensing and perception techniques. We present new techniques for real-time motion planning and show how they can be integrated with vision-based algorithms for human action prediction as well as natural language processing. We address many issues related to human motion prediction and to mapping high-level robot commands to actions using appropriate planning algorithms. We also present new collision and proximity computation algorithms for handling sensor data, and real-time optimization algorithms that take various constraints into account and utilize commodity parallel processors (e.g. GPUs) to compute solutions in real time. Furthermore, we combine these with dynamics and stability constraints to generate plausible plans. We also extend these ideas to simulation and navigation for high-DOF manipulators and demonstrate their benefits in dense scenarios. The resulting approaches use a combination of ideas from AI planning, topology, optimization, computer vision, machine learning, natural language processing, and parallel computing. We also demonstrate many applications, including autonomous picking (e.g. the Amazon Picking Challenge), avoiding human obstacles, cloth manipulation, robot navigation in dense environments, and operating as cobots for human-robot interaction.

Mohan Sridharan
Senior Lecturer, University of Birmingham

Refinement-based Architecture for Knowledge Representation, Explainable Reasoning, and Interactive Learning in Robotics

This talk describes an architecture for robots based on the principle of step-wise refinement, and inspired by theories of human cognition and control. The architecture computationally encodes theories of intention, affordance, and explanation, and the principles of persistence, non-procrastination, and relevance. It is based on tightly-coupled transition diagrams of the domain at different resolutions, with a fine-resolution transition diagram defined as a refinement of a coarse-resolution diagram. For any given goal, non-monotonic logical reasoning with incomplete commonsense knowledge at the coarse resolution provides a plan of abstract actions. Each abstract action is implemented as a sequence of more concrete actions by automatically zooming to and reasoning with the relevant part of the fine-resolution transition diagram. Execution of each concrete action is based on probabilistic models of the uncertainty in sensing and actuation, with the corresponding outcomes being used for subsequent coarse-resolution reasoning. In addition, the architecture uses inductive learning, relational reinforcement learning, and deep learning to acquire previously unknown knowledge of domain dynamics. Furthermore, the architecture provides explanatory descriptions of the decisions, the underlying beliefs, and the related experiences, at the desired level of abstraction. This talk will illustrate the architecture's capabilities in the context of simulated and physical robots assisting humans in moving and manipulating objects in indoor domains.
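
As a rough, hypothetical illustration of the coarse-to-fine loop described above, the toy Python sketch below plans abstract actions at a coarse resolution, refines each into concrete actions, and executes them under simulated uncertainty; the function names and toy domain are placeholders, not part of the actual architecture.

    # Hypothetical, self-contained toy sketch of the coarse-to-fine loop
    # described above; function names and the toy domain are placeholders,
    # not the actual implementation.
    import random

    def coarse_plan(goal):
        # Stand-in for non-monotonic logical reasoning with commonsense
        # knowledge at the coarse resolution: a plan of abstract actions.
        return ["move_to(shelf)", "pick_up(cup)", "move_to(table)", "put_down(cup)"]

    def refine(abstract_action):
        # Stand-in for zooming to the relevant part of the fine-resolution
        # transition diagram and planning a sequence of concrete actions.
        return [f"{abstract_action}:step{i}" for i in range(3)]

    def execute(concrete_action):
        # Stand-in for execution under uncertainty in sensing and actuation.
        return random.random() > 0.1

    def achieve(goal):
        beliefs = []
        for abstract_action in coarse_plan(goal):
            for concrete_action in refine(abstract_action):
                # Outcomes feed back into the beliefs used for subsequent
                # coarse-resolution reasoning.
                beliefs.append((concrete_action, execute(concrete_action)))
        return beliefs

    if __name__ == "__main__":
        for step, ok in achieve("cup on table"):
            print(step, "succeeded" if ok else "failed")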

Rachid Alami
Director of Research, LAAS-CNRS

Implementing Robot Navigation in Human Environment as a Human-Robot Cooperative Activity

We claim that navigation in human environments can be viewed as a cooperative activity, especially in constrained situations. Humans concurrently aid and comply with each other while moving in a shared space. Cooperation helps pedestrians to efficiently reach their own goals and to respect conventions such as the personal space of others.

To achieve efficiency comparable to humans, a robot needs to predict human intentions and trajectories and plan its own trajectory accordingly in the same shared space. In this work, I present a reactive navigation planner that is able to plan such cooperative trajectories.

Sometimes it is even necessary to influence the other agent, or even force them, to act in a certain way.

Using robust social constraints, potential resource conflicts, compatibility of human-robot motion directions, and proxemics, our planner is able to replicate human-like navigation behavior not only in open spaces but also in confined areas. Besides adapting the robot's trajectory, the planner can also proactively propose co-navigation solutions by jointly computing human and robot trajectories within the same optimization framework. We demonstrate the richness and performance of the cooperative planner with simulated and real-world experiments on multiple interactive navigation scenarios.
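
As a rough, hypothetical illustration of jointly computing human and robot trajectories within one optimization, the toy Python sketch below minimizes a combined cost with goal-progress, smoothness, and proxemics (personal-space) terms; the cost weights, personal-space radius, and the naive hill-climbing solver are illustrative assumptions, not the planner described in the talk.

    # Hypothetical toy sketch of jointly optimizing human and robot
    # trajectories with a proxemics (personal-space) penalty.
    # Weights, radius, and the naive solver are illustrative assumptions.
    import numpy as np

    T = 10                                  # trajectory length (time steps)
    robot_goal = np.array([5.0, 0.0])
    human_goal = np.array([0.0, 5.0])
    personal_space = 1.0                    # comfortable separation (m)

    def cost(robot_traj, human_traj):
        # Goal-progress terms: end each trajectory near its goal.
        progress = (np.linalg.norm(robot_traj[-1] - robot_goal)
                    + np.linalg.norm(human_traj[-1] - human_goal))
        # Smoothness terms: keep consecutive steps short.
        smooth = (np.sum(np.linalg.norm(np.diff(robot_traj, axis=0), axis=1))
                  + np.sum(np.linalg.norm(np.diff(human_traj, axis=0), axis=1)))
        # Proxemics term: penalize violating the human's personal space.
        gaps = np.linalg.norm(robot_traj - human_traj, axis=1)
        proxemics = np.sum(np.maximum(0.0, personal_space - gaps) ** 2)
        return progress + 0.1 * smooth + 10.0 * proxemics

    # Naive joint optimization by random perturbation (hill climbing).
    rng = np.random.default_rng(0)
    robot = np.linspace([0.0, 0.0], robot_goal, T)
    human = np.linspace([5.0, 5.0], human_goal, T)
    best = cost(robot, human)
    for _ in range(5000):
        r_new = robot + rng.normal(scale=0.05, size=robot.shape)
        h_new = human + rng.normal(scale=0.05, size=human.shape)
        r_new[0], h_new[0] = robot[0], human[0]   # start positions stay fixed
        c = cost(r_new, h_new)
        if c < best:
            robot, human, best = r_new, h_new, c
    print("final joint cost:", round(best, 3))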

Program Committee

Your papers are in great hands!

This workshop is proudly backed by the following program committee composed of very influential robotics researchers.

Rachid Alami
Director of Research, LAAS-CNRS

Brojeshwar Bhowmick
Senior Scientist, TCS Research

Vineet Gandhi
Assistant Professor, IIIT Hyderabad

Madhava Krishna
Professor, IIIT Hyderabad

Swagat Kumar
Researcher, TCS Research

Soumyadip Maity
Researcher, TCS Research

Balaraman Ravindran
Professor, IIT Madras

Chayan Sarkar
Scientist, TCS Research

Mohan Sridharan
Senior Lecturer, University of Birmingham

The schedule

2:00 PM to 5:00 PM, 14th October, 2019

Venue: R5

  • 2:00 PM - 2:10 PM Initial Remarks
  • 2:10 PM - 3:00 PM Keynote by Prof. Rachid Alami [abstract]
  • 3:00 PM - 3:45 PM Keynote by Prof. Dinesh Manocha [abstract]
  • 3:45 PM - 4:30 PM Keynote by Prof. Mohan Sridharan [abstract]
  • 4:45 PM - 5:15 PM Paper presentations

Workshop Venue

  • R5, Le Meridien, Windsor Place, New Delhi, India
  • E-mail: b.bhowmick@tcs.com, mkrishna@iiit.ac.in