Deep Learning and more traditional AI paradigms are implicitly based on Descartes' idea of mind-body separation. The very fact that we have two distinct disciplines, one for the body (Robotics) and one for the mind (AI), is hard to accept from a philosophical and epistemological standpoint. In particular, applying current Machine/Deep Learning approaches to physical systems, and to robotic systems in particular, is not straightforward: data from robot sensors usually come in comparatively limited amounts, and robots interact with and affect their environment, making tasks such as real-time object recognition more problematic. As a matter of fact, the organizational principles of natural intelligent and cognitive agents are rather different from the mainstream design principles of intelligent autonomous systems. In nature, cognition and intelligence are usually embedded in a physical system (a body), emerge bottom-up from the interaction of large numbers of loosely coupled components, and are usually associated with life. By contrast, the 'mechatronics paradigm' used to build mainstream robots implements top-down control, keeping the body (usually a complex mechanical structure made of rigid parts actuated by electric motors, with sophisticated sensors and actuators) well separated from the mind (a set of complex algorithms running on microprocessor arrays).
The modeling and control of intelligent autonomous systems, as needed to enable the design of complex physical intelligent systems, still raise nontrivial research challenges.
Coping with those challenges is necessary if we aim to develop artificial intelligent systems with levels of robustness and adaptivity on par with natural ones, and to understand natural intelligence, cognition, and life itself.
In this workshop we will analyze the strengths and weaknesses of current and novel methods, and discuss how to move forward in research and applications of AI to Robotics, and more precisely in Physical AI.