This project presents a novel shared control framework for robotic manipulation systems. Achieving robotic behaviors that operate successfully in dynamic environments involves numerous challenges, including managing environmental complexity, avoiding critical failures, and adapting to new tasks. These challenges are often addressed through control schemes based on human-robot interaction, which leverage human cognitive capabilities to compensate for the system's lack of knowledge.
Various architectures have been proposed within the broader context of shared autonomy. These architectures differ in the roles assigned to humans and robots in perceiving, planning, and acting during task execution.
This study proposes a novel scheme in which humans and robots play nearly equal roles, each possessing specific forms of autonomy and collaborating continuously to achieve the desired outcome. This approach requires an exchange of information between human and robot that is, however, primarily implicit. Consequently, the robot's local intelligence must interpret the working scenario and the user's intentions from exteroceptive sensors, such as cameras, and proprioceptive sensors, such as IMUs and force sensors, integrated into the robot.
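As a concrete illustration of this implicit exchange, the Python sketch below shows how a local-intelligence module might combine exteroceptive and proprioceptive readings to estimate the user's current intent. The sensor layout, field names, and contact threshold are hypothetical placeholders chosen for the example, not elements of the proposed framework.

```python
from dataclasses import dataclass

import numpy as np


@dataclass
class SensorSnapshot:
    """One time-synchronized reading from the robot's sensor suite
    (hypothetical layout for illustration)."""
    rgb_image: np.ndarray         # exteroceptive: camera frame, shape (H, W, 3)
    imu_orientation: np.ndarray   # proprioceptive: hand orientation quaternion, shape (4,)
    fingertip_forces: np.ndarray  # proprioceptive: per-finger normal forces in N, shape (n_fingers,)


def infer_user_intent(snapshot: SensorSnapshot) -> str:
    """Toy intent estimator: a real module would fuse scene understanding
    (from the camera) with motion and contact cues (from IMU and force
    sensors). Here a single hypothetical contact threshold stands in."""
    if snapshot.fingertip_forces.max() > 0.5:  # assumed contact threshold (N)
        return "grasp_in_progress"
    return "reaching"
```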
The problem of robotic grasping and manipulation with dexterous hands will be addressed as a paradigmatic case study and used for experimental validation. In prosthetic applications, it is well known that users cannot directly control all degrees of freedom of an anthropomorphic robotic hand. To address this limitation, several approaches, such as those based on postural synergies, have been proposed to simplify the mapping between user inputs and hand movements. Nevertheless, in such applications, human control, encompassing both planning and acting, is essential and cannot be entirely replaced by fully autonomous device behavior.
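To make the synergy idea concrete, the sketch below implements the standard postural-synergy mapping, in which a low-dimensional activation vector is projected onto full joint angles through a linear basis. The dimensions, the random basis, and the function name are illustrative assumptions; in practice the basis would be identified, for example, by PCA over recorded grasp postures.

```python
import numpy as np

# Hypothetical dimensions: a 20-DoF hand driven through 2 synergies.
N_JOINTS, N_SYNERGIES = 20, 2

rng = np.random.default_rng(0)
# Stand-ins for quantities normally identified from data:
S = rng.standard_normal((N_JOINTS, N_SYNERGIES))  # synergy basis (e.g., PCA components)
q_mean = np.zeros(N_JOINTS)                       # mean hand posture


def synergy_to_joints(z: np.ndarray) -> np.ndarray:
    """Map a low-dimensional synergy activation z to full joint angles:
    q = q_mean + S @ z."""
    return q_mean + S @ z


# A single user input (e.g., one sEMG channel) can drive the first synergy,
# commanding all 20 joints at once.
q = synergy_to_joints(np.array([0.8, 0.0]))
```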
This project pursues a dual objective. First, it seeks to strengthen the role of the human in the control loop by studying the interaction between sEMG-based control inputs and haptic feedback delivered via cutaneous stimulation and wearable vibrotactile devices. This would allow users to directly control the force exerted by the robotic hand, enabling its fine regulation. Second, to reduce task complexity and the user's cognitive load, the robotic hand will be equipped with local autonomy: depending on the application scenario, it will autonomously select an appropriate pre-grasp configuration and grasping modality. This autonomy will rely on recognizing the task the user is performing from images of the scene and from the relative positioning of the hand and the target object or target location.
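A minimal sketch of the first objective is given below, assuming a simple proportional scheme: a rectified and low-pass-filtered sEMG envelope sets the grip-force reference, while the vibrotactile amplitude encodes the force actually measured at the fingertips, closing the loop through the user. All signal names, gains, and filter parameters are hypothetical; identifying the actual mapping is part of the proposed study.

```python
import numpy as np


def emg_envelope(raw_emg: np.ndarray, alpha: float = 0.05) -> float:
    """Rectify and low-pass filter a raw sEMG window into a scalar
    activation in [0, 1] (exponential moving average; assumes the raw
    signal is pre-normalized to [-1, 1])."""
    env = 0.0
    for sample in np.abs(raw_emg):
        env = (1.0 - alpha) * env + alpha * sample
    return min(env, 1.0)


def control_step(raw_emg: np.ndarray, measured_force: float,
                 f_max: float = 10.0) -> tuple[float, float]:
    """One tick of the assumed loop: the sEMG envelope sets the grip-force
    reference, and the vibrotactile amplitude encodes the force measured
    at the fingertips, fed back to the user as a cutaneous cue."""
    force_ref = emg_envelope(raw_emg) * f_max  # commanded grip force (N)
    vibro_amplitude = measured_force / f_max   # normalized feedback cue in [0, 1]
    return force_ref, vibro_amplitude
```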
Finally, the mutual interaction between human and robotic intelligence will be experimentally analyzed to validate the proposed method.