AI-Driven Robotic Manipulation Using Multi-Sensor Fusion
Keywords:
Robot Manipulation, Advanced Perception, Multi-Sensor Fusion, Vision, Tactile/Force Sensing, Proprioception, Audio Sensing, AI-Based Systems, Hierarchical Policy Architectures, Reinforcement Learning, Deep Learning, Multimodal Inputs, Simulation, Real-Robot Experiments, Manipulation Precision, Force Control, Generalization, Contact Situations, Task Success, Assembly, Grasping, In-Hand Manipulation, Sensor Synchronization, Sensor Noise, Learning Complexity
Abstract
Real-world robot manipulation typically requires advanced perception and control to cope with unpredictability, contact, and occlusion. Multi-sensor fusion, the integration of modalities such as vision, tactile/force sensing, proprioception, and in some cases audio, is a strong solution: the modalities provide complementary information that supports robust decision making. This paper discusses AI-based systems that combine several streams of sensory data to enable robots to perform intricate manipulation tasks. We analyze hierarchical policy architectures, reinforcement learning (RL), and deep learning models that operate on multimodal inputs. In both simulation and real-robot experiments, these systems exhibit improved manipulation precision, force control, and generalization across different contact situations. The results demonstrate that combining vision with force/tactile feedback plays a vital role in improving task success in assembly, grasping, and in-hand manipulation. We also examine challenges such as sensor synchronization, sensor noise, and learning complexity, and offer recommendations for future research. The material presented in this paper draws on a review of the literature on robotic manipulation, sensor fusion, tactile sensing, vision, reinforcement learning, and multimodal perception.
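To make the fusion idea concrete, the sketch below shows one common pattern for the deep learning models discussed above: separate encoders for vision, force/tactile, and proprioceptive inputs whose features are concatenated and passed to a policy head. This is a minimal illustrative example, not the architecture of any specific system reviewed in this paper; the module sizes, input dimensions, and class names are assumptions chosen for readability.

```python
# Illustrative sketch of a multimodal fusion policy (PyTorch).
# All dimensions and names are hypothetical, chosen only to show the pattern.
import torch
import torch.nn as nn


class MultiSensorFusionPolicy(nn.Module):
    def __init__(self, action_dim: int = 7):
        super().__init__()
        # Vision encoder: small CNN over a 3x64x64 RGB image.
        self.vision_encoder = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, 128), nn.ReLU(),
        )
        # Force/tactile encoder: 6-D wrist force/torque reading.
        self.force_encoder = nn.Sequential(
            nn.Linear(6, 64), nn.ReLU(),
        )
        # Proprioception encoder: 7 joint positions.
        self.proprio_encoder = nn.Sequential(
            nn.Linear(7, 64), nn.ReLU(),
        )
        # Policy head: fuse the per-modality features and output an action.
        self.policy_head = nn.Sequential(
            nn.Linear(128 + 64 + 64, 128), nn.ReLU(),
            nn.Linear(128, action_dim),
        )

    def forward(self, image, force, proprio):
        # Encode each modality independently, then fuse by concatenation.
        fused = torch.cat(
            [
                self.vision_encoder(image),
                self.force_encoder(force),
                self.proprio_encoder(proprio),
            ],
            dim=-1,
        )
        return self.policy_head(fused)


if __name__ == "__main__":
    # Dummy batch of observations to check that the pieces fit together.
    policy = MultiSensorFusionPolicy()
    action = policy(
        torch.randn(4, 3, 64, 64),  # camera images
        torch.randn(4, 6),          # force/torque readings
        torch.randn(4, 7),          # joint positions
    )
    print(action.shape)  # -> torch.Size([4, 7])
```

In practice, the fusion step and the output head vary widely across the systems surveyed here (e.g., attention-based fusion instead of concatenation, or a value head alongside the action head for RL training); this sketch only fixes the overall late-fusion structure.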




