The rapid development of technology is leading to the emergence of smart factories, where the Artificial Intelligence paradigm of deep learning plays a significant role in processing data streams from machines. This paper presents the application of Augmented Attention Blocks embedded in a deep convolutional neural network for estimating the state of machines from remotely collected acoustic data. An Android application was developed to transfer audio data from a remote machine to a base station. At the base station, we propose and develop a deep convolutional neural network called MAABL (MobileNetv2 with Augmented Attention Block). The structure of the network is constructed by combining an inverted residual block of MobileNetv2 with an augmented attention mechanism block.
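The abstract does not include code, but the building block it describes maps onto a well-known pattern. The sketch below is a minimal, hypothetical PyTorch illustration of a MobileNetv2-style inverted residual augmented with a simple channel-attention gate; the layer sizes and the particular attention mechanism are our assumptions, not the authors' exact MAABL block.

```python
import torch
import torch.nn as nn

class AttentionAugmentedInvertedResidual(nn.Module):
    """Hypothetical sketch: a MobileNetv2 inverted residual with a simple
    channel-attention gate. Not the authors' exact MAABL block."""
    def __init__(self, channels: int, expansion: int = 6):
        super().__init__()
        hidden = channels * expansion
        self.expand = nn.Sequential(                 # 1x1 pointwise expansion
            nn.Conv2d(channels, hidden, 1, bias=False),
            nn.BatchNorm2d(hidden), nn.ReLU6(inplace=True))
        self.depthwise = nn.Sequential(              # 3x3 depthwise convolution
            nn.Conv2d(hidden, hidden, 3, padding=1, groups=hidden, bias=False),
            nn.BatchNorm2d(hidden), nn.ReLU6(inplace=True))
        self.attend = nn.Sequential(                 # per-channel attention weights
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(hidden, hidden, 1), nn.Sigmoid())
        self.project = nn.Sequential(                # 1x1 linear projection back
            nn.Conv2d(hidden, channels, 1, bias=False),
            nn.BatchNorm2d(channels))

    def forward(self, x):
        h = self.depthwise(self.expand(x))
        h = h * self.attend(h)                       # reweight channels
        return x + self.project(h)                   # residual connection

spec = torch.randn(1, 32, 64, 64)  # e.g. a log-mel spectrogram feature map
print(AttentionAugmentedInvertedResidual(32)(spec).shape)  # torch.Size([1, 32, 64, 64])
```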
Testing and implementation of Human-Robot Collaboration (HRC) can be dangerous due to the high-speed movements and massive forces generated by industrial robots. Wherever humans and industrial robots share a common workplace, accidents are likely to happen and are always unpredictable. This has hindered the development of human-robot collaborative strategies as well as the ability of authorities to pass regulations on how humans and robots should work together in close proximity. This paper presents the use of a Virtual Reality digital twin of a physical layout as a mechanism for understanding human reactions to both predictable and unpredictable robot motions. A set of established metrics, as well as a newly developed Kinetic Energy Ratio metric, is used to analyse human reactions and validate the effectiveness of the Virtual Reality environment. The aim is that Virtual Reality digital twins could inform the safe implementation of Human-Robot Collaborative strategies in the factories of the future.
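The Kinetic Energy Ratio metric is not defined in this abstract, so the following is only one plausible reading: the kinetic energy of tracked body motion during a robot event compared against a calm baseline window. The segment masses, window choices and ratio form are all assumptions for illustration, not the authors' definition.

```python
import numpy as np

def kinetic_energy(velocities: np.ndarray, masses: np.ndarray) -> float:
    """Total kinetic energy 0.5 * m * |v|^2 summed over frames and segments.

    velocities: (frames, segments, 3) tracked segment velocities in m/s.
    masses:     (segments,) segment masses in kg.
    """
    speed_sq = np.sum(velocities ** 2, axis=-1)    # |v|^2 per frame and segment
    return float(0.5 * np.sum(speed_sq * masses))

def kinetic_energy_ratio(event_v, baseline_v, masses) -> float:
    """Hypothetical Kinetic Energy Ratio: motion energy in the reaction
    window relative to a calm baseline window; values above 1 would
    suggest a stronger reaction to the robot's motion."""
    return kinetic_energy(event_v, masses) / kinetic_energy(baseline_v, masses)
```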
High-value manufacturing systems still require ergonomically intensive manual activities. Examples include the aerospace industry, where the fitting of pipes and wiring into confined spaces in aircraft wings is still a manual operation. In these environments, workers are subjected to ergonomically awkward forces and postures for long periods of time. This leads to musculoskeletal injuries that severely limit the output of a shopfloor, resulting in lost productivity. Tools such as wearable sensors could provide a way to track the ergonomics of workers in real time. However, an information processing architecture is required to ensure that data is processed in real time and in a manner that yields meaningful action points for workers. In this work, based on the Adaptive Control of Thought-Rational (ACT-R) cognitive framework, we propose a Cognitive Architecture for Wearable Sensors (CAWES): a wearable sensor system and cognitive architecture capable of taking data streams from multiple wearable sensors on a worker's body and fusing them to enable digitisation, tracking and analysis of human ergonomics in real time on a shopfloor. Furthermore, through tactile feedback, the architecture is able to inform workers in real time when ergonomics rules are broken. The architecture is validated through an aerospace case study undertaken in laboratory conditions. The results from the validation are encouraging, and in the future further tests will be performed in an actual working environment.
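As a minimal sketch of the kind of real-time rule such an architecture could enforce, the code below flags a joint angle held beyond a threshold for too long and triggers a tactile warning. The rule values, joint name and `vibrate` callback are hypothetical stand-ins; CAWES's actual rules and multi-sensor fusion are more involved.

```python
from dataclasses import dataclass

@dataclass
class ErgonomicRule:
    """Illustrative rule: an awkward posture may not be held too long."""
    joint: str
    max_angle_deg: float   # angle beyond which the posture counts as awkward
    max_hold_s: float      # how long the awkward posture may be held

def check_posture(stream, rule, vibrate):
    """Scan (timestamp_s, angle_deg) samples; fire tactile feedback when
    the rule is broken. `vibrate` stands in for a haptic-actuator driver."""
    breach_start = None
    for t, angle in stream:
        if angle > rule.max_angle_deg:
            breach_start = t if breach_start is None else breach_start
            if t - breach_start >= rule.max_hold_s:
                vibrate(rule.joint)    # warn the worker in real time
                breach_start = None    # reset after warning
        else:
            breach_start = None

# Toy usage: back flexed past 60 degrees for over 2 seconds triggers feedback.
rule = ErgonomicRule("lower_back", max_angle_deg=60.0, max_hold_s=2.0)
samples = [(0.0, 20.0), (1.0, 70.0), (2.0, 72.0), (3.5, 75.0), (4.0, 30.0)]
check_posture(samples, rule, vibrate=lambda joint: print(f"buzz: {joint}"))
```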
The recent introduction of low-cost 3D sensing and affordable immersive virtual reality has lowered the barriers to creating and maintaining 3D virtual worlds. In this paper, we propose a way to combine these technologies with discrete-event simulation to improve the use of simulation in manufacturing decision making. This work describes how feedback from real-world systems can flow directly into a simulation model to guide smart behaviors. The technologies included in the research are feedback from RGBD images of shop-floor motion and human interaction within fully immersive virtual reality using the latest headset technologies.
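A hedged sketch of the feedback loop described: observations from the real system (here, stand-in cycle times that might be estimated from RGBD tracking) recalibrate the parameters of a running discrete-event model. The use of simpy, the single-station model and all numbers are our illustrative choices, not the paper's implementation.

```python
import random
import simpy

# Stand-in for cycle times estimated from RGBD tracking of the shop floor.
observed_cycle_times = [12.0, 14.5, 13.2, 15.1]   # seconds, hypothetical

def calibrated_cycle_time():
    """Resample from live observations instead of a fixed design-time value."""
    base = random.choice(observed_cycle_times)
    return random.uniform(0.9, 1.1) * base        # small stochastic spread

def station(env, name, cycle_time_fn, completed):
    """A single workstation processing parts one after another."""
    while True:
        yield env.timeout(cycle_time_fn())        # work on one part
        completed.append((name, env.now))

env = simpy.Environment()
completed = []
env.process(station(env, "fit_station", calibrated_cycle_time, completed))
env.run(until=120)
print(f"parts completed in 120 s: {len(completed)}")
```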
Current advances in the Task and Motion Planning (TAMP) framework often rely on a specific, static task structure. A task structure is a sequence describing how work pieces should be manipulated towards achieving a goal. Such systems can be problematic when task structures change as a result of human performance during human-robot collaboration scenarios in manufacturing, or when redundant objects are present in the workspace, for example during a Package-To-Order scenario with the same object type fulfilling different package configurations. In this paper, we propose a novel integrated TAMP framework that supports learning from human demonstrations while tackling variations in object positions and product configurations during massive-Package-To-Order (mPTO) scenarios in manufacturing as well as during human-robot collaboration scenarios. We design and apply a Graph Neural Network (GNN)-based high-level reasoning module that is capable of handling variant goal configurations and can generalize to different task structures. Moreover, we build a two-level motion module which can produce flexible and collision-free trajectories based on important features and task labels produced by the reasoning module. Through simulations and physical experiments, we show that our framework holds several advantages over previous state-of-the-art work, including sample efficiency and generalizability to unseen goal configurations as well as task structures.
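The GNN reasoning module itself cannot be reproduced from the abstract, but the sketch below shows the generic message-passing step such a module typically builds on, over a graph whose nodes might represent work pieces and goal slots. The feature size, edge construction and GRU-style update are illustrative assumptions.

```python
import torch
import torch.nn as nn

class MessagePassingLayer(nn.Module):
    """One round of message passing over a scene graph (generic sketch,
    not the paper's exact reasoning module)."""
    def __init__(self, dim: int):
        super().__init__()
        self.message = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU())
        self.update = nn.GRUCell(dim, dim)

    def forward(self, nodes, edges):
        # nodes: (N, dim); edges: (E, 2) long tensor of (src, dst) indices.
        src, dst = edges[:, 0], edges[:, 1]
        msgs = self.message(torch.cat([nodes[src], nodes[dst]], dim=-1))
        agg = torch.zeros_like(nodes).index_add_(0, dst, msgs)  # sum per node
        return self.update(agg, nodes)                          # GRU node update

# Toy scene: 4 objects with fully connected directed edges.
nodes = torch.randn(4, 32)
edges = torch.tensor([(i, j) for i in range(4) for j in range(4) if i != j])
print(MessagePassingLayer(32)(nodes, edges).shape)   # torch.Size([4, 32])
```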
Deep reinforcement learning, by taking advantage of neural networks, has made great strides in the continuous control of robots. However, in scenarios where multiple robots are required to collaborate to accomplish a task, it is still challenging to build an efficient and scalable multi-agent control system due to the increasing complexity. In this paper, we regard each unmanned aerial vehicle (UAV) with its manipulator as one agent and leverage the power of multi-agent deep deterministic policy gradient (MADDPG) for the cooperative navigation and manipulation of a load. We propose solutions for addressing the navigation-to-grasping-point problem in targeted and flexible scenarios, and mainly focus on how to develop model-free policies for the UAVs without relying on a trajectory planner. To overcome the challenges of learning in scenarios with an increasing number of grasping points, we incorporate demonstrations from an Optimal Reciprocal Collision Avoidance (ORCA) algorithm into our framework to guide the policy training, and adapt two novel techniques into the architecture of MADDPG.
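MADDPG's defining structure, centralized critics with decentralized actors, can be shown compactly. In the sketch below each actor acts only on its own observation, while each critic scores the joint observations and actions of all agents during training; the agent count, network sizes and tanh action squashing are placeholder choices, and the paper's two additional techniques and ORCA demonstrations are not modelled here.

```python
import torch
import torch.nn as nn

def mlp(in_dim: int, out_dim: int) -> nn.Module:
    return nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(), nn.Linear(64, out_dim))

N_AGENTS, OBS, ACT = 3, 10, 4   # illustrative sizes, not the paper's

# Decentralized actors: each UAV maps its own observation to an action.
actors = [mlp(OBS, ACT) for _ in range(N_AGENTS)]
# Centralized critics: during training each sees ALL observations and actions.
critics = [mlp(N_AGENTS * (OBS + ACT), 1) for _ in range(N_AGENTS)]

obs = torch.randn(32, N_AGENTS, OBS)   # a batch of joint observations
acts = torch.stack([torch.tanh(a(obs[:, i])) for i, a in enumerate(actors)], dim=1)
joint = torch.cat([obs.flatten(1), acts.flatten(1)], dim=-1)
q_values = [c(joint) for c in critics]  # per-agent Q(o_1..o_N, a_1..a_N)
print(q_values[0].shape)                # torch.Size([32, 1])
```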