Publications

[1] Rieß, S.; Laub, J.; Coutandin, S. & Fleischer, J. (2020), „Demontageeffektor für Schraubverbindungen mit ungewissem Zustand", ZWF Zeitschrift für wirtschaftlichen Fabrikbetrieb, Vol. 115, No. 10, pp. 711-714. 10.3139/104.112401.

Abstract

This paper presents the systematic development of a screwdriving unit for industrial robots that enables a robot to disassemble electric motors. Since the application lies in remanufacturing, the electric motors have already completed a life cycle and exhibit uncertain product conditions. Regardless of this, the screwdriving effector is able to loosen various screws while recording measurement data.
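
As a rough illustration of the loosening-while-measuring behaviour described in the abstract, the following is a minimal sketch with an entirely hypothetical driver interface (SimulatedScrewdriver, TORQUE_LIMIT_NM and all numbers are illustrative assumptions, not the effector's real API):

```python
import random

TORQUE_LIMIT_NM = 8.0  # assumed safety threshold, not from the paper

class SimulatedScrewdriver:
    """Stand-in for a real effector driver; returns torque readings [Nm]."""
    def __init__(self, seized: bool = False):
        self.seized = seized
        self.angle_deg = 0.0

    def turn(self, step_deg: float) -> float:
        self.angle_deg += step_deg
        base = 6.5 if self.seized else 2.0  # a seized screw resists more
        return base + random.uniform(-0.3, 0.3)

def loosen(driver, total_deg: float = 720.0, step_deg: float = 30.0):
    """Unscrew in small steps while logging (angle, torque) measurements."""
    log = []
    while driver.angle_deg < total_deg:
        torque = driver.turn(step_deg)
        log.append((driver.angle_deg, torque))
        if torque > TORQUE_LIMIT_NM:
            return False, log  # flag the screw for manual handling
    return True, log

ok, trace = loosen(SimulatedScrewdriver())
print("loosened:", ok, "| samples recorded:", len(trace))
```
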
[2] Wurster, M.; Häfner, B.; Gauder, D.; Stricker, N. & Lanza, G. (2021), „Fluid Automation - A Definition and an Application in Remanufacturing Production Systems", Digitalizing smart factories, Elsevier, pp. 508-513.

Abstract

Production systems must be able to adapt quickly to changing requirements. Especially in the field of remanufacturing, the uncertainty in the state of the incoming products is very high. Several adaptation mechanisms can be applied, leading to agile and changeable production systems. Among these, adapting the degree of automation is one of the most challenging mechanisms with respect to changeover times and high investment costs. However, not only long-term changes but also short-term adaptations offer considerable potential, e.g. when robots support night shifts, avoiding higher labor costs and unfavorable working conditions at night. These changes in the degree of automation on an operational level are referred to as fluid automation, which is defined in this paper. The mechanisms of fluid automation are presented together with a case study showing their application at a disassembly station for electrical drives.
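
The short-term trade-off motivating fluid automation can be made concrete with a back-of-the-envelope calculation. The sketch below compares manual and robot-supported shift costs per shift; the wage, robot rate, night premium and changeover cost are all illustrative assumptions, not figures from the paper:

```python
def shift_cost_manual(hours: float, wage: float, night: bool) -> float:
    """Labor cost of one manual shift, with an assumed night premium."""
    premium = 1.25 if night else 1.0
    return hours * wage * premium

def shift_cost_robot(hours: float, rate: float, changeover: float) -> float:
    """Robot operating cost plus a one-off changeover cost for the shift."""
    return hours * rate + changeover

for night in (False, True):
    manual = shift_cost_manual(8, wage=40.0, night=night)
    robot = shift_cost_robot(8, rate=25.0, changeover=60.0)
    choice = "robot" if robot < manual else "manual"
    print(f"night={night}: manual={manual:.0f} EUR, robot={robot:.0f} EUR -> {choice}")
```

Under these toy numbers the robot pays off on the night shift, which is exactly the kind of operational, shift-wise switching the abstract describes.
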
[3] Fleischer, J.; Gerlitz, E.; Rieß, S.; Coutandin, S. & Hofmann, J. (2021), „Concepts and Requirements for Flexible Disassembly Systems for Drive Train Components of Electric Vehicles", Procedia CIRP, Elsevier, pp. 577-582.

Abstract

Sales numbers of battery electric vehicles have increased within the last years. At the end of life, these vehicles require reliable disassembly for recycling or remanufacturing. On the one hand, drivetrain components of those vehicles contain valuable resources and are therefore particularly relevant for recycling or remanufacturing. On the other hand, the automated disassembly of electric motors and Li-ion battery systems in particular poses major challenges. Especially the high number of variants and the unknown specifications and conditions of the components are challenging for the disassembly system. Conventional automated disassembly systems provide limited flexibility and adaptability for the disassembly of these products. In this contribution, two robot-based flexible disassembly systems are systematically derived, one for Li-ion battery modules and one for electric motors. Both products are analysed and the product-specific challenges and requirements are identified. The state of the art regarding flexible disassembly systems is captured using the methodology of a morphological box. Four subsystems are identified: kinematics, tools, workpiece fixation, and safety system. Based on the results, concepts for disassembly systems for both Li-ion battery modules and electric motors are developed and presented in detail. In particular, the structure and functionality of both systems are explained. This is followed by an assessment of the approaches and an identification of limitations as well as possible optimization potentials.
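
The morphological-box methodology named in the abstract can be illustrated as a small data structure: each identified subsystem forms a row, and a system concept is one choice per row. Only the four subsystem names come from the abstract; the options listed below are hypothetical examples:

```python
from itertools import product

# Rows = subsystems from the abstract; options are invented for illustration.
morphological_box = {
    "Kinematics": ["articulated robot", "gantry"],
    "Tools": ["screwdriver effector", "gripper", "cutting tool"],
    "Workpiece fixation": ["clamping table", "flexible fixture"],
    "Safety system": ["fence", "collaborative sensors"],
}

# Every concept is one combination of options across all rows.
concepts = list(product(*morphological_box.values()))
print(f"{len(concepts)} possible concepts, e.g.:")
print(dict(zip(morphological_box, concepts[0])))
```
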
[4] Dreher, C.; Wächter, M. & Asfour, T. (2020), „Learning Object-Action Relations from Bimanual Human Demonstration Using Graph Networks", IEEE Robotics and Automation Letters, Vol. 5, No. 1, pp. 187-194. 10.1109/LRA.2019.2949221.

Abstract

Recognizing human actions is a vital task for a humanoid robot, especially in domains like programming by demonstration. Previous approaches to action recognition primarily focused on the overall prevalent action being executed, but we argue that bimanual human motion cannot always be described sufficiently with a single action label. We present a system for framewise action classification and segmentation in bimanual human demonstrations. The system extracts symbolic spatial object relations from raw RGB-D video data captured from the robot's point of view in order to build graph-based scene representations. To learn object-action relations, a graph network classifier is trained using these representations together with ground-truth action labels to predict the action executed by each hand. We evaluated the proposed classifier on a new RGB-D video dataset showing daily action sequences focused on bimanual manipulation actions. It consists of 6 subjects performing 9 tasks with 10 repetitions each, which leads to 540 video recordings with 2 hours and 18 minutes of total playtime and per-hand ground-truth action labels for each frame. We show that the classifier is able to reliably identify the true executed action of each hand within its top 3 predictions on a frame-by-frame basis (action classification macro F1-score of 0.86) without prior temporal action segmentation.
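
The per-frame, per-hand evaluation protocol described above (top-3 predictions, macro F1) can be sketched in a few lines. This is not the authors' code; the label set and scores are invented for illustration:

```python
import numpy as np

ACTIONS = ["idle", "approach", "grasp", "hold", "place"]  # hypothetical labels

def top_k_hit(scores: np.ndarray, true_idx: int, k: int = 3) -> bool:
    """True if the ground-truth label is among the k highest-scoring actions."""
    return true_idx in np.argsort(scores)[::-1][:k]

def macro_f1(y_true: np.ndarray, y_pred: np.ndarray, n_classes: int) -> float:
    """Unweighted mean of per-class F1 scores (macro F1)."""
    f1s = []
    for c in range(n_classes):
        tp = np.sum((y_pred == c) & (y_true == c))
        fp = np.sum((y_pred == c) & (y_true != c))
        fn = np.sum((y_pred != c) & (y_true == c))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return float(np.mean(f1s))

# Toy example: 4 frames of per-hand class scores and ground-truth indices.
rng = np.random.default_rng(0)
scores = rng.random((4, len(ACTIONS)))
y_true = np.array([2, 2, 3, 4])
y_pred = scores.argmax(axis=1)
print("top-3 hits:", [top_k_hit(s, t) for s, t in zip(scores, y_true)])
print("macro F1:", macro_f1(y_true, y_pred, len(ACTIONS)))
```
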
[5] Klas, C.; Hundhausen, F.; Gao, J.; Dreher, C.; Reither, S.; Zhou, Y. & Asfour, T. (2021), „The KIT Gripper: A Multi-Functional Gripper for Disassembly Tasks", IEEE International Conference on Robotics and Automation (ICRA).

Abstract

We introduce a multi-functional robotic gripper equipped with a set of actions required for disassembly of electromechanical devices. The gripper consists of a robot arm with 5 degrees of freedom (DoF) for manipulation and a jaw gripper with a 1-DoF rotation joint and a 1-DoF closing joint. The system enables manipulation in 7 DoF and offers the ability to reposition objects in hand and to perform tasks that usually require bimanual systems. The sensor system of the gripper includes relative and absolute joint encoders, force and pressure sensors to provide feedback about interaction forces, and a tool-mounted camera for screw detection and precise placement of the tool tip using image-based visual servoing. We present a data-driven method for estimating joint torques based on the output voltage and motor speed. Further, we provide methods for teaching disassembly actions based on human demonstration, their representation as movement primitives and execution based on sensory feedback. We provide quantitative results regarding positioning and torque estimation accuracy and disassembly success rate, and qualitative results regarding the successful disassembly of hard disc drives.
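
The abstract's data-driven torque estimation from output voltage and motor speed can be approximated, for illustration only, by fitting a linear model on calibration data. The model form and all constants below are assumptions, not the paper's actual method:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic calibration data from a notional DC-motor relation
# tau = k_t/R * (V - k_e * omega), plus measurement noise.
k_t_over_R, k_e = 0.8, 0.05
V = rng.uniform(0.0, 12.0, 200)       # output voltage [V]
omega = rng.uniform(0.0, 50.0, 200)   # motor speed [rad/s]
tau = k_t_over_R * (V - k_e * omega) + rng.normal(0, 0.02, 200)

# Least-squares fit of tau ≈ a*V + b*omega + c to the calibration samples.
X = np.column_stack([V, omega, np.ones_like(V)])
coeffs, *_ = np.linalg.lstsq(X, tau, rcond=None)

def estimate_torque(v: float, w: float) -> float:
    """Predict joint torque [Nm] from voltage and speed with the fitted model."""
    a, b, c = coeffs
    return a * v + b * w + c

print(estimate_torque(6.0, 20.0))
```

In practice the published estimator may use a richer model; the linear form merely shows how voltage and speed jointly predict torque without a dedicated torque sensor.
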