Human-centred robotics and the EU AI Act: selected standards and implications

Authors

  • Ralf Roßkopf, Technical University of Applied Sciences Würzburg-Schweinfurt

DOI:

https://doi.org/10.59476/mtt2025.v2i21.730

Keywords:

AI Act, robot, industry, human, risk, obligation, law

Abstract

Robotics and AI are key factors in enhancing business and national resilience, particularly in maintaining high-wage manufacturing in countries facing demographic challenges. Both are instrumental in making manufacturing more agile and flexible. Robots and humans will have to collaborate closely, and creating the necessary intelligent autonomous systems will go well beyond the typical considerations of machine learning. In line with the EU’s Industry 5.0 vision for a sustainable, human-centred and resilient European industry, humans and robots are expected to work so closely together that robot software must be designed with humans in mind from the outset. Thus, ethical, legal, and social implications (ELSI) must also be considered. The legal implications are multifaceted, ranging from AI Law, Product Liability Law, Product Safety Law and Machinery Law to Technical Standards, Data Protection Law, Copyright and IP Law, and Labour Law. This contribution briefly introduces the required technical features and, on that basis, explores selected relevant legal implications and related standards of the new EU AI Act. The Act aims to promote human-centricity and entered into force on 1 August 2024, with its applicability phased in over a period of three years until 2 August 2027. Special reference is made to human-centredness, to the subsumption of the plant owner under the categories of obligated parties (provider, product manufacturer, deployer, authorised representative or distributor), and to classification under the AI risk scheme (prohibited, high, transparency, general-purpose, systemic and minimal risks) with its related obligations. The high relevance of the AI Act for industrial human-robot settings is shown. As AI evolves exponentially, so does its significance for industrial and national agility and resilience. Obligations vary widely according to risk classification and the obligated operator. Coordination and information flows between providers and deployers of related AI systems are key. As plant owners might become providers themselves, the duties could be more varied than initially assumed. The design and implementation of human-centred robotic scenarios should therefore be well planned, structured, documented, and constantly evaluated. The respective AI models and AI systems need to be legally compliant and human-centred by design. An unprecedented dialogue across disciplinary fields is required.
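For readers who think in data structures, the operator categories and risk tiers enumerated in the abstract can be sketched as a small lookup. This is an illustrative sketch only: the role and tier names follow the abstract's wording, while `HEAVIEST_DUTIES` and `duty_bearers` are hypothetical simplifications for exposition, not categories defined in the Act itself.

```python
from enum import Enum


class Role(Enum):
    """Obligated operator categories named in the abstract (cf. Art. 3 AI Act)."""
    PROVIDER = "provider"
    PRODUCT_MANUFACTURER = "product manufacturer"
    DEPLOYER = "deployer"
    AUTHORISED_REPRESENTATIVE = "authorised representative"
    DISTRIBUTOR = "distributor"


class RiskTier(Enum):
    """Risk tiers of the AI Act's layered risk scheme, as listed in the abstract."""
    PROHIBITED = "prohibited"
    HIGH = "high"
    TRANSPARENCY = "transparency"
    GENERAL_PURPOSE = "general-purpose"
    SYSTEMIC = "systemic"
    MINIMAL = "minimal"


# Hypothetical, heavily simplified mapping for exposition only: which roles
# carry the heaviest compliance duties per tier. Actual obligations depend on
# the concrete system, the operator's role, and the Act's annexes.
HEAVIEST_DUTIES = {
    RiskTier.PROHIBITED: set(Role),                    # placing on the market is banned
    RiskTier.HIGH: {Role.PROVIDER, Role.DEPLOYER},
    RiskTier.TRANSPARENCY: {Role.PROVIDER, Role.DEPLOYER},
    RiskTier.MINIMAL: set(),                           # no specific AI Act duties
}


def duty_bearers(tier: RiskTier) -> set:
    """Return the roles bearing the heaviest duties for a tier (sketch only)."""
    return HEAVIEST_DUTIES.get(tier, {Role.PROVIDER})
```

A point the abstract stresses falls out of even this toy model: a plant owner who starts as a deployer but substantially modifies a system may shift into the provider role, and with it into a different row of the obligation mapping.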

Published

2025-11-03

How to Cite

Human-centred robotics and the EU AI Act: selected standards and implications. (2025). Mokslo Taikomieji Tyrimai Applied Research, 2(21), 40-47. https://doi.org/10.59476/mtt2025.v2i21.730