Laboratory for Trustworthy AI

Artificial Intelligence (AI) presents new opportunities and challenges for our societies. To understand what AI means for us, we must look beyond innovation and consider how decision-making technology can be employed in an ethical, mindful, and sustainable way. Building a shared understanding of practices for Trustworthy AI is therefore crucial.
The Laboratory for Trustworthy AI at Arcada University of Applied Sciences is a transdisciplinary and international research community that trains organisations and actors to assess their use of artificial intelligence. The lab connects academia and civil society, including developers of AI solutions, students, end-users, researchers, and other stakeholders.
We promote a human-centric approach to AI and work to close the gap between ethically sound AI development and technical and methodological practice. We embrace technical innovation and assist organisations in mapping the socio-technical scenarios used to assess risk. We collaborate closely with international networks such as the Z-Inspection® initiative. Z-Inspection® is a validated assessment method for Trustworthy AI that helps organisations deliver ethically sustainable, evidence-based, trustworthy, and user-friendly AI-driven solutions. The method is published in IEEE Transactions on Technology and Society.
How can we help you?
The Laboratory for Trustworthy AI emphasises a curious but mindful use of technology. We are always interested in collaborating with both local and international organisations to promote the ethical use of AI.
- We provide workshops and continuous learning opportunities for citizens, local and regional private entities, healthcare professionals, engineers, and developers who want to learn more about the ethical use of AI.
- We perform assessments of Trustworthy AI using the Z-Inspection® method. In collaboration with many international and national participants, we assess AI use cases in Finland and beyond.
Do you want to collaborate with the Laboratory for Trustworthy AI? Please do not hesitate to contact us!
Research activities
The research activities of the lab expound on and develop the EU requirements for Trustworthy AI proposed by the EU High-Level Expert Group on Artificial Intelligence.
Use cases in Trustworthy AI
- Assessing the Ethical Implications of AI for Predicting Cardiovascular Risks
- Machine learning as a supportive tool to recognise cardiac arrest in emergency calls
- Deep learning-based skin lesion classifiers
- Deep learning for predicting a multi-regional score conveying the degree of lung compromise in COVID-19 patients
Selected research publications in the areas of:
- Trustworthy AI
- Big Data Analytics
- Data Protection and Privacy
- Technical robustness and safety
- Ethics
- IT Security and Reliability
People involved
- Jonny Karlsson, PhD in Computer Science, Senior Lecturer in Information Technology
- Leonardo Espinosa Leal, PhD in Computational Materials Science, Degree Program Director in Big Data Analytics
- Henrika Franck, Docent, Dr. Econ., Dean of the Graduate School and Research at Arcada
- Elisabeth Hildt, Professor of Philosophy and Director of the Center for the Study of Ethics in the Professions, Illinois Institute of Technology
- Ira Jeglinsky, PhD in Medical Science, Arcada Health Tech Hub
- Magnus Westerlund, Dr., Director of the Laboratory for Trustworthy AI, Associate Professor at Kristiania University College, Principal Lecturer
- Roberto Zicari, Professor, Z-Inspection® Initiative