Bridging Perspectives: Enhancing Trustworthy AI Through Transparency, Reliability, and Safety
2024 (English). In: IEEE International Conference on Industrial Engineering and Engineering Management, IEEM 2024, IEEE Computer Society, 2024, p. 556-559. Conference paper, Published paper (Refereed)
Abstract [en]
When developing and implementing ethical Artificial Intelligence (AI) in industrial settings, differing viewpoints on building trustworthy AI emerge. This research highlights these differences and offers guidance for bridging them. It goes beyond conventional ethical dilemmas, such as the trolley problem, to tackle the intricate issues surrounding trustworthy and ethical AI. We identify three core principles of trustworthy AI: transparency, reliability, and safety. Transparency involves clear, open communication about AI decision-making processes, fostering trust among stakeholders. Reliability ensures consistent, dependable performance under varying conditions, which is essential for critical operations. Safety focuses on preventing harm to humans, the environment, and infrastructure, requiring robust safeguards and adherence to safety standards. By prioritizing these principles, the research provides practical recommendations for developing AI systems that balance technological advancement with ethical considerations, enhancing user trust and ensuring responsible AI integration across industries.
Place, publisher, year, edition, pages
IEEE Computer Society, 2024. p. 556-559
Keywords [en]
Trustworthy AI, decision making, ethical AI, safety, transparency, reliability
National Category
Artificial Intelligence; Ethics
Research subject
Machine Learning
Identifiers
URN: urn:nbn:se:ltu:diva-111844
DOI: 10.1109/IEEM62345.2024.10857191
Scopus ID: 2-s2.0-85218048683
OAI: oai:DiVA.org:ltu-111844
DiVA, id: diva2:1942665
Conference
IEEE International Conference on Industrial Engineering and Engineering Management (IEEM 2024), Bangkok, Thailand, December 15-18, 2024
Note
ISBN for host publication: 979-8-3503-8609-7
Available from: 2025-03-06 Created: 2025-03-06 Last updated: 2025-10-21 Bibliographically approved