Smiles and Angry Faces vs. Nods and Head Shakes: Facial Expressions at the Service of Autonomous Vehicles
Luleå University of Technology, Department of Health, Education and Technology, Health, Medicine and Rehabilitation. ORCID iD: 0000-0003-3503-4676
Luleå University of Technology, Department of Health, Education and Technology, Health, Medicine and Rehabilitation.
2023 (English). In: Multimodal Technologies and Interaction, E-ISSN 2414-4088, Vol. 7, No. 2, article id 10. Journal article (peer reviewed), Published
Abstract [en]

When deciding whether or not to cross the street, pedestrians take into consideration information provided by both the kinematics of an approaching vehicle and its driver. It will not be long, however, before drivers of autonomous vehicles (AVs) are unable to communicate their intention to pedestrians, as they will be engaged in activities unrelated to driving. External human–machine interfaces (eHMIs) have been developed to fill the resulting communication gap by offering pedestrians information about the situational awareness and intention of an AV. Several anthropomorphic eHMI concepts have employed facial expressions to communicate vehicle intention. The aim of the present study was to evaluate the efficiency of emotional (smile; angry expression) and conversational (nod; head shake) facial expressions in communicating vehicle intention (yielding; non-yielding). Participants completed a crossing intention task in which they had to decide whether or not to cross the street. Emotional expressions communicated vehicle intention more efficiently than conversational expressions, as evidenced by lower response latency in the emotional expression condition than in the conversational expression condition. The implications of our findings for the development of anthropomorphic eHMIs that employ facial expressions to communicate vehicle intention are discussed.

Place, publisher, year, edition, pages
MDPI, 2023. Vol. 7, No. 2, article id 10
Keywords [en]
external human-machine interfaces, autonomous vehicles, pedestrians, traffic flow, virtual human characters, emotional facial expressions, conversational facial expressions
National subject category
Vehicle Engineering; Applied Psychology
Research subject
Psychology
Identifiers
URN: urn:nbn:se:ltu:diva-90171
DOI: 10.3390/mti7020010
ISI: 000941015100001
Scopus ID: 2-s2.0-85148725161
OAI: oai:DiVA.org:ltu-90171
DiVA id: diva2:1651565
Note

Validated; 2023; Level 2; 2023-01-24 (johcin)

This article has previously appeared as a manuscript in a thesis.

Available from: 2022-04-12. Created: 2022-04-12. Last updated: 2024-03-07. Bibliographically reviewed
Part of thesis
1. Virtual Human Characters for Autonomous Vehicle-to-Pedestrian Communication
2022 (English). Doctoral thesis, comprehensive summary (Other academic)
Abstract [en]

Pedestrians base their street-crossing decisions on both vehicle-centric cues, like speed and acceleration, and driver-centric cues, like gaze direction and facial expression. In the future, however, drivers of autonomous vehicles will be preoccupied with non-driving related activities and thus unavailable to provide pedestrians with relevant communicative cues. External human-machine interfaces (eHMIs) hold promise for filling the expected communication gap by providing information about the current state and future behaviour of an autonomous vehicle, primarily to ensure pedestrian safety and improve traffic flow, but also to promote public acceptance of autonomous vehicle technology. The aim of this thesis is the development of an intuitive, culture-transcending eHMI that can support multiple pedestrians in parallel in making appropriate street-crossing decisions by communicating pedestrian acknowledgement and vehicle intention. In the proposed anthropomorphic eHMI concept, a virtual human character (VHC) is displayed on the windshield to communicate pedestrian acknowledgement and vehicle intention via gaze direction and facial expression, respectively. The performance of different implementations of the proposed concept is evaluated in three monitor-based laboratory experiments in which participants performed a crossing intention task. Four papers are appended to the thesis. Paper I provides an overview of controlled studies that employed naive participants to evaluate eHMI concepts. Paper II evaluates the effectiveness of the proposed concept in supporting a single pedestrian or two co-located pedestrians in making appropriate street-crossing decisions. Paper III evaluates the efficiency of emotional facial expressions in communicating non-yielding intention. Paper IV evaluates the efficiency of emotional and conversational facial expressions in communicating yielding and non-yielding intention.
An implementation of the proposed anthropomorphic eHMI concept in which a male VHC communicates non-yielding intention via an angry expression, cruising intention via a cheek puff, and yielding intention via a nod is shown to be highly effective in ensuring the safety of a single pedestrian, or even two co-located pedestrians, without compromising traffic flow, and to be the most efficient of the implementations evaluated. Importantly, this level of effectiveness is reached in the absence of any explanation of the rationale behind the eHMI concept or training to interact with it successfully.

Place, publisher, year, edition, pages
Luleå tekniska universitet, 2022
Series
Doctoral thesis / Luleå University of Technology 1 Jan 1997 → …, ISSN 1402-1544
Keywords
external human-machine interfaces, pedestrian acknowledgement, vehicle intention, traffic safety, traffic flow, gaze direction, facial expression
National subject category
Vehicle Engineering; Applied Psychology
Research subject
Engineering Psychology
Identifiers
URN: urn:nbn:se:ltu:diva-90172
ISBN: 978-91-8048-061-1
ISBN: 978-91-8048-062-8
Public defence
2022-06-10, A117, Luleå tekniska universitet, Luleå, 13:00 (English)
Available from: 2022-04-13. Created: 2022-04-12. Last updated: 2022-05-30. Bibliographically reviewed

Open Access in DiVA

fulltext (4401 kB), 219 downloads
File information
File name: FULLTEXT01.pdf; File size: 4401 kB; Checksum: SHA-512
6daa0105baa69e749507ca535e7203f383beef40ee781020741596817f3d618582b42f6c6d0a9907577e9d50894dd67fa5b5167f316e9fc2aed8e5201fefa1a5
Type: fulltext; MIME type: application/pdf

Other links

Publisher's full text; Scopus

Authors

Rouchitsas, Alexandros; Alm, Håkan
