Virtual Human Characters for Autonomous Vehicle-to-Pedestrian Communication
Luleå tekniska universitet, Institutionen för hälsa, lärande och teknik, Hälsa, medicin och rehabilitering. ORCID iD: 0000-0003-3503-4676
2022 (English). Doctoral thesis, comprising papers (Other academic)
Abstract [en]

Pedestrians base their street-crossing decisions on both vehicle-centric cues, like speed and acceleration, and driver-centric cues, like gaze direction and facial expression. In the future, however, drivers of autonomous vehicles will be preoccupied with non-driving-related activities and thus unavailable to provide pedestrians with relevant communicative cues. External human-machine interfaces (eHMIs) hold promise for filling the expected communication gap by providing information about the current state and future behaviour of an autonomous vehicle, primarily to ensure pedestrian safety and improve traffic flow, but also to promote public acceptance of autonomous vehicle technology. The aim of this thesis is the development of an intuitive, culture-transcending eHMI that can support multiple pedestrians in parallel in making appropriate street-crossing decisions by communicating pedestrian acknowledgement and vehicle intention. In the proposed anthropomorphic eHMI concept, a virtual human character (VHC) is displayed on the windshield to communicate pedestrian acknowledgement and vehicle intention via gaze direction and facial expression, respectively. The performance of different implementations of the proposed concept is evaluated in three monitor-based laboratory experiments in which participants performed a crossing intention task. Four papers are appended to the thesis. Paper I provides an overview of controlled studies that employed naive participants to evaluate eHMI concepts. Paper II evaluates the effectiveness of the proposed concept in supporting a single pedestrian, or two co-located pedestrians, in making appropriate street-crossing decisions. Paper III evaluates the efficiency of emotional facial expressions in communicating non-yielding intention. Paper IV evaluates the efficiency of emotional and conversational facial expressions in communicating yielding and non-yielding intention.
An implementation of the proposed anthropomorphic eHMI concept in which a male VHC communicates non-yielding intention via an angry expression, cruising intention via a cheek puff, and yielding intention via a nod is shown to be both the most efficient and highly effective in ensuring the safety of a single pedestrian, or even two co-located pedestrians, without compromising traffic flow in either case. Importantly, this level of effectiveness is reached without any explanation of the rationale behind the eHMI concept and without any training in interacting with it.

Place, publisher, year, edition, pages
Luleå tekniska universitet, 2022.
Series
Doctoral thesis / Luleå University of Technology, 1 Jan 1997 → …, ISSN 1402-1544
Keywords [en]
external human-machine interfaces, pedestrian acknowledgement, vehicle intention, traffic safety, traffic flow, gaze direction, facial expression
HSV category
Research subject
Engineering Psychology
Identifiers
URN: urn:nbn:se:ltu:diva-90172
ISBN: 978-91-8048-061-1 (print)
ISBN: 978-91-8048-062-8 (electronic)
OAI: oai:DiVA.org:ltu-90172
DiVA, id: diva2:1651577
Public defence
2022-06-10, A117, Luleå tekniska universitet, Luleå, 13:00 (English)
Opponent
Supervisor
Available from: 2022-04-13 Created: 2022-04-12 Last updated: 2022-05-30 Bibliographically approved
List of papers
1. External Human-Machine Interfaces for Autonomous Vehicle-to-Pedestrian Communication: A Review of Empirical Work
2019 (English). In: Frontiers in Psychology, E-ISSN 1664-1078, Vol. 10, article id 2757. Review article (Refereed). Published.
Abstract [en]

Interaction between drivers and pedestrians is often facilitated by informal communicative cues, like hand gestures, facial expressions, and eye contact. In the near future, however, when semi- and fully autonomous vehicles are introduced into the traffic system, drivers will gradually assume the role of mere passengers, who are casually engaged in non-driving-related activities and, therefore, unavailable to participate in traffic interaction. In this novel traffic environment, advanced communication interfaces will need to be developed that inform pedestrians of the current state and future behavior of an autonomous vehicle, in order to maximize safety and efficiency for all road users. The aim of the present review is to provide a comprehensive account of empirical work in the field of external human–machine interfaces for autonomous vehicle-to-pedestrian communication. In the great majority of covered studies, participants clearly benefited from the presence of a communication interface when interacting with an autonomous vehicle. Nevertheless, standardized interface evaluation procedures and optimal interface specifications are still lacking.

Place, publisher, year, edition, pages
Frontiers Media S.A., 2019
Keywords
traffic interaction, human–vehicle interaction, autonomous vehicles, vehicle-to-pedestrian communication, external human–machine interfaces, vulnerable road users
HSV category
Research subject
Engineering Psychology
Identifiers
urn:nbn:se:ltu:diva-77420 (URN)
10.3389/fpsyg.2019.02757 (DOI)
000504252200001 ()
31920810 (PubMedID)
2-s2.0-85077251777 (Scopus ID)
Note

Validated; 2020; Level 2; 2020-01-29 (johcin)

For correction, see: Rouchitsas A and Alm H (2020) Corrigendum: External Human–Machine Interfaces for Autonomous Vehicle-to-Pedestrian Communication: A Review of Empirical Work. Front. Psychol. 11:575151. doi: 10.3389/fpsyg.2020.575151

Available from: 2020-01-15 Created: 2020-01-15 Last updated: 2023-09-07 Bibliographically approved
2. Ghost on the Windshield: Employing a Virtual Human Character to Communicate Pedestrian Acknowledgement and Vehicle Intention
2022 (English). In: Information, E-ISSN 2078-2489, Vol. 13, no. 9, article id 420. Article in journal (Refereed). Published.
Abstract [en]

Pedestrians base their street-crossing decisions on vehicle-centric as well as driver-centric cues. In the future, however, drivers of autonomous vehicles will be preoccupied with non-driving-related activities and will thus be unable to provide pedestrians with relevant communicative cues. External human–machine interfaces (eHMIs) hold promise for filling the expected communication gap by providing information about a vehicle’s situational awareness and intention. In this paper, we present an eHMI concept that employs a virtual human character (VHC) to communicate pedestrian acknowledgement and vehicle intention (non-yielding; cruising; yielding). Pedestrian acknowledgement is communicated via gaze direction, while vehicle intention is communicated via facial expression. The effectiveness of the proposed anthropomorphic eHMI concept was evaluated in a monitor-based laboratory experiment where the participants performed a crossing intention task (self-paced, two-alternative forced choice) and their accuracy in making appropriate street-crossing decisions was measured. In each trial, they were first presented with a 3D animated sequence of a VHC (male; female) that either looked directly at them or clearly to their right while producing either an emotional (smile; angry expression; surprised expression), a conversational (nod; head shake), or a neutral (neutral expression; cheek puff) facial expression. Then, the participants were asked to imagine they were pedestrians intending to cross a one-way street at a random uncontrolled location when they saw an autonomous vehicle equipped with the eHMI approaching from the right, and to indicate via mouse click whether or not they would cross the street in front of the oncoming vehicle.
An implementation of the proposed concept where non-yielding intention is communicated via the VHC producing either an angry expression, a surprised expression, or a head shake; cruising intention is communicated via the VHC puffing its cheeks; and yielding intention is communicated via the VHC nodding, was shown to be highly effective in ensuring the safety of a single pedestrian or even two co-located pedestrians without compromising traffic flow in either case. The implications for the development of intuitive, culture-transcending eHMIs that can support multiple pedestrians in parallel are discussed.

Place, publisher, year, edition, pages
MDPI, 2022
Keywords
external human–machine interfaces, autonomous vehicles, vehicle-to-pedestrian communication, traffic safety, gaze direction, emotional facial expressions, conversational facial expressions, neutral facial expressions
HSV category
Research subject
Engineering Psychology
Identifiers
urn:nbn:se:ltu:diva-90165 (URN)
10.3390/info13090420 (DOI)
000856467300001 ()
2-s2.0-85138736115 (Scopus ID)
Note

Validated; 2022; Level 2; 2022-09-12 (hanlid)

Available from: 2022-04-12 Created: 2022-04-12 Last updated: 2023-05-08 Bibliographically approved
3. Communicating Vehicle Non-Yielding Intention via Emotional Facial Expressions: Angry vs. Surprised
2022 (English). Article in journal (Other academic). Submitted.
HSV category
Research subject
Engineering Psychology
Identifiers
urn:nbn:se:ltu:diva-90168 (URN)
Available from: 2022-04-12 Created: 2022-04-12 Last updated: 2022-05-03
4. Smiles and Angry Faces vs. Nods and Head Shakes: Facial Expressions at the Service of Autonomous Vehicles
2023 (English). In: Multimodal Technologies and Interaction, E-ISSN 2414-4088, Vol. 7, no. 2, article id 10. Article in journal (Refereed). Published.
Abstract [en]

When deciding whether or not to cross the street, pedestrians take into consideration information provided by both vehicle kinematics and the driver of an approaching vehicle. It will not be long, however, before drivers of autonomous vehicles (AVs) are unable to communicate their intention to pedestrians, as they will be engaged in activities unrelated to driving. External human–machine interfaces (eHMIs) have been developed to fill the resulting communication gap by offering pedestrians information about the situational awareness and intention of an AV. Several anthropomorphic eHMI concepts have employed facial expressions to communicate vehicle intention. The aim of the present study was to evaluate the efficiency of emotional (smile; angry expression) and conversational (nod; head shake) facial expressions in communicating vehicle intention (yielding; non-yielding). Participants completed a crossing intention task in which they decided whether or not it was appropriate to cross the street. Emotional expressions communicated vehicle intention more efficiently than conversational expressions, as evidenced by the lower latency in the emotional expression condition compared to the conversational expression condition. The implications of our findings for the development of anthropomorphic eHMIs that employ facial expressions to communicate vehicle intention are discussed.

Place, publisher, year, edition, pages
MDPI, 2023
Keywords
external human-machine interfaces, autonomous vehicles, pedestrians, traffic flow, virtual human characters, emotional facial expressions, conversational facial expressions
HSV category
Research subject
Psychology
Identifiers
urn:nbn:se:ltu:diva-90171 (URN)
10.3390/mti7020010 (DOI)
000941015100001 ()
2-s2.0-85148725161 (Scopus ID)
Note

Validated; 2023; Level 2; 2023-01-24 (johcin)

This article has previously appeared as a manuscript in a thesis.

Available from: 2022-04-12 Created: 2022-04-12 Last updated: 2024-03-07 Bibliographically approved

Open Access in DiVA

fulltext (1064 kB) 492 downloads
File information
File: FULLTEXT01.pdf; File size: 1064 kB; Checksum: SHA-512
808a7a539ac043407d548106e001e653ccc4e008491f9531e39f524222a26042c354287ad96f000871aac9be9cd8b37b44ae1c1c05e3c41a062cb8f445561802
Type: fulltext; Mimetype: application/pdf

Person

Rouchitsas, Alexandros
