  • 1.
    Ciardo, F.
    et al.
    Istituto Italiano di Tecnologia, Genoa, Italy.
    Wykowska, Agnieszka
    Luleå University of Technology, Department of Business Administration, Technology and Social Sciences, Human and technology. Istituto Italiano di Tecnologia, Genoa, Italy.
    Response Coordination Emerges in Cooperative but Not Competitive Joint Task. 2018. In: Frontiers in Psychology, ISSN 1664-1078, E-ISSN 1664-1078, Vol. 9, article id 1919. Article in journal (Refereed)
    Abstract [en]

    Effective social interactions rely on humans' ability to attune to others within social contexts. Recently, it has been proposed that the emergence of shared representations, as indexed by the Joint Simon effect (JSE), might result from interpersonal coordination (Malone et al., 2014). The present study aimed at examining interpersonal coordination in cooperative and competitive joint tasks. To this end, in two experiments we investigated response coordination, as reflected in instantaneous cross-correlation, when co-agents cooperate (Experiment 1) or compete against each other (Experiment 2). In both experiments, participants performed a go/no-go Simon task alone and together with another agent in two consecutive sessions. In line with previous studies, we found that social presence differently affected the JSE under cooperative and competitive instructions. Similarly, cooperation and competition were reflected in the co-agents' response coordination. For the cooperative session (Experiment 1), results showed a higher percentage of interpersonal coordination in the joint condition, relative to when participants performed the task alone. No difference in the coordination of responses occurred between the individual and the joint conditions when co-agents were in competition (Experiment 2). Finally, results showed that interpersonal coordination between co-agents implies the emergence of the JSE. Taken together, our results suggest that shared representations seem to be a necessary, but not sufficient, condition for interpersonal coordination.
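The "instantaneous cross-correlation" measure named in this abstract can be sketched generically as a windowed zero-lag correlation of two response-time series. This is an illustration of the general technique, not the authors' analysis code; the function names, window size, and threshold are arbitrary assumptions.

```python
import numpy as np

def windowed_cross_correlation(x, y, win=10):
    """Zero-lag Pearson correlation of two response-time series,
    computed in consecutive non-overlapping windows of length `win`."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    n_windows = min(len(x), len(y)) // win
    coeffs = []
    for i in range(n_windows):
        xs = x[i * win:(i + 1) * win]
        ys = y[i * win:(i + 1) * win]
        coeffs.append(np.corrcoef(xs, ys)[0, 1])
    return np.array(coeffs)

def coordination_percentage(coeffs, threshold=0.5):
    """Percentage of windows whose correlation exceeds the threshold;
    a rough stand-in for a 'percentage of interpersonal coordination'."""
    return 100.0 * np.mean(np.asarray(coeffs) > threshold)
```

Comparing this percentage between an individual and a joint condition would mirror the comparison described above, under the stated assumptions.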

  • 2.
    Ehrlich, Stefan
    et al.
    Institute for Cognitive Systems (ICS), Technische Universität München.
    Wykowska, Agnieszka
    Institute for Cognitive Systems (ICS), Technische Universität München.
    Ramirez-Amaro, Karinne
    Institute for Cognitive Systems (ICS), Technische Universität München.
    Cheng, Gordon
    Technische Universität München, Institute for Cognitive Systems.
    When to engage in interaction - And how?: EEG-based enhancement of robot's ability to sense social signals in HRI. 2014. In: IEEE-RAS International Conference on Humanoid Robots, Piscataway, NJ: IEEE Computer Society, 2014, p. 1104-1109, article id 7041506. Conference paper (Refereed)
    Abstract [en]

    Humanoids are, to date, still limited in reliably interpreting the social cues that humans convey, which restricts fluency and naturalness in social human-robot interaction (HRI). We propose a method to read out two important aspects of social engagement directly from the brain of a human interaction partner: (1) the intention to initiate eye contact and (2) the distinction between the observer being the initiator or the responder of an established gaze contact between human and robot. We suggest that these measures would give humanoids an important means for deciding when (timing) and how (social role) to engage in interaction with a human. We propose an experimental setup using iCub to evoke and capture the respective electrophysiological patterns via electroencephalography (EEG). Data analysis revealed biologically plausible brain activity patterns for both processes of social engagement. By using Support Vector Machine (SVM) classifiers with an RBF kernel, we showed that these patterns can be modeled with high within-participant accuracies of avg. 80.4% for (1) and avg. 77.0% for (2).
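The classification step mentioned here (RBF-kernel SVMs evaluated for within-participant accuracy) can be illustrated with a generic scikit-learn sketch; the synthetic features, class labels, and hyperparameters below are illustrative assumptions, not the study's data or pipeline.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(42)

# Synthetic stand-in for one participant's per-trial EEG feature vectors:
# two classes (e.g. "initiator" vs. "responder") with shifted means.
n_trials, n_features = 200, 16
X = np.vstack([rng.normal(0.0, 1.0, (n_trials, n_features)),
               rng.normal(0.8, 1.0, (n_trials, n_features))])
y = np.array([0] * n_trials + [1] * n_trials)

# Standardize the features, then classify with an RBF-kernel SVM,
# estimating within-participant accuracy via 5-fold cross-validation.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
scores = cross_val_score(clf, X, y, cv=5)
print(f"mean cross-validated accuracy: {scores.mean():.3f}")
```

Standardizing before the RBF kernel matters because the kernel's distance computation is scale-sensitive; the pipeline ensures scaling is fit only on each training fold.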

  • 3.
    Feldmann-Wüstefeld, Tobias
    et al.
    Department of Experimental and Biological Psychology, Philipps-University Marburg.
    Wykowska, Agnieszka
    Department of Psychology, Ludwig-Maximilians University, Munich.
    Schubö, Anna
    Department of Experimental and Biological Psychology, Philipps-University Marburg.
    Context heterogeneity has a sustained impact on attention deployment: Behavioral and electrophysiological evidence. 2013. In: Psychophysiology, ISSN 0048-5772, E-ISSN 1469-8986, Vol. 50, no 8, p. 722-733. Article in journal (Refereed)
    Abstract [en]

    In visual search, similar nearby stimuli can be grouped and thus enhance processing of an embedded target. The aim of the present study was to examine the time course of attention deployment after a brief presentation of stimulus arrays of different heterogeneity. Targets in less heterogeneous, grouped contexts yielded higher accuracy and larger N2pc amplitudes than targets in more heterogeneous, random contexts, indicating more efficient selection in the former. Subsequently presented probes yielded shorter reaction times and a larger posterior positivity when presented at the target location. This advantage was more pronounced after grouped, compared to random, contexts at the shorter, compared to the longer, interstimulus interval. The results show that less heterogeneous contexts that allow for grouping not only enhance processing of stimuli within that context, but have a sustained effect on visual attention.

  • 4.
    Gauchou, H.L.
    et al.
    Laboratoire Psychologie de la Perception, FRE 2929 CNRS, Université Paris Descartes, Paris.
    Wykowska, Agnieszka
    Department of Experimental Psychology, Ludwig Maximilians Universität, München.
    Schubö, Anna
    Department of Experimental Psychology, Ludwig Maximilians Universität, München.
    Regan, J.K.
    Laboratoire Psychologie de la Perception, FRE 2929 CNRS, Université Paris Descartes, Paris.
    An ERP study of visual-change detection: Is the N2 component a marker of consciousness? 2007. In: Perception, ISSN 0301-0066, E-ISSN 1468-4233, Vol. 36, no Suppl., p. 22. Article in journal (Refereed)
  • 5.
    Johansson, Jan
    et al.
    Luleå University of Technology, Department of Business Administration, Technology and Social Sciences, Human Work Science.
    Abrahamsson, Lena
    Luleå University of Technology, Department of Business Administration, Technology and Social Sciences, Human Work Science.
    Bergvall-Kåreborn, Birgitta
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Fältholm, Ylva
    Luleå University of Technology, Department of Business Administration, Technology and Social Sciences, Human Work Science.
    Grane, Camilla
    Luleå University of Technology, Department of Business Administration, Technology and Social Sciences, Human Work Science.
    Wykowska, Agnieszka
    Luleå University of Technology, Department of Business Administration, Technology and Social Sciences, Human Work Science.
    Work and Organization in a Digital Industrial Context. 2017. In: Management Revue, ISSN 0935-9915, E-ISSN 1861-9908, Vol. 28, no 3, p. 281-297. Article in journal (Refereed)
    Abstract [en]

    There are clear signs that digitalization attempts such as Industry 4.0 will become more apparent in workplaces. This development requires reflections and considerations so we do not create more problems than we solve. In our paper, we have raised several questions related to Industry 4.0 that need answers: Is Industry 4.0 a discourse, an organizational model, or just technology? Does the requirement for flexibility call for a new labour market? How will Industry 4.0 affect competence and skill requirements? Will Industry 4.0 encourage a new gender order? Will Industry 4.0 take over dangerous routine work, or will old work environmental problems appear in new contexts and for other groups of workers? Can we rely on robots as work mates, or will they spy on us and report to management? Based on our analysis, we addressed four knowledge gaps that need more research in relation to the digitalization of work: the relationship between new technology, working conditions, qualifications, identity, and gender; the future of the workers' collective; crowdsourcing in an industrial context; and human-machine interaction with a focus on integrity issues.

  • 6.
    Kajopoulos, Jasmin
    et al.
    Neuro-Cognitive Psychology Master Program, Department of Psychology, Ludwig-Maximilians-Universität Munich.
    Wong, Alvin Hong Yee
    Institute for Infocomm Research (I2R), Agency for Science, Technology and Research (A*STAR), Singapore.
    Dung, Tran Anh
    Institute for Infocomm Research (I2R), Agency for Science, Technology and Research (A*STAR), Singapore.
    Kee, Tan Yeow
    Institute for Infocomm Research (I2R), Agency for Science, Technology and Research (A*STAR), Singapore.
    Wykowska, Agnieszka
    General and Experimental Psychology Unit, Department of Psychology, Ludwig-Maximilians-Universität, Munich.
    Robot-Assisted Training of Joint Attention Skills in Children Diagnosed with Autism. 2015. In: Social Robotics (ICSR 2015) / [ed] Tapus, A; Andre, E; Martin, JC; Ferland, F; Ammi, M, Berlin: Springer Berlin/Heidelberg, 2015, p. 296-305. Conference paper (Refereed)
    Abstract [en]

    Due to technological and scientific advances, a new approach to autism therapy has emerged, namely robot-assisted therapy. However, as of now, no systematic studies have examined the specific cognitive mechanisms that are affected by robot-assisted training. This study used knowledge and methodology of experimental psychology to design a training protocol involving the pet robot CuDDler (A*STAR, Singapore), which targeted the specific cognitive mechanism of responding to joint attention (RJA). The training protocol used a modified attention cueing paradigm, where the head direction of the robot cued children's spatial attention to a stimulus presented on one of the sides of the robot. The children were engaged in a game that could be completed only through following the head direction of the robot. Over several weeks of training, children learned to follow the head movement of the robot and thus trained their RJA skills. Results showed improvement in RJA skills post training, relative to a pre-training test. Importantly, the RJA skills were transferred from interaction with the robot to interaction with the human experimenter. This shows that with the use of objective measures and protocols grounded in methods of experimental psychology, it is possible to design efficient training of specific social cognitive mechanisms, which are the basis for more complex social skills.

  • 7.
    Kompatsiari, K.
    et al.
    Istituto Italiano di Tecnologia, Social Cognition in Human-Robot Interaction, Centre for Human Technologies, Genoa, Italy. Ludwig Maximilian University, Planegg, Germany.
    Ciardo, F.
    Istituto Italiano di Tecnologia, Social Cognition in Human-Robot Interaction, Centre for Human Technologies, Genoa, Italy.
    Tikhanoff, V.
    Istituto Italiano di Tecnologia, iCub Facility, Genoa, Italy.
    Metta, G.
    Istituto Italiano di Tecnologia, iCub Facility, Genoa, Italy. University of Plymouth, Drake Circus, Plymouth, United Kingdom.
    Wykowska, Agnieszka
    Luleå University of Technology, Department of Business Administration, Technology and Social Sciences, Human and technology. Istituto Italiano di Tecnologia, Social Cognition in Human-Robot Interaction, Centre for Human Technologies, Genoa, Italy.
    It’s in the Eyes: The Engaging Role of Eye Contact in HRI. 2019. In: International Journal of Social Robotics, ISSN 1875-4791, E-ISSN 1875-4805. Article in journal (Refereed)
    Abstract [en]

    This paper reports a study where we examined how a humanoid robot was evaluated by users, dependent on established eye contact. In two experiments, the robot was programmed either to establish eye contact with the user or to look elsewhere. Across the experiments, we altered the level of predictiveness of the robot's gaze direction with respect to a subsequent target stimulus (in Exp. 1 the gaze direction was non-predictive, in Exp. 2 it was counter-predictive). Results of subjective reports showed that participants were sensitive to eye contact. Moreover, participants felt more engaged with the robot when it established eye contact, and the majority attributed a higher degree of human-likeness in the eye contact condition, relative to no eye contact. This was independent of the predictiveness of the gaze cue. Our results suggest that establishing eye contact by embodied humanoid robots has a positive impact on the perceived socialness of the robot, and on the quality of human–robot interaction (HRI). Therefore, establishing eye contact should be considered in designing robot behaviors for social HRI.

  • 8.
    Kompatsiari, Kyveli
    et al.
    Social Cognition in Human-Robot Interaction, Istituto Italiano di Tecnologia, Genoa .
    Tikhanoff, Vadim
    iCub Facility, Istituto Italiano di Tecnologia, Genoa.
    Ciardo, Francesca
    Social Cognition in Human-Robot Interaction, Istituto Italiano di Tecnologia, Genoa.
    Metta, Giorgio
    iCub Facility, Istituto Italiano di Tecnologia, Genoa.
    Wykowska, Agnieszka
    Luleå University of Technology, Department of Business Administration, Technology and Social Sciences, Human Work Science. Social Cognition in Human-Robot Interaction, Istituto Italiano di Tecnologia, Genoa.
    The Importance of Mutual Gaze in Human-Robot Interaction. 2017. In: ICSR: International Conference on Social Robotics / [ed] Kheddar A. et al., Cham, 2017, p. 443-452. Conference paper (Refereed)
    Abstract [en]

    Mutual gaze is a key element of human development, and constitutes an important factor in human interactions. In this study, we examined, through analysis of subjective reports, the influence of online eye contact of a humanoid robot on humans' reception of the robot. To this end, we manipulated the robot's gaze, i.e., mutual (social) gaze and neutral (non-social) gaze, throughout an experiment involving letter identification. Our results suggest that people are sensitive to the mutual gaze of an artificial agent: they feel more engaged with the robot when a mutual gaze is established, and eye contact supports attributing human-like characteristics to the robot. These findings are relevant both to human-robot interaction (HRI) research, enhancing social behavior of robots, and to cognitive neuroscience, studying mechanisms of social cognition in relatively realistic social interactive scenarios.

  • 9.
    Leszczyński, Marcin
    et al.
    Department of Psychology, Ludwig-Maximilians-Universität München.
    Wykowska, Agnieszka
    Department of Psychology, Ludwig-Maximilians-Universität München.
    Perez-Osorio, Jairo
    Department of Psychology, Ludwig-Maximilians University, Munich.
    Müller, Hermann J.
    Department of Psychology, Ludwig-Maximilians University, Munich.
    Deployment of spatial attention towards locations in memory representations. 2013. In: PLoS ONE, ISSN 1932-6203, E-ISSN 1932-6203, Vol. 8, no 12, article id 83856. Article in journal (Refereed)
    Abstract [en]

    Recalling information from visual short-term memory (VSTM) involves the same neural mechanisms as attending to an actually perceived scene. In particular, retrieval from VSTM has been associated with orienting of visual attention towards a location within a spatially-organized memory representation. However, an open question concerns whether spatial attention is also recruited during VSTM retrieval even when performing the task does not require access to spatial coordinates of items in the memorized scene. The present study combined a visual search task with a modified, delayed central probe protocol, together with EEG analysis, to answer this question. We found a temporal contralateral negativity (TCN) elicited by a centrally presented go-signal which was spatially uninformative and featurally unrelated to the search target, and which informed participants only about a response key that they had to press to indicate a prepared target-present vs. -absent decision. This lateralization during VSTM retrieval (TCN) provides strong evidence of a shift of attention towards the target location in the memory representation, which occurred despite the fact that the present task required no spatial (or featural) information from the search to be encoded, maintained, and retrieved to produce the correct response, and that the go-signal did not itself specify any information relating to the location and defining feature of the target.

  • 10.
    Perez-Osorio, Jairo
    et al.
    Department of Psychology, Ludwig-Maximilians University, Munich.
    Müller, Hermann J.
    Department of Psychology, Ludwig-Maximilians University, Munich.
    Wiese, Eva
    Department of Psychology, George Mason University, Fairfax.
    Wykowska, Agnieszka
    Department of Psychology, Ludwig-Maximilians University, Munich.
    Gaze following is modulated by expectations regarding others' action goals. 2015. In: PLoS ONE, ISSN 1932-6203, E-ISSN 1932-6203, Vol. 10, no 11, article id e0143614. Article in journal (Refereed)
    Abstract [en]

    Humans attend to social cues in order to understand and predict others' behavior. Facial expressions and gaze direction provide valuable information to infer others' mental states and intentions. The present study examined the mechanism of gaze following in the context of participants' expectations about successive action steps of an observed actor. We embedded a gaze-cueing manipulation within an action scenario consisting of a sequence of naturalistic photographs. Gaze-induced orienting of attention (gaze following) was analyzed with respect to whether the gaze behavior of the observed actor was in line or not with the action-related expectations of participants (i.e., whether the actor gazed at an object that was congruent or incongruent with an overarching action goal). In Experiment 1, participants followed the gaze of the observed agent, though the gaze-cueing effect was larger when the actor looked at an action-congruent object relative to an incongruent object. Experiment 2 examined whether the pattern of effects observed in Experiment 1 was due to covert, rather than overt, attentional orienting, by requiring participants to maintain eye fixation throughout the sequence of critical photographs (corroborated by monitoring eye movements). The essential pattern of results of Experiment 1 was replicated, with the gaze-cueing effect being completely eliminated when the observed agent gazed at an action-incongruent object. Thus, our findings show that covert gaze following can be modulated by expectations that humans hold regarding successive steps of the action performed by an observed agent.

  • 11.
    Perez-Osorio, Jairo
    et al.
    Center for Human Technologies, Istituto Italiano di Tecnologia, Genoa.
    Müller, Hermann Josef
    General and Experimental Psychology, Department Of Psychology, Ludwig-Maximilians-Universität.
    Wykowska, Agnieszka
    Luleå University of Technology, Department of Business Administration, Technology and Social Sciences, Human Work Science.
    Expectations regarding action sequences modulate electrophysiological correlates of the gaze-cueing effect. 2017. In: Psychophysiology, ISSN 0048-5772, E-ISSN 1469-8986, Vol. 54, no 7, p. 942-954. Article in journal (Refereed)
    Abstract [en]

    Predictive mechanisms of the brain are important for social cognition, as they enable inferences about others' goals and intentions, thereby allowing for generation of expectations regarding what will happen next in the social environment. Therefore, attentional selection is modulated by expectations regarding the behavior of others (Perez-Osorio, Müller, Wiese, & Wykowska, 2015). In this article, we examined, using the ERPs of the EEG signal, which stages of processing are influenced by expectations about others' action steps. We used a paradigm in which a gaze-cueing procedure was embedded in successively presented naturalistic photographs composing an action sequence. Our results showed (a) behavioral gaze-cueing effects modulated by whether the observed agent gazed at an object that was expected to be gazed at, according to the action sequence; (b) that the N1 component locked to the onset of a target was modulated both by spatial gaze validity and participants' expectations about where the agent would gaze to perform an action; and (c) a more positive amplitude, locked to the shift of gaze direction for action-congruent gaze, relative to incongruent and neutral conditions, over parieto-occipital areas in the time window between 280 and 380 ms. Taken together, these findings revealed that confirmation or violation of expectations concerning others' goal-oriented actions modulates attentional selection processes, as indexed by early ERP components.

  • 12.
    Schubö, Anna
    et al.
    Department of Experimental and Biological Psychology, Philipps-University Marburg, Department of Psychology, Ludwig-Maximilians University, Munich.
    Wykowska, Agnieszka
    Department of Psychology, Ludwig-Maximilians University, Munich.
    Müller, Hermann J.
    Department of Psychology, Ludwig-Maximilians University, Munich.
    Detecting pop-out targets in contexts of varying homogeneity: Investigating homogeneity coding with event-related brain potentials (ERPs). 2007. In: Brain Research, ISSN 0006-8993, E-ISSN 1872-6240, Vol. 1138, no 1, p. 136-147. Article in journal (Refereed)
    Abstract [en]

    Searching for a target among many distracting context elements might be an easy or a demanding task. Duncan and Humphreys (Duncan, J., Humphreys, G.W., 1989. Visual search and stimulus similarity. Psychol. Rev. 96, 433-458) showed that not only the target itself plays a role in the difficulty of target detection. Similarity among context elements and dissimilarity of target and context are two main factors also affecting search efficiency. Moreover, many studies have shown that search becomes particularly efficient with large set sizes and perfectly homogeneous context elements, presumably due to grouping processes involved in target-context segmentation. In particular, the N2p amplitude has been found to be modulated by the number of context elements and their homogeneity. The aim of the present study was to investigate the influence of context elements of different heterogeneities on search performance using event-related brain potentials (ERPs). Results showed that contexts with perfectly homogeneous elements were indeed special: they were most efficient in visual search and elicited a large N2p differential amplitude effect. Increasing context heterogeneity led to a decrease in search performance and a reduction in N2p differential amplitude. Reducing the number of context elements led to a marked performance decrease for random heterogeneous contexts but not for grouped heterogeneous contexts. Behavioral and N2p results delivered evidence (a) in favor of specific processing modes operating on different spatial scales, and (b) for the existence of the homogeneity coding postulated by Duncan and Humphreys.

  • 13.
    Wiese, Eva
    et al.
    Department of Psychology, George Mason University.
    Müller, Hermann Josef
    General and Experimental Psychology, Department Of Psychology, Ludwig-Maximilians-Universität.
    Wykowska, Agnieszka
    Luleå University of Technology, Department of Business Administration, Technology and Social Sciences, Human Work Science.
    Using a gaze-cueing paradigm to examine social cognitive mechanisms of individuals with autism observing robot and human faces. 2014. In: Social robotics: 6th International Conference, ICSR 2014, Sydney, NSW, Australia, October 27-29, 2014. Proceedings / [ed] Beetz M., Johnston B., Williams M.-A., Cham: Springer, 2014, p. 370-379. Conference paper (Refereed)
    Abstract [en]

    This paper reports a study in which we investigated whether individuals with autism spectrum disorder (ASD) are more likely to follow the gaze of a robot than of a human. By gaze following, we refer to one of the most fundamental mechanisms of social cognition, i.e., orienting attention to where others look. Individuals with ASD sometimes display a reduced ability to follow gaze [1] or to read out intentions from gaze direction [2]. However, as they generally respond well to robots [3], we reasoned that they might be more likely to follow the gaze of robots, relative to humans. We used a version of a gaze-cueing paradigm [4, 5] and recruited 18 participants diagnosed with ASD. Participants observed a human or a robot face, and their task was to discriminate a target presented either at the side validly cued by the gaze of the human or robot, or at the opposite side. We observed typical validity effects: faster reaction times (RTs) to validly cued targets, relative to invalidly cued targets. However, and most importantly, the validity effect was larger and significant for the robot faces, as compared to the human faces, where the validity effect did not reach significance. This shows that individuals with ASD are more likely to follow the gaze of robots, relative to humans, suggesting that the success of robots in involving individuals with ASD in interactions might be due to a very fundamental mechanism of social cognition. Our present results can also provide avenues for future training programs for individuals with ASD.

  • 14.
    Wiese, Eva
    et al.
    Department of Psychology, George Mason University, Fairfax, Department of General and Experimental Psychology, Ludwig-Maximilian-University.
    Wykowska, Agnieszka
    Department of General and Experimental Psychology, Ludwig-Maximilian-University, Munich.
    Müller, Hermann J.
    Department of Psychology, Ludwig-Maximilians University, Munich, Department of General and Experimental Psychology, Ludwig-Maximilian-University, Munich.
    What we observe is biased by what other people tell us: Beliefs about the reliability of gaze behavior modulate attentional orienting to gaze cues. 2014. In: PLoS ONE, ISSN 1932-6203, E-ISSN 1932-6203, Vol. 9, no 4, article id e94529. Article in journal (Refereed)
    Abstract [en]

    For effective social interactions with other people, information about the physical environment must be integrated with information about the interaction partner. In order to achieve this, processing of social information is guided by two components: a bottom-up mechanism reflexively triggered by stimulus-related information in the social scene and a top-down mechanism activated by task-related context information. In the present study, we investigated whether these components interact during attentional orienting to gaze direction. In particular, we examined whether the spatial specificity of gaze cueing is modulated by expectations about the reliability of gaze behavior. Expectations were either induced by instruction or could be derived from experience with displayed gaze behavior. Spatially specific cueing effects were observed with highly predictive gaze cues, but also when participants merely believed that actually non-predictive cues were highly predictive. Conversely, cueing effects for the whole gazed-at hemifield were observed with non-predictive gaze cues, and spatially specific cueing effects were attenuated when actually predictive gaze cues were believed to be non-predictive. This pattern indicates that (i) information about cue predictivity gained from sampling gaze behavior across social episodes can be incorporated in the attentional orienting to social cues, and that (ii) beliefs about gaze behavior modulate attentional orienting to gaze direction even when they contradict information available from social episodes.

  • 15.
    Wiese, Eva
    et al.
    Department of Experimental Psychology, Ludwig Maximilian University Munich.
    Wykowska, Agnieszka
    Müller, Hermann Josef
    Department of Experimental Psychology, Ludwig Maximilian University Munich.
    Making eyes with robots: Readiness to engage in human-robot-interaction depends on the attribution of intentionality. 2013. In: Proceedings of the Human Factors and Ergonomics Society annual meeting: Sept. 30 - Oct. 4, 2013, San Diego, California USA, 2013, p. 1174-1178. Conference paper (Refereed)
    Abstract [en]

    One of the most important aims of social robotics is to improve Human-Robot Interaction by providing robots with means to understand observed behavior and to predict upcoming actions of their interaction partners. The most reliable source for inferring the action goals of interaction partners is their gaze direction. Hence, to anticipate upcoming actions, it is necessary to identify where others are currently looking and to shift the attentional focus to the same location. Interestingly, it has been shown that observing robot gaze direction also induces attentional shifts to the location that is gazed at by the robot. Given this, gaze direction can be actively used by the robot to direct the attentional focus of interaction partners to important events in the world. In this paper, we review findings from two studies indicating that the readiness to engage attentional resources in interactions with robots is modulated by the degree to which intentionality can be attributed to the robot: robots believed to behave similarly to humans cause stronger gaze-cueing effects than robots perceived as machines, independently of their physical appearance. Based on these findings, and on results from a pilot study with a sample of patients diagnosed with autism spectrum disorder (ASD), we derive guidelines for improving human-robot interaction by emulating social gaze behavior.

  • 16.
    Wiese, Eva
    et al.
    Department of Psychology, George Mason University, Fairfax, Ludwig-Maximilians-Universität, München.
    Wykowska, Agnieszka
    Ludwig-Maximilians-Universität, München.
    Zwickel, Jan
    Ludwig-Maximilians-Universität, München.
    Müller, Hermann J.
    Department of Psychology, Ludwig-Maximilians University, Munich, Ludwig-Maximilians-Universität, München.
    I See What You Mean: Attentional Selection Is Shaped by Ascribing Intentions to Others. 2012. In: PLoS ONE, ISSN 1932-6203, E-ISSN 1932-6203, Vol. 7, no 9, article id e45391. Article in journal (Refereed)
    Abstract [en]

    The ability to understand and predict others' behavior is essential for successful interactions. When making predictions about what other humans will do, we treat them as intentional systems and adopt the intentional stance, i.e., refer to their mental states such as desires and intentions. In the present experiments, we investigated whether the mere belief that the observed agent is an intentional system influences basic social attention mechanisms. We presented pictures of a human and a robot face in a gaze cuing paradigm and manipulated the likelihood of adopting the intentional stance by instruction: in some conditions, participants were told that they were observing a human or a robot; in others, that they were observing a human-like mannequin or a robot whose eyes were controlled by a human. In conditions in which participants were made to believe they were observing human behavior (intentional stance likely), gaze cuing effects were significantly larger than in conditions in which adopting the intentional stance was less likely. This effect was independent of whether a human or a robot face was presented. Therefore, we conclude that adopting the intentional stance when observing others' behavior fundamentally influences basic mechanisms of social attention. The present results provide striking evidence that high-level cognitive processes, such as beliefs, modulate bottom-up mechanisms of attentional selection in a top-down manner.

  • 17.
    Wykowska, Agnieszka
    et al.
    Department of Psychology, Ludwig-Maximilians University, Munich.
    Anderl, Christine
    Leiden University.
    Schubö, Anna
    Department of Experimental and Biological Psychology, Philipps-University Marburg, Goethe University, Frankfurt.
    Hommel, Bernard
    Philipps University, Marburg.
    Motivation modulates visual attention: evidence from pupillometry2013In: Frontiers in Psychology, ISSN 1664-1078, E-ISSN 1664-1078, Vol. 4, article id 59Article in journal (Refereed)
    Abstract [en]

    Increasing evidence suggests that action planning not only affects the preparation and execution of overt actions but also "works back" to tune the perceptual system toward action-relevant information. We investigated whether the strength of this impact of action planning on perceptual selection varies as a function of motivation for action, which was assessed online by means of pupillometry (Experiment 1) and visual analog scales (VAS, Experiment 2). Findings replicate the earlier observation that searching for size-defined targets is more efficient in the context of grasping than in the context of pointing movements (Wykowska et al., 2009). As expected, changes in tonic pupil size (reflecting changes in effort and motivation) across the sessions, as well as changes in motivation-related scores on the VAS, were found to correlate with changes in the size of the action-perception congruency effect. We conclude that motivation and effort might play a crucial role in how much participants prepare for an action and activate action codes. The degree of activation of action codes in turn influences the observed action-related biases on perception.

  • 18.
    Wykowska, Agnieszka
    et al.
    Allgemeine und Experimentelle Psychologie, Department Psychologie, Ludwig-Maximilians-Universität, Munich.
    Arstila, Valtteri
    Department of Philosophy, and Centre for Cognitive Neuroscience, Department of Psychology, University of Turku.
    On the flexibility of human temporal resolution2014In: Subjective Time: The Philosophy, Psychology, and Neuroscience of Temporality, Cambridge, Massachusetts: MIT Press, 2014, p. 431-452Chapter in book (Other academic)
  • 19.
    Wykowska, Agnieszka
    et al.
    Luleå University of Technology, Department of Business Administration, Technology and Social Sciences, Human Work Science. Technische Universität München, Institute for Cognitive Systems, Arcisstraße 21, München, Germany.
    Chaminade, Thierry
    Institut de Neurosciences de la Timone, Aix Marseille University—CNRS, Marseille.
    Cheng, Gordon
    Technische Universität München, Institute for Cognitive Systems.
    Embodied artificial agents for understanding human social cognition2016In: Philosophical Transactions of the Royal Society of London. Biological Sciences, ISSN 0962-8436, E-ISSN 1471-2970, Vol. 371, no 1693, article id 20150375Article in journal (Refereed)
    Abstract [en]

    In this paper, we propose that experimental protocols involving artificial agents, in particular embodied humanoid robots, provide insightful information regarding social cognitive mechanisms in the human brain. Using artificial agents allows for manipulation and control of various parameters of behaviour, appearance and expressiveness in one of the interaction partners (the artificial agent), and for examining the effect of these parameters on the other interaction partner (the human). At the same time, using artificial agents means introducing the presence of artificial, yet human-like, systems into the human social sphere. This allows for testing, in a controlled but ecologically valid manner, fundamental mechanisms of human social cognition at both the behavioural and the neural level. This paper will review existing literature reporting studies in which artificial embodied agents have been used to study social cognition, and will address the question of whether various mechanisms of social cognition (ranging from lower- to higher-order cognitive processes) are evoked by artificial agents to the same extent as by natural agents, humans in particular. Increasing the understanding of how behavioural and neural mechanisms of social cognition respond to artificial anthropomorphic agents provides empirical answers to the conundrum ‘What is a social agent?’

  • 20.
    Wykowska, Agnieszka
    et al.
    General and Experimental Psychology Unit, Department of Psychology, Ludwig-Maximilians-Universität.
    Chellali, Ryad
    Instituto Italiano di Tecnologia-PAVIS.
    al-Amin, Md Mamun
    General and Experimental Psychology Unit, Department of Psychology, Ludwig-Maximilians-Universität.
    Müller, Hermann Josef
    General and Experimental Psychology, Department Of Psychology, Ludwig-Maximilians-Universität.
    Does observing artificial robotic systems influence human perceptual processing in the same way as observing humans?2012Conference paper (Refereed)
    Abstract [en]

    Humanoid robots are designed and shaped to have physical bodies resembling humans. The anthropomorphic shape aims at facilitating interactions between humans and robots, with the ultimate goal of making robots acceptable social partners. This attempt is not very new to roboticists, and there is an increasing body of research showing the importance of robots' appearance in HRI; the Uncanny Valley proposed in the 70's [1] is, however, still an open problem. Our aim in this contribution is to examine how human perceptual mechanisms involved in action observation are influenced by the external shape of observed robots. Our present results show that observing robotic/cartoon hands performing grasping/pointing movements elicits similar perceptual mechanisms as observing other humans. Hence, it seems that observing actions of artificial systems can induce similar perceptual effects as observing actions of humans.

  • 21.
    Wykowska, Agnieszka
    et al.
    General and Experimental Psychology Unit, Department of Psychology, Ludwig-Maximilians-Universität München.
    Chellali, Ryad
    Pattern Analysis and Computer Vision, Istituto Italiano di Tecnologia-PAVIS, Via Morego, 30, 16165 Genova.
    al-Amin, Md Mamun
    General and Experimental Psychology Unit, Department of Psychology, Ludwig-Maximilians-Universität München.
    Müller, Hermann J.
    Department of Psychology, Ludwig-Maximilians University, Munich, General and Experimental Psychology Unit, Department of Psychology, Ludwig-Maximilians-Universität München.
    Implications of Robot Actions for Human Perception: How Do We Represent Actions of the Observed Robots?2014In: International Journal of Social Robotics, ISSN 1875-4791, E-ISSN 1875-4805, Vol. 6, no 3, p. 357-366Article in journal (Refereed)
    Abstract [en]

    Social robotics aims at developing robots that are to assist humans in their daily lives. To achieve this aim, robots must act in a comprehensible and intuitive manner for humans. That is, humans should be able to cognitively represent robot actions easily, in terms of action goals and means to achieve them. This raises the question of how actions are represented in general. Based on ideomotor theories (Greenwald Psychol Rev 77:73-99, 1970) and accounts postulating a common code between action and perception (Hommel et al. Behav Brain Sci 24:849-878, 2001), as well as empirical evidence (Wykowska et al. J Exp Psychol 35:1755-1769, 2009), we argue that the action and perception domains are tightly linked in the human brain. The aim of the present study was to examine whether robot actions would be represented similarly, and in consequence elicit similar perceptual effects, as human actions. Our results showed that robot actions indeed elicited perceptual effects of the same kind as human actions, arguing that humans are capable of representing robot actions in a manner similar to human actions. Future research will aim at examining how much these representations depend on the physical properties of the robot actor and its behavior.

  • 22.
    Wykowska, Agnieszka
    et al.
    Department of Psychology, Ludwig-Maximilians University, Munich.
    Hommel, Bernard
    Philipps University, Marburg, Leiden University.
    Schubö, Anna
    Department of Experimental and Biological Psychology, Philipps-University Marburg, University of Marburg.
    Action-induced effects on perception depend neither on element-level nor on set-level similarity between stimulus and response sets2011In: Attention, Perception & Psychophysics, ISSN 1943-3921, E-ISSN 1943-393X, Vol. 73, no 4, p. 1034-1041Article in journal (Refereed)
    Abstract [en]

    As was shown by Wykowska, Schubö, and Hommel (Journal of Experimental Psychology: Human Perception and Performance, 35, 1755-1769, 2009), action control can affect rather early perceptual processes in visual search: size pop-outs are detected faster when participants have prepared for a manual grasping action, whereas luminance pop-outs benefit from preparing for a pointing action. In the present study, we demonstrate that this effect of action-target congruency does not rely on, or vary with, set-level similarity or element-level similarity between perception and action, two factors that play crucial roles in standard stimulus-response interactions and in models accounting for these interactions. This result suggests that action control biases perceptual processes in specific ways that go beyond standard stimulus-response compatibility effects, and it supports the idea that action-target congruency taps into a fundamental characteristic of human action control.

  • 23.
    Wykowska, Agnieszka
    et al.
    Department of Psychology, Ludwig-Maximilians-Universität.
    Hommel, Bernard
    Philipps University, Marburg, Department of Psychology, Leiden University.
    Schubö, Anna
    Department of Experimental and Biological Psychology, Philipps-University Marburg.
    Imaging when acting: Picture but not word cues induce action-related biases of visual attention2012In: Frontiers in Psychology, ISSN 1664-1078, E-ISSN 1664-1078, Vol. 3, no Issue OCT, article id 388Article in journal (Refereed)
    Abstract [en]

    In line with the Theory of Event Coding (Hommel et al., 2001a), action planning has been shown to affect perceptual processing, an effect that has been attributed to a so-called intentional weighting mechanism (Wykowska et al., 2009; Memelink and Hommel, 2012), whose functional role is to provide information for open parameters of online action adjustment (Hommel, 2010). The aim of this study was to test whether different types of action representations induce intentional weighting to different degrees. To meet this aim, we introduced a paradigm in which participants performed a visual search task while preparing to grasp or to point. The to-be-performed movement was signaled either by a picture of the required action or by a word cue. We reasoned that picture cues might trigger a more concrete action representation that would be more likely to activate the intentional weighting of perceptual dimensions that provide information for online action control. In contrast, word cues were expected to trigger a more abstract action representation that would be less likely to induce intentional weighting. In two experiments, preparing for an action facilitated the processing of targets in an unrelated search task if they differed from distractors on a dimension that provided information for online action control. As predicted, however, this effect was observed only if action preparation was signaled by picture cues, but not if it was signaled by word cues. We conclude that picture cues are more efficient than word cues in activating the intentional weighting of perceptual dimensions, presumably by specifying not only invariant characteristics of the planned action but also the dimensions of action-specific parameters.

  • 24.
    Wykowska, Agnieszka
    et al.
    General and Experimental Psychology Unit, Department of Psychology, Ludwig-Maximilians-Universität, Leopoldstr. 13, Munich.
    Kajopoulos, Jasmin
    General and Experimental Psychology Unit, Department of Psychology, Ludwig-Maximilians-Universität, Leopoldstr. 13, Munich.
    Obando-Leitón, Miguel
    Graduate School of Systemic Neurosciences, Ludwig-Maximilians-Universität, Großhaderner Str. 2, Planegg-Martinsried.
    Chauhan, Sushil Singh
    Singapore Institute for Neurotechnology (SINAPSE), National University of Singapore.
    Cabibihan, John-John
    Department of Mechanical and Industrial Engineering, Qatar University.
    Cheng, Gordon
    Institute for Cognitive Systems, Technische Universität München.
    Humans are Well Tuned to Detecting Agents Among Non-agents: Examining the Sensitivity of Human Perception to Behavioral Characteristics of Intentional Systems2015In: International Journal of Social Robotics, ISSN 1875-4791, E-ISSN 1875-4805, Vol. 7, no 5, p. 767-781Article in journal (Refereed)
    Abstract [en]

    For efficient social interactions, humans have developed means to predict and understand others' behavior, often with reference to intentions and desires. To infer others' intentions, however, one must assume that the other is an agent with a mind and mental states. With two experiments, this study examined whether the human perceptual system is sensitive to detecting human agents based on only subtle behavioral cues. Participants observed robots, which performed pointing gestures interchangeably to the left or right with one of their two arms. Onset times of the pointing movements were either pre-programmed, human-controlled (Experiment 1), or modeled after human behavior (Experiment 2). The task was to determine whether the observed behavior was controlled by a human or by a computer program, without any information about what parameters of behavior this judgment should be based on. Results showed that participants were able to detect human behavior above chance in both experiments. Moreover, participants were asked to discriminate a letter (F/T) presented on the left or the right side of a screen. The letter could be either validly cued by the robot (the location that the robot pointed to coincided with the location of the letter) or invalidly cued (the robot pointed to the location opposite to where the letter was presented). In this cueing task, target discrimination was better for valid versus invalid conditions in Experiment 1, where a human face was presented centrally on a screen throughout the experiment. This effect was not significant in Experiment 2, where participants were exposed only to a robotic face. In sum, the present results show that the human perceptual system is sensitive to subtleties of human behavior. Attending to where others attend, however, is modulated not only by adopting the Intentional Stance but also by the way participants interpret the observed stimuli.

  • 25.
    Wykowska, Agnieszka
    et al.
    General and Experimental Psychology Unit, Department of Psychology, Ludwig-Maximilians-Universität München.
    Kajopoulos, Jasmin
    General and Experimental Psychology Unit, Department of Psychology, Ludwig-Maximilians-Universität, Leopoldstr. 13, Munich.
    Ramirez-Amaro, Karinne
    Department for Cognitive Systems, Technische Universität München.
    Cheng, Gordon
    Institute for Cognitive Systems, Technische Universität München.
    Autistic traits and sensitivity to human-like features of robot behavior2015In: Interaction Studies: Social Behaviour and Communication in Biological and Artificial Systems, ISSN 1572-0373, E-ISSN 1572-0381, Vol. 16, no 2, p. 219-248Article in journal (Refereed)
    Abstract [en]

    This study examined individual differences in sensitivity to human-like features of a robot's behavior. The paradigm comprised a non-verbal Turing test with a humanoid robot. A "programmed" condition differed from a "human-controlled" condition by onset times of the robot's eye movements, which were either fixed across trials or modeled after prerecorded human reaction times, respectively. Participants judged whether the robot behavior was programmed or human-controlled, with no information regarding the differences between respective conditions. Autistic traits were measured with the autism-spectrum quotient (AQ) questionnaire in healthy adults. We found that the fewer autistic traits participants had, the more sensitive they were to the difference between the conditions, without explicit awareness of the nature of the difference. We conclude that although sensitivity to fine behavioral characteristics of others varies with social aptitude, humans are in general capable of detecting human-like behavior based on very subtle cues.

  • 26.
    Wykowska, Agnieszka
    et al.
    Department of Experimental Psychology, Ludwig Maximilians Universität, München, .
    Maldonado, Alexis
    Computer Science Department, Chair IX, Technische Universität, München.
    Beetz, Michael
    Computer Science Department, Chair IX, Technische Universität, München.
    Schubö, Anna
    Department of Experimental Psychology, Ludwig Maximilians Universität, München.
    How humans optimize their interaction with the environment: The impact of action context on human perception2009In: Communications in Computer and Information Science, ISSN 1865-0929, E-ISSN 1865-0937, Vol. 44, p. 162-172Article in journal (Refereed)
    Abstract [en]

    Humans have developed various mechanisms to optimize interaction with the environment. Optimization of action planning requires efficient selection of action-relevant features. Selection might also depend on the environmental context in which an action takes place. The present study investigated how action context influences perceptual processing in action planning. The experimental paradigm comprised two independent tasks: (1) a perceptual visual search task and (2) a grasping or a pointing movement. Reaction times in the visual search task were measured as a function of the movement type (grasping vs. pointing) and context complexity (context varying along one dimension vs. context varying along two dimensions). Results showed that action context influenced reaction times, which suggests a close bidirectional link between action and perception as well as an impact of environmental action context on perceptual selection in the course of action planning. These findings are discussed in the context of applications for robotics.

  • 27.
    Wykowska, Agnieszka
    et al.
    Department of Experimental Psychology, Ludwig Maximilians Universität, München.
    Maldonado, Alexis
    Computer Science Department, Chair IX, Technische Universität, München.
    Beetz, Michael
    Computer Science Department, Chair IX, Technische Universität, München.
    Schubö, Anna
    Department of Experimental and Biological Psychology, Philipps-University Marburg, Department of Experimental Psychology, Ludwig Maximilians Universität, München.
    How humans optimize their interaction with the environment: The impact of action context on human perception2011In: International Journal of Social Robotics, ISSN 1875-4791, E-ISSN 1875-4805, Vol. 3, no 3, p. 223-231Article in journal (Refereed)
    Abstract [en]

    This paper reports empirical findings on human performance in an experiment comprising a perceptual task and a motor task. Such findings should be considered in the design of robots, since drawing inspiration from natural solutions should not only prove beneficial for artificial systems but also make human-robot interaction more efficient and safe. Humans have developed various mechanisms to optimize the way actions are performed and the effects they induce. Optimization of action planning (e.g., grasping, reaching or lifting objects) requires efficient selection of action-relevant features. Selection might also depend on the environmental context in which an action takes place. The present study investigated how action context influences perceptual processing in action planning. The experimental paradigm comprised two independent tasks: (1) a perceptual visual search task and (2) a grasping or a pointing movement. Reaction times in the visual search task were measured as a function of the movement type (grasping vs. pointing) and context complexity (context varying along one dimension vs. context varying along two dimensions). Results showed that action context influenced reaction times, which suggests a close bidirectional link between action and perception as well as an impact of environmental action context on perceptual selection in the course of action planning. These findings are discussed in the context of applications for robotics and the design of user interfaces.

  • 28.
    Wykowska, Agnieszka
    et al.
    University of Münich.
    Müller, Hermann Josef
    General and Experimental Psychology, Department Of Psychology, Ludwig-Maximilians-Universität.
    Schubö, Anna
    Department of Experimental Psychology, Ludwig Maximilians Universität, München.
    How do we ignore the irrelevant?: Electrophysiological correlates of the attentional set2011In: Perception, ISSN 0301-0066, E-ISSN 1468-4233, Vol. 40, no Suppl. S, p. 47-48Article in journal (Refereed)
  • 29.
    Wykowska, Agnieszka
    et al.
    University of Munich.
    Rangelov, Dragan
    University of Munich.
    Review of: Effortless Attention: A New Perspective in the Cognitive Science of Attention and Action, Cambridge, MA: MIT Press, A Bradford Book, 20102014In: Journal of consciousness studies, ISSN 1355-8250, E-ISSN 2051-2201, Vol. 21, no 1-2, p. 209-215Article, book review (Refereed)
  • 30.
    Wykowska, Agnieszka
    et al.
    General and Experimental Psychology Unit, Department of Psychology, Ludwig-Maximilians-Universität München.
    Schubö, Anna
    Department of Experimental and Biological Psychology, Philipps-University Marburg, Fachbereich Psychologie, Philipps-Universität, Marburg.
    Action intentions modulate allocation of visual attention: electrophysiological evidence2012In: Frontiers in Psychology, ISSN 1664-1078, E-ISSN 1664-1078, Vol. 3, article id 379Article in journal (Refereed)
    Abstract [en]

    In line with the Theory of Event Coding (Hommel et al., 2001), action planning has been shown to affect perceptual processing, an effect that has been attributed to a so-called intentional weighting mechanism (Wykowska et al., 2009; Hommel, 2010). This paper investigates the electrophysiological correlates of action-related modulations of selection mechanisms in visual perception. A paradigm combining a visual search task for size and luminance targets with a movement task (grasping or pointing) was introduced, and the EEG was recorded while participants performed the tasks. The results showed that the behavioral congruency effects, i.e., better performance in congruent relative to incongruent action-perception trials, were reflected in a modulation of the P1 component as well as the N2pc (an ERP marker of spatial attention). These results support the view that action planning modulates perceptual processing and attention mechanisms already at early stages.

  • 31.
    Wykowska, Agnieszka
    et al.
    University of Munich.
    Schubö, Anna
    Department of Experimental Psychology, Ludwig Maximilians Universität, München.
    An ERP study on the time course of top-down control of visual attention2008In: International Journal of Psychology, ISSN 0020-7594, E-ISSN 1464-066X, Vol. 43, no 3-4, p. 481-482Article in journal (Refereed)
  • 32.
    Wykowska, Agnieszka
    et al.
    University of Munich.
    Schubö, Anna
    Department of Experimental Psychology, Ludwig Maximilians Universität, München.
    Attending to what is relevant!: Irrelevant singletons do not capture attention but can produce filtering costs2010In: Perception, ISSN 0301-0066, E-ISSN 1468-4233, Vol. 39, no Suppl. S, p. 117-Article in journal (Refereed)
  • 33.
    Wykowska, Agnieszka
    et al.
    Department of Psychology, Ludwig-Maximilians University, Munich.
    Schubö, Anna
    Department of Experimental and Biological Psychology, Philipps-University Marburg, Department of Psychology, Ludwig-Maximilians University, Munich.
    Irrelevant singletons in visual search do not capture attention but can produce nonspatial filtering costs2011In: Journal of cognitive neuroscience, ISSN 0898-929X, E-ISSN 1530-8898, Vol. 23, no 3, p. 645-660Article in journal (Refereed)
    Abstract [en]

    It is not clear how salient distractors affect visual processing. The debate concerning the issue of whether irrelevant salient items capture spatial attention [e.g., Theeuwes, J., Atchley, P., & Kramer, A. F. On the time course of top-down and bottom-up control of visual attention. In S. Monsell & J. Driver (Eds.), Attention and performance XVIII: Control of cognitive performance (pp. 105-124). Cambridge, MA: MIT Press, 2000] or produce only nonspatial interference in the form of, for example, filtering costs [Folk, Ch. L., & Remington, R. Top-down modulation of preattentive processing: Testing the recovery account of contingent capture. Visual Cognition, 14, 445-465, 2006] has not yet been settled. The present ERP study examined deployment of attention in visual search displays that contained an additional irrelevant singleton. Display-locked N2pc showed that attention was allocated to the target and not to the irrelevant singleton. However, the onset of the N2pc to the target was delayed when the irrelevant singleton was presented in the opposite hemifield relative to the same hemifield. Thus, although attention was successfully focused on the target, the irrelevant singleton produced some interference, resulting in a delayed allocation of attention to the target. A subsequent probe discrimination task allowed for locking ERPs to probe onsets and investigating the dynamics of sensory gain control for probes appearing at relevant (target) or irrelevant (singleton distractor) positions. Probe-locked P1 showed sensory gain for probes positioned at the target location but no such effect for irrelevant singletons in the additional-singleton condition. Taken together, the present data support the claim that irrelevant singletons do not capture attention. If they produce any interference, it is rather due to nonspatial filtering costs.

  • 34.
    Wykowska, Agnieszka
    et al.
    Department of Psychology, Ludwig-Maximillian University.
    Schubö, Anna
    Department of Experimental and Biological Psychology, Philipps-University Marburg, Department of Psychology, Ludwig-Maximillian University.
    On the temporal relation of top-down and bottom-up mechanisms during guidance of attention2010In: Journal of cognitive neuroscience, ISSN 0898-929X, E-ISSN 1530-8898, Vol. 22, no 4, p. 640-654Article in journal (Refereed)
    Abstract [en]

    Two mechanisms are said to be responsible for guiding focal attention in visual selection: bottom-up, saliency-driven capture and top-down control. These mechanisms were examined with a paradigm that combined a visual search task with postdisplay probe detection. Two SOAs between the search display and probe onsets were introduced to investigate how attention was allocated to particular items at different points in time. The dynamic interplay between bottom-up and top-down mechanisms was investigated with ERP methodology. ERPs locked to the search displays showed that top-down control needed time to develop. The N2pc indicated allocation of attention to the target item and not to the irrelevant singleton. ERPs locked to probes revealed modulations in the P1 component reflecting top-down control of focal attention at the long SOA. Early bottom-up effects were observed in the error rates at the short SOA. Taken together, the present results show that the top-down mechanism takes time to guide focal attention to the relevant target item and that it is potent enough to limit bottom-up attentional capture.

  • 35.
    Wykowska, Agnieszka
    et al.
    Department of Psychology, Ludwig-Maximilians University, Munich.
    Schubö, Anna
    Department of Experimental and Biological Psychology, Philipps-University Marburg.
    Perception and Action as Two Sides of the Same Coin: A Review of the Importance of Action-Perception Links in Humans for Social Robot Design and Research2012In: International Journal of Social Robotics, ISSN 1875-4791, E-ISSN 1875-4805, Vol. 4, no 1, p. 5-14Article in journal (Refereed)
    Abstract [en]

    This paper focuses on the topic of human cognitive architecture in the context of links between action and perception. Results from behavioural studies, neuroimaging, human electrophysiology as well as single-cell studies in monkeys are discussed. These data, as well as the theoretical background, are brought forward to argue that a close connection between action and perception should be considered in the design of artificial systems. Examples of such systems are described, and the application of those approaches to robotics is stressed.

  • 36.
    Wykowska, Agnieszka
    et al.
    General and Experimental Psychology Unit, Department of Psychology, Ludwig-Maximilians-Universität München.
    Schubö, Anna
    Department of Experimental and Biological Psychology, Philipps-University Marburg, General and Experimental Psychology Unit, Department of Psychology, Ludwig-Maximilians-Universität München.
    Selecting When Acting: How Human Perception Is Tuned to Action Goals and How Robotics Can Benefit from That2010In: Social Robotics: Second International Conference on Social Robotics, ICSR 2010, Singapore, November 23-24, 2010. Proceedings / [ed] Shuzhi Sam Ge ; Haizhou Li; John-John Cabibihan ; Yeow Kee Tan, Berlin: Springer Verlag, 2010, p. 275-284Conference paper (Refereed)
    Abstract [en]

    This paper reviews theoretical perspectives and empirical evidence speaking in favor of a close link between action control and perceptual selection in humans. Results from behavioural studies, neuroimaging, human electrophysiology as well as single-cell studies in monkeys are described. These data, as well as theories, are brought forward to argue that a close connection between action and perception should be considered in the design of artificial systems. Examples of such systems are described, and the application of those approaches to robotics is stressed.

  • 37.
    Wykowska, Agnieszka
    et al.
    Department of Experimental Psychology, Ludwig Maximilians Universität, München.
    Schubö, Anna
    Department of Experimental and Biological Psychology, Philipps-University Marburg, Department of Experimental Psychology, Ludwig Maximilians Universität, München.
    Hommel, Bernard
    Philipps University, Marburg, Institute of Psychology, Leiden University.
    How You Move Is What You See: Action Planning Biases Selection in Visual Search. 2009. In: Journal of Experimental Psychology: Human Perception and Performance, ISSN 0096-1523, E-ISSN 1939-1277, Vol. 35, no 6, p. 1755-1769. Article in journal (Refereed)
    Abstract [en]

    Three experiments investigated the impact of planning and preparing a manual grasping or pointing movement on feature detection in a visual search task. The authors hypothesized that action planning may prime perceptual dimensions that provide information for the open parameters of that action. Indeed, preparing for grasping facilitated detection of size targets, while preparing for pointing facilitated detection of luminance targets. Following the Theory of Event Coding (Hommel, Müsseler, Aschersleben, & Prinz, 2001b), the authors suggest that perceptual dimensions may be intentionally weighted with respect to an intended action. More interestingly, the action-related influences were observed only when participants searched for a predefined target. This implies that action-related weighting is not independent of task-relevance weighting. To account for these findings, the authors suggest an integrative model of visual search that incorporates input from action-planning processes.

  • 38.
    Wykowska, Agnieszka
    et al.
    University of Munich.
    Wiese, Eva
    Department of Psychology, George Mason University.
    Müller, Hermann Josef
    General and Experimental Psychology, Department of Psychology, Ludwig-Maximilians-Universität.
    Treating others as intentional agents influences our own perception: An EEG study. 2012. In: Perception, ISSN 0301-0066, E-ISSN 1468-4233, Vol. 41, no Suppl. S, p. 28-29. Article in journal (Refereed)
  • 39.
    Wykowska, Agnieszka
    et al.
    Department of Psychology, Ludwig-Maximilians-Universität.
    Wiese, Eva
    Department of Psychology, George Mason University, Fairfax, Department of Psychology, Ludwig-Maximilians-Universität.
    Prosser, Aaron
    Neuro-Cognitive Psychology Master Program, Ludwig-Maximilians-Universität, Munich.
    Müller, Hermann J.
    Department of Psychology, Ludwig-Maximilians University, Munich.
    Beliefs about the minds of others influence how we process sensory information. 2014. In: PLoS ONE, ISSN 1932-6203, E-ISSN 1932-6203, Vol. 9, no 4, article id e94339. Article in journal (Refereed)
    Abstract [en]

    Attending where others gaze is one of the most fundamental mechanisms of social cognition. The present study is the first to examine the impact of the attribution of mind to others on gaze-guided attentional orienting and its ERP correlates. Using a paradigm in which attention was guided to a location by the gaze of a centrally presented face, we manipulated participants' beliefs about the gazer: gaze behavior was believed to result either from operations of a mind or from a machine. In Experiment 1, beliefs were manipulated by cue identity (human or robot), while in Experiment 2, cue identity (robot) remained identical across conditions and beliefs were manipulated solely via instruction, which was irrelevant to the task. ERP results and behavior showed that participants' attention was guided by gaze only when gaze was believed to be controlled by a human. Specifically, the P1 was more enhanced for validly, relative to invalidly, cued targets only when participants believed the gaze behavior was the result of a mind, rather than of a machine. This shows that sensory gain control can be influenced by higher-order (task-irrelevant) beliefs about the observed scene. We propose a new interdisciplinary model of social attention, which integrates ideas from cognitive and social neuroscience, as well as philosophy, in order to provide a framework for understanding a crucial aspect of how humans' beliefs about the observed scene influence sensory processing.

  • 40.
    Özdem, Ceylan
    et al.
    Department of Psychology, Vrije Universiteit Brussels.
    Wiese, Eva
    Department of Psychology, George Mason University, Fairfax.
    Wykowska, Agnieszka
    Luleå University of Technology, Department of Business Administration, Technology and Social Sciences, Human Work Science. Chair for Cognitive Systems, Technische Universität München.
    Müller, Hermann J.
    Department of Psychology, Ludwig-Maximilians University, Munich.
    Brass, Marcel
    Ghent Institute for Functional and Metabolic Imaging, University of Ghent.
    Overwalle, Frank Van
    Department of Psychology, Vrije Universiteit Brussels.
    Believing Androids? fMRI activation in the right temporo-parietal junction is modulated by ascribing intentions to non-human agents. 2017. In: Social Neuroscience, ISSN 1747-0919, E-ISSN 1747-0927, Vol. 12, no 5, p. 582-593. Article in journal (Refereed)
    Abstract [en]

    Attributing mind to interaction partners has been shown to increase the social relevance we ascribe to others’ actions and to modulate the amount of attention dedicated to them. However, it remains unclear how the relationship between higher-order mind attribution and lower-level attention processes is established in the brain. In this neuroimaging study, participants saw images of an anthropomorphic robot that moved its eyes left- or rightwards to signal the appearance of an upcoming stimulus in the same (valid cue) or opposite location (invalid cue). Independently, participants’ beliefs about the intentionality underlying the observed eye movements were manipulated by describing the eye movements as under human control or preprogrammed. As expected, we observed a validity effect behaviorally and neurologically (increased response times and activation in the invalid vs. valid condition). More importantly, we observed that this effect was more pronounced for the condition in which the robot’s behavior was believed to be controlled by a human, as opposed to being preprogrammed. This interaction effect between cue validity and belief was, however, only found at the neural level and was manifested as a significant increase of activation in bilateral anterior temporoparietal junction.
