
dc.contributor.author  Pantelides, Stephanie N.  en
dc.contributor.author  Kelly, Jonathan W.  en
dc.contributor.author  Avraamides, Marios N.  en
dc.creator  Pantelides, Stephanie N.  en
dc.creator  Kelly, Jonathan W.  en
dc.creator  Avraamides, Marios N.  en
dc.description.abstract  Three experiments investigated whether spatial information acquired from vision and language is maintained in distinct spatial representations on the basis of the input modality. Participants studied a visual and a verbal layout of objects at different times from either the same (Experiments 1 and 2) or different learning perspectives (Experiment 3) and then carried out a series of pointing judgments involving objects from the same or different layouts. Results from Experiments 1 and 2 indicated that participants pointed equally fast on within- and between-layout trials; coupled with verbal reports from participants, this result suggests that they integrated all locations in a single spatial representation during encoding. However, when learning took place from different perspectives in Experiment 3, participants were faster to respond to within- than between-layout trials and indicated that they kept separate representations during learning. Results are compared to those from similar studies that involved layouts learned from perception only. © 2015 Taylor & Francis.  en
dc.source  Journal of Cognitive Psychology  en
dc.title  Integration of spatial information across vision and language  en
dc.description.endingpage  185
Σχολή Κοινωνικών Επιστημών και Επιστημών Αγωγής / Faculty of Social Sciences and Education
Τμήμα Ψυχολογίας / Department of Psychology
dc.description.notes  Cited By: 1  en
dc.contributor.orcid  Avraamides, Marios N. [0000-0002-0049-8553]
