Abstract
Search in repeated contexts can lead to increased search efficiency due to contextual cueing. Contextual cueing has mainly been investigated in two-dimensional displays, yet it is affected both by depth information and by the “realism” of the search environment. To investigate these aspects further, we present a guide to designing contextual cueing experiments in virtual three-dimensional environments. Specifically, we provide a general introduction to the Unity gaming engine and to scripting in C#. We focus on experimental workflows, but also cover topics such as timing precision, how to process participants’ input, and how to create visual assets and manipulate properties such as color. Ultimately, we turn the entire project into a virtual reality experiment.
Notes
- 1.
- 2.
- 3.
- 4. Rendering is a term, established in computer graphics, for the automated process that combines different aspects such as lighting, color, and shape to generate an image that can be presented to a viewer.
- 5.
- 6. Note that changes made in the Inspector take priority: values set in the Inspector overwrite those assigned in the script.
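As a minimal sketch of this behavior (class and field names are our own, not taken from the chapter), a public field of a MonoBehaviour is serialized per-instance, so the value shown in the Inspector wins over the default written in code:

```csharp
using UnityEngine;

public class TrialSettings : MonoBehaviour
{
    // Default value in code; once the component is attached to a
    // GameObject, the Inspector serializes its own copy of this field.
    public float stimulusDuration = 1.5f;

    void Start()
    {
        // If the value was changed in the Inspector, the serialized
        // Inspector value is logged here, not the 1.5f written above.
        Debug.Log("Stimulus duration: " + stimulusDuration);
    }
}
```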
- 7. Spoiler alert: if you place this line of code in the Update() method, Unity will write “Hello World” to your console once per frame. This can produce many (many) “Hello Worlds” within seconds.
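To illustrate the difference (a sketch, with a hypothetical class name), Start() runs once when the script instance is enabled, whereas Update() runs every rendered frame:

```csharp
using UnityEngine;

public class HelloLogger : MonoBehaviour
{
    void Start()
    {
        // Called once when the script instance starts:
        // prints "Hello World" a single time.
        Debug.Log("Hello World");
    }

    void Update()
    {
        // Called once per rendered frame: at 60 fps, uncommenting
        // this line would flood the console with ~60 messages per second.
        // Debug.Log("Hello World");
    }
}
```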
- 8. If you can ensure that your hardware produces, for example, a constant 100 frames per second, you can (in theory) measure with a precision of 10 ms.
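A minimal reaction-time sketch under this assumption (class and method names are hypothetical): even though Time.realtimeSinceStartup is a high-resolution clock, keyboard input is only polled once per frame in Update(), so the effective resolution of the measured reaction time remains one frame, i.e., 10 ms at a constant 100 fps:

```csharp
using UnityEngine;

public class ReactionTimer : MonoBehaviour
{
    private float trialStart;

    // Call this when the search display appears.
    public void BeginTrial()
    {
        trialStart = Time.realtimeSinceStartup;
    }

    void Update()
    {
        // Input is polled once per frame, so responses are
        // time-stamped at frame resolution.
        if (Input.GetKeyDown(KeyCode.Space))
        {
            float rt = Time.realtimeSinceStartup - trialStart;
            Debug.Log("Reaction time: " + (rt * 1000f) + " ms");
        }
    }
}
```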
- 9. You can find a script that collects time data from three different methods here: https://bit.ly/2W4g0XN.
- 10. This method was supported by the friendly people over at Stack Overflow: https://bit.ly/2YyF46j.
- 11. The complete project on the GitHub page contains all the variables needed for inference.
- 12. Best practices for performance optimization of your experiment are provided here: https://bit.ly/2VGVw2k.
- 13. Keep in mind that the code provided here may not work properly with future Unity versions; we will migrate everything to the newest stable release and update the project on our GitHub page: https://github.com/nimarek.
Acknowledgments
This work was supported by a grant of the Deutsche Forschungsgemeinschaft (DFG PO-548/14-2 to S.P.). We thank Rebecca Burnside for her help editing a draft of this chapter.
Copyright information
© 2019 Springer Science+Business Media, LLC
About this protocol
Cite this protocol
Marek, N., Pollmann, S. (2019). Contextual Cueing in Virtual (Reality) Environments. In: Pollmann, S. (eds) Spatial Learning and Attention Guidance. Neuromethods, vol 151. Humana, New York, NY. https://doi.org/10.1007/7657_2019_32
DOI: https://doi.org/10.1007/7657_2019_32
Publisher Name: Humana, New York, NY
Print ISBN: 978-1-4939-9947-7
Online ISBN: 978-1-4939-9948-4
eBook Packages: Springer Protocols