Contextual Cueing in Virtual (Reality) Environments

Protocol
Spatial Learning and Attention Guidance

Part of the book series: Neuromethods (volume 151)


Abstract

Search in repeated contexts can become more efficient due to contextual cueing. Contextual cueing has mainly been investigated in two-dimensional displays; however, it is affected by depth information as well as by the “realism” of the search environment. To investigate these aspects further, we present a guide to designing contextual cueing experiments in virtual three-dimensional environments. Specifically, we provide a general introduction to the Unity game engine and to scripting in C#. We focus on experimental workflows, but also cover topics such as timing precision, how to process and handle participants’ input, and how to create visual assets and manipulate aspects like color. Ultimately, we will turn the entire project into a virtual reality experiment.


Notes

  1. https://github.com/xenko3d/xenko

  2. https://www.unrealengine.com/

  3. https://unity.com/

  4. Rendering is a term from computer graphics that refers to the automated process of combining aspects such as lighting, color, and shape to generate an image that can be presented to a viewer.

  5. www.blender.org

  6. Note that values set in the Inspector take priority: they overwrite the values assigned in the script.

  7. Spoiler alert: if you place this line of code in the Update() method, Unity will write “Hello World” to your console once per frame. This can produce many (many) “Hello Worlds” within seconds.

  8. If you can ensure that your hardware produces, for example, a constant 100 frames per second, you can (in theory) measure with a precision of 10 ms.

  9. You can find a script that collects time data from three different methods here: https://bit.ly/2W4g0XN.

  10. This method was suggested by the friendly people over at Stack Overflow: https://bit.ly/2YyF46j.

  11. The complete project on the GitHub page contains all the variables needed for inference.

  12. Best practices for performance optimization of your experiment are provided here: https://bit.ly/2VGVw2k.

  13. Keep in mind: the code provided here may not work properly with future versions, but we will migrate everything to the newest stable release and update the project on our GitHub page: https://github.com/nimarek.
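The points made in Notes 6–8 can be illustrated with a minimal Unity script. The sketch below is our own hypothetical example (the class and field names are not from the chapter): a public field that the Inspector can override, logging in Start() versus Update(), and frame-bound timing.

```csharp
using UnityEngine;

// Hypothetical example script; attach it to any GameObject in the scene.
public class HelloTimer : MonoBehaviour
{
    // Public fields are shown in the Inspector. A value entered there
    // takes priority and overwrites the value assigned here (Note 6).
    public float targetOnsetTime = 2.0f;

    void Start()
    {
        // Runs once when the scene starts -- a safe place for a single log.
        Debug.Log("Hello World");
    }

    void Update()
    {
        // Runs once per frame: a Debug.Log here would print "Hello World"
        // many (many) times within seconds (Note 7).

        // Time.time advances in steps of one frame, so at a constant
        // 100 fps the best achievable timing precision is 10 ms (Note 8).
        if (Time.time >= targetOnsetTime)
        {
            // e.g., present the search display here
        }
    }
}
```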


Acknowledgments

This work was supported by a grant of the Deutsche Forschungsgemeinschaft (DFG PO-548/14-2 to S.P.). We thank Rebecca Burnside for her help editing a draft of this chapter.

Author information

Correspondence to Nico Marek.


Copyright information

© 2019 Springer Science+Business Media, LLC

About this protocol


Cite this protocol

Marek, N., Pollmann, S. (2019). Contextual Cueing in Virtual (Reality) Environments. In: Pollmann, S. (eds) Spatial Learning and Attention Guidance. Neuromethods, vol 151. Humana, New York, NY. https://doi.org/10.1007/7657_2019_32

  • DOI: https://doi.org/10.1007/7657_2019_32

  • Publisher Name: Humana, New York, NY

  • Print ISBN: 978-1-4939-9947-7

  • Online ISBN: 978-1-4939-9948-4

  • eBook Packages: Springer Protocols
