Abstract
In this article, we present BubbleView, an alternative methodology to eye tracking that uses discrete mouse clicks to measure which information people consciously choose to examine. BubbleView is a mouse-contingent, moving-window interface in which participants are presented with a series of blurred images and click to reveal "bubbles": small, circular areas of the image at original resolution, analogous to the confined area of focus provided by the eye's fovea. Across 10 experiments with 28 different parameter combinations, we evaluated BubbleView on a variety of image types (information visualizations, natural images, static webpages, and graphic designs) and compared the resulting clicks to eye fixations collected with eye trackers in controlled lab settings. We found that BubbleView clicks can both (i) successfully approximate eye fixations on different images and (ii) be used to rank image and design elements by importance. BubbleView is designed to collect clicks on static images and works best for well-defined tasks, such as describing the content of an information visualization or measuring image importance. BubbleView data are also cleaner and more consistent than data from related methodologies that rely on continuous mouse movements. Our analyses validate the use of mouse-contingent, moving-window methodologies as approximations of eye fixations for different image and task types.
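The core mechanism described above, compositing a full-resolution circular "bubble" onto a blurred image at each click, can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the function name, the representation of images as 2D pixel grids, and the hard-edged circular mask are all our own assumptions.

```python
def bubble_composite(blurred, sharp, cx, cy, r):
    """Return a copy of the blurred image in which pixels within
    radius r of the click point (cx, cy) are taken from the sharp
    (original-resolution) image, mimicking a BubbleView bubble.

    `blurred` and `sharp` are same-sized 2D grids of pixel values
    (a simplification; a real interface would operate on RGB bitmaps).
    """
    out = [row[:] for row in blurred]  # start from the blurred base
    for y in range(len(sharp)):
        for x in range(len(sharp[y])):
            # reveal only pixels inside the circular bubble
            if (x - cx) ** 2 + (y - cy) ** 2 <= r * r:
                out[y][x] = sharp[y][x]
    return out

# Example: a 5x5 blurred image (all 0s) with a sharp image (all 1s);
# a click at the center with radius 1 reveals a small cross of pixels.
blurred = [[0] * 5 for _ in range(5)]
sharp = [[1] * 5 for _ in range(5)]
view = bubble_composite(blurred, sharp, 2, 2, 1)
```

In an actual deployment the blur level, bubble radius, and click behavior are the experimental parameters varied across the 28 conditions; each click would rebuild the composite so that only the most recent bubble (or a small set of bubbles) is sharp at a time.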
Supplemental Material
Supplemental movie, appendix, image, and software files for "BubbleView: An Interface for Crowdsourcing Image Importance Maps and Tracking Visual Attention" are available for download.