BubbleView: An Interface for Crowdsourcing Image Importance Maps and Tracking Visual Attention

Published: 13 November 2017

Abstract

In this article, we present BubbleView, an alternative methodology to eye tracking that uses discrete mouse clicks to measure which information people consciously choose to examine. BubbleView is a mouse-contingent, moving-window interface in which participants are presented with a series of blurred images and click to reveal "bubbles": small, circular areas of the image at original resolution, analogous to the confined area of high-resolution focus provided by the eye's fovea. Across 10 experiments with 28 different parameter combinations, we evaluated BubbleView on a variety of image types (information visualizations, natural images, static webpages, and graphic designs) and compared the clicks to eye fixations collected with eye trackers in controlled lab settings. We found that BubbleView clicks can both (i) successfully approximate eye fixations on different images and (ii) be used to rank image and design elements by importance. BubbleView is designed to collect clicks on static images and works best for defined tasks such as describing the content of an information visualization or measuring image importance. BubbleView data is cleaner and more consistent than that of related methodologies that use continuous mouse movements. Our analyses validate the use of mouse-contingent, moving-window methodologies for approximating eye fixations across different image and task types.
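The two core operations described above can be sketched in a few lines. This is an illustrative sketch, not the authors' implementation: images are modeled as 2D lists of pixel values, and all function names and parameters (`bubble_composite`, `importance_map`, `sigma`) are assumptions for illustration. The first function reveals a circular region of the original image against a blurred background; the second aggregates clicks into a smooth importance map by placing a Gaussian at each click, analogous to a fixation density map.

```python
# Illustrative sketch (not the paper's code) of a mouse-contingent
# "bubble" reveal and a click-based importance map.
import math


def bubble_composite(original, blurred, cx, cy, radius):
    """Return an image that shows `original` inside a circular bubble
    centered at (cx, cy) and `blurred` everywhere else."""
    h, w = len(original), len(original[0])
    out = [row[:] for row in blurred]  # start from the blurred image
    for y in range(h):
        for x in range(w):
            # Reveal pixels within the bubble radius at full resolution.
            if (x - cx) ** 2 + (y - cy) ** 2 <= radius ** 2:
                out[y][x] = original[y][x]
    return out


def importance_map(clicks, width, height, sigma=2.0):
    """Accumulate (x, y) clicks into a smooth importance map by adding
    a Gaussian bump at each click location."""
    m = [[0.0] * width for _ in range(height)]
    for cx, cy in clicks:
        for y in range(height):
            for x in range(width):
                d2 = (x - cx) ** 2 + (y - cy) ** 2
                m[y][x] += math.exp(-d2 / (2 * sigma ** 2))
    return m
```

For example, compositing a bubble of radius 1 at the center of a 5x5 image reveals only the center pixel and its four neighbors at original resolution, while a single click at the center yields an importance map that peaks there and falls off toward the corners.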




Published in

ACM Transactions on Computer-Human Interaction, Volume 24, Issue 5 (October 2017), 167 pages.
ISSN: 1073-0516, EISSN: 1557-7325, DOI: 10.1145/3149825

Copyright © 2017 ACM

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

Publisher

Association for Computing Machinery, New York, NY, United States

Publication History

• Received: 1 February 2017
• Revised: 1 June 2017
• Accepted: 1 July 2017
• Published: 13 November 2017

Qualifiers: research-article, refereed
