A novel role for visual perspective cues in the neural computation of depth

Abstract

As we explore a scene, our eye movements add global patterns of motion to the retinal image, complicating visual motion produced by self-motion or moving objects. Conventionally, it has been assumed that extraretinal signals, such as efference copy of smooth pursuit commands, are required to compensate for the visual consequences of eye rotations. We consider an alternative possibility: namely, that the visual system can infer eye rotations from global patterns of image motion. We visually simulated combinations of eye translation and rotation, including perspective distortions that change dynamically over time. We found that incorporating these 'dynamic perspective' cues allowed the visual system to generate selectivity for depth sign from motion parallax in macaque cortical area MT, a computation that was previously thought to require extraretinal signals regarding eye velocity. Our findings suggest neural mechanisms that analyze global patterns of visual motion to perform computations that require knowledge of eye rotations.


Figure 1: Schematic illustration of dynamic perspective cues and stimuli for measuring depth tuning from motion parallax.
Figure 2: Raw responses and depth tuning curves for an example neuron.
Figure 3: Depth tuning curves from three more example neurons.
Figure 4: Population summary of depth-sign selectivity.
Figure 5: Effects of auxiliary cues on depth-sign selectivity and effects of cue combination.
Figure 6: Response dynamics revealed by random-depth stimuli.
Figure 7: Summary of results from random-depth stimuli.

Acknowledgements

This work was supported by US National Institutes of Health grant EY013644 (to G.C.D.) and a CORE grant (EY001319) from the US National Eye Institute. D.E.A. was supported by EY017566.

Author information

Contributions

H.R.K., D.E.A. and G.C.D. designed the research; H.R.K. performed the recording experiments and analyzed data; H.R.K., D.E.A. and G.C.D. wrote the manuscript; G.C.D. supervised the project.

Corresponding author

Correspondence to Gregory C DeAngelis.

Ethics declarations

Competing interests

The authors declare no competing financial interests.

Integrated supplementary information

Supplementary Figure 1 Illustration of two additional viewing geometries.

(a) Illustration of the case of pure translation of the eye relative to the scene. An observer’s head translates from left to right while the eye remains stationary relative to the head. This produces no perspective distortion under planar image projection. For a dynamic version, see Supplementary Movie 2. (b) Illustration of the case of a pure eye rotation, with no eye or head translation (e.g., smooth pursuit of a target). This produces dynamic perspective distortion in the image plane but not in spherical (retinal) coordinates (see Supplementary Movie 3).
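
The distinction between planar and spherical projection drawn here can be verified numerically. Below is a minimal sketch (our illustration, not the authors' code; the focal length, point grid and rotation angle are arbitrary choices) that projects points on a fronto-parallel plane both ways: a pure lateral translation shifts every planar image point by the same amount (no distortion), whereas a pure rotation about the vertical axis shifts spherical azimuth uniformly but warps the planar image.

```python
import numpy as np

f = 1.0   # focal length, arbitrary units
D = 33.0  # distance to the fronto-parallel plane, cm (as in the movies)

# Grid of points on a fronto-parallel plane at distance D:
gx, gy = np.meshgrid(np.linspace(-20, 20, 5), np.linspace(-20, 20, 5))
P = np.stack([gx.ravel(), gy.ravel(), np.full(gx.size, D)], axis=1)

def planar(P):
    """Pinhole projection onto a flat image plane."""
    return f * P[:, :2] / P[:, 2:3]

def spherical(P):
    """Azimuth/elevation (retinal) coordinates."""
    x, y, z = P.T
    return np.stack([np.arctan2(x, z), np.arctan2(y, np.hypot(x, z))], axis=1)

# (a) Pure lateral translation of the eye by t: every planar image point
# shifts by the same amount f*t/D, so the spread of shifts is ~0.
t = 3.0
shift = planar(P - np.array([t, 0.0, 0.0])) - planar(P)
print("spread of planar shifts, translation:", np.ptp(shift, axis=0))

# (b) Pure rotation about the vertical axis by theta (scene rotated
# relative to the eye): planar coordinates transform by a homography,
# which distorts the image, while spherical azimuth shifts uniformly.
th = np.deg2rad(5.0)
R = np.array([[np.cos(th), 0.0, np.sin(th)],
              [0.0, 1.0, 0.0],
              [-np.sin(th), 0.0, np.cos(th)]])
Prot = P @ R.T
print("spread of planar shifts, rotation:",
      np.ptp(planar(Prot) - planar(P), axis=0))
print("spread of azimuth shifts, rotation:",
      np.ptp(spherical(Prot)[:, 0] - spherical(P)[:, 0]))
```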

Supplementary Figure 2 Summary of stimulus and mask dimensions.

(a) Each red circle represents the size and location of the random-dot patch that was placed over the receptive field of a single MT neuron. Each blue circle (centered on a red circle) indicates the size of the mask region that was used to prevent background dots from entering the receptive field. (b) The sizes of the random-dot patch (red) and the mask (blue) are plotted against receptive field eccentricity. Each neuron is represented by a pair of red and blue data points that are vertically aligned. Masks were generally 2–3 times larger than the stimulus patch (geometric mean of the ratio of diameters = 2.79), and the mask was large enough to encompass the fixation target for 85/96 neurons (mask radius was not saved for the initial 5 neurons tested).
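
For reference, a geometric mean of diameter ratios is the exponential of the mean log ratio; a minimal sketch (the diameters below are hypothetical, not data from the paper):

```python
import numpy as np

patch_diam = np.array([4.0, 6.0, 10.0])   # stimulus patch diameters, deg
mask_diam = np.array([12.0, 15.0, 28.0])  # mask diameters, deg
ratio = mask_diam / patch_diam
print(np.exp(np.mean(np.log(ratio))))     # geometric mean of the ratios
```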

Supplementary Figure 3 Quantification of depth-sign discrimination capacity of single MT neurons using ROC analysis.

For each depth magnitude, the ability of each MT neuron to discriminate between near and far stimuli was quantified by applying ROC analysis to distributions of responses corresponding to the neuron's preferred and non-preferred depth signs (as defined by the sign of DSDI). The area under the ROC curve represents the ability of an ideal observer to discriminate between the preferred and non-preferred depth signs; a value of 0.5 corresponds to chance performance. (a) Distribution of ROC areas for each depth magnitude tested in the Retinal Motion condition; arrowheads show the median values. Filled bars indicate neurons with ROC values that are significantly different from 0.5 (permutation test, P < 0.05). The overall median value across all depth magnitudes was 0.63. (b) Distributions of ROC areas for the Dynamic Perspective condition. Median values are 0.78, 0.88, 0.88 and 0.80 for depth magnitudes of 0.5, 1.0, 1.5 and 2.0 deg, respectively. The overall median across depth magnitudes is significantly greater than that for the Retinal Motion condition (n = 412, P = 6.7 × 10⁻²³, Wilcoxon signed-rank test). (c) Distributions of ROC areas for the Motion Parallax condition. Median ROC areas for all depth magnitudes are 1, and the overall median is significantly greater than that for the Dynamic Perspective condition (n = 412, P = 8.3 × 10⁻¹⁶, Wilcoxon signed-rank test). (d) ROC areas for each neuron were averaged across the four depth magnitudes, and the average ROC area is plotted against the absolute DSDI value for each neuron: Retinal Motion (black), Dynamic Perspective (magenta) and Motion Parallax (blue). The two metrics are strongly correlated (n = 309, ρ = 0.98, P = 6.9 × 10⁻²²³, Spearman rank correlation), indicating that DSDI is an effective measure of how well neurons discriminate depth sign.
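
For concreteness, here is a minimal sketch of this kind of ROC analysis (our illustration, not the authors' analysis code; the spike counts, sample sizes and number of permutations are hypothetical). The ROC area is computed in its Mann–Whitney form, and its significance relative to the chance level of 0.5 is assessed by permutation:

```python
import numpy as np

rng = np.random.default_rng(0)

def roc_area(pref, nonpref):
    """Area under the ROC curve: P(pref > nonpref) + 0.5 * P(tie)."""
    diffs = np.subtract.outer(np.asarray(pref), np.asarray(nonpref))
    return (diffs > 0).mean() + 0.5 * (diffs == 0).mean()

def permutation_test(pref, nonpref, n_perm=2000):
    """Two-sided permutation test of ROC area against chance (0.5)."""
    observed = roc_area(pref, nonpref)
    pooled = np.concatenate([pref, nonpref])
    n = len(pref)
    null = np.empty(n_perm)
    for i in range(n_perm):
        rng.shuffle(pooled)  # break the preferred/non-preferred labels
        null[i] = roc_area(pooled[:n], pooled[n:])
    p = np.mean(np.abs(null - 0.5) >= np.abs(observed - 0.5))
    return observed, p

# Hypothetical responses (spike counts) at one depth magnitude:
pref = rng.poisson(40, size=20)      # preferred depth sign
nonpref = rng.poisson(25, size=20)   # non-preferred depth sign
auc, p = permutation_test(pref, nonpref)
print(f"ROC area = {auc:.2f}, permutation P = {p:.3f}")
```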

Supplementary Figure 4 Depth-sign selectivity in the retinal motion condition is correlated with dynamic perspective information (DPI) in the stimulus.

For each point (x, y) in the image, in stimulus coordinates, eye rotation relative to the scene induces a component of velocity orthogonal to the axis of translation (Eqn. 4), and this component is proportional to the product of the coordinates of that image point, xy. Thus, we can approximate the dynamic perspective information within a region of interest as ∑|xy| over that region (Eqn. 5). For each neuron, the absolute value of DSDI in the Retinal Motion condition is plotted as a function of DPI computed over the stimulus region overlying the receptive field. The two variables are significantly correlated (ρ = 0.24, P = 0.02, Spearman rank correlation), indicating that significant depth-sign selectivity in the Retinal Motion condition tends to arise when receptive fields are large and located away from the visual field meridians, such that DPI within the stimulus region is large.
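
A short sketch of this DPI approximation (our illustration; the patch centers, radii and dot count are hypothetical) treats ∑|xy| as a Monte Carlo integral of |xy| over a circular stimulus patch:

```python
import numpy as np

rng = np.random.default_rng(1)

def dpi(cx, cy, radius, n_dots=20000):
    """Monte Carlo estimate of the integral of |x*y| over a circular
    stimulus patch centered at (cx, cy), in stimulus coordinates (deg)."""
    r = radius * np.sqrt(rng.uniform(size=n_dots))  # uniform over the disk
    a = rng.uniform(0.0, 2.0 * np.pi, size=n_dots)
    x, y = cx + r * np.cos(a), cy + r * np.sin(a)
    area = np.pi * radius ** 2
    return area * np.mean(np.abs(x * y))

# DPI is small for a patch hugging a meridian and large for a big,
# eccentric, oblique patch, consistent with the correlation above:
print(dpi(cx=3, cy=0, radius=2))   # small patch near the horizontal meridian
print(dpi(cx=8, cy=8, radius=5))   # large oblique patch -> much larger DPI
```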

Supplementary Figure 5 Eye movements do not drive depth-sign tuning in the dynamic perspective condition.

Eye movements were quantified by computing pursuit gain, defined as the amplitude of the 0.5 Hz component of eye velocity divided by the corresponding frequency component of target velocity. (a) Pursuit gain in the Dynamic Perspective condition is plotted against that in the Retinal Motion condition. Filled circles and open triangles represent data from monkeys M1 and M2, respectively. Green symbols denote cases in which pursuit gain differs significantly between the Dynamic Perspective and Retinal Motion conditions, whereas red symbols denote cases with no significant difference. (b) Absolute value of DSDI is plotted against pursuit gain for the Dynamic Perspective condition. Data are from monkey M1 and show no significant correlation (ρ = −0.18, P = 0.22, Spearman rank correlation). (c) Corresponding data from monkey M2, format as in b. Again, the correlation is not significant (ρ = −0.25, P = 0.07).
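
This gain computation lends itself to a short sketch (ours, not the authors' code; the sampling rate, trial duration and synthetic velocity traces are hypothetical):

```python
import numpy as np

fs = 1000.0                     # samples per second (assumed)
t = np.arange(0.0, 2.0, 1 / fs) # one 2-s trial

def component_amplitude(signal, freq):
    """Amplitude of the Fourier component of `signal` at `freq` Hz."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1 / fs)
    k = np.argmin(np.abs(freqs - freq))
    return 2.0 * np.abs(spectrum[k]) / len(signal)

# Hypothetical 0.5-Hz sinusoidal target velocity and a slightly weaker,
# phase-lagged, noisy eye-velocity trace:
target_vel = 10.0 * np.sin(2 * np.pi * 0.5 * t)
eye_vel = (9.0 * np.sin(2 * np.pi * 0.5 * t - 0.1)
           + np.random.default_rng(2).normal(0.0, 1.0, t.size))

gain = component_amplitude(eye_vel, 0.5) / component_amplitude(target_vel, 0.5)
print(f"pursuit gain = {gain:.2f}")  # ~0.9 for this synthetic trace
```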

Supplementary Figure 6 Depth-sign selectivity in the dynamic perspective condition is not correlated with surround suppression.

We examined whether the modulatory effects of background motion in the Dynamic Perspective condition are correlated with surround suppression in MT neurons. Surround suppression was quantified by analyzing size tuning curves and computing the percentage of surround suppression (see Methods). We found no significant correlation between the magnitude of depth-sign selectivity in the Dynamic Perspective condition and the percentage of surround suppression (n = 102, ρ = −0.19, P = 0.06, Spearman rank correlation). If anything, the correlation is slightly negative, indicating that cells with strong surround suppression tend to have slightly weaker depth-sign selectivity. This result is consistent with the finding that MT neurons still showed depth-sign selectivity in the DP-balanced condition, in which the velocity distributions of near and far dots moving in opposite directions are matched. Effects of surround suppression were likely minimized because we masked a fairly large region around the classical receptive field, thus removing most visual stimulation from the suppressive surround (Supplementary Fig. 2).
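
As a sketch of how such a suppression index can be computed from a size tuning curve (we assume one common definition, the drop from the peak response to the response at the largest size, normalized by the peak above the spontaneous rate; the paper's Methods may define it differently, and the tuning values below are hypothetical):

```python
import numpy as np

def percent_surround_suppression(sizes, rates, spont=0.0):
    """100 * (peak driven response - driven response at the largest size),
    divided by the peak driven response."""
    order = np.argsort(sizes)                      # largest size last
    driven = np.asarray(rates, float)[order] - spont
    return 100.0 * (driven.max() - driven[-1]) / driven.max()

# Hypothetical size tuning curve (patch diameter in deg vs. spikes/s):
sizes = np.array([1, 2, 4, 8, 16, 32])
rates = np.array([20, 45, 60, 52, 40, 35])
print(percent_surround_suppression(sizes, rates, spont=5.0))  # ~45%
```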

Supplementary information

Supplementary Text and Figures

Supplementary Figures 1–6 (PDF 2057 kb)

Supplementary Methods Checklist (PDF 124 kb)

Dynamic perspective cues associated with a combination of observer translation and counter-rotation of the eye.

This video shows images of a checkerboard generated under viewing conditions similar to those depicted in Fig. 1. A fronto-parallel checkerboard was rendered using OpenGL, which generates views under planar image projection (to be viewed on a flat display). The checkerboard was 55 cm on each side and was viewed from a distance of 33 cm (conditions similar to our experiments). The OpenGL camera was translated laterally over a distance of 6 cm, and was counter-rotated to maintain aim on a world-fixed point at the center of the checkerboard. The small black “+” shows the camera aiming point. White contours show loci of constant elevation angle (spherical coordinates) and are drawn in steps of 10° of elevation. Note that there is perspective distortion of the image in both the Cartesian coordinates of the image plane and in spherical coordinates. (AVI 2132 kb)
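
The camera trajectory described above is straightforward to reconstruct (our sketch, not the authors' rendering code; the frame count and excursion profile are assumptions). At each frame, the yaw is chosen so that the optical axis passes through the world-fixed aiming point:

```python
import numpy as np

D = 33.0        # distance from the camera to the checkerboard, cm (as above)
n_frames = 60   # hypothetical frame count

# 6-cm lateral camera excursion (here +/-3 cm about the midline; whether
# the excursion was centered on the midline is our assumption):
x_cam = np.linspace(-3.0, 3.0, n_frames)

for x in x_cam[::15]:
    # Yaw that re-aims the camera at the world-fixed point (0, 0, D):
    yaw = np.degrees(np.arctan2(-x, D))
    print(f"camera x = {x:+.2f} cm -> counter-rotation = {yaw:+.2f} deg")
```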

Images associated with a pure observer translation, without eye rotation.

This video shows images corresponding to the viewing situation illustrated in Supplementary Fig. 1a. In this case, the OpenGL camera translates laterally but does not rotate. Rather, the orientation of the camera relative to the fronto-parallel plane remains constant, as though the eye remains stationary in the head while the head translates from side to side. All other image generation parameters are as in Supplementary Movie 1. Note that there is no perspective distortion under planar projection in this case, but there is perspective distortion in spherical coordinates. (AVI 828 kb)

Images associated with a pure eye rotation.

This video shows images corresponding to the viewing situation illustrated in Supplementary Fig. 1b, which amounts to smooth pursuit of a target across the surface of the checkerboard. In this case, the OpenGL camera rotates but does not translate. All other image generation parameters are as in Supplementary Movie 1. Note that pure eye rotation produces dynamic perspective distortion in planar image coordinates, but not in spherical coordinates. (AVI 2219 kb)

Example visual stimuli for the dynamic perspective condition.

This video shows examples of the visual stimuli used in the Dynamic Perspective condition. The small patch of dots would overlie the receptive field of the MT neuron under study. Two stimulus presentations are shown consecutively: the first simulates a near depth of the small patch of dots, and the second simulates a far depth. Most observers clearly see the small patch as near (first interval) or far (second interval) relative to the fixation target. The stimulus should be viewed with one eye closed to aid the percept, so that binocular disparity cues do not conflict with the motion parallax cues. (MP4 7755 kb)

Example visual stimuli for the retinal motion condition.

The motion of the small patch of dots is identical to that shown in Supplementary Movie 4. The only difference is that the background dots are absent in this movie. In this case, most observers no longer perceive any difference in depth between the first and second presentations of the small patch of random dots. Many observers instead perceive the small patch of dots as moving within the scene rather than as a stationary object lying at different depths. (MP4 2820 kb)

About this article

Cite this article

Kim, H., Angelaki, D. & DeAngelis, G. A novel role for visual perspective cues in the neural computation of depth. Nat Neurosci 18, 129–137 (2015). https://doi.org/10.1038/nn.3889
