
Deciphering value learning rules underlying decision-making in fruit flies using a model-driven approach

Poster posted on 2022-11-18, 15:07, authored by Rishika Mohanta and Glenn C. Turner

Navigating the world requires an animal to make choices in dynamic and uncertain environments. Animals can therefore benefit from adapting their behavior to past experience, but the exact nature of the computations performed and their neural implementations remain unclear. Extensive prior knowledge about the fruit fly (D. melanogaster) provides a unique opportunity to explore the mechanistic basis of the cognitive factors underlying decision-making. Doing so, however, requires a large number of choice trajectories from single flies. We therefore expanded and calibrated a Y-maze olfactory choice assay to run 16 flies in parallel, allowing us to build and test better models using behavioral perturbation methods such as choice engineering. We take two complementary approaches to explore the learning rules the fly may use: a model-fitting approach and a novel de novo learning-rule synthesis approach. First, we fit increasingly complex reinforcement learning rules to behavior and find that models approximating perseverance (habit) better explain and predict individual choice outcomes. Next, we develop a flexible framework that uses small neural networks to infer learning rules and predict choices. We find that small neural networks with fewer than 5 neurons, trained to estimate odor values, predict decisions across flies more accurately than the best reinforcement learning models. Analyzing the dynamics of these networks again reveals perseverance-like behavior. We reproduce most of these observations across different behavioral setups. Our results suggest that habit-forming tendencies beyond naive reward-seeking may influence flies' choices.
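To make the "reinforcement learning with perseverance" idea concrete, here is a minimal, illustrative sketch of the kind of model family described: a delta-rule value learner for a two-odor choice task, augmented with a choice-kernel (habit) term that biases the fly toward repeating recent choices. All function names, parameter names, and values below are assumptions for illustration; they are not the poster's fitted models or parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    """Numerically stable softmax over action preferences."""
    e = np.exp(x - np.max(x))
    return e / e.sum()

def run_agent(reward_prob, n_trials=200, alpha=0.3, beta=3.0,
              kappa=1.0, alpha_h=0.2):
    """Simulate a two-odor choice task with a Q-learning agent plus a
    perseverance (choice-kernel) term. Hypothetical parameters:
    alpha   - value learning rate
    beta    - inverse temperature on learned odor values
    kappa   - weight of the habit term on choice
    alpha_h - habit learning rate."""
    q = np.zeros(2)   # learned odor values
    h = np.zeros(2)   # habit / choice kernel
    choices = np.empty(n_trials, dtype=int)
    for t in range(n_trials):
        # Values and habits jointly bias the choice probabilities.
        p = softmax(beta * q + kappa * h)
        c = rng.choice(2, p=p)
        r = float(rng.random() < reward_prob[c])  # probabilistic reward
        q[c] += alpha * (r - q[c])                # delta-rule value update
        h += alpha_h * (np.eye(2)[c] - h)         # habit drifts toward recent choices
        choices[t] = c
    return choices, q, h

choices, q, h = run_agent(np.array([0.9, 0.1]))
```

Fitting such a model to real choice sequences would mean maximizing the likelihood of observed choices over (alpha, beta, kappa, alpha_h); setting kappa to 0 recovers a plain value-only learner, which is one way to test whether the habit term genuinely improves prediction.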
