Department of Neuroscience
People flexibly adjust their use of information according to context. The same piece of information, for example the unexpected outcome of an action, might strongly influence future behavior in one situation yet be entirely ignored in another. Bayesian models have provided insight into why people display this sort of behavior, and have even identified potential neural mechanisms that link to behavior in specific tasks and environments, but to date they have fallen short of providing broader mechanistic insights that generalize across tasks or statistical environments. Here I’ll examine the possibility that such broader insights might be gained through careful consideration of task structure. I’ll show that a large number of sequential tasks can be cast as the same inference problem, namely inferring the latent states of the world and the parameters associated with those states, with the primary distinctions within this class defined by transition structure. Then I’ll talk about how a neural network that updates latent states according to a known transition structure and learns “parameters” of the world for each latent state can explain adaptive learning behavior across environments and provide the first insights into its neural correlates. This model generates internal signals that identify the need for latent state updating, signals that map onto previously reported pupil dilation and P300 responses across different task environments. I will also discuss an experiment that we are currently setting up to test the idea that these signals might reflect a latent state update signal, with a focus on relationships to learning and perception. Finally, I will briefly mention some theoretical work examining how latent states could be used to shape noise correlations in neural populations in order to speed learning.
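To make the core idea concrete, here is a minimal sketch of this kind of inference, not the speaker's actual model: discrete latent states are filtered through a known transition matrix, each state's observation "parameters" (here, a Gaussian mean) are learned by a responsibility-weighted delta rule, and an internal surprise signal measures how much an observation forces the state beliefs to move. The two-state setup, the function name `filter_step`, and all numerical choices are illustrative assumptions.

```python
import math

def gauss_pdf(x, mu, sigma=1.0):
    """Gaussian likelihood of observation x under one latent state's parameters."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def filter_step(prior, T, means, x, lr=0.1):
    """One step of latent-state filtering plus per-state parameter learning."""
    n = len(prior)
    # Predict: push current beliefs through the known transition structure.
    pred = [sum(T[i][j] * prior[i] for i in range(n)) for j in range(n)]
    # Update: reweight each state by the likelihood of the new observation.
    post = [pred[j] * gauss_pdf(x, means[j]) for j in range(n)]
    z = sum(post)
    post = [p / z for p in post]
    # Surprise: total-variation distance between predicted and updated beliefs,
    # i.e., how strongly the observation demanded a latent state update.
    surprise = 0.5 * sum(abs(post[j] - pred[j]) for j in range(n))
    # Learn: responsibility-weighted delta rule on each state's mean.
    means = [m + lr * post[j] * (x - m) for j, m in enumerate(means)]
    return post, means, surprise

# Illustrative run: two latent states with sticky transitions; the hidden
# state generating the observations switches after trial 8.
T = [[0.95, 0.05], [0.05, 0.95]]
belief, means = [0.5, 0.5], [-1.0, 1.0]
surprises = []
for x in [-2.0] * 8 + [2.0] * 8:
    belief, means, s = filter_step(belief, T, means, x)
    surprises.append(s)
```

Under this sketch, the surprise signal spikes at the hidden changepoint and then decays as beliefs settle on the new state, which is the pattern the abstract links to pupil dilation and P300 responses; the same filtering machinery applies to different tasks simply by swapping in a different transition matrix `T`.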
View a recording of this session here.