4 May 2022: Latent states for adaptive learning: using structure for dynamics

Matt Nassar
Department of Neuroscience
Brown University

People flexibly adjust their use of information according to context. The same piece of information, for example the unexpected outcome of an action, might be highly influential on future behavior in one situation — but utterly ignored in another. Bayesian models have provided insight into why people display this sort of behavior, and have even identified potential neural mechanisms that link to behavior in specific tasks and environments, but to date they have fallen short of providing broader mechanistic insights that generalize across tasks or statistical environments. Here I’ll examine the possibility that such broader insights might be gained through careful consideration of task structure. I’ll show that a large number of sequential tasks can be framed as requiring the same inference problem — that is, inferring the latent states of the world and the parameters associated with those latent states — with the primary distinctions within the class defined by transition structure. I’ll then describe how a neural network that updates latent states according to a known transition structure and learns “parameters” of the world for each latent state can explain adaptive learning behavior across environments and provide the first insights into its neural correlates. This model generates internal signals that identify the need for latent state updating, which map onto previous observations of pupil dilation and P300 responses across different task environments. I will also discuss an experiment that we are currently setting up to test the idea that these signals might reflect a latent state update signal, with a focus on relationships to learning and perception. Finally, I will briefly mention some theoretical work examining how latent states could be used to shape noise correlations in neural populations in order to speed learning.
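The core idea in the abstract — a learner that propagates a belief over latent states through a known transition structure while learning one “parameter of the world” per state — can be sketched in a toy form. This is not the speaker’s actual network model; it is a minimal Gaussian-observation illustration, and all function names and parameter choices are hypothetical:

```python
import numpy as np

def latent_state_learner(observations, transition, init_means, obs_sd=1.0, lr=0.1):
    """Track a belief over latent states using a KNOWN transition structure,
    and learn one 'parameter of the world' (here, an observation mean) per state.

    transition : (S, S) row-stochastic matrix, transition[i, j] = P(s_t = j | s_{t-1} = i)
    init_means : (S,) initial guess for each state's observation mean
    """
    n_states = len(init_means)
    belief = np.ones(n_states) / n_states      # uniform prior over states
    means = np.asarray(init_means, dtype=float).copy()
    beliefs = []
    for x in observations:
        # Predict: push the belief through the known transition structure.
        belief = transition.T @ belief
        # Update: reweight states by how well each explains the observation
        # (Gaussian likelihood around that state's learned mean).
        lik = np.exp(-0.5 * ((x - means) / obs_sd) ** 2)
        belief = belief * lik
        belief /= belief.sum()
        # Learn: delta-rule update of each state's parameter, gated by
        # the posterior probability of currently being in that state.
        means += lr * belief * (x - means)
        beliefs.append(belief.copy())
    return means, np.array(beliefs)
```

With a sticky two-state transition matrix and observations near one state’s mean, the belief concentrates on that state and only its parameter is substantially updated — the gating by posterior state probability is what makes the same prediction error influential in one context and ignored in another.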

View a recording of this session here.

Links to relevant work:
https://pubmed.ncbi.nlm.nih.gov/34144114/
https://pubmed.ncbi.nlm.nih.gov/35105677/
https://pubmed.ncbi.nlm.nih.gov/34193556/

20 April 2022: Rapid and reliable digital phenotyping using computational modeling, machine learning, and mobile technology

Woo-Young Ahn
Department of Psychology
Seoul National University

Machine learning has the potential to facilitate the development of computational methods that improve the measurement of cognitive and mental functioning, and adaptive design optimization (ADO) is a promising machine-learning method that might yield rapid, precise, and reliable markers of individual differences. In this talk, I will first discuss the importance of (bio)marker reliability. Then, I will present a series of studies that applied ADO to decision-making tasks and to the development of ADO-based digital phenotypes for addiction and related behaviors. Lastly, I will discuss other promising approaches that might allow us to develop (bio)markers with clinical utility.
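The heart of adaptive design optimization is choosing, on every trial, the design that is expected to be most informative about the model parameters. A minimal sketch of that selection step — a simple logistic psychometric model with a discrete parameter grid, not the speaker’s actual implementation; all names are illustrative:

```python
import numpy as np

def entropy(p):
    """Binary entropy, elementwise."""
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -(p * np.log(p) + (1 - p) * np.log(1 - p))

def best_design(prior, thetas, designs):
    """Pick the design with maximal expected information gain, i.e. the
    mutual information between the parameter and the binary response."""
    # p(y=1 | theta, d) for every (theta, design) pair: logistic curve
    p_resp = 1.0 / (1.0 + np.exp(-(designs[None, :] - thetas[:, None])))
    # Marginal response probability under the current prior, per design
    p_marg = prior @ p_resp
    # Mutual information = marginal entropy - expected conditional entropy
    mi = entropy(p_marg) - prior @ entropy(p_resp)
    return designs[np.argmax(mi)], mi

def update(prior, thetas, d, y):
    """Bayesian update of the parameter grid after observing response y."""
    p1 = 1.0 / (1.0 + np.exp(-(d - thetas)))
    post = prior * (p1 if y else 1 - p1)
    return post / post.sum()
```

Alternating `best_design` and `update` concentrates the posterior in far fewer trials than a fixed design sequence, which is what makes ADO attractive for rapid phenotyping.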

View a recording of this session here.

6 April 2022: Trial by trial model fitting workshop, part II: Model Comparison

Yael Niv
Princeton Neuroscience Institute
Princeton University

This is a continuation of the model-fitting workshop held on February 23rd. In this second part of the workshop, we will discuss how to compare models and use data to determine which of several alternative models best accounts for behavior.
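One standard family of tools for this kind of comparison penalizes a model's maximized likelihood by its number of free parameters. A minimal sketch using AIC and BIC (the workshop may cover other criteria as well; the model names and numbers below are made up for illustration):

```python
import numpy as np

def aic(log_lik, k):
    """Akaike information criterion: 2k - 2 * max log-likelihood."""
    return 2 * k - 2 * log_lik

def bic(log_lik, k, n):
    """Bayesian information criterion: k * ln(n) - 2 * max log-likelihood."""
    return k * np.log(n) - 2 * log_lik

def compare(models, n):
    """models: dict of name -> (max log-likelihood, number of free parameters).
    n: number of data points (e.g. trials). Lower scores are better."""
    return {name: {"AIC": aic(ll, k), "BIC": bic(ll, k, n)}
            for name, (ll, k) in models.items()}
```

For example, `compare({"win-stay": (-120.3, 1), "RL": (-110.7, 2)}, n=200)` asks whether the two-parameter model's better fit justifies its extra parameter; BIC penalizes complexity more heavily than AIC as the number of trials grows.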

View slides from this session here.

View a recording of this session here.