Department of Psychiatry
Columbia University Irving Medical Center
It is an open question how humans construct the subjective value of complex stimuli, such as artistic paintings or photographs. While great progress has been made toward understanding how the brain updates the value of known stimuli, e.g., through reinforcement learning, little is known about how value arises in the brain in the first place. Here, we propose that the brain constructs the value of a novel stimulus by extracting and assembling common features shared across stimuli. Notably, because those features are shared across a broad range of stimuli, we show that simple linear regression in the feature space can serve as a single mechanism for constructing value across stimulus domains. In large-scale behavioral experiments with human participants, we show that a model of feature abstraction and linear summation can predict the subjective value of paintings and photographs, as well as of shopping items whose values change according to different goals. The model generalizes remarkably across stimulus types and participants: for example, when trained on liking ratings for photographs, it successfully predicts ratings for a completely different set of art paintings. We also show that these general features emerge in a deep convolutional neural network without explicit training on the features, suggesting that features relevant for value computation could arise spontaneously. Furthermore, using fMRI, we found evidence that the brain performs value computation hierarchically, transforming low-level visual features into high-level abstract features, which are in turn transformed into value. Our findings suggest that feature-based value computation may be a general neural principle that enables flexible and reliable valuation of a wide range of complex stimuli.
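The core computational claim, that subjective value is a weighted linear sum over shared stimulus features, can be illustrated with a minimal sketch. The feature vectors, weights, and noise level below are hypothetical stand-ins (the actual features and ratings come from the experiments described above); the sketch only shows how linear regression in a shared feature space can predict ratings for held-out stimuli.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: each stimulus (e.g., a painting) is represented by a
# vector of shared features; the specific features here are synthetic.
n_train, n_test, n_features = 200, 50, 8
X_train = rng.normal(size=(n_train, n_features))
X_test = rng.normal(size=(n_test, n_features))

# Linear-summation hypothesis: ratings are a weighted sum of features
# plus rating noise. The weights and noise scale are assumptions.
true_w = rng.normal(size=n_features)
y_train = X_train @ true_w + rng.normal(scale=0.5, size=n_train)
y_test = X_test @ true_w + rng.normal(scale=0.5, size=n_test)

# Fit the weights by ordinary least squares (linear regression in
# feature space), with an intercept column appended.
A_train = np.column_stack([X_train, np.ones(n_train)])
w_hat, *_ = np.linalg.lstsq(A_train, y_train, rcond=None)

# Predict ratings for held-out stimuli and check out-of-sample accuracy.
y_pred = np.column_stack([X_test, np.ones(n_test)]) @ w_hat
r = np.corrcoef(y_pred, y_test)[0, 1]
print(f"out-of-sample correlation: {r:.2f}")
```

Because the features are shared across stimulus domains, the same fitted weights can in principle be applied to feature vectors from a different stimulus set, which is the generalization the abstract reports.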