
Giancarlo La Camera


Assistant Professor
Ph.D. University of Bern

Phone: (631) 632-9109 - office
Phone: (631) 632-9197 - lab
Fax: (631) 632-6661

Life Sciences Building
Office: Room 513


Giancarlo La Camera studied Theoretical Physics at the University of Rome and received a Laurea (M. Sci.) in 1999. He went on to obtain a PhD in Neurobiology from the University of Bern in 2003. Between 2004 and 2008 he was a visiting fellow at the National Institute of Mental Health, where he performed research on the neural basis of complex cognitive functions. He then returned to the University of Bern where he focused on the topic of reinforcement learning in populations of spiking neurons. In early 2011 he joined the faculty of Stony Brook University as an Assistant Professor of Neurobiology & Behavior.

Research Interests/Expertise

My laboratory is interested in the neural underpinnings of reward-based learning and decision-making; how these processes depend on contextual factors; and how they shape our processing of relevant stimuli (i.e., how we ‘see’ and interpret the world). Reinforcement Learning, the theory of learning to predict rewarding outcomes, investigates the basis of how we make reward-based decisions. Such decisions are of the utmost importance in everyday life and are also at the core of much standard laboratory practice (think, e.g., of Pavlovian or instrumental conditioning). We pursue a biologically plausible theory of reinforcement learning, i.e., a theory in which populations of spiking neurons carry out the computation in accord with the principles of biophysics.
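In its simplest textbook form, learning to predict a rewarding outcome can be written as a delta-rule (Rescorla-Wagner) update, in which a value estimate is nudged on each trial by the reward prediction error. The sketch below is a generic illustration of that idea only, not a model used in the lab; the function name and parameter values are invented for the example.

```python
def rescorla_wagner(rewards, alpha=0.1):
    """Learn to predict the reward associated with a single stimulus.

    rewards: sequence of rewards (e.g., 0 or 1) received on each trial.
    alpha:   learning rate (illustrative value).
    Returns the trajectory of the value estimate V across trials.
    """
    V = 0.0
    history = []
    for r in rewards:
        delta = r - V        # reward prediction error
        V += alpha * delta   # move the estimate toward the observed outcome
        history.append(V)
    return history

# A stimulus rewarded on every trial: V converges toward 1.
values = rescorla_wagner([1] * 50, alpha=0.2)
```

After 50 consistently rewarded trials the estimate is close to 1; with probabilistic rewards the same rule converges (on average) to the reward probability, which is the sense in which the system "learns to predict rewarding outcomes."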

In this context, one question of special interest to us is that of ‘state’ formation. Examples of ‘states’ could be the internal state of the subject; an external stimulus; or, in general, the combination of all factors that together contribute to a decision. In theory, fully addressing the question of how neural circuits learn to make decisions requires solving the problem of state formation first. This is a formidable problem, one that does not yet seem ripe for investigation. A simpler question, one that may be amenable to analysis, is how populations of spiking neurons can learn to identify relevant stimuli and extract meaningful segments from a continuous sensory stream. This process, and how it is affected by context, is currently a major focus of our research.
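The basic building block in spiking-network models of this kind is commonly the leaky integrate-and-fire neuron driven by a noisy, ‘in vivo-like’ input current (cf. the publications below). The following is a minimal, self-contained simulation sketch of a single such unit; the function name and all parameter values are illustrative choices for this example, not the lab's.

```python
import math
import random

def lif_spike_train(mu, sigma, T=1.0, dt=1e-4, tau=0.02,
                    v_thresh=1.0, v_reset=0.0, seed=0):
    """Simulate a leaky integrate-and-fire neuron for T seconds.

    mu:    mean of the input current (drives the membrane toward mu * tau).
    sigma: amplitude of the Gaussian input noise.
    Returns the list of spike times (in seconds).
    """
    rng = random.Random(seed)
    v, spikes = v_reset, []
    for step in range(int(T / dt)):
        noise = sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        v += dt * (-v / tau + mu) + noise  # leaky integration of noisy input
        if v >= v_thresh:                  # threshold crossing -> spike
            spikes.append(step * dt)
            v = v_reset                    # reset after the spike
    return spikes

# Suprathreshold mean drive (mu * tau > v_thresh): the neuron fires regularly,
# with spike-time jitter contributed by the noise term.
spikes = lif_spike_train(mu=80.0, sigma=0.5)
rate = len(spikes) / 1.0  # firing rate in Hz over the 1-second run
```

With a subthreshold mean drive (mu * tau below threshold) and sigma = 0, the same code produces no spikes at all, which is why the noise statistics of the input, and not just its mean, shape the output firing rate in such models.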

Context is a powerful modulator of the way we learn to make decisions. We are very susceptible to factors such as the way in which an option is framed; how much it cost to reach a particular state; or whether we were hungry or sated when a choice between two different foods was given to us. Biologically relevant theories of context-dependent learning are in their infancy, and my lab is developing tools and ideas to further their development.

Our main efforts revolve around the central question of how to build powerful representations of external stimuli and events that are relevant for behavior. What are the neural substrates of such complex representations? And how are they learned? These two questions lead to two main research directions: 1) how is information represented and processed in neural circuits, and 2) how is learning achieved in the same circuits? We address these questions by analyzing behavioral and neural data and by building mathematical models that are biologically plausible. In addition to seeking a theoretical understanding of these issues, we team up with other groups in the Department of Neurobiology and elsewhere to test our model predictions against empirical data.


  • Representative Publications
    • A. Bernacchia*, G. La Camera*, and F. Lavigne*, A latch on priming, Front Psychol  5:869, 2014
    • A. Jezzini*, L. Mazzucato*, G. La Camera and A. Fontanini, Processing of hedonic and chemosensory features of taste in medial prefrontal and insular networks, J Neurosci 33(48): 18966-18978, 2013
    • G. La Camera, R. Urbanczik and W. Senn, Stimulus detection and decision making via spike-based reinforcement learning, Proceedings of the 1st Multidisciplinary Conference on Reinforcement Learning and Decision Making, pp. 183-187, Princeton NJ, 2013
    • T. Minamimoto, G. La Camera, and B.J. Richmond, Measuring and Modeling the Interaction Among Reward Size, Delay to Reward, and Satiation Level on Motivation in Monkeys, J Neurophysiol 101:437-447, 2009
    • M. Giugliano, G. La Camera, S. Fusi and W. Senn, The response of cortical neurons to in vivo-like input current: theory and experiment II. Time-varying and spatially distributed inputs, Biol Cybern 99(4-5):303-18, 2008
    • G. La Camera, M. Giugliano, W. Senn and S. Fusi, The response of cortical neurons to in vivo-like input current: theory and experiment I. Noisy inputs with stationary statistics, Biol Cybern 99(4-5):279-301, 2008
    • G. La Camera and B.J. Richmond, Modeling the violation of reward maximization and invariance in reinforcement schedules, PLoS Comput Biol 4(8): e1000131, 2008
    • G. La Camera*, A. Rauch*, D. Thurbon, H-R Lüscher, W. Senn and S. Fusi, Multiple time scales of temporal response in pyramidal and fast spiking cortical neurons, J Neurophysiol 96(6): 3448-3464, 2006
    • E. Curti, G. Mongillo, G. La Camera and D.J. Amit, Mean-Field and capacity in realistic networks of spiking neurons storing sparsely coded random memories, Neural Comput 16(12): 2597-2637, 2004
    • G. La Camera, A. Rauch, H-R Lüscher, W. Senn and S. Fusi, Minimal models of adapted neuronal response to in vivo-like input currents, Neural Comput 16(10): 2101-2124, 2004
    • A. Rauch*, G. La Camera*, H-R Lüscher, W. Senn and S. Fusi, Neocortical pyramidal cells respond as integrate-and-fire neurons to in vivo-like input currents, J Neurophysiol 90(3): 1598-1612, 2003