Medicine LibreTexts

1.3: The Computational Approach


An important feature of our journey through the brain is that we use the vehicle of computer models to understand cognitive neuroscience (i.e., Computational Cognitive Neuroscience). These computer models enrich the learning experience in important ways -- we routinely hear from our students that they didn't really understand anything until they pulled up the computer model and played around with it for a few hours. Being able to manipulate and visualize the brain using a powerful 3D graphical interface brings abstract concepts to life, and enables many experiments to be conducted easily, cleanly, and safely in the comfort of your own laptop. This stuff is fun, like a video game -- think "sim brain", as in the popular SimCity game from a few years ago.

At a more serious level, the use of computer models to understand how the brain works has been a critical contributor to scientific progress in this area over the past few decades. A key advantage of computer modeling is its ability to wrestle with complexity that often proves daunting to otherwise unaided human understanding. How could we possibly hope to understand how billions of neurons, each interacting with tens of thousands of others, produce complex human cognition, just by talking in vague verbal terms or drawing simple diagrams on paper? Certainly, nobody questions the need for computer models in climate science, where they are essential for making accurate predictions and understanding how the many complex factors interact with each other. The situation is only more dire in cognitive neuroscience.

Nevertheless, in all fields where computer models are used, there is a fundamental distrust of the models. They are themselves complex, created by people, and have no necessary relationship to the real system in question. How do we know these models aren't just completely made-up fantasies? The answer seems simple: the models must be constrained by data at as many levels as possible, and they must generate predictions that can then be tested empirically. In what follows, we discuss different approaches that people might take to this challenge -- this is intended to give a sense of the scientific approach behind the work described in this book. As a student, this may seem less immediately relevant, but it can offer some perspective on how science really works.

In an ideal world, one might imagine that the neurons in the neural model would be mirror images of those in the actual brain, replicating as much detail as technical limitations on measurement allow. They would be connected exactly as they are in the real brain. And they would produce detailed behaviors that replicate exactly how the organism in question behaves across a wide range of different situations. Then you would feel confident that your model is sufficiently "real" to trust some of its predictions.

But even if this were technically feasible, you might wonder whether the resulting system would be any more comprehensible than the brain itself! In other words, we would only have succeeded in transporting the fundamental mysteries from the brain into our model, without developing any actual understanding about how the thing really works. From this perspective, the most important thing is to develop the simplest possible model that captures the most possible data -- this is basically the principle of Ockham's razor, which is widely regarded as a central principle for all scientific theorizing.

In some cases, it is easy to apply this razor to cut away unnecessary detail. Certainly many biological properties of neurons are irrelevant for their core information processing function (e.g., cellular processes that are common to all biological cells, not just neurons). But often it comes down to a judgment call about what phenomena you regard as being important, which will vary depending on the scientific questions being addressed with the model.

The approach taken for the models in this book is to find some kind of happy (or unhappy) middle ground between biological detail and cognitive functionality. This middle ground is unhappy to the extent that researchers concerned with either end of this continuum are dissatisfied with the level of the models. Biologists will worry that our neurons and networks are overly simplified. Cognitive psychologists will be concerned that our models are too biologically detailed, and that much simpler models can capture the same cognitive phenomena. We who relish this "golden middle" ground are happy when we've achieved important simplifications on the neural side, while still capturing important cognitive phenomena.

This level of modeling explores how consideration of neural mechanisms informs the workings of the mind, and reciprocally, how cognitive and computational constraints afford a richer understanding of the problems these mechanisms evolved to solve. It can thus make predictions for how a cognitive phenomenon (e.g., memory interference) is affected by changes at the neural level (due to disease, pharmacology, or genetics) or by changes in cognitive task parameters. The model can then be tested, falsified, and refined. In this sense, a model of cognitive neuroscience is just like any other 'theory', except that it is explicitly specified and formalized, forcing the modeler to be accountable for their theory if/when the data don't match up. Conversely, models can sometimes show that an existing theory faced with challenging data may hold up after all, due to a particular dynamic that might not be apparent from verbal theorizing alone.
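As a rough illustration of what this middle ground can look like, the sketch below implements a single rate-coded point neuron: it keeps the mechanistic flavor of integrating weighted synaptic inputs through a saturating nonlinearity, while omitting all biophysical detail (ion channels, membrane dynamics, spike timing). The function, gain, and threshold values here are purely illustrative assumptions, not the actual equations used in this book.

```python
import math

def unit_activity(inputs, weights, gain=5.0, threshold=0.5):
    """Toy rate-coded point neuron: weighted inputs are integrated
    and passed through a logistic sigmoid, yielding a firing rate
    in [0, 1]. Gain and threshold are illustrative parameters."""
    net = sum(x * w for x, w in zip(inputs, weights))        # integrate inputs
    return 1.0 / (1.0 + math.exp(-gain * (net - threshold)))  # saturating output

# Weak input drive yields a low rate; strong drive a high rate.
low = unit_activity([0.2, 0.1], [1.0, 1.0])
high = unit_activity([0.8, 0.7], [1.0, 1.0])
print(low < 0.5 < high)  # True
```

Compare this single line of integration-plus-nonlinearity to a biophysical model with dozens of coupled differential equations per neuron: the simplification is exactly what makes networks of thousands of such units comprehensible and fast to simulate.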

Ultimately, it comes down to aesthetic or personality-driven factors, which cause different people to prefer different overall strategies to computer modeling. Each of these approaches has value, and science would not progress without them, so it is fortunate that people vary in their personalities and thus end up pursuing different things. Some people value simplicity, elegance, and cleanliness most highly -- these people will tend to favor abstract mathematical (e.g., Bayesian) cognitive models. Other people value biological detail above all else, and don't feel comfortable straying beyond the most firmly established facts -- they will prefer to make highly elaborated individual neuron models incorporating everything that is known. To live in the middle, you need to be willing to take some risks, and to value most highly the process of emergence, where complex phenomena can be shown to emerge from simpler underlying mechanisms. The criteria for success here are murkier and more subjective -- basically, it boils down to whether the model is simple enough to be comprehensible, but not so simple that its behavior is trivial, or so fully transparent that it doesn't seem to be doing you any good in the first place.

One last note on this issue is that the different levels of models are not mutually exclusive. Both low-level biophysical and high-level cognitive models have made enormous contributions to understanding and analysis in their respective domains (much of which serves as a basis for further simplification or elaboration in this book). In fact, much ground can be (and to some extent already has been) gained by attempts to understand one level of modeling in terms of the other. At the end of the day, linking from molecule to mind spans multiple levels of analysis, and, like linking the laws of particle physics to planetary motion, it requires multiple formal tools.
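To make the idea of emergence concrete, here is a toy sketch (a hypothetical illustration, not a model from this book) in the style of a classic Hopfield network: eight simple units, each obeying a purely local update rule, collectively restore a stored pattern from a corrupted cue. Pattern completion emerges from the interactions among units; it is not programmed anywhere explicitly.

```python
import numpy as np

# Store one pattern in Hebbian weights: each connection just
# records whether its two units were active together.
pattern = np.array([1, -1, 1, -1, 1, -1, 1, -1])
W = np.outer(pattern, pattern).astype(float)  # Hebbian outer product
np.fill_diagonal(W, 0.0)                      # no self-connections

# Degrade the cue by flipping two of the eight elements.
cue = pattern.copy()
cue[0] = -cue[0]
cue[3] = -cue[3]

# Each unit repeatedly takes the sign of its summed input -- a
# purely local rule with no knowledge of the global pattern.
state = cue.copy()
for _ in range(5):
    state = np.sign(W @ state).astype(int)
    state[state == 0] = 1

print(np.array_equal(state, pattern))  # True: the full pattern re-emerges
```

Nothing in the update rule "knows" about the stored pattern as a whole, yet the network settles into it -- a small-scale example of the kind of emergent behavior that middle-ground models rely on.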