
1.5: Why Should We Care about the Brain?


    One of the things you'll discover on this journey is that Computational Cognitive Neuroscience is hard. There is a lot of material at multiple levels to master. We get into details of ion channels in neurons, names of pathways in different parts of the brain, effects of lesions to different brain areas, and patterns of neural activity, on top of all the details about behavioral paradigms and reaction time patterns. Wouldn't it just be a lot simpler if we could ignore all these brain details, and just focus on what we really care about -- how does cognition itself work? By way of analogy, we don't need to know much of anything about how computer hardware works to program in Visual Basic or Python, for example. Vastly different kinds of hardware can all run the same programming languages and software. Can't we just focus on the software of the mind and ignore the hardware?

    Exactly this argument has been promulgated in many different forms over the years, and indeed has had a bit of a resurgence recently in the form of abstract Bayesian models of cognition. David Marr (Marr, 1977) was perhaps the most influential in arguing that one can somewhat independently examine cognition at three different levels:

    • Computational -- what computations are being performed? What information is being processed?
    • Algorithmic -- how are these computations being performed, in terms of a sequence of information processing steps?
    • Implementational -- how does the hardware actually implement these algorithms?

    This way of dividing up the problem has been used to argue that one can safely ignore the implementation (i.e., the brain), and focus on the computational and algorithmic levels, because, like in a computer, the hardware really doesn't matter so much.
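    To make these levels concrete, consider a minimal toy sketch (using sorting as a stand-in for a cognitive function; the example is purely illustrative and not from Marr): the computational level specifies what is computed, two different algorithms both satisfy that specification, and the same code runs unchanged on very different conventional hardware.

```python
# Toy illustration of Marr's three levels, using sorting as the "cognition."

# Computational level: WHAT is computed -- given a list of numbers, return the
# same numbers in ascending order.

# Algorithmic level: HOW -- two different procedures that meet the same spec.

def insertion_sort(xs):
    """O(n^2): grow a sorted prefix one element at a time."""
    xs = list(xs)
    for i in range(1, len(xs)):
        j = i
        while j > 0 and xs[j - 1] > xs[j]:
            xs[j - 1], xs[j] = xs[j], xs[j - 1]
            j -= 1
    return xs

def merge_sort(xs):
    """O(n log n): recursively split, then merge the sorted halves."""
    if len(xs) <= 1:
        return list(xs)
    mid = len(xs) // 2
    left, right = merge_sort(xs[:mid]), merge_sort(xs[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]

# Implementational level: the same code runs on an Intel laptop, an ARM phone,
# or a cloud server, because that hardware was engineered to be functionally
# equivalent.
assert insertion_sort([5, 2, 9, 1]) == merge_sort([5, 2, 9, 1]) == [1, 2, 5, 9]
```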

    However, the key oversight of this approach is that the reason hardware doesn't matter in standard computers is that they are all specifically designed to be functionally equivalent in the first place! Sure, there are lots of different details, but they are all implementing a basic serial Von Neumann architecture. What if the brain has a vastly different architecture, which makes some algorithms and computations work extremely efficiently, while it cannot even support others? Then the implementational level would matter a great deal.

    There is every reason to believe that this is the case. The brain is not at all like a general-purpose computational device. Instead, it is really a custom piece of hardware that implements a very specific set of computations in massive parallelism across its 20 billion neurons. In this respect, it is much more like the specialized graphics processing units (GPUs) in modern computers, which are custom-designed to efficiently carry out, in massive parallelism, the specific computations necessary to render complex 3D graphics. More generally, the field of computer science is discovering that parallel computation is exceptionally difficult to program, and one has to completely rethink the algorithms and computations to obtain efficient parallel computation. Thus, the hardware of the brain matters a huge amount, and provides many important clues as to what kind of algorithms and computations are being performed.
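    As a rough sketch of how different the parallel framing is (a toy example with made-up sizes, not a model of any actual brain circuit), the same "update every neuron" computation can be written as an explicit serial loop, one multiply-add at a time, or as a single vectorized operation of the kind that maps naturally onto GPU-style parallel hardware:

```python
import numpy as np

# Toy "network": n_units neurons, each computing a weighted sum of n_inputs
# input activities passed through a sigmoid. Sizes are arbitrary.
rng = np.random.default_rng(0)
n_inputs, n_units = 200, 200
inputs = rng.random(n_inputs)
weights = rng.normal(0.0, 0.1, size=(n_units, n_inputs))

def update_serial(weights, inputs):
    """Von Neumann style: one neuron at a time, one multiply-add at a time."""
    acts = np.zeros(weights.shape[0])
    for i in range(weights.shape[0]):
        net = 0.0
        for j in range(len(inputs)):
            net += weights[i, j] * inputs[j]
        acts[i] = 1.0 / (1.0 + np.exp(-net))
    return acts

def update_parallel(weights, inputs):
    """Brain / GPU style: every neuron's net input computed "at once"
    (here simulated by a single vectorized matrix-vector product)."""
    return 1.0 / (1.0 + np.exp(-(weights @ inputs)))

assert np.allclose(update_serial(weights, inputs), update_parallel(weights, inputs))
```

    The numerical result is identical; the point is that the second formulation is the natural one for hardware (biological or GPU) in which all of those operations really do happen at the same time, and algorithms that cannot be expressed this way gain little from such hardware.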

    Historically, the "ignore the brain" approaches have taken an interesting trajectory. In the 1960s through the early 1990s, the dominant approach was to assume that the brain actually operates much like a standard computer, and researchers tended to use concepts like logic and symbolic propositions in their cognitive models. Since then, a more statistical metaphor has become popular, particularly in the form of the Bayesian probabilistic framework. This is an advance in many respects, as it emphasizes the graded nature of information processing in the brain (e.g., integrating various graded probabilities to arrive at an overall estimate of the likelihood of some event), as contrasted with hard symbols and logic, which didn't seem to be a good fit with the way that most of cognition actually operates. However, the actual mathematics of Bayesian probability computations are not a particularly good fit to how the brain operates at the neural level, and much of this research proceeds without much consideration for how the brain actually functions. Instead, a version of Marr's computational level is adopted, by assuming that whatever the brain is doing, it must be at least close to optimal, and Bayesian models can often tell us how to optimally combine uncertain pieces of information. Regardless of the validity of this optimality assumption, it is definitely useful to know what the optimal computations are for given problems, so this approach certainly has a lot of value in general. However, optimality is typically conditional on a number of assumptions, and it is often difficult to decide among the competing assumptions.
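    As a concrete instance of what "optimally combining uncertain pieces of information" means (a standard textbook calculation, shown here only as an illustration), two independent noisy cues about the same quantity, each modeled as a Gaussian, are optimally combined by weighting each cue by its reliability (inverse variance):

```python
import numpy as np

# Hypothetical example: a visual and an auditory estimate of where a sound
# source is, each a Gaussian likelihood with its own uncertainty. With a flat
# prior, Bayes' rule reduces to a precision-weighted average.

def combine_gaussian_cues(mu1, sigma1, mu2, sigma2):
    """Posterior mean and sd for two independent Gaussian cues, flat prior."""
    w1, w2 = 1.0 / sigma1**2, 1.0 / sigma2**2       # precisions (reliabilities)
    mu_post = (w1 * mu1 + w2 * mu2) / (w1 + w2)     # reliability-weighted mean
    sigma_post = np.sqrt(1.0 / (w1 + w2))           # combined estimate is more precise
    return mu_post, sigma_post

# Vision says 10 degrees (sd = 1); audition says 20 degrees (sd = 4):
mu, sigma = combine_gaussian_cues(10.0, 1.0, 20.0, 4.0)
print(round(mu, 1), round(sigma, 2))   # ~10.6 0.97 -- pulled mostly toward the reliable cue
```

    Note that this tells us what the optimal answer is; it says nothing, by itself, about how neurons would compute it, which is exactly the gap discussed above.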

    Figure \(1.2\): Models that are relatively unconstrained, e.g., by not addressing biological constraints, or detailed behavioral data, are like jigsaw puzzles of a featureless blue sky -- very hard to solve -- you just don't have enough clues to how everything fits together.

    If you really want to know for sure how the brain is actually producing cognition, clearly you need to know how the brain actually functions. Yes, this is hard. But it is not impossible, and the state of neuroscience these days is such that there is a wealth of useful information to inform all manner of insights into how the brain actually works. It is like working on a jigsaw puzzle -- the easiest puzzles are full of distinctive textures and junk everywhere, so you can really see when the pieces fit together (Figure 1.3). The rich tableau of neuroscience data provides all this distinctive junk to constrain the process of puzzling together cognition. In contrast, abstract, purely cognitive models are like a jigsaw puzzle with only a big featureless blue sky (Figure 1.2). You only have the logical constraints of the piece shapes, which are all highly similar and difficult to discriminate. It takes forever.

    Figure \(1.3\): Models that are strongly constrained by biological and detailed behavioral data are like jigsaw puzzles with a rich, distinctive image -- much easier to solve, because the data provide many clues to how everything fits together.

    Two of the most satisfying instances of the pieces coming together to complete a puzzle are:

    • The detailed biology of the hippocampus, including high levels of inhibition and broad diffuse connectivity, fits together with its unique role in rapidly learning new episodic information, and the remarkable data from patient HM, who had his hippocampus resected to treat intractable epilepsy. Through computational models in the Memory Chapter, we can see that these biological details produce high levels of pattern separation, which keeps memories highly distinct, and thus enable rapid learning without creating catastrophic levels of interference (a toy sketch of pattern separation follows this list).
    • The detailed biology of the connections between dopamine, basal ganglia, and prefrontal cortex fits together with the computational requirements for making decisions based on prior reward history, and learning what information is important to hold on to, versus what can be ignored. Computational models in the Executive Function Chapter show that the dopamine system can exhibit a kind of time travel needed to translate later utility into an earlier decision of what information to maintain, and those in the Motor Chapter show that the effects of dopamine on the basal ganglia circuitry are just right to facilitate decision making based on both positive and negative outcomes. And the interaction between the basal ganglia and the prefrontal cortex enables basal ganglia decisions to influence what is maintained and acted upon in the prefrontal cortex. There are a lot of pieces here, but the fact that they all fit together so well into a functional model -- and that many aspects of them have withstood the test of direct experimentation -- makes it that much more likely that this is really what is going on (a generic sketch of this reward-based "time travel" also follows this list).
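    The pattern separation claim in the first bullet can be illustrated with a toy sketch (random binary codes with made-up sizes, not the actual hippocampal model from the Memory Chapter): when strong inhibition forces only a small fraction of units to be active, the codes for unrelated memories barely overlap, so learning one does little damage to the others.

```python
import numpy as np

# Toy pattern separation: each "memory" is a random binary pattern over n_units,
# with only a fraction of units active. Strong inhibition = sparse = few active.
rng = np.random.default_rng(1)
n_units = 1000

def random_pattern(frac_active):
    """Random binary pattern with the given fraction of active units."""
    pattern = np.zeros(n_units, dtype=bool)
    pattern[rng.choice(n_units, size=int(frac_active * n_units), replace=False)] = True
    return pattern

def mean_overlap(frac_active, n_pairs=200):
    """Average fraction of one pattern's active units shared with another."""
    overlaps = [
        np.sum(random_pattern(frac_active) & random_pattern(frac_active))
        / int(frac_active * n_units)
        for _ in range(n_pairs)
    ]
    return float(np.mean(overlaps))

print("dense  (25% active):", round(mean_overlap(0.25), 3))   # ~0.25 overlap
print("sparse ( 2% active):", round(mean_overlap(0.02), 3))   # ~0.02 overlap
```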
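    The "time travel" in the second bullet can likewise be sketched with a bare-bones temporal-difference (TD) learning loop (a generic textbook TD(0) example, not the specific dopamine / basal ganglia models of those chapters): over repeated trials, a reward that arrives only at the end of a sequence comes to be predicted at earlier and earlier steps, so the learned values at the start of the sequence carry information about what will happen much later.

```python
import numpy as np

# Generic TD(0) sketch: a chain of 5 states, reward of 1.0 only after the last
# one. The TD error plays the role of a dopamine-like teaching signal.
n_states, alpha, gamma = 5, 0.1, 0.9
V = np.zeros(n_states)                    # learned reward prediction per state

for episode in range(500):
    for s in range(n_states):
        reward = 1.0 if s == n_states - 1 else 0.0
        v_next = V[s + 1] if s + 1 < n_states else 0.0
        td_error = reward + gamma * v_next - V[s]   # the "dopamine" signal
        V[s] += alpha * td_error                    # value creeps backward in time

print(np.round(V, 2))   # roughly [0.66 0.73 0.81 0.9 1.] -- early states predict the late reward
```

    The reward information has effectively traveled backward to the earliest state, which is the kind of credit assignment needed to decide, at the time the information first arrives, whether it is worth maintaining.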



    This page titled 1.5: Why Should We Care about the Brain? is shared under a CC BY-SA 3.0 license and was authored, remixed, and/or curated by R. C. O'Reilly, Y. Munakata, M. J. Frank, T. E. Hazy, & Contributors via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.