
10.6: Dynamic Updating of PFC Active Memory- The SIR Model


    Having seen in the Stroop and A-not-B models how sustained PFC activity can influence behavior through top-down biasing, we now turn to more complex aspects of PFC function: the dynamic gating of PFC representations by the basal ganglia (BG), which enables information to be rapidly updated and then robustly maintained. As a first introduction to this functionality, captured by the PBWM model, we use the simple SIR (Store, Ignore, Recall) task. Here is a sample sequence of trials in this task:

    • S - A -- this means that the network should store the A stimulus for later recall -- network responds A.
    • I - C -- ignore the C stimulus, but you still have to respond to it -- network responds C.
    • I - B -- ignore the B stimulus -- network responds B.
    • R -- recall the most recently stored stimulus -- network responds A.
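    The input-output logic of this task can be summarized in a few lines of code. The following is a hypothetical sketch (not part of the actual simulation), in which a single variable stands in for the actively maintained PFC representation:

    ```python
    # Hypothetical sketch of the SIR task logic. On Store, the cued stimulus is
    # written to the maintenance buffer (maintenance gating fires Go); on Ignore,
    # the buffer is left untouched (NoGo); on Recall, the buffer contents are
    # output (output gating).

    def sir_response(trials):
        """Given (task, stimulus) trials, return the correct responses.

        task is 'S' (store), 'I' (ignore), or 'R' (recall);
        stimulus is a letter, or None on recall trials.
        """
        buffer = None            # stands in for maintained PFC content
        responses = []
        for task, stimulus in trials:
            if task == 'S':
                buffer = stimulus            # Go: update PFC with the stimulus
                responses.append(stimulus)
            elif task == 'I':
                responses.append(stimulus)   # respond, but NoGo: do not update
            elif task == 'R':
                responses.append(buffer)     # output-gate the stored item
        return responses

    # The example sequence from the text:
    print(sir_response([('S', 'A'), ('I', 'C'), ('I', 'B'), ('R', None)]))
    # ['A', 'C', 'B', 'A']
    ```

    Note that on the final Recall trial the correct response (A) depends on an input presented three trials earlier, which is exactly what makes robust maintenance necessary.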

    The BG maintenance gating system has to learn to fire Go to drive updating of PFC on the Store trials to encode the associated stimulus for later recall. It also must learn to fire NoGo to the ignore stimuli, so they don't overwrite previously stored information. Finally, on recall trials, the output BG gating mechanism should drive output of the stored information from PFC. It is critical to appreciate that the network starts out knowing nothing about the semantics of these various inputs, and has to learn entirely through trial-and-error what to do with the different inputs.
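    To get a feel for how such a gating policy could be acquired purely from reward feedback, here is a deliberately simplified trial-and-error sketch, using a tabular epsilon-greedy learner in place of PBWM's actual BG learning mechanisms (all names and parameter values here are illustrative assumptions, not part of the model):

    ```python
    # Toy trial-and-error learning of the maintenance gating policy: each task
    # cue ('S' or 'I') has a learned value for Go (update the buffer) vs. NoGo
    # (maintain it). Reward arrives only at recall, based on whether the
    # maintained item matches the originally stored one, so the learner must
    # discover the cue semantics from outcomes alone.
    import random

    def train_gating(n_epochs=20000, lr=0.1, eps=0.25, seed=0):
        rng = random.Random(seed)
        q = {('S', 'Go'): 0.0, ('S', 'NoGo'): 0.0,
             ('I', 'Go'): 0.0, ('I', 'NoGo'): 0.0}
        stimuli = ['A', 'B', 'C']
        for _ in range(n_epochs):
            target = rng.choice(stimuli)
            # one store trial followed by two ignore trials, then recall
            trials = [('S', target),
                      ('I', rng.choice(stimuli)),
                      ('I', rng.choice(stimuli))]
            buffer = None
            chosen = []
            for cue, stim in trials:
                if rng.random() < eps:                      # explore
                    act = rng.choice(['Go', 'NoGo'])
                else:                                       # exploit
                    act = 'Go' if q[(cue, 'Go')] >= q[(cue, 'NoGo')] else 'NoGo'
                if act == 'Go':
                    buffer = stim
                chosen.append((cue, act))
            reward = 1.0 if buffer == target else 0.0       # recall outcome
            for cue, act in chosen:                         # credit all gating acts
                q[(cue, act)] += lr * (reward - q[(cue, act)])
        return q

    q = train_gating()
    # After training, Go should dominate for Store cues and NoGo for Ignore cues.
    ```

    This is vastly simpler than the actual BG circuitry (no dopamine-based timing signals, no stripes, no output gating), but it illustrates the core point: the correct Go/NoGo policy for Store vs. Ignore cues is discoverable from delayed reward alone.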

    To see this learning unfold, open the SIR model and follow the directions from there. While we don't consider it here for simplicity, the same PBWM model, when augmented with multiple parallel stripes, can learn to separately update and maintain multiple pieces of information in working memory, and to retrieve the correct information when needed. A good example of this demand is the SIR-2 task, where instead of a single pair of store and recall task control signals, there are two such pairs (i.e., S1 and S2, and R1 and R2). Thus, the network has to learn to store the two stimuli in separate buffers, and to respond based on the information maintained in the correct buffer when cued with R1 vs. R2.
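    Extending the earlier hypothetical sketch to SIR-2 just means maintaining two independent buffers, with each store or recall cue gating only its own buffer, while ignore cues leave both untouched (again, this is an illustration of the task logic, not the simulation itself):

    ```python
    # Hypothetical SIR-2 task logic: two buffers stand in for two PFC stripes.
    # S1/S2 update only the cued stripe; R1/R2 output-gate only the cued stripe;
    # I leaves both stripes alone.

    def sir2_response(trials):
        buffers = {1: None, 2: None}
        responses = []
        for cue, stimulus in trials:
            kind = cue[0]
            slot = int(cue[1]) if len(cue) > 1 else None
            if kind == 'S':
                buffers[slot] = stimulus         # Go on the cued stripe only
                responses.append(stimulus)
            elif kind == 'I':
                responses.append(stimulus)       # NoGo on both stripes
            elif kind == 'R':
                responses.append(buffers[slot])  # output-gate the cued stripe
        return responses

    print(sir2_response([('S1', 'A'), ('S2', 'B'), ('I', 'C'),
                         ('R2', None), ('R1', None)]))
    # ['A', 'B', 'C', 'B', 'A']
    ```

    The selective-gating demand is visible in the last two trials: R2 must retrieve B without disturbing A, which R1 then retrieves, so the two stripes must be updated and read out independently.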

    10.6: Dynamic Updating of PFC Active Memory- The SIR Model is shared under a CC BY-SA license and was authored, remixed, and/or curated by O'Reilly, Munakata, Hazy & Frank.