Control of automatic processes:
A Connectionist Account of the Stroop Effect

Janet Wiles
Copyright © 1997

Table of Contents

  1. Introduction
  2. The Stroop Model
  3. Lab Exercises
  4. References

Introduction

The Stroop effect

The Stroop effect is an interference effect, usually demonstrated by asking a subject to view colour names written in coloured inks. Each word is written in a colour different from the colour it names: for example, the word green might be written in blue ink, and the word red in green ink. Reading the words aloud is little different from reading words printed in black ink - the colour of the ink is easy to ignore. It turns out to be much more difficult to name the colour of the ink while ignoring the word itself (try it with Figure 1) - much harder than naming a patch of colour. This asymmetry was first discovered and studied by Stroop in 1935 (hence the effect's name).

Since 1935, several explanations of the Stroop effect have been proposed. A plausible one is that a conflict occurs between the word and the ink colour - the word, which is read automatically, and the ink colour, which is the correct response (Glass & Holyoak, 1986). Other Stroop-like interference effects also occur, such as counting versus naming numbers (see Figure 2), and similar conflicts are assumed to arise between automatically and consciously activated representations. The question for a connectionist is what mechanism underlies such a conflict, if that is indeed the explanation.

Cohen, Dunbar and McClelland (1990) were interested in attention and automatic processing, and in particular in the theory that automaticity is a continuum rather than an all-or-nothing phenomenon. They proposed that automaticity is a matter of degree, with learning as a key factor: asymmetries in performance such as those observed in the Stroop task can be accounted for by differences in practice.

The Stroop Model

The Stroop model is a demonstration that asymmetric processes may be explainable in terms of learning mechanisms. The exercises in this section aim to replicate one of Cohen et al's (1990) experiments.

  • The architecture has several components (see Cohen et al, Figure 1):
    • Units: The Stroop network is composed of three layers of units, input, hidden (or intermediate) and output.
    • Links: The units are connected with weighted links, some of which are fixed at the start of the simulation, and others which are initially set to small random values. All weights from the input to hidden units are fixed, and those from the hidden to output units are modifiable.
    • Activation function: The input units are clamped to their initial values for the duration of each update of the network. The hidden and output units use the sigmoid activation function (see the Backprop chapter for details).
    • Learning rule: The modifiable weights are trained using the backprop learning rule (see the Backprop chapter for details). A sample set of trained weights is shown in Cohen et al, Figure 3.
    • Environment for training and testing: The network is trained on data representing the conditions of naming colours and reading words (see Cohen et al, Table 1). It is then tested on data representing the Stroop task and control conditions (see Cohen et al, Table 2).
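The components above can be sketched in code. The following minimal Python sketch is illustrative only - the unit counts, weight layout, and function names are assumptions for exposition, not the simulator's internals. It shows how activation flows from clamped input units through sigmoid hidden and output units:

```python
import math

def sigmoid(x):
    # Logistic activation used by the hidden and output units.
    return 1.0 / (1.0 + math.exp(-x))

def forward(inputs, w_ih, w_ho, bias_h, bias_o):
    # Input units are clamped to their values for the whole update.
    # w_ih[j] holds the weights into hidden unit j from every input unit;
    # w_ho[k] holds the weights into output unit k from every hidden unit.
    hidden = [sigmoid(sum(i * w for i, w in zip(inputs, col)) + b)
              for col, b in zip(w_ih, bias_h)]
    output = [sigmoid(sum(h * w for h, w in zip(hidden, col)) + b)
              for col, b in zip(w_ho, bias_o)]
    return output
```

With all weights and biases at zero, every hidden and output unit settles at sigmoid(0) = 0.5, which is why an untrained network gives uninformative outputs near 0.5.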

Exercise 1. Examining the network architecture.

Load the Stroop network into the workspace from the Networks menu. None of the frozen (indicated with pink arrows) weights will change. Note that some weights are positive (red) and some negative (blue).

Question 1.1: Which weights and biases are frozen and which ones are free to adapt during learning?

Question 1.2: Why are the input units only connected to some of the hidden units?

Exercise 2: Deciding on an input representation

The first task is to transform the training and testing stimuli into patterns of inputs and outputs that the network will understand. For each stimulus in Cohen et al, Tables 1 and 2, fill in the appropriate pattern in the training and test sets in Table 3.


Exercise 3: Inspecting the training set.

The training set is designed to reflect people's experience with word and colour naming.

Question 3.1: What do the four types of training patterns correspond to in day to day experience?

Question 3.2: Why does the training set include more "word" conditions than "color" conditions?


Exercise 4: Inspecting the test set.

The test set contains examples of patterns that could be used to test people's performance on the Stroop task.

Randomize the weights in the network by selecting the "randomize weights and biases" option in the Actions menu. Run the network on each pattern in the test set, and record the red and green output unit values in Table 4. To run the network on a pattern, select the pattern from the test set and click on the Cycle button. Note that a BackProp network only updates its activation values when it knows which units are the inputs and outputs; it gets this information from the input and output sets, which must be selected when you click Cycle. It is only necessary to cycle once for each pattern. You can select the next pattern by clicking the "Forw Pat" button just above the Cycle button.

Question 4.1: What conditions in the Stroop experiment do each of the test patterns correspond to?

Question 4.2: Which patterns represent the control, congruent, and conflict conditions in the Stroop experiment?

Question 4.3: Which patterns establish baseline performance?


Exercise 5. Training the network on the color and word naming tasks.

Create a graph to record the output error, which will allow you to watch the performance of the network as it trains.

To create a graph of the output error, use the VALUE tool to click on the Output Set used during training. This will create a variable, which you can label "Error". Next, use the graph tool to click on the Error variable. This will create a graph of the output set error. [Note that a graph of an output unit will give that unit's activation value, not its error value!]

Use the Learn button to train the network for 20 epochs, or until the error just begins to drop. Retest the performance on the test set, and again record the values in Table 4.
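The Learn button performs the weight updates for you. As a rough sketch of what a single backprop step on the modifiable hidden-to-output weights might look like (the learning rate, squared-error cost, and weight layout here are illustrative assumptions, not the simulator's exact update):

```python
def backprop_step(hidden, output, target, w_ho, bias_o, lr=0.1):
    # w_ho[j][i] is the weight from hidden unit i to output unit j.
    # Gradient descent on squared error; the sigmoid derivative is o*(1-o).
    for j, (o, t) in enumerate(zip(output, target)):
        delta = (t - o) * o * (1.0 - o)
        for i, h in enumerate(hidden):
            w_ho[j][i] += lr * delta * h
        bias_o[j] += lr * delta
```

Each epoch applies a step like this for every training pattern, which is why the plotted error falls gradually rather than all at once.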


Exercise 6. Interpreting the output.

In the original network, Cohen et al. used an iterative procedure to allow time for the output units to fully activate and they showed that the time for the network to converge was comparable to the response times of human subjects.

Question 6.1: Why don't the output unit activations change if you repeatedly click on Cycle?

We require a way of modeling the time to respond for our simplified Stroop network. For the purpose of this exercise, we will assume that the output unit with the highest activation value would be the response given, and the time to respond will be proportional to one minus the activation at that unit.
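Under this assumption, the response and the simulated reaction time can be read directly off the output activations. The sketch below is illustrative - the labels and the unit proportionality constant are choices for this exercise, not part of the original model:

```python
def respond(activations, labels=("red", "green")):
    # Winner-take-all response: the most active output unit names the
    # colour; simulated RT is proportional to (1 - winner's activation).
    winner = max(range(len(activations)), key=lambda i: activations[i])
    rt = 1.0 - activations[winner]
    return labels[winner], rt

# e.g. respond([0.9, 0.3]) gives the response "red" with RT about 0.1.
```

A strongly activated winner therefore means a fast simulated response, and a weakly activated winner (as in conflict conditions) means a slow one.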

Question 6.2: Calculate and draw your own Simulation Data graph similar to Cohen et al's (1990) Figure 5, where the control conditions are "Colour-red" and "Word-RED", the conflict conditions are "Colour-red-GREEN" and "Word-green-RED", and the congruent conditions are "Colour-red-RED" and "Word-red-RED".

Question 6.3: In what ways is your network's response similar to that of the original network? How does it differ?

References

Cohen, J.D., Dunbar, K., and McClelland, J.L. (1990). On the control of automatic processes: A parallel distributed processing account of the Stroop effect. Psychological Review, 97(3), 332-361.

Glass, A. L. and Holyoak, K. J. (1986). Cognition, second edition. New York: Random House.


Figure 1. Stroop effect with colours and words. Say the colour in which each word is written as quickly as you can. Do not read the words themselves.

Figure 2. Stroop effect with numbers and words. Say aloud the number of characters in each row as fast as you can [Adapted from Glass & Holyoak, 1986, Figure 2.17].



Answers to Exercises

[Image: "Word Stroop" - WordStroop.gif]

[Image: "Number Stroop" - NumStroop.gif]