Connectionist Models of Cognition

Connectionist Models of Cognition is a virtual textbook designed to introduce neural networks to undergraduate and postgraduate students, either within the context of a course or through a program of self-study. Chapters two to eight cover a set of neural architectures that illustrate the key concepts necessary for understanding the area. The remaining chapters cover models that have been instrumental in the development of the field.

The BrainWave neural network simulator is embedded throughout the chapters as "living figures", allowing students to complete exercises as they work through the text. BrainWave is a fully featured and easily extensible connectionist simulator written in the Java programming language, which means it can be run directly from web browsers such as Internet Explorer 4.0 (on Windows 95, MacOS and Solaris).

To begin using Connectionist Models of Cognition from your browser, simply click on the chapter below that interests you. The first two chapters are provided free of charge. The other chapters require a password, which you can obtain by registering. Your password will be issued immediately via email. Registration also allows you to download the BrainWave simulator for use in your own modelling projects.
  1. Preface: How to use the Materials (Available free of charge - just click)
  2. Introduction to Neural Networks and the BrainWave Simulator (Available free of charge - just click)

    Neural networks provide both useful mechanisms for information processing and acquisition and interesting models of mind and brain. In this introductory chapter, we discuss the main ideas behind the connectionist approach and provide a tutorial on the BrainWave simulator.

  3. The Interactive Activation and Competition Network: How Neural Networks Process Information (available now)

    The Interactive Activation and Competition network (IAC; McClelland 1981; McClelland & Rumelhart 1981; Rumelhart & McClelland 1982) embodies many of the properties that make neural networks useful information processing models. In this chapter, we use the IAC network to demonstrate several of these properties, including content addressability, robustness in the face of noise, generalisation across exemplars and the ability to provide plausible default values for unknown variables. The chapter begins with an example of an IAC network so that you can see a full network in action. We then delve into the IAC mechanism in detail, creating a number of small networks to demonstrate the network dynamics. Finally, we return to the original example and show how it embodies the information processing capabilities outlined above. (A small code sketch of the IAC update rule appears after the chapter list below.)

  4. The Hebbian Network: The Distributed Representation of Facts (coming soon)
  5. The Hopfield Network: Descent on an Energy Surface (available now)

    The Hopfield network (Hopfield 1982; Hopfield 1984) demonstrates how the mathematical simplification of a neuron can allow the analysis of the behaviour of large-scale neural networks. By characterizing mathematically the effect of changes to the activation of individual units on a property of the entire neural architecture called energy, Hopfield (1982) provided the important link between local interactions and global behaviour. In this chapter, we explore the idea of energy and demonstrate how Hopfield architectures "descend on an energy surface". We start by providing an overview of the purpose of the Hopfield network. We then outline the architecture, including the threshold activation function, asynchronous updating and Hebbian learning. Finally, we explain how a Hopfield network is able to store patterns of activity so that they can be reconstructed from partial or noisy cues. (A second sketch after the chapter list illustrates Hebbian storage, asynchronous updating and the descent in energy.)

  6. Adaptive Resonance Theory: Competitive Learning (coming soon)
  7. The Self Organizing Map: Topological Organization (coming soon)
  8. The Backpropagation Network: Learning by Example (coming soon)
  9. Context in Letter Perception: An IAC model of the Word Superiority Effect (coming soon)
  10. Control of automatic processes: A connectionist account of the Stroop Effect (coming soon)
  11. Category Learning in the ALCOVE Model (coming soon)
  12. The Matrix Model: Episodic versus Semantic Memory (coming soon)
  13. Deep Dyslexia and the Connectionist Approach to Brain Damage (coming soon)
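
The following is a minimal sketch, in Java (BrainWave's own implementation language), of the interactive activation and competition dynamics referred to in Chapter 3. The three-unit network, the parameter values and the external inputs are illustrative assumptions rather than the networks built in the chapter; the update rule itself is the standard IAC rule, in which net input pushes a unit toward its maximum or minimum activation while decay pulls it back toward rest.

    // A minimal, self-contained IAC sketch (not BrainWave code): three mutually
    // inhibitory units compete, and the unit with the strongest external input wins.
    public class IacSketch {
        public static void main(String[] args) {
            double[] act = new double[3];            // activations, start at rest (0.0)
            double[] ext = {1.0, 0.2, 0.0};          // external input (the cue); illustrative values
            double max = 1.0, min = -0.2, rest = 0.0;
            double decay = 0.1, inhibition = 0.3;    // illustrative parameter settings

            for (int t = 0; t < 50; t++) {
                double[] next = act.clone();
                for (int i = 0; i < act.length; i++) {
                    // Net input: external excitation minus inhibition from active rivals.
                    double net = ext[i];
                    for (int j = 0; j < act.length; j++) {
                        if (j != i) net -= inhibition * Math.max(0.0, act[j]);
                    }
                    // IAC update: positive net input drives activation toward max,
                    // negative net input toward min; decay pulls activation back toward rest.
                    double delta = (net > 0) ? net * (max - act[i]) : net * (act[i] - min);
                    next[i] = act[i] + delta - decay * (act[i] - rest);
                }
                act = next;
            }
            for (double a : act) System.out.printf("final activation: %.3f%n", a);
        }
    }

Running the sketch shows the strongly cued unit approaching its maximum activation while suppressing its rivals, which is the competitive behaviour the chapter exercises explore interactively.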
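
The second sketch illustrates the mechanisms named in Chapter 5: Hebbian storage of a single +1/-1 pattern, asynchronous threshold updates, and the energy E = -1/2 * sum over i,j of w_ij s_i s_j that those updates can only decrease. The stored pattern, the noisy cue and the number of updates are made up for illustration and are not taken from the chapter.

    import java.util.Arrays;
    import java.util.Random;

    // A minimal, self-contained Hopfield sketch (not BrainWave code).
    public class HopfieldSketch {
        // Energy of a state: E = -1/2 * sum over i,j of w[i][j] * s[i] * s[j].
        static double energy(double[][] w, int[] s) {
            double e = 0.0;
            for (int i = 0; i < s.length; i++)
                for (int j = 0; j < s.length; j++)
                    e -= 0.5 * w[i][j] * s[i] * s[j];
            return e;
        }

        public static void main(String[] args) {
            int[] stored = {1, -1, 1, 1, -1, -1, 1, -1};   // pattern to store (illustrative)
            int n = stored.length;

            // Hebbian learning: w[i][j] = s[i] * s[j], with no self-connections.
            double[][] w = new double[n][n];
            for (int i = 0; i < n; i++)
                for (int j = 0; j < n; j++)
                    if (i != j) w[i][j] = stored[i] * stored[j];

            // Partial/noisy cue: flip two bits of the stored pattern.
            int[] state = stored.clone();
            state[0] = -state[0];
            state[3] = -state[3];
            System.out.println("energy of cue:  " + energy(w, state));

            // Asynchronous updating: pick one unit at a time and apply the
            // threshold rule; each update can only lower (or keep) the energy.
            Random rng = new Random(1);
            for (int t = 0; t < 200; t++) {
                int i = rng.nextInt(n);
                double net = 0.0;
                for (int j = 0; j < n; j++) net += w[i][j] * state[j];
                state[i] = (net >= 0) ? 1 : -1;
            }
            System.out.println("energy at end:  " + energy(w, state));
            System.out.println(Arrays.toString(state));    // the stored pattern is recovered
        }
    }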
