Connectionist Models of Cognition
Connectionist Models of Cognition is a virtual textbook designed to introduce neural networks to undergraduate and postgraduate students, either within the context of a course or through a programme of self-study. Chapters two to eight cover a set of neural architectures that illustrate the key concepts necessary for understanding the area. The remaining chapters cover models that have been instrumental in the development of the field.
The BrainWave neural network simulator is embedded throughout the chapters as living figures, allowing students to complete exercises as they work through the text. BrainWave is a fully featured and easily extensible connectionist simulator written in the Java programming language, meaning that it can be run directly from web browsers such as Internet Explorer 4.0 (on Windows 95, Mac OS and Solaris).
To begin using Connectionist Models of Cognition from your browser, simply click on the chapter below that interests you. The first two chapters are provided free of charge. The other chapters require a password, which you obtain by registering. Your password will be issued immediately via email. Registration also allows you to download the BrainWave simulator for use on your own modelling projects.

Neural networks provide both useful information processing and acquisition mechanisms and interesting models of mind and brain. In this introductory chapter, we discuss the main ideas behind the connectionist approach and provide a tutorial on the BrainWave simulator.
The Interactive Activation and Competition network (IAC; McClelland 1981; McClelland & Rumelhart 1981; Rumelhart & McClelland 1982) embodies many of the properties that make neural networks useful information processing models. In this chapter, we use the IAC network to demonstrate several of these properties, including content addressability, robustness in the face of noise, generalisation across exemplars, and the ability to provide plausible default values for unknown variables. The chapter begins with an example of an IAC network to allow you to see a full network in action. Then we delve into the IAC mechanism in detail, creating a number of small networks to demonstrate the network dynamics. Finally, we return to the original example and show how it embodies the information processing capabilities outlined above.
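The flavour of the IAC dynamics can be sketched in a few lines of NumPy. The three-unit network and the parameter values below (max, min, rest, decay, input scale) are illustrative choices, not taken from the chapter; the update rule is the standard IAC form, in which only positively active units send output and activation is driven toward its ceiling or floor depending on the sign of the net input.

```python
import numpy as np

# Illustrative IAC parameters (assumed values, not the chapter's).
MAX, MIN, REST, DECAY = 1.0, -0.2, -0.1, 0.1

def iac_step(a, W, ext, alpha=0.1):
    """One synchronous IAC update over all units (a simplification:
    the full model organises units into competing pools)."""
    # Only units with positive activation transmit output.
    net = alpha * (W @ np.clip(a, 0, None)) + ext
    # Positive net input drives activation toward MAX,
    # negative net input toward MIN; decay pulls back to REST.
    da = np.where(net > 0, net * (MAX - a), net * (a - MIN)) - DECAY * (a - REST)
    return np.clip(a + da, MIN, MAX)

# Toy network: units 0 and 1 excite each other; both inhibit unit 2.
W = np.array([[ 0.0,  1.0, -1.0],
              [ 1.0,  0.0, -1.0],
              [-1.0, -1.0,  0.0]])

a = np.full(3, REST)              # all units start at rest
ext = np.array([0.4, 0.0, 0.0])   # external input to unit 0 only
for _ in range(100):
    a = iac_step(a, W, ext)

print(np.round(a, 2))
```

Running this shows the content-addressability idea in miniature: external input to unit 0 alone pulls its excitatory partner (unit 1) above rest, while the competing unit 2 is suppressed below rest.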
The Hopfield network (Hopfield 1982; Hopfield 1984) demonstrates how the mathematical simplification of a neuron can allow the analysis of the behaviour of large scale neural networks. By characterizing mathematically the effect of changes to the activation of individual units on a property of the entire neural architecture called energy, Hopfield (1982) provided the important link between local interactions and global behaviour. In this chapter, we explore the idea of energy and demonstrate how Hopfield architectures "descend on an energy surface". We start by providing an overview of the purpose of the Hopfield network. Then we outline the architecture, including the threshold activation function, asynchronous updating and Hebbian learning. Finally, we explain how a Hopfield network is able to store patterns of activity so that they can be reconstructed from partial or noisy cues.
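The three ingredients named above (Hebbian storage, threshold units, asynchronous updates) and the reconstruction of a pattern from a noisy cue can be sketched as follows. The two six-unit patterns are illustrative, and the energy function is the standard quadratic form for a binary Hopfield network with a zero-diagonal weight matrix.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two illustrative +-1 patterns to store.
patterns = np.array([[ 1,  1,  1, -1, -1, -1],
                     [ 1, -1,  1, -1,  1, -1]])
n = patterns.shape[1]

# Hebbian learning: sum of outer products, with no self-connections.
W = sum(np.outer(p, p) for p in patterns).astype(float)
np.fill_diagonal(W, 0)

def energy(s):
    # Global quantity that never increases under asynchronous updates.
    return -0.5 * s @ W @ s

def recall(s, steps=50):
    s = s.copy()
    for _ in range(steps):
        i = rng.integers(n)                 # asynchronous: one unit at a time
        s[i] = 1 if W[i] @ s >= 0 else -1   # threshold activation function
    return s

# Noisy cue: the first stored pattern with one bit flipped.
cue = patterns[0].copy()
cue[0] = -cue[0]
out = recall(cue)
print(out)
```

Each single-unit update can only lower (or leave unchanged) the energy, so the network "descends on an energy surface" until it settles into a stored pattern; here the flipped bit is repaired and the first pattern is recovered from the partial cue.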