
NEURAL NETWORKS

The boundary between different scientific disciplines is not always clear cut. For example, one might claim that the Lorenz model belongs to atmospheric science, or that percolation, since it can be used to discuss the properties of porous materials, is really a geoscience problem. Arguments over such matters are generally not very useful, but they do illustrate that the lines between disciplines are not sharply defined. Indeed, the ideas developed in one area can often lead to important breakthroughs when applied to other fields. In the book we consider in some detail three cases in which it appears that such cross-fertilization has proved, or may prove, extremely fruitful. These topics are currently active areas of research, so even the tentative conclusions we draw may change considerably in the next few years. Nevertheless, the computational methods we employ, which are extensions of approaches encountered earlier in the book, have already led to useful new insights.

The first problem in this chapter is from biology and concerns protein structure. Proteins are long flexible molecules that can fold into many different shapes. The problem is to understand the folding process so that one can predict the folded structure of a newly encountered protein, and thereby learn how to design molecules with a desired shape. The second example is from geoscience and concerns earthquakes and a possible connection with phase transitions. The third problem deals with the brain and some proposals concerning the way (human) memory might work. These proposals are based on what are known as neural networks, and we will now give a brief glimpse at the science they entail.

The Ising model of magnetism (which is the topic of chapter 8) consists of a large number of very simple units, i.e., spins, which are connected together in a very simple manner. By ``connected'' we mean that the orientation of any given spin, s_i, is influenced by the direction of other, neighboring, spins. We found that the behavior of an isolated spin was unremarkable. Things only became really interesting when we considered the behavior of a large number of spins and allowed them to interact. In that case we found that under the appropriate conditions some remarkable things could occur, including the singular behavior associated with a phase transition. Neural network models of the brain share some of these features. The human brain consists of an extremely large number (roughly 10^11) of basic units called neurons, each of which is connected to many other neurons in a relatively simple manner. A biologically complete discussion of neurons and how they function is a long story, which we don't want to go into here. The bottom line is that it seems reasonable to model a neuron as an Ising spin, which has a value of either ``up'' or ``down.'' A neuron can then store information, and by allowing neurons to interact with each other, much as Ising spins in a ferromagnet interact with each other, the network as a whole can be made to function as a memory.
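
To make this concrete, here is a minimal sketch (not the implementation given in the text) of how the connections between spin-like neurons can be set up in a simulation. It assumes NumPy is available, spins take the Ising values +1 or -1, and the connection strengths follow a Hebbian (outer-product) prescription; the function name hebb_weights is chosen here purely for illustration.

import numpy as np

def hebb_weights(patterns):
    # patterns: array of shape (p, N), each row one stored pattern of +/-1 spins
    patterns = np.asarray(patterns, dtype=float)
    n_spins = patterns.shape[1]
    # Hebbian (outer-product) rule: w_ij = (1/N) * sum over patterns of s_i * s_j
    w = patterns.T @ patterns / n_spins
    np.fill_diagonal(w, 0.0)      # a spin does not interact with itself
    return w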

This memory is a bit different from the usual type of computer memory. A neural network acts as a content-addressable memory: it is made to operate by presenting it with a pattern that resembles the pattern one is trying to remember, and the system then ``recalls'' the appropriate information. This is similar to human memory; for example, given part of a phone number or address, a person is often able to recall the rest of the information. The details of how such a network can be made to function as a memory, along with how to implement it in a simulation, are all covered in the text. Examples like the one shown in the figure are then given. Here an image which resembles the letter ``A'' is presented to the network, which then recalls a complete image of the letter.
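
As an illustration of content-addressable recall, the following sketch (again an assumption, not the text's code) presents a corrupted pattern to a network built with the hebb_weights function above and performs zero-temperature Monte Carlo sweeps, aligning each spin with its local field one spin at a time.

import numpy as np

def recall(w, spins, sweeps=1):
    # Asynchronous update: s_i -> sign( sum_j w_ij s_j ), one pass through the network per sweep
    spins = np.array(spins, dtype=float)
    for _ in range(sweeps):
        for i in np.random.permutation(spins.size):
            field = w[i] @ spins          # local field acting on spin i
            if field != 0.0:
                spins[i] = np.sign(field)
    return spins

# Example: store one random pattern, flip a few spins, and let the network recall it.
stored = np.where(np.random.rand(100) < 0.5, 1.0, -1.0)
w = hebb_weights([stored])
noisy = stored.copy()
noisy[:5] *= -1.0                          # a few spins flipped to the wrong state
print(np.array_equal(recall(w, noisy), stored))   # typically True for small corruption

With only a small fraction of spins flipped, a single sweep is usually enough to restore the stored pattern, which is the behavior illustrated by the letter ``A'' example in the figure.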

Figure 1: Left: spin arrangement close to the letter ``A'', but with a few spins flipped to the wrong state. Right: spin configuration after one Monte Carlo pass through the network.

Many other issues can be considered, including how information is stored in the network, the effect of ``damage'' on its performance, the concept of metastability, and the capacity of the network. Other related models, such as perceptrons, are also discussed.



Nick Giordano
Mon Sep 8 10:30:42 EST 1997