Biological learning systems are built of very complex webs of interconnected neurons.
Artificial neural networks are built out of a densely interconnected set of simple units, where each unit takes a number of real-valued inputs (possibly the outputs of other units) and produces a single real-valued output (which may become the input to many other units).
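As a minimal sketch of such a unit (the sigmoid activation and the particular weights here are illustrative assumptions, not details from the text), each unit computes a weighted sum of its real-valued inputs and maps it to a single real-valued output:

```python
import math

def unit_output(inputs, weights, bias):
    """One artificial unit: a weighted sum of real-valued inputs
    passed through a sigmoid squashing function (illustrative choice)."""
    net = bias + sum(w * x for w, x in zip(weights, inputs))
    return 1.0 / (1.0 + math.exp(-net))

# Two units wired together: the first unit's output becomes
# an input to the second, as described above.
h = unit_output([0.5, -1.2], weights=[0.8, 0.3], bias=0.1)
y = unit_output([h], weights=[1.5], bias=-0.4)
```

Because the sigmoid maps any real net input into (0, 1), each unit's output is itself a well-formed real-valued input for any downstream unit.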
To understand this design better, let us consider a few facts from neurobiology.
The human brain, for example, is estimated to contain a densely interconnected network of approximately 10^11 neurons, each connected, on average, to 10^4 others.
Neuron activity is typically excited or inhibited through connections to other neurons.
The fastest neuron switching times are known to be on the order of 10^-3 seconds--quite slow compared to computer switching speeds of 10^-10 seconds.
Yet humans are able to make surprisingly complex decisions, surprisingly quickly.
The information-processing abilities of biological neural systems must therefore follow from highly parallel processes operating on representations that are distributed over many neurons. One motivation for ANN systems is to capture this kind of highly parallel, distributed computation.
There are many complexities to biological neural systems that are not modeled by ANNs, and many features of the ANNs we discuss here are known to be inconsistent with biological systems.