The Perceptron: Perceptual Mathematics and Neural Net History

THE PERCEPTRON
The perceptron was a simple model of human neurone behaviour introduced by Rosenblatt of Cornell in the late 1950s. Rosenblatt primarily used a device whose output was either 1 or 0 (a threshold), depending on whether a linear sum of the form
w[1]i[1] + w[2]i[2] + ... + w[n]i[n]
exceeds a threshold or not. Here i[1] .. i[n] are the n inputs to the perceptron, and w[1] .. w[n] are the corresponding n weights. Rosenblatt investigated schemes whereby the magnitudes of the weights would be altered under (supervised) training. He did not, however, develop a training procedure for anything other than single-layer neural networks (in modern terminology); the famous backpropagation formula had yet to be developed. Multiple-input, single-output perceptrons are commonly used as processing elements (PEs) in Artificial Neural Networks.
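As a concrete illustration, the following is a minimal sketch in Python of such a threshold unit together with the classic perceptron weight-update rule. The function names, learning rate, and epoch count are illustrative choices, not Rosenblatt's original formulation.

    # A perceptron: output 1 if the weighted sum of the inputs
    # exceeds the threshold (folded here into a bias term), else 0.
    def predict(weights, bias, inputs):
        s = bias + sum(w * x for w, x in zip(weights, inputs))
        return 1 if s > 0 else 0

    # Classic perceptron learning rule: after each sample, nudge
    # every weight by rate * (target - output) * input.
    def train(samples, n_inputs, rate=0.1, epochs=100):
        weights = [0.0] * n_inputs
        bias = 0.0
        for _ in range(epochs):
            for inputs, target in samples:
                error = target - predict(weights, bias, inputs)
                bias += rate * error
                weights = [w + rate * error * x
                           for w, x in zip(weights, inputs)]
        return weights, bias

    # The linearly separable AND function is learned without trouble:
    and_samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
    w, b = train(and_samples, n_inputs=2)
    print([predict(w, b, x) for x, _ in and_samples])   # [0, 0, 0, 1]

For linearly separable data such as AND, the perceptron convergence theorem guarantees this procedure finds a correct set of weights.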
PERCEPTRON FIGURES
In the book "Perceptrons", Marvin Minsky and Seymour Papert demonstrated that a processing element introduced by Rosenblatt, called (by him) the perceptron, had certain specific inadequacies. In modern terms, Minsky and Papert showed that a single-layer neural network could not compute certain simple predicates, such as the parity (XOR) of its inputs or whether a figure is connected.
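The parity case is easy to verify directly. The four XOR constraints are mutually inconsistent for any choice of weights and threshold: the (0,0) case forces the threshold to be non-negative, the (0,1) and (1,0) cases force each weight above the threshold, yet the (1,1) case demands that the sum of the weights be at most the threshold. The self-contained Python sketch below (an illustration, not taken from the book) confirms this by brute force over a grid of candidate weights:

    # Try every (w1, w2, theta) on a coarse grid and test whether the
    # unit "w1*x1 + w2*x2 > theta" reproduces the target function.
    def separable(samples, grid):
        for w1 in grid:
            for w2 in grid:
                for theta in grid:
                    if all((w1 * x1 + w2 * x2 > theta) == bool(t)
                           for (x1, x2), t in samples):
                        return True
        return False

    grid = [i / 10 for i in range(-20, 21)]          # -2.0 .. 2.0
    xor = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
    print(separable(xor, grid))   # False: no single unit computes XOR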
THE PERCEPTRON CONTROVERSY
There is no doubt that Minsky and Papert's book blocked the funding of research in neural networks for more than ten years. The book was widely interpreted as showing that neural networks were basically limited and fatally flawed.
What IS controversial is whether Minsky and Papert shared and/or promoted this belief.
Following the rebirth of interest in artificial neural networks, Minsky and Papert claimed that they had not intended such a broad interpretation of the conclusions they reached in the book Perceptrons.
However, the writer was actually present at MIT in 1974 and, on the basis of the chatter then circulating at the MIT AI Lab, reached a different conclusion.
What were Minsky and Papert actually saying to their colleagues in the period after the publication of their book? There IS a written record: Artificial Intelligence Memo 252 (January 1, 1972) by Marvin Minsky and Seymour Papert, which is identical to pp. 129-224 of the 1971 Project MAC Progress Report VIII.
A recent check found this report online at bitsavers.trailing-edge.com/pdf/mit/ai/aim/AIM-252.pdf

Starting at page 31, there is a very brief overview of the book Perceptrons, in which Minsky and Papert define the PERCEPTRON ALGORITHM and, on page 32, describe a neural network with a hidden layer, before stating their conclusion. In sum, Minsky and Papert, with intellectual honesty, conceded that they had not been able to prove that feed-forward neural nets, even with hidden layers, were useless, but they expressed strong confidence that such nets were quite inadequate computational learning devices.

As a technical side point, note that Minsky and Papert restrict their discussion to a "linear threshold" rather than the sigmoid threshold functions prevalent in contemporary neural networks.
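The practical difference is that the sigmoid 1/(1 + e^(-s)) is a smooth, differentiable squashing of the same weighted sum s, which is what makes gradient-based training such as backpropagation possible, whereas the hard threshold has no useful derivative. A quick Python comparison (illustrative):

    import math

    def linear_threshold(s, theta=0.0):
        # Hard threshold, as in Minsky and Papert's analysis.
        return 1 if s > theta else 0

    def sigmoid(s):
        # Smooth threshold used in later neural networks.
        return 1.0 / (1.0 + math.exp(-s))

    for s in (-4.0, -1.0, 0.0, 1.0, 4.0):
        print(s, linear_threshold(s), round(sigmoid(s), 3))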




[Figure: two square spirals (images omitted). The first spiral segments into only one connected region; the second segments into two connected regions, demonstrating that the two square spirals are topologically different.]

These two square images were inspired by two images of similar digital topology in the book Perceptrons by Minsky and Papert.
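Counting connected regions of this kind is straightforward for an ordinary sequential algorithm with access to the whole image, which is precisely the kind of global computation a single-layer perceptron cannot perform. A minimal Python sketch (4-connectivity flood fill; the names are illustrative) that counts the regions of 1-pixels a binary image segments into:

    from collections import deque

    def count_regions(image):
        # Count 4-connected regions of 1-pixels via flood fill.
        rows, cols = len(image), len(image[0])
        seen = [[False] * cols for _ in range(rows)]
        regions = 0
        for r in range(rows):
            for c in range(cols):
                if image[r][c] == 1 and not seen[r][c]:
                    regions += 1
                    queue = deque([(r, c)])
                    seen[r][c] = True
                    while queue:
                        y, x = queue.popleft()
                        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                            ny, nx = y + dy, x + dx
                            if (0 <= ny < rows and 0 <= nx < cols
                                    and image[ny][nx] == 1
                                    and not seen[ny][nx]):
                                seen[ny][nx] = True
                                queue.append((ny, nx))
        return regions

    # A ring of pixels forms one region; two separated bars form two:
    ring = [[1, 1, 1],
            [1, 0, 1],
            [1, 1, 1]]
    bars = [[1, 1, 1],
            [0, 0, 0],
            [1, 1, 1]]
    print(count_regions(ring), count_regions(bars))   # 1 2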
