Cover of the 1972 Edition of Perceptrons
This book had a significant impact on the development of AI. Minsky and Papert 'proved' that single-layer perceptrons could not distinguish figures on the basis of connectivity, and hence could not topologically distinguish the upper and lower figures.


Image-processed cover of the 1972 Edition of Perceptrons.
In the top figure the curves outline a single connected region [black pixels], while in the lower figure there are two unconnected regions [red and blue pixels].

THE PERCEPTRON
The perceptron was a simple model of human neurone behaviour introduced by Frank Rosenblatt of Cornell in the late 1950s. Primarily Rosenblatt used a device whose output was either 1 or 0 (a threshold), depending on whether a linear sum of the form
w1a1 + w2a2 + ... + wnan

exceeds a threshold or not. Here a1 .. an are the n inputs to the perceptron, and w1 .. wn are the corresponding n weights. Rosenblatt investigated schemes whereby the magnitudes of the weights would be altered under (supervised) training. Rosenblatt did not develop a formula describing the training of anything other than single-layer neural networks (in modern terminology); the famous backpropagation formula had yet to be developed. Multiple-input, single-output perceptrons are commonly used as processing elements (PEs) in artificial neural networks.
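The threshold unit and the supervised weight-adjustment scheme described above can be sketched in a few lines. This is a minimal illustration, not Rosenblatt's own formulation: the function names, learning rate, and the treatment of the threshold as a bias weight on a constant input are assumptions of the sketch.

```python
# Sketch of a Rosenblatt-style threshold unit and the classic
# perceptron learning rule (names and training data are illustrative).

def perceptron_output(weights, inputs, threshold):
    """Return 1 if w1*a1 + ... + wn*an exceeds the threshold, else 0."""
    s = sum(w * a for w, a in zip(weights, inputs))
    return 1 if s > threshold else 0

def train_perceptron(samples, n_inputs, rate=0.1, epochs=50):
    """Supervised training: nudge each weight toward the desired output.

    `samples` is a list of (inputs, target) pairs with targets 0 or 1.
    The threshold is learned as a bias weight on a constant input of -1.
    """
    weights = [0.0] * (n_inputs + 1)          # last entry is the bias weight
    for _ in range(epochs):
        for inputs, target in samples:
            extended = list(inputs) + [-1.0]  # constant bias input
            s = sum(w * a for w, a in zip(weights, extended))
            out = 1 if s > 0 else 0
            error = target - out
            # Perceptron rule: w_i <- w_i + rate * error * a_i
            weights = [w + rate * error * a
                       for w, a in zip(weights, extended)]
    return weights

# Learn logical AND, which is linearly separable, so the rule converges.
AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w = train_perceptron(AND, n_inputs=2)
```

For linearly separable data such as AND, the perceptron convergence theorem guarantees this procedure eventually classifies every training sample correctly.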
PERCEPTRON FIGURES
In the book "Perceptrons" Marvin Minsky and Seymour Papert demonstrated that the processing element introduced by Rosenblatt, called (by him) the perceptron, had certain specific inadequacies. In modern terms, Minsky and Papert showed that a single-layer neural network could not compute certain predicates, such as the connectedness of a figure.
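Connectedness takes real work to analyse, but the same single-layer limitation appears in miniature with XOR, the standard small example of a predicate no single linear-threshold unit can compute. The sketch below is illustrative and not from the book: it brute-forces a grid of weights and thresholds, finding units that compute AND but none that compute XOR (the grid is an assumption; the impossibility for XOR in fact holds for all real weights, since XOR is not linearly separable).

```python
# A single linear-threshold unit outputs 1 iff w1*a1 + w2*a2 > t.
# Searching a grid of (w1, w2, t) finds units computing AND but
# none computing XOR, illustrating the single-layer limitation.
import itertools

def separable(table, grid):
    """Return True if some (w1, w2, t) in the grid computes the table."""
    for w1, w2, t in itertools.product(grid, repeat=3):
        if all((1 if w1 * a1 + w2 * a2 > t else 0) == out
               for (a1, a2), out in table):
            return True
    return False

grid = [x / 4 for x in range(-8, 9)]   # -2.0 .. 2.0 in steps of 0.25
AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

print(separable(AND, grid))   # True, e.g. w1 = w2 = 1, t = 1.5
print(separable(XOR, grid))   # False: no linear threshold computes XOR
```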
THE PERCEPTRON CONTROVERSY
There is no doubt that Minsky and Papert's book blocked the funding of research in neural networks for more than ten years. The book was widely interpreted as showing that neural networks are basically limited and fatally flawed.
What IS controversial is whether Minsky and Papert shared and/or promoted this belief.
Following the rebirth of interest in artificial neural networks, Minsky and Papert claimed - notably in the later "Expanded" edition of Perceptrons - that they had not intended such a broad interpretation of the conclusions they reached regarding perceptron networks.
However, the writer was actually present at MIT in 1974, and can reliably report on the chatter then circulating at the MIT AI Lab.
But what were Minsky and Papert actually saying to their colleagues in the period immediately after the publication of Perceptrons? There IS a written record from this period: Artificial Intelligence Memo 252, a report by Marvin Minsky and Seymour Papert which is identical to pp 129-224 of the 1971 Project MAC Progress Report VIII.
A recent check found this report online at http://bitsavers.trailing-edge.com/pdf/mit/ai/aim/AIM-252.pdf

Starting at page 31 of the Progress Report there is a very brief overview of the book Perceptrons, in which Minsky and Papert define the perceptron algorithm and, on page 32, describe a neural network with a hidden layer. In sum, Minsky and Papert, with admirable intellectual honesty, confessed in writing that they were not able to prove that feed-forward neural nets, even with hidden layers, were useless, but they expressed strong confidence that such nets were quite inadequate as computational learning devices.

As a technical side point, note that Minsky and Papert restrict their discussion to a "linear threshold" rather than the sigmoid threshold functions prevalent in contemporary neural networks.
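The distinction between the two threshold functions can be seen side by side. A minimal sketch (the sample points are arbitrary): the linear-threshold (step) output jumps from 0 to 1 at the threshold, while the sigmoid varies smoothly between 0 and 1, which is what makes it differentiable and hence usable with backpropagation.

```python
# Contrast the linear-threshold (step) activation discussed by
# Minsky and Papert with the sigmoid used in later networks.
import math

def step(s, threshold=0.0):
    """Linear threshold: hard jump from 0 to 1 at the threshold."""
    return 1 if s > threshold else 0

def sigmoid(s):
    """Smooth, differentiable squashing of the weighted sum into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-s))

for s in (-2.0, -0.5, 0.0, 0.5, 2.0):
    print(f"sum={s:+.1f}  step={step(s)}  sigmoid={sigmoid(s):.3f}")
```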



Cover of the Expanded Edition of Perceptrons
Published in 1987
The cover differs topologically from that of earlier editions, as the lower figure now depicts a single connected region.


Cover of the Expanded Edition of Perceptrons
After segmentation, showing that in the lower figure the curves outline a single connected region, while in the upper figure there are two unconnected regions.
About the author of this analysis: Dr Harvey A Cohen
Married to Dr Elizabeth Essex-Cohen, pioneering space physicist.

In 1974 Harvey was in the MIT AI Lab while Elizabeth was working on the future GPS at the US Air Force Geophysics Lab outside Boston.
Link to Harvey Cohen's Personal Website
Link to Glimpses of OZNAKI
Link to About Dragons