Usman Ali


Master of Science in Computer Science


Department of Computer Science

Faculty / School

Faculty of Computer Sciences (FCS)

Date of Submission



Dr. Muhammad Sarim, Visiting Faculty, Department of Computer Science

Document type

MSCS Survey Report


Artificial Neural Networks (ANNs) are biologically inspired computational tools composed of large numbers of interconnected processing units called neurons. They exhibit the remarkable information-processing characteristics of biological systems, such as non-linearity, high parallelism, robustness, fault tolerance, the ability to learn from imprecise and fuzzy information, and the ability to generalize. The main objective of ANN research is to develop mathematical algorithms that enable ANNs to learn by mimicking information processing and knowledge acquisition in the human brain. ANNs are used for learning and predicting behavior and patterns in data across different data-processing environments, and they are widely used in machine learning. They are also called 'neural nets', 'parallel distributed processing systems', and 'connectionist systems'.

The year 1890 is regarded as the beginning of the neurocomputing age, when the first work on brain activity was published by William James. However, real neurocomputing started after McCulloch and Pitts' paper on the ability of simple neural networks to compute arithmetic and logical functions in 1943. The first neurocomputer was built and tested by Minsky in 1951 at Princeton University, although it suffered from many limitations. John von Neumann's book "The Computer and the Brain" was published in 1958. In the same year, Frank Rosenblatt introduced the first successful neurocomputer, the Mark I Perceptron, designed for character recognition; it is considered the oldest ANN hardware today. It was able to solve many linear problems but could not handle non-linearly separable classification problems. In 1962, Rosenblatt also published the book "Principles of Neurodynamics". These developments led to what is known as the 1960s ANN hype. The hype did not last long, however, owing to a campaign led by Minsky and Papert highlighting the inability of Rosenblatt's perceptron to handle non-linearly separable data. Their campaign achieved its planned goal, and by the late 1970s many ANN researchers had switched their attention back to symbolic AI. Nonetheless, a few persistent researchers continued their work.
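The perceptron's limitation can be illustrated concretely. The sketch below (an illustrative modern reconstruction, not Rosenblatt's original Mark I implementation) trains a single-layer perceptron with the classic perceptron learning rule: it converges on the linearly separable AND function but can never fit the non-linearly separable XOR function, since no single line separates XOR's classes.

```python
# Illustrative single-layer perceptron (assumed minimal sketch, not the
# historical Mark I hardware). A step-activation neuron is trained with the
# perceptron rule: w <- w + lr * (target - output) * input.

def train_perceptron(samples, epochs=20, lr=0.1):
    """Train weights [w1, w2, bias] on (inputs, target) pairs."""
    w = [0.0, 0.0, 0.0]
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + w[2] > 0 else 0
            err = target - out
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            w[2] += lr * err          # bias treated as weight on constant 1
    return w

def predict(w, x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + w[2] > 0 else 0

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]  # linearly separable
XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]  # not separable

w_and = train_perceptron(AND)
w_xor = train_perceptron(XOR)

and_ok = all(predict(w_and, x1, x2) == t for (x1, x2), t in AND)
xor_ok = all(predict(w_xor, x1, x2) == t for (x1, x2), t in XOR)
print("AND learned:", and_ok)   # the perceptron converges on AND
print("XOR learned:", xor_ok)   # always fails: XOR is not linearly separable
```

This failure on XOR is exactly the class of problem Minsky and Papert emphasized, and it is solved only by multi-layer networks trained with algorithms such as backpropagation, discussed below.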

Through the work of these quiet researchers, the field of neurocomputing began to revive. One of the most important contributions assisting this revival was the Hopfield network, introduced by Hopfield in 1984 for the retrieval of complete images from fragments. The year 1986 is regarded as a cornerstone in the recent history of ANNs: Rumelhart et al. rediscovered the backpropagation learning algorithm, after its initial development by Werbos in 1974. The first physical sign of the revival of ANNs was the creation of the annual IEEE International Conference on Neural Networks in 1987, followed by the formation of the International Neural Network Society (INNS) and the publication of the INNS journal Neural Networks in 1988.

The field of neurocomputing has witnessed many ups and downs, notable among which is the hibernation period caused by the perceptron's inability to handle non-linear classification. At present, the field is blossoming almost daily on both the theoretical and practical application fronts.
