History of the Neural Networks — Part 2

Kalpa Kalhara Sampath
5 min read · Dec 25, 2019
Fathers of the Deep Learning Revolution

The second part in the series on the history of neural networks

Hopfield Network (Recurrent)

The Hopfield neural network was introduced by John J. Hopfield in 1982. It consists of a single layer of fully connected recurrent neurons. The Hopfield network is commonly used for auto-association and optimization tasks.

Paper: “Neural Networks and Physical Systems with Emergent Collective Computational Abilities,” May 1982
Hopfield Network (Image source: Wikimedia)

Here, a neuron is either on or off, a vast simplification of the real situation. The state of a neuron (on: +1 or off: −1) is updated depending on the input it receives from other neurons. A Hopfield network is first trained to store several patterns or memories. It can then recognize any of the learned patterns from only partial or even corrupted information about that pattern: it eventually settles down and returns the closest pattern, its best guess.
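
To make the storage-and-recall behaviour concrete, here is a minimal NumPy sketch. The six-element patterns and the helper names `train` and `recall` are illustrative, not from Hopfield's paper: weights are built with a Hebbian rule, then neurons are updated one at a time until the state settles.

```python
import numpy as np

def train(patterns):
    """Build the weight matrix from ±1 patterns via the Hebbian rule."""
    n = patterns.shape[1]
    W = patterns.T @ patterns / n
    np.fill_diagonal(W, 0)  # Hopfield networks have no self-connections
    return W

def recall(W, state, steps=100):
    """Update one randomly chosen neuron at a time until the state settles."""
    rng = np.random.default_rng(0)
    state = state.copy()
    for _ in range(steps):
        i = rng.integers(len(state))
        state[i] = 1 if W[i] @ state >= 0 else -1  # on if net input is positive
    return state

patterns = np.array([[1, 1, -1, -1, 1, -1],
                     [-1, 1, 1, -1, -1, 1]])
W = train(patterns)

corrupted = np.array([1, 1, -1, -1, -1, -1])  # first pattern, one bit flipped
print(recall(W, corrupted))  # settles back to [ 1  1 -1 -1  1 -1]
```

Because each asynchronous update can only lower the network's energy, the state cannot cycle forever; it eventually lands in a stored pattern (or, if overloaded, a spurious mixture of them).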

Back-propagation

In a 1986 paper entitled “Learning Representations by Back-propagating Errors,” Rumelhart, Hinton, and Williams described in greater detail the process of backpropagation.

They showed how it could vastly improve the existing neural networks for many tasks such as shape recognition, word prediction, and more.
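
The core of the algorithm is the chain rule: the output error is propagated backwards through the layers to obtain a gradient for every weight. Below is a minimal NumPy sketch for a tiny 2-4-1 sigmoid network learning XOR; the architecture, learning rate, and iteration count are illustrative choices, not details from the 1986 paper.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([[0.], [1.], [1.], [0.]])  # XOR targets

# Weights and biases for a 2-4-1 network
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(20000):
    # Forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: propagate the output error toward the input
    d_out = (out - y) * out * (1 - out)   # error signal at the output layer
    d_h = (d_out @ W2.T) * h * (1 - h)    # error signal at the hidden layer

    # Gradient-descent updates (learning rate 0.5)
    W2 -= 0.5 * h.T @ d_out; b2 -= 0.5 * d_out.sum(0)
    W1 -= 0.5 * X.T @ d_h;   b1 -= 0.5 * d_h.sum(0)

print(out.round(2).ravel())  # typically converges to [0, 1, 1, 0]
```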

Despite some setbacks after that initial success, Hinton kept at his research and eventually reached new levels of success. He is considered by many in the field to be the godfather of deep learning.

Convolutional Neural Network (CNN)

Convolutional Neural Network Architecture — LeNet 5

In 1989, Yann LeCun et al. at AT&T Bell Labs demonstrated a very significant real-world application of backpropagation in “Backpropagation Applied to Handwritten Zip Code Recognition.”

The publication, working with a large dataset from the US Postal Service, showed that neural nets were entirely capable of recognizing handwritten digits. And much more importantly, it was among the first to highlight the practical need for key modifications of neural nets beyond plain backpropagation, a step toward modern deep learning.

A Convolutional Neural Network (CNN) is a deep learning algorithm that can recognize and classify features in images for computer vision. It is a multi-layer neural network designed to analyze visual inputs and perform tasks such as image classification, segmentation, and object detection, which can be useful for autonomous vehicles.

The name “convolutional neural network” indicates that the network employs a mathematical operation called convolution. Convolution is a specialized kind of linear operation. Convolutional networks are simply neural networks that use convolution in place of general matrix multiplication in at least one of their layers.
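
Concretely, a convolutional layer slides a small filter over the image and takes a weighted sum at every position (deep learning frameworks actually compute cross-correlation, i.e. the filter is not flipped, but the name has stuck). A hand-rolled NumPy sketch with a made-up 4×4 image and a simple edge-detecting filter:

```python
import numpy as np

def conv2d(image, kernel):
    """Slide the kernel over the image, taking a dot product at each step."""
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
    return out

image = np.array([[0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [0, 0, 1, 1]], dtype=float)
edge_filter = np.array([[-1., 1.]])  # responds to left-to-right intensity jumps

print(conv2d(image, edge_filter))
# Each row reads [0. 1. 0.]: the filter fires only at the dark/bright boundary.
```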

A simple CNN contains a sequence of layers, and every layer of a CNN transforms one volume of activation to another through a differentiable function.

Three main types of layers are used to build CNN architectures (a minimal layer stack is sketched after the list):

1. Convolutional layers (CONV)

Apply a filter that scans across the whole image, a few pixels at a time, and produce a feature map for each filter. Each feature map is usually passed through a ReLU (rectified linear unit) non-linearity (RELU).
– Output: feature map

2. Pooling (or subsampling) layers (POOL)

Scale down the amount of information the convolutional layer generated for each feature while keeping the most essential information (the convolution-and-pooling sequence usually repeats several times).

3. Fully connected layers (classification) (FC)
– A multi-layer perceptron that produces the final class scores
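
Putting the three layer types together, a minimal PyTorch sketch of the CONV → RELU → POOL → FC pipeline could look as follows; the channel counts and the 28×28 input size are illustrative, not taken from any particular architecture.

```python
import torch
from torch import nn

model = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1),   # CONV: 8 feature maps
    nn.ReLU(),                                   # RELU non-linearity
    nn.MaxPool2d(2),                             # POOL: halve the spatial size
    nn.Conv2d(8, 16, kernel_size=3, padding=1),  # conv/pool usually repeats
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(16 * 7 * 7, 10),                   # FC: scores for 10 classes
)

x = torch.randn(1, 1, 28, 28)  # one 28x28 greyscale image
print(model(x).shape)          # torch.Size([1, 10])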

Convolutional Network Demo from 1993

Popular Convolutional Neural Network Architectures

LeNet-5 (1998)

This 7-layer CNN classified digits from digitized 32×32-pixel greyscale input images. It was used by several banks to recognize the hand-written numbers on checks.
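
A sketch of that 7-layer layout in modern PyTorch might look like the following; it keeps the original's tanh activations and average pooling, but simplifies details such as C3's sparse connectivity and the RBF output layer.

```python
import torch
from torch import nn

lenet5 = nn.Sequential(
    nn.Conv2d(1, 6, kernel_size=5),   # C1: 6 feature maps, 28x28
    nn.Tanh(),
    nn.AvgPool2d(2),                  # S2: subsample to 14x14
    nn.Conv2d(6, 16, kernel_size=5),  # C3: 16 feature maps, 10x10
    nn.Tanh(),
    nn.AvgPool2d(2),                  # S4: subsample to 5x5
    nn.Flatten(),
    nn.Linear(16 * 5 * 5, 120),       # C5
    nn.Tanh(),
    nn.Linear(120, 84),               # F6
    nn.Tanh(),
    nn.Linear(84, 10),                # output: one unit per digit
)

x = torch.randn(1, 1, 32, 32)  # digits were digitized to 32x32 greyscale
print(lenet5(x).shape)         # torch.Size([1, 10])
```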

AlexNet (2012)

AlexNet was designed by the SuperVision group, with an architecture similar to LeNet but deeper: it has more filters per layer as well as stacked convolutional layers. It is composed of five convolutional layers followed by three fully connected layers. One of the most significant differences between AlexNet and earlier networks is its use of ReLU for the non-linear part instead of the sigmoid or tanh functions of traditional neural networks; ReLU's faster training made the network much quicker to train.
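
torchvision ships a reimplementation of AlexNet (a modernized single-GPU variant, not the exact 2012 two-GPU model), so the five-conv-plus-three-FC structure is easy to verify:

```python
from torch import nn
from torchvision import models

model = models.alexnet()  # untrained torchvision reimplementation

convs = [m for m in model.modules() if isinstance(m, nn.Conv2d)]
fcs = [m for m in model.modules() if isinstance(m, nn.Linear)]
print(len(convs), len(fcs))  # 5 convolutional and 3 fully connected layers
```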

GoogleNet (2014)

Built with a CNN inspired by LeNet, the GoogleNet network, which is also named Inception V1, was made by a team at Google. GoogleNet was the winner of ILSVRC 2014 and achieved a top-5 error rate of less than 7%, which is close to the level of human performance. The architecture is a 22-layer deep CNN built around a module of small stacked convolutions, called the “inception module,” whose 1×1 bottleneck convolutions reduce the number of parameters from the 60 million of AlexNet to only about 4 million.
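
The inception module runs several small convolutions in parallel and concatenates their feature maps; the 1×1 convolutions placed before the larger filters are what keep the parameter count down. A minimal PyTorch sketch of the idea (the branch channel counts are illustrative, not GoogleNet's actual values):

```python
import torch
from torch import nn

class InceptionModule(nn.Module):
    def __init__(self, in_ch):
        super().__init__()
        self.branch1 = nn.Conv2d(in_ch, 16, kernel_size=1)
        self.branch3 = nn.Sequential(
            nn.Conv2d(in_ch, 16, kernel_size=1),  # 1x1 bottleneck cuts parameters
            nn.Conv2d(16, 24, kernel_size=3, padding=1),
        )
        self.branch5 = nn.Sequential(
            nn.Conv2d(in_ch, 4, kernel_size=1),
            nn.Conv2d(4, 8, kernel_size=5, padding=2),
        )
        self.branch_pool = nn.Sequential(
            nn.MaxPool2d(3, stride=1, padding=1),
            nn.Conv2d(in_ch, 8, kernel_size=1),
        )

    def forward(self, x):
        branches = [self.branch1(x), self.branch3(x),
                    self.branch5(x), self.branch_pool(x)]
        return torch.cat(branches, dim=1)  # stack feature maps channel-wise

x = torch.randn(1, 32, 28, 28)
print(InceptionModule(32)(x).shape)  # torch.Size([1, 56, 28, 28])
```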

VGGNet (2014)

VGGNet, the runner-up at ILSVRC 2014, consisted of 16 weight layers (13 convolutional and 3 fully connected). It used only small 3×3 convolutions but with many more filters. VGGNet trained on 4 GPUs for more than two weeks to achieve its performance. The problem with VGGNet is that it has 138 million parameters, roughly 34.5 times more than GoogleNet, which makes it challenging to run.
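
The 138-million figure is easy to check against torchvision's VGG-16 reimplementation:

```python
from torchvision import models

vgg16 = models.vgg16()  # untrained torchvision reimplementation
n_params = sum(p.numel() for p in vgg16.parameters())
print(f"{n_params / 1e6:.1f}M parameters")  # ≈ 138.4M
```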

The next article will cover the Restricted Boltzmann Machine (RBM), Q Learning (Reinforcement), Support Vector Machine (SVM), and Long Short-Term Memory (LSTM).

History of the Neural Networks Part 1 - https://medium.com/@kalhara.sampath/history-of-the-neural-networks-part-1-d6e0bdd9009a

