10 Buzzwords to Understand Neural Networks — Did you know that each of us walks around with a supercomputer in our heads? Thanks to thousands of years of evolution, our brains are capable of many things that are not yet possible for even the most powerful computers. Things such as recognizing images, or distinguishing one person’s handwriting from another’s and knowing the word, letter or digit written. In fact, we do this so unconsciously and quickly that we often take the ability for granted.
But for machines, this isn’t so easy. Using neural networks, they have to solve this “problem” in a different way, learning from hundreds, thousands or even millions of training examples. Interested in knowing more about neural networks? Here are 10 buzzwords you can throw around to convince other people that you’re an expert:
1. Neural Network
A neural network, according to TechTarget, is a system of hardware or software that runs similar to the neurons in our own brains and is used to solve complex pattern recognition and signal processing problems, from facial recognition to weather analysis and more.
2. Deep Neural Network
Most neural networks have only one hidden layer; deep neural networks have many, giving data more node layers to pass through. This leads to more steps in pattern recognition, where each node layer trains on features from the previous layer’s output. The “deeper” the layers go, the more complex the features they can recognize.
3. Layers
Each neural network consists of “layers”. These are in turn made of ‘nodes’, and each node has an ‘activation function’. There are three types of layers. The ‘input layer’ presents the pattern to the neural network, this is then passed on to one or more ‘hidden layers’, and finally the hidden layer(s) link to the ‘output layer’.
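The three layer types can be sketched as a tiny forward pass. Everything below — the layer sizes, the random weights, and the sigmoid activation — is a made-up illustration, not something from a real trained network:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    # Squash each node's weighted sum into (0, 1) — the activation function.
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical shapes: 3 input nodes, one hidden layer of 4 nodes, 2 output nodes.
W_hidden = rng.normal(size=(4, 3))   # input layer  -> hidden layer weights
b_hidden = rng.normal(size=4)
W_output = rng.normal(size=(2, 4))   # hidden layer -> output layer weights
b_output = rng.normal(size=2)

x = np.array([0.5, -1.0, 2.0])                  # pattern presented to the input layer
hidden = sigmoid(W_hidden @ x + b_hidden)       # hidden layer activations
output = sigmoid(W_output @ hidden + b_output)  # output layer activations
print(output.shape)  # (2,)
```

A “deep” network is the same picture with several hidden layers chained between input and output.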
4. Backpropagation
Backpropagation is often treated as a ‘black box’ when it comes to neural networks. There’s a lot of math involved, but what it essentially does is work out how the network’s output (and therefore its error) changes if we change the weights and biases, so those weights and biases can be nudged in the direction that reduces the error.
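A minimal sketch of the idea, on a single sigmoid neuron with made-up numbers: backpropagation is just the chain rule, telling us how the error changes when a weight changes — which we can sanity-check by nudging the weight by hand:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One sigmoid neuron: output a = sigmoid(w*x + b), squared error C = (a - y)^2.
x, y = 1.5, 0.0          # a single (made-up) training example
w, b = 0.8, -0.3         # current weight and bias

a = sigmoid(w * x + b)
# Backpropagation = chain rule: dC/dw = 2*(a - y) * a*(1 - a) * x
grad_w = 2 * (a - y) * a * (1 - a) * x

# Compare against a brute-force finite-difference estimate of the same derivative.
eps = 1e-6
a_plus = sigmoid((w + eps) * x + b)
numeric = ((a_plus - y) ** 2 - (a - y) ** 2) / eps
print(abs(grad_w - numeric) < 1e-4)  # True
```

In a real network the same chain rule is applied layer by layer, backwards from the output — hence the name.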
5. Perceptrons
The perceptron is a term coined in the late 1950s by Frank Rosenblatt and is mainly used in pattern recognition, loosely simulating the visual system of humans and other mammals. Most scientists lost interest in single-layer perceptrons after a 1969 book by Minsky and Papert that highlighted their limitations. However, multilayer perceptrons resurfaced in the 1980s.
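As a rough sketch, a perceptron just compares a weighted sum of its inputs against a threshold. The weights and threshold below are hand-picked to model logical OR, purely for illustration:

```python
# A perceptron fires (outputs 1) if the weighted sum of its inputs
# exceeds a threshold, otherwise it outputs 0.
def perceptron(inputs, weights, threshold):
    total = sum(w * x for w, x in zip(weights, inputs))
    return 1 if total > threshold else 0

# Logical OR over two binary inputs:
for pair in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(pair, perceptron(pair, weights=[1, 1], threshold=0.5))
```

Notably, no choice of weights and threshold lets a single perceptron compute XOR — the kind of limitation Minsky and Papert pointed out, and one that multilayer networks overcome.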
6. Sigmoid Neurons
Sigmoid neurons are similar to perceptrons, but modified so that a small change in a weight or bias causes only a small change in the output. And unlike perceptrons, whose inputs x1, x2 and so on are binary (0 or 1), a sigmoid neuron’s inputs can take any value in between (for example 0.764).
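The “small change in, small change out” property can be seen directly. The weights, bias and inputs below are arbitrary made-up values:

```python
import math

def sigmoid_neuron(inputs, weights, bias):
    # Weighted sum plus bias, squashed smoothly into (0, 1).
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-z))

weights, bias = [0.6, -0.4], 0.1   # made-up parameters
x = [0.25, 0.9]                    # inputs may be any value between 0 and 1

before = sigmoid_neuron(x, weights, bias)
weights[0] += 0.01                 # nudge one weight slightly...
after = sigmoid_neuron(x, weights, bias)
print(abs(after - before))         # ...and the output moves only slightly
```

A perceptron, by contrast, can flip its output from 0 to 1 on an arbitrarily small nudge, which is exactly what makes it hard to train gradually.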
7. Firing Rule
A firing rule determines whether a neuron should fire for any given input pattern. For example, a 3-input neuron can be taught to output 0 when the input is 000, 001 or 010, and to output 1 when the input is 100, 101 or 111.
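One common firing rule — sketched here with the taught patterns from the example above — makes an untaught pattern fire the way its nearest taught pattern (fewest differing bits) does; a pattern equally close to both classes stays undefined:

```python
# Taught patterns for the hypothetical 3-input neuron above.
fires_0 = {(0, 0, 0), (0, 0, 1), (0, 1, 0)}  # patterns taught to output 0
fires_1 = {(1, 0, 0), (1, 0, 1), (1, 1, 1)}  # patterns taught to output 1

def hamming(a, b):
    # Number of positions where the two patterns differ.
    return sum(x != y for x, y in zip(a, b))

def firing_rule(pattern):
    d0 = min(hamming(pattern, p) for p in fires_0)
    d1 = min(hamming(pattern, p) for p in fires_1)
    if d0 < d1:
        return 0
    if d1 < d0:
        return 1
    return None  # tie: the output stays undefined

print(firing_rule((0, 0, 0)))  # 0 (a taught pattern)
print(firing_rule((1, 1, 0)))  # None (equally close to both classes)
```

This is how a firing rule lets a neuron generalize beyond the exact patterns it was taught.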
8. Feed-Forward Neural Network (FNN)
A feed-forward neural network is a type of artificial neural network in which units are connected so that they don’t form a cycle. This means the information travels only forward through the network; there are no loops.
9. Recurrent Neural Network (RNN)
A recurrent neural network (RNN) is a type of artificial neural network in which units are connected to form a directed cycle. This allows them to, unlike feed-forward neural networks, process arbitrary input sequences and be used for speech or handwriting recognition.
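A sketch of the recurrent idea, with made-up sizes and random weights: the hidden state loops back into the next step, which is what lets the network consume a sequence of any length:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical tiny RNN: 3-dim inputs, 4-dim hidden state, made-up weights.
W_xh = rng.normal(scale=0.5, size=(4, 3))  # input  -> hidden
W_hh = rng.normal(scale=0.5, size=(4, 4))  # hidden -> hidden (the cycle)
b_h = np.zeros(4)

def step(h, x):
    # The hidden state h carries information forward from earlier inputs.
    return np.tanh(W_hh @ h + W_xh @ x + b_h)

h = np.zeros(4)                                  # start with an empty memory
sequence = [rng.normal(size=3) for _ in range(5)]  # arbitrary-length input
for x in sequence:
    h = step(h, x)
print(h.shape)  # (4,)
```

After the loop, h summarizes the whole sequence — the kind of state a speech or handwriting recognizer builds up as it reads its input.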
10. Convolutional Neural Network (CNN)
A convolutional neural network is a type of feed-forward neural network in which the connectivity pattern between neurons mimics the animal brain, more specifically the organization of the visual cortex. They are also called ‘space invariant artificial neural networks’ or ‘shift invariant artificial neural networks’ (SIANN).
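A bare-bones sketch of the convolution operation that gives these networks their shift invariance (deep-learning libraries actually implement the cross-correlation variant shown here). The tiny image and the edge-detector kernel are made up for illustration:

```python
import numpy as np

def conv2d(image, kernel):
    # "Valid" 2-D convolution: slide the same small kernel over every
    # position — reusing one detector everywhere is what makes the
    # response shift (space) invariant.
    kh, kw = kernel.shape
    H, W = image.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

edge = np.array([[1.0, -1.0]])  # a made-up vertical-edge detector
img = np.zeros((3, 5))
img[:, 2:] = 1.0                # an edge between columns 1 and 2
print(conv2d(img, edge))        # responds only where the edge sits
```

Shift the edge one column over and the response shifts with it — the network detects the feature wherever it appears in the image.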
There you go, 10 terms that will help you understand a Google programmer if you ever get the chance to have lunch with one.