A multilayer perceptron (MLP) is a feedforward artificial neural network model that maps sets of input data onto a set of appropriate outputs.
Source: https://en.wikipedia.org/wiki/Multilayer_perceptron
You can write down the topology (architecture) of an MLP by listing the number of neurons per layer, separated by ':'. For example, 160:500:500:369 denotes a network with 160 input neurons, two hidden layers of 500 neurons each, and 369 output neurons.
If you also want to show the activation function of each layer, you can write it like this:
(160,sigmoid):(500,sigmoid):(500,tanh):(369,softmax)
Unless noted otherwise, the activation function is sigmoid in every layer except the last one, which uses softmax.
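To make the notation concrete, here is a minimal sketch (not from the original post) that parses such a topology string and runs a forward pass through an MLP with randomly initialized weights. It assumes the first entry describes the input layer, so its activation is not applied:

```python
import numpy as np

ACTIVATIONS = {
    "sigmoid": lambda z: 1.0 / (1.0 + np.exp(-z)),
    "tanh": np.tanh,
    "softmax": lambda z: np.exp(z - z.max()) / np.exp(z - z.max()).sum(),
}

def parse_topology(topology):
    """Turn '(160,sigmoid):(500,sigmoid):...' into [(160, 'sigmoid'), ...]."""
    layers = []
    for part in topology.split(":"):
        size, activation = part.strip("()").split(",")
        layers.append((int(size), activation))
    return layers

def forward(x, layers, seed=0):
    """Propagate x through the MLP with random weights (illustration only)."""
    rng = np.random.default_rng(seed)
    for (in_size, _), (out_size, act) in zip(layers, layers[1:]):
        W = rng.standard_normal((out_size, in_size)) * 0.01
        b = np.zeros(out_size)
        x = ACTIVATIONS[act](W @ x + b)
    return x

layers = parse_topology("(160,sigmoid):(500,sigmoid):(500,tanh):(369,softmax)")
output = forward(np.zeros(160), layers)
print(output.shape)  # (369,)
```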
Classification tasks can be tackled with MLPs by using one input neuron per feature and one output neuron per class. For example, to classify a 28×28 pixel image of a digit (0, 1, 2, 3, 4, 5, 6, 7, 8, 9), you would build an MLP with 28 · 28 = 784 input neurons and 10 output neurons, as sketched below.
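The following hypothetical sketch illustrates that setup with a 784:500:10 topology (sigmoid hidden layer, softmax output, per the defaults above). The weights are random here; a real network would of course be trained first:

```python
import numpy as np

rng = np.random.default_rng(0)
image = rng.random((28, 28))          # stand-in for a real 28x28 digit image
x = image.reshape(784)                # one input neuron per pixel

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

W1, b1 = rng.standard_normal((500, 784)) * 0.01, np.zeros(500)
W2, b2 = rng.standard_normal((10, 500)) * 0.01, np.zeros(10)

hidden = sigmoid(W1 @ x + b1)
scores = softmax(W2 @ hidden + b2)    # one output neuron per class (digits 0-9)
print(scores.shape, scores.argmax())  # (10,) and the predicted digit
```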