Multilayer Perceptrons (MLP)


An MLP consists of at least three layers of nodes: an input layer, one or more hidden layers, and an output layer. Except for the input nodes, each node is a neuron with a nonlinear activation function. MLPs are trained with backpropagation, a supervised learning technique. Their multiple layers and nonlinear activations distinguish MLPs from linear perceptrons and allow them to separate data that is not linearly separable.
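As a concrete illustration, the sketch below trains a small MLP on XOR, the classic example of data a single linear perceptron cannot separate. The architecture (2 inputs, 8 sigmoid hidden units, 1 sigmoid output), learning rate, and iteration count are illustrative assumptions, not prescribed by the text; backpropagation is written out by hand via the chain rule.

```python
import numpy as np

# Assumed setup: 2-8-1 MLP with sigmoid activations and mean-squared-error
# loss, trained by full-batch gradient descent on the XOR problem.
rng = np.random.default_rng(0)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # XOR targets

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Parameters: input(2) -> hidden(8) -> output(1)
W1 = rng.normal(scale=1.0, size=(2, 8))
b1 = np.zeros((1, 8))
W2 = rng.normal(scale=1.0, size=(8, 1))
b2 = np.zeros((1, 1))

lr = 0.5
losses = []
for _ in range(10000):
    # Forward pass
    h = sigmoid(X @ W1 + b1)    # hidden activations
    out = sigmoid(h @ W2 + b2)  # network output
    losses.append(float(np.mean((out - y) ** 2)))

    # Backpropagation: chain rule; s*(1-s) is the sigmoid derivative
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # Gradient-descent parameter updates
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0, keepdims=True)

preds = (sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2) > 0.5).astype(int)
print("predictions:", preds.ravel())
print("loss: %.4f -> %.4f" % (losses[0], losses[-1]))
```

Note that without the nonlinear hidden layer (i.e., a single linear layer), no setting of the weights can fit XOR, which is exactly the limitation of the linear perceptron described above.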


Tutorials

Module 1: Neural Networks - CS231n
These notes accompany the Stanford CS class CS231n: Convolutional Neural Networks for Visual Recognition.