# Feed Forward Networks

Feed forward neural networks are among the most common network architectures in predictive modeling. DynaML provides an implementation of feed forward architectures trained using backpropagation with momentum.
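The momentum update rule can be sketched as follows. This is an illustrative example, not DynaML's actual implementation: it minimizes a hypothetical one-dimensional quadratic loss, with the learning rate `lr` and momentum coefficient `mu` chosen arbitrarily for the demonstration.

```python
# Gradient descent with momentum on the toy loss L(w) = (w - 3)^2.
# (Sketch only; DynaML's trainer operates on network weight matrices.)
def train_with_momentum(w=0.0, lr=0.1, mu=0.9, steps=500):
    v = 0.0
    for _ in range(steps):
        grad = 2.0 * (w - 3.0)   # dL/dw
        v = mu * v - lr * grad   # velocity accumulates past gradients
        w = w + v                # momentum step smooths oscillations
    return w
```

The velocity term `v` averages recent gradients, which damps oscillation across steep directions and accelerates progress along shallow ones; here the iterate converges to the minimizer $w = 3$.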

In a feed forward neural network with a single hidden layer, the predicted target $y$ is expressed in terms of the edge weights and node values as follows (the expression extends naturally to multi-layer networks).

$$y = W_2 \sigma(W_1 \mathbf{x} + b_1) + b_2$$

Where $W_1, \ W_2$ are matrices of edge weights for the hidden layer and output layer respectively, $b_1, \ b_2$ are the corresponding bias terms, and $\sigma(\cdot)$ is a monotonic activation function; the usual choices are the sigmoid, tanh, linear, and rectified linear functions.
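The forward pass above can be computed directly. The sketch below is a minimal pure-Python illustration of $y = W_2 \sigma(W_1 \mathbf{x} + b_1) + b_2$ with $\sigma = \tanh$; the shapes and values are made up for the example and have nothing to do with DynaML's internal representation.

```python
import math

# Single-hidden-layer forward pass: y = W2 * tanh(W1 * x + b1) + b2.
# W1 has shape (hidden x input); W2 has shape (output x hidden).
def forward(x, W1, b1, W2, b2):
    hidden = [math.tanh(sum(w * xi for w, xi in zip(row, x)) + b)
              for row, b in zip(W1, b1)]
    return [sum(w * hi for w, hi in zip(row, hidden)) + b
            for row, b in zip(W2, b2)]

# Example: 2 inputs, 2 hidden units, 1 output.
y = forward([1.0, -1.0],
            [[0.5, -0.5], [1.0, 1.0]], [0.0, 0.0],
            [[1.0, 2.0]], [0.1])
```

Stacking another weight matrix and activation around the hidden layer extends this to multi-layer networks.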

The new neural network API extends the same top level traits as the old API, i.e. `NeuralNet[Data, BaseGraph, Input, Output, Graph <: NeuralGraph[BaseGraph, Input, Output]]`, which itself extends the `ParameterizedLearner[Data, Graph, Input, Output, Stream[(Input, Output)]]` trait.

Tip