- A layer is a set of units that are not connected to each other.
- A layer is called fully connected if every unit in the layer receives all of the layer's inputs (every unit shares the same inputs).
- A layer has an input $X$ and an output $A$ (also known as its activation).
Fully connected layer:
Since each unit (neuron) has a vector of weights and a single offset, we can think of the weights of the whole layer as a matrix, $W$, and the collection of all the offsets as a vector $W_0$. If we have $m$ inputs, $n$ units, and $n$ outputs, then:
- $W$ is an $m \times n$ matrix
- $W_0$ is an $n \times 1$ column vector
- $X$, the input, is an $m \times 1$ column vector
- $Z = W^T X + W_0$, the pre-activation value, is an $n \times 1$ column vector
- $A$, the activation, is an $n \times 1$ column vector; the activation function $f$ is applied element-wise to $Z$, so the output vector is $A = f(Z) = f(W^T X + W_0)$ (see the sketch after this list).
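To make the shapes concrete, here is a minimal numpy sketch of a fully connected layer's forward pass. The variable names `W`, `W0`, `X` mirror the symbols above, and the choice of ReLU for $f$ is just one illustrative option, not something fixed by the notes:

```python
import numpy as np

m, n = 3, 2                      # m inputs, n units (and n outputs)

# Illustrative parameters: W is m x n, W0 is n x 1
W = np.random.randn(m, n)
W0 = np.random.randn(n, 1)

X = np.random.randn(m, 1)        # input: m x 1 column vector

Z = W.T @ X + W0                 # pre-activation: n x 1 column vector
A = np.maximum(Z, 0)             # f applied element-wise (ReLU here)

print(Z.shape, A.shape)          # both (n, 1) == (2, 1)
```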
Single-layer networks allow us to make linear hypotheses, as we have seen with linear classifiers and regression.
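For instance, a single unit with $f = \operatorname{sign}$ computes $\operatorname{sign}(W^T X + W_0)$, which is exactly the linear classifier hypothesis. A small sketch, with weights chosen arbitrarily for illustration:

```python
import numpy as np

# One unit (n = 1) with a sign activation: a linear classifier.
# These particular weights are made up for illustration.
W = np.array([[1.0], [-2.0]])    # m x n = 2 x 1
W0 = np.array([[0.5]])           # n x 1

def predict(X):
    Z = W.T @ X + W0             # 1 x 1 pre-activation
    return np.sign(Z)            # classifier output in {-1, 0, +1}

print(predict(np.array([[3.0], [1.0]])))   # [[1.]]
```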