Linear Neuron
If we look at linear regression from a different perspective, we realize that it is just a neuron with a linear activation function.
Let us remind ourselves that a neuron is a computational unit. The output of the neuron is based on three distinct calculation steps: scaling of the inputs $\mathbf{x}$ with the weights $\mathbf{w}$, summation of the scaled inputs (plus the bias $b$), and the application of an activation function.
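To make these three steps concrete, here is a minimal NumPy sketch that performs them one by one; the input values, weights, and bias are made-up numbers, purely for illustration:

```python
import numpy as np

# made-up example values, purely for illustration
x = np.array([1.0, 2.0, 3.0])    # inputs
w = np.array([0.5, -1.0, 0.25])  # weights
b = 2.0                          # bias

scaled = x * w           # step 1: scale each input with its weight
z = np.sum(scaled) + b   # step 2: sum the scaled inputs and add the bias
a = z                    # step 3: activation (a placeholder that leaves z unchanged)
print(a)                 # 1.25
```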
Linear regression $z = \mathbf{x} \mathbf{w}^T + b$ essentially performs the first two steps in a single calculation.
$$
z = \begin{bmatrix} x_1 & x_2 & x_3 & \cdots & x_n \end{bmatrix} \begin{bmatrix} w_1 \\ w_2 \\ w_3 \\ \vdots \\ w_n \end{bmatrix} + b = x_1w_1 + x_2w_2 + x_3w_3 + \cdots + x_nw_n + b
$$

At this point you might interject that we do not have an activation function $f(z)$; instead we end up with $z$, the so-called net input. So let us introduce an activation function that does not change the nature of linear regression. We are going to use the so-called identity function, where the input equals the output, $f(z) = z$. When we apply the identity function as the activation, we end up with a linear neuron, where $a = f(\mathbf{x} \mathbf{w}^T + b)$. This might seem like an unnecessary step, but by enforcing the use of an identity function we put ourselves in a position to understand different types of neurons. All we have to do is replace the identity function by any other activation function $f(z)$ to describe any other type of neuron.
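Putting the pieces together, a linear neuron is just the vectorized net input passed through the identity function. Here is a minimal sketch, again with made-up values, where the activation is an explicit, swappable parameter:

```python
import numpy as np

def identity(z):
    """Identity activation: the output equals the net input."""
    return z

def linear_neuron(x, w, b, f=identity):
    """Compute a = f(x w^T + b)."""
    z = x @ w + b   # scaling and summation (plus bias) in a single step
    return f(z)

x = np.array([1.0, 2.0, 3.0])
w = np.array([0.5, -1.0, 0.25])
b = 2.0
print(linear_neuron(x, w, b))  # 1.25, identical to plain linear regression
```

Replacing `f` with, for example, a sigmoid would turn the same computation into a different type of neuron, which is exactly the generality the identity function buys us.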