Sigmoid Neuron
Let us repeat the exercise we did with linear regression and look at logistic regression from the perspective of a neural network. If we do, we will realize that logistic regression is simply a neuron with a sigmoid activation function.
Once again let us remind ourselves that a neuron is a computational unit based on three distinct steps. First: the inputs \mathbf{x} are scaled by weights \mathbf{w}. Second: the scaled inputs (plus a bias b) are aggregated via a sum. Third: an activation function f is applied to the sum.
In logistic regression all three steps can be described by f(z), where f is the sigmoid activation function \sigma and z is the net input \mathbf{xw}^T + b. Written in a more familiar manner, the output of the neuron amounts to \dfrac{1}{1+e^{-(w_1x_1 + \cdots + w_nx_n + b)}}.
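The three steps can be sketched in a few lines of NumPy. This is a minimal illustration, not a full implementation; the example weights, inputs, and bias are arbitrary values chosen for demonstration.

```python
import numpy as np

def sigmoid(z):
    # sigmoid activation: squashes any real number into the range (0, 1)
    return 1.0 / (1.0 + np.exp(-z))

def sigmoid_neuron(x, w, b):
    # steps 1 and 2: scale the inputs by the weights and sum them up,
    # adding the bias, to get the net input z = x . w + b
    z = np.dot(x, w) + b
    # step 3: apply the sigmoid activation to the net input
    return sigmoid(z)

# arbitrary example values (hypothetical, for illustration only)
x = np.array([1.0, 2.0])
w = np.array([0.5, -0.25])
b = 0.1
output = sigmoid_neuron(x, w, b)  # sigma(0.5 - 0.5 + 0.1) = sigma(0.1)
```

Because the sigmoid maps the net input into (0, 1), the output can be read as a probability, which is exactly how logistic regression uses it.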
This type of neuron is extremely powerful. When we combine several sigmoid neurons, such that the output of one neuron is used as an input to the neurons in the next layer, we essentially create a neural network. Activation functions like the sigmoid are often called nonlinear activations, because they allow a neural network to solve nonlinear problems (more on that in the next chapter).