Neural Network in plain NumPy
Mysteries of Neural Networks
Using high-level frameworks like Keras, TensorFlow, or PyTorch allows us to build very complex models quickly. However, it is worth taking the time to look inside and understand the underlying concepts. Not so long ago we published a post explaining, in a simple way, how neural nets work. That post, however, was highly theoretical and dedicated primarily to the math that is the source of NN superpowers. From the beginning we were planning to follow up on this topic in a more practical way. This time we will try to use that knowledge and build a fully operational neural network using only NumPy. Finally, we will also test our model: we will solve a simple classification problem and compare its performance with a NN built with Keras.
Note: Obviously, today’s post will consist in large part of code snippets written in Python. We hope, however, that reading it will not be too boring. You will also find comments in places that we consider to be unclear or worthy of attention.
Figure 1. Example of dense neural network architecture
First things first
Before we start programming, let’s stop for a moment and prepare a basic roadmap. Our goal is to create a program capable of building a densely connected neural network with a specified architecture (number and size of layers and the appropriate activation functions). An example of such a network is presented in Figure 1. Above all, we must be able to train our network and make predictions using it.
Figure 2. Neural network blueprint
The diagram above shows what operations will have to be performed during the training of our NN. It also shows how many parameters we will have to update and read at different stages of a single iteration. Building the right data structure and skillfully managing its state is one of the most difficult parts of our task. Due to time limits, I will not describe the role of each parameter shown in the figure in detail. I refer all those interested to the first article of this series, where I hope you will find answers to all the questions that bother you.
Figure 3. Dimensions of weight matrix W and bias vector b for layer l.
Initiation of neural network layers
Let’s start by initiating the weight matrix W and bias vector b for each layer. In Figure 3 we have prepared a small cheat sheet, which will help us assign the appropriate dimensions to these coefficients. The superscript [l] denotes the index of the current layer (counted from one) and the value n indicates the number of units in a given layer. I assumed that the information describing the NN architecture will be delivered to our program in the form of a list similar to the one presented in Snippet 1. Each item in the list is a dictionary describing the basic parameters of a single network layer: input_dim – the size of the signal vector supplied as an input to the layer, output_dim – the size of the activation vector obtained at the output of the layer, and activation – the activation function to be used inside the layer.
nn_architecture = [
    {"input_dim": 2, "output_dim": 4, "activation": "relu"},
    {"input_dim": 4, "output_dim": 6, "activation": "relu"},
    {"input_dim": 6, "output_dim": 6, "activation": "relu"},
    {"input_dim": 6, "output_dim": 4, "activation": "relu"},
    {"input_dim": 4, "output_dim": 1, "activation": "sigmoid"},
]
Snippet 1. A list containing parameters describing a particular neural network. This list corresponds to the NN shown in Figure 1.
If you’re familiar with this subject, you’ve probably already heard a voice in your head speaking in an anxious tone: “Ah ha, hey! Something is wrong! Some of these fields are unnecessary…”. Yeah, your inner voice is right this time. The vector coming out of one layer is also the input for the next one, so in fact it is enough to know the size of only one of those vectors. However, we deliberately decided to use this notation to keep the objects consistent across all layers and to make the code easier to understand for those who encounter these topics for the first time.
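That redundancy also means the architecture list can be validated programmatically. Below is a minimal sketch of such a check; the `validate_architecture` helper is hypothetical and not part of this article’s code.

```python
def validate_architecture(nn_architecture):
    # Each layer's input must match the previous layer's output,
    # since the activation vector flows directly between them.
    for prev, curr in zip(nn_architecture, nn_architecture[1:]):
        if prev["output_dim"] != curr["input_dim"]:
            raise ValueError("inconsistent layer dimensions")
    return True

nn_architecture = [
    {"input_dim": 2, "output_dim": 4, "activation": "relu"},
    {"input_dim": 4, "output_dim": 1, "activation": "sigmoid"},
]
assert validate_architecture(nn_architecture)
```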
def init_layers(nn_architecture, seed=99):
    np.random.seed(seed)
    params_values = {}

    for idx, layer in enumerate(nn_architecture):
        layer_idx = idx + 1
        layer_input_size = layer["input_dim"]
        layer_output_size = layer["output_dim"]

        params_values['W' + str(layer_idx)] = np.random.randn(
            layer_output_size, layer_input_size) * 0.1
        params_values['b' + str(layer_idx)] = np.random.randn(
            layer_output_size, 1) * 0.1

    return params_values
Snippet 2. The function that initiates the values of the weight matrices and bias vectors.
Let’s finally focus on the main task that we have to accomplish in this part: the initiation of the layers’ parameters. Those who have already looked at the code in Snippet 2 and have some experience with NumPy have noticed that the matrix W and vector b have been filled with small random numbers. This approach is not accidental. Weights cannot be initialized with the same number because that leads to the breaking-symmetry problem. Basically, if all weights are the same, then no matter what the input X was, all units in the hidden layer will be the same too. In a way, we would be stuck in the initial state without any hope of escape, no matter how long we train our model and how deep our network is. Linear algebra does not forgive.
The use of small values increases the efficiency of our algorithm during the first iterations. Looking at the graph of the sigmoid function, shown in Figure 4, we can see that it becomes almost flat for large values, which has a significant effect on the speed of learning of our NN. All in all, parameter initiation using small random numbers is a simple approach, but it guarantees a good enough starting point for our algorithm. The prepared parameter values are stored in a Python dictionary with keys that uniquely identify their parent layers. The dictionary is returned at the end of the function, so we will have access to its contents in the next stages of our algorithm.
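The symmetry-breaking argument is easy to verify empirically. The sketch below is a standalone toy, not the article’s own functions: with identical weights, every ReLU hidden unit produces exactly the same activations, while small random weights do not.

```python
import numpy as np

np.random.seed(0)
X = np.random.randn(2, 5)              # 2 features, 5 examples

# All weights equal: every hidden unit receives the same row of W,
# so every unit computes exactly the same activation.
W_same = np.full((4, 2), 0.5)
A_same = np.maximum(0, np.dot(W_same, X))
assert np.allclose(A_same, A_same[0])      # all rows identical

# Small random weights break the symmetry.
W_rand = np.random.randn(4, 2) * 0.1
A_rand = np.maximum(0, np.dot(W_rand, X))
assert not np.allclose(A_rand, A_rand[0])  # rows now differ
```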
Figure 4. Activation functions used in the algorithm.
Activation functions
Amongst all the functions that we will use, there are a few very simple but powerful ones. Activation functions can be written in a single line of code, but they give neural nets the nonlinearity and expressiveness that they need. Without them, our neural network would become a combination of linear functions, and so would be just a linear function itself. There are many activation functions, but in this project I decided to provide the possibility of using two of them: sigmoid and ReLU. To be able to go full circle and perform both forward and backward propagation, we also have to prepare their derivatives.
def sigmoid(Z):
    return 1 / (1 + np.exp(-Z))

def relu(Z):
    return np.maximum(0, Z)

def sigmoid_backward(dA, Z):
    sig = sigmoid(Z)
    return dA * sig * (1 - sig)

def relu_backward(dA, Z):
    dZ = np.array(dA, copy=True)
    dZ[Z <= 0] = 0
    return dZ
Snippet 3. ReLU and Sigmoid activation functions and their derivatives.
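A quick sanity check of these definitions may help; the snippet below redefines the functions from Snippet 3 so that it can run on its own, and the expected values follow directly from the math.

```python
import numpy as np

def sigmoid(Z):
    return 1 / (1 + np.exp(-Z))

def relu(Z):
    return np.maximum(0, Z)

def sigmoid_backward(dA, Z):
    sig = sigmoid(Z)
    return dA * sig * (1 - sig)

# sigmoid(0) = 1 / (1 + e^0) = 0.5
assert sigmoid(np.array([0.0]))[0] == 0.5

# ReLU clips negative values to zero and passes the rest through.
assert np.allclose(relu(np.array([-2.0, 0.0, 3.0])), [0.0, 0.0, 3.0])

# The derivative of sigmoid at 0 is 0.5 * (1 - 0.5) = 0.25.
assert sigmoid_backward(np.ones(1), np.zeros(1))[0] == 0.25
```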
Forward propagation
The designed neural network will have a simple architecture. The information flows in one direction — it is delivered in the form of an X matrix, and then travels through hidden units, resulting in the vector of predictions Y_hat. To make it easier to read, I split forward propagation into two separate functions — step forward for a single layer and step forward for the entire NN.
def single_layer_forward_propagation(A_prev, W_curr, b_curr, activation="relu"):
    Z_curr = np.dot(W_curr, A_prev) + b_curr

    if activation == "relu":
        activation_func = relu
    elif activation == "sigmoid":
        activation_func = sigmoid
    else:
        raise Exception('Non-supported activation function')

    return activation_func(Z_curr), Z_curr
Snippet 4. Single layer forward propagation step
This part of the code is probably the most straightforward and easy to understand. Given the input signal from the previous layer, we compute the affine transformation Z and then apply the selected activation function. By using NumPy, we can leverage vectorization: performing matrix operations for the whole layer and a whole batch of examples at once. This eliminates iteration and significantly speeds up our calculations. In addition to the calculated matrix A, our function also returns an intermediate value Z. What for? The answer is shown in Figure 2. We will need Z during the backward step.
Figure 5. Dimensions of individual matrices used in a forward step.
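The dimensions from Figure 5 can be verified with a toy example. The layer and batch sizes below are illustrative; note how the (n, 1) bias vector broadcasts across the whole batch.

```python
import numpy as np

np.random.seed(1)
n_prev, n_curr, m = 3, 4, 5            # units in previous/current layer, batch size

A_prev = np.random.randn(n_prev, m)    # activations from the previous layer
W_curr = np.random.randn(n_curr, n_prev)
b_curr = np.random.randn(n_curr, 1)

# Affine transformation for the whole batch at once; b_curr broadcasts
# across all m columns, so no explicit loop over examples is needed.
Z_curr = np.dot(W_curr, A_prev) + b_curr
assert Z_curr.shape == (n_curr, m)
```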
Using the pre-prepared single-layer forward step, we can now easily build the whole forward propagation. This is a slightly more complex function, whose role is not only to perform predictions but also to organize the collection of intermediate values. It returns, among other things, a Python dictionary which contains the A and Z values computed for particular layers.
def full_forward_propagation(X, params_values, nn_architecture):
    memory = {}
    A_curr = X

    for idx, layer in enumerate(nn_architecture):
        layer_idx = idx + 1
        A_prev = A_curr

        activ_function_curr = layer["activation"]
        W_curr = params_values["W" + str(layer_idx)]
        b_curr = params_values["b" + str(layer_idx)]
        A_curr, Z_curr = single_layer_forward_propagation(A_prev, W_curr, b_curr, activ_function_curr)

        memory["A" + str(idx)] = A_prev
        memory["Z" + str(layer_idx)] = Z_curr

    return A_curr, memory
Snippet 5. Full forward propagation step
Loss function
In order to monitor our progress and make sure that we are moving in the right direction, we should routinely calculate the value of the loss function. Generally speaking, the loss function is designed to show how far we are from the “ideal” solution. It is selected according to the problem we plan to solve, and frameworks such as Keras have many options to choose from. Because I am planning to test our NN on the classification of points between two classes, I decided to use binary cross-entropy, defined as J = -(1/m) * sum(y * log(y_hat) + (1 - y) * log(1 - y_hat)). To get more information about the learning process, I have also decided to implement a function that will calculate accuracy.
def convert_prob_into_class(probs):
    # Map predicted probabilities to class labels using a 0.5 threshold.
    probs_ = np.copy(probs)
    probs_[probs_ > 0.5] = 1
    probs_[probs_ <= 0.5] = 0
    return probs_

def get_cost_value(Y_hat, Y):
    m = Y_hat.shape[1]
    cost = -1 / m * (np.dot(Y, np.log(Y_hat).T) + np.dot(1 - Y, np.log(1 - Y_hat).T))
    return np.squeeze(cost)

def get_accuracy_value(Y_hat, Y):
    Y_hat_ = convert_prob_into_class(Y_hat)
    return (Y_hat_ == Y).all(axis=0).mean()
Snippet 6. Calculating the value of the cost function and accuracy
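A small worked example of the cost calculation may help build intuition; the labels and predictions below are made up for illustration.

```python
import numpy as np

# Binary cross-entropy: J = -(1/m) * sum(y*log(y_hat) + (1-y)*log(1-y_hat))
Y     = np.array([[1, 0, 1, 1]])
Y_hat = np.array([[0.9, 0.1, 0.8, 0.4]])

m = Y.shape[1]
cost = -1 / m * (np.dot(Y, np.log(Y_hat).T) + np.dot(1 - Y, np.log(1 - Y_hat).T))
cost = float(np.squeeze(cost))

# Confident, correct predictions (0.9 for class 1, 0.1 for class 0)
# contribute little; the uncertain 0.4 for a true class-1 example
# dominates the loss (-log(0.4) is about 0.92).
assert abs(cost - 0.3375) < 1e-3
```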
Backward propagation
Sadly, backward propagation is regarded by many inexperienced deep learning enthusiasts as an algorithm that is intimidating and difficult to understand. The combination of differential calculus and linear algebra very often deters people who do not have solid mathematical training. So don’t worry too much if you don’t understand everything right away. Trust me, we all went through it.
def single_layer_backward_propagation(dA_curr, W_curr, b_curr, Z_curr, A_prev, activation="relu"):
    m = A_prev.shape[1]

    if activation == "relu":
        backward_activation_func = relu_backward
    elif activation == "sigmoid":
        backward_activation_func = sigmoid_backward
    else:
        raise Exception('Non-supported activation function')

    dZ_curr = backward_activation_func(dA_curr, Z_curr)
    dW_curr = np.dot(dZ_curr, A_prev.T) / m
    db_curr = np.sum(dZ_curr, axis=1, keepdims=True) / m
    dA_prev = np.dot(W_curr.T, dZ_curr)

    return dA_prev, dW_curr, db_curr
Snippet 7. Single layer backward propagation step
Often people confuse backward propagation with gradient descent, but in fact these are two separate matters. The purpose of the first is to calculate the gradient effectively, whereas the second is to use the calculated gradient to optimize. In a NN, we calculate the gradient of the cost function (discussed earlier) with respect to the parameters, but backpropagation can be used to calculate derivatives of any function. The essence of this algorithm is the recursive use of the chain rule known from differential calculus: calculating the derivative of a function created by assembling other functions whose derivatives we already know. This process, for one network layer, is described by the following formulas. Unfortunately, because this article focuses mainly on the practical implementation, I’ll omit the derivation. Looking at the formulas, it becomes obvious why we decided to remember the values of the A and Z matrices for intermediate layers in the forward step.
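The chain rule itself can be sanity-checked numerically. The toy sketch below, unrelated to the article's own functions, compares the analytic derivative of sigmoid(w * x) with respect to w against a finite-difference estimate:

```python
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

w, x = 0.7, 1.3

# Chain rule: d/dw sigmoid(w*x) = sigmoid'(w*x) * x,
# where sigmoid'(z) = sigmoid(z) * (1 - sigmoid(z)).
s = sigmoid(w * x)
analytic = s * (1 - s) * x

# Central finite-difference approximation of the same derivative.
eps = 1e-6
numeric = (sigmoid((w + eps) * x) - sigmoid((w - eps) * x)) / (2 * eps)

assert abs(analytic - numeric) < 1e-8
```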
Figure 6. Forward and backward propagation for a single layer.
Just like in the case of forward propagation, I decided to split the calculations into two separate functions. The first one, shown in Snippet 7, focuses on a single layer and boils down to rewriting the above formulas in NumPy. The second one, representing full backward propagation, deals primarily with key juggling to read and update values in three dictionaries. We start by calculating the derivative of the cost function with respect to the prediction vector, the result of forward propagation. This is quite trivial, as it only consists of rewriting the corresponding formula. Then we iterate through the layers of the network starting from the end and calculate the derivatives with respect to all parameters according to the diagram shown in Figure 6. Ultimately, the function returns a Python dictionary containing the gradient we are looking for.
def full_backward_propagation(Y_hat, Y, memory, params_values, nn_architecture):
    grads_values = {}
    m = Y.shape[1]
    Y = Y.reshape(Y_hat.shape)

    # Derivative of the binary cross-entropy cost with respect to Y_hat
    dA_prev = -(np.divide(Y, Y_hat) - np.divide(1 - Y, 1 - Y_hat))

    for layer_idx_prev, layer in reversed(list(enumerate(nn_architecture))):
        layer_idx_curr = layer_idx_prev + 1
        activ_function_curr = layer["activation"]

        dA_curr = dA_prev

        A_prev = memory["A" + str(layer_idx_prev)]
        Z_curr = memory["Z" + str(layer_idx_curr)]
        W_curr = params_values["W" + str(layer_idx_curr)]
        b_curr = params_values["b" + str(layer_idx_curr)]

        dA_prev, dW_curr, db_curr = single_layer_backward_propagation(
            dA_curr, W_curr, b_curr, Z_curr, A_prev, activ_function_curr)

        grads_values["dW" + str(layer_idx_curr)] = dW_curr
        grads_values["db" + str(layer_idx_curr)] = db_curr

    return grads_values
Snippet 8. Full backward propagation step
Updating parameters values
The goal of this method is to update the network parameters using the gradient. In this way, we try to bring our target function closer to a minimum. To accomplish this task, we will use two dictionaries provided as function arguments: params_values, which stores the current values of the parameters, and grads_values, which stores the cost function derivatives calculated with respect to these parameters. Now you only need to apply the equations W = W - learning_rate * dW and b = b - learning_rate * db for each layer. This is a very simple optimization algorithm, but I decided to use it because it is a great starting point for more advanced optimizers, which will probably be the subject of one of my next articles.
def update(params_values, grads_values, nn_architecture, learning_rate):
    for idx, layer in enumerate(nn_architecture):
        layer_idx = idx + 1
        # Move each parameter against its gradient, scaled by the learning rate.
        params_values["W" + str(layer_idx)] -= learning_rate * grads_values["dW" + str(layer_idx)]
        params_values["b" + str(layer_idx)] -= learning_rate * grads_values["db" + str(layer_idx)]

    return params_values
Snippet 9. Updating parameters values using gradient descent
Putting things together
We are finally ready. The most difficult part of the task is behind us: we have prepared all the necessary functions and now we just need to put them together in the correct order. To better understand the sequence of operations, it is worth looking again at the diagram in Figure 2. The function returns the optimized weights obtained as a result of the training and the history of the metrics during training. In order to make a prediction, you only need to run a full forward propagation using the received weight matrices and a set of test data.
def train(X, Y, nn_architecture, epochs, learning_rate):
    params_values = init_layers(nn_architecture, 2)
    cost_history = []
    accuracy_history = []

    for i in range(epochs):
        Y_hat, cache = full_forward_propagation(X, params_values, nn_architecture)
        cost = get_cost_value(Y_hat, Y)
        cost_history.append(cost)
        accuracy = get_accuracy_value(Y_hat, Y)
        accuracy_history.append(accuracy)

        grads_values = full_backward_propagation(Y_hat, Y, cache, params_values, nn_architecture)
        params_values = update(params_values, grads_values, nn_architecture, learning_rate)

    return params_values, cost_history, accuracy_history
Snippet 10. Training a model
David vs Goliath
It’s high time to see if our model can solve a simple classification problem. I generated a data set consisting of points belonging to two classes, as shown in Figure 7. Let’s try to teach our model to classify points belonging to this distribution. For comparison, I also prepared a model in a high-level framework: Keras. Both models have the same architecture and learning rate. Still, this is a really uneven fight, as the implementation that we have prepared is probably the simplest possible one. Ultimately, both the NumPy and the Keras model achieved similar accuracy of 95% on the test set. However, it took several dozen times longer for our model to reach that result. In my opinion, this is caused primarily by the lack of appropriate optimization.
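The original dataset is not reproduced here, but a comparable two-class dataset can be generated with plain NumPy alone. The blob centers and sizes below are illustrative assumptions, not the article’s exact data; note the (features, examples) layout expected by the functions above.

```python
import numpy as np

np.random.seed(42)
N = 500  # samples per class

# Two Gaussian blobs as a stand-in for the two-class point set.
class_0 = np.random.randn(N, 2) + np.array([-2, 0])
class_1 = np.random.randn(N, 2) + np.array([2, 0])

X = np.vstack([class_0, class_1]).T               # shape (2, 1000): features x examples
Y = np.hstack([np.zeros(N), np.ones(N)]).reshape(1, -1)

assert X.shape == (2, 2 * N)
assert Y.shape == (1, 2 * N)
```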
Figure 7. Test dataset
Figure 8. Visualization of the classification boundaries achieved with both models
I hope this article has broadened your horizons and increased your understanding of the operations taking place inside a neural network. That would be the best reward for the effort we put into creating this post. I admit that I learned a lot myself by preparing both the code and the comments. It is true that nothing teaches as much as getting your hands dirty.
If you have any questions or if you find an error in the source code, please let me know in the comments or by email. Best wishes!