Part I: Training a network by hand
In recitation, we show how to compute activations in a neural network, and how to perform stochastic gradient descent to train it. We compute activations for two example networks, but only show how to train one of them. Show how to train the second network using just a single example, x = [1 1], y = [0 0] (note that in this case, the label is a vector). Initialize all weights to 0.05 and use a learning rate of 0.3. Include your answers in text form in a file report.pdf/docx.
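As a starting point, here is a minimal MATLAB sketch of one SGD update. The exercise does not restate the second network's architecture, so the sketch assumes a 2-2-2 network (two inputs, two tanh hidden units, two identity outputs) with squared-error loss and no bias terms; adjust the dimensions to match the recitation network.

% One SGD step for Part I. ASSUMED architecture: 2-2-2, tanh hidden
% units, identity outputs, squared-error loss, no bias terms.
x   = [1; 1];              % input sample as a column vector
y   = [0; 0];              % vector-valued label
eta = 0.3;                 % learning rate
W1  = 0.05 * ones(2, 2);   % all weights initialized to 0.05
W2  = 0.05 * ones(2, 2);

a = W1 * x;                % hidden pre-activations
z = tanh(a);               % hidden activations
y_pred = W2 * z;           % identity outputs

delta_out = y_pred - y;                       % error at the output
delta_hid = (1 - z.^2) .* (W2' * delta_out);  % error at the hidden layer

W2 = W2 - eta * delta_out * z';               % weight updates
W1 = W1 - eta * delta_hid * x';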
Part II: Training a neural network (25 points)
In this exercise, you will write code to train and evaluate a very simple neural network. We will follow the example in Bishop that uses a single hidden layer, a tanh function at the hidden layer and an identity function at the output layer, and a squared error loss. The network will have 30 hidden neurons (i.e. M=30) and 1 output neuron (i.e. K=1). To implement it, follow the equations in the slides and your textbook.
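For reference, the forward and backpropagation equations for this architecture (Bishop's notation, bias terms omitted) are

a_j = \sum_{i=1}^{D} w_{ji}^{(1)} x_i, \qquad z_j = \tanh(a_j), \qquad y_k = \sum_{j=1}^{M} w_{kj}^{(2)} z_j,

with squared-error loss E = \tfrac{1}{2} \sum_{k=1}^{K} (y_k - t_k)^2 and backpropagated errors

\delta_k = y_k - t_k, \qquad \delta_j = (1 - z_j^2) \sum_{k=1}^{K} w_{kj}^{(2)} \delta_k.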
First, write a function [y_pred, Z] = forward(X, W1, W2) that computes activations from the front of the network to the back, using fixed input features and weights. You will also use this forward-pass function to evaluate your network after training. A sketch is given after the lists below.
Inputs:
X = the N x D matrix of input features (one sample per row)
W1 = the first-layer weight matrix (input to hidden)
W2 = the second-layer weight matrix (hidden to output)
Outputs:
y_pred = the network's predictions for all N samples
Z = the hidden-layer activations for all N samples
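A minimal sketch of what forward might look like, assuming tanh hidden units, an identity output, samples stored row-wise, and no bias terms (if your formulation uses biases, fold a constant-1 column into X):

function [y_pred, Z] = forward(X, W1, W2)
% FORWARD  Compute activations front to back for a one-hidden-layer net.
% X : N x D input features, W1 : M x D weights, W2 : K x M weights.
A = X * W1';        % N x M pre-activations at the hidden layer
Z = tanh(A);        % N x M hidden activations
y_pred = Z * W2';   % N x K outputs (identity activation at the output)
end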
Second, write a function [W1, W2, error_over_time] = backward(X, y, M, iters, eta) that performs training using backpropagation (calling the forward-pass function as it iterates). Construct the network inside this function, i.e. create the weight matrices and initialize the weights to small random numbers. Then iterate: pick a training sample, compute the error at the output, backpropagate it to the hidden layer, and update the weights with the resulting errors. A sketch is given after the lists below.
Inputs:
X = the N x D matrix of training features
y = the N x 1 vector of target values (more generally, an N x K matrix)
M = the number of hidden neurons
iters = the number of training iterations
eta = the learning rate
Outputs:
W1, W2 = the learned weight matrices
error_over_time = a vector recording the training error at each iteration
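A sketch of one way backward could look under the same assumptions (tanh hidden layer, identity output, squared-error loss, no biases). Evaluating the full-dataset error every iteration, as done here, is simple but slow; recording the per-sample error instead is a cheaper alternative.

function [W1, W2, error_over_time] = backward(X, y, M, iters, eta)
% BACKWARD  Train a one-hidden-layer network by stochastic gradient descent.
[N, D] = size(X);
K = size(y, 2);
W1 = 0.01 * randn(M, D);          % small random initial weights
W2 = 0.01 * randn(K, M);
error_over_time = zeros(iters, 1);
for t = 1:iters
    n = randi(N);                 % pick one training sample at random
    x_n = X(n, :)';               % D x 1 input
    y_n = y(n, :)';               % K x 1 target
    a = W1 * x_n;                 % forward pass for this sample
    z = tanh(a);
    y_hat = W2 * z;
    delta_out = y_hat - y_n;                      % error at the output
    delta_hid = (1 - z.^2) .* (W2' * delta_out);  % backpropagated error
    W2 = W2 - eta * delta_out * z';               % gradient-descent updates
    W1 = W1 - eta * delta_hid * x_n';
    y_all = forward(X, W1, W2);                   % track training error
    error_over_time(t) = 0.5 * sum((y_all(:) - y(:)).^2);
end
end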
Part III: Testing your neural network on wine quality (15 points)
We will use the Wine Quality dataset from HW3 to test the neural network implementation. Write your code in a script neural_net.m.
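A sketch of what neural_net.m could contain; the file name, preprocessing, and hyperparameter values are assumptions to adapt to the HW3 data and your own experiments:

% neural_net.m -- test the implementation on the Wine Quality data.
% ASSUMPTION: the data loads as a numeric matrix whose last column is
% the quality score; the file name below is hypothetical.
data = readmatrix('winequality-red.csv');
X = data(:, 1:end-1);
y = data(:, end);
X = (X - mean(X)) ./ std(X);   % standardize so tanh units do not saturate

M = 30;                        % hidden neurons, as specified in Part II
[W1, W2, error_over_time] = backward(X, y, M, 10000, 0.001);

y_pred = forward(X, W1, W2);   % evaluate with the forward pass
fprintf('Training MSE: %.4f\n', mean((y_pred - y).^2));
plot(error_over_time); xlabel('iteration'); ylabel('squared error');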
Submission: Please include the following files: report.pdf/docx (your Part I answers), forward.m, backward.m, and neural_net.m.