MATLAB is interesting because of its emphasis on vector math. I’ve been looking at a simple feedforward matrix-vector computation for neural networks in MATLAB. Here are the basic concepts of how to implement one (so I can do it again if I ever need to…).
Pretend I’m given a 3-layer network: 1 input layer, 1 hidden layer, and 1 output layer.
The function prototype is predict(t1, t2, X)
where t1 is an (a, b) matrix, with a = the number of neurons in the hidden layer and b = the number of predictors + 1 (the +1 is the bias term feeding each sigmoid activation),
and t2 is a (c, d) matrix, with c = the number of neurons in the output layer and d = the number of hidden-layer neurons + 1 (again, +1 for the bias).
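To make the shapes concrete, here’s a quick sketch. The layer sizes (400 inputs, 25 hidden units, 10 output classes) are purely illustrative, not from the original spec:

```matlab
% Illustrative layer sizes (hypothetical -- not from the original post):
inputUnits  = 400;   % number of predictors
hiddenUnits = 25;    % neurons in the hidden layer
outputUnits = 10;    % classes in the output layer

t1 = rand(hiddenUnits, inputUnits  + 1);   % 25 x 401 (+1 column for the bias)
t2 = rand(outputUnits, hiddenUnits + 1);   % 10 x 26  (+1 column for the bias)
```

In practice t1 and t2 would come out of training, but random weights are enough to check that the dimensions line up.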
The number of entities we need to make predictions for is size(X,1);
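The loop below relies on a sigmoid helper, which base MATLAB doesn’t ship; a minimal element-wise definition (saved as sigmoid.m) could look like this:

```matlab
function g = sigmoid(z)
% SIGMOID Element-wise logistic function: 1 ./ (1 + exp(-z)).
% Works on scalars, vectors, and matrices alike.
g = 1 ./ (1 + exp(-z));
end
```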
Here’s a simple for loop to run the weights with a bias neuron in both Input and Hidden layers:
for thisEntity = 1:size(X, 1)
    thisInputLayerAsVector = [1; X(thisEntity, :)'];  % bias neuron + inputs
    % Next, feed this forward to the hidden layer.
    FeedForwardToHidden = [1; sigmoid(t1 * thisInputLayerAsVector)]; % bias + sigmoid of first weights
    FeedForwardToOutput = sigmoid(t2 * FeedForwardToHidden); % scores for the final classifiers
end
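Since MATLAB rewards vectorization, the per-entity loop can also be sketched without a loop by feeding all rows of X forward at once. This is a sketch with tiny made-up dimensions (the data and the anonymous sigmoid are assumptions for self-containment):

```matlab
% Tiny illustrative data (hypothetical sizes):
sigmoid = @(z) 1 ./ (1 + exp(-z));   % element-wise logistic
X  = rand(5, 3);                     % 5 entities, 3 predictors
t1 = rand(4, 4);                     % 4 hidden units, 3 predictors + 1 bias
t2 = rand(2, 5);                     % 2 outputs, 4 hidden units + 1 bias

% Vectorized feedforward over all entities at once:
m  = size(X, 1);                         % number of entities
A1 = [ones(m, 1), X];                    % m x (inputs+1): bias column + inputs
A2 = [ones(m, 1), sigmoid(A1 * t1')];    % m x (hidden+1): bias + hidden activations
A3 = sigmoid(A2 * t2');                  % m x outputs: one score row per entity
```

Each row of A3 is the same score vector the loop computes one entity at a time.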
After you run this, FeedForwardToOutput will contain a vector of “scores” for a single entity, with values between 0 and 1: ideally close to 1 in the “matching” position of the vector and close to 0 in the non-matching positions. For multi-class classification you want one clear winner and the rest near 0, but that’s a function of training, not of this math to compute the forward values of the NN. Now, you’d need to figure out how to convert this score vector into something that makes sense for your use case.
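For multi-class classification, one common way to turn the score vector into a label is to take the index of the largest score via max’s second output. A minimal sketch, with a hypothetical score vector standing in for the loop’s output:

```matlab
% Hypothetical score vector for one entity (class 2 has the highest score):
FeedForwardToOutput = [0.1; 0.9; 0.2];

% Pick the class with the highest score:
[~, predictedClass] = max(FeedForwardToOutput);
```

Here predictedClass is the 1-based index of the winning output neuron, which you’d then map back to whatever your class labels are.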