The MATLAB feedforwardnet function returns a neural network object with the properties described in the documentation. The workflow for creating a neural network with pre-trained weights is as follows:
- Load data
- Create the network
- Configure the network
- Initialize the weights and biases
- Train the network
Steps 1, 2, 3, and 5 are exactly the same as when creating a neural network from scratch; step 4 is where the pre-trained weights come in. Let's look at a simple example:
% 1. Load data
load fisheriris
targets = dummyvar(categorical(species)).';  % one-hot class targets, 3x150
meas = meas.';                               % inputs, 4x150
% 2. Create network
net = feedforwardnet([16, 16]);
% 3. Configure the network (configure returns the modified network,
% so the result must be assigned back)
net = configure(net, meas, targets);
Now we have a neural network net with 4 inputs (sepal and petal length and width) and 3 outputs ('setosa', 'versicolor', and 'virginica'), plus two hidden layers with 16 nodes each. The weights are stored in the two properties net.IW and net.LW, where IW holds the input weights and LW the layer weights:
>> net.IW
ans =
3×1 cell array
[16×4 double]
[]
[]
>> net.LW
ans =
3×3 cell array
[] [] []
[16×16 double] [] []
[] [3×16 double] []
This is confusing at first, but makes sense: each row in both these cell arrays corresponds to one of the layers we have.
In the IW array, we have the weights between the input and each of the layers. Only the first layer is connected to the input, so only the first cell is non-empty. The shape of this weight matrix is 16x4, as we have 4 inputs and 16 hidden units.
In the LW array, we have the weights from each layer (the columns) to each layer (the rows). In our case, we have a 16x16 weight matrix from the first to the second layer, and a 3x16 weight matrix from the second to the third (output) layer. Makes perfect sense, right?
With that, we know how to initialize the weights we got from the RBM code:
net.IW{1,1} = weights_input;   % must be 16x4
net.LW{2,1} = weights_hidden;  % must be 16x16
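If you want to double-check that your pre-trained matrices line up with what the network expects, a quick sanity check like the following can save some debugging (the assert calls are purely illustrative):

```matlab
% Sanity check: the assigned matrices must match the configured shapes
assert(isequal(size(net.IW{1,1}), [16 4]));   % input   -> layer 1
assert(isequal(size(net.LW{2,1}), [16 16]));  % layer 1 -> layer 2
assert(isequal(size(net.LW{3,2}), [3 16]));   % layer 2 -> output
% The biases live in the analogous cell array net.b,
% one column vector per layer (16, 16, and 3 elements here)
cellfun(@(b) size(b, 1), net.b).'
```

Note that if your RBM pre-training also produced bias vectors, net.b is where they would go.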
From there, you can continue with step 5, i.e. training the network in a supervised fashion.
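Step 5 might then look like the sketch below; the choice of training function and number of epochs is just illustrative, and train will fine-tune from the weights we set rather than reinitializing them:

```matlab
% 5. Train the network (supervised fine-tuning of the pre-trained weights)
net.trainFcn = 'trainscg';      % scaled conjugate gradient (illustrative choice)
net.trainParam.epochs = 500;
[net, tr] = train(net, meas, targets);

% Evaluate: fraction of correctly classified samples
pred = net(meas);
[~, predClass] = max(pred, [], 1);
[~, trueClass] = max(targets, [], 1);
accuracy = mean(predClass == trueClass)
```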