I couldn't find an answer anywhere on the internet about why my code throws an error.
I'm trying to build an LSTM model with TensorFlow.NET, following this example: https://github.com/xuwaters/TensorFlow.NET-Examples/blob/master/src/TensorFlowNET.Examples/ImageProcessing/DigitRecognitionLSTM.cs
But I get an error (TypeError: If default_name is None then scope is required) when the program reaches this line:
var (outputs, _, _) = rnn.static_bidirectional_rnn(lstm_fw_cell, lstm_bw_cell, x, dtype: tf.float32);
I tried defining a scope, as I saw suggested online, but I still get the error:
using (var scope = tf.name_scope("scope")) { }
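The Python version of this API, tf.nn.static_bidirectional_rnn, accepts an optional scope parameter, so I was hoping something along these lines would work (a hypothetical sketch; I couldn't find such an overload in TensorFlow.NET 0.100.2):

// Hypothetical overload mirroring the Python signature, which accepts scope=None.
// As far as I can tell, TensorFlow.NET 0.100.2 does not expose this parameter.
var (outputs, _, _) = rnn.static_bidirectional_rnn(
    lstm_fw_cell, lstm_bw_cell, x,
    dtype: tf.float32,
    scope: "bidirectional_rnn");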
I'm using the SciSharp.TensorFlow.Redist 2.11.0 and TensorFlow.NET 0.100.2 NuGet packages.
Here is the code of the method:
public Graph BuildGraph()
{
    using (var scope = tf.name_scope("scope"))
    {
        var graph = new Graph().as_default();

        X = tf.placeholder(tf.float32, (-1, timesteps, num_input), name: "X");
        Y = tf.placeholder(tf.float32, (-1, num_classes), name: "Y");

        // Hidden layer weights => 2*n_hidden because of forward + backward cells
        var weights = tf.Variable(tf.random.normal((2 * num_hidden, num_classes)), name: "w");
        var biases = tf.Variable(tf.random.normal(num_classes), name: "b");

        // Unstack to get a list of 'timesteps' tensors of shape (batch_size, num_input)
        var x = tf.split(X, timesteps, 1, "x");

        // Define lstm cells with tensorflow
        // Forward direction cell
        var lstm_fw_cell = new BasicLstmCell(num_hidden, forget_bias: 1.0f);
        // Backward direction cell
        var lstm_bw_cell = new BasicLstmCell(num_hidden, forget_bias: 1.0f);

        // Get lstm cell output
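        // NOTE: this is the line that throws the TypeError described above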
        var (outputs, _, _) = rnn.static_bidirectional_rnn(lstm_fw_cell, lstm_bw_cell, x, dtype: tf.float32);

        // Linear activation, using rnn inner loop last output
        var logits = tf.matmul(outputs.Last(), weights) + biases;
        prediction = tf.nn.softmax(logits);

        // Define loss and optimizer
        loss_op = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(
            logits: logits, labels: Y));
        var optimizer = tf.train.GradientDescentOptimizer(learning_rate: learning_rate);
        train_op = optimizer.minimize(loss_op);

        // Evaluate model (with test logits, for dropout to be disabled)
        var correct_pred = tf.equal(tf.math.argmax(prediction, 1), tf.math.argmax(Y, 1));
        accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32));

        return graph;
    }
}
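For context, here is roughly how I call the method; the exception is thrown while building the graph, before any session runs (a sketch, and BiLstmModel is just a placeholder name for my own class):

// Sketch of the call site; BiLstmModel is a placeholder for my class that
// holds BuildGraph() and the fields (X, Y, prediction, loss_op, ...).
var model = new BiLstmModel();
var graph = model.BuildGraph(); // the TypeError is thrown inside this call
using (var sess = tf.Session(graph))
{
    sess.run(tf.global_variables_initializer());
}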
Does anyone have an idea? Thanks in advance.