Python's input:
import numpy as np
import tensorflow as tf

sequence1 = [0, 1]
output1 = [[1, 0, 0], [0, 1, 0]]
loss = tf.compat.v1.nn.ctc_loss(
    labels=tf.sparse.from_dense([sequence1]),
    inputs=np.array([output1]).astype('float32'),
    sequence_length=[2],
    time_major=False,
)
print(loss.numpy())
print(loss.numpy())
Python's output:
array([1.2408944], dtype=float32)
Mathematica's input:
CTCLossLayer[<|
"Input" -> {{1, 0, 0}, {0, 1, 0}},
"Target" -> {1, 2} (* Index starts from 1 and not 0 *)
|>]
Mathematica's output:
0.
Since my sequence is [0, 1], and its one-hot encoding (with 2 characters + 1 for the CTC blank, i.e. 3 classes) is [[1, 0, 0], [0, 1, 0]], shouldn't nn.ctc_loss also be 0, as in Mathematica? Thanks!
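For reference, this is the one-hot construction I have in mind, sketched with numpy (the names num_classes and one_hot are mine, not from either API):

import numpy as np

sequence1 = [0, 1]
num_classes = 3  # 2 characters + 1 for the CTC blank

# Pick the row of the 3x3 identity matrix for each label index
one_hot = np.eye(num_classes, dtype=np.float32)[sequence1]
print(one_hot)
# [[1. 0. 0.]
#  [0. 1. 0.]]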