
I am trying to create a custom loss function for use in lasagne.

I would like to use the Sørensen–Dice coefficient, which I have written with numpy and use for evaluation like so:

np.sum(np.logical_and(preds == num_labs, labels == num_labs)) * 2 / (np.sum(preds == num_labs) + np.sum(labels == num_labs))

Which is doing:

Dice = (2 * |X & Y|) / (|X| + |Y|)
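
To make the snippet self-contained, here is a hypothetical worked example (the arrays and label value are made up purely for illustration):

    import numpy as np

    preds = np.array([0, 1, 1, 2, 1])   # predicted labels
    labels = np.array([0, 1, 1, 1, 2])  # ground-truth labels
    num_labs = 1                        # the label being scored

    # |X & Y|: voxels where both prediction and ground truth equal num_labs
    intersection = np.sum(np.logical_and(preds == num_labs, labels == num_labs))
    # 2*|X & Y| / (|X| + |Y|)
    dice = 2.0 * intersection / (np.sum(preds == num_labs) + np.sum(labels == num_labs))
    print(dice)  # 2*2 / (3+3) = 0.666...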

I am now trying to implement this in theano, but I am unsure how feasible it is.

Is it possible to use this as a loss function? I would like to, since I am segmenting volumes, but the loss needs to be differentiable for backpropagation. How can I change it?

Any thoughts?

JP1
  • Note that if you're using (stochastic) gradient descent, your loss function has to be differentiable, which means it has to be continuous and work on a continuous domain. It seems like you have a discrete (discontinuous) operator in there, `==`. What are `preds`, `labels`, and `num_labs`, respectively? If they are arrays, what is the type of the elements they contain? – HelloGoodbye Jun 03 '16 at 11:21
  • Did you find a solution to implement your custom loss function? – Maximilian May 17 '17 at 08:16
  • Not the Sørensen–Dice coefficient, due to the point made by HelloGoodbye, although with some tweaking it would have been possible. Using theano, however, it isn't too difficult to create custom loss functions and then use them in a lasagne, keras, or pure theano net. If you haven't seen it already, this might help: http://deeplearning.net/software/theano/tutorial/examples.html – JP1 May 17 '17 at 11:54

1 Answer


You can write it as 2*sum(A*B) / (sum(A^2) + sum(B^2)), where A is the predicted probability map and B the binary ground truth. Because it uses only products and sums of the network's soft outputs (no `==`), this "soft Dice" is differentiable. See the V-Net paper: https://arxiv.org/abs/1606.04797
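
For what it's worth, here is a minimal sketch of that formulation as a theano expression. The variable names and the `smooth` constant are my own additions (the constant guards against 0/0 when both masks are empty), not part of the paper:

    import theano
    import theano.tensor as T

    # flattened network outputs (probabilities in [0, 1]) and binary ground truth
    preds = T.vector('preds')
    targets = T.vector('targets')

    smooth = 1e-7  # avoids division by zero when both masks are empty

    # soft Dice: 2*sum(A*B) / (sum(A^2) + sum(B^2))
    intersection = T.sum(preds * targets)
    dice = (2.0 * intersection + smooth) / (T.sum(preds ** 2) + T.sum(targets ** 2) + smooth)
    loss = 1.0 - dice  # minimising this maximises the Dice coefficient

    dice_loss = theano.function([preds, targets], loss)

Since every operation here is a smooth theano op, theano can differentiate the loss with respect to the network output, so it should slot into a lasagne training loop in place of a built-in objective such as binary cross-entropy.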