
I am searching for a layer that performs an element-wise division of its input, where the parameters of the division are learned during training, just like the weights of a standard Conv2D layer.

I found this: https://www.tensorflow.org/api_docs/python/tf/keras/layers/Multiply

but I don't think it's what I want, because I want the division parameters to be learned, not to divide the outputs of two layers.

With a dense layer, dot products are computed, which is NOT what I want. I am looking for ELEMENT-WISE multiplication/division.
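
To make it concrete, here is a tiny NumPy sketch of the difference (the numbers are just illustrative):

import numpy as np

x = np.array([2.0, 4.0, 6.0])   # input
w = np.array([2.0, 2.0, 3.0])   # learned parameters

# What a Dense layer (without bias) computes: a dot product that sums everything.
print(np.dot(w, x))   # 2*2 + 2*4 + 3*6 = 30.0

# What I want instead: element-wise division, same shape as the input.
print(x / w)          # [1. 2. 2.]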

SheppLogan
    This is effectively the same as a Dense layer without biases. – Dr. Snoopy Feb 20 '20 at 15:23
  • Indeed. No need to explicitly model a division, since a division is equivalent to a multiplication by the reciprocal (x/2 == x*0.5), which is what happens by default in a neural network layer. – sdcbr Feb 20 '20 at 15:24
  • No, I think you are making a mistake: by default a dense layer computes the dot product w^T x = w1*x1 + w2*x2 + ... + wn*xn. I don't want to sum! I want ELEMENT-WISE multiplication/division... what do you think? – SheppLogan Feb 20 '20 at 15:44
  • If you have an idea, I would be very interested. – SheppLogan Feb 20 '20 at 15:53
  • Anyone have an idea? – SheppLogan Feb 21 '20 at 11:57
  • @Machupicchu, I see that you have asked a question at https://stackoverflow.com/questions/47289116/element-wise-multiplication-with-broadcasting-in-keras-custom-layer regarding this issue. Is your issue resolved? If so, can you please share the solution so that the community can benefit from it. Thanks! –  May 15 '20 at 05:53

1 Answer


Sample code for a custom layer that performs an element-wise division of the input, with the parameters (weights) of the division learned during training, is shown below:

# Colab-only magic to select TF 2.x; omit this line outside of Colab.
%tensorflow_version 2.x

import numpy as np
import tensorflow as tf
from tensorflow.keras.layers import Layer
from tensorflow.keras.models import Sequential

class MyLayer(Layer):

    def __init__(self, output_dims, **kwargs):
        self.output_dims = output_dims
        super(MyLayer, self).__init__(**kwargs)

    def build(self, input_shape):
        # One trainable weight per input element; initializing with ones
        # makes the layer start out as the identity (x / 1 == x).
        self.kernel = self.add_weight(name='kernel',
                                      shape=self.output_dims,
                                      initializer='ones',
                                      trainable=True)
        super(MyLayer, self).build(input_shape)  # Be sure to call this at the end!

    def call(self, x):
        # Element-wise division of the input by the learned weights;
        # the kernel broadcasts across the batch dimension.
        return tf.divide(x, self.kernel)

    def compute_output_shape(self, input_shape):
        # Preserve the batch dimension; only the feature shape is fixed.
        return (input_shape[0],) + tuple(self.output_dims)

mInput = np.array([[1, 2, 3, 4]], dtype='float32')
inShape = (4,)
outShape = (4,)

net = Sequential()
net.add(MyLayer(outShape, input_shape=inShape))
net.compile(loss='mean_absolute_error', optimizer='adam')
p = net.predict(x=mInput, batch_size=1)
print(p)  # [[1. 2. 3. 4.]] -- kernel starts at ones, so output equals input
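
To check that the weights really are learned, here is a quick sanity run (the divisor [1, 2, 3, 4] is just an illustrative target): after fitting, the layer's kernel should move close to that divisor.

# Illustrative check: learn to divide by [1, 2, 3, 4].
X = np.random.uniform(1.0, 5.0, size=(1000, 4)).astype('float32')
y = X / np.array([1.0, 2.0, 3.0, 4.0], dtype='float32')

net.fit(X, y, epochs=200, batch_size=32, verbose=0)
print(net.layers[0].get_weights()[0])  # should end up near [1, 2, 3, 4]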

Hope this helps. Happy Learning!

Innat