
I want to add some error percentage (relative error) to the output of a max-pooling layer in a CNN. I am using the max-pooling layer from Keras. Below is the code:

from tensorflow.keras.layers import Input, Conv2D, BatchNormalization, MaxPooling2D

i = Input(shape=x_train[0].shape)
x = Conv2D(32, (3, 3), activation='relu', padding='same')(i)
x = BatchNormalization()(x)
x = Conv2D(32, (3, 3), activation='relu', padding='same')(x)
x = BatchNormalization()(x)
x = MaxPooling2D((2, 2))(x)

How can I add such an error to the output of this layer? I want to add some fraction of the original output, e.g. if x is my original output, I want my output to be x + some fraction of x.

Thanks in advance.

shaq_m

1 Answer


If you just want to add a fraction of the input, you can just use the Add layer:

x = K.layers.Add()([x, 1/4 * x])

for example:

import numpy as np
import tensorflow.keras as K

input = K.layers.Input(shape=(5,))
x = K.layers.Add()([input, 1/4 * input])
model = K.Model(inputs=[input], outputs=[x])
model(np.ones((1,5)))
#<tf.Tensor: shape=(1, 5), dtype=float32, numpy=array([[1.25, 1.25, 1.25, 1.25, 1.25]], dtype=float32)>
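Applied to the model in the question, this could look like the sketch below (the input shape is a placeholder since x_train is not shown, the fraction 1/4 is just the example value from above, and TensorFlow 2.x is assumed so that 1/4 * x on a Keras tensor is automatically wrapped in a layer):

import tensorflow as tf
from tensorflow.keras.layers import Input, Conv2D, BatchNormalization, MaxPooling2D, Add

i = Input(shape=(32, 32, 3))  # placeholder shape; use x_train[0].shape in practice
x = Conv2D(32, (3, 3), activation='relu', padding='same')(i)
x = BatchNormalization()(x)
x = Conv2D(32, (3, 3), activation='relu', padding='same')(x)
x = BatchNormalization()(x)
x = MaxPooling2D((2, 2))(x)
x = Add()([x, 1/4 * x])  # new output = pooled output + 1/4 of it
model = tf.keras.Model(inputs=i, outputs=x)
model.summary()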

However, this is not noise, and the affine transformation after this layer will absorb what you have done. In fact:

A(x + 1/4 x) + b
= A(5/4 x) + b
= 5/4 A(x) + b

so you are not adding any "additional expressivity" to your network.
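A quick numerical check of this equivalence (a NumPy sketch with made-up shapes, where A and b stand for the weights and bias of the next layer):

import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 5))   # a batch of activations
A = rng.normal(size=(5, 3))   # weights of the next affine layer
b = rng.normal(size=(3,))     # bias of the next affine layer

lhs = (x + 1/4 * x) @ A + b   # A(x + 1/4 x) + b
rhs = x @ (5/4 * A) + b       # (5/4 A)(x) + b
print(np.allclose(lhs, rhs))  # True: the scaling is absorbed into the weights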

If you clarify what kind of noise (with respect to a fraction of the input) you want, I'll fix my answer.

Alberto Sinigaglia
  • Basically, I want to add, let's say, 5% of the original value to the output of the max-pooling layer. If the output of my max-pooling layer is, let's say, 2, I want it to be 2 + 0.1 (which is 5% of 2) = 2.1, so the new output is x + a fraction of x; that's what I want – shaq_m Aug 02 '22 at 09:31
  • @shaq_m As I said, this is not noise; you are adding a well-defined quantity to your output, and the code that I've posted does exactly that, just with 1/4, but you can change it to 1/20 if you want 5% (i.e. 0.05); see the quick check after these comments – Alberto Sinigaglia Aug 02 '22 at 10:16
  • Yes, that's what I wanted. So using the code that you have posted, I am adding a well-defined value to the output of the max-pooling layer. – shaq_m Aug 02 '22 at 10:26
  • yes, you are doing `x + 1/smt * x`, try it and let me know – Alberto Sinigaglia Aug 02 '22 at 10:27
  • It will work, but what about the second part that you mentioned regarding the affine transformation? Can you please elaborate on that? – shaq_m Aug 02 '22 at 10:29
  • @shaq_m I don't know what your preparation in math is, but by affine transformation I mean the "Wx+b" part of the layers... and I just stated that adding 5% to the output does not increase how "powerful" your network is, since two affine transformations are just as powerful as one (I've posted the calculation, and you can see that the result is just a single affine transformation)... it's like having two consecutive linear layers: they are as powerful as a single linear layer – Alberto Sinigaglia Aug 02 '22 at 10:32
  • @shaq_m you're welcome, if you have additional questions, feel free to ask, and if this solves your initial question, feel free to mark it as accepted – Alberto Sinigaglia Aug 02 '22 at 12:07
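To make the arithmetic from the comments concrete, here is a short check in plain Python (using the 5% fraction discussed above):

x = 2.0
fraction = 0.05            # 5% of the original value, i.e. 1/20
print(x + fraction * x)    # 2.1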