
The Need

Hello, I am experimenting with the use of CNNs on images that come from a cylindrical domain, so I am interested in applying the convolution layer in a circular (or cyclic) way. I mean a convolution layer that, instead of padding the image with zeros, would just wrap the image around (or the kernel around the image).

Thoughts, Searches and Ideas

Coming from a signal processing background, I expected this to be already covered: in fact, when calculations are performed in the frequency domain (i.e. by means of the DFT), this circularity comes for free, and extra effort (there called "zero padding") is actually required to avoid circular-convolution artifacts.

Well, OK: I learned that, because kernel sizes are normally quite small, the calculations can be done more conveniently in the spatial domain. Still, I can't figure out any good reason why circular convolution should be unfeasible there, so I hoped that some direct way to do it existed.

But I found no coverage of this feature in either the Keras or the TensorFlow docs. Moreover, I found little or no really relevant discussion about it elsewhere: Torch7 discussion

So I am left with the following options:

  • implement my own CyclicConv2D layer by subclassing Keras' layer.Layer class, as described here
  • submit my own pull request for an experimental new feature, as described here. In short, I would think of adding a "wrap" value for the "padding=" parameter, or adding a new "wrap=" parameter specifying along which axes the image should wrap around. In my case this would likely be needed along only one dimension, the circular one, not both.
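For illustration, the wrap-around behaviour I have in mind can already be emulated today by pre-padding with NumPy's `pad` in "wrap" mode, restricted to the single circular axis (the helper name below is mine for the sketch, not an existing Keras/TensorFlow API):

```python
import numpy as np

def wrap_pad_axis(image, pad, axis=1):
    """Pad `image` cyclically along one axis only; other axes untouched.

    `pad` is the number of wrapped rows/columns copied from the opposite
    side onto each end of the chosen axis (illustrative helper).
    """
    widths = [(0, 0)] * image.ndim
    widths[axis] = (pad, pad)
    return np.pad(image, widths, mode="wrap")

img = np.arange(12).reshape(3, 4)
wrapped = wrap_pad_axis(img, pad=1, axis=1)  # wrap columns only
```

A "valid" convolution applied to `wrapped` would then behave circularly along the padded axis and like ordinary "valid" along the other.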

The Questions

Is there any more straightforward option, or source of information that I should address first?

Otherwise, where can I find advice on how to implement the former?

For the latter, I must admit that I described it more to stimulate some collective thinking about how it should work from a user standpoint; I don't really feel able to contribute a good pull request (i.e. one including good code). Anyway, I will appreciate any advice on where to start.

lurix66
  • Hi @lurix66 did you find a solution to this? I think the problem with creating your own layer is finding an efficient way of doing this. The simplest efficient implementation would be to mirror the actual data pixels, up to the kernel size, from the other side of the domain (a halo or ghost cell). Then when you use `valid` padding it would return an image with the original size before padding. – Ed Smith Mar 12 '19 at 16:33
  • No @Ed, thanks, not yet. I am precisely thinking of explicitly pre-padding the input tensor on the left (top) side with columns (rows) copied from the other end, and then just call the usual `valid` convolution. I realize that this should be implemented on pooling layers too, and that it will be reasonable to make it work in the general case of multidimensional convolutions and poolings. – lurix66 Mar 13 '19 at 10:49
  • I've finally succeeded in doing it and would like to post my own answer to the question, but the question having been _closed as too broad_, I am prevented from doing so. What can I do? – lurix66 Apr 29 '19 at 16:34
  • Ask a new question. A much more targeted new question. Or maybe answer this (https://stackoverflow.com/questions/49189496/can-symmetrically-paddding-be-done-in-convolution-layers-in-keras/55210905#55210905) – Ed Smith Apr 29 '19 at 17:59
  • I think the question is great. I wonder about the same thing. Also, the question has received multiple upvotes. The one Ed Smith links to is not the same, and neither are the "related" questions that show up. I really think the question should be reopened. I think "too broad" is the wrong verdict. The asker has shown effort to find a solution. Perhaps more emphasis should be put on the main question, which is how to do convolution with one or more cyclical dimensions. – Elias Hasle Oct 29 '19 at 09:53
  • ... and *I have worked out my own solution* (ehm, actually two), but the question being closed, I can't find a button to post it! To me, _the question is too broad_ only compared to the time they dedicated to examining the content before deciding – lurix66 Oct 29 '19 at 10:11
  • @Ed, the question you refer to is about _padding_, which is not that relevant (besides, it already has answers). Mine is about a convolutional layer and the need to make the kernel work as a circular integral; something that doesn't seem to exist in either Keras or TensorFlow, while otoh it is a well-known tool (actually the base tool) in DSP. (Moreover, it is a trainable layer, which Padding is not) – lurix66 Oct 29 '19 at 10:20
  • @Elias Hasle, I've voted to reopen – Ed Smith Oct 30 '19 at 13:54
  • @lurix66 Unless I misunderstand the question, padding is indeed related, but it would have to be a type of padding that takes values from the opposite side (combined with an appropriate padding option for the subsequent conv operation). I have found a solution for that, but would prefer posting it as an answer. I think you may have a chance of getting the question reopened if you rephrase it so that the "Need" becomes the main question. – Elias Hasle Oct 30 '19 at 17:33

1 Answer


I'm implementing something like this so thought I'd add code. I think the simplest way to actually implement the wrapped padding is to use the NumPy `pad` function with the "wrap" option. For example, with

import numpy as np

input = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
kernel = [1, 1]
# We want symmetrical padding (same amount on each side),
# in np.pad format ((before_1, after_1), ... (before_N, after_N))
pad = [[i, i] for i in kernel]
padded_input = np.pad(input, pad, "wrap")

which gives,

array([[9, 7, 8, 9, 7],
       [3, 1, 2, 3, 1],
       [6, 4, 5, 6, 4],
       [9, 7, 8, 9, 7],
       [3, 1, 2, 3, 1]])
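As a sanity check of the approach, here is a small self-contained NumPy sketch (the helper functions are illustrative, not library API) showing that wrap-padding followed by an ordinary "valid" convolution reproduces a genuinely circular convolution:

```python
import numpy as np

def valid_conv2d(x, k):
    """Plain "valid" 2-D cross-correlation (loop version, for clarity)."""
    kh, kw = k.shape
    oh, ow = x.shape[0] - kh + 1, x.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def circular_conv2d(x, k):
    """Same operation, but the kernel indices wrap around the image."""
    h, w = x.shape
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            for a in range(k.shape[0]):
                for b in range(k.shape[1]):
                    out[i, j] += x[(i + a) % h, (j + b) % w] * k[a, b]
    return out

x = np.arange(9, dtype=float).reshape(3, 3)
k = np.array([[1.0, 2.0], [3.0, 4.0]])
# wrap-pad kh-1 extra rows and kw-1 extra cols, then ordinary valid conv
padded = np.pad(x, [(0, k.shape[0] - 1), (0, k.shape[1] - 1)], "wrap")
via_pad = valid_conv2d(padded, k)
direct = circular_conv2d(x, k)
```

Note that adding `kernel_size - 1` wrapped rows/columns keeps the "valid" output at the original image size, which is exactly the "same"-with-wrapping behaviour the question asks for.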

It looks like creating a custom layer similar to ZeroPadding2D called something like CyclicPadding2D may then be the best idea to minimise changes to Keras code, like so,

from keras.models import Sequential
from keras.layers import Conv2D

kernel = [7, 7]
model = Sequential()
model.add(CyclicPadding2D(kernel, input_shape=(224, 224, 3)))  # the custom layer
model.add(Conv2D(32, kernel_size=kernel, padding="valid"))
model.build()

You can also use this between both pooling and conv layers. The code in CyclicPadding2D would probably need to consider input format (channels, batch, etc) with something like,

if self.data_format == "channels_last":
    # (batch, rows, cols, channels)
    pad = [[0, 0]] + [[i, i] for i in self.kernel] + [[0, 0]]
elif self.data_format == "channels_first":
    # (batch, channels, rows, cols)
    pad = [[0, 0], [0, 0]] + [[i, i] for i in self.kernel]
inputs = np.pad(inputs, pad, "wrap")

This is similar to what the Keras NumPy backend does, with the "constant" option hardwired, while the TensorFlow backend supplies no option and so defaults to constant (although, interestingly, tf.pad does provide a "reflect" option).
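Pulling the format handling above into a standalone function makes it easier to test in isolation; the function name and signature below are illustrative, not part of Keras:

```python
import numpy as np

def cyclic_pad(inputs, kernel, data_format="channels_last"):
    """Wrap-pad the spatial axes of a batched image tensor.

    `kernel` lists the per-spatial-axis pad width (each side); the batch
    and channel axes are left unpadded. Illustrative helper, not Keras API.
    """
    spatial = [[k, k] for k in kernel]
    if data_format == "channels_last":     # (batch, rows, cols, channels)
        pad = [[0, 0]] + spatial + [[0, 0]]
    elif data_format == "channels_first":  # (batch, channels, rows, cols)
        pad = [[0, 0], [0, 0]] + spatial
    else:
        raise ValueError("unknown data_format: %s" % data_format)
    return np.pad(inputs, pad, "wrap")

x = np.arange(16, dtype=float).reshape(1, 4, 4, 1)
y = cyclic_pad(x, kernel=[1, 1])  # each spatial axis grows by 2
```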

Looking at the Keras source, perhaps something like this could be added as a feature, by simply putting the code above in the call function of _conv when the padding option is something like "periodic". That said, simply adding a new padding layer is probably the most flexible solution.

Ed Smith