
The shape and position of every polygon are known beforehand. The polygons do not overlap, will have different colors and shapes, and there may be many of them. They are defined in floating-point coordinates and will be painted on top of a JPEG photo as annotation.

How could I create the resulting image file as fast as possible after I get to know which color I should give each polygon?

If it would save time I would like to perform as much as possible of the computations beforehand. All information regarding geometry and positions of the polygons are known in advance. The JPEG photo is also known in advance. The only information not known beforehand is the color of each polygon.

The JPEG photo has a size of 250x250 pixels, so that would also be the image size of the resulting rasterised image.

The computations will be done on a Linux computer with a standard graphics card, so OpenGL might be a viable option. I know there are also rasterisation libraries like Cairo that could be used to paint polygons. What I wonder is if I could take advantage of the fact that I know so much of the input in advance and use that to speed up the computation. The only thing missing is the color of each polygon.

Preferably I would like a solution that only precomputes things in the form of data files. In other words, as soon as the polygon colors are known, the algorithm would load the other information from data files (the JPEG file, a polygon geometry file, and/or possibly precomputed data files). Of course it would be faster to start the computation with a "warm" state already in the GPU/CPU/RAM, but I'd like to avoid that. The choice of programming language is not so important, but could for instance be C++.
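To illustrate what such a precomputed data file could contain: one option is to scanline-rasterise each polygon offline into horizontal pixel spans, so the runtime step never touches the float geometry. This is only a sketch; the `Span` layout, the even-odd fill rule, and pixel-centre sampling are my assumptions, not a fixed format:

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <utility>
#include <vector>

// A horizontal run of pixels covered by one polygon: pixels [x0, x1) on row y.
struct Span { int y, x0, x1; };

// Offline step: rasterise a polygon given in float coordinates into pixel
// spans with a simple even-odd scanline fill, sampling at pixel centres.
// The resulting span list is what would be serialised to the data file.
std::vector<Span> polygonToSpans(const std::vector<std::pair<float, float>>& pts,
                                 int width, int height) {
    std::vector<Span> spans;
    for (int y = 0; y < height; ++y) {
        float yc = y + 0.5f;  // sample at the pixel centre of this row
        std::vector<float> xs;
        for (size_t i = 0; i < pts.size(); ++i) {
            auto [ax, ay] = pts[i];
            auto [bx, by] = pts[(i + 1) % pts.size()];
            // Record an x-crossing where the edge straddles this scanline.
            if ((ay <= yc && by > yc) || (by <= yc && ay > yc))
                xs.push_back(ax + (yc - ay) * (bx - ax) / (by - ay));
        }
        std::sort(xs.begin(), xs.end());
        // Even-odd rule: pair up crossings into interior runs.
        for (size_t i = 0; i + 1 < xs.size(); i += 2) {
            int x0 = std::max(0, (int)std::ceil(xs[i] - 0.5f));
            int x1 = std::min(width, (int)std::ceil(xs[i + 1] - 0.5f));
            if (x0 < x1) spans.push_back({y, x0, x1});
        }
    }
    return spans;
}
```

For an axis-aligned square from (10,10) to (20,20) on a 250x250 grid this yields one span per row for rows 10 through 19, each covering x 10 to 20.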

To give some more background: the JavaScript library OpenSeadragon, running in a web browser, requests image tiles from a web server. The idea is that measurement points (i.e. the polygons) could be plotted on the fly onto pregenerated Deep Zoom Images (DZI format) by the web server. The algorithm would thus only need to run once per image tile. The aim is low latency.

Erik Sjölund
  • You seem to be aiming for micro-optimizations, but the scenario is totally unclear. Will there be many iterations of this in a batch, or just a single one? If you deal with batches, do you want to minimize latency or maximize throughput? I think in your scenario, loading and decoding the JPEG (and possibly encoding it again, if the output format is to be JPEG too) will be the limiting factor, and you are basically barking up the wrong tree. – derhass Aug 23 '15 at 12:46
  • One iteration, and the aim is low latency. I updated the question and provided some more background information. I am new to OpenGL; maybe these are just basic, trivial drawing operations? – Erik Sjölund Aug 23 '15 at 13:14
  • I think OpenGL is actually not the ideal solution in this case. The data transfers and the implicit synchronization they imply will outweigh the performance improvement during rasterization. With the GL, you basically have to move the image twice. In the same time, the CPU could also update the pixels, especially if you can precalculate something like an active-edge-table data structure for your polygons. In the end this boils down to a set of pixel ranges that have to be filled with a single color, which is quite easy to implement, and even trivial to parallelize on multi-cores. – derhass Aug 23 '15 at 13:18
  • Good to know! Then I think I will try something other than OpenGL, probably a rasterisation library like Cairo or Skia. If the performance is too bad, I'm considering rasterising the polygons in advance and using SSE (SIMD operations) to alter the photo image. Hmm, I should start testing and go for the easy way first... – Erik Sjölund Aug 23 '15 at 14:15

0 Answers