
Perhaps I'm doing it wrong, but I don't think the WPF or GDI+ classes are intended to process large images on a server. I have an app that needs to transform many large TIFF files to different formats and sizes. Thumbnails and previews of these files are generated with the WPF classes and cached on disk, so they are not a big deal.

My problem comes with the other higher-res transformations, which are not being cached at the moment. I'm thinking of using ImageMagick to replace WPF for this part and see if there's a performance gain, but while I'm at it I'd like to know whether you know of an alternative to the WPF and GDI+ classes for processing large image files.
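
For reference, this is roughly what I have in mind for the ImageMagick route: shelling out to its command-line convert tool per file. This is only a sketch with placeholder paths; I'm assuming ImageMagick is installed and on the PATH, and I haven't benchmarked it yet.

// Sketch only: run ImageMagick's "convert" to downscale a TIFF to an 800px wide JPEG.
// Paths are placeholders; requires ImageMagick to be installed and on the PATH.
var startInfo = new ProcessStartInfo
{
    FileName = "convert",
    Arguments = "\"C:\\input\\source.tif\" -resize 800x \"C:\\output\\source.jpg\"",
    UseShellExecute = false,
    CreateNoWindow = true
};

using (var convert = Process.Start(startInfo))
{
    convert.WaitForExit();
}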

JoseMarmolejos

1 Answer


Well, which one are you using (which classes)? The Windows Imaging Component (WIC), as exposed through PresentationCore and WindowsBase, is quite capable if used properly.

I wouldn't go with GDI+, i.e. System.Drawing, because that's legacy and slow.

If you just need raw processing power, have you tried scaling out and distributing the workload? It can be done quite easily (to some extent) even without investing in new hardware.
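
To illustrate, here's a minimal sketch of distributing the work across the cores of a single machine with Parallel.ForEach (the input directory and output naming are placeholders; it needs System.Threading.Tasks plus the imaging types from PresentationCore/WindowsBase):

// Sketch only: process each file on its own thread-pool thread.
var files = Directory.GetFiles(@"C:\input", "*.tif");

Parallel.ForEach(files, file =>
{
    var bitmapImage = new BitmapImage();
    bitmapImage.BeginInit();
    bitmapImage.UriSource = new Uri(file);
    bitmapImage.DecodePixelWidth = 800;           // decode directly at the target width
    bitmapImage.CacheOption = BitmapCacheOption.OnLoad;
    bitmapImage.EndInit();
    bitmapImage.Freeze();                         // frozen bitmaps can be used from any thread

    var encoder = new JpegBitmapEncoder();
    encoder.Frames.Add(BitmapFrame.Create(bitmapImage));

    using (var output = File.Create(Path.ChangeExtension(file, ".jpg")))
    {
        encoder.Save(output);
    }
});

Each file is handled independently, so there is no shared state to synchronize; the same pattern scales out to several machines simply by splitting the file list between them.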

This is my main method, with timers and all. It appears that the WIC stuff needs roughly 3 seconds to boot: the first run takes about 3 seconds longer than the rest, and the timings are then fairly consistent. I'm buffering in memory to avoid the time it might take to read the file from or write it to disk.

for (int i = 0; i < 20; i++)
{
    var total = new Stopwatch();

    var read = new Stopwatch();

    var process = new Stopwatch();

    total.Start();

    using (var inputStream = new FileStream(@"C:\Projects\ConsoleApplication6\ConsoleApplication6\monster.jpg", FileMode.Open))
    {
        read.Start();

        // Buffer the entire source file in memory so decode time is measured separately
        // from disk reads (a single Read is assumed to fill the buffer here).
        var bytes = new byte[inputStream.Length];

        inputStream.Read(bytes, 0, bytes.Length);

        read.Stop();

        process.Start();

        // DecodePixelWidth tells the decoder to produce an 800 px wide bitmap directly,
        // instead of decoding the full-resolution image and scaling it afterwards.
        var bitmapImage = new BitmapImage();
        bitmapImage.BeginInit();
        bitmapImage.StreamSource = new MemoryStream(bytes);
        bitmapImage.DecodePixelWidth = 800;
        bitmapImage.EndInit();

        // Re-encode the downscaled bitmap as JPEG into a memory buffer; the disk write
        // below is deliberately left outside the "process" timer.
        using (var outputStream = new MemoryStream())
        {
            var jpegEncoder = new JpegBitmapEncoder();

            var frame = BitmapFrame.Create(bitmapImage);

            jpegEncoder.Frames.Add(frame);

            jpegEncoder.Save(outputStream);

            process.Stop();

            File.WriteAllBytes(@"C:\Projects\ConsoleApplication6\ConsoleApplication6\monster" + i + ".jpg", outputStream.ToArray());
        }
    }

    total.Stop();

    Console.WriteLine("{0:0.000} ms ({1:0.000} ms / {2:0.000} ms)", total.Elapsed.TotalMilliseconds, read.Elapsed.TotalMilliseconds, process.Elapsed.TotalMilliseconds);
}
John Leidegren
  • I'm using BitmapImage and TransformedBitmap to load/transform the files. And yes, it does a good job with the transformation ... but my issue is with the time it takes to do it with very large files. – JoseMarmolejos Jun 03 '11 at 16:47
  • Define very large files. Oh, include exactly what transformations you do as well. – John Leidegren Jun 03 '11 at 16:57
  • A TIFF file at 6144x4096 @ 350 dpi weighing 100 MB would be a definition of a very large file. The transformations are to various proportions of the source's size: mid res, low res, web ready (72 dpi, with an 800px max width) – JoseMarmolejos Jun 03 '11 at 17:52
  • Hehe, yes it would. How long does it take to process an image currently? – John Leidegren Jun 03 '11 at 18:24
  • On average it takes close to 6 secs to generate a smaller image. Edit to add: on the server it can clock up to 12 secs. – JoseMarmolejos Jun 03 '11 at 18:32
  • I'm gonna try and reproduce the issue on my end and see what I can do about it. – John Leidegren Jun 03 '11 at 18:43
  • Using the `BitmapImage` class with the `DecodePixelWidth` setting, I get this to run in 600-800 ms on my machine. I'm resizing a noisy 6144x4096 @ 350 DPI TIFF file, 75 MB in size. It appears to take about 1 ms per width unit, and about 40 ms to just load the TIFF file. The DecodePixelWidth property doesn't work as well as it does with JPEG and PNG source data, but it appears to be a lot faster than what you are seeing. I've posted the code I'm running in my answer. – John Leidegren Jun 04 '11 at 17:04
  • I'm getting about 14-28% CPU utilization on my Intel i7 @ 2.8 GHz with 8 cores (4 of those are hyper-threading cores). In my case I should be able to process 4 such files every second (possibly more). – John Leidegren Jun 04 '11 at 17:10
  • Thanks a lot man, although your CPU specs are way better than the production server's, it does show how much power is needed to run these processes. Besides a hardware upgrade, I'm thinking about generating and caching a mid-res .png version of the asset's source and using that for all transformations. Again man, thanks for your input. – JoseMarmolejos Jun 06 '11 at 00:26
  • Microsoft's JPEG XR format, previously known as HD Photo, is quite a capable format, and it can be made lossless too. I say this because it was 20% faster than decoding TIFF; PNG, however, was 200% slower. I mention this because the HD Photo format can be decoded efficiently at any resolution and size, and if you have the option to change the format you might want to look into it. You can read more about the capabilities of JPEG XR here: http://en.wikipedia.org/wiki/JPEG_XR – John Leidegren Jun 07 '11 at 04:07