19

I am writing an application that needs to resize massive amounts of images... and these are my requirements:

  • C/C++
  • Support jpeg/png at least
  • Fast
  • Cross-Platform

So far my options are:

  • OpenCV
  • CImg
  • ImageMagick
  • GraphicsMagick (it's said to be fast)
  • DevIL
  • GIL from Boost
  • CxImage
  • Imlib2 (it's said to be fast)
  • Any others?

All of these would get the job done, but I am looking for the fastest here, and I was not able to find any benchmarks on their performance.
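For concreteness, the call I'll be making over and over is roughly the sketch below (written against OpenCV, one of the candidates above; the file names and the 640-pixel target width are just placeholders):

```cpp
#include <opencv2/opencv.hpp>

int main() {
    // Load (imread handles both JPEG and PNG), downscale, save.
    cv::Mat src = cv::imread("input.jpg");
    if (src.empty()) return 1;

    int width = 640;                               // hypothetical target width
    int height = src.rows * width / src.cols;      // preserve aspect ratio
    cv::Mat dst;
    cv::resize(src, dst, cv::Size(width, height), 0, 0, cv::INTER_AREA);

    cv::imwrite("output.jpg", dst);
    return 0;
}
```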

The Unknown
  • 10
    Resizing massive (as in the entire flickr uploads of last month for example) amounts of images sounds like a highly specialized application, so I am wondering why cross-platform is so important? If you can rely on specific hardware you would probably be able to go so fast on the scaling part that you would have to seriously start thinking about how to read/write all that data fast enough. – Chris May 17 '09 at 06:27
  • 1
    "I was not able to find any benchmarks": You can always benchmark them yourself and then share your findings here :-) – lothar May 18 '09 at 17:06
  • @chris I realize this is an old question, but with a general-purpose CPU and images on a local disk, resizing is almost certainly going to be a compute-bound problem. Now if you mean specialized (e.g. CUDA) rather than specific hardware, you _might_ be able to make it IO-bound. – Yaur Mar 19 '13 at 20:47

6 Answers

13

Take a look at Intel IPP (Integrated Performance Primitives) (the Wikipedia article is better than the Intel one...). It also works on AMD, has functions to resize images (bilinear, nearest neighbor, etc.), and runs on Linux and Windows.

It is not free (but it won't break the bank), but it's the fastest that you can find.
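To give a feel for what using it looks like: IPP resizes already-decoded pixels, and with the newer releases you first query and allocate a "spec" structure plus a work buffer, then call the resize function. The sketch below is from memory of the post-7.x ippiResizeLinear interface, so treat the exact names and signatures as assumptions and check them against the current IPP reference:

```cpp
#include <ipp.h>

// Bilinear resize of an already-decoded 8-bit, 3-channel image.
// pSrc/pDst, the steps and the sizes are assumed to come from your decoder.
IppStatus resizeRgb(const Ipp8u* pSrc, int srcStep, IppiSize srcSize,
                    Ipp8u* pDst, int dstStep, IppiSize dstSize)
{
    int specSize = 0, initSize = 0, bufSize = 0;
    ippiResizeGetSize_8u(srcSize, dstSize, ippLinear, 0, &specSize, &initSize);

    IppiResizeSpec_32f* pSpec = (IppiResizeSpec_32f*)ippsMalloc_8u(specSize);
    ippiResizeLinearInit_8u(srcSize, dstSize, pSpec);

    ippiResizeGetBufferSize_8u(pSpec, dstSize, 3 /* channels */, &bufSize);
    Ipp8u* pBuffer = ippsMalloc_8u(bufSize);

    IppiPoint dstOffset = { 0, 0 };
    IppStatus status = ippiResizeLinear_8u_C3R(pSrc, srcStep, pDst, dstStep,
                                               dstOffset, dstSize,
                                               ippBorderRepl, NULL,
                                               pSpec, pBuffer);

    ippsFree(pBuffer);
    ippsFree(pSpec);
    return status;   // status checks on the other calls omitted for brevity
}
```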

Shay Erlichmen
  • 1
    If the image is already decoded, IPP is a good choice. However, it could be faster to decode and scale in a single step. I don't know how to use IPP to resize without decoding the full 1:1 bitmap first. – kcwu May 17 '09 at 07:08
  • 2
    think for just a moment about that statement. What part of doing it in a single (complicated) step is going to make it any faster? The image decoding routines still need to decode every pixel in order for the filter routines to filter them. Even if there was a library that did it in one pass, our benchmarking of IPP routines would indicate a two-pass approach with IPP would probably be faster anyway. – Chris Becke May 17 '09 at 07:41
  • Maybe because, doing it in the first pass, it can use the data already in the CPU cache. – CiNN May 17 '09 at 23:18
  • 1
    @Chris: kcwu might be right. I remember the man page for convert, which describes the -size parameter, used to limit the decoded resolution of an image that is going to be resized down anyway. A paste from the man page: In this example, "-size 120x120" gives a hint to the JPEG decoder that the image is going to be downscaled to 120x120, allowing it to run faster by avoiding returning a full-resolution image. –  Jul 13 '09 at 09:27
  • If the images are large, it could be faster to do it in one step with just a few scanlines in memory. RAM is very slow and caches very small. – Lothar Jul 16 '14 at 16:47
  • IPP is a very bad choice, because it doesn't do true resampling for bicubic, bilinear and Lanczos. Instead, it uses a fixed number of pixels from the source for each pixel in the destination. The result is very bad when downscaling by 2x or more. Here is a detailed explanation in Russian: http://habrahabr.ru/post/243475/ – homm Nov 21 '14 at 10:51
  • Actually, for jpeg images, scaling during decoding makes a lot of sense. If you're willing to sacrifice a bit of accuracy, you can simply discard frequency domain information while decoding. See imageflow and vipsthumbnail for examples. – Lilith River Apr 17 '16 at 04:12
  • In 2019, image resize is much faster than image decompress. There's little gain from a fast resize implementation since you're just not spending much time there. Instead, you'll get better throughput using a program that can overlap decompress, resize and recompress (like vipsthumbnail). – jcupitt Dec 18 '19 at 09:21
10

Take a look at VIPS. It's the fastest one I've found so far, and is free for commercial use.

https://github.com/libvips/libvips/wiki/Speed-and-memory-use

On that benchmark, it's 2x faster than Pillow-SIMD, 5x faster than imagemagick, 6x faster than opencv, 8x faster than freeimage, etc. It has dramatically lower memory use too: more than 10x less than imagemagick, for example.
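A minimal sketch of what that looks like from C/C++ (assuming a reasonably recent libvips 8.x, where vips_thumbnail() does the load, shrink-on-load and resize in one streamed pass; the file names and the 640-pixel width are placeholders):

```cpp
#include <vips/vips.h>

int main(int argc, char **argv)
{
    if (VIPS_INIT(argv[0]))
        vips_error_exit(NULL);

    // Decode, shrink-on-load and resize to 640 pixels across in one pass.
    VipsImage *thumb;
    if (vips_thumbnail("input.jpg", &thumb, 640, NULL))
        vips_error_exit(NULL);

    if (vips_image_write_to_file(thumb, "output.jpg", NULL))
        vips_error_exit(NULL);

    g_object_unref(thumb);
    vips_shutdown();
    return 0;
}
```

(The vipsthumbnail command-line tool mentioned elsewhere in this thread does the same thing from the shell.)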

jcupitt
  • I have not used VIPS myself. But it seems to be fast (and LGPL): http://www.vips.ecs.soton.ac.uk/index.php?title=Speed_and_Memory_Use – guettli Sep 22 '11 at 07:29
5

@Chris Becke's comment:

"think for just a moment about that statement. What part of doing it in a single (complicated) step is going to make it any faster? The image decoding routines still need to decode every pixel in order for the filter routines to filter them."

That isn't always the case. For example, when decoding a JPEG you can ask the JPEG library to give you a 1/2, 1/4, 1/8 size image (or something like that; it's a while since I've looked in detail) which it can do without having to decode the extra detail at all, due to the way JPEG works. It can be much quicker than a full decode + scale.

(Obviously you may need to scale a bit afterwards if the smaller image isn't the exact size you want.)
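For what it's worth, with libjpeg this is just the scale_num/scale_denom fields on the decompress struct; a rough sketch (error handling and any further use of the pixels omitted, and the 1/4 ratio and file name are placeholders):

```cpp
#include <stdio.h>
extern "C" {            // classic IJG jpeglib.h has no C++ guards of its own
#include <jpeglib.h>
}

// Decode "input.jpg" at roughly 1/4 of its full resolution.
int main()
{
    FILE *in = fopen("input.jpg", "rb");
    if (!in) return 1;

    jpeg_decompress_struct cinfo;
    jpeg_error_mgr jerr;
    cinfo.err = jpeg_std_error(&jerr);
    jpeg_create_decompress(&cinfo);
    jpeg_stdio_src(&cinfo, in);
    jpeg_read_header(&cinfo, TRUE);

    cinfo.scale_num = 1;     // ask for 1/4 size; libjpeg picks the nearest
    cinfo.scale_denom = 4;   // ratio it actually supports (1/2, 1/4, 1/8, ...)
    jpeg_start_decompress(&cinfo);

    // output_width/output_height now describe the reduced image.
    JSAMPARRAY row = (*cinfo.mem->alloc_sarray)(
        (j_common_ptr)&cinfo, JPOOL_IMAGE,
        cinfo.output_width * cinfo.output_components, 1);
    while (cinfo.output_scanline < cinfo.output_height)
        jpeg_read_scanlines(&cinfo, row, 1);   // row holds one reduced scanline

    jpeg_finish_decompress(&cinfo);
    jpeg_destroy_decompress(&cinfo);
    fclose(in);
    return 0;
}
```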

(Sorry, I can only post this reply as an answer rather than a comment, due to no reputation. First time I've tried to post anything here. If someone wants to repost this or something similar as a comment and delete my answer, feel free!)

Leo Davidson
  • Makes sense; a scaling algorithm will give better results with more data (only at a certain small level does it not matter), but then again you would need to work through all the different compression methods (BMP, GIF, PNG, JPEG) and write a specific version for each. – Shay Erlichmen May 18 '09 at 06:20
  • I think JPEG is the only format that can be optimized this way. It's structured so the different levels of detail can be separated before full decoding. – Mark Ransom Aug 26 '13 at 18:48
  • Working in chunks also means you can overlap decode and recode. For example, libvips (see @bithive's answer) will stream images, running decode, process and recode in parallel. For many operations, decode/recode is a single-threaded rate-limiting step, so this can give a large speedup. – jcupitt May 14 '15 at 13:49
5

If IPP does what you need (e.g. the Resize function in section 12), then I doubt you'll find significantly faster x86 code anywhere else. Bear in mind that it may fall back onto slower "reference implementations" when run on AMD CPUs, though.

If the CPU isn't meeting your performance requirements, you might consider pushing the resizing onto a GPU using OpenGL (the simplest implementation, using texture mapping, would benefit from hardware interpolators; for more complex filtering, use GLSL shader code). The ability of the GPU to do this sort of thing about a hundred times faster than a CPU (give or take a zero) has to be weighed against the relatively slow data transfer to and from the card, though (typically a gigabyte or two per second at the most).

timday
0

If you are looking for open source, how about FreeImage? For commercial stuff, I use Snowbound. Both are quite fast and capable of many different image formats and resizing algorithms.
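With FreeImage the whole job is a load/rescale/save round trip; a minimal sketch (the file names, the 640x480 target and the bicubic filter are placeholders):

```cpp
#include <FreeImage.h>

int main()
{
    FreeImage_Initialise();

    // Detect the format from the file, load, rescale, save.
    FREE_IMAGE_FORMAT fif = FreeImage_GetFileType("input.jpg", 0);
    FIBITMAP *src = FreeImage_Load(fif, "input.jpg", 0);
    if (!src) { FreeImage_DeInitialise(); return 1; }

    FIBITMAP *dst = FreeImage_Rescale(src, 640, 480, FILTER_BICUBIC);
    FreeImage_Save(FIF_JPEG, dst, "output.jpg", 0);

    FreeImage_Unload(dst);
    FreeImage_Unload(src);
    FreeImage_DeInitialise();
    return 0;
}
```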

R Ubben
-1

If you're looking for free stuff and want to do things quickly, try developing a Gimp C-compiled plugin: this is very easy, and I think Gimp does a good job at resizing.

This may not be the fastest at resizing, but it is the cheapest (free) and the fastest to develop.

Take a look there.

Olivier Pons
  • Or alternatively call the resizer via a Python plugin. – graham.reeds Jul 13 '09 at 08:13
  • Gimp is a GUI. I think the question is about a library which can be used from C/C++. I know that Gimp can be controlled from scripts, but this won't be fast. – guettli Sep 22 '11 at 07:32
  • You may just think it's slow because of the time it takes to launch, but resizing using Gimp is very fast, and if you keep it loaded into memory, it may not be the fastest, but it's free **and** very fast. Test it yourself, you may be surprised ;) – Olivier Pons Sep 23 '11 at 17:19