
I am attempting to find the most performant way to find the unique values in a NumPy array. NumPy's unique function is quite slow because it sorts the values before finding the unique ones. Pandas hashes the values using the klib C library, which is much faster. I am looking for a Cython solution.

The simplest solution seems to just iterate through the array and use a Python set to add each element like this:

cimport cython
cimport numpy as np
from numpy cimport ndarray
from cpython cimport set

@cython.wraparound(False)
@cython.boundscheck(False)
def unique_cython_int(ndarray[np.int64_t] a):
    cdef int i
    cdef int n = len(a)
    cdef set s = set()
    for i in range(n):
        s.add(a[i])
    return s

I also tried an unordered_set from C++:

cimport cython
cimport numpy as np
from numpy cimport ndarray
from libcpp.unordered_set cimport unordered_set

@cython.wraparound(False)
@cython.boundscheck(False)
def unique_cpp_int(ndarray[np.int64_t] a):
    cdef int i
    cdef int n = len(a)
    cdef unordered_set[np.int64_t] s  # int64_t, not int, so values are not truncated
    for i in range(n):
        s.insert(a[i])
    return s

Performance

# create array of 1,000,000
a = np.random.randint(0, 50, 1000000)

# Pure Python
%timeit set(a)
86.4 ms ± 2.58 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)

# Convert to list first
a_list = a.tolist()
%timeit set(a_list)
10.2 ms ± 74.8 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)

# NumPy
%timeit np.unique(a)
32 ms ± 1.17 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)

# Pandas
%timeit pd.unique(a)
5.3 ms ± 257 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)

# Cython
%timeit unique_cython_int(a)
13.4 ms ± 1.02 ms per loop (mean ± std. dev. of 7 runs, 100 loops each)

# Cython - c++ unordered_set
%timeit unique_cpp_int(a)
17.8 ms ± 158 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)

Discussion

So pandas is about 2.5x faster than a cythonized set, and its lead increases when there are more distinct elements. Surprisingly, a pure Python set (on a list) beats a cythonized set.

My question: is there a faster way to do this in Cython than calling the add method repeatedly? And could the C++ unordered_set be improved?

Using Unicode strings

The story changes with unicode strings. I believe I have to convert the numpy array to an object data type so that Cython can type it properly.

@cython.wraparound(False)
@cython.boundscheck(False)
def unique_cython_str(ndarray[object] a):
    cdef int i
    cdef int n = len(a)
    cdef set s = set()
    for i in range(n):
        s.add(a[i])
    return s

And again I tried an unordered_set from C++ (this version additionally needs std::string):

from libcpp.string cimport string

@cython.wraparound(False)
@cython.boundscheck(False)
def unique_cpp_str(ndarray[object] a):
    cdef int i
    cdef int n = len(a)
    cdef unordered_set[string] s
    for i in range(n):
        s.insert(a[i])
    return s

Performance

Create an array of 1 million strings with 1,000 distinct values

s_1000 = []
for i in range(1000):
    s = np.random.choice(list('abcdef'), np.random.randint(5, 50))
    s_1000.append(''.join(s))

s_all = np.random.choice(s_1000, 1000000)

# s_all has numpy unicode as its data type. Must convert to object
s_unicode_obj = s_all.astype('O')

# c++ does not easily handle unicode. Convert to bytes and then to object
s_bytes_obj = s_all.astype('S').astype('O')

# Pure Python
%timeit set(s_all)
451 ms ± 5.94 ms per loop (mean ± std. dev. of 7 runs, 1 loop each) 

%timeit set(s_unicode_obj)
71.9 ms ± 5.91 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)

# using set on a list
s_list = s_all.tolist()
%timeit set(s_list)
63.1 ms ± 7.38 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)

# NumPy
%timeit np.unique(s_unicode_obj)
1.69 s ± 97.5 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)

%timeit np.unique(s_all)
633 ms ± 3.99 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)

# Pandas
%timeit pd.unique(s_unicode_obj)
97.6 ms ± 6.61 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)

# Cython
%timeit unique_cython_str(s_unicode_obj)
60 ms ± 5.81 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)

# Cython - c++ unordered_set
%timeit unique_cpp_str(s_bytes_obj)
247 ms ± 8.45 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)

Discussion

So, it appears that Python's set outperforms pandas for unicode strings but not on integers. And again, iterating through the array in Cython doesn't really help us at all.

Cheating with integers

It's possible to circumvent sets if you know the range of your integers isn't too crazy. You simply create a second array of all False values, flip a value's position to True the first time you encounter it, and append that value to a list. This is extremely fast since no hashing is done.

The following works for positive integer arrays. If you had negative integers, you would have to add a constant to shift the numbers up to 0.

@cython.wraparound(False)
@cython.boundscheck(False)
def unique_bounded(ndarray[np.int64_t] a):
    cdef int i, n = len(a)
    # the look-up table must cover the largest value, not just the array length
    cdef ndarray[np.uint8_t, cast=True] seen = np.zeros(a.max() + 1, dtype=bool)
    cdef list result = []
    for i in range(n):
        if not seen[a[i]]:
            seen[a[i]] = True
            result.append(a[i])
    return result

%timeit unique_bounded(a)
1.18 ms ± 21.3 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)

The downside is of course memory usage, since your largest integer could force an extremely large array. But this method could work for floats too if you knew precisely how many significant digits each number had.
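
For illustration, here is a minimal plain-Python sketch of that float idea (unique_bounded_floats and its digits parameter are hypothetical names of mine, not code from above):

import numpy as np

def unique_bounded_floats(a, digits):
    # scale so values with `digits` decimal places become exact integers
    scaled = np.round(a * 10**digits).astype(np.int64)
    scaled -= scaled.min()                    # shift the smallest value to index 0
    seen = np.zeros(scaled.max() + 1, dtype=bool)
    result = []
    for value, idx in zip(a, scaled):
        if not seen[idx]:
            seen[idx] = True
            result.append(value)
    return result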

Summary

Integers 50 unique of 1,000,000 total

  • Pandas - 5 ms
  • Python set of list - 10 ms
  • Cython set - 13 ms
  • 'Cheating' with integers - 1.2 ms

Strings 1,000 unique of 1,000,000 total

  • Cython set - 60 ms
  • Python set of list - 63 ms
  • Pandas - 98 ms

Appreciate all the help making these faster.

Ted Petrou
  • @Dark Yes, Python sets are very fast - faster than pandas for strings but quite a bit slower for integers. I want to know if there are more performant ways using Cython perhaps with the help of C/C++. – Ted Petrou Jan 06 '18 at 17:26
  • I wonder why this question got a downvote. I learnt so much just by reading the question. Well I too am very curious. – Bharath M Shetty Jan 06 '18 at 17:27
  • Python `set` uses the same hashing as `dict`, a central processing component of Python. And list iteration is faster than array iteration. So I'm not surprised that Cython does not improve on the basic Python process. It doesn't bring anything new to the calculation. – hpaulj Jan 06 '18 at 17:59
  • @hpaulj Yea, I was checking to see if calling the `add` set method each iteration was optimal. I also wanted to see if there were other hash table implementations that yielded faster results. And also if the 'cheating with integers' strategy was viable – Ted Petrou Jan 06 '18 at 18:51
  • I mean the cheating with integers strategy is just equivalent to a set under perfect conditions. If anything, it represents the maximum performance for integers. – Gabriel A Jan 06 '18 at 18:56
  • I think this is an interesting discussion, but you'll probably exhaust memory before any of these methods are too slow. – Gabriel A Jan 06 '18 at 19:05
  • @GabrielA The set still uses its hash function even under perfect conditions so it'll still underperform the 'cheating with integers', correct? There's got to be a more formal name for this too... Memory shouldn't be much of an issue since the temporary array is freed after. And even if the range is 1 million, that's around 1 MB which isn't too costly. – Ted Petrou Jan 06 '18 at 19:13
  • Also, this is my most downvoted question of all time. If you have an issue, leave a comment. – Ted Petrou Jan 06 '18 at 19:14
  • Correct. Not to mention hash collisions. I mean it's a pretty common data structure for cs interview type questions. I normally call it a histogram because that's what it basically is. – Gabriel A Jan 06 '18 at 19:16
  • I think `np.bincount` takes that 'cheating with integers' approach. You might add that to your test. – hpaulj Jan 06 '18 at 20:14
  • Please move a lot of your question into an answer and mark it accepted, this would become a useful dupe target. – cs95 Jan 06 '18 at 21:41
  • I'm really surprised your C++ version works, because it returns a C++ and not a Python object. But I cannot check it right now – ead Jan 06 '18 at 22:26
  • @ead Cython defines automatic conversions for most of the C++ standard library containers. (There's obviously a little copying overhead.) – DavidW Jan 07 '18 at 22:05
  • @TedPetrou I tried to tweak cpp's unordered map, but was unable to beat the khash implementation. I don't think one can do it - see my last edit for more info – ead Jan 08 '18 at 19:36
  • @TedPetrou Actually google's `dense_hash_set` has beaten the `pd.unique` for integers... – ead Jan 08 '18 at 20:26
  • I didn't downvote but if you made the question more clear and then put a lot of the background in a "what I tried already" section it might read more like a traditional SO question and not a blog post – C8H10N4O2 Jan 08 '18 at 20:33

1 Answer


I think the answer to your question "what is the fastest way to find unique elements" is: it depends. It depends on your data set and on your hardware.

For your scenarios (I mostly looked at the integer case) pandas (and the khash library it uses) does a pretty decent job. I was not able to match this performance with std::unordered_map.

However, google::dense_hash_set was slightly faster in my experiments than the pandas-solution.

Please read on for a more detailed explanation.


I would like to start out by explaining the results you are observing and use these insights later on.

I start with your int-example: there are only 50 unique elements but 1,000,000 in the array:

import numpy as np
import pandas as pd
a=np.random.randint(0,50, 10**6, dtype=np.int64)

As a baseline, the timings of np.unique() and pd.unique() on my machine:

%timeit np.unique(a)
>>>82.3 ms ± 539 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
%timeit pd.unique(a)
>>>9.4 ms ± 110 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)

The pandas approach with a hash set (O(n)) is about 10 times faster than numpy's approach with sorting (O(n log n)). With log n ≈ 20 for n = 10**6, a factor of about 10 is in line with the expected difference.
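
A quick sanity check of that log factor:

import math
math.log2(10**6)   # ≈ 19.93, so sorting does on the order of 20x the work of a single hash pass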

Another difference is that np.unique returns a sorted array, so one can use binary search to look up elements. pd.unique returns an unsorted array, so we need either to sort it (which might be O(n log n) if there are not many duplicates in the original data) or to transform it into a set-like structure.
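
To make the two follow-up options concrete (just a sketch):

u = pd.unique(a)
u_sorted = np.sort(u)     # enables binary search via np.searchsorted
u_set = set(u.tolist())   # enables O(1) membership tests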

Let's take a look at the simple Python set:

%timeit set(a)
>>> 257 ms ± 21.9 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)

The first thing we must be aware of here: we are comparing apples and oranges. The previous unique functions return numpy arrays, which consist of lowly C integers. This one returns a set of full-fledged Python integers. Quite a different thing!

That means that for every element in the numpy array we must first create a Python object - quite an overhead - and only then can we add it to the set.

The conversion to Python integers can be done in a preprocessing step - your version with a list:

A=list(a)
%timeit set(A)
>>> 104 ms ± 952 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
%timeit set(list(a))
>>> 270 ms ± 23.3 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)

More than 100 ms are needed just for the creation of the Python integers. However, Python integers are more complex than lowly C ints, and thus handling them costs more. Using pd.unique on the C ints and then promoting the result to a Python set is much faster.

And now your Cython version:

%timeit unique_cython_int(a)
31.3 ms ± 630 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)

That I don't really understand. I would expect it to perform similarly to set(a): Cython would cut out the interpreter, but that does not explain a factor of 10. However, we have only 50 different integers (which are even in the small-integer pool because they are smaller than 256), so there is probably some optimization which plays a role here.

Let's try another data-set (there are now 10**5 different numbers):

b=np.random.randint(0, 10**5,10**6, dtype=np.int64)
%timeit unique_cython_int(b)
>>> 236 ms ± 31.1 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
%timeit set(b)
>>> 388 ms ± 15.3 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)

A speed-up of less than 2 is what I would expect.

Let's take a look at the cpp version:

%timeit unique_cpp_int(a)
>>> 25.4 ms ± 534 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
%timeit unique_cpp_int(b)
>>> 100 ms ± 4.8 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)

There is some overhead in copying the data from the cpp set to the Python set (as DavidW has pointed out), but otherwise the behavior is what I would expect given my experience: std::unordered_map is somewhat faster than Python, but not the greatest implementation around - pandas seems to beat it:

%timeit set(pd.unique(b))
>>> 45.8 ms ± 3.48 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)

So it looks like, in situations where there are many duplicates and the hash function is cheap, the pandas solution is hard to beat.

One could maybe try out Google's data structures.


However, when the data has only very few duplicates, numpy's sorting solution may become the faster one. The main reason is that numpy's unique needs only twice the memory - the original data and the output - while the pandas hash-set solution needs much more: the original data, the set, and the output. For huge datasets it might become the difference between having enough RAM and not having enough RAM.

How much memory overhead is needed depends on the set implementation, and it is always a trade-off between memory and speed. For example, std::unordered_set needs at least 32 bytes to store an 8-byte integer. Some of Google's data structures can do better.

Running /usr/bin/time -fpeak_used_memory:%M python check_mem.py with pandas/numpy unique:

#check_mem.py
import numpy as np
import pandas as pd
c=np.random.randint(0, 2**63,5*10**7, dtype=np.int64)
#pd.unique(c)  
np.unique(c)

shows 1.2 GB for numpy and 2.0 GB for pandas (for reference, the raw data alone is 5*10**7 * 8 bytes = 0.4 GB).

Actually, on my Windows machine np.unique is faster than pd.unique if there are (next to) only unique elements in the array, even for "only" 10^6 elements (probably because of the rehashes needed as the set grows). This is however not the case on my Linux machine.


Another scenario in which pandas doesn't shine is when the calculation of the hash function is not cheap: consider long strings (let's say of 1000 characters) as objects.

To calculate the hash value one needs to consider all 1000 characters (which means a lot of data, hence a lot of cache misses), whereas the comparison of two strings is usually decided after one or two characters - the probability is then already very high that we know the strings are different. So the log n factor of numpy's unique doesn't look so bad anymore.

It could be better to use a tree-set instead of a hash-set in this case.
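
As an illustration of that idea, here is a minimal sketch of mine (not benchmarked on your data) using C++'s tree-based std::set on the bytes version of the strings, since std::string is byte-based:

%%cython -+ -c=-std=c++11
cimport cython
cimport numpy as np
from numpy cimport ndarray
from libcpp.set cimport set as cpp_tree_set
from libcpp.string cimport string

@cython.wraparound(False)
@cython.boundscheck(False)
def unique_tree_str(ndarray[object] a):
    # std::set is a red-black tree: no hashing; comparisons usually
    # bail out after the first differing character
    cdef int i
    cdef int n = len(a)
    cdef cpp_tree_set[string] s
    for i in range(n):
        s.insert(a[i])
    return s   # Cython converts the C++ set back to a Python set of bytes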


Improving on cpp's unordered_set:

The method using cpp's unordered_set could be improved by calling its reserve() method, which would eliminate the need for rehashing. But reserve is not imported into Cython's libcpp wrapper, so using it from Cython is quite cumbersome.
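
For reference, a sketch of the workaround: redeclare just the members we need under the original C++ name (this assumes an older Cython - newer releases do expose reserve in libcpp.unordered_set, so check your version first):

%%cython -+ -c=-std=c++11
cimport cython
cimport numpy as np
from numpy cimport ndarray
from cython.operator cimport dereference as deref, preincrement as inc

# minimal redeclaration of std::unordered_set that includes reserve()
cdef extern from "<unordered_set>" namespace "std" nogil:
    cdef cppclass uset "std::unordered_set" [T]:
        cppclass iterator:
            T& operator*()
            iterator operator++()
            bint operator!=(iterator)
        uset()
        void insert(T&)
        void reserve(size_t)
        iterator begin()
        iterator end()

@cython.wraparound(False)
@cython.boundscheck(False)
def unique_cpp_int_reserve(ndarray[np.int64_t] a):
    cdef int i
    cdef int n = len(a)
    cdef uset[np.int64_t] s
    s.reserve(n)   # preallocate buckets so no rehashing happens during the inserts
    for i in range(n):
        s.insert(a[i])
    # copy back by hand: auto-conversion only exists for Cython's builtin wrappers
    result = set()
    cdef uset[np.int64_t].iterator it = s.begin()
    while it != s.end():
        result.add(deref(it))
        inc(it)
    return result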

Reserving, however, would have no impact on the runtimes for the data with only 50 unique elements, and at most a factor of 2 (amortized costs due to the resize strategy used) for the data with almost all elements unique.

The hash function for ints is the identity (at least for gcc), so there is not much to gain here (I don't think using a fancier hash function would help).

I see no way in which cpp's unordered_set could be tweaked to beat the khash implementation used by pandas, which seems to be quite good for this type of task.

Here are, for example, these pretty old benchmarks, which show that khash is somewhat faster than std::unordered_map, with only google_dense being even faster.


Using google dense map:

In my experiments, Google's dense map (from here) was able to beat khash - the benchmark code can be found at the end of the answer.

It was faster if there were only 50 unique elements:

#50 unique elements:
%timeit google_unique(a,r)
1.85 ms ± 8.26 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
%timeit pd.unique(a)
3.52 ms ± 33.9 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)

but also faster if there were only unique elements:

%timeit google_unique(c,r)
54.4 ms ± 375 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
%timeit pd.unique(c)
75.4 ms ± 499 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)

My few experiments have also shown that google_hash_set may use more memory (up to 20%) than khash, but more tests are needed to see whether this is really the case.


I'm not sure my answer helped you at all. My take-aways are:

  1. If we need a set of Python integers, set(pd.unique(...)) seems to be a good starting point.
  2. There are some cases for which numpy's sorting solution might be better (less memory, sometimes the hash calculation is too expensive).
  3. Knowing more about the data can be used to tweak the solution, by making a better trade-off (e.g. using less/more memory, preallocating so we don't need to rehash, or using a bitset for look-up - see the sketch right after this list).
  4. The pandas solution seems to be tweaked pretty well for some usual cases, but for other cases another trade-off might be better - google_dense being the most promising candidate.
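
A sketch of the bitset idea from take-away 3 (unique_bitset is a hypothetical name of mine; plain Python for clarity - it packs the look-up table into one bit per candidate value instead of one byte):

import numpy as np

def unique_bitset(a):
    m = int(a.max()) + 1
    bits = np.zeros((m + 7) // 8, dtype=np.uint8)   # 1 bit per possible value
    result = []
    for v in a:
        v = int(v)
        byte, mask = v >> 3, 1 << (v & 7)
        if not bits[byte] & mask:
            bits[byte] |= mask
            result.append(v)
    return result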

Listings for google-tests:

#google_hash.cpp
#include <cstdint>
#include <functional>
#include <sparsehash/dense_hash_set>

typedef int64_t lli;

// returns the number of unique elements written to output
int cpp_unique(lli *input, int n, lli *output){

  google::dense_hash_set<lli, std::hash<lli> > set;
  set.set_empty_key(-1);   // -1 must therefore not occur in the input data
  for (int i=0;i<n;i++){
     set.insert(input[i]);
  }

  int cnt=0;
  for(auto x : set)
    output[cnt++]=x;
  return cnt;
}

the corresponding pyx-file:

#google.pyx
cimport numpy as np
cdef extern from "google_hash.cpp":
    int cpp_unique(np.int64_t *inp, int n, np.int64_t *output)

#out must have enough memory; the return value is the number of unique elements
def google_unique(np.ndarray[np.int64_t,ndim=1] inp, np.ndarray[np.int64_t,ndim=1] out):
    return cpp_unique(&inp[0], len(inp), &out[0])

the setup.py-file:

from distutils.core import setup, Extension
from Cython.Build import cythonize
import numpy as np

setup(ext_modules=cythonize(Extension(
            name='google',
            language='c++',
            extra_compile_args=['-std=c++11'],
            sources = ["google.pyx"],
            include_dirs=[np.get_include()]
    )))

Ipython-benchmark script, after calling python setup.py build_ext --inplace:

import numpy as np
import pandas as pd
from google import google_unique

a=np.random.randint(0,50,10**6,dtype=np.int64)
b=np.random.randint(0, 10**5,10**6, dtype=np.int64)
c=np.random.randint(0, 2**63,10**6, dtype=np.int64)
r=np.zeros((10**6,), dtype=np.int64)

%timeit google_unique(a,r)
%timeit pd.unique(a)

Other listings

Cython version after fixes:

%%cython
cimport cython
from numpy cimport ndarray
from cpython cimport set
cimport numpy as np
@cython.wraparound(False)
@cython.boundscheck(False)
def unique_cython_int(ndarray[np.int64_t] a):
    cdef int i
    cdef int n = len(a)
    cdef set s = set()
    for i in range(n):
        s.add(a[i])
    return s

C++ version after fixes:

%%cython -+ -c=-std=c++11
cimport cython
cimport numpy as np
from numpy cimport ndarray
from libcpp.unordered_set cimport unordered_set
@cython.wraparound(False)
@cython.boundscheck(False)
def unique_cpp_int(ndarray[np.int64_t] a):
    cdef int i
    cdef int n = len(a)
    cdef unordered_set[np.int64_t] s  # int64_t, not int, so values are not truncated
    for i in range(n):
        s.insert(a[i])
    return s
ead
  • Thanks very much for the extremely detailed explanation and the code for google dense hash. For integers, I think I will use a simple strategy of mapping to a boolean array. Also, Python's set seems to do better than pandas for strings. – Ted Petrou Jan 11 '18 at 23:12
  • @TedPetrou if your range of integers is known and small, probably nothing can beat your boolean-approach. Sorry I didn't look deeper into the string case (it is more complicated). In the end, different trade-offs are possible for set-implementation and it depends on the data which trade-off is the "right" one. It helps however to know the strengths and weaknesses of different set-implementations. – ead Jan 12 '18 at 08:27