
So, I have two NumPy arrays: `a`, of shape (p, 2), and `b`, of shape (q, 2). I create a KDTree with SciPy:

c = sp.KDTree(a)

I have an upper bound, `ub = 1.0/12.0`. When I do

print c.query(b, distance_upper_bound=ub)

I get `([inf, inf, inf, ...], [len(a), len(a), len(a), ...])`, which means that no point of `b` has a neighbor in `a` within `ub`. To check this, I run a double loop:

for n in range(len(c.data)):  # the result is the same with len(a)
  for m in range(len(b)):
    if (c.data[n][0] - b[m][0])**2.0 + (c.data[n][1] - b[m][1])**2.0 < ub**2.0:
      print c.data[n], b[m], n, m

Many `n, m` pairs get printed.
I have kept `a` and `b` generic here because this has happened with several different arrays.
The most mysterious thing to me is that elsewhere in the same code I have used `query` and got good results.
So, could you point me in the right direction?
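As an aside, the double loop above can be vectorized with `scipy.spatial.distance.cdist`, which computes all pairwise distances at once. A minimal sketch with made-up toy coordinates (not the asker's data):

```python
import numpy as np
from scipy.spatial.distance import cdist

ub = 1.0 / 12.0

# Toy stand-ins for a and b (not the asker's data).
a = np.array([[0.00, 0.00], [1.00, 1.00]])
b = np.array([[0.01, 0.02], [5.00, 5.00]])

d = cdist(b, a)              # pairwise Euclidean distances, shape (len(b), len(a))
bi, ai = np.nonzero(d < ub)  # every (b-row, a-row) pair closer than ub
for i, j in zip(bi, ai):
    print(b[i], a[j], d[i, j])
```

For the same kind of radius search directly on the tree, `c.query_ball_point(b, r=ub)` returns, for each row of `b`, the indices of every point of `a` within `ub`.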

Edit: as requested, sample data. These are some of the data points printed by the for loop; the full arrays are really long.

a=np.array([[276.95368542701721, 330.18454774620238], 
[276.95368542701721, 330.18454774620238], 
[283.19923346114763, 337.60512985013065], 
[270.32420807690886, 331.46587512659323], 
[271.32525610216351, 333.51103014735435], 
[271.9742523815284, 330.26777673087207], 
[268.89584462538102, 331.5474437183201], 
[278.6808380388178, 331.92691700030088], 
[271.36541507290735, 332.74113908742231]])

b=np.array([[ 276.956177, 330.183134],
[ 276.956177, 330.183134],
[ 283.264282, 337.592966],
[ 270.319366, 331.538029],
[ 271.351807, 333.578056],
[ 272.019257, 330.268417],
[ 268.913958, 331.523153],
[ 278.681976, 331.927687],
[ 271.303917, 332.706767]])
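For reference, here is the query run on exactly this sample, as a self-contained sketch (assuming `scipy.spatial.KDTree`; the asker's full arrays may of course behave differently). On this data every row comes back with a finite distance below `ub`, matching scnerd's result in the comments:

```python
import numpy as np
from scipy.spatial import KDTree

a = np.array([[276.95368542701721, 330.18454774620238],
              [276.95368542701721, 330.18454774620238],
              [283.19923346114763, 337.60512985013065],
              [270.32420807690886, 331.46587512659323],
              [271.32525610216351, 333.51103014735435],
              [271.9742523815284, 330.26777673087207],
              [268.89584462538102, 331.5474437183201],
              [278.6808380388178, 331.92691700030088],
              [271.36541507290735, 332.74113908742231]])
b = np.array([[276.956177, 330.183134],
              [276.956177, 330.183134],
              [283.264282, 337.592966],
              [270.319366, 331.538029],
              [271.351807, 333.578056],
              [272.019257, 330.268417],
              [268.913958, 331.523153],
              [278.681976, 331.927687],
              [271.303917, 332.706767]])

ub = 1.0 / 12.0
dist, idx = KDTree(a).query(b, distance_upper_bound=ub)
print(dist)  # all finite, all below ub
print(idx)   # all valid indices into a (i.e. < len(a))
```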
  • Could you provide some sample data for `a` and `b` that we could verify your results against? – scnerd Jun 21 '18 at 19:27
  • @scnerd. I edited the question to add some data points. – Federico Jun 21 '18 at 19:51
  • Using that sample data, I don't get the results you talk about. I get `([0.0029, 0.0029, 0.0661, 0.0723, 0.0721, 0.0450, 0.0303, 0.0014, 0.0705], [0, 0, 2, 3, 4, 5, 6, 7, 8])`. Likely, my KDTree is coming out differently because of the smaller training data, but that indicates that it's probably some aspect of your trained tree that is causing these issues, not necessarily the code – scnerd Jun 21 '18 at 19:59
  • @scnerd Yes, exactly, which is even more infuriating. I get the same results! Are there alternatives to KDTree that I could use and you recommend? Besides the for cycle. – Federico Jun 21 '18 at 20:25
  • 1
    I apologize in advance for even suggesting it, but you are positive that in your actual code `ub` is not by any chance `1/12`? – Paul Panzer Jun 22 '18 at 02:35
  • @PaulPanzer Yes, I made sure, because that was one of the first mistakes I made in my coding life. I remember getting a weird orbit in C because I used r⁻⁵ instead of r^-(5.0) – Federico Jun 22 '18 at 14:33
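Paul Panzer's question is worth spelling out, because the symptom fits exactly: under Python 2 (which the `print` statements above suggest), `1/12` is integer division and evaluates to `0`, and a `distance_upper_bound` of 0 makes every query miss. A minimal illustration, written to run under Python 3 as well (`//` in Python 3 reproduces what `1/12` does in Python 2):

```python
import numpy as np
from scipy.spatial import KDTree

a = np.array([[0.0, 0.0], [1.0, 1.0]])
b = np.array([[0.01, 0.02]])

ub_py2 = 1 // 12     # what Python 2 evaluates 1/12 to: the integer 0
ub_ok = 1.0 / 12.0   # ~0.0833, the intended bound

tree = KDTree(a)
# Passing ub_py2 (== 0) as distance_upper_bound would reproduce the
# all-inf symptom; with the intended float bound, the neighbor is found.
d_ok, i_ok = tree.query(b, distance_upper_bound=ub_ok)
```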

0 Answers