
I need to calculate the solar zenith angle for approximately 106,000,000 different coordinates. These coordinates refer to the pixels of an image projected onto the Earth's surface; the image was taken by a camera aboard an airplane.

I am using pvlib.solarposition.get_solarposition() to calculate the solar zenith angle. The returned values are correct (I compared some results against the NOAA website), but since I need to calculate the solar zenith angle for so many coordinate pairs, Python takes many days (about 5) to finish executing the function.

As I am a beginner in programming, I wonder whether there is any way to accelerate the solar zenith angle calculation.

Below is the part of the code that calculates the solar zenith angle:

sol_apar_zen = []
for i in range(size3):
    solar_position = np.array(pvl.solarposition.get_solarposition(Data_time_index, lat_long[i][0], lat_long[i][1]))
    sol_apar_zen.append(solar_position[0][0])
print(len(sol_apar_zen))
  • The first step I would take is to isolate the performance issue between my code and the library code. One way to do that is to time it with (say) 1,000 coordinates. With such small numbers, inefficiencies of data structures are less likely to show up, and it should give a measure of the library code. If the total time taken for the 106M coordinates is roughly 10^5 times that of the 1k run, then it is likely the library code that is long-running. If the total time is significantly more than that, focus on the wrapping code. – gsr Jul 25 '19 at 14:52
  • I ran the algorithm for 10,000 coordinates; the solar zenith angle for these coordinates was calculated in 1 min 12 s. For 100,000 coordinates the program took 12 min 2 s. – César de Paula Jul 26 '19 at 08:38
  • From the 10k data point, looks like the 106M should take ~8 days. So, looks like the library just takes time. Couple of options: find if there are tuning params that make the computation faster or find a different library. Also, if calculations for each co-ordinate are independent, you may consider splitting the data set and executing them concurrently (as separate processes) on a good multi-core system. – gsr Jul 26 '19 at 14:09
  • What is the length of `Data_time_index`? `solarposition.get_solarposition` is a wrapper for a handful of different solar position algorithms. Most the implementations are designed to efficiently calculate solar position at many times for a single coordinate. It appears that you want to do the opposite: efficiently calculate solar position for a single (or relatively few) time at many coordinates. 10,000 times for a single coordinate should take less than a second on most machines. – Will Holmgren Jul 26 '19 at 17:43
  • @WillHolmgren Data_time_index contains only one value, because it refers to the time (date and time) at which the image was taken by the camera. I think it would be better to find another library to calculate the solar zenith angle. If you or another colleague know of one, please let me know! – César de Paula Jul 29 '19 at 07:55
  • It might be possible to use pvlib's lower-level numba accelerated functions in the pvlib.spa module. Another option is using the `pvlib.solarposition.*_analytical` functions -- most of them accept ndim arrays, but it's not immediately obvious to me how to make the hour angle calculation work for your use case. Please post an answer below if you find a solution. – Will Holmgren Jul 29 '19 at 16:49

3 Answers


Technically, if you need to compute the solar zenith angle quickly for a large list (array), there are more efficient algorithms than pvlib's, for example the one described by Roberto Grena in 2012 (https://doi.org/10.1016/j.solener.2012.01.024).

I found a suitable implementation here: https://github.com/david-salac/Fast-SZA-and-SAA-computation (you might need some tweaks, but it is simple to use, and it is also implemented for languages other than Python, such as C/C++ and Go).

Example of how to use it:

import pandas as pd
from sza_saa_grena import solar_zenith_and_azimuth_angle
# ...
# A random time series:
time_array = pd.date_range("2020/1/1", periods=87_600, freq="10T", tz="UTC")
sza, saa = solar_zenith_and_azimuth_angle(longitude=-0.12435,  # London longitude
                                          latitude=51.48728,   # London latitude
                                          time_utc=time_array)

The unit test (in the project's folder) shows that, within the normal latitude range, the error is minimal.


If you want to speed up this calculation, you can use the numba-accelerated implementation (if numba is installed):

location.get_solarposition(
    datetimes,
    method='nrel_numba'
)

Otherwise, you will have to implement your own calculation based on vectorized NumPy arrays. I know it is possible, but I am not allowed to share the code. You can find the formulation if you search for "Spencer 1971 solar position".
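The Spencer (1971) approach mentioned above can be sketched with plain NumPy broadcasting; the function name and variable names here are illustrative, and the formula is a low-accuracy approximation (no refraction or elevation correction), not pvlib's NREL SPA:

```python
import numpy as np

def solar_zenith_spencer(day_of_year, utc_hours, lat_deg, lon_deg):
    """Approximate solar zenith angle (degrees), Spencer (1971).

    day_of_year and utc_hours are scalars (one image timestamp);
    lat_deg and lon_deg are NumPy arrays, one entry per pixel.
    """
    g = 2.0 * np.pi * (day_of_year - 1) / 365.0  # day angle (radians)
    # Solar declination, Spencer (1971) Fourier series (radians)
    decl = (0.006918 - 0.399912 * np.cos(g) + 0.070257 * np.sin(g)
            - 0.006758 * np.cos(2 * g) + 0.000907 * np.sin(2 * g)
            - 0.002697 * np.cos(3 * g) + 0.00148 * np.sin(3 * g))
    # Equation of time in minutes, Spencer (1971)
    eot = 229.18 * (0.000075 + 0.001868 * np.cos(g) - 0.032077 * np.sin(g)
                    - 0.014615 * np.cos(2 * g) - 0.040849 * np.sin(2 * g))
    lat = np.radians(lat_deg)
    # Local solar time in hours: UTC + 4 min per degree of longitude + EoT
    solar_time = utc_hours + lon_deg / 15.0 + eot / 60.0
    hour_angle = np.radians(15.0 * (solar_time - 12.0))
    cos_zen = (np.sin(lat) * np.sin(decl)
               + np.cos(lat) * np.cos(decl) * np.cos(hour_angle))
    return np.degrees(np.arccos(np.clip(cos_zen, -1.0, 1.0)))
```

Because every step is a NumPy ufunc, passing the full 106M-element latitude/longitude arrays evaluates all pixels in a handful of array passes instead of a Python-level loop.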

dl.meteo

Since your coordinates represent a grid, another option would be to calculate the zenith angle for a subset of your coordinates and then do a 2-D interpolation to obtain the remainder. Taking 1 point in 100 in both directions would reduce your calculation time by a factor of 10,000.
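This subsample-then-interpolate idea can be sketched with SciPy's RegularGridInterpolator. The grid sizes are illustrative, and the smooth synthetic field stands in for the exact zenith angles that the expensive solver would produce on the coarse grid:

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

rows, cols = 500, 500                # hypothetical fine pixel grid
coarse_r = np.arange(0, rows, 10)    # every 10th row
coarse_c = np.arange(0, cols, 10)    # every 10th column

# Stand-in for the expensive exact computation, run on the coarse grid only;
# a smooth synthetic field replaces the real zenith angles here.
R, C = np.meshgrid(coarse_r, coarse_c, indexing="ij")
sza_coarse = 30.0 + 0.01 * R + 0.02 * C

# Bilinear interpolation back to the full grid (fill_value=None
# extrapolates linearly at the edges beyond the last coarse sample)
interp = RegularGridInterpolator((coarse_r, coarse_c), sza_coarse,
                                 bounds_error=False, fill_value=None)
rr, cc = np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij")
pts = np.stack([rr.ravel(), cc.ravel()], axis=-1)
sza_full = interp(pts).reshape(rows, cols)
```

This works because the zenith angle varies smoothly across an image footprint, so bilinear interpolation between coarse samples introduces only a small error compared with the exact per-pixel computation.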

adr