I'm having a very hard time figuring this out. I'm trying to build a real-time satellite tracker using the Skyfield Python module. It reads in TLE data and then gives a latitude and longitude position relative to the Earth. Skyfield wraps each satellite in a satrec object, which cannot be pickled (I even tried dill). I'm currently using a for loop to iterate over all the satellites, but it is very slow, so I want to speed it up with multiprocessing's Pool; as noted above, though, that fails because multiprocessing relies on pickle. Is there any way around this, or does anyone have suggestions for another way to use multiprocessing to speed up this loop?
from skyfield.api import load, wgs84, EarthSatellite
import numpy as np
import pandas as pd
import time
import os
from multiprocessing import Pool
data = pd.read_json('tempSatelliteData.json')
print(data.head())
newData = data.filter(['tle_line0', 'tle_line1', 'tle_line2'])
newData.to_csv('test.txt', sep='\n', index=False, header=False)  # header=False keeps column names out of the TLE file
stations_file = 'test.txt'
satellites = load.tle_file(stations_file)
ts = load.timescale()
t = ts.now()
def normal_for():
    # this for loop takes 9 seconds to complete: TOO SLOW
    ty = time.time()
    for satellite in satellites:
        geocentric = satellite.at(t)
        lat, lon = wgs84.latlon_of(geocentric)
        print('Latitude:', lat)
        print('Longitude:', lon)
    print(np.round(time.time() - ty, 3), 'sec')
def sat_lat_lon(satellite):
    geocentric = satellite.at(t)
    lat, lon = wgs84.latlon_of(geocentric)
    return lat, lon

if __name__ == '__main__':
    p = Pool()
    result = p.map(sat_lat_lon, satellites)  # raises here: the satrec objects cannot be pickled
    p.close()
    p.join()
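One workaround I've been experimenting with (a minimal sketch, not Skyfield-specific): instead of mapping the satellite objects themselves, send each worker the plain TLE strings, which always pickle fine, and rebuild the satellite object inside the worker process, so the unpicklable satrec never crosses a process boundary. Below, the Skyfield object is replaced by a hypothetical `Unpicklable` stand-in (a class holding a lambda, which pickle also rejects) so the pattern is self-contained; in the real script the worker would call `EarthSatellite(line1, line2, name, ts)` instead.

```python
from multiprocessing import Pool

class Unpicklable:
    """Stand-in for an EarthSatellite: the lambda attribute makes
    instances unpicklable, just like Skyfield's wrapped satrec."""
    def __init__(self, line1, line2):
        self.compute = lambda: (len(line1) % 90, len(line2) % 180)

def sat_lat_lon(tle_pair):
    # Workers receive plain strings (always picklable) and rebuild
    # the heavy object locally; it never crosses a process boundary.
    line1, line2 = tle_pair
    sat = Unpicklable(line1, line2)
    return sat.compute()

if __name__ == '__main__':
    # Hypothetical TLE pairs; in the real script these would come
    # from the 'tle_line1' / 'tle_line2' DataFrame columns.
    tle_pairs = [('1 25544U', '2 25544'), ('1 20580U', '2 20580')]
    with Pool() as p:
        print(p.map(sat_lat_lon, tle_pairs))  # → [(8, 7), (8, 7)]
```

The same shape should carry over directly: build the argument list from `newData[['tle_line1', 'tle_line2']]`, and have the worker construct the `EarthSatellite` and call `.at(t)` itself.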