I am producing thousands of png files, using code similar to the following:
import matplotlib.pyplot as plt
...
for pd in plot_data:
    fig = plt.figure(figsize=(5, 5))
    ax = plt.axes(projection=ccrs.Mercator())
    ax.background_patch.set_facecolor((198/255, 236/255, 253/255))
    norm = colors.LogNorm(vmin=min(data[pd]), vmax=max(data[pd]))
    sm = plt.cm.ScalarMappable(cmap=colormap, norm=norm)
    sm.set_array([])
    for i, p in enumerate(polygons):
        # Pass the value through the norm so the facecolor matches the colorbar
        ax.add_patch(PolygonPatch(p, facecolor=colormap(norm(data[pd][i])),
                                  transform=ccrs.PlateCarree()))
    ax.set_extent([west, east, south, north], crs=ccrs.PlateCarree())
    ax.set_facecolor((198/255, 236/255, 253/255))
    cb = plt.colorbar(sm)
    plt.title(name)
    fig.savefig(pd + ".png")
    plt.close(fig)
There are thousands of polygons in each map. Each iteration of the main loop takes approximately 35 seconds: the inner loop adding the polygons takes about 5 seconds, and fig.savefig(pd + ".png") takes about 30 seconds. I was wondering whether I could run fig.savefig(pd + ".png") in its own thread to reduce this bottleneck. How can I investigate whether this is possible?
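To make the question concrete, here is a minimal sketch of the pattern I have in mind, using concurrent.futures from the standard library. The save_figure function and the names passed to it are placeholders standing in for fig.savefig(pd + ".png"); this only demonstrates handing the save step off to a worker thread, not whether matplotlib's rendering is actually safe to run there.

```python
from concurrent.futures import ThreadPoolExecutor

saved = []  # records which "files" were written, for illustration only

def save_figure(name):
    # Placeholder for fig.savefig(name + ".png"); in the real code this
    # would render and write the figure while the main loop moves on.
    saved.append(name + ".png")
    return name

# Submit each save to a worker thread so the main loop can start
# building the next figure while the previous one is written out.
with ThreadPoolExecutor(max_workers=2) as pool:
    futures = [pool.submit(save_figure, n) for n in ["map_a", "map_b", "map_c"]]
    names = [f.result() for f in futures]
```

The open question is whether savefig can safely run in such a worker thread while the main thread creates and populates the next figure.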