I am trying to process a CSV file which contains information for more than 20,000 patients. There are 50 columns in total, and each patient has multiple rows because the data is hourly. Most of the columns map to the Observation resource type, e.g. Heart Rate, Temperature, Blood Pressure.
I have successfully transformed the data into FHIR format. However, when I try to push the data into the FHIR server, the server throws an error saying a maximum of 500 entries is allowed per bundle.
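For context, each push is a FHIR batch Bundle, and the 500-entry limit applies to the length of its entry list. A trimmed sketch of what one such bundle looks like (field values illustrative, most Observation fields elided):

bundle = {
    "resourceType": "Bundle",
    "type": "batch",
    "entry": [  # the server rejects the bundle once this list exceeds 500
        {
            "request": {"method": "POST", "url": "Observation"},
            "resource": {
                "resourceType": "Observation",
                "status": "final",
                "code": {"text": "Heart Rate"},
                # ... remaining Observation fields elided ...
            },
        },
    ],
}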
Even if I batch up 500 entries at a time and push the JSON, it takes a long time to work through 20,000 patients × 50 columns. Is there a more efficient way of bulk-inserting the data into the Azure FHIR server?
Currently, I am using the following code, but it looks like it will take a lot of time and resources, as there are around 0.7 million rows in my CSV file.
def export_template(self, template):
    # Accumulate entries from each transformed template until there are
    # enough to fill a bundle, then send one batch to the server.
    if self.export_max_500 is None:
        self.export_max_500 = template
    else:
        # Merge the new template's entries into the pending bundle.
        export_max_500_entry = self.export_max_500["entry"]
        template_entry = template["entry"]
        self.export_max_500["entry"] = export_max_500_entry + template_entry
        if len(self.export_max_500["entry"]) > 500:
            # Ship the first 495 entries (headroom under the 500 limit)
            # and keep the remainder pending for the next call.
            template["entry"] = self.export_max_500["entry"][:495]
            self.export_max_500["entry"] = self.export_max_500["entry"][495:]
            self.send_to_server(template)
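For comparison, here is a minimal sketch of the direction I am considering instead: pre-chunking all entries into batch Bundles of at most 500 and POSTing them concurrently with requests and a thread pool. FHIR_BASE_URL, ACCESS_TOKEN, make_bundles, and upload_all are my own placeholder names, not anything from the Azure SDK:

import requests
from concurrent.futures import ThreadPoolExecutor

FHIR_BASE_URL = "https://example.azurehealthcareapis.com"  # placeholder service URL
ACCESS_TOKEN = "<token obtained via azure.identity or MSAL>"  # placeholder

def make_bundles(entries, size=500):
    # Split the flat entry list into batch Bundles under the server limit.
    for i in range(0, len(entries), size):
        yield {"resourceType": "Bundle",
               "type": "batch",
               "entry": entries[i:i + size]}

def post_bundle(bundle):
    # One HTTP round trip per bundle; batch Bundles are POSTed to the base URL.
    resp = requests.post(
        FHIR_BASE_URL,
        json=bundle,
        headers={"Authorization": f"Bearer {ACCESS_TOKEN}",
                 "Content-Type": "application/fhir+json"},
    )
    resp.raise_for_status()
    return resp.status_code

def upload_all(entries, workers=8):
    # Keep several bundles in flight at once instead of sending them serially.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(post_bundle, make_bundles(entries)))

Would something like this be reasonable, or is there a server-side bulk import mechanism that would be faster?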