
I use iterparse to parse a big XML file (1.8 GB) and write all the data to a CSV file. The script I made runs well, but for some reason it randomly skips lines. This is my script:

import xml.etree.cElementTree as ET
import csv
xml_data_to_csv = open('Out2.csv', 'w', newline='', encoding='utf8')
Csv_writer = csv.writer(xml_data_to_csv, delimiter=';')

file_path = "Products_50_producten.xml"
context = ET.iterparse(file_path, events=("start", "end"))

EcommerceProductGuid = ""
ProductNumber = ""
Description = ""
ShopSalesPriceInc = ""
Barcode = ""
AvailabilityStatus = ""
Brand = ""
# turn it into an iterator
#context = iter(context)
product_tag = False
for event, elem in context:
    tag = elem.tag

    if event == 'start' :
        if tag == "Product" :
            product_tag = True

        elif tag == 'EcommerceProductGuid' :
            EcommerceProductGuid = elem.text

        elif tag == 'ProductNumber' :
            ProductNumber = elem.text

        elif tag == 'Description' :
            Description = elem.text

        elif tag == 'SalesPriceInc' :
            ShopSalesPriceInc = elem.text

        elif tag == 'Barcode' :
            Barcode = elem.text

        elif tag == 'AvailabilityStatus' :
            AvailabilityStatus = elem.text


        elif tag == 'Brand' :
            Brand = elem.text

    if event == 'end' and tag =='Product' :
        product_tag = False
        List_nodes = []
        List_nodes.append(EcommerceProductGuid)
        List_nodes.append(ProductNumber)
        List_nodes.append(Description)
        List_nodes.append(ShopSalesPriceInc)
        List_nodes.append(Barcode)
        List_nodes.append(AvailabilityStatus)
        List_nodes.append(Brand)
        Csv_writer.writerow(List_nodes)
        print(EcommerceProductGuid)
        List_nodes.clear()
        EcommerceProductGuid = ""
        ProductNumber = ""
        Description = ""
        ShopSalesPriceInc = ""
        Barcode = ""
        AvailabilityStatus = ""
        Brand = ""

    elem.clear()


xml_data_to_csv.close()

The "Products_50_producten.xml" file has the following layout:

<?xml version="1.0" encoding="utf-16" ?>
<ProductExport xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema">
<ExportInfo>
<ExportDateTime>2018-11-07T00:01:03+01:00</ExportDateTime>
<Type>Incremental</Type>
<ExportStarted>Automatic</ExportStarted>
</ExportInfo>
<Products>
<Product><EcommerceProductGuid>4FB8A271-D33E-4501-9EB4-17CFEBDA4177</EcommerceProductGuid><ProductNumber>982301017</ProductNumber><Description>Ducati Jas Radiaal Zwart Xxl Heren Tekst - 982301017</Description><Brand>DUCATI</Brand><ProductVariations><ProductVariation><SalesPriceInc>302.2338</SalesPriceInc><Barcodes><Barcode BarcodeOrder="1">982301017</Barcode></Barcodes></ProductVariation></ProductVariations></Product>
<Product><EcommerceProductGuid>4FB8A271-D33E-4501-9EB4-17CFEBDA4177</EcommerceProductGuid><ProductNumber>982301017</ProductNumber><Description>Ducati Jas Radiaal Zwart Xxl Heren Tekst - 982301017</Description><Brand>DUCATI</Brand><ProductVariations><ProductVariation><SalesPriceInc>302.2338</SalesPriceInc><Barcodes><Barcode BarcodeOrder="1">982301017</Barcode></Barcodes></ProductVariation></ProductVariations></Product>
</Products>
</ProductExport>

If I copy a "Product" 300 times for example, it leaves the 'EcommerceProductGuid' value empty on line 155 in the csv file. If I copy the Product 400 times, it leaves an empty value on line 155, 310, and 368. How is this possible?

Antonio
  • Probably an empty `elem.text` in tag `EcommerceProductGuid`. Add a condition `if not elem.tag: print('Found empty elem.tag')`. – stovfl Dec 11 '18 at 18:07
  • Is there a possibility to read the line again with `if EcommerceProductGuid == None:`? Because for some reason it skips the line even though the information is inside the "EcommerceProductGuid" tag. – Antonio Dec 11 '18 at 19:40

2 Answers


I think the problem is with if event == 'start'.

According to other questions/answers, the content of the text attribute is not guaranteed to be defined at that point.

However, it doesn't seem to be as simple as changing to if event == 'end'. When I tried it myself I was getting more empty fields than populated ones. (UPDATE: Using event == 'end' did work if I removed events=("start", "end") from iterparse.)
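A minimal sketch of that end-only variant: with iterparse's default events (only "end"), the element's text is fully read by the time the event fires. The inline two-product XML below is a made-up sample for illustration, not the real export file.

```python
import io
import xml.etree.ElementTree as ET

xml_bytes = b"""<Products>
<Product><EcommerceProductGuid>GUID-1</EcommerceProductGuid></Product>
<Product><EcommerceProductGuid>GUID-2</EcommerceProductGuid></Product>
</Products>"""

guids = []
guid = ""
# No events argument: iterparse yields only "end" events by default.
for event, elem in ET.iterparse(io.BytesIO(xml_bytes)):
    if elem.tag == "EcommerceProductGuid":
        guid = elem.text
    elif elem.tag == "Product":
        guids.append(guid)
        guid = ""
        elem.clear()  # free memory, important for large files

print(guids)  # ['GUID-1', 'GUID-2']
```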

What ended up working was to ignore the event completely and just test to see if text was populated.

Updated code...

import xml.etree.cElementTree as ET
import csv

xml_data_to_csv = open('Out2.csv', 'w', newline='', encoding='utf8')
Csv_writer = csv.writer(xml_data_to_csv, delimiter=';')

file_path = "Products_50_producten.xml"
context = ET.iterparse(file_path, events=("start", "end"))

EcommerceProductGuid = ""
ProductNumber = ""
Description = ""
ShopSalesPriceInc = ""
Barcode = ""
AvailabilityStatus = ""
Brand = ""
for event, elem in context:
    tag = elem.tag
    text = elem.text

    if tag == 'EcommerceProductGuid' and text:
        EcommerceProductGuid = text

    elif tag == 'ProductNumber' and text:
        ProductNumber = text

    elif tag == 'Description' and text:
        Description = text

    elif tag == 'SalesPriceInc' and text:
        ShopSalesPriceInc = text

    elif tag == 'Barcode' and text:
        Barcode = text

    elif tag == 'AvailabilityStatus' and text:
        AvailabilityStatus = text

    elif tag == 'Brand' and text:
        Brand = text

    if event == 'end' and tag == "Product":
        List_nodes = []
        List_nodes.append(EcommerceProductGuid)
        List_nodes.append(ProductNumber)
        List_nodes.append(Description)
        List_nodes.append(ShopSalesPriceInc)
        List_nodes.append(Barcode)
        List_nodes.append(AvailabilityStatus)
        List_nodes.append(Brand)
        Csv_writer.writerow(List_nodes)
        print(EcommerceProductGuid)
        List_nodes.clear()
        EcommerceProductGuid = ""
        ProductNumber = ""
        Description = ""
        ShopSalesPriceInc = ""
        Barcode = ""
        AvailabilityStatus = ""
        Brand = ""

    elem.clear()

xml_data_to_csv.close()

This seemed to work fine with my test file of 300 Product elements.

Also, I think you could simplify your code if you used a dictionary and csv.DictWriter.

Example (produces same output as code above)...

import xml.etree.cElementTree as ET
import csv
from copy import deepcopy

field_names = ['EcommerceProductGuid', 'ProductNumber', 'Description',
               'SalesPriceInc', 'Barcode', 'AvailabilityStatus', 'Brand']

values_template = {'EcommerceProductGuid': "",
                   'ProductNumber': "",
                   'Description': "",
                   'SalesPriceInc': "",
                   'Barcode': "",
                   'AvailabilityStatus': "",
                   'Brand': ""}

with open('Out2.csv', 'w', newline='', encoding='utf8') as xml_data_to_csv:

    csv_writer = csv.DictWriter(xml_data_to_csv, delimiter=';', fieldnames=field_names)

    file_path = "Products_50_producten.xml"
    context = ET.iterparse(file_path, events=("start", "end"))

    values = deepcopy(values_template)

    for event, elem in context:
        tag = elem.tag
        text = elem.text

        if tag in field_names and text:
            values[tag] = text

        if event == 'end' and tag == "Product":
            csv_writer.writerow(values)
            print(values.get('EcommerceProductGuid'))
            values = deepcopy(values_template)

        elem.clear()
Daniel Haley
  • Great, thanks a lot! I tested both, and both worked. I made a test with 200,000 products and the second script was a little bit faster. The first script took around 57 seconds and the second around 51 seconds. I have added "values.clear()" between "values = deepcopy(values_template)" and "elem.clear()", because sometimes some of the fields are empty. – Antonio Dec 11 '18 at 20:22

For what it's worth, and for anyone who might be searching, the above answer also applies to iterparse() in the lxml library. I was having a similar issue with lxml, gave this a try, and it works almost exactly the same.

With large files, the start event will sometimes fire before the element's text has been read. Retrieving the text on the end event solved the issue for me. Daniel Haley's approach adds another layer of protection by also checking that the text exists.
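A quick sketch of the same defensive pattern, which works unchanged with lxml since its iterparse() mirrors the stdlib API (this example falls back to ElementTree if lxml isn't installed; the one-product XML is a made-up sample):

```python
import io
try:
    from lxml import etree as ET  # same iterparse interface
except ImportError:
    import xml.etree.ElementTree as ET

xml_bytes = b"<Products><Product><Barcode>123</Barcode></Product></Products>"

barcode = None
for event, elem in ET.iterparse(io.BytesIO(xml_bytes), events=("start", "end")):
    text = elem.text
    # On "start", text may not be populated yet; only accept it when present.
    if elem.tag == "Barcode" and text:
        barcode = text

print(barcode)  # 123
```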

Dan