I have tried in anger to parse the following representative HTML extract, using BeautifulSoup and lxml:
[<p class="fullDetails">
<strong>Abacus Trust Company Limited</strong>
<br/>Sixty Circular Road
<br/>DOUGLAS
<br/>ISLE OF MAN
<br/>IM1 1SA
<br/>
<br/>Tel: 01624 689600
<br/>Fax: 01624 689601
<br/>
<br/>
<span class="displayBlock" id="ctl00_ctl00_bodycontent_MainContent_Email">E-mail: </span>
<a href="mailto:email@abacusion.com" id="ctl00_ctl00_bodycontent_MainContent_linkToEmail">email@abacusion.com</a>
<br/>
<span id="ctl00_ctl00_bodycontent_MainContent_Web">Web: </span>
<a href="http://www.abacusiom.com" id="ctl00_ctl00_bodycontent_MainContent_linkToSite">http://www.abacusiom.com</a>
<br/>
<br/><b>Partners(s) - ICAS members only:</b> S H Fleming, M J MacBain
</p>]
What I want to do:
Extract 'strong' text into company_name
Extract 'br' tags text into company_line_x
Extract 'MainContent_Email' text into company_email
Extract 'MainContent_Web' text into company_web
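For what it's worth, assuming the markup always follows the shape shown above, one way to pull those four fields out with BeautifulSoup is sketched below. The sample HTML is a trimmed copy of the extract; the "stop at Tel:/Fax:/E-mail:" rule for address lines is my assumption, not something in the page itself:

```python
from bs4 import BeautifulSoup

# Trimmed copy of the extract above, kept inline so the sketch is runnable.
html = '''<p class="fullDetails">
<strong>Abacus Trust Company Limited</strong>
<br/>Sixty Circular Road
<br/>DOUGLAS
<br/>ISLE OF MAN
<br/>IM1 1SA
<br/>
<br/>Tel: 01624 689600
<br/>Fax: 01624 689601
<span class="displayBlock" id="ctl00_ctl00_bodycontent_MainContent_Email">E-mail: </span>
<a href="mailto:email@abacusion.com" id="ctl00_ctl00_bodycontent_MainContent_linkToEmail">email@abacusion.com</a>
<span id="ctl00_ctl00_bodycontent_MainContent_Web">Web: </span>
<a href="http://www.abacusiom.com" id="ctl00_ctl00_bodycontent_MainContent_linkToSite">http://www.abacusiom.com</a>
</p>'''

soup = BeautifulSoup(html, "html.parser")
p = soup.find("p", {"class": "fullDetails"})

# Company name sits in the lone <strong>.
company_name = p.strong.get_text(strip=True)

# The e-mail/web links can be located by the stable suffix of their
# ASP.NET ids, which survives even if the ctl00_... prefix changes.
company_email = p.find("a", id=lambda i: i and i.endswith("linkToEmail")).get_text(strip=True)
company_web = p.find("a", id=lambda i: i and i.endswith("linkToSite")).get_text(strip=True)

# Address lines: every stripped text fragment between the company name
# and the first Tel:/Fax:/E-mail: line. stripped_strings removes the
# whitespace padding for free.
company_lines = []
for line in list(p.stripped_strings)[1:]:
    if line.startswith(("Tel:", "Fax:", "E-mail:")):
        break
    company_lines.append(line)
```

`stripped_strings` also sidesteps the padding issue described below, since every fragment is already `.strip()`-ed.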
The problems I was having:
1) I could extract all of the text by using .findAll(text=True), but there was a lot of whitespace padding in each line
2) Non-ASCII characters are sometimes returned, and this would cause csv.writer to fail. I'm not 100% sure how to handle this correctly (previously I just used unicodecsv.writer)
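On problem 2: if you can run under Python 3, the standard csv module writes Unicode text natively, so unicodecsv is no longer needed — you just open the output file in text mode with an explicit encoding. A minimal sketch (the rows are made-up sample data):

```python
import csv
import io

# Made-up rows containing non-ASCII characters.
rows = [
    ["Abacus Trust Company Limited", "Müller & Søn"],
    ["IM1 1SA", "Tel: 01624 689600"],
]

# Python 3's csv module accepts any text stream; no manual encoding of
# individual fields is required.
buf = io.StringIO()
csv.writer(buf).writerows(rows)

# For a real file, open it in text mode with an explicit encoding:
# with open("companies.csv", "w", newline="", encoding="utf-8") as f:
#     csv.writer(f).writerows(rows)
```

Under Python 2 the choices really are unicodecsv or encoding each field to UTF-8 bytes yourself, so sticking with unicodecsv there is reasonable.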
Any advice would be MUCH appreciated!
At the moment, my function just receives the page data and isolates the 'fullDetails' paragraphs:
def get_company_data(page_data):
    if not page_data:
        return None
    company_dets = page_data.findAll("p", {"class": "fullDetails"})
    print(company_dets)
    return company_dets
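Assuming page_data is a BeautifulSoup object, the function can be exercised like this (self-contained sketch; the one-paragraph HTML is a hypothetical stand-in for real page data):

```python
from bs4 import BeautifulSoup

def get_company_data(page_data):
    # Return the fullDetails paragraphs, or None when there is no page.
    if not page_data:
        return None
    return page_data.findAll("p", {"class": "fullDetails"})

page = BeautifulSoup(
    '<p class="fullDetails"><strong>Abacus Trust Company Limited</strong></p>',
    "html.parser",
)
company_dets = get_company_data(page)
```

Returning None explicitly (rather than falling off the end of a pass branch) makes the "no page" case visible to the caller, who can then do `for p in company_dets or []:`.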