As Gaby said, the contents are loaded dynamically. You can see this for yourself:
- Open the website you want to scrape in Chrome (Firefox has an equivalent tool)
- Press F12 to open DevTools
- Select the 'Network' tab
- Select 'XHR' as the filter
- Make a search (or reload the website)
(Screenshot: the DevTools Network tab with the XHR filter applied)
You will see a list of requests; the one you want is:
search?cid=5168&isFacetsEnabled=true&globalShippingCountryCode=&globalShippingCurrencyCode=&locale=en_US&pageId=0
Clicking on it shows the HTTP request with its headers and the response with all the data you want.
Reproducing this in Scrapy is a bit more involved: you have to request this URL using the "POST" method instead of the default ("GET"). From a Scrapy spider:
yield scrapy.Request(url, self.parse_data, method="POST", headers=headers, body=body)
Here url is the one you found under the XHR filter, the method is "POST", the headers should be copied from the request we inspected earlier, and the body carries the parameters specific to what you are searching for; a sketch of how this could look is shown below.
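A minimal sketch of how the pieces could fit together, assuming the endpoint expects a JSON payload. The domain, header names, and body fields below are placeholders I made up; copy the real ones from the request you inspected in DevTools:

    import scrapy


    class SearchSpider(scrapy.Spider):
        name = "search"

        # Placeholder domain; the query string is the one seen under the XHR filter.
        search_url = (
            "https://www.example.com/search?cid=5168&isFacetsEnabled=true"
            "&globalShippingCountryCode=&globalShippingCurrencyCode="
            "&locale=en_US&pageId=0"
        )

        def start_requests(self):
            # Copy the real header names/values from the request headers in DevTools.
            headers = {
                "Content-Type": "application/json",
                "X-Requested-With": "XMLHttpRequest",
            }
            # Copy the real payload from the request body in DevTools;
            # these field names are invented for illustration.
            body = '{"searchTerm": "example", "pageId": 0}'
            yield scrapy.Request(
                self.search_url,
                callback=self.parse_data,
                method="POST",
                headers=headers,
                body=body,
            )

        def parse_data(self, response):
            # The response body is JSON; see the next snippet for handling it.
            ...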
From that you will get a JSON response, which you can save to a file or process however you like; see the example below.
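For example, the parse_data callback from the sketch above could decode the JSON and either dump it to disk or yield records (the "results" key is hypothetical; inspect the real response in DevTools to see where the data actually lives):

    import json

    def parse_data(self, response):
        # Decode the JSON body of the XHR response.
        data = json.loads(response.text)
        # Dump the raw payload to a file...
        with open("results.json", "w", encoding="utf-8") as f:
            json.dump(data, f, indent=2)
        # ...or yield individual records so Scrapy's feed exports can write them
        # (e.g. scrapy runspider myspider.py -o items.json).
        for record in data.get("results", []):
            yield record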
If you need more details let me know.