You should probably adjust your request for scraping.
Test using curl
When I run a curl request against this URL, the returned HTML does not contain the expected <span class="a-size-medium a-color-base a-text-normal">.
curl -s 'https://www.amazon.com/s?k=samsung+tablet&crid=3VMSMTMZYOP78&sprefix=samsung+%2Caps%2C273&ref=nb_sb_ss_ts-doa-p_2_8' | grep '<span class='
It contains only the following spans:
<span class="a-button a-button-primary a-span12">
<span class="a-button-inner">
<span class="a-letter-space"></span>
<span class="a-letter-space"></span>
<span class="a-letter-space"></span>
<span class="a-letter-space"></span>
Test the soup
You can also test the soup, as HedgeHog commented:
import requests  # import the library for sending requests to the server
from bs4 import BeautifulSoup  # import the library for parsing the webpage

url = 'https://www.amazon.com/s?k=samsung+tablet&crid=3VMSMTMZYOP78&sprefix=samsung+%2Caps%2C273&ref=nb_sb_ss_ts-doa-p_2_8'
response = requests.get(url)  # GET request
soup = BeautifulSoup(response.text, 'html.parser')  # adjusted from lxml to the built-in HTML parser

print(soup)  # contains span elements, but not the expected ones

# note: inside the attrs dict the key is 'class', not 'class_'
elements = soup.find_all('span', attrs={'class': 'a-size-medium a-color-base a-text-normal'})
print(elements)  # empty list, the expected spans were not found
You will discover a robot-prevention page, which probably uses a captcha to verify that a human is using a browser:
<h4>Enter the characters you see below</h4>
<p class="a-last">Sorry, we just need to make sure you're not a robot. For best results, please make sure your browser is accepting cookies.</p>
Fun fact:
You can copy & paste the resulting HTML (or write it to a file) and open it in your browser. It shows the guard dogs of Amazon:

See also All The Dogs You Can Meet If You're Trying To Get On Amazon But It's Broken
Workaround: passing required request headers
Further research suggests adding two headers to the request (which your browser automatically sends, too):

- a valid User-Agent (can simulate a specific browser and OS/platform)
- an Accept-Language (required by most e-commerce pages to localize content)
In requests you can add them as a dictionary like this:
HEADERS = {
    'User-Agent': 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/44.0.2403.157 Safari/537.36',
    'Accept-Language': 'en-US, en;q=0.5'
}

response = requests.get(URL, headers=HEADERS)