
I am building a program that checks Xbox usernames from a text file. Whenever I run it, it hangs for about 20 seconds and then stops with no error. I am hosting this program on Replit for now. My code is:

import requests
import pyfiglet
from colorama import Fore
import time
from getkey import getkey
proxies = {
  "http": "http://43.155.59.126:3128",
  "http": "http://88.198.24.108:3128",
  "http": "http://178.47.141.85:2580",
}
def turbo():
  url = 'https://xboxgamertag.com/search/'
  filename = input('File name?\n')
  with open(filename) as f:
      line = f.readline()
      while line:
          line = f.readline()
          req = requests.get(url + line, proxies=proxies)
          if req == '200':
            print(Fore.RED + "[AVAILABLE] " + line)
          elif req == '404':
            print(Fore.GREEN + "[AVAILABLE] " + line)
def menu():
  menu = pyfiglet.figlet_format("Xbox Turbo") 
  print(menu)
  time.sleep(0.5)
  print(Fore.GREEN + "made by ooaz#0001")
  time.sleep(0.5)
  print(Fore.WHITE + "[1] Xbox Turbo")
  key = getkey()
  if key == '1':
    turbo()
menu()

The proxies are NOT confirmed working.

vey
  • `f.readline()` will return lines with newline chars `\n` at the end, does stripping these off with `line = f.readline().strip()` work? – Iain Shelvington Dec 29 '21 at 01:23
  • Added it like this: `while line: line = f.readline(); line = line.strip()` and it didn't work – vey Dec 29 '21 at 01:24
  • Wait I see you edited it let me try that – vey Dec 29 '21 at 01:25
  • @IainShelvington error: `AttributeError: 'builtin_function_or_method' object has no attribute 'strip'` – vey Dec 29 '21 at 01:26
  • @IainShelvington Am I supposed to put it outside or inside the while? I put it inside – vey Dec 29 '21 at 01:26
  • I think you may not be chaining the methods correctly. You should also check the status of the response using `req.status_code` like: `if req.status_code == 200:` – Iain Shelvington Dec 29 '21 at 01:28
  • `requests.get` returns a `Response` object. You compare it to the strings `'200'` and `'404'`, and that will almost certainly be False. If so, your program won't produce any output. I think you want to compare `req.status_code` with integer values 200 and 404. – Paul Cornelius Dec 29 '21 at 01:30
  • Trying this, thanks – vey Dec 29 '21 at 01:31
  • Got past the error with the strip and I tried the req.status_code and it still isn't loading. I think you should check the Replit I made to see further errors: https://replit.com/@s1ushy/FondHightechJavascript#main.py:30 – vey Dec 29 '21 at 01:32
  • @vey compare the status_code to an int not a string: `if req.status_code == 200:` – Iain Shelvington Dec 29 '21 at 01:34
  • tried that with int(200): it didn't work – vey Dec 29 '21 at 01:35
  • @vey can you add another condition/print for `if req.status_code == 403:`? You might just be getting permission denied every time – Iain Shelvington Dec 29 '21 at 01:45
  • That's what it was, thanks :) – vey Dec 29 '21 at 01:57
  • Now I have to find another proxy lmao – vey Dec 29 '21 at 01:59
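Pulling the comment fixes together, a sketch of the corrected loop might look like this. It assumes 404 means the gamertag page doesn't exist (so the name is available) and 200 means it's taken — the original printed "[AVAILABLE]" for both — and it iterates the file directly instead of the double-`readline()` pattern, which skipped the first line and made one final request with an empty string:

```python
import requests

def classify(status_code):
    """Map an HTTP status to a label; 404 is assumed to mean the name is free."""
    if status_code == 404:
        return "AVAILABLE"
    if status_code == 200:
        return "TAKEN"
    return f"HTTP {status_code}"   # surfaces 403s instead of hiding them

def turbo(filename):
    url = 'https://xboxgamertag.com/search/'
    with open(filename) as f:
        for raw in f:              # iterate lines directly; nothing is skipped
            name = raw.strip()     # drop the trailing '\n'
            if not name:
                continue           # skip blank lines
            req = requests.get(url + name, timeout=10)
            print(f"[{classify(req.status_code)}] {name}")
```

Printing the status for unexpected codes is what would have revealed the 403 immediately, rather than the script silently printing nothing.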

2 Answers


I think you need to replace your proxies with:

proxies = {
  "http": "http://43.155.59.126:3128",
  "https": "http://43.155.59.126:3128"
}

Choose one proxy per protocol: `proxies` is a dictionary keyed by protocol, so repeating the `"http"` key overwrites the earlier entries and only the last value is kept.

More information: https://docs.python-requests.org/en/latest/user/advanced/#proxies
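To see why the repeated keys matter: a Python dict literal silently keeps only the last value for a duplicated key, so the question's three-entry `proxies` dict collapses to a single proxy:

```python
# Same dict literal as in the question — "http" appears three times
proxies = {
  "http": "http://43.155.59.126:3128",
  "http": "http://88.198.24.108:3128",
  "http": "http://178.47.141.85:2580",
}
print(proxies)  # {'http': 'http://178.47.141.85:2580'} — only the last survives
```

Note also that with only an `"http"` key, requests to `https://` URLs bypass the proxy entirely.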

Gonzalo Odiard
  • Still no, check my Replit: https://replit.com/@s1ushy/FondHightechJavascript#main.py:30 – vey Dec 29 '21 at 01:16
  • Try to implement a minimal test, just doing the request. There are other questions about this issue on SO too, like https://stackoverflow.com/questions/24058127/https-proxies-not-working-with-pythons-requests-module – Gonzalo Odiard Dec 29 '21 at 01:26

Most, if not all, large sites detect bots — sometimes by naively inspecting the User-Agent you send, and sometimes by more sophisticated means, such as probing your "browser" for capabilities like playing sound or reporting a window resolution.
That looks like the case here:

>>> import requests
>>> result = requests.get('https://xboxgamertag.com/search/foo')
>>> result
<Response [403]>
>>> result.reason
'Forbidden'

Adding the proxies makes no difference:

>>> proxies = {
...   "https": "http://43.155.59.126:3128",
...   "http": "http://88.198.24.108:3128",
...   "http": "http://178.47.141.85:2580",
... }
>>> result = requests.get('https://xboxgamertag.com/search/foo', proxies=proxies)
>>> result
<Response [403]>
>>> result.reason
'Forbidden'
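As a first, hedged workaround you can try sending a browser-like `User-Agent` header along with a `timeout` so a dead proxy can't hang the script. The header string below is just an example; whether it satisfies this site's bot detection is not guaranteed:

```python
import requests

# Hypothetical browser-like headers — no guarantee they pass detection
BROWSER_HEADERS = {
    "User-Agent": ("Mozilla/5.0 (Windows NT 10.0; Win64; x64) "
                   "AppleWebKit/537.36 (KHTML, like Gecko) "
                   "Chrome/96.0 Safari/537.36"),
}

def check(name):
    # timeout= keeps a slow/dead proxy or server from hanging indefinitely,
    # which matches the ~20-second stall described in the question
    return requests.get('https://xboxgamertag.com/search/' + name,
                        headers=BROWSER_HEADERS, timeout=10)
```

If the site still returns 403, the detection is happening at a deeper level than headers, and a real browser driver (see the Selenium suggestion below in the comments) is the next escalation.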
edd
  • I understand this and I am still playing with proxies that I am trying to use for this, just trying to get the program up before I edit the proxy chain – vey Dec 29 '21 at 01:30
  • That's fair but the HTTP response and reason remain the same. Most likely falling into the same detection category. But, regardless, replit fails because of a script execution timeout. The proxies take a very long time to respond (expected) – edd Dec 29 '21 at 01:45
  • @vey if the site is especially stubborn, you can try Selenium. It is more complex than `requests` and requires reading+patience. But, it uses a browser "driver" to talk HTTP, increasing your chances. Some _very_ stubborn sites detect these automation frameworks using said driver's special JS libraries. To which you also have methods of treating. And the rabbit hole goes on. – edd Dec 29 '21 at 02:01