Python Requests ConnectionError [11001] getaddrinfo failed

I'm building a stock index web scraper with BeautifulSoup and requests, but I can't figure out what's blocking the connection.

Here's my code:

from bs4 import BeautifulSoup
import requests


def scraper(link):
    response = requests.get(link, verify=False)
    soup = BeautifulSoup(response.content, 'html.parser')
    company = soup.find(class_="D(ib) ").get_text()
    response.close()
    return company


def main():
    ticker = input("Input stock ticker symbol:")
    site = ("http://finace.yahoo.com/quote/" + ticker.upper().strip())
    data = scraper(site)
    print(data)


main()

And here's the exception:

C:\Python37-32\python.exe C:\Users\lucas\CS\Final_Project\Stock_Index_Scraper.py
Input stock ticker symbol:tsla
Traceback (most recent call last):
  File "C:\Python37-32\lib\site-packages\urllib3\connection.py", line 157, in _new_conn
    (self._dns_host, self.port), self.timeout, **extra_kw
  File "C:\Python37-32\lib\site-packages\urllib3\util\connection.py", line 61, in create_connection
    for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
  File "C:\Python37-32\lib\socket.py", line 748, in getaddrinfo
    for res in _socket.getaddrinfo(host, port, family, type, proto, flags):
socket.gaierror: [Errno 11001] getaddrinfo failed

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "C:\Python37-32\lib\site-packages\urllib3\connectionpool.py", line 672, in urlopen
    chunked=chunked,
  File "C:\Python37-32\lib\site-packages\urllib3\connectionpool.py", line 387, in _make_request
    conn.request(method, url, **httplib_request_kw)
  File "C:\Python37-32\lib\http\client.py", line 1244, in request
    self._send_request(method, url, body, headers, encode_chunked)
  File "C:\Python37-32\lib\http\client.py", line 1290, in _send_request
    self.endheaders(body, encode_chunked=encode_chunked)
  File "C:\Python37-32\lib\http\client.py", line 1239, in endheaders
    self._send_output(message_body, encode_chunked=encode_chunked)
  File "C:\Python37-32\lib\http\client.py", line 1026, in _send_output
    self.send(msg)
  File "C:\Python37-32\lib\http\client.py", line 966, in send
    self.connect()
  File "C:\Python37-32\lib\site-packages\urllib3\connection.py", line 184, in connect
    conn = self._new_conn()
  File "C:\Python37-32\lib\site-packages\urllib3\connection.py", line 169, in _new_conn
    self, "Failed to establish a new connection: %s" % e
urllib3.exceptions.NewConnectionError: <urllib3.connection.HTTPConnection object at 0x039969D0>: Failed to establish a new connection: [Errno 11001] getaddrinfo failed

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "C:\Python37-32\lib\site-packages\requests\adapters.py", line 449, in send
    timeout=timeout
  File "C:\Python37-32\lib\site-packages\urllib3\connectionpool.py", line 720, in urlopen
    method, url, error=e, _pool=self, _stacktrace=sys.exc_info()[2]
  File "C:\Python37-32\lib\site-packages\urllib3\util\retry.py", line 436, in increment
    raise MaxRetryError(_pool, url, error or ResponseError(cause))
urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='finace.yahoo.com', port=80): Max retries exceeded with url: /quote/TSLA (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x039969D0>: Failed to establish a new connection: [Errno 11001] getaddrinfo failed'))

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "C:\Users\lucas\CS\Final_Project\Stock_Index_Scraper.py", line 27, in <module>
    main()
  File "C:\Users\lucas\CS\Final_Project\Stock_Index_Scraper.py", line 17, in main
    data = scraper(site)
  File "C:\Users\lucas\CS\Final_Project\Stock_Index_Scraper.py", line 8, in scraper
    response = requests.get(link, verify=False)
  File "C:\Python37-32\lib\site-packages\requests\api.py", line 75, in get
    return request('get', url, params=params, **kwargs)
  File "C:\Python37-32\lib\site-packages\requests\api.py", line 60, in request
    return session.request(method=method, url=url, **kwargs)
  File "C:\Python37-32\lib\site-packages\requests\sessions.py", line 533, in request
    resp = self.send(prep, **send_kwargs)
  File "C:\Python37-32\lib\site-packages\requests\sessions.py", line 646, in send
    r = adapter.send(request, **kwargs)
  File "C:\Python37-32\lib\site-packages\requests\adapters.py", line 516, in send
    raise ConnectionError(e, request=request)
requests.exceptions.ConnectionError: HTTPConnectionPool(host='finace.yahoo.com', port=80): Max retries exceeded with url: /quote/TSLA (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x039969D0>: Failed to establish a new connection: [Errno 11001] getaddrinfo failed'))

Process finished with exit code 1
ljgoudre asked Mar 03 '26 23:03

1 Answer

The URL you are calling contains a typo: `http://finace.yahoo.com/quote/` should be `http://finance.yahoo.com/quote/` (the "n" in "finance" is missing). Because the hostname `finace.yahoo.com` does not exist, DNS resolution fails, which is exactly what `socket.gaierror: [Errno 11001] getaddrinfo failed` means on Windows.

As a rule of thumb, whenever your scraper behaves unexpectedly, reproduce the same request in a browser and/or with curl first; that quickly separates connection problems (DNS, proxies, blocked ports) from parsing or JavaScript issues.
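For reference, here is a minimal corrected sketch of the scraper. The `D(ib) ` selector is taken from the question and may break when Yahoo changes its markup; the switch to `https`, the 10-second timeout, and `raise_for_status()` are my additions, not part of the original code.

```python
def quote_url(ticker: str) -> str:
    """Build the Yahoo Finance quote URL -- note 'finance', not 'finace'."""
    return "https://finance.yahoo.com/quote/" + ticker.upper().strip()


def scraper(ticker: str) -> str:
    # Imports are local so quote_url() stays dependency-free and testable.
    import requests
    from bs4 import BeautifulSoup

    # A timeout prevents hanging forever; raise_for_status() surfaces
    # HTTP errors (404, 500, ...) instead of silently parsing an error page.
    response = requests.get(quote_url(ticker), timeout=10)
    response.raise_for_status()
    soup = BeautifulSoup(response.content, "html.parser")
    tag = soup.find(class_="D(ib) ")
    return tag.get_text() if tag else ""
```

Wrapping the call in `try`/`except requests.exceptions.ConnectionError` would also let you print a friendlier message (e.g. "check the URL or your network") instead of the raw traceback if name resolution fails again.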

Simas Joneliunas answered Mar 06 '26 13:03

