Topic: What I discovered about webscraping Bitcointalk.org (Read 160 times)

sr. member
Activity: 966
Merit: 421
Bitcoindata.science
Thank you, it solved the problem and worked just fine.

legendary
Activity: 1568
Merit: 6660
bitcoincleanup.com / bitmixlist.org
Try using the Requests library to read the data instead of urllib.

Although I no longer have the code sample to show you, my implementation of a post scraper using Requests worked magnificently well, with a timeout of 1 second.

You're probably running into issues with Cloudflare though, hence the 403. Maybe you should chain an anti-captcha browser or service to the library as well.
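
Something along these lines is roughly what I mean - this is a rough sketch from memory, not my original code, and the Mozilla User-Agent string is just an example value (the one-second timeout matches what I used):

Code:
import requests
from bs4 import BeautifulSoup

# Fetch the board index with a short timeout and a browser-like User-Agent
# so the request is less likely to be rejected.
headers = {'User-Agent': 'Mozilla/5.0'}
r = requests.get('https://bitcointalk.org/index.php', headers=headers, timeout=1)
r.raise_for_status()  # raises requests.HTTPError if the server answers with 403 etc.

soup = BeautifulSoup(r.text, 'html.parser')
print(soup.title)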
staff
Activity: 3500
Merit: 6152
There are two options here.

The first is to use requests instead of urllib:

Code:
import requests
from bs4 import BeautifulSoup

# Fetch the page with the requests library instead of urllib
r = requests.get('https://bitcointalk.org/index.php')
soup = BeautifulSoup(r.content, 'html.parser')
print(soup)

Or add a user-agent to the request you're making:

Code:
import urllib.request
from bs4 import BeautifulSoup

# Send a browser-like User-Agent header so the request isn't rejected with a 403
r = urllib.request.Request('https://bitcointalk.org/index.php', headers={'User-Agent': 'Mozilla/5.0'})
response = urllib.request.urlopen(r)
soup = BeautifulSoup(response.read(), 'html.parser')
print(soup)

Either way, make sure you're not sending requests too often[1]. You can use time.sleep for that, but note that it takes seconds in Python, not milliseconds.

[1] https://bitcointalksearch.org/topic/m.10442011
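
As a rough sketch of that rate-limiting idea (the board URLs below are only placeholders, and a one-second pause is just an example value):

Code:
import time
import requests
from bs4 import BeautifulSoup

# Placeholder list of pages; replace with the pages you actually want to scrape
urls = [
    'https://bitcointalk.org/index.php?board=1.0',
    'https://bitcointalk.org/index.php?board=2.0',
]

for url in urls:
    r = requests.get(url, headers={'User-Agent': 'Mozilla/5.0'})
    soup = BeautifulSoup(r.content, 'html.parser')
    print(url, soup.title)
    time.sleep(1)  # pause one second between requests; time.sleep takes seconds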
sr. member
Activity: 966
Merit: 421
Bitcoindata.science
Where's your code? Are you doing any looping? Trying to load the website multiple times a second will result in an error, but I'm not sure if there's something else going on too, as you've not added your code - feel free to DM me if you don't want to post it publicly, but remove any login details if there are any.

time.sleep(1000) would be enough to add to a loop to stop the error - the time is in milliseconds if you want to edit it.
My code is in the element I posted, but I will still type it out here in case it is not visible.

Code:
!pip install beautifulsoup4
import urllib.request
import re
from bs4 import BeautifulSoup
import time

time.sleep(1000)
r = urllib.request.urlopen('https://bitcointalk.org/index.php?').read()
soup = BeautifulSoup(r, 'html.parser')
type(soup)
I added the time.sleep(1000), but the entire cell just went to sleep and then finally popped up with the same error message:
Code:
HTTPError: HTTP Error 403: Forbidden
copper member
Activity: 2856
Merit: 3071
https://bit.ly/387FXHi lightning theory
Where's your code? Are you doing any looping? Trying to load the website multiple times a second will result in an error, but I'm not sure if there's something else going on too, as you've not added your code - feel free to DM me if you don't want to post it publicly, but remove any login details if there are any.

time.sleep(1000) would be enough to add to a loop to stop the error - the time is in milliseconds if you want to edit it.
sr. member
Activity: 966
Merit: 421
Bitcoindata.science
Hello mates, I tried doing some fun stuff with Python's BeautifulSoup library to scrape some information and save it in a variable, with the idea of looking at the anchor tags, finding the users with the highest activity in the last 20 days, and generally playing around with data scraped from the Bitcointalk URL. Unfortunately I got an error message. I tried the code on a few other sites and it worked well, but the forum gave me this error:

Code:
HTTPError: HTTP Error 403: Forbidden

I tried the same code on a few other sites (like analytics) and was able to get all the href and anchor tags from them.

I did something similar for Facebook and it worked, so I kept wondering why it didn't work for the Bitcointalk URL. I would be glad if someone could educate me on why I can't scrape information from the forum.
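
For illustration, the kind of link extraction I mean looks roughly like this - the URL below is just an example of a site where this sort of code worked, not the exact one I used:

Code:
import urllib.request
from bs4 import BeautifulSoup

# Example only: fetch a page and list every anchor tag with its href
html = urllib.request.urlopen('https://www.python.org/').read()
soup = BeautifulSoup(html, 'html.parser')
for a in soup.find_all('a'):
    print(a.get('href'), a.get_text(strip=True))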
