5 Anti-Scraping Techniques You May Encounter

Octoparse
7 min read · Aug 9, 2019

Photo by Ian Schneider on Unsplash

With the advent of big data, people have started to obtain data from the Internet for data analysis with the help of web crawlers. There are various ways to build your own crawler: browser extensions, Python coding with Beautiful Soup or Scrapy, or data extraction tools like Octoparse.
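To make the Python route concrete, here is a minimal sketch of the parsing step a crawler performs. It uses only the standard library's `html.parser` (Beautiful Soup offers a richer API for the same task); the HTML snippet and class name are illustrative, not from the article.

```python
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collect the href of every <a> tag -- similar in spirit to
    soup.find_all('a') in Beautiful Soup."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

html = '<p><a href="https://example.com">home</a> <a href="/about">about</a></p>'
parser = LinkExtractor()
parser.feed(html)
print(parser.links)  # ['https://example.com', '/about']
```

A real crawler would fetch pages over HTTP first (e.g. with the `requests` library) and feed each response body into a parser like this.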

However, there is an ongoing arms race between spiders and anti-bots. Web developers apply different kinds of anti-scraping techniques to keep their websites from being scraped. In this article, I have listed the five most common anti-scraping techniques and how they can be avoided.

1. IP

One of the easiest ways for a website to detect web-scraping activity is IP tracking. A website can identify whether an IP belongs to a robot based on its behavior. When a website finds that an overwhelming number of requests have been sent from a single IP address periodically or within a short period of time, there is a good chance the IP will be blocked on suspicion of being a bot. So what really matters for building a crawler that avoids detection is the number and frequency of visits per unit of time. Here are some scenarios you may encounter.
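The server side of this idea can be sketched as a sliding-window counter: track recent request timestamps per IP and flag any IP that exceeds a threshold within the window. This is a hypothetical illustration of the detection logic described above, not code from any particular anti-bot product; the class name and thresholds are invented.

```python
import time
from collections import defaultdict, deque

class IpRateTracker:
    """Flag IPs that send more than max_requests within window_s seconds."""
    def __init__(self, window_s=10.0, max_requests=20):
        self.window_s = window_s
        self.max_requests = max_requests
        self.hits = defaultdict(deque)  # ip -> timestamps of recent requests

    def allow(self, ip, now=None):
        """Record a request from ip; return False if it exceeds the limit."""
        now = time.monotonic() if now is None else now
        q = self.hits[ip]
        q.append(now)
        # Drop timestamps that have fallen out of the sliding window.
        while q and now - q[0] > self.window_s:
            q.popleft()
        return len(q) <= self.max_requests
```

From the crawler's perspective, the lesson is the inverse: keep the request rate per IP below whatever threshold the target site is likely to enforce.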

Scenario 1: Making multiple visits within seconds. There’s no way a real human can browse that…
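One common countermeasure for this scenario is to space requests out with randomized delays so traffic looks less mechanical than a burst of visits within seconds. A minimal sketch, with invented parameter values and a commented-out placeholder `fetch` function:

```python
import random
import time

def polite_delays(n, min_s=2.0, max_s=6.0, rng=None):
    """Return n randomized wait times (in seconds) to insert between
    successive requests, drawn uniformly from [min_s, max_s]."""
    rng = rng or random.Random()
    return [rng.uniform(min_s, max_s) for _ in range(n)]

# Usage sketch: sleep between successive page fetches.
# for url, delay in zip(urls, polite_delays(len(urls))):
#     fetch(url)          # hypothetical download function
#     time.sleep(delay)
```

Randomizing the interval (rather than sleeping a fixed amount) also avoids the perfectly periodic pattern that the article notes can itself trigger IP blocking.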
