# Search Engine Scraper - se-scraper
[![npm](https://img.shields.io/npm/v/se-scraper.svg?style=for-the-badge)](https://www.npmjs.com/package/se-scraper)
[![Donate](https://img.shields.io/badge/donate-paypal-blue.svg?style=for-the-badge)](https://www.paypal.me/incolumitas)
[![Known Vulnerabilities](https://snyk.io/test/github/NikolaiT/se-scraper/badge.svg)](https://snyk.io/test/github/NikolaiT/se-scraper)
This node module allows you to scrape search engines concurrently with different proxies.

If you don't have extensive technical experience or don't want to purchase proxies, you can use [my scraping service](https://scrapeulous.com/).

#### Table of Contents
- [Installation](#installation)
- [Docker](#docker-support)
- [Minimal Example](#minimal-example)
- [Quickstart](#quickstart)
- [Contribute](#contribute)
- [Using Proxies](#proxies)
- [Custom Scrapers](#custom-scrapers)
- [Examples](#examples)
- [Scraping Model](#scraping-model)
- [Technical Notes](#technical-notes)
- [Advanced Usage](#advanced-usage)
- [Special Query String Parameters for Search Engines](#query-string-parameters)

Se-scraper supports the following search engines:

* Google
* Google News
* Google News App version (https://news.google.com)
* Google Image
* Bing
* Bing News
* Infospace
* Duckduckgo
* Yandex
* Webcrawler

This module uses puppeteer and a modified version of [puppeteer-cluster](https://github.com/thomasdondorf/puppeteer-cluster/). It was created by the developer of [GoogleScraper](https://github.com/NikolaiT/GoogleScraper), a module with 1800 stars on GitHub.
## Installation
You need a working installation of **node** and the **npm** package manager.

For example, if you are using Ubuntu 18.04, you can install node and npm with the following commands:
```bash
sudo apt update;
# install a recent version of node and npm from the NodeSource repository
curl -sL https://deb.nodesource.com/setup_10.x -o nodesource_setup.sh;
sudo bash nodesource_setup.sh;
sudo apt install nodejs;
```
Chrome and puppeteer [need some additional libraries to run on Ubuntu](https://techoverflow.net/2018/06/05/how-to-fix-puppetteer-error-).

The following command installs these dependencies:
```bash
# install the libraries needed by the chromium browser (this list may not be exhaustive)
sudo apt-get install gconf-service libasound2 libatk1.0-0 libc6 libcairo2 libcups2 libdbus-1-3 libexpat1 libfontconfig1 libgcc1 libgconf-2-4 libgdk-pixbuf2.0-0 libglib2.0-0 libgtk-3-0 libnspr4 libpango-1.0-0 libpangocairo-1.0-0 libstdc++6 libx11-6 libx11-xcb1 libxcb1 libxcomposite1 libxcursor1 libxdamage1 libxext6 libxfixes3 libxi6 libxrandr2 libxrender1 libxss1 libxtst6 ca-certificates fonts-liberation libappindicator1 libnss3 lsb-release xdg-utils wget;
```
Install **se-scraper** by entering the following command in your terminal:
```bash
npm install se-scraper
```
If you **don't** want puppeteer to download a complete chromium browser, add this variable to your environment. Note that se-scraper is then not guaranteed to run out of the box.
```bash
export PUPPETEER_SKIP_CHROMIUM_DOWNLOAD=1
```
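In that case you will usually have to point puppeteer to an existing Chrome/Chromium binary yourself, for example via puppeteer's `PUPPETEER_EXECUTABLE_PATH` environment variable. A minimal sketch (the path below is only an example and must match your system):

```bash
# example only: adjust the path to a Chrome/Chromium binary installed on your system
export PUPPETEER_EXECUTABLE_PATH=/usr/bin/chromium-browser
```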
### Docker Support

I will maintain a public docker image of se-scraper. Pull the docker image with the command:
```bash
docker pull tschachn/se-scraper
```
Confirm that the docker image was correctly pulled:

```bash
docker image ls
```

It should show something like this:

```
REPOSITORY            TAG       IMAGE ID       CREATED          SIZE
tschachn/se-scraper   latest    897e1aeeba78   21 minutes ago   1.29GB
```
You can check the [latest tag here](https://hub.docker.com/r/tschachn/se-scraper/tags). In the example below, the tag is **latest**, which will most likely remain the correct tag in the future.

Run the docker image and map the internal port 3000 to the external port 3000:
```bash
$ docker run -p 3000:3000 tschachn/se-scraper:latest
Running on http://0.0.0.0:3000
```
When the image is running, you may start scrape jobs via the HTTP API:
```bash
curl -XPOST http://0.0.0.0:3000 -H 'Content-Type: application/json' \
  -d '{
    "browser_config": {
        "random_user_agent": true
    },
    "scrape_config": {
        "search_engine": "google",
        "keywords": ["test"],
        "num_pages": 1
    }
}'
```
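The same job can also be submitted from Node. A minimal sketch (assumes Node 18+ for the built-in `fetch`; host and port match the `docker run` command above):

```js
// minimal sketch: POST a scrape job to the running se-scraper container
(async () => {
    const response = await fetch('http://0.0.0.0:3000', {
        method: 'POST',
        headers: {'Content-Type': 'application/json'},
        body: JSON.stringify({
            browser_config: {random_user_agent: true},
            scrape_config: {
                search_engine: 'google',
                keywords: ['test'],
                num_pages: 1,
            },
        }),
    });
    console.dir(await response.json(), {depth: null, colors: true});
})();
```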
Many thanks go to [slotix](https://github.com/NikolaiT/se-scraper/pull/21) for his tremendous help in setting up the docker image.
## Minimal Example
Create a file named `minimal.js` with the following contents:
```js
const se_scraper = require('se-scraper');

(async () => {
    let scrape_job = {
        search_engine: 'google',
        keywords: ['lets go boys'],
        num_pages: 1,
    };

    var results = await se_scraper.scrape({}, scrape_job);
    console.dir(results, {depth: null, colors: true});
})();
```
Start scraping by firing up the command `node minimal.js`
## Quickstart
Create a file named `run.js` with the following contents:
```js
const se_scraper = require('se-scraper');

(async () => {
    let browser_config = {
        debug_level: 1,
        output_file: 'examples/results/data.json',
    };

    let scrape_job = {
        search_engine: 'google',
        keywords: ['news', 'se-scraper'],
        num_pages: 1,
        // add some cool google search settings
        google_settings: {
            gl: 'us', // The gl parameter determines the Google country to use for the query.
            hl: 'en', // The hl parameter determines the Google UI language to return results.
            start: 0, // Determines the results offset to use, defaults to 0.
            num: 100, // Determines the number of results to show, defaults to 10. Maximum is 100.
        },
    };

    var scraper = new se_scraper.ScrapeManager(browser_config);

    await scraper.start();

    var results = await scraper.scrape(scrape_job);
    console.dir(results, {depth: null, colors: true});

    await scraper.quit();
})();
```
Start scraping by firing up the command `node run.js`
## Contribute
I really need and love your help! However, scraping is a dirty business and it often takes me a lot of time to find failing selectors or missing JS logic. So if any search engine does not yield the results of your liking, please create a **static test case** similar to [this static test of google](test/static_tests/google.js) that fails. I will then try to correct se-scraper.

This is how you would proceed:

1. Copy the [static google test case](test/static_tests/google.js)
2. Remove all unnecessary testing code
3. Save a search to file where se-scraper does not work correctly.
4. Implement the static test case using the saved search html where se-scraper currently fails.
5. Submit a new issue with the failing test case as a pull request
6. I will fix it! (or better: you submit a pull request directly)
## Proxies

**se-scraper** will create one browser instance per proxy. So the maximum concurrency is equivalent to the number of proxies plus one (your own IP).
```js
const se_scraper = require('se-scraper');

(async () => {
    let browser_config = {
        debug_level: 1,
        output_file: 'examples/results/proxyresults.json',
        proxy_file: '/home/nikolai/.proxies', // one proxy per line
        log_ip_address: true,
    };

    let scrape_job = {
        search_engine: 'google',
        keywords: ['news', 'scrapeulous.com', 'incolumitas.com', 'i work too much', 'what to do?', 'javascript is hard'],
        num_pages: 1,
    };

    var scraper = new se_scraper.ScrapeManager(browser_config);
    await scraper.start();

    var results = await scraper.scrape(scrape_job);
    console.dir(results, {depth: null, colors: true});

    await scraper.quit();
})();
```
With a proxy file such as
```text
socks5://53.34.23.55:55523
socks4://51.11.23.22:22222
```
This will scrape with **three** browser instances, each having its own IP address. Unfortunately, it is currently not possible to scrape with different proxies per tab. Chromium does not support that.
## Custom Scrapers
You can define your own scraper class and use it within se-scraper.
[Check out this example](examples/custom_scraper.js) that defines a custom scraper for Ecosia.
## Examples
* [Reuse existing browser](examples/multiple_search_engines.js) yields [these results](examples/results/multiple_search_engines.json)
* [Simple example scraping google](examples/quickstart.js) yields [these results](examples/results/data.json)
* [Scrape with one proxy per browser](examples/proxies.js) yields [these results](examples/results/proxyresults.json)
* [Scrape 100 keywords on Bing with multiple tabs in one browser](examples/multiple_tabs.js) produces [this](examples/results/bing.json)
* [Inject your own scraping logic](examples/pluggable.js)
* [For the Lulz: Scraping google dorks for SQL injection vulnerabilities and confirming them.](examples/for_the_lulz.js)
* [Scrape google maps/locations](examples/google_maps.js) yields [these results](examples/results/maps.json)
## Scraping Model
**se-scraper** scrapes search engines only. In order to introduce concurrency into this library, it is necessary to define the scraping model. Then we can decide how we divide and conquer.

#### Scraping Resources

What are common scraping resources?

1. **Memory and CPU**. Necessary to launch multiple browser instances.
2. **Network Bandwidth**. It is usually not the bottleneck.
3. **IP Addresses**. Websites often block IP addresses after a certain amount of requests from the same IP address. This can be circumvented by using proxies.
4. Spoofable identifiers such as browser fingerprints or user agents. These are handled by **se-scraper**.
#### Concurrency Model

**se-scraper** should be able to run without any concurrency at all. This is the default case. No concurrency means that only one browser/tab is searching at a time.

For concurrent use, we will make use of a modified [puppeteer-cluster library](https://github.com/thomasdondorf/puppeteer-cluster).

One scrape job is properly defined by:

* 1 search engine such as `google`
* `M` pages
* `N` keywords/queries
* `K` proxies and `K+1` browser instances (because when we have no proxies available, we will scrape with our dedicated IP)

Then **se-scraper** will create `K+1` dedicated browser instances, each with a unique IP address. Each browser will get `N/(K+1)` keywords and will issue `N/(K+1) * M` total requests to the search engine.
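The following is an illustrative sketch of that distribution (not se-scraper's actual internal code): `K` proxies plus your own IP give `K+1` browsers, and the keywords are split evenly among them.

```js
// illustrative sketch: distribute N keywords over K proxies + the dedicated IP
function distributeKeywords(keywords, proxies) {
    const numBrowsers = proxies.length + 1; // K proxies plus our own IP
    const buckets = Array.from({length: numBrowsers}, () => []);
    keywords.forEach((keyword, i) => buckets[i % numBrowsers].push(keyword));
    return buckets; // each bucket is handled by one dedicated browser instance
}

// 6 keywords and 2 proxies => 3 browsers with 2 keywords each,
// every browser issuing 2 * M requests for M pages per keyword
console.log(distributeKeywords(
    ['news', 'scrapeulous.com', 'incolumitas.com', 'cat', 'mouse', 'dog'],
    ['socks5://53.34.23.55:55523', 'socks4://51.11.23.22:22222'],
));
```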
The problem is that the [puppeteer-cluster library](https://github.com/thomasdondorf/puppeteer-cluster) only allows identical options for subsequent new browser instances. Therefore, it is not trivial to launch a cluster of browsers with distinct proxy settings. Right now, every browser has the same options; it is not possible to set options on a per-browser basis.

Solution:

1. Create an [upstream proxy router](https://github.com/GoogleChrome/puppeteer/issues/678).
2. Modify the [puppeteer-cluster library](https://github.com/thomasdondorf/puppeteer-cluster) to accept a list of proxy strings and then pop() from this list at every new call to `workerInstance()` in https://github.com/thomasdondorf/puppeteer-cluster/blob/master/src/Cluster.ts. I wrote an [issue here](https://github.com/thomasdondorf/puppeteer-cluster/issues/107). **I ended up doing this**.
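Below is an illustrative sketch of that approach (not the actual modified puppeteer-cluster code): every newly launched worker pops one proxy string from a shared list and adds it to its own chrome launch flags.

```js
// illustrative sketch: assign one proxy per newly launched browser instance
const proxies = ['socks5://53.34.23.55:55523', 'socks4://51.11.23.22:22222'];

function launchOptionsForNextWorker(baseOptions) {
    const proxy = proxies.pop(); // undefined once exhausted => browser uses our own IP
    const args = [...(baseOptions.args || [])];
    if (proxy) {
        args.push(`--proxy-server=${proxy}`);
    }
    return {...baseOptions, args};
}
```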
## Technical Notes
Scraping is done with a headless chromium browser using the automation library puppeteer. Puppeteer is a Node library which provides a high-level API to control headless Chrome or Chromium over the DevTools Protocol.

If you need to deploy scraping to the cloud (AWS or Azure), you can contact me at **hire@incolumitas.com**.

The chromium browser is started with the following flags to prevent scraping detection:
```js
var ADDITIONAL_CHROME_FLAGS = [
    '--disable-infobars',
    '--window-position=0,0',
    '--ignore-certifcate-errors',
    '--ignore-certifcate-errors-spki-list',
    '--no-sandbox',
    '--disable-setuid-sandbox',
    '--disable-dev-shm-usage',
    '--disable-accelerated-2d-canvas',
    '--disable-gpu',
    '--window-size=1920x1080',
    '--hide-scrollbars',
    '--disable-notifications',
];
```
Furthermore, to avoid loading unnecessary resources and to speed up scraping a great deal, we instruct chrome not to load images, css and media:
```js
await page.setRequestInterception(true);
page.on('request', (req) => {
    let type = req.resourceType();
    const block = ['stylesheet', 'font', 'image', 'media'];
    if (block.includes(type)) {
        req.abort();
    } else {
        req.continue();
    }
});
```
#### Making puppeteer and headless chrome undetectable
Consider the following resources:

* https://antoinevastel.com/bot%20detection/2019/07/19/detecting-chrome-headless-v3.html
* https://intoli.com/blog/making-chrome-headless-undetectable/
* https://intoli.com/blog/not-possible-to-block-chrome-headless/
* https://news.ycombinator.com/item?id=16179602

**se-scraper** implements the countermeasures against headless chrome detection proposed on those sites.

The most recent detection countermeasures can be found here:

* https://github.com/paulirish/headless-cat-n-mouse/blob/master/apply-evasions.js

**se-scraper** makes use of those anti-detection techniques.

To check whether evasion works, you can test it by passing the `test_evasion` flag to the config:
```js
let config = {
    // check if headless chrome escapes common detection techniques
    test_evasion: true
};
```
It will create a screenshot named `headless-test-result.png` in the directory where the scraper was started that shows whether all tests have passed.
## Advanced Usage
Use **se-scraper** by calling it with a script such as the one below.
```js
const se_scraper = require('se-scraper');

// those options need to be provided on startup
// and cannot be given to se-scraper on scrape() calls
let browser_config = {
    // the user agent to scrape with
    user_agent: 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/77.0.3835.0 Safari/537.36',
    // if random_user_agent is set to True, a random user agent is chosen
    random_user_agent: false,
    // whether to select manual settings in visible mode
    set_manual_settings: false,
    // log ip address data
    log_ip_address: false,
    // log http headers
    log_http_headers: false,
    // how long to sleep between requests. a random sleep interval within the range [a,b]
    // is drawn before every request. empty string for no sleeping.
    sleep_range: '',
    // which search engine to scrape
    search_engine: 'google',
    compress: false, // compress
    // whether debug information should be printed
    // level 0: print nothing
    // level 1: print most important info
    // ...
    // level 4: print all shit nobody wants to know
    debug_level: 1,
    keywords: ['nodejs rocks',],
    // whether to start the browser in headless mode
    headless: true,
    // specify flags passed to chrome here
    chrome_flags: [],
    // the number of pages to scrape for each keyword
    num_pages: 1,
    // path to output file, data will be stored in JSON
    output_file: '',
    // whether to also passthru all the html output of the serp pages
    html_output: false,
    // whether to return a screenshot of serp pages as b64 data
    screen_output: false,
    // whether to prevent images, css, fonts and media from being loaded
    // will speed up scraping a great deal
    block_assets: true,
    // path to js module that extends functionality
    // this module should export the functions:
    // get_browser, handle_metadata, close_browser
    //custom_func: resolve('examples/pluggable.js'),
    custom_func: '',
    throw_on_detection: false,
    // use a proxy for all connections
    // example: 'socks5://78.94.172.42:1080'
    // example: 'http://118.174.233.10:48400'
    proxy: '',
    // a file with one proxy per line. Example:
    // socks5://78.94.172.42:1080
    // http://118.174.233.10:48400
    proxy_file: '',
    // whether to use proxies only
    // when this is set to true, se-scraper will not use
    // your default IP address
    use_proxies_only: false,
    // check if headless chrome escapes common detection techniques
    // this is a quick test and should be used for debugging
    test_evasion: false,
    apply_evasion_techniques: true,
    // settings for puppeteer-cluster
    puppeteer_cluster_config: {
        timeout: 30 * 60 * 1000, // max timeout set to 30 minutes
        monitor: false,
        concurrency: Cluster.CONCURRENCY_BROWSER,
        maxConcurrency: 1,
    }
};

(async () => {
    // scrape config can change on each scrape() call
    let scrape_config = {
        // which search engine to scrape
        search_engine: 'google',
        // an array of keywords to scrape
        keywords: ['cat', 'mouse'],
        // the number of pages to scrape for each keyword
        num_pages: 2,

        // OPTIONAL PARAMS BELOW:
        google_settings: {
            gl: 'us', // The gl parameter determines the Google country to use for the query.
            hl: 'fr', // The hl parameter determines the Google UI language to return results.
            start: 0, // Determines the results offset to use, defaults to 0.
            num: 100, // Determines the number of results to show, defaults to 10. Maximum is 100.
        },
        // instead of keywords you can specify a keyword_file. this overwrites the keywords array
        keyword_file: '',
        // how long to sleep between requests. a random sleep interval within the range [a,b]
        // is drawn before every request. empty string for no sleeping.
        sleep_range: '',
        // path to output file, data will be stored in JSON
        output_file: 'output.json',
        // whether to prevent images, css, fonts from being loaded
        // will speed up scraping a great deal
        block_assets: false,
        // check if headless chrome escapes common detection techniques
        // this is a quick test and should be used for debugging
        test_evasion: false,
        apply_evasion_techniques: true,
        // log ip address data
        log_ip_address: false,
        // log http headers
        log_http_headers: false,
    };

    let results = await se_scraper.scrape(browser_config, scrape_config);
    console.dir(results, {depth: null, colors: true});
})();
```
[Output for the above script on my machine.](examples/results/advanced.json)

### Query String Parameters

You can add your custom query string parameters to the configuration object by specifying a `google_settings` key. In general: `{{search engine}}_settings`.

For example, you can customize your google search with the following config:
```js
let scrape_config = {
    search_engine: 'google',
    // use specific search engine parameters for various search engines
    google_settings: {
        google_domain: 'google.com',
        gl: 'us', // The gl parameter determines the Google country to use for the query.
        hl: 'us', // The hl parameter determines the Google UI language to return results.
        start: 0, // Determines the results offset to use, defaults to 0.
        num: 100, // Determines the number of results to show, defaults to 10. Maximum is 100.
    },
}
```
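The same pattern applies to the other supported search engines. A hypothetical sketch for Bing (the `bing_settings` key follows the `{{search engine}}_settings` convention above; `first` and `count` are ordinary Bing URL parameters and are not taken from the se-scraper docs, so verify them before relying on this):

```js
// hypothetical sketch following the {{search engine}}_settings convention
let scrape_config = {
    search_engine: 'bing',
    bing_settings: {
        first: 1,  // result offset in Bing's query string (1 = first page)
        count: 50, // number of results per page
    },
};
```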