
Search Engine Scraper - se-scraper

This node module allows you to scrape search engines concurrently with different proxies.

If you don't have much technical experience or don't want to purchase proxies, you can use my scraping service.

Se-scraper supports the following search engines:

  • Google
  • Google News
  • Google News App version (https://news.google.com)
  • Google Image
  • Amazon
  • Bing
  • Bing News
  • Baidu
  • Youtube
  • Infospace
  • Duckduckgo
  • Webcrawler
  • Reuters
  • Cnbc
  • Marketwatch

This module uses puppeteer and a modified version of puppeteer-cluster. It was created by the developer of GoogleScraper, a module with 1800 stars on GitHub.

Installation

You need a working installation of node and the npm package manager.

For example, if you are using Ubuntu 18.04, you can install node and npm with the following commands:

sudo apt install nodejs npm

Chrome and puppeteer need some additional libraries to run on Ubuntu.

This command will install dependencies:

sudo apt-get install gconf-service libasound2 libatk1.0-0 libc6 libcairo2 libcups2 libdbus-1-3 libexpat1 libfontconfig1 libgcc1 libgconf-2-4 libgdk-pixbuf2.0-0 libglib2.0-0 libgtk-3-0 libnspr4 libpango-1.0-0 libpangocairo-1.0-0 libstdc++6 libx11-6 libx11-xcb1 libxcb1 libxcomposite1 libxcursor1 libxdamage1 libxext6 libxfixes3 libxi6 libxrandr2 libxrender1 libxss1 libxtst6 ca-certificates fonts-liberation libappindicator1 libnss3 lsb-release xdg-utils wget

Install se-scraper by entering the following command in your terminal:

npm install se-scraper

If you don't want puppeteer to download a complete Chromium browser, add this variable to your environment. Note that the module is then not guaranteed to run out of the box.

export PUPPETEER_SKIP_CHROMIUM_DOWNLOAD=1

Quickstart

Create a file named run.js with the following contents:

const se_scraper = require('se-scraper');

let config = {
    search_engine: 'google',
    debug: false,
    verbose: false,
    keywords: ['news', 'scraping scrapeulous.com'],
    num_pages: 3,
    output_file: 'data.json',
};

function callback(err, response) {
    if (err) { return console.error(err); }
    console.dir(response, {depth: null, colors: true});
}

se_scraper.scrape(config, callback);

Start scraping by running the command node run.js

Proxies

se-scraper will create one browser instance per proxy, so the maximum concurrency is the number of proxies plus one (your own IP).

const se_scraper = require('se-scraper');

let config = {
    search_engine: 'google',
    debug: false,
    verbose: false,
    keywords: ['news', 'scrapeulous.com', 'incolumitas.com', 'i work too much'],
    num_pages: 1,
    output_file: 'data.json',
    proxy_file: '/home/nikolai/.proxies', // one proxy per line
    log_ip_address: true,
};

function callback(err, response) {
    if (err) { return console.error(err); }
    console.dir(response, {depth: null, colors: true});
}

se_scraper.scrape(config, callback);

With a proxy file such as

socks5://53.34.23.55:55523
socks4://51.11.23.22:22222

This will scrape with three browser instances, each with its own IP address. Unfortunately, it is currently not possible to use a different proxy per tab; Chromium does not support that.

Examples

Scraping Model

se-scraper scrapes search engines only. To introduce concurrency into this library, we first need to define the scraping model; then we can decide how to divide and conquer.

Scraping Resources

What are common scraping resources?

  1. Memory and CPU. Necessary to launch multiple browser instances.
  2. Network bandwidth. Often not the bottleneck.
  3. IP addresses. Websites often block an IP address after a certain number of requests from it. This can be circumvented by using proxies.
  4. Spoofable identifiers such as the browser fingerprint or user agent. These are handled by se-scraper.

Concurrency Model

se-scraper should be able to run without any concurrency at all. This is the default case. No concurrency means that only one browser/tab is searching at a time.

For concurrent use, we will make use of a modified puppeteer-cluster library.

One scrape job is defined by:

  • 1 search engine such as google
  • M pages
  • N keywords/queries
  • K proxies and K+1 browser instances (because when we have no proxies available, we will scrape with our dedicated IP)

Then se-scraper will create K+1 dedicated browser instances, each with a unique IP address. Each browser gets N/(K+1) keywords and issues N/(K+1) * M requests to the search engine in total.
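The division above can be sketched as a small helper (an illustrative sketch only; distributeKeywords is a made-up name, not part of the se-scraper API):

```javascript
// Distribute N keywords across K proxies plus one dedicated IP, as described above.
// Returns an array of keyword chunks, one per browser instance.
function distributeKeywords(keywords, numProxies) {
    const numBrowsers = numProxies + 1; // K proxies + our own IP
    const chunks = Array.from({ length: numBrowsers }, () => []);
    keywords.forEach((kw, i) => chunks[i % numBrowsers].push(kw));
    return chunks;
}

// 4 keywords and 1 proxy -> 2 browsers with 2 keywords each.
// With M = 3 pages per keyword, each browser issues 2 * 3 = 6 requests.
const chunks = distributeKeywords(['a', 'b', 'c', 'd'], 1);
```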

The problem is that the puppeteer-cluster library only allows identical options for subsequent new browser instances. Therefore, it is not trivial to launch a cluster of browsers with distinct proxy settings: right now, every browser gets the same options, and options cannot be set on a per-browser basis.

Solution:

  1. Create an upstream proxy router.
  2. Modify the puppeteer-cluster library to accept a list of proxy strings and pop() from this list at every new call to workerInstance() in https://github.com/thomasdondorf/puppeteer-cluster/blob/master/src/Cluster.ts. I wrote an issue about this and ended up implementing this second solution.
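The second approach can be sketched roughly as follows (a simplified illustration of the idea, not the actual puppeteer-cluster patch; the function name is made up, and passing the proxy via Chrome's --proxy-server launch flag is the assumption):

```javascript
// Each new worker pops a proxy from the list and launches with its own
// --proxy-server flag; when the list is empty, the worker runs without
// a proxy (i.e. with our own IP).
function nextLaunchOptions(proxies, baseArgs = []) {
    const proxy = proxies.pop(); // undefined when no proxies are left
    const args = proxy ? baseArgs.concat([`--proxy-server=${proxy}`]) : baseArgs;
    return { args };
}

const proxies = ['socks5://53.34.23.55:55523', 'socks4://51.11.23.22:22222'];
const optsA = nextLaunchOptions(proxies); // gets the last proxy in the list
const optsB = nextLaunchOptions(proxies); // gets the remaining proxy
const optsC = nextLaunchOptions(proxies); // list exhausted -> own IP
```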

Technical Notes

Scraping is done with a headless Chromium browser using the automation library puppeteer. Puppeteer is a Node library which provides a high-level API to control headless Chrome or Chromium over the DevTools Protocol.

If you need to deploy scraping to the cloud (AWS or Azure), you can contact me at hire@incolumitas.com

The Chromium browser is started with the following flags to prevent scraping detection.

var ADDITIONAL_CHROME_FLAGS = [
    '--disable-infobars',
    '--window-position=0,0',
    '--ignore-certificate-errors',
    '--ignore-certificate-errors-spki-list',
    '--no-sandbox',
    '--disable-setuid-sandbox',
    '--disable-dev-shm-usage',
    '--disable-accelerated-2d-canvas',
    '--disable-gpu',
    '--window-size=1920,1080',
    '--hide-scrollbars',
    '--disable-notifications',
];

Furthermore, to avoid loading unnecessary resources and to speed up scraping a great deal, we instruct Chrome not to load images, CSS and media:

await page.setRequestInterception(true);
page.on('request', (req) => {
    let type = req.resourceType();
    const block = ['stylesheet', 'font', 'image', 'media'];
    if (block.includes(type)) {
        req.abort();
    } else {
        req.continue();
    }
});

Making puppeteer and headless chrome undetectable

se-scraper implements countermeasures against common headless Chrome detection techniques and makes use of the most recent anti-detection measures.

To check whether evasion works, you can test it by passing test_evasion flag to the config:

let config = {
    // check if headless chrome escapes common detection techniques
    test_evasion: true
};

It will create a screenshot named headless-test-result.png in the directory where the scraper was started, showing whether all tests have passed.

Advanced Usage

Use se-scraper by calling it with a script such as the one below.

const se_scraper = require('se-scraper');

let config = {
    // the user agent to scrape with
    user_agent: 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/70.0.3538.110 Safari/537.36',
    // if random_user_agent is set to true, a random user agent is chosen
    random_user_agent: true,
    // how long to sleep between requests. a random sleep interval within the range [a,b]
    // is drawn before every request. empty string for no sleeping.
    sleep_range: '[1,2]',
    // which search engine to scrape
    search_engine: 'google',
    // whether debug information should be printed
    // debug info is useful for developers when debugging
    debug: false,
    // whether verbose program output should be printed
    // this output is informational
    verbose: true,
    // an array of keywords to scrape
    keywords: ['scrapeulous.com', 'scraping search engines', 'scraping service scrapeulous', 'learn js'],
    // alternatively you can specify a keyword_file. this overwrites the keywords array
    keyword_file: '',
    // the number of pages to scrape for each keyword
    num_pages: 2,
    // whether to start the browser in headless mode
    headless: true,
    // path to output file, data will be stored in JSON
    output_file: 'examples/results/advanced.json',
    // whether to prevent images, css, fonts from being loaded
    // will speed up scraping a great deal
    block_assets: true,
    // path to js module that extends functionality
    // this module should export the functions:
    // get_browser, handle_metadata, close_browser
    // must be an absolute path to the module
    //custom_func: resolve('examples/pluggable.js'),
    custom_func: '',
    // use a proxy for all connections
    // example: 'socks5://78.94.172.42:1080'
    // example: 'http://118.174.233.10:48400'
    proxy: '',
    // a file with one proxy per line. Example:
    // socks5://78.94.172.42:1080
    // http://118.174.233.10:48400
    proxy_file: '',
    // check if headless chrome escapes common detection techniques
    // this is a quick test and should be used for debugging
    test_evasion: false,
    // log ip address data
    log_ip_address: false,
    // log http headers
    log_http_headers: false,
    puppeteer_cluster_config: {
        timeout: 10 * 60 * 1000, // max timeout set to 10 minutes
        monitor: false,
        concurrency: 1, // one scraper per tab
        maxConcurrency: 2, // scrape with 2 tabs
    }
};

function callback(err, response) {
    if (err) { return console.error(err); }

    /* response object has the following properties:

        response.results - json object with the scraping results
        response.metadata - json object with metadata information
        response.statusCode - status code of the scraping process
     */

    console.dir(response.results, {depth: null, colors: true});
}

se_scraper.scrape(config, callback);

Output for the above script on my machine.

Query String Parameters

You can add your custom query string parameters to the configuration object by specifying a google_settings key. In general: {{search engine}}_settings.

For example, you can customize your Google search with the following config:

let config = {
    search_engine: 'google',
    // use specific search engine parameters for various search engines
    google_settings: {
        google_domain: 'google.com',
        gl: 'us', // The gl parameter determines the Google country to use for the query.
        hl: 'us', // The hl parameter determines the language of the Google UI for the returned results.
        start: 0, // Determines the results offset to use, defaults to 0.
        num: 100, // Determines the number of results to show, defaults to 10. Maximum is 100.
    },
}