Spider

Multithreaded async crawler/indexer using isolates and IPC channels for communication, with the ability to run decentralized.

Dependencies

On Linux

  • OpenSSL 1.0.1, 1.0.2, 1.1.0, or 1.1.1
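
As a hedged example, on Debian or Ubuntu these are typically provided by the distribution packages (package names are an assumption and vary by distro):

# Debian/Ubuntu (package names may vary by distribution)
sudo apt-get install pkg-config libssl-dev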

Example

This is a basic async example crawling a web page. Add spider to your Cargo.toml:

[dependencies]
spider = "1.46.2"

And then the code:

extern crate spider;

use spider::website::Website;
use spider::tokio;

#[tokio::main]
async fn main() {
    let url = "https://choosealicense.com";
    let mut website: Website = Website::new(url);
    // Crawl the site, collecting links as pages are visited.
    website.crawl().await;

    // Print every link discovered during the crawl.
    for link in website.get_links() {
        println!("- {:?}", link.as_ref());
}

You can use the Configuration object to configure your crawler:

// ..
let mut website: Website = Website::new("https://choosealicense.com");

website.configuration.respect_robots_txt = true;
website.configuration.subdomains = true;
website.configuration.tld = false;
website.configuration.delay = 0; // Defaults to 0 ms due to concurrency handling
website.configuration.request_timeout = None; // Defaults to 15000 ms
website.configuration.http2_prior_knowledge = false; // Enable if you know the webserver supports http2
website.configuration.user_agent = Some("myapp/version".into()); // Defaults to using a random agent
website.on_link_find_callback = Some(|s, html| { println!("link target: {}", s); (s, html)}); // Callback to run on each link find
website.configuration.blacklist_url.get_or_insert(Default::default()).push("https://choosealicense.com/licenses/".into());
website.configuration.proxies.get_or_insert(Default::default()).push("socks5://10.1.1.1:12345".into()); // Defaults to None - proxy list.
website.budget = Some(spider::hashbrown::HashMap::from([
    (spider::CaseInsensitiveString::new("*"), 300),
    (spider::CaseInsensitiveString::new("/licenses"), 10),
])); // Defaults to None - requires the `budget` feature flag

website.crawl().await;

The builder pattern is also available in v1.33.0 and up:

let mut website = Website::new("https://choosealicense.com");

website
    .with_respect_robots_txt(true)
    .with_subdomains(true)
    .with_tld(false)
    .with_delay(0)
    .with_request_timeout(None)
    .with_http2_prior_knowledge(false)
    .with_user_agent(Some("myapp/version".into()))
    // requires the `budget` feature flag
    .with_budget(Some(spider::hashbrown::HashMap::from([("*", 300), ("/licenses", 10)])))
    .with_on_link_find_callback(Some(|link, html| {
        println!("link target: {}", link.inner());
        (link, html)
    }))
    .with_external_domains(Some(Vec::from(["https://creativecommons.org/licenses/by/3.0/"].map(|d| d.to_string())).into_iter()))
    .with_headers(None)
    .with_blacklist_url(Some(Vec::from(["https://choosealicense.com/licenses/".into()])))
    .with_proxies(None);
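
The with_budget call above requires the budget feature flag; a minimal Cargo.toml entry for it, mirroring the feature snippets below, might look like:

[dependencies]
spider = { version = "1.46.2", features = ["budget"] }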

Features

We have a couple of optional feature flags: regex blacklisting, the jemalloc memory backend, globbing, fs temp storage, decentralization, serde, gathering full assets, and randomizing user agents.

[dependencies]
spider = { version = "1.46.2", features = ["regex", "ua_generator"] }
  1. ua_generator: Enables auto-generating a random real User-Agent.
  2. regex: Enables blacklisting paths with regex.
  3. jemalloc: Enables the jemalloc memory backend.
  4. decentralized: Enables decentralized processing of IO; requires starting spider_worker before crawls.
  5. sync: Subscribe to changes for Page data as it is processed, asynchronously.
  6. budget: Allows setting a crawl budget per path with depth.
  7. control: Enables the ability to pause, start, and shutdown crawls on demand.
  8. full_resources: Enables gathering all content that relates to the domain, like CSS, JS, and so on.
  9. serde: Enables serde serialization support.
  10. socks: Enables socks5 proxy support.
  11. glob: Enables URL glob support.
  12. fs: Enables storing resources to disk for parsing (may greatly increase performance at the cost of temp storage). Enabled by default.
  13. js: Enables parsing links created with JavaScript, using the alpha jsdom crate.
  14. sitemap: Include sitemap pages in results.
  15. time: Enables duration tracking per page.
  16. chrome: Enables headless chrome rendering; use the env var CHROME_URL to connect remotely [experimental] (see the sketch after this list).
  17. chrome_headed: Enables headful chrome rendering [experimental].
  18. chrome_cpu: Disables GPU usage for the chrome browser.
  19. chrome_stealth: Enables stealth mode to make it harder to be detected as a bot.
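
As a hedged sketch of the chrome feature, connecting to a remote browser could look like the following; only the CHROME_URL variable itself comes from the feature description above, and the endpoint address is a placeholder that depends on your chrome setup:

[dependencies]
spider = { version = "1.46.2", features = ["chrome"] }

# address is a placeholder for your remote chrome endpoint
CHROME_URL=http://localhost:9222 cargo run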

Decentralization

Move processing to a worker. This drastically increases performance, even if the worker is on the same machine, due to the runtime's efficient split of IO work.

[dependencies]
spider = { version = "1.46.2", features = ["decentralized"] }

# install the worker
cargo install spider_worker
# start the worker [set the worker on another machine in prod]
RUST_LOG=info SPIDER_WORKER_PORT=3030 spider_worker
# start the rust project as normal with the SPIDER_WORKER env variable set
SPIDER_WORKER=http://127.0.0.1:3030 cargo run --example example --features decentralized

The SPIDER_WORKER env variable takes a comma-separated list of URLs to set the workers. If the scrape feature flag is enabled, use the SPIDER_WORKER_SCRAPER env variable to determine the scraper worker.
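
For example, a crawl could be pointed at two workers like this (the second address is a placeholder):

# two workers, comma-separated (addresses are placeholders)
SPIDER_WORKER=http://127.0.0.1:3030,http://192.168.1.20:3030 cargo run --example example --features decentralized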

Subscribe to changes

Use the subscribe method to get a broadcast channel.

[dependencies]
spider = { version = "1.46.2", features = ["sync"] }
extern crate spider;

use spider::website::Website;
use spider::tokio;

#[tokio::main]
async fn main() {
    let mut website: Website = Website::new("https://choosealicense.com");
    // subscribe returns a broadcast receiver; 16 is the channel capacity.
    let mut rx2 = website.subscribe(16).unwrap();

    // Print each page's URL as it is processed.
    let join_handle = tokio::spawn(async move {
        while let Ok(res) = rx2.recv().await {
            println!("{:?}", res.get_url());
        }
    });

    website.crawl().await;
}

Regex Blacklisting

Allow regex for blacklisting routes:

[dependencies]
spider = { version = "1.46.2", features = ["regex"] }
extern crate spider;

use spider::website::Website;
use spider::tokio;

#[tokio::main]
async fn main() {
    let mut website: Website = Website::new("https://choosealicense.com");
    // With the regex flag enabled, blacklist entries are matched as patterns.
    website.configuration.blacklist_url.push("/licenses/".into());
    website.crawl().await;

    for link in website.get_links() {
        println!("- {:?}", link.as_ref());
    }
}

Pause, Resume, and Shutdown

If you are performing large workloads, you may need to control the crawler by enabling the control feature flag:

[dependencies]
spider = { version = "1.46.2", features = ["control"] }
extern crate spider;

use spider::tokio;
use spider::website::Website;

#[tokio::main]
async fn main() {
    use spider::utils::{pause, resume, shutdown};
    use spider::tokio::time::{sleep, Duration};
    let url = "https://choosealicense.com/";
    let mut website: Website = Website::new(url);

    tokio::spawn(async move {
        pause(url).await;
        sleep(Duration::from_millis(5000)).await;
        resume(url).await;
        // perform shutdown if crawl takes longer than 15s
        sleep(Duration::from_millis(15000)).await;
        // you could also abort the task to shutdown crawls if using website.crawl in another thread.
        shutdown(url).await;
    });

    website.crawl().await;
}

Scrape/Gather HTML

extern crate spider;

use spider::tokio;
use spider::website::Website;

#[tokio::main]
async fn main() {
    use std::io::{Write, stdout};

    let url = "https://choosealicense.com/";
    let mut website: Website = Website::new(url);

    // Unlike crawl, scrape stores the HTML of each page for later retrieval.
    website.scrape().await;

    let mut lock = stdout().lock();

    let separator = "-".repeat(url.len());

    for page in website.get_pages().unwrap().iter() {
        writeln!(
            lock,
            "{}\n{}\n\n{}\n\n{}",
            separator,
            page.get_url_final(),
            page.get_html(),
            separator
        )
        .unwrap();
    }
}

Blocking

If you need a blocking sync implementation, use a version prior to v1.12.0.
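
A minimal sketch of pinning the dependency accordingly, using a Cargo comparison requirement (the exact last pre-1.12.0 release is not listed here):

[dependencies]
spider = "<1.12.0"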
