#url #email #links #web #link #regex #text

linkify

Finds URLs and email addresses in plain text. Takes care to get the boundaries right with surrounding punctuation like parentheses.

14 releases (9 breaking)

0.10.0 Jun 24, 2023
0.9.0 Jul 11, 2022
0.8.1 Apr 14, 2022
0.8.0 Nov 26, 2021
0.1.2 Jun 9, 2017

#26 in Text processing

Download history (Aug–Nov 2024): roughly 10,000–15,000 downloads per week

52,738 downloads per month
Used in 70 crates (32 directly)

MIT/Apache

40KB
696 lines

Linkify

Linkify is a Rust library to find links such as URLs and email addresses in plain text. It's smart about where a link ends, such as with trailing punctuation.


Introduction

Your reaction might be: "Do I need a library for this? Why not a regex?" Let's look at a few cases:

  • In http://example.com/. the link should not include the trailing dot
  • http://example.com/, should not include the trailing comma
  • (http://example.com/) should not include the parens

Seems simple enough. But then we also have these cases:

  • https://en.wikipedia.org/wiki/Link_(The_Legend_of_Zelda) should include the trailing paren
  • http://üñîçøðé.com/ä should also work for Unicode (including Emoji and Punycode)
  • <http://example.com/> should not include angle brackets

This library behaves as you'd expect in the above cases and many more. It uses a simple scan with linear runtime.
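
Here is a quick sketch of those boundary cases, written against the LinkFinder API shown in the Usage section below; the expected strings simply restate the claims above:

use linkify::LinkFinder;

// Each assert restates one of the boundary cases from the list above.
let finder = LinkFinder::new();

let dot: Vec<_> = finder.links("In http://example.com/. the dot stays outside").collect();
assert_eq!("http://example.com/", dot[0].as_str());

let wiki: Vec<_> = finder
    .links("See https://en.wikipedia.org/wiki/Link_(The_Legend_of_Zelda) for Link")
    .collect();
assert_eq!(
    "https://en.wikipedia.org/wiki/Link_(The_Legend_of_Zelda)",
    wiki[0].as_str()
);

let angle: Vec<_> = finder.links("<http://example.com/>").collect();
assert_eq!("http://example.com/", angle[0].as_str());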

In addition to URLs, it can also find email addresses.

Demo 🧑‍🔬

Try it out online on the demo playground (Rust compiled to WebAssembly): https://robinst.github.io/linkify/

If you want to use it on the command line, try lychee. It uses linkify to extract all links and check whether they're valid, but it can also just print them like this:

$ echo 'Test https://example.org (and https://example.com)' | lychee --dump -
https://example.org/
https://example.com/

Usage

Basic usage:

use linkify::{LinkFinder, LinkKind};

let input = "Have you seen http://example.com?";
let finder = LinkFinder::new();
let links: Vec<_> = finder.links(input).collect();

assert_eq!(1, links.len());
let link = &links[0];

assert_eq!("http://example.com", link.as_str());
assert_eq!(14, link.start());
assert_eq!(32, link.end());
assert_eq!(&LinkKind::Url, link.kind());
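
The byte offsets from start() and end() make it straightforward to rebuild the surrounding text, for example to wrap each link in an HTML anchor. The helper below is only an illustration, not part of the crate, and its HTML escaping is deliberately minimal:

use linkify::LinkFinder;

// Illustrative helper (not part of linkify): wraps every detected link in
// an <a> tag by walking the byte offsets returned by the finder. Real code
// would also HTML-escape the plain-text segments and the URLs.
fn linkify_html(text: &str) -> String {
    let finder = LinkFinder::new();
    let mut out = String::new();
    let mut last = 0;
    for link in finder.links(text) {
        out.push_str(&text[last..link.start()]); // text before the link
        out.push_str("<a href=\"");
        out.push_str(link.as_str());
        out.push_str("\">");
        out.push_str(link.as_str());
        out.push_str("</a>");
        last = link.end();
    }
    out.push_str(&text[last..]); // trailing text after the last link
    out
}

assert_eq!(
    "See <a href=\"http://example.com\">http://example.com</a>.",
    linkify_html("See http://example.com.")
);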

Option to allow URLs without schemes:

use linkify::LinkFinder;

let input = "Look, no scheme: example.org/foo";
let mut finder = LinkFinder::new();

// true by default
finder.url_must_have_scheme(false);

let links: Vec<_> = finder.links(input).collect();
assert_eq!(links[0].as_str(), "example.org/foo");
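
With the default setting (a scheme is required), the same input yields no links at all, which is easy to check:

use linkify::LinkFinder;

// Default finder: URLs without a scheme are not detected.
let finder = LinkFinder::new();
assert_eq!(0, finder.links("Look, no scheme: example.org/foo").count());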

Restrict the kinds of links:

use linkify::{LinkFinder, LinkKind};

let input = "http://example.com and foo@example.com";
let mut finder = LinkFinder::new();
finder.kinds(&[LinkKind::Email]);
let links: Vec<_> = finder.links(input).collect();

assert_eq!(1, links.len());
let link = &links[0];
assert_eq!("foo@example.com", link.as_str());
assert_eq!(&LinkKind::Email, link.kind());
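
If you keep both kinds enabled, you can branch on the kind of each match, for example to turn email addresses into mailto: links. The rendering below is just an illustration:

use linkify::{LinkFinder, LinkKind};

// Illustration only: build an href for each match depending on its kind.
// The catch-all arm keeps the match robust if more kinds are ever added.
let finder = LinkFinder::new();
for link in finder.links("http://example.com and foo@example.com") {
    let href = match link.kind() {
        LinkKind::Url => link.as_str().to_string(),
        LinkKind::Email => format!("mailto:{}", link.as_str()),
        _ => link.as_str().to_string(),
    };
    println!("{} -> {}", link.as_str(), href);
}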

See full documentation on docs.rs.

Conformance

This crate makes an effort to respect the relevant standards, namely:

  • RFC 3986 and RFC 3987 for URLs
  • RFC 5321 and RFC 6531 for email addresses

At the same time, it does not guarantee that the returned links are valid. When in doubt, it errs on the side of returning a link rather than skipping it.

If you need to validate URLs, e.g. for checking TLDs, use another library on the returned links.
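
For example, one possible approach is to run the url crate (a separate dependency, not pulled in by linkify) over the results and keep only what it accepts; checking TLDs would additionally need a public-suffix list:

use linkify::LinkFinder;
use url::Url; // separate crate, added here only for validation

// Keep only the matches that the url crate can actually parse.
let finder = LinkFinder::new();
let valid: Vec<Url> = finder
    .links("See http://example.com/ and https://example.org/page")
    .filter_map(|link| Url::parse(link.as_str()).ok())
    .collect();
assert_eq!(2, valid.len());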

Contributing

Pull requests, issues and comments welcome! Make sure to add tests for new features and bug fixes.

License

Linkify is distributed under the terms of both the MIT license and the Apache License (Version 2.0). See LICENSE-APACHE and LICENSE-MIT for details. Opening a pull request is assumed to signal agreement with these licensing terms.

Dependencies

~250KB