Linkify

Linkify is a Rust library to find links such as URLs and email addresses in plain text. It's smart about where a link ends, such as with trailing punctuation.

Introduction

Your reaction might be: "Do I need a library for this? Why not a regex?" Let's look at a few cases:

  • In http://example.com/. the link should not include the trailing dot
  • http://example.com/, should not include the trailing comma
  • (http://example.com/) should not include the parens

Seems simple enough. But then we also have these cases:

  • https://en.wikipedia.org/wiki/Link_(The_Legend_of_Zelda) should include the trailing paren
  • http://üñîçøðé.com/ä should also work for Unicode (including Emoji and Punycode)
  • <http://example.com/> should not include angle brackets

This library behaves as you'd expect in the above cases and many more. It uses a simple scan with linear runtime.
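
As a quick illustration, here is a sketch in the style of the usage examples below, checking the two paren cases from the lists above (the combined input string is made up for illustration):

use linkify::LinkFinder;

let input = "(http://example.com/) and https://en.wikipedia.org/wiki/Link_(The_Legend_of_Zelda)";
let finder = LinkFinder::new();
let found: Vec<_> = finder.links(input).map(|link| link.as_str()).collect();

// The surrounding parens are excluded, but the balanced paren inside
// the Wikipedia URL is kept.
assert_eq!(found, vec![
    "http://example.com/",
    "https://en.wikipedia.org/wiki/Link_(The_Legend_of_Zelda)",
]);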

In addition to URLs, it can also find email addresses.

Demo 🧑‍🔬

Try it out online on the demo playground (Rust compiled to WebAssembly): https://robinst.github.io/linkify/

If you want to use it on the command line, try lychee. It uses linkify to extract all links and checks if they're valid, but it can also just print them like this:

$ echo 'Test https://example.org (and https://example.com)' | lychee --dump -
https://example.org/
https://example.com/

Usage

Basic usage:

extern crate linkify;

use linkify::{LinkFinder, LinkKind};

let input = "Have you seen http://example.com?";
let finder = LinkFinder::new();
let links: Vec<_> = finder.links(input).collect();

assert_eq!(1, links.len());
let link = &links[0];

assert_eq!("http://example.com", link.as_str());
assert_eq!(14, link.start());
assert_eq!(32, link.end());
assert_eq!(&LinkKind::Url, link.kind());

Option to allow URLs without schemes:

use linkify::LinkFinder;

let input = "Look, no scheme: example.org/foo";
let mut finder = LinkFinder::new();

// true by default
finder.url_must_have_scheme(false);

let links: Vec<_> = finder.links(input).collect();
assert_eq!(links[0].as_str(), "example.org/foo");

Restrict the kinds of links:

use linkify::{LinkFinder, LinkKind};

let input = "http://example.com and foo@example.com";
let mut finder = LinkFinder::new();
finder.kinds(&[LinkKind::Email]);
let links: Vec<_> = finder.links(input).collect();

assert_eq!(1, links.len());
let link = &links[0];
assert_eq!("foo@example.com", link.as_str());
assert_eq!(&LinkKind::Email, link.kind());
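
If you want to transform text rather than just extract links (for example, wrapping URLs in HTML anchors), the crate also provides a spans iterator that yields both the links and the plain text between them. A minimal sketch, assuming the spans API as documented on docs.rs (note it does no HTML escaping):

use linkify::{LinkFinder, LinkKind};

let input = "Have you seen http://example.com?";
let finder = LinkFinder::new();

// Spans cover the whole input: plain text and links alternate.
let mut html = String::new();
for span in finder.spans(input) {
    if span.kind() == Some(&LinkKind::Url) {
        html.push_str(&format!("<a href=\"{0}\">{0}</a>", span.as_str()));
    } else {
        html.push_str(span.as_str());
    }
}

assert_eq!(html, "Have you seen <a href=\"http://example.com\">http://example.com</a>?");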

See full documentation on docs.rs.

Conformance

This crate makes an effort to respect the relevant standards for URLs and email addresses.

At the same time, it does not guarantee that the returned links are valid. When in doubt, it returns a link rather than skipping it.

If you need to validate URLs, e.g. for checking TLDs, use another library on the returned links.
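
For example, a minimal sketch of that pattern, assuming you add the url crate as a dependency (it only checks that each candidate parses; TLD checks would need a separate data source):

use linkify::{LinkFinder, LinkKind};
use url::Url;

let input = "See http://example.com/ and foo@example.com";
let finder = LinkFinder::new();

// Keep only URL links that the `url` crate can parse.
let valid: Vec<Url> = finder
    .links(input)
    .filter(|link| link.kind() == &LinkKind::Url)
    .filter_map(|link| Url::parse(link.as_str()).ok())
    .collect();

assert_eq!(valid.len(), 1);
assert_eq!(valid[0].domain(), Some("example.com"));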

Contributing

Pull requests, issues and comments welcome! Make sure to add tests for new features and bug fixes.

License

Linkify is distributed under the terms of both the MIT license and the Apache License (Version 2.0). See LICENSE-APACHE and LICENSE-MIT for details. Opening a pull request is assumed to signal agreement with these licensing terms.
