Introduction

Welcome to the Rust Edition Guide! "Editions" are how we communicate major changes in the way Rust code is written.

In this guide, we'll discuss:

  • What editions are
  • What each edition contains
  • How to migrate your code from one edition to another

As Rust releases new versions, the standard library evolves with it. Only the most significant additions to the standard library are covered in this guide, but many other great features have been added as well. Be sure to check out the standard library documentation too.

What are Editions?

Rust ships releases on a six-week cycle. This means that users get a constant stream of new features. This is much faster than updates for other languages, but this also means that each update is smaller. After a while, all of those tiny changes add up. But, from release to release, it can be hard to look back and say "Wow, between Rust 1.10 and Rust 1.20, Rust has changed a lot!"

Every two or three years, we'll be producing a new edition of Rust. Each edition brings together the features that have landed into a clear package, with fully updated documentation and tooling. New editions ship through the usual release process.

This serves different purposes for different people:

  • For active Rust users, it brings together incremental changes into an easy-to-understand package.

  • For non-users, it signals that some major advancements have landed, which might make Rust worth another look.

  • For those developing Rust itself, it provides a rallying point for the project as a whole.

Compatibility

When a new edition becomes available in the compiler, crates must explicitly opt in to it to take full advantage. This opt in enables editions to contain incompatible changes, like adding a new keyword that might conflict with identifiers in code, or turning warnings into errors. A Rust compiler will support all editions that existed prior to the compiler's release, and can link crates of any supported editions together. Edition changes only affect the way the compiler initially parses the code. Therefore, if you're using Rust 2015, and one of your dependencies uses Rust 2018, it all works just fine. The opposite situation works as well.

Just to be clear: most features will be available on all editions. People using any edition of Rust will continue to see improvements as new stable releases are made. In some cases however, mainly when new keywords are added, but sometimes for other reasons, there may be new features that are only available in later editions. You only need to upgrade if you want to take advantage of such features.

Trying out the 2018 edition

At the time of writing, there are two editions: 2015 and 2018. 2015 is today's Rust; Rust 2018 will ship later this year. To transition to the 2018 edition from the 2015 edition, you'll want to get started with the transition section.

Transitioning your code to a new edition

New editions might change the way you write Rust -- they add new syntax, language, and library features but also remove features. For example, try, async, and await are keywords in Rust 2018, but not Rust 2015. Despite this it's our intention that the migration to new editions is as smooth an experience as possible. It's considered a bug if it's difficult to upgrade your crate to a new edition. If you have a difficult time then a bug should be filed with Rust itself.

Transitioning between editions is built around compiler lints. Fundamentally, the process works like this:

  • Turn on lints to indicate where code is incompatible with a new edition.
  • Get your code compiling with no warnings.
  • Opt in to the new edition; your code should now compile.
  • Optionally, enable lints about idiomatic code in the new edition.

Luckily, we've been working on Cargo to help assist with this process, culminating in a new built-in subcommand cargo fix. It can take suggestions from the compiler and automatically re-write your code to comply with new features and idioms, drastically reducing the number of warnings you need to fix manually!

cargo fix is still quite young, and very much a work in progress. But it works for the basics! We're working hard on making it better and more robust, but please bear with us for now.

The preview period

Before an edition is released, it will have a "preview" phase which lets you try out the new edition in nightly Rust before its release. Currently Rust 2018 is in its preview phase and because of this, there's an extra step you need to take to opt in. Add this feature flag to your lib.rs or main.rs as well as to any examples in the examples directory of your project if you have one:


#![feature(rust_2018_preview)]

This will enable the unstable features listed in the feature status page. Note that some features require a minimum of Rust 2018; these will require a Cargo.toml change to enable (described in the sections below). Also note that while the preview is available, we may continue to add/enable new features with this flag!

For Rust 2018 edition preview 2, we're also testing a new module path variant, "uniform paths", which we'd like further testing of and feedback on. Please try it out by adding the following to your lib.rs or main.rs:


#![feature(rust_2018_preview, uniform_paths)]

The release of Rust 2018 will stabilize one of the two module path variants and drop the other.

Fix edition compatibility warnings

Next up is to enable compiler warnings about code which is incompatible with the new 2018 edition. This is where the handy cargo fix tool comes into the picture. To enable the compatibility lints for your project you run:

$ cargo fix --edition

If the nightly toolchain is not your default, you may need to run this command instead:

$ cargo +nightly fix --edition

This will instruct Cargo to compile all targets in your project (libraries, binaries, tests, etc.) with all Cargo features enabled, and to prepare them for the 2018 edition. Cargo will likely fix a number of files automatically, informing you as it goes along. Note that this does not enable any new Rust 2018 features; it only makes sure your code is compatible with Rust 2018.

If Cargo can't automatically fix everything, it'll print out the remaining warnings. Continue to run the above command until all warnings have been resolved.

You can explore more about the cargo fix command with:

$ cargo fix --help

Switch to the next edition

Once you're happy with those changes, it's time to use the new edition. Add this to your Cargo.toml:

cargo-features = ["edition"]

[package]
edition = '2018'

That cargo-features line should go at the very top of the file, and edition goes into the [package] section. As mentioned above, this is currently a nightly-only feature of Cargo, so you need to enable it for things to work.

At this point, your project should compile with a regular old cargo build. If it does not, this is a bug! Please file an issue.

Writing idiomatic code in a new edition

Your crate has now entered the 2018 edition of Rust, congrats! Recall though that Editions in Rust signify a shift in idioms over time. While much old code will continue to compile it might be written with different idioms today.

An optional next step you can take is to update your code to be idiomatic within the new edition. This is done with a different set of "idiom lints". Like before we'll be using cargo fix to drive this process:

$ cargo fix --edition-idioms

Additionally, like before, this is intended to be an easy step. Here cargo fix will automatically fix any lints it can, so you'll only see warnings for things that cargo fix couldn't fix. If you find it difficult to work through the warnings, that's a bug!

Once you're warning-free with this command you're good to go.

The --edition-idioms flag applies only to the "current crate". If you want to run it against an entire workspace, you'll need to use a workaround with RUSTFLAGS in order to execute it in all the workspace members:

$ RUSTFLAGS='-Wrust_2018_idioms' cargo fix --all

Enjoy the new edition!

Rust 2015

Rust 2015 has a theme of "stability". It commenced with the release of 1.0, and is the "default edition". The edition system was conceived in late 2017, but Rust 1.0 was released in May of 2015. As such, 2015 is the edition that you get when you don't specify any particular edition, for backwards compatibility reasons.

"Stability" is the theme of Rust 2015 because 1.0 marked a huge change in Rust development. Previous to Rust 1.0, Rust was changing on a daily basis. This made it very difficult to write large software in Rust, and made it difficult to learn. With the release of Rust 1.0 and Rust 2015, we committed to backwards compatibility, ensuring a solid foundation for people to build projects on top of.

Since it's the default edition, there's no way to port your code to Rust 2015; it just is. You'll be transitioning away from 2015, but never really to 2015. As such, there's not much else to say about it!

Rust 2018

The edition system was created for the release of Rust 2018. The theme of Rust 2018 is productivity. Rust 2018 improves upon Rust 2015 through new features, simpler syntax in some cases, a smarter borrow-checker, and a host of other things. These are all in service of the productivity goal. Rust 2015 was a foundation; Rust 2018 smooths off rough edges, makes writing code simpler and easier, and removes some inconsistencies.

Module system

In this chapter of the guide, we discuss a few changes to the module system. The most notable of these are the path clarity changes.

Raw identifiers

Minimum Rust version: nightly

Rust, like many programming languages, has the concept of "keywords". These identifiers mean something to the language, and so you cannot use them in places like variable names, function names, and other places. Raw identifiers let you use keywords where they would not normally be allowed.

For example, match is a keyword. If you try to compile this function:

fn match(needle: &str, haystack: &str) -> bool {
    haystack.contains(needle)
}

You'll get this error:

error: expected identifier, found keyword `match`
 --> src/main.rs:4:4
  |
4 | fn match(needle: &str, haystack: &str) -> bool {
  |    ^^^^^ expected identifier, found keyword

You can write this with a raw identifier:

#![feature(rust_2018_preview)]
#![feature(raw_identifiers)]

fn r#match(needle: &str, haystack: &str) -> bool {
    haystack.contains(needle)
}

fn main() {
    assert!(r#match("foo", "foobar"));
}

Note the r# prefix on both the function name, as well as the call.

More details

This feature is useful for a few reasons, but the primary motivation was inter-edition situations. For example, try is not a keyword in the 2015 edition, but is in the 2018 edition. So if you have a library that is written in Rust 2015 and has a try function, to call it in Rust 2018, you'll need to use the raw identifier.

New keywords

The new confirmed keywords in edition 2018 are:

async and await

Here, async is reserved for use in async fn as well as in async || closures and async { .. } blocks. Meanwhile, await is reserved to keep our options open with respect to await!(expr) syntax. See RFC 2394 for more details.

try

The do catch { .. } blocks have been renamed to try { .. } and to support that, the keyword try is reserved in edition 2018. See RFC 2388 for more details.

Path clarity

Minimum Rust version: nightly

The module system is often one of the hardest things for people new to Rust. Everyone has their own things that take time to master, of course, but there's a root cause for why it's so confusing to many: while there are simple and consistent rules defining the module system, their consequences can feel inconsistent, counterintuitive and mysterious.

As such, the 2018 edition of Rust introduces a few new module system features that end up simplifying the module system overall, making it clearer what is going on.

Note: During the 2018 edition preview, there are two variants of the module system under consideration, the "uniform paths" variant and the "anchored use paths" variant. Most of these changes apply to both variants; the two variant sections call out the differences between the two. We encourage testing of the new "uniform paths" variant introduced in edition preview 2. The release of Rust 2018 will stabilize one of these two variants and drop the other.

To test Rust 2018 with the new "uniform paths" variant, put #![feature(rust_2018_preview, uniform_paths)] at the top of your lib.rs or main.rs.

Here's a brief summary:

  • extern crate is no longer needed
  • The crate keyword refers to the current crate.
  • Uniform paths variant: Paths work uniformly in both use declarations and in other code. Paths work uniformly both in the top-level module and in submodules. Any path may start with a crate name, with crate, super, or self, or with a local name relative to the current module.
  • Anchored use paths variant: Paths in use declarations always start with a crate name, or with crate, super, or self. Paths in code other than use declarations may also start with names relative to the current module.
  • A foo.rs and foo/ subdirectory may coexist; mod.rs is no longer needed when placing submodules in a subdirectory.

These may seem like arbitrary new rules when put this way, but the mental model is now significantly simplified overall. Read on for more details!

More details

Let's talk about each new feature in turn.

No more extern crate

This one is quite straightforward: you no longer need to write extern crate to import a crate into your project. Before:

// Rust 2015

extern crate futures;

mod submodule {
    use futures::Future;
}

After:

// Rust 2018

mod submodule {
    use futures::Future;
}

Now, to add a new crate to your project, you can add it to your Cargo.toml, and then there is no step two. If you're not using Cargo, you already had to pass --extern flags to give rustc the location of external crates, so you'd just keep doing what you were doing there as well.
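For example, adding a dependency really is just the one Cargo.toml entry (the crate and version here are illustrative):

```toml
[dependencies]
futures = "0.1"  # hypothetical version; after this, `use futures::Future;` works anywhere in the crate
```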

One other use for extern crate was to import macros; that's no longer needed. Check the macro section for more.

The crate keyword refers to the current crate.

In use declarations and in other code, you can refer to the root of the current crate with the crate:: prefix. For instance, crate::foo::bar will always refer to the name bar inside the module foo, from anywhere else in the same crate.

The prefix :: previously referred to either the crate root or an external crate; it now unambiguously refers to an external crate. For instance, ::foo::bar always refers to the name bar inside the external crate foo.

Uniform paths variant

The uniform paths variant of Rust 2018 simplifies and unifies path handling compared to Rust 2015. In Rust 2015, paths work differently in use declarations than they do elsewhere. In particular, paths in use declarations would always start from the crate root, while paths in other code implicitly started from the current module. Those differences didn't have any effect in the top-level module, which meant that everything would seem straightforward until working on a project large enough to have submodules.

In the uniform paths variant of Rust 2018, paths in use declarations and in other code always work the same way, both in the top-level module and in any submodule. You can always use a relative path from the current module, a path starting from an external crate name, or a path starting with crate, super, or self.

Code that looked like this:

// Rust 2015

extern crate futures;

use futures::Future;

mod foo {
    pub struct Bar;
}

use foo::Bar;

fn my_poll() -> futures::Poll { ... }

enum SomeEnum {
    V1(usize),
    V2(String),
}

fn func() {
    let five = std::sync::Arc::new(5);
    use SomeEnum::*;
    match ... {
        V1(i) => { ... }
        V2(s) => { ... }
    }
}

will look exactly the same in Rust 2018, except that you can delete the extern crate line:

// Rust 2018 (uniform paths variant)

use futures::Future;

mod foo {
    pub struct Bar;
}

use foo::Bar;

fn my_poll() -> futures::Poll { ... }

enum SomeEnum {
    V1(usize),
    V2(String),
}

fn func() {
    let five = std::sync::Arc::new(5);
    use SomeEnum::*;
    match ... {
        V1(i) => { ... }
        V2(s) => { ... }
    }
}

With Rust 2018, however, the same code will also work completely unmodified in a submodule:

// Rust 2018 (uniform paths variant)

mod submodule {
    use futures::Future;

    mod foo {
        pub struct Bar;
    }

    use foo::Bar;

    fn my_poll() -> futures::Poll { ... }

    enum SomeEnum {
        V1(usize),
        V2(String),
    }

    fn func() {
        let five = std::sync::Arc::new(5);
        use SomeEnum::*;
        match ... {
            V1(i) => { ... }
            V2(s) => { ... }
        }
    }
}

This makes it easy to move code around in a project, and avoids introducing additional complexity to multi-module projects.

If a path is ambiguous, such as if you have an external crate and a local module or item with the same name, you'll get an error, and you'll need to either rename one of the conflicting names or explicitly disambiguate the path. To explicitly disambiguate a path, use ::name for an external crate name, or self::name for a local module or item.

Anchored use paths variant

In the anchored use paths variant of Rust 2018, paths in use declarations must begin with a crate name, crate, self, or super.

Code that looked like this:

// Rust 2015

extern crate futures;

use futures::Future;

mod foo {
    pub struct Bar;
}

use foo::Bar;

Now looks like this:

// Rust 2018 (anchored use paths variant)

// 'futures' is the name of a crate
use futures::Future;

mod foo {
    pub struct Bar;
}

// 'crate' means the current crate
use crate::foo::Bar;

In addition, all of these path forms are available outside of use declarations as well, which eliminates many sources of confusion. Consider this code in Rust 2015:

// Rust 2015

extern crate futures;

mod submodule {
    // this works!
    use futures::Future;

    // so why doesn't this work?
    fn my_poll() -> futures::Poll { ... }
}

fn main() {
    // this works
    let five = std::sync::Arc::new(5);
}

mod submodule {
    fn function() {
        // ... so why doesn't this work
        let five = std::sync::Arc::new(5);
    }
}

In the futures example, the my_poll function signature is incorrect, because submodule contains no items named futures; that is, this path is considered relative. But because use is anchored, use futures:: works even though a lone futures:: doesn't! With std it can be even more confusing, as you never wrote the extern crate std; line at all. So why does it work in main but not in a submodule? Same thing: it's a relative path because it's not in a use declaration. extern crate std; is inserted at the crate root, so it's fine in main, but it doesn't exist in the submodule at all.

Let's look at how this change affects things:

// Rust 2018 (anchored use paths variant)

// no more `extern crate futures;`

mod submodule {
    // 'futures' is the name of a crate, so this is anchored and works
    use futures::Future;

    // 'futures' is the name of a crate, so this is anchored and works
    fn my_poll() -> futures::Poll { ... }
}

fn main() {
    // 'std' is the name of a crate, so this is anchored and works
    let five = std::sync::Arc::new(5);
}

mod submodule {
    fn function() {
        // 'std' is the name of a crate, so this is anchored and works
        let five = std::sync::Arc::new(5);
    }
}

Much more straightforward.

No more mod.rs

In Rust 2015, if you have a submodule:

mod foo;

It can live in foo.rs or foo/mod.rs. If it has submodules of its own, it must be foo/mod.rs. So a bar submodule of foo would live at foo/bar.rs.

In Rust 2018, mod.rs is no longer needed. foo.rs can just be foo.rs, and the submodule is still foo/bar.rs. This eliminates the special name, and if you have a bunch of files open in your editor, you can clearly see their names, instead of having a bunch of tabs named mod.rs.
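As a sketch, a crate with a foo module that has a bar submodule can now be laid out like this (filenames only; no mod.rs anywhere):

```text
src/
├── lib.rs    // contains `mod foo;`
├── foo.rs    // contains `mod bar;`
└── foo/
    └── bar.rs
```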

More visibility modifiers

Minimum Rust version: 1.18

You can use the pub keyword to make something a part of a module's public interface. But in addition, there are some new forms:

pub(crate) struct Foo;

pub(in a::b::c) struct Bar;

The first form makes the Foo struct public to your entire crate, but not externally. The second form is similar, but makes Bar public for one other module, a::b::c in this case.

Nested imports with use

Minimum Rust version: 1.25

A new way to write use statements has been added to Rust: nested import groups. If you’ve ever written a set of imports like this:


use std::fs::File;
use std::io::Read;
use std::path::{Path, PathBuf};

You can now write this:


// on one line
use std::{fs::File, io::Read, path::{Path, PathBuf}};

// with some more breathing room
use std::{
    fs::File,
    io::Read,
    path::{
        Path,
        PathBuf
    }
};

This can reduce some repetition, and make things a bit more clear.

Error handling and Panics

In this chapter of the guide, we discuss a few improvements to error handling in Rust. The most notable of these is the introduction of the ? operator.

The ? operator for easier error handling

Minimum Rust version: 1.13 for Result<T, E>

Minimum Rust version: 1.22 for Option<T>

Minimum Rust version: 1.26 for using ? in main and tests

Rust has gained a new operator, ?, that makes error handling more pleasant by reducing the visual noise involved. It does this by solving one simple problem. To illustrate, imagine we had some code to read some data from a file:


use std::{io::{self, prelude::*}, fs::File};

fn read_username_from_file() -> Result<String, io::Error> {
    let f = File::open("username.txt");

    let mut f = match f {
        Ok(file) => file,
        Err(e) => return Err(e),
    };

    let mut s = String::new();

    match f.read_to_string(&mut s) {
        Ok(_) => Ok(s),
        Err(e) => Err(e),
    }
}

Note: this code could be made simpler with a single call to std::fs::read_to_string, but we're writing it all out manually here to have an example with multiple errors.

This code has two paths that can fail, opening the file and reading the data from it. If either of these fail to work, we'd like to return an error from read_username_from_file. Doing so involves matching on the result of the I/O operations. In simple cases like this though, where we are only propagating errors up the call stack, the matching is just boilerplate - seeing it written out, in the same pattern every time, doesn't provide the reader with a great deal of useful information.

With ?, the above code looks like this:


use std::{io::{self, prelude::*}, fs::File};

fn read_username_from_file() -> Result<String, io::Error> {
    let mut f = File::open("username.txt")?;
    let mut s = String::new();

    f.read_to_string(&mut s)?;

    Ok(s)
}

The ? is shorthand for the entire match statement we wrote earlier. In other words, ? applies to a Result value: if it was an Ok, it unwraps it and gives the inner value; if it was an Err, it returns early from the function you're currently in. Visually, it is much more straightforward. Instead of an entire match statement, we now use the single "?" character to indicate that we are handling errors in the standard way, by passing them up the call stack.

Seasoned Rustaceans may recognize that this is the same as the try! macro that's been available since Rust 1.0. And indeed, they are the same. Previously, read_username_from_file could have been implemented like this:


use std::{io::{self, prelude::*}, fs::File};

fn read_username_from_file() -> Result<String, io::Error> {
    let mut f = try!(File::open("username.txt"));
    let mut s = String::new();

    try!(f.read_to_string(&mut s));

    Ok(s)
}

So why extend the language when we already have a macro? There are multiple reasons. First, try! has proved to be extremely useful, and is used often in idiomatic Rust. It is used so often that we think it's worth having a sweet syntax. This sort of evolution is one of the great advantages of a powerful macro system: speculative extensions to the language syntax can be prototyped and iterated on without modifying the language itself, and in return, macros that turn out to be especially useful can indicate missing language features. This evolution, from try! to ? is a great example.

One of the reasons try! needs a sweeter syntax is that it is quite unattractive when multiple invocations of try! are used in succession. Consider:

try!(try!(try!(foo()).bar()).baz())

as opposed to

foo()?.bar()?.baz()?

The first is quite difficult to scan visually, and each layer of error handling prefixes the expression with an additional call to try!. This brings undue attention to the trivial error propagation, obscuring the main code path, in this example the calls to foo, bar and baz. This sort of method chaining with error handling occurs in situations like the builder pattern.

Finally, the dedicated syntax will make it easier in the future to produce nicer error messages tailored specifically to ?, whereas it is difficult to produce nice errors for macro-expanded code generally.

You can use ? with Result<T, E>, but also with Option<T>. In that case, ? will unwrap the value for Some(T) and return None for None. One current restriction is that you cannot use ? for both in the same function, as the return type needs to match the type you use ? on. In the future, this restriction will be lifted.

? in main and tests

Rust's error handling revolves around returning Result<T, E> and using ? to propagate errors. For those who write many small programs and, hopefully, many tests, one common paper cut has been mixing entry points such as main and #[test]s with error handling.

As an example, you might have tried to write:

use std::fs::File;

fn main() {
    let f = File::open("bar.txt")?;
}

Since ? works by propagating the Result with an early return to the enclosing function, the snippet above does not work, and results today in the following error:

error[E0277]: the `?` operator can only be used in a function that returns `Result`
              or `Option` (or another type that implements `std::ops::Try`)
 --> src/main.rs:5:13
  |
5 |     let f = File::open("bar.txt")?;
  |             ^^^^^^^^^^^^^^^^^^^^^^ cannot use the `?` operator in a function that returns `()`
  |
  = help: the trait `std::ops::Try` is not implemented for `()`
  = note: required by `std::ops::Try::from_error`

To solve this problem in Rust 2015, you might have written something like:

// Rust 2015

use std::process;
use std::error::Error;

fn run() -> Result<(), Box<Error>> {
    // real logic..
    Ok(())
}

fn main() {
    if let Err(e) = run() {
        println!("Application error: {}", e);
        process::exit(1);
    }
}

However, in this case, the run function has all the interesting logic and main is just boilerplate. The problem is even worse for #[test]s, since there tend to be a lot more of them.

In Rust 2018 you can instead let your #[test]s and main functions return a Result:

// Rust 2018

use std::fs::File;

fn main() -> Result<(), std::io::Error> {
    let f = File::open("bar.txt")?;

    Ok(())
}

In this case, if, say, the file doesn't exist and there is an Err(err) somewhere, then main will exit with a non-zero error code and print out a Debug representation of err.

More details

Getting -> Result<..> to work in the context of main and #[test]s is not magic. It is all backed up by a Termination trait which all valid return types of main and testing functions must implement. The trait is defined as:


pub trait Termination {
    fn report(self) -> i32;
}

When setting up the entry point for your application, the compiler will use this trait and call .report() on the Result of the main function you have written.

Two simplified example implementations of this trait for Result and () are:


use std::fmt;
use std::process::ExitCode;

impl Termination for () {
    fn report(self) -> i32 {
        ExitCode::SUCCESS.report()
    }
}

impl<E: fmt::Debug> Termination for Result<(), E> {
    fn report(self) -> i32 {
        match self {
            Ok(()) => ().report(),
            Err(err) => {
                eprintln!("Error: {:?}", err);
                ExitCode::FAILURE.report()
            }
        }
    }
}

As you can see in the case of (), a success code is simply returned. In the case of Result, the success case delegates to the implementation for () but prints out an error message and a failure exit code on Err(..).

To learn more about the finer details, consult either the tracking issue or the RFC.


Controlling panics with std::panic

Minimum Rust version: 1.9

There is a std::panic module, which includes methods for halting the unwinding process started by a panic:


# #![allow(unused_variables)]
#fn main() {
use std::panic;

let result = panic::catch_unwind(|| {
    println!("hello!");
});
assert!(result.is_ok());

let result = panic::catch_unwind(|| {
    panic!("oh no!");
});
assert!(result.is_err());
#}

In general, Rust distinguishes between two ways that an operation can fail:

  • Due to an expected problem, like a file not being found.
  • Due to an unexpected problem, like an index being out of bounds for an array.

Expected problems usually arise from conditions that are outside of your control; robust code should be prepared for anything its environment might throw at it. In Rust, expected problems are handled via the Result type, which allows a function to return information about the problem to its caller, which can then handle the error in a fine-grained way.

Unexpected problems are bugs: they arise due to a contract or assertion being violated. Since they are unexpected, it doesn't make sense to handle them in a fine-grained way. Instead, Rust employs a "fail fast" approach by panicking, which by default unwinds the stack (running destructors but no other code) of the thread which discovered the error. Other threads continue running, but will discover the panic any time they try to communicate with the panicked thread (whether through channels or shared memory). Panics thus abort execution up to some "isolation boundary", with code on the other side of the boundary still able to run, and perhaps to "recover" from the panic in some very coarse-grained way. A server, for example, does not necessarily need to go down just because of an assertion failure in one of its threads.

It's also worth noting that programs may choose to abort instead of unwind, and so catching panics may not work. If your code relies on catch_unwind, you should add this to your Cargo.toml:

[profile.dev]
panic = "unwind"

[profile.release]
panic = "unwind"

If any of your users choose to abort, they'll get a compile-time failure.

The catch_unwind API offers a way to introduce new isolation boundaries within a thread. There are a couple of key motivating examples:

  • Embedding Rust in other languages
  • Abstractions that manage threads
  • Test frameworks, because tests may panic and you don't want that to kill the test runner

For the first case, unwinding across a language boundary is undefined behavior, and often leads to segfaults in practice. Allowing panics to be caught means that you can safely expose Rust code via a C API, and translate unwinding into an error on the C side.

For the second case, consider a threadpool library. If a thread in the pool panics, you generally don't want to kill the thread itself, but rather catch the panic and communicate it to the client of the pool. The catch_unwind API is paired with resume_unwind, which can then be used to restart the panicking process on the client of the pool, where it belongs.

In both cases, you're introducing a new isolation boundary within a thread, and then translating the panic into some other form of error elsewhere.
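As a small sketch of this, catch_unwind turns a panic into an Err carrying the panic payload, which can then be inspected, or handed back to a client with resume_unwind:

```rust
use std::panic;

fn main() {
    // Run a task at an isolation boundary, as a threadpool might.
    // (The default panic hook still prints the message to stderr.)
    let result = panic::catch_unwind(|| {
        panic!("worker failed");
    });

    match result {
        Ok(()) => println!("task succeeded"),
        Err(payload) => {
            // The payload is a boxed Any; string panics can be inspected.
            if let Some(msg) = payload.downcast_ref::<&str>() {
                println!("task panicked with: {}", msg);
            }
            // A pool could instead hand the panic back to its client:
            // panic::resume_unwind(payload);
        }
    }
}
```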

Aborting on panic

Minimum Rust version: 1.10

By default, Rust programs will unwind the stack when a panic! happens. If you'd prefer an immediate abort instead, you can configure this in Cargo.toml:

[profile.dev]
panic = "abort"

[profile.release]
panic = "abort"

Why might you choose to do this? By removing support for unwinding, you'll get smaller binaries, but you will lose the ability to catch panics. Which choice is right for you depends on exactly what you're doing.

Control flow

In this chapter of the guide, we discuss a few improvements to control flow. The most notable of these will be async and await.

loops can break with a value

Minimum Rust version: 1.19

loops can now break with a value:


# #![allow(unused_variables)]
#fn main() {
// old code
let x;

loop {
    x = 7;
    break;
}

// new code
let x = loop { break 7; };
#}

Rust has traditionally positioned itself as an “expression oriented language”, that is, most things are expressions that evaluate to a value, rather than statements. loop stuck out as strange in this way, as it was previously a statement.

For now, this only applies to loop, and not to while or for loops. Whether it will be extended to those is still an open question.
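For example, a search-style loop can compute its result directly (a small sketch with made-up numbers):

```rust
fn main() {
    let mut counter = 0;

    // The expression after `break` becomes the value of the whole loop.
    let result = loop {
        counter += 1;
        if counter == 10 {
            break counter * 2;
        }
    };

    assert_eq!(result, 20);
    println!("result = {}", result);
}
```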

async/await for easier concurrency

Minimum Rust version: nightly

The initial release of Rust 2018 won't ship with async/await support, but we have reserved the keywords so that a future release will contain them. We'll update this page when it's closer to shipping!

Trait system

In this chapter of the guide, we discuss a few improvements to the trait system. The most notable of these is impl Trait.

impl Trait for returning complex types with ease

Minimum Rust version: 1.26

impl Trait is the new way to specify unnamed but concrete types that implement a specific trait. There are two places you can put it: argument position, and return position.

trait Trait {}

// argument position
fn foo(arg: impl Trait) {
}

// return position
fn foo() -> impl Trait {
}

Argument Position

In argument position, this feature is quite simple. These two forms are almost the same:

trait Trait {}

fn foo<T: Trait>(arg: T) {
}

fn foo(arg: impl Trait) {
}

That is, it's a slightly shorter syntax for a generic type parameter. It means, "arg is an argument that takes any type that implements the Trait trait."

However, there's also an important technical difference between T: Trait and impl Trait here. When you write the former, the caller can specify the concrete type at the call site with turbofish syntax, as in foo::<usize>(1). If impl Trait is used anywhere in the function signature, turbofish can't be used at all. Therefore, changing a signature between T: Trait and impl Trait, in either direction, can constitute a breaking change for the users of your code.
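To make the difference concrete, here is a sketch with two hypothetical functions, with_generic and with_impl_trait:

```rust
trait Trait {}
impl Trait for usize {}

// Two equivalent-looking functions that hand their argument back.
fn with_generic<T: Trait>(arg: T) -> T { arg }
fn with_impl_trait(arg: impl Trait) -> impl Trait { arg }

fn main() {
    // With a generic parameter, turbofish lets the caller name the type:
    let a = with_generic::<usize>(1);
    assert_eq!(a, 1);

    // With impl Trait, the type can only be inferred:
    let _b = with_impl_trait(2usize);
    // with_impl_trait::<usize>(2); // error: cannot provide explicit
    //                              // generic arguments here
}
```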

Return Position

In return position, this feature is more interesting. It means "I am returning some type that implements the Trait trait, but I'm not going to tell you exactly what the type is." Before impl Trait, you could do this with trait objects:


# #![allow(unused_variables)]
#fn main() {
trait Trait {}

impl Trait for i32 {}

fn returns_a_trait_object() -> Box<dyn Trait> {
    Box::new(5)
}
#}

However, this has some overhead: the Box<T> means that there's a heap allocation here, and this will use dynamic dispatch. See the dyn Trait section for an explanation of this syntax. But we only ever return one possible thing here, the Box<i32>. This means that we're paying for dynamic dispatch, even though we don't use it!

With impl Trait, the code above could be written like this:


# #![allow(unused_variables)]
#fn main() {
trait Trait {}

impl Trait for i32 {}

fn returns_a_trait_object() -> impl Trait {
    5
}
#}

Here, we have no Box<T>, no trait object, and no dynamic dispatch. But we still can obscure the i32 return type.

With i32, this isn't super useful. But there's one major place in Rust where this is much more useful: closures.

impl Trait and closures

If you need to catch up on closures, check out their chapter in the book.

In Rust, closures have a unique, un-writable type. They do implement the Fn family of traits, however. This means that previously, the only way to return a closure from a function was to use a trait object:


# #![allow(unused_variables)]
#fn main() {
fn returns_closure() -> Box<dyn Fn(i32) -> i32> {
    Box::new(|x| x + 1)
}
#}

You couldn't write the type of the closure, only use the Fn trait. That means that the trait object is necessary. However, with impl Trait:


# #![allow(unused_variables)]
#fn main() {
fn returns_closure() -> impl Fn(i32) -> i32 {
    |x| x + 1
}
#}

We can now return closures by value, just like any other type!
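For instance, the returned closure can be bound to a variable and called like any function (a minimal usage sketch):

```rust
fn returns_closure() -> impl Fn(i32) -> i32 {
    |x| x + 1
}

fn main() {
    let add_one = returns_closure();
    assert_eq!(add_one(5), 6);

    // It composes like any other value, e.g. in iterator adapters:
    let bumped: Vec<i32> = vec![1, 2, 3].into_iter().map(returns_closure()).collect();
    assert_eq!(bumped, vec![2, 3, 4]);
}
```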

More details

The above is all you need to know to get going with impl Trait, but for some more nitty-gritty details: type parameters and impl Trait in argument position are universals (universally quantified types). Meanwhile, impl Trait in return position are existentials (existentially quantified types). Okay, maybe that's a bit too jargon-heavy. Let's step back.

Consider this function:

fn foo<T: Trait>(x: T) {

When you call it, you set the type, T. "you" being the caller here. This signature says "I accept any type that implements Trait." ("any type" == universal in the jargon)

This version:

fn foo<T: Trait>() -> T {

is similar, but also different. You, the caller, provide the type you want, T, and then the function returns it. You can see this in Rust today with things like parse or collect:

let x: i32 = "5".parse()?;
let x: u64 = "5".parse()?;

Here, .parse has this signature:

pub fn parse<F>(&self) -> Result<F, <F as FromStr>::Err> where
    F: FromStr,

Same general idea, though with a result type and FromStr has an associated type... anyway, you can see how F is in the return position here. So you have the ability to choose.

With impl Trait, you're saying "hey, some type exists that implements this trait, but I'm not gonna tell you what it is." ("existential" in the jargon, "some type exists"). So now, the caller can't choose, and the function itself gets to choose. If we tried to define parse with impl FromStr in its return type, like Result<impl FromStr, ...>, it wouldn't work: the caller would no longer be able to pick the type to parse into.
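To see the "function chooses" direction in action, here is a sketch (evens_up_to is a made-up name) returning an iterator type the caller can't even write down:

```rust
// The function body, not the caller, fixes the concrete type here;
// callers only know it implements Iterator<Item = u32>.
fn evens_up_to(n: u32) -> impl Iterator<Item = u32> {
    (0..n).filter(|x| x % 2 == 0)
}

fn main() {
    let evens: Vec<u32> = evens_up_to(7).collect();
    assert_eq!(evens, vec![0, 2, 4, 6]);
}
```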

Using impl Trait in more places

As a start, you can only use impl Trait as the argument or return type of a free or inherent function. It can't be used inside implementations of traits, nor as the type of a let binding or inside a type alias. Some of these restrictions will eventually be lifted. For more information, see the tracking issue on impl Trait.

dyn Trait for trait objects

Minimum Rust version: 1.27

The dyn Trait feature is the new syntax for using trait objects. In short:

  • Box<Trait> becomes Box<dyn Trait>
  • &Trait and &mut Trait become &dyn Trait and &mut dyn Trait

And so on. In code:


# #![allow(unused_variables)]
#fn main() {
trait Trait {}

impl Trait for i32 {}

// old
fn function1() -> Box<Trait> {
# unimplemented!()
}

// new
fn function2() -> Box<dyn Trait> {
# unimplemented!()
}
#}

That's it!

More details

Using just the trait name for trait objects turned out to be a bad decision. The current syntax is often ambiguous and confusing, even to veterans, and favors a feature that is not more frequently used than its alternatives, is sometimes slower, and often cannot be used at all when its alternatives can.

Furthermore, with impl Trait arriving, "impl Trait vs dyn Trait" is much more symmetric, and therefore a bit nicer, than "impl Trait vs Trait". impl Trait is explained further in the previous section.

In the new edition, you should therefore prefer dyn Trait to just Trait where you need a trait object.

More container types support trait objects

Minimum Rust version: 1.2

In Rust 1.0, only certain, special types could be used to create trait objects.

With Rust 1.2, that restriction was lifted, and more types became able to do this. For example, Rc<T>, one of Rust's reference-counted types:

use std::rc::Rc;

trait Foo {}

impl Foo for i32 {}

fn main() {
    let obj: Rc<dyn Foo> = Rc::new(5);
}

This code would not work with Rust 1.0, but now works.

If you haven't seen the dyn syntax before, see the section on it. For versions that do not support it, replace Rc<dyn Foo> with Rc<Foo>.

Associated constants

Minimum Rust version: 1.20

You can define traits, structs, and enums that have “associated functions”:

struct Struct;

impl Struct {
    fn foo() {
        println!("foo is an associated function of Struct");
    }
}

fn main() {
    Struct::foo();
}

These are called “associated functions” because they are functions that are associated with the type, that is, they’re attached to the type itself, and not any particular instance.

Rust 1.20 adds the ability to define “associated constants” as well:

struct Struct;

impl Struct {
    const ID: u32 = 0;
}

fn main() {
    println!("the ID of Struct is: {}", Struct::ID);
}

That is, the constant ID is associated with Struct. Like functions, associated constants work with traits and enums as well.

Traits have an extra ability with associated constants that gives them some extra power. With a trait, you can use an associated constant in the same way you’d use an associated type: by declaring it, but not giving it a value. The implementor of the trait then declares its value upon implementation:

trait Trait {
    const ID: u32;
}

struct Struct;

impl Trait for Struct {
    const ID: u32 = 5;
}

fn main() {
    println!("{}", Struct::ID);
}

Before this feature, if you wanted to make a trait that represented floating point numbers, you’d have to write this:


# #![allow(unused_variables)]
#fn main() {
trait Float {
    fn nan() -> Self;
    fn infinity() -> Self;
    // ...
}
#}

This is slightly unwieldy, but more importantly, because they’re functions, they cannot be used in constant expressions, even though they only return a constant. Because of this, a design for Float would also have to include constants as well:

mod f32 {
    const NAN: f32 = 0.0f32 / 0.0f32;
    const INFINITY: f32 = 1.0f32 / 0.0f32;

    impl Float for f32 {
        fn nan() -> Self {
            f32::NAN
        }
        fn infinity() -> Self {
            f32::INFINITY
        }
    }
}

Associated constants let you do this in a much cleaner way. This trait definition:


# #![allow(unused_variables)]
#fn main() {
trait Float {
    const NAN: Self;
    const INFINITY: Self;
    // ...
}
#}

Leads to this implementation:

mod f32 {
    impl Float for f32 {
        const NAN: f32 = 0.0f32 / 0.0f32;
        const INFINITY: f32 = 1.0f32 / 0.0f32;
    }
}

much cleaner, and more versatile.

Slice patterns

Minimum Rust version: 1.26

Have you ever tried to pattern match on the contents and structure of a slice? Slice patterns let you do just that.

For example, say we want to accept a list of names and respond to that with a greeting. With slice patterns, that's as easy as pie:

fn main() {
    greet(&[]);
    // output: Bummer, there's no one here :(
    greet(&["Alan"]);
    // output: Hey, there Alan! You seem to be alone.
    greet(&["Joan", "Hugh"]);
    // output: Hello, Joan and Hugh. Nice to see you are at least 2!
    greet(&["John", "Peter", "Stewart"]);
    // output: Hey everyone, we seem to be 3 here today.
}

fn greet(people: &[&str]) {
    match people {
        [] => println!("Bummer, there's no one here :("),
        [only_one] => println!("Hey, there {}! You seem to be alone.", only_one),
        [first, second] => println!(
            "Hello, {} and {}. Nice to see you are at least 2!",
            first, second
        ),
        _ => println!("Hey everyone, we seem to be {} here today.", people.len()),
    }
}

Now, you don't have to check the length first.

We can also match on arrays like so:


# #![allow(unused_variables)]
#fn main() {
let arr = [1, 2, 3];

assert_eq!("ends with 3", match arr {
    [_, _, 3] => "ends with 3",
    [a, b, c] => "ends with something else",
});
#}

More details

Exhaustive patterns

In the first example, note in particular the _ => ... pattern. Since we are matching on a slice, it could be of any length, so we need a "catch all pattern" to handle it. If we forgot the _ => ... or identifier => ... pattern, we would instead get an error saying:

error[E0004]: non-exhaustive patterns: `&[_, _, _]` not covered

If we added a case for a slice of size 3 we would instead get:

error[E0004]: non-exhaustive patterns: `&[_, _, _, _]` not covered

and so on...

Arrays and exact lengths

In the second example above, since arrays in Rust have known lengths, we have to match on exactly three elements. If we try to match on 2 or 4 elements, we get the errors:

error[E0527]: pattern requires 2 elements but array has 3

and

error[E0527]: pattern requires 4 elements but array has 3

In the pipeline

When it comes to slice patterns, more advanced forms are planned but have not been stabilized yet. To learn more, follow the tracking issue.

Ownership and lifetimes

In this chapter of the guide, we discuss a few improvements to ownership and lifetimes. One of the most notable of these is default match binding modes.

Default match bindings

Minimum Rust version: 1.26

Have you ever had a borrowed Option<T> and tried to match on it? You probably wrote this:

let s: &Option<String> = &Some("hello".to_string());

match s {
    Some(s) => println!("s is: {}", s),
    _ => (),
};

In Rust 2015, this would fail to compile, and you would have to write the following instead:

// Rust 2015

let s: &Option<String> = &Some("hello".to_string());

match s {
    &Some(ref s) => println!("s is: {}", s),
    _ => (),
};

Rust 2018, by contrast, will infer the &s and refs, and your original code will Just Work.

This affects not just match, but patterns everywhere, such as in let statements, closure arguments, and for loops.

More details

The mental model of patterns has shifted a bit with this change, to bring it into line with other aspects of the language. For example, when writing a for loop, you can iterate over borrowed contents of a collection by borrowing the collection itself:

let my_vec: Vec<i32> = vec![0, 1, 2];

for x in &my_vec { ... }

The idea is that an &T can be understood as a borrowed view of T, and so when you iterate, match, or otherwise destructure a &T you get a borrowed view of its internals as well.

More formally, patterns have a "binding mode," which is either by value (x), by reference (ref x), or by mutable reference (ref mut x). In Rust 2015, match always started in by-value mode, and required you to explicitly write ref or ref mut in patterns to switch to a borrowing mode. In Rust 2018, the type of the value being matched informs the binding mode, so that if you match against an &Option<String> with a Some variant, you are put into ref mode automatically, giving you a borrowed view of the internal data. Similarly, &mut Option<String> would give you a ref mut view.
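A small sketch of the same binding modes at work outside of match:

```rust
fn main() {
    let opt: Option<String> = Some("hello".to_string());

    // Matching on &opt puts the pattern in ref mode, so s: &String.
    if let Some(s) = &opt {
        println!("s is: {}", s);
    }

    // The same applies to for loops over a borrowed collection:
    let pairs = vec![(1, "one"), (2, "two")];
    for (num, name) in &pairs {
        // num: &i32, name: &&str -- borrowed views, nothing is moved.
        println!("{}: {}", num, name);
    }

    // Both values are still usable afterwards.
    assert!(opt.is_some());
    assert_eq!(pairs.len(), 2);
}
```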

'_, the anonymous lifetime

Minimum Rust version: nightly

Rust 2018 allows you to explicitly mark where a lifetime is elided, for types where this elision might otherwise be unclear. To do this, you can use the special lifetime '_ much like you can explicitly mark that a type is inferred with the syntax let x: _ = ..;.

Let's say, for whatever reason, that we have a simple wrapper around &'a str:


# #![allow(unused_variables)]
#fn main() {
struct StrWrap<'a>(&'a str);
#}

In Rust 2015, you might have written:


# #![allow(unused_variables)]
#fn main() {
// Rust 2015

use std::fmt;

# struct StrWrap<'a>(&'a str);

fn make_wrapper(string: &str) -> StrWrap {
    StrWrap(string)
}

impl<'a> fmt::Debug for StrWrap<'a> {
    fn fmt(&self, fmt: &mut fmt::Formatter) -> fmt::Result {
        fmt.write_str(self.0)
    }
}
#}

In Rust 2018, you can instead write:


# #![allow(unused_variables)]
#![feature(rust_2018_preview)]

#fn main() {
# use std::fmt;
# struct StrWrap<'a>(&'a str);

// Rust 2018

fn make_wrapper(string: &str) -> StrWrap<'_> {
    StrWrap(string)
}

impl fmt::Debug for StrWrap<'_> {
    fn fmt(&self, fmt: &mut fmt::Formatter<'_>) -> fmt::Result {
        fmt.write_str(self.0)
    }
}
#}

More details

In the Rust 2015 snippet above, we've used -> StrWrap. However, unless you take a look at the definition of StrWrap, it is not clear that the returned value is actually borrowing something. Therefore, starting with Rust 2018, it is deprecated to leave off the lifetime parameters for non-reference-types (types other than & and &mut). Instead, where you previously wrote -> StrWrap, you should now write -> StrWrap<'_>, making clear that borrowing is occurring.

What exactly does '_ mean? It depends on the context! In output contexts, as in the return type of make_wrapper, it refers to a single lifetime for all "output" locations. In input contexts, a fresh lifetime is generated for each "input location". More concretely, to understand input contexts, consider the following example:


# #![allow(unused_variables)]
#fn main() {
// Rust 2015

struct Foo<'a, 'b: 'a> {
    field: &'a &'b str,
}

impl<'a, 'b: 'a> Foo<'a, 'b> {
    // some methods...
}
#}

We can rewrite this as:


# #![allow(unused_variables)]
#![feature(rust_2018_preview)]

#fn main() {
# struct Foo<'a, 'b: 'a> {
#     field: &'a &'b str,
# }

// Rust 2018

impl Foo<'_, '_> {
    // some methods...
}
#}

This is the same, because for each '_, a fresh lifetime is generated. Finally, the relationship 'b: 'a that the struct requires must still be upheld.

For more details, see the tracking issue on In-band lifetime bindings.

Lifetime elision in impl

Minimum Rust version: nightly

When writing an impl, you can mention lifetimes without them being bound in the argument list.

In Rust 2015:

impl<'a> Iterator for MyIter<'a> { ... }
impl<'a, 'b> SomeTrait<'a> for SomeType<'a, 'b> { ... }

In Rust 2018:

impl Iterator for MyIter<'iter> { ... }
impl SomeTrait<'tcx> for SomeType<'tcx, 'gcx> { ... }

T: 'a inference in structs

Minimum Rust version: nightly

An annotation in the form of T: 'a, where T is either a type or another lifetime, is called an "outlives" requirement. Note that "outlives" also implies 'a: 'a.

One way in which edition 2018 helps you out in maintaining flow when writing programs is by removing the need to explicitly annotate these T: 'a outlives requirements in struct definitions. Instead, the requirements will be inferred from the fields present in the definitions.

Consider the following struct definitions in Rust 2015:


# #![allow(unused_variables)]
#fn main() {
// Rust 2015

struct Ref<'a, T: 'a> {
    field: &'a T
}

// or written with a `where` clause:

struct WhereRef<'a, T> where T: 'a {
    data: &'a T
}

// with nested references:

struct RefRef<'a, 'b: 'a, T: 'b> {
    field: &'a &'b T,
}

// using an associated type:

struct ItemRef<'a, T: Iterator>
where
    T::Item: 'a
{
    field: &'a T::Item
}
#}

In Rust 2018, since the requirements are inferred, you can instead write:

// Rust 2018

struct Ref<'a, T> {
    field: &'a T
}

struct WhereRef<'a, T> {
    data: &'a T
}

struct RefRef<'a, 'b, T> {
    field: &'a &'b T,
}

struct ItemRef<'a, T: Iterator> {
    field: &'a T::Item
}

If you prefer to be more explicit in some cases, that is still possible.

More details

For more details, see the tracking issue and the RFC.

Simpler lifetimes in static and const

Minimum Rust version: 1.17

In older Rust, you had to explicitly write the 'static lifetime in any static or const that needed a lifetime:


# #![allow(unused_variables)]
#fn main() {
# mod foo {
const NAME: &'static str = "Ferris";
# }
# mod bar {
static NAME: &'static str = "Ferris";
# }
#}

But 'static is the only possible lifetime there. So Rust now assumes the 'static lifetime, and you don't have to write it out:


# #![allow(unused_variables)]
#fn main() {
# mod foo {
const NAME: &str = "Ferris";
# }
# mod bar {
static NAME: &str = "Ferris";
# }
#}

In some situations, this can remove a lot of boilerplate:


# #![allow(unused_variables)]
#fn main() {
# mod foo {
// old
const NAMES: &'static [&'static str; 2] = &["Ferris", "Bors"];
# }
# mod bar {

// new
const NAMES: &[&str; 2] = &["Ferris", "Bors"];
# }
#}

Data types

In this chapter of the guide, we discuss a few improvements to data types. One of these is field init shorthand.

Field init shorthand

Minimum Rust version: 1.17

In older Rust, when initializing a struct, you always had to give the full set of key: value pairs for its fields:


# #![allow(unused_variables)]
#fn main() {
struct Point {
    x: i32,
    y: i32,
}

let a = 5;
let b = 6;

let p = Point {
    x: a,
    y: b,
};
#}

However, often these variables would have the same names as the fields. So you'd end up with code that looks like this:

let p = Point {
    x: x,
    y: y,
};

Now, if a variable has the same name as the field, you don't have to write out both; just write the key:


# #![allow(unused_variables)]
#fn main() {
struct Point {
    x: i32,
    y: i32,
}

let x = 5;
let y = 6;

// new
let p = Point {
    x,
    y,
};
#}

..= for inclusive ranges

Minimum Rust version: 1.26

Since well before Rust 1.0, you’ve been able to create exclusive ranges with .. like this:

for i in 1..3 {
    println!("i: {}", i);
}

This will print i: 1 and then i: 2. With ..=, you can create an inclusive range, like this:


# #![allow(unused_variables)]
#fn main() {
for i in 1..=3 {
    println!("i: {}", i);
}
#}

This will print i: 1 and then i: 2 like before, but also i: 3; the three is included in the range. Inclusive ranges are especially useful if you want to iterate over every possible value in a range. For example, this is a surprising Rust program:

fn takes_u8(x: u8) {
    // ...
}

fn main() {
    for i in 0..256 {
        println!("i: {}", i);
        takes_u8(i);
    }
}

What does this program do? The answer: nothing. The warning we get when compiling has a hint:

warning: literal out of range for u8
 --> src/main.rs:6:17
  |
6 |     for i in 0..256 {
  |                 ^^^
  |
  = note: #[warn(overflowing_literals)] on by default

That’s right, since i is a u8, this overflows, and is the same as writing for i in 0..0, so the loop executes zero times.

We can do this with inclusive ranges, however:

fn takes_u8(x: u8) {
    // ...
}

fn main() {
    for i in 0..=255 {
        println!("i: {}", i);
        takes_u8(i);
    }
}

This will produce those 256 lines of output you might have been expecting.
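Inclusive ranges are ordinary values, so they also compose with iterator adapters. A couple of quick examples:

```rust
fn main() {
    // Sum of 1 through 10, endpoints included.
    let sum: u32 = (1..=10).sum();
    assert_eq!(sum, 55);

    // Squares of 1 through 5, endpoints included.
    let squares: Vec<u32> = (1..=5).map(|x| x * x).collect();
    assert_eq!(squares, vec![1, 4, 9, 16, 25]);
}
```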

128 bit integers

Minimum Rust version: 1.26

A very simple feature: Rust now has 128 bit integers!


# #![allow(unused_variables)]
#fn main() {
let x: i128 = 0;
let y: u128 = 0;
#}

These are twice the size of u64 and i64, and so can hold many more values. More specifically,

  • u128: 0 - 340,282,366,920,938,463,463,374,607,431,768,211,455
  • i128: −170,141,183,460,469,231,731,687,303,715,884,105,728 - 170,141,183,460,469,231,731,687,303,715,884,105,727

Whew!

"Operator-equals" are now implementable

Minimum Rust version: 1.8

The various “operator equals” operators, such as += and -=, are implementable via various traits. For example, to implement += on a type of your own:

use std::ops::AddAssign;

#[derive(Debug)]
struct Count { 
    value: i32,
}

impl AddAssign for Count {
    fn add_assign(&mut self, other: Count) {
        self.value += other.value;
    }
}

fn main() {
    let mut c1 = Count { value: 1 };
    let c2 = Count { value: 5 };

    c1 += c2;

    println!("{:?}", c1);
}

This will print Count { value: 6 }.

union for an unsafe form of enum

Minimum Rust version: 1.19

Rust now supports unions:


# #![allow(unused_variables)]
#fn main() {
union MyUnion {
    f1: u32,
    f2: f32,
}
#}

Unions are kind of like enums, but they are “untagged”. Enums have a “tag” that stores which variant is the correct one at runtime; unions don't have this tag.

Since we can interpret the data held in the union using the wrong variant, and Rust can’t check this for us, reading a union’s field is unsafe. Writing a field whose type is Copy is safe, since no old value has to be read or dropped:


# #![allow(unused_variables)]
#fn main() {
# union MyUnion {
#     f1: u32,
#     f2: f32,
# }
let mut u = MyUnion { f1: 1 };

u.f1 = 5; // writing a Copy field is safe

let value = unsafe { u.f1 }; // reading a field requires unsafe
#}

Pattern matching works too:


# #![allow(unused_variables)]
#fn main() {
# union MyUnion {
#     f1: u32,
#     f2: f32,
# }
fn f(u: MyUnion) {
    unsafe {
        match u {
            MyUnion { f1: 10 } => { println!("ten"); }
            MyUnion { f2 } => { println!("{}", f2); }
        }
    }
}
#}

When are unions useful? One major use-case is interoperability with C. C APIs can (and depending on the area, often do) expose unions, and so this makes writing API wrappers for those libraries significantly easier. Additionally, unions also simplify Rust implementations of space-efficient or cache-efficient structures relying on value representation, such as machine-word-sized unions using the least-significant bits of aligned pointers to distinguish cases.

There are still more improvements to come. For now, unions can only include Copy types and may not implement Drop. We expect to lift these restrictions in the future.
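As a concrete sketch of the value-representation tricks mentioned above, a union can reinterpret the bytes of an f32 as a u32 (FloatBits is a made-up name; in practice f32::to_bits does the same job without unsafe):

```rust
union FloatBits {
    f: f32,
    bits: u32,
}

fn main() {
    let u = FloatBits { f: 1.0 };

    // Reading the other field reinterprets the same bytes, hence unsafe.
    let bits = unsafe { u.bits };

    assert_eq!(bits, 0x3f80_0000); // the IEEE-754 encoding of 1.0f32
    println!("1.0f32 has bit pattern {:#010x}", bits);
}
```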

Choosing alignment with the repr attribute

Minimum Rust version: 1.25

From Wikipedia:

The CPU in modern computer hardware performs reads and writes to memory most efficiently when the data is naturally aligned, which generally means that the data address is a multiple of the data size. Data alignment refers to aligning elements according to their natural alignment. To ensure natural alignment, it may be necessary to insert some padding between structure elements or after the last element of a structure.

The #[repr] attribute has a new parameter, align, that sets the alignment of your struct:


# #![allow(unused_variables)]
#fn main() {
struct Number(i32);

assert_eq!(std::mem::align_of::<Number>(), 4);
assert_eq!(std::mem::size_of::<Number>(), 4);

#[repr(align(16))]
struct Align16(i32);

assert_eq!(std::mem::align_of::<Align16>(), 16);
assert_eq!(std::mem::size_of::<Align16>(), 16);
#}

If you’re working with low-level stuff, control of these kinds of things can be very important!

Normally you don't need to worry about a type's alignment; the compiler "does the right thing" and picks an appropriate alignment for general use cases. There are situations, however, where a nonstandard alignment may be desired when interoperating with foreign systems. For example, these sorts of situations tend to necessitate, or be much easier with, a custom alignment:

  • Hardware can often have obscure requirements such as "this structure is aligned to 32 bytes" when it in fact is only composed of 4-byte values. While this can typically be manually calculated and managed, it's often also useful to express this as a property of a type to get the compiler to do a little extra work instead.
  • C compilers like gcc and clang offer the ability to specify a custom alignment for structures, and Rust can much more easily interoperate with these types if Rust can also mirror the request for a custom alignment (e.g. passing a structure to C correctly is much easier).
  • Custom alignment can often be used for various tricks here and there and is often convenient as "let's play around with an implementation" tool. For example this can be used to statically allocate page tables in a kernel or create an at-least cache-line-sized structure easily for concurrent programming.

The purpose of this feature is to provide a lightweight annotation to alter the compiler-inferred alignment of a structure to enable these situations much more easily.
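For instance, the cache-line trick mentioned above might be sketched like this (CacheAligned is a made-up name; 64 bytes is a common, but not universal, line size):

```rust
// Raise the alignment of a one-field struct to a full cache line,
// e.g. to keep two atomic counters from sharing a line.
#[repr(align(64))]
struct CacheAligned {
    value: u64,
}

fn main() {
    // Alignment is raised to 64, and the size is padded up to match.
    assert_eq!(std::mem::align_of::<CacheAligned>(), 64);
    assert_eq!(std::mem::size_of::<CacheAligned>(), 64);

    let c = CacheAligned { value: 7 };
    println!("value = {}", c.value);
}
```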

SIMD for faster computing

Minimum Rust version: 1.27

The basics of SIMD are now available! SIMD stands for “single instruction, multiple data.” Consider a function like this:


# #![allow(unused_variables)]
#fn main() {
pub fn foo(a: &[u8], b: &[u8], c: &mut [u8]) {
    for ((a, b), c) in a.iter().zip(b).zip(c) {
        *c = *a + *b;
    }
}
#}

Here, we’re taking two slices, and adding the numbers together, placing the result in a third slice. The simplest possible way to do this would be to do exactly what the code does, and loop through each set of elements, add them together, and store it in the result. However, compilers can often do better. LLVM will usually “autovectorize” code like this, which is a fancy term for “use SIMD.” Imagine that a and b were both 16 elements long. Each element is a u8, and so that means that each slice would be 128 bits of data. Using SIMD, we could put both a and b into 128 bit registers, add them together in a single instruction, and then copy the resulting 128 bits into c. That’d be much faster!

While stable Rust has always been able to take advantage of autovectorization, sometimes the compiler just isn’t smart enough to realize that we can do something like this. Additionally, not every CPU has these features, so LLVM may avoid using them so that your program can run on a wide variety of hardware. The std::arch module allows us to use these kinds of instructions directly, which means we don’t need to rely on a smart compiler. Additionally, it includes some features that allow us to choose a particular implementation based on various criteria. For example:

#[cfg(all(any(target_arch = "x86", target_arch = "x86_64"),
      target_feature = "avx2"))]
fn foo() {
    #[cfg(target_arch = "x86")]
    use std::arch::x86::_mm256_add_epi64;
    #[cfg(target_arch = "x86_64")]
    use std::arch::x86_64::_mm256_add_epi64;

    unsafe {
        _mm256_add_epi64(...);
    }
}

Here, we use cfg flags to choose the correct version based on the machine we’re targeting: on x86 we import the intrinsic from std::arch::x86, and on x86_64 from std::arch::x86_64. We can also choose at runtime:

fn foo() {
    #[cfg(any(target_arch = "x86", target_arch = "x86_64"))]
    {
        if is_x86_feature_detected!("avx2") {
            return unsafe { foo_avx2() };
        }
    }

    foo_fallback();
}

Here, we have two versions of the function: one which uses AVX2, a specific kind of SIMD feature that lets you do 256-bit operations. The is_x86_feature_detected! macro will generate code that detects if your CPU supports AVX2, and if so, calls the foo_avx2 function. If not, then we fall back to a non-AVX implementation, foo_fallback. This means that our code will run super fast on CPUs that support AVX2, but still work on ones that don’t, albeit slower.
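To make the runtime-dispatch example above complete and runnable, the helpers could be sketched like this; foo_avx2 and foo_fallback are hypothetical stand-ins that just return labels, with the real SIMD work omitted:

```rust
fn foo() -> &'static str {
    #[cfg(any(target_arch = "x86", target_arch = "x86_64"))]
    {
        if is_x86_feature_detected!("avx2") {
            // Safe to call: we just verified the CPU supports AVX2.
            return unsafe { foo_avx2() };
        }
    }
    foo_fallback()
}

// Compiled with AVX2 enabled; callers must check for support first,
// which is why the function is `unsafe`.
#[cfg(any(target_arch = "x86", target_arch = "x86_64"))]
#[target_feature(enable = "avx2")]
unsafe fn foo_avx2() -> &'static str {
    "avx2" // real AVX2 work would go here
}

fn foo_fallback() -> &'static str {
    "fallback" // portable implementation for any CPU
}

fn main() {
    println!("dispatched to: {}", foo());
}
```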

If all of this seems a bit low-level and fiddly, well, it is! std::arch specifically provides primitives for building these kinds of things. We hope to eventually stabilize a std::simd module with higher-level functionality. But landing the basics now lets the ecosystem experiment with higher level libraries starting today. For example, check out the faster crate. Here’s a code snippet with no SIMD:

let lots_of_3s = (&[-123.456f32; 128][..]).iter()
    .map(|v| {
        9.0 * v.abs().sqrt().sqrt().recip().ceil().sqrt() - 4.0 - 2.0
    })
    .collect::<Vec<f32>>();

To use SIMD with this code via faster, you’d change it to this:

let lots_of_3s = (&[-123.456f32; 128][..]).simd_iter()
    .simd_map(f32s(0.0), |v| {
        f32s(9.0) * v.abs().sqrt().rsqrt().ceil().sqrt() - f32s(4.0) - f32s(2.0)
    })
    .scalar_collect();

It looks almost the same: simd_iter instead of iter, simd_map instead of map, f32s(2.0) instead of 2.0. But you get a SIMD-ified version generated for you.

Beyond that, you may never write any of this yourself, but as always, the libraries you depend on may. For example, the regex crate contains these SIMD speedups without you needing to do anything at all!

Macros

In this chapter of the guide, we discuss a few improvements to the macro system. A notable addition here is the introduction of custom derive macros.

Custom Derive

Minimum Rust version: 1.15

In Rust, you’ve always been able to automatically implement some traits through the derive attribute:


#[derive(Debug)]
struct Pet {
    name: String,
}

The Debug trait is then implemented for Pet, with vastly less boilerplate. For example, without derive, you'd have to write this:


use std::fmt;

struct Pet {
    name: String,
}

impl fmt::Debug for Pet {
    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
        match self {
            Pet { name } => {
                let mut debug_trait_builder = f.debug_struct("Pet");

                let _ = debug_trait_builder.field("name", name);

                debug_trait_builder.finish()
            }
        }
    }
}

Whew!

However, this only worked for traits provided as part of the standard library; it was not customizable. But now, you can tell Rust what to do when someone wants to derive your trait. This is used heavily in popular crates like serde and Diesel.

For more, including learning how to build your own custom derive, see The Rust Programming Language.

Macro changes

Minimum Rust version: nightly

In Rust 2018, you can import specific macros from external crates via use statements, rather than the old #[macro_use] attribute.

For example, consider a bar crate that implements a baz! macro. In src/lib.rs:


#[macro_export]
macro_rules! baz {
    () => ()
}

In your crate, you would have written

// Rust 2015

#[macro_use]
extern crate bar;

fn main() {
    baz!();
}

Now, you write:

// Rust 2018
#![feature(rust_2018_preview)]

use bar::baz;

fn main() {
    baz!();
}

This moves macro_rules macros to be a bit closer to other kinds of items.

Procedural macros

When using procedural macros to derive traits, you will have to name the macro that provides the custom derive. This generally matches the name of the trait, but check with the documentation of the crate providing the derives to be sure.

For example, with Serde you would have written

// Rust 2015
extern crate serde;
#[macro_use] extern crate serde_derive;

#[derive(Serialize, Deserialize)]
struct Bar;

Now, you write instead:

// Rust 2018
#![feature(rust_2018_preview)]
use serde_derive::{Serialize, Deserialize};

#[derive(Serialize, Deserialize)]
struct Bar;

More details

This only works for macros defined in external crates. For macros defined locally, #[macro_use] mod foo; is still required, as it was in Rust 2015.

The compiler

In this chapter of the guide, we discuss a few improvements to the compiler. A notable addition here is our new and improved error messages.

Improved error messages

Minimum Rust version: 1.12

We're always working on error improvements, and there are little improvements in almost every Rust version, but in Rust 1.12, the error message system received a significant overhaul.

For example, here's some code that produces an error:

fn main() {
    let mut x = 5;

    let y = &x;

    x += 1;
}

Here's the error in Rust 1.11:

foo.rs:6:5: 6:11 error: cannot assign to `x` because it is borrowed [E0506]
foo.rs:6     x += 1;
             ^~~~~~
foo.rs:4:14: 4:15 note: borrow of `x` occurs here
foo.rs:4     let y = &x;
                      ^
foo.rs:6:5: 6:11 help: run `rustc --explain E0506` to see a detailed explanation

Here's the error in Rust 1.28:

error[E0506]: cannot assign to `x` because it is borrowed
 --> foo.rs:6:5
  |
4 |     let y = &x;
  |              - borrow of `x` occurs here
5 |
6 |     x += 1;
  |     ^^^^^^ assignment to borrowed `x` occurs here

error: aborting due to previous error

This error isn't terribly different, but it shows off how the format has changed: your code is displayed in context, rather than just the text of the offending lines themselves.

Incremental Compilation

Minimum Rust version: 1.24

Back in September of 2016, we blogged about Incremental Compilation. While that post goes into the details, the idea is basically this: when you’re working on a project, you often compile it, then change something small, then compile again. Historically, the compiler has compiled your entire project, no matter how little you’ve changed the code. The idea with incremental compilation is that you only need to compile the code you’ve actually changed, which means that that second build is faster.

This is now turned on by default. This means that your builds should be faster! Don’t forget about cargo check when trying to get the lowest possible build times.

This is still not the end story for compiler performance generally, nor incremental compilation specifically. We have a lot more work planned in the future.

One small note about this change: it makes builds faster, but makes the final binary a bit slower. For this reason, it's not turned on in release builds.

An attribute for deprecation

Minimum Rust version: 1.9

If you're writing a library, and you'd like to deprecate something, you can use the deprecated attribute:


#[deprecated(
    since = "0.2.1",
    note = "Please use the bar function instead"
)]
pub fn foo() {
    // ...
}

This will give your users a warning if they use the deprecated functionality:

   Compiling playground v0.0.1 (file:///playground)
warning: use of deprecated item 'foo': Please use the bar function instead
  --> src/main.rs:10:5
   |
10 |     foo();
   |     ^^^
   |
   = note: #[warn(deprecated)] on by default

Both since and note are optional.

since can be in the future; you can put whatever you'd like, and what's put in there isn't checked.

Rustup for managing Rust versions

Minimum Rust version: various (this tool has its own versioning scheme and works with all Rust versions)

The Rustup tool has become the recommended way to install Rust, and is advertised on our website. Its powers go further than that though, allowing you to manage various versions, components, and platforms.

For installing Rust

To install Rust through Rustup, you can go to https://www.rust-lang.org/install.html, which will let you know how to do so on your platform. This will install both rustup itself and the stable version of rustc and cargo.

To install a specific Rust version, you can use rustup install:

$ rustup install 1.30.0

This works for a specific nightly, as well:

$ rustup install nightly-2018-08-01

As well as any of our release channels:

$ rustup install stable
$ rustup install beta
$ rustup install nightly

For updating your installation

To update all of the various channels you may have installed:

$ rustup update

This will look at everything you've installed, and if there are new releases, will update anything that has one.

Managing versions

To set the default toolchain to something other than stable:

$ rustup default nightly

To use a toolchain other than the default, use rustup run:

$ rustup run nightly cargo build

There's also an alias for this that's a little shorter:

$ cargo +nightly build

If you'd like to have a different default per-directory, that's easy too! If you run this inside of a project:

$ rustup override set nightly

Then when you're in that directory, any invocations of rustc or cargo will use that toolchain. To share this with others, you can create a rust-toolchain file with the contents of a toolchain, and check it into source control. Now, when someone clones your project, they'll get the right version without needing to override set themselves.
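Creating the rust-toolchain file is just a matter of writing the toolchain name into it; the choice of "nightly" here is illustrative:

```shell
$ echo "nightly" > rust-toolchain
$ cat rust-toolchain
nightly
```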

Installing other targets

Rust supports cross-compiling to other targets, and Rustup can help you manage them. For example, to use MUSL:

$ rustup target add x86_64-unknown-linux-musl

And then you can

$ cargo build --target=x86_64-unknown-linux-musl

To see the full list of targets you can install:

$ rustup target list

Installing components

Components are used to install certain kinds of tools. While cargo-install has you covered for most tools, some tools need deep integration into the compiler. Rustup knows exactly what version of the compiler you're using, and so it's got just the information that these tools need.

Components are per-toolchain, so if you want them to be available to more than one toolchain, you'll need to install them multiple times. In the following examples, add a --toolchain flag, set to the toolchain you want to install for, nightly for example. Without this flag, it will install the component for the default toolchain.

To see the full list of components you can install:

$ rustup component list

Next, let's talk about some popular components and when you might want to install them.

rust-docs, for local documentation

This first component is installed by default when you install a toolchain. It contains a copy of Rust's documentation, so that you can read it offline.

This component cannot be removed for now; if that's of interest, please comment on this issue.

rust-src for a copy of Rust's source code

The rust-src component can give you a local copy of Rust's source code. Why might you need this? Well, autocompletion tools like Racer use this information to know more about the functions you're trying to call.

$ rustup component add rust-src

The "preview" components

There are several components in a "preview" stage. These components currently have -preview in their name, and this indicates that they're not quite 100% ready for general consumption yet. Please try them out and give us feedback, but know that they do not follow Rust's stability guarantees, and are still actively changing, possibly in backwards-incompatible ways.

rustfmt-preview for automatic code formatting

Minimum Rust version: 1.24

If you'd like to have your code automatically formatted, you can install this component:

$ rustup component add rustfmt-preview

This will install two tools, rustfmt and cargo-fmt, that will reformat your code for you! For example:

$ cargo fmt

will reformat your entire Cargo project.

rls-preview for IDE integration

Minimum Rust version: 1.21

Many IDE features are built off of the Language Server Protocol. To gain support for Rust with these IDEs, you'll need to install the Rust language server, aka the "RLS":

$ rustup component add rls-preview

Your IDE should take it from there.

clippy-preview for more lints

For even more lints to help you write Rust code, you can install clippy:

$ rustup component add clippy-preview

This will install cargo-clippy for you:

$ cargo clippy

For more, check out clippy's documentation.

llvm-tools-preview for using extra LLVM tools

If you'd like to use the lld linker, or other tools like llvm-objdump or llvm-objcopy, you can install this component:

$ rustup component add llvm-tools-preview

This is the newest component, and so doesn't have good documentation at the moment.

Cargo and crates.io

In this chapter of the guide, we discuss a few improvements to cargo and crates.io. A notable addition here is the new cargo check command.

cargo check for faster checking

Minimum Rust version: 1.16

cargo check is a new subcommand that should speed up the development workflow in many cases.

What does it do? Let's take a step back and talk about how rustc compiles your code. Compilation has many "passes", that is, there are many distinct steps that the compiler takes on the road from your source code to producing the final binary. However, you can think of this process in two big steps: first, rustc does all of its safety checks, makes sure your syntax is correct, all that stuff. Second, once it's satisfied that everything is in order, it produces the actual binary code that you end up executing.

It turns out that that second step takes a lot of time. And most of the time, it's not necessary. That is, when working on some Rust code, many developers get into a workflow like this:

  1. Write some code.
  2. Run cargo build to make sure it compiles.
  3. Repeat 1-2 as needed.
  4. Run cargo test to make sure your tests pass.
  5. Try the binary yourself
  6. GOTO 1.

In step two, you never actually run your code. You're looking for feedback from the compiler, not to actually run the binary. cargo check supports exactly this use-case: it runs all of the compiler's checks, but doesn't produce the final binary. To use it:

$ cargo check

where you may normally cargo build. The workflow now looks like:

  1. Write some code.
  2. Run cargo check to make sure it compiles.
  3. Repeat 1-2 as needed.
  4. Run cargo test to make sure your tests pass.
  5. Run cargo build to build a binary and try it yourself
  6. GOTO 1.

So how much speedup do you actually get? Like most performance related questions, the answer is "it depends." Here are some very un-scientific benchmarks at the time of writing.

                                  build   check   speedup
initial compile                   11s     5.6s    1.96x
second compile (no changes)       3s      1.9s    1.57x
third compile with small change   5.8s    3s      1.93x

cargo install for easy installation of tools

Minimum Rust version: 1.5

Cargo has grown a new install command. This is intended to be used for installing new subcommands for Cargo, or tools for Rust developers. This doesn't replace the need to build real, native packages for end-users on the platforms you support.

For example, this guide is created with mdbook. You can install it on your system with

$ cargo install mdbook

And then use it with

$ mdbook --help

As an example of extending Cargo, you can use the cargo-update package. To install it:

$ cargo install cargo-update

This will allow you to use this command, which checks everything you've cargo install'd and updates it to the latest version:

$ cargo install-update -a

cargo new defaults to a binary project

Minimum Rust version: 1.25

cargo new will now default to generating a binary, rather than a library. We try to keep Cargo’s CLI quite stable, but this change is important, and is unlikely to cause breakage.

For some background, cargo new accepts two flags: --lib, for creating libraries, and --bin, for creating binaries, or executables. If you don’t pass one of these flags, it used to default to --lib. At the time, we made this decision because each binary (often) depends on many libraries, and so we thought the library case would be more common. However, this is incorrect; each library is depended upon by many binaries. Furthermore, when getting started, what you often want is a program you can run and play around with. It’s not just new Rustaceans though; even very long-time community members have said that they find this default surprising. As such, we’ve changed it, and it now defaults to --bin.

cargo rustc for passing arbitrary flags to rustc

Minimum Rust version: 1.1

cargo rustc is a new subcommand for Cargo that allows you to pass arbitrary rustc flags through Cargo.

For example, Cargo does not have a built-in way to pass unstable flags. But suppose we'd like to use print-type-sizes to see what layout information our types have. We can run this:

$ cargo rustc -- -Z print-type-sizes

And we'll get a bunch of output describing the size of our types.

Note

cargo rustc only passes these flags to invocations of your crate, and not to any rustc invocations used to build dependencies. If you'd like to do that, see $RUSTFLAGS.

Cargo workspaces for multi-package projects

Minimum Rust version: 1.12

Cargo used to have two levels of organization:

  • A package contains one or more crates
  • A crate has one or more modules

Cargo now has an additional level:

  • A workspace contains one or more packages

This can be useful for larger projects. For example, the futures repository is a workspace that contains many related packages:

  • futures
  • futures-util
  • futures-io
  • futures-channel

and more.

Workspaces allow these packages to be developed individually, but they share a single set of dependencies, and therefore have a single target directory and a single Cargo.lock.
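A workspace like the one above is declared in a top-level Cargo.toml. Here's a minimal sketch, with member paths borrowed from the futures example; the exact layout of that repository may differ:

```toml
[workspace]
members = [
    "futures",
    "futures-util",
    "futures-io",
    "futures-channel",
]
```

Each member directory then contains its own Cargo.toml, while the target directory and Cargo.lock live at the workspace root.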

For more details about workspaces, please see the Cargo documentation.

Multi-file examples

Minimum Rust version: 1.22

Cargo has an examples feature for showing people how to use your package. By putting individual files inside of the top-level examples directory, you can create multiple examples.

But what if your example is too big for a single file? Cargo supports adding sub-directories inside of examples, and looks for a main.rs inside of them to build the example. It looks like this:

my-package
├── src
│   └── lib.rs            // code here
└── examples
    ├── simple-example.rs // a single-file example
    └── complex-example
        ├── helper.rs
        └── main.rs       // a more complex example that also uses `helper` as a submodule

Replacing dependencies with patch

Minimum Rust version: 1.21

The [patch] section of your Cargo.toml can be used when you want to override certain parts of your dependency graph.

Cargo has a [replace] feature that is similar; while we don't intend to deprecate or remove [replace], you should prefer [patch] in all circumstances.

So what’s it look like? Let’s say we have a Cargo.toml that looks like this:

[dependencies]
foo = "1.2.3"

In addition, our foo package depends on a bar crate, and we find a bug in bar. To test this out, we’d download the source code for bar, and then update our Cargo.toml:

[dependencies]
foo = "1.2.3"

[patch.crates-io]
bar = { path = '/path/to/bar' }

Now, when you cargo build, it will use the local version of bar, rather than the one from crates.io that foo depends on. You can then try out your changes, and fix that bug!

For more details, see the documentation for patch.

Cargo can use a local registry replacement

Minimum Rust version: 1.12

Cargo finds its packages in a "source". The default source is crates.io. However, you can choose a different source in your .cargo/config:

[source.crates-io]
replace-with = 'my-awesome-registry'

[source.my-awesome-registry]
registry = 'https://github.com/my-awesome/registry-index'

This configuration means that instead of using crates.io, Cargo will query the my-awesome-registry source instead (configured to a different index here). This alternate source must serve the exact same crates as the crates.io index. Cargo assumes that replacement sources are exact 1:1 mirrors in this respect, and the following support is designed around that assumption.

When generating a lock file for a crate using a replacement registry, the original registry will be encoded into the lock file. For example in the configuration above, all lock files will still mention crates.io as the registry that packages originated from. This semantically represents how crates.io is the source of truth for all crates, and this is upheld because all replacements have a 1:1 correspondence.

Overall, this means that no matter what replacement source you're working with, you can ship your lock file to anyone else and you'll all still have verifiably reproducible builds!

This has enabled tools like cargo-vendor and cargo-local-registry, which are often useful for "offline builds." They prepare the list of all Rust dependencies ahead of time, which lets you ship them to a build machine with ease.

Crates.io disallows wildcard dependencies

Minimum Rust version: 1.6

Crates.io will not allow you to upload a package with a wildcard dependency. In other words, dependencies like this are rejected:

[dependencies]
regex = "*"

A wildcard dependency means that your code claims to work with any possible version of the dependency. This is highly unlikely to be true, and would cause unnecessary breakage in the ecosystem.

Instead, depend on a version range. For example, ^ is the default, so you could use

[dependencies]
regex = "1.0.0"

instead. >, <=, and all of the other, non-* ranges work as well.

Documentation

In this chapter of the guide, we discuss a few improvements to documentation. A notable addition here is the second edition of "the book".

New editions of "the book"

Minimum Rust version: 1.18 for drafts of the second edition

Minimum Rust version: 1.26 for the final version of the second edition

Minimum Rust version: 1.28 for drafts of the 2018 edition

We've distributed a copy of "The Rust Programming Language," affectionately nicknamed "the book", with every version of Rust since Rust 1.0.

However, because it was written before Rust 1.0, it started showing its age. Many parts of the book are vague, because it was written before the true details were nailed down for the 1.0 release. It didn't do a fantastic job of teaching lifetimes.

Starting with Rust 1.18, we shipped drafts of a second edition of the book. The final version was shipped with Rust 1.26. The new edition is a complete re-write from the ground up, using the last two years of knowledge we’ve gained from teaching people Rust. You’ll find brand-new explanations for a lot of Rust’s core concepts, new projects to build, and all kinds of other good stuff. Please check it out and let us know what you think!

You can also purchase a dead-tree version from No Starch Press. Now that the print version has shipped, the second edition is frozen.

The names are a bit confusing though, because the "second edition" of the book is the first printed edition of the book. As such, we decided that newer editions of the book will correspond with newer editions of Rust itself, and so starting with 1.28, we've been shipping drafts of the next version, the 2018 Edition. It's still pretty close to the second edition, but contains information about newer features since the book's content was frozen. We'll be continuing to update this edition until we decide to print a second edition in paper.

The Rust Bookshelf

Minimum Rust version: various, each book is different.

As Rust's documentation has grown, we've gained far more than just "The book" and the reference. We now have a collection of various long-form docs, nicknamed "the Rust Bookshelf." Different resources are added at various times, and we're adding new ones as more get written.

The Cargo book

Minimum Rust version: 1.21

Historically, Cargo’s docs were hosted on http://doc.crates.io, which doesn’t follow the release train model, even though Cargo itself does. This led to situations where a feature would land in Cargo nightly, the docs would be updated, and then for up to twelve weeks, users would think that it should work, but it wouldn’t yet. https://doc.rust-lang.org/cargo is the new home of Cargo’s docs, and http://doc.crates.io now redirects there.

The rustdoc book

Minimum Rust version: 1.21

Rustdoc, our documentation tool, now has a guide at https://doc.rust-lang.org/rustdoc.

Rust By Example

Minimum Rust version: 1.25

Rust by Example used to live at https://rustbyexample.com, but now is part of the Bookshelf! It can be found at https://doc.rust-lang.org/rust-by-example/. RBE lets you learn Rust through short code examples and exercises, as opposed to the lengthy prose of The Book.

The Rustonomicon

Minimum Rust version: 1.3

We now have a draft book, The Rustonomicon: the Dark Arts of Advanced and Unsafe Rust Programming.

From the title, I'm sure you can guess: this book discusses some advanced topics, including unsafe. It's a must-read for anyone who's working at the lowest levels with Rust.

std::os has documentation for all platforms

Minimum Rust version: 1.21

The std::os module contains operating system specific functionality. You’ll now see documentation for more platforms than just Linux, the platform we build the documentation on.

We’ve long regretted that the hosted version of the documentation has been Linux-specific; this is a first step towards rectifying that. This is specific to the standard library and not for general use; we hope to improve this further in the future.

rustdoc

In this chapter of the guide, we discuss a few improvements to rustdoc. A notable addition to it was that documentation tests can now compile-fail.

Documentation tests can now compile-fail

Minimum Rust version: 1.22

You can now create compile-fail tests in Rustdoc, like this:

/// ```compile_fail
/// let x = 5;
/// x += 2; // shouldn't compile!
/// ```

Please note that these kinds of tests can be more fragile than others, as additions to Rust may cause code to compile when it previously would not. Consider the first release with ?, for example: code using ? would fail to compile on Rust 1.21, but compile successfully on Rust 1.22, causing your test suite to start failing.

Rustdoc uses CommonMark

Minimum Rust version: 1.25 for support by default

Minimum Rust version: 1.23 for support via a flag

Rustdoc lets you write documentation comments in Markdown. At Rust 1.0, we were using the hoedown markdown implementation, written in C. Markdown is more of a family of implementations of an idea, and so hoedown had its own dialect, like many parsers. The CommonMark project has attempted to define a more strict version of Markdown, and so now, Rustdoc uses it by default.

As of Rust 1.23, we still defaulted to hoedown, but you could enable CommonMark via a flag, --enable-commonmark. Today, we only support CommonMark.

Platform and target support

In this chapter of the guide, we discuss a few improvements to platform and target support. A notable addition to it was that the libcore library now works on stable Rust.

libcore for low-level Rust

Minimum Rust version: 1.6

Rust’s standard library is two-tiered: there’s a small core library, libcore, and the full standard library, libstd, that builds on top of it. libcore is completely platform agnostic, and requires only a handful of external symbols to be defined. Rust’s libstd builds on top of libcore, adding support for things like memory allocation and I/O. Applications using Rust in the embedded space, as well as those writing operating systems, often eschew libstd, using only libcore.

As an additional note, while building libraries with libcore is supported today, building full applications is not yet stable.

To use libcore, add this flag to your crate root:

#![no_std]

This will remove the standard library, and bring the core crate into your namespace for use:

#![no_std]

use core::cell::Cell;

You can find libcore's documentation here.

WebAssembly support

Minimum Rust version: 1.14 for emscripten

Minimum Rust version: nightly for wasm32-unknown-unknown

Rust has gained support for WebAssembly, meaning that you can run Rust code in your browser, client-side.

In Rust 1.14, we gained support through emscripten. With it installed, you can write Rust code and have it produce asm.js (the precursor to wasm) and/or WebAssembly.

Here's an example of using this support:

$ rustup target add wasm32-unknown-emscripten
$ echo 'fn main() { println!("Hello, Emscripten!"); }' > hello.rs
$ rustc --target=wasm32-unknown-emscripten hello.rs
$ node hello.js

However, in the meantime, Rust has also grown its own support, independent from Emscripten. This is known as "the unknown target", because instead of wasm32-unknown-emscripten, it's wasm32-unknown-unknown. This will be the preferred target to use once it's ready, but for now, it's really only well-supported in nightly.

Global allocators

Minimum Rust version: 1.28

Allocators are the way that programs in Rust obtain memory from the system at runtime. Previously, Rust did not allow changing the way memory is obtained, which prevented some use cases. On some platforms, this meant using jemalloc, on others, the system allocator, but there was no way for users to control this key component. With 1.28.0, the #[global_allocator] attribute is now stable, which allows Rust programs to set their allocator to the system allocator, as well as define new allocators by implementing the GlobalAlloc trait.

The default allocator for Rust programs on some platforms is jemalloc. The standard library now provides a handle to the system allocator, which can be used to switch to the system allocator when desired, by declaring a static and marking it with the #[global_allocator] attribute.

use std::alloc::System;

#[global_allocator]
static GLOBAL: System = System;

fn main() {
    let mut v = Vec::new();
    // This will allocate memory using the system allocator.
    v.push(1);
}

However, sometimes you want to define a custom allocator for a given application domain. This is also relatively easy to do by implementing the GlobalAlloc trait. You can read more about how to do this in the documentation.
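As a sketch of what implementing GlobalAlloc looks like, here is a hypothetical allocator that delegates to the system allocator while counting the bytes requested; the names CountingAllocator and ALLOCATED are made up for this example:

```rust
use std::alloc::{GlobalAlloc, Layout, System};
use std::sync::atomic::{AtomicUsize, Ordering};

// Tracks the total number of bytes requested from the allocator.
static ALLOCATED: AtomicUsize = AtomicUsize::new(0);

struct CountingAllocator;

unsafe impl GlobalAlloc for CountingAllocator {
    unsafe fn alloc(&self, layout: Layout) -> *mut u8 {
        ALLOCATED.fetch_add(layout.size(), Ordering::SeqCst);
        System.alloc(layout) // delegate the real work to the system allocator
    }

    unsafe fn dealloc(&self, ptr: *mut u8, layout: Layout) {
        System.dealloc(ptr, layout)
    }
}

#[global_allocator]
static GLOBAL: CountingAllocator = CountingAllocator;

fn main() {
    let v = vec![0u8; 64]; // allocates through CountingAllocator
    println!("at least {} bytes allocated", ALLOCATED.load(Ordering::SeqCst));
    drop(v);
}
```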

MSVC toolchain support

Minimum Rust version: 1.2

At the release of Rust 1.0, we only supported the GNU toolchain on Windows. With the release of Rust 1.2, we introduced initial support for the MSVC toolchain. After that, as support matured, we eventually made it the default choice for Windows users.

The difference between the two matters for interacting with C. If you're using a library built with one toolchain or another, you need to match that with the appropriate Rust toolchain. If you're not sure, go with MSVC; it's the default for good reason.

To use this feature, simply use Rust on Windows, and the installer will default to it. If you'd prefer to switch to the GNU toolchain, you can install it with Rustup:

$ rustup toolchain install stable-x86_64-pc-windows-gnu

MUSL support for fully static binaries

Minimum Rust version: 1.1

By default, Rust will statically link all Rust code. However, if you use the standard library, it will dynamically link to the system's libc implementation.

If you'd like a 100% static binary, the MUSL libc can be used on Linux.

Installing MUSL support

To add support for MUSL, you need to choose the correct target. The forge has a full list of supported targets, including several that use MUSL.

If you're not sure which one you want, it's probably x86_64-unknown-linux-musl, the target for 64-bit Linux. We'll be using this target in this guide, but the instructions are the same for other targets; just substitute the target name.

To add this target, use rustup:

$ rustup target add x86_64-unknown-linux-musl

This will install support for the default toolchain; to install for other toolchains, add the --toolchain flag. For example:

$ rustup target add x86_64-unknown-linux-musl --toolchain=nightly

Building with MUSL

To use this new target, pass the --target flag to Cargo:

$ cargo build --target x86_64-unknown-linux-musl

The binary produced will now be built with MUSL!

cdylib crates for C interoperability

Minimum Rust version: 1.10 for rustc

Minimum Rust version: 1.11 for cargo

If you're producing a library that you intend to be used from C (or another language through a C FFI), there's no need for Rust to include Rust-specific metadata in the final object code. For libraries like that, use the cdylib crate type in your Cargo.toml:

[lib]
crate-type = ["cdylib"]

This will produce a smaller binary, with no Rust-specific information inside of it.
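A cdylib typically exposes functions with the C ABI so that C callers can link against it. As a minimal sketch (the function name `add` is just an example), such a library's lib.rs might contain:

```rust
// Exported with the C ABI and an unmangled symbol name, so that
// C code can declare it as `int32_t add(int32_t, int32_t);` and call it.
#[no_mangle]
pub extern "C" fn add(a: i32, b: i32) -> i32 {
    a + b
}
```

The `#[no_mangle]` attribute keeps the symbol name predictable, and `extern "C"` selects the C calling convention; without both, the function would not be usable from C.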

Unstable feature status

Language

| Feature                    | Status                                | Minimum Edition |
|----------------------------|---------------------------------------|-----------------|
| impl Trait                 | Shipped, 1.26                         | 2015            |
| Basic slice patterns       | Shipped, 1.26                         | 2015            |
| Default match bindings     | Shipped, 1.26                         | 2015            |
| Anonymous lifetimes        | Shipped, 1.26                         | 2015            |
| dyn Trait                  | Shipped, 1.27                         | 2015            |
| SIMD support               | Shipped, 1.27                         | 2015            |
| ? in main/tests            | Shipping, 1.26 and 1.28               | 2015            |
| In-band lifetimes          | Unstable; tracking issue              | 2015            |
| Lifetime elision in impls  | Unstable; tracking issue              | 2015            |
| Non-lexical lifetimes      | Implemented but not ready for preview | 2015            |
| T: 'a inference in structs | Unstable; tracking issue              | 2015            |
| Raw identifiers            | Unstable; tracking issue              | ?               |
| Import macros via use      | Unstable; tracking issue              | ?               |
| Module system path changes | Unstable; tracking issue              | 2018            |

While some of these features are already available in Rust 2015, they are tracked here because they are being promoted as part of the Rust 2018 edition. Accordingly, they will be discussed in subsequent sections of this guide. The features marked as "Shipped" are all available today in stable Rust, so you can start using them right now!

Standard library

| Feature                  | Status           |
|--------------------------|------------------|
| Custom global allocators | Will ship in 1.28 |

Tooling

| Tool        | Status                                                          |
|-------------|-----------------------------------------------------------------|
| RLS 1.0     | Feature-complete; see 1.0 milestone                             |
| rustfmt 1.0 | Finalizing spec; 1.0 milestone, style guide RFC, stability RFC  |
| Clippy 1.0  | RFC                                                             |

Documentation

| Tool          | Status                         |
|---------------|--------------------------------|
| Edition Guide | Initial draft complete         |
| TRPL          | Updated as features stabilize  |

Web site

The visual design is being finalized, and early rounds of content brainstorming are complete.