diff --git a/.travis.yml b/.travis.yml index 45ba4a890..d7f8f8f42 100644 --- a/.travis.yml +++ b/.travis.yml @@ -9,6 +9,7 @@ install: - bash ci/install.sh script: - mdbook build +- mdbook test notifications: email: on_success: never diff --git a/README.md b/README.md index 7e468fbb0..07d1749e8 100644 --- a/README.md +++ b/README.md @@ -30,8 +30,8 @@ To help prevent accidentally introducing broken links, we use the invoke this link checker, otherwise it will emit a warning saying it couldn't be found. -``` -$ cargo install mdbook-linkcheck +```bash +> cargo install mdbook-linkcheck ``` You will need `mdbook` version `>= 0.1`. `linkcheck` will be run automatically when you run `mdbook build`. diff --git a/src/appendix-background.md b/src/appendix-background.md index b49ad6d52..285d74477 100644 --- a/src/appendix-background.md +++ b/src/appendix-background.md @@ -21,7 +21,7 @@ all the remainder. Only at the end of the block is there the possibility of branching to more than one place (in MIR, we call that final statement the **terminator**): -``` +```mir bb0: { statement0; statement1; @@ -34,7 +34,7 @@ bb0: { Many expressions that you are used to in Rust compile down to multiple basic blocks. For example, consider an if statement: -```rust +```rust,ignore a = 1; if some_variable { b = 1; @@ -46,7 +46,7 @@ d = 1; This would compile into four basic blocks: -``` +```mir BB0: { a = 1; if some_variable { goto BB1 } else { goto BB2 } diff --git a/src/appendix-code-index.md b/src/appendix-code-index.md index 49fe08ee3..62edd0f5b 100644 --- a/src/appendix-code-index.md +++ b/src/appendix-code-index.md @@ -6,17 +6,17 @@ compiler. 
Item | Kind | Short description | Chapter | Declaration ----------------|----------|-----------------------------|--------------------|------------------- -`CodeMap` | struct | The CodeMap maps the AST nodes to their source code | [The parser] | [src/libsyntax/codemap.rs](https://github.com/rust-lang/rust/blob/master/src/libsyntax/codemap.rs) -`CompileState` | struct | State that is passed to a callback at each compiler pass | [The Rustc Driver] | [src/librustc_driver/driver.rs](https://github.com/rust-lang/rust/blob/master/src/librustc_driver/driver.rs) +`CodeMap` | struct | The CodeMap maps the AST nodes to their source code | [The parser] | [src/libsyntax/codemap.rs](https://doc.rust-lang.org/nightly/nightly-rustc/syntax/codemap/struct.CodeMap.html) +`CompileState` | struct | State that is passed to a callback at each compiler pass | [The Rustc Driver] | [src/librustc_driver/driver.rs](https://doc.rust-lang.org/nightly/nightly-rustc/rustc_driver/driver/struct.CompileState.html) `DocContext` | struct | A state container used by rustdoc when crawling through a crate to gather its documentation | [Rustdoc] | [src/librustdoc/core.rs](https://github.com/rust-lang/rust/blob/master/src/librustdoc/core.rs) -`ast::Crate` | struct | Syntax-level representation of a parsed crate | [The parser] | [src/librustc/hir/mod.rs](https://github.com/rust-lang/rust/blob/master/src/libsyntax/ast.rs) -`hir::Crate` | struct | More abstract, compiler-friendly form of a crate's AST | [The Hir] | [src/librustc/hir/mod.rs](https://github.com/rust-lang/rust/blob/master/src/librustc/hir/mod.rs) -`ParseSess` | struct | This struct contains information about a parsing session | [the Parser] | [src/libsyntax/parse/mod.rs](https://github.com/rust-lang/rust/blob/master/src/libsyntax/parse/mod.rs) -`Session` | struct | The data associated with a compilation session | [the Parser], [The Rustc Driver] | 
[src/librustc/session/mod.html](https://github.com/rust-lang/rust/blob/master/src/librustc/session/mod.rs) -`StringReader` | struct | This is the lexer used during parsing. It consumes characters from the raw source code being compiled and produces a series of tokens for use by the rest of the parser | [The parser] | [src/libsyntax/parse/lexer/mod.rs](https://github.com/rust-lang/rust/blob/master/src/libsyntax/parse/lexer/mod.rs) -`TraitDef` | struct | This struct contains a trait's definition with type information | [The `ty` modules] | [src/librustc/ty/trait_def.rs](https://github.com/rust-lang/rust/blob/master/src/librustc/ty/trait_def.rs) -`Ty<'tcx>` | struct | This is the internal representation of a type used for type checking | [Type checking] | [src/librustc/ty/mod.rs](https://github.com/rust-lang/rust/blob/master/src/librustc/ty/mod.rs) -`TyCtxt<'cx, 'tcx, 'tcx>` | type | The "typing context". This is the central data structure in the compiler. It is the context that you use to perform all manner of queries. 
| [The `ty` modules] | [src/librustc/ty/context.rs](https://github.com/rust-lang/rust/blob/master/src/librustc/ty/context.rs) +`ast::Crate` | struct | Syntax-level representation of a parsed crate | [The parser] | [src/libsyntax/ast.rs](https://doc.rust-lang.org/nightly/nightly-rustc/syntax/ast/struct.Crate.html) +`hir::Crate` | struct | More abstract, compiler-friendly form of a crate's AST | [The Hir] | [src/librustc/hir/mod.rs](https://doc.rust-lang.org/nightly/nightly-rustc/rustc/hir/struct.Crate.html) +`ParseSess` | struct | This struct contains information about a parsing session | [the Parser] | [src/libsyntax/parse/mod.rs](https://doc.rust-lang.org/nightly/nightly-rustc/syntax/parse/struct.ParseSess.html) +`Session` | struct | The data associated with a compilation session | [the Parser], [The Rustc Driver] | [src/librustc/session/mod.rs](https://doc.rust-lang.org/nightly/nightly-rustc/rustc/session/struct.Session.html) +`StringReader` | struct | This is the lexer used during parsing. It consumes characters from the raw source code being compiled and produces a series of tokens for use by the rest of the parser | [The parser] | [src/libsyntax/parse/lexer/mod.rs](https://doc.rust-lang.org/nightly/nightly-rustc/syntax/parse/lexer/struct.StringReader.html) +`TraitDef` | struct | This struct contains a trait's definition with type information | [The `ty` modules] | [src/librustc/ty/trait_def.rs](https://doc.rust-lang.org/nightly/nightly-rustc/rustc/ty/trait_def/struct.TraitDef.html) +`Ty<'tcx>` | struct | This is the internal representation of a type used for type checking | [Type checking] | [src/librustc/ty/mod.rs](https://doc.rust-lang.org/nightly/nightly-rustc/rustc/ty/type.Ty.html) +`TyCtxt<'cx, 'tcx, 'tcx>` | type | The "typing context". This is the central data structure in the compiler. It is the context that you use to perform all manner of queries. 
| [The `ty` modules] | [src/librustc/ty/context.rs](https://doc.rust-lang.org/nightly/nightly-rustc/rustc/ty/struct.TyCtxt.html) [The HIR]: hir.html [The parser]: the-parser.html diff --git a/src/appendix-stupid-stats.md b/src/appendix-stupid-stats.md index 8e50b2c31..842a2a328 100644 --- a/src/appendix-stupid-stats.md +++ b/src/appendix-stupid-stats.md @@ -3,7 +3,7 @@ > **Note:** This is a copy of `@nrc`'s amazing [stupid-stats]. You should find > a copy of the code on the GitHub repository although due to the compiler's > constantly evolving nature, there is no guarantee it'll compile on the first -> go. +> go. Many tools benefit from being a drop-in replacement for a compiler. By this, I mean that any user of the tool can use `mytool` in all the ways they would @@ -87,14 +87,16 @@ in [librustc_back](https://github.com/rust-lang/rust/tree/master/src/librustc_ba (which also contains some things used primarily during translation). All these phases are coordinated by the driver. To see the exact sequence, look -at the `compile_input` function in [librustc_driver/driver.rs](https://github.com/rust-lang/rust/tree/master/src/librustc_driver/driver.rs). -The driver (which is found in [librust_driver](https://github.com/rust-lang/rust/tree/master/src/librustc_driver)) -handles all the highest level coordination of compilation - handling command -line arguments, maintaining compilation state (primarily in the `Session`), and -calling the appropriate code to run each phase of compilation. It also handles -high level coordination of pretty printing and testing. To create a drop-in -compiler replacement or a compiler replacement, we leave most of compilation -alone and customise the driver using its APIs. +at [the `compile_input` function in `librustc_driver`][compile-input]. +The driver handles all the highest level coordination of compilation - + 1. handling command-line arguments + 2. maintaining compilation state (primarily in the `Session`) + 3. 
calling the appropriate code to run each phase of compilation + 4. handling high-level coordination of pretty printing and testing +To create a drop-in compiler replacement or a custom compiler, +we leave most of compilation alone and customise the driver using its APIs. + +[compile-input]: https://doc.rust-lang.org/nightly/nightly-rustc/rustc_driver/driver/fn.compile_input.html ## The driver customisation APIs @@ -111,7 +113,7 @@ between phases. `CompilerCalls` is a trait that you implement in your tool. It contains a fairly ad-hoc set of methods to hook in to the process of processing command line arguments and driving the compiler. For details, see the comments in -[librustc_driver/lib.rs](https://github.com/rust-lang/rust/tree/master/src/librustc_driver/lib.rs). +[librustc_driver/lib.rs](https://doc.rust-lang.org/nightly/nightly-rustc/rustc_driver/index.html). I'll summarise the methods here. `early_callback` and `late_callback` let you call arbitrary code at different @@ -175,7 +177,7 @@ foo.rs` (assuming you have a Rust program called `foo.rs`. You can also pass any command line arguments that you would normally pass to rustc). When you run it you'll see output similar to -``` +```text In crate: foo, Found 12 uses of `println!`; @@ -203,7 +205,7 @@ should dump stupid-stats' stdout to Cargo's stdout). Let's start with the `main` function for our tool, it is pretty simple: -``` +```rust,ignore fn main() { let args: Vec<_> = std::env::args().collect(); rustc_driver::run_compiler(&args, &mut StupidCalls::new()); @@ -221,7 +223,7 @@ this tool different from rustc. `StupidCalls` is a mostly empty struct: -``` +```rust,ignore struct StupidCalls { default_calls: RustcDefaultCalls, } @@ -236,7 +238,7 @@ to keep Cargo happy. Most of the rest of the impl of `CompilerCalls` is trivial: -``` +```rust,ignore impl<'a> CompilerCalls<'a> for StupidCalls { fn early_callback(&mut self, _: &getopts::Matches, @@ -298,7 +300,7 @@ tool does its actual work by walking the AST. 
We do that by creating an AST visitor and making it walk the AST from the top (the crate root). Once we've walked the crate, we print the stats we've collected: -``` +```rust,ignore fn build_controller(&mut self, _: &Session) -> driver::CompileController<'a> { // We mostly want to do what rustc does, which is what basic() will return. let mut control = driver::CompileController::basic(); @@ -338,7 +340,7 @@ That is all it takes to create your own drop-in compiler replacement or custom compiler! For the sake of completeness I'll go over the rest of the stupid-stats tool. -``` +```rust struct StupidVisitor { println_count: usize, arg_counts: Vec<usize>, @@ -353,7 +355,7 @@ methods, these walk the AST taking no action. We override `visit_item` and functions, modules, traits, structs, and so forth, we're only interested in functions) and macros: -``` +```rust,ignore impl<'v> visit::Visitor<'v> for StupidVisitor { fn visit_item(&mut self, i: &'v ast::Item) { match i.node { diff --git a/src/compiletest.md b/src/compiletest.md index 011304d45..363c12d3b 100644 --- a/src/compiletest.md +++ b/src/compiletest.md @@ -61,7 +61,8 @@ which takes a single argument (which, in this case is a value of 1). (rather than the current Rust default of 101 at the time of this writing). The header command and the argument list (if present) are typically separated by a colon: -``` + +```rust,ignore // Copyright 2018 The Rust Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. diff --git a/src/const-eval.md b/src/const-eval.md index 4a05255c6..70c946f17 100644 --- a/src/const-eval.md +++ b/src/const-eval.md @@ -35,4 +35,4 @@ integer or fat pointer, it will directly yield the value (via `Value::ByVal` or memory allocation (via `Value::ByRef`). This means that the `const_eval` function cannot be used to create miri-pointers to the evaluated constant or static. 
If you need that, you need to directly work with the functions in -[src/librustc_mir/interpret/const_eval.rs](https://github.com/rust-lang/rust/blob/master/src/librustc_mir/interpret/const_eval.rs). +[src/librustc_mir/interpret/const_eval.rs](https://doc.rust-lang.org/nightly/nightly-rustc/rustc_mir/interpret/const_eval/). diff --git a/src/conventions.md b/src/conventions.md index 96571301f..89a986789 100644 --- a/src/conventions.md +++ b/src/conventions.md @@ -21,7 +21,7 @@ tidy script runs automatically when you do `./x.py test`. All files must begin with the following copyright notice: -``` +```rust // Copyright 2012-2013 The Rust Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. @@ -48,7 +48,7 @@ tests -- it can be necessary to exempt yourself from this limit. In that case, you can add a comment towards the top of the file (after the copyright notice) like so: -``` +```rust // ignore-tidy-linelength ``` @@ -61,7 +61,7 @@ Prefer 4-space indent. # Coding for correctness Beyond formatting, there are a few other tips that are worth -following. +following. ## Prefer exhaustive matches diff --git a/src/high-level-overview.md b/src/high-level-overview.md index 041136548..9f3b63a54 100644 --- a/src/high-level-overview.md +++ b/src/high-level-overview.md @@ -19,7 +19,7 @@ compilation improves, that may change.) The dependency structure of these crates is roughly a diamond: -``` +```text rustc_driver / | \ / | \ diff --git a/src/hir.md b/src/hir.md index f66468ffc..3d2fbede3 100644 --- a/src/hir.md +++ b/src/hir.md @@ -12,8 +12,8 @@ This chapter covers the main concepts of the HIR. 
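One of the main things HIR lowering does relative to the AST is desugaring: surface constructs are rewritten into a smaller core language. As a hand-written illustration (the function names are ours, and this is not actual compiler output), a `for` loop behaves roughly like a `loop` that drives the iterator by hand:

```rust
// Sketch of HIR-style desugaring: `sum_for` uses a surface `for` loop;
// `sum_desugared` spells out (approximately) the `loop`/`match` form that
// lowering produces. Both compute the same sum.
fn sum_for(n: u32) -> u32 {
    let mut total = 0;
    for i in 0..n {
        total += i;
    }
    total
}

fn sum_desugared(n: u32) -> u32 {
    let mut total = 0;
    let mut iter = std::iter::IntoIterator::into_iter(0..n);
    loop {
        match iter.next() {
            Some(i) => total += i,
            None => break,
        }
    }
    total
}

fn main() {
    // Both forms agree; later passes only ever see the desugared one.
    assert_eq!(sum_for(10), sum_desugared(10));
}
```

The point is that passes running on the HIR never need a `for`-loop case: they only see the simpler `loop`/`match` form.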
You can view the HIR representation of your code by passing the `-Zunpretty=hir-tree` flag to rustc: -``` -cargo rustc -- -Zunpretty=hir-tree +```bash +> cargo rustc -- -Zunpretty=hir-tree ``` ### Out-of-band storage and the `Crate` type diff --git a/src/how-to-build-and-run.md b/src/how-to-build-and-run.md index 6e292934b..535823dfd 100644 --- a/src/how-to-build-and-run.md +++ b/src/how-to-build-and-run.md @@ -70,8 +70,8 @@ Once you've created a config.toml, you are now ready to run `x.py`. There are a lot of options here, but let's start with what is probably the best "go to" command for building a local rust: -``` -./x.py build -i --stage 1 src/libstd +```bash +> ./x.py build -i --stage 1 src/libstd ``` What this command will do is the following: @@ -106,7 +106,7 @@ will execute the stage2 compiler (which we did not build, but which you will likely need to build at some point; for example, if you want to run the entire test suite). -``` +```bash > rustup toolchain link stage1 build/<host-triple>/stage1 > rustup toolchain link stage2 build/<host-triple>/stage2 ``` @@ -115,7 +115,7 @@ Now you can run the rustc you built with. If you run with `-vV`, you should see a version number ending in `-dev`, indicating a build from your local environment: -``` +```bash > rustc +stage1 -vV rustc 1.25.0-dev binary: rustc diff --git a/src/incrcomp-debugging.md b/src/incrcomp-debugging.md index 261b46eb0..2488aa320 100644 --- a/src/incrcomp-debugging.md +++ b/src/incrcomp-debugging.md @@ -10,7 +10,7 @@ As an example, see `src/test/compile-fail/dep-graph-caller-callee.rs`. The idea is that you can annotate a test like: -```rust +```rust,ignore #[rustc_if_this_changed] fn foo() { } @@ -48,7 +48,7 @@ the graph. 
You can filter in three ways: To filter, use the `RUST_DEP_GRAPH_FILTER` environment variable, which should look like one of the following: -``` +```text source_filter // nodes originating from source_filter -> target_filter // nodes that can reach target_filter source_filter -> target_filter // nodes in between source_filter and target_filter @@ -58,14 +58,14 @@ source_filter -> target_filter // nodes in between source_filter and target_filt A node is considered to match a filter if all of those strings appear in its label. So, for example: -``` +```text RUST_DEP_GRAPH_FILTER='-> TypeckTables' ``` would select the predecessors of all `TypeckTables` nodes. Usually though you want the `TypeckTables` node for some particular fn, so you might write: -``` +```text RUST_DEP_GRAPH_FILTER='-> TypeckTables & bar' ``` @@ -75,7 +75,7 @@ with `bar` in their name. Perhaps you are finding that when you change `foo` you need to re-type-check `bar`, but you don't think you should have to. In that case, you might do: -``` +```text RUST_DEP_GRAPH_FILTER='Hir & foo -> TypeckTables & bar' ``` @@ -105,8 +105,10 @@ check of `bar` and you don't think there should be. You dump the dep-graph as described in the previous section and open `dep-graph.txt` to see something like: - Hir(foo) -> Collect(bar) - Collect(bar) -> TypeckTables(bar) +```text +Hir(foo) -> Collect(bar) +Collect(bar) -> TypeckTables(bar) +``` That first edge looks suspicious to you. So you set `RUST_FORBID_DEP_GRAPH_EDGE` to `Hir&foo -> Collect&bar`, re-run, and diff --git a/src/macro-expansion.md b/src/macro-expansion.md index 95ea64f19..ba807faf2 100644 --- a/src/macro-expansion.md +++ b/src/macro-expansion.md @@ -15,7 +15,7 @@ expansion works. It's helpful to have an example to refer to. For the remainder of this chapter, whenever we refer to the "example _definition_", we mean the following: -```rust +```rust,ignore macro_rules! 
printer { (print $mvar:ident) => { println!("{}", $mvar); @@ -45,7 +45,7 @@ worrying about _where_. For more information about tokens, see the Whenever we refer to the "example _invocation_", we mean the following snippet: -```rust +```rust,ignore printer!(print foo); // Assume `foo` is a variable defined somewhere else... ``` @@ -65,7 +65,7 @@ defined in [`src/libsyntax/ext/tt/macro_parser.rs`][code_mp]. The interface of the macro parser is as follows (this is slightly simplified): -```rust +```rust,ignore fn parse( sess: ParserSession, tts: TokenStream, @@ -156,7 +156,7 @@ TODO [code_dir]: https://github.com/rust-lang/rust/tree/master/src/libsyntax/ext/tt -[code_mp]: https://github.com/rust-lang/rust/tree/master/src/libsyntax/ext/tt/macro_parser.rs -[code_mr]: https://github.com/rust-lang/rust/tree/master/src/libsyntax/ext/tt/macro_rules.rs -[code_parse_int]: https://github.com/rust-lang/rust/blob/a97cd17f5d71fb4ec362f4fbd79373a6e7ed7b82/src/libsyntax/ext/tt/macro_parser.rs#L421 +[code_mp]: https://doc.rust-lang.org/nightly/nightly-rustc/syntax/ext/tt/macro_parser/ +[code_mr]: https://doc.rust-lang.org/nightly/nightly-rustc/syntax/ext/tt/macro_rules/ +[code_parse_int]: https://doc.rust-lang.org/nightly/nightly-rustc/syntax/ext/tt/macro_parser/fn.parse.html [parsing]: ./the-parser.html diff --git a/src/method-lookup.md b/src/method-lookup.md index ac0e427db..5aafb6abf 100644 --- a/src/method-lookup.md +++ b/src/method-lookup.md @@ -8,13 +8,13 @@ the code itself, naturally. One way to think of method lookup is that we convert an expression of the form: -```rust +```rust,ignore receiver.method(...) ``` into a more explicit [UFCS] form: -```rust +```rust,ignore Trait::method(ADJ(receiver), ...) // for a trait call ReceiverType::method(ADJ(receiver), ...) // for an inherent method call ``` @@ -24,7 +24,7 @@ autoderefs and then possibly an autoref (e.g., `&**receiver`). 
However we sometimes do other adjustments and coercions along the way, in particular unsizing (e.g., converting from `[T; n]` to `[T]`). -Method lookup is divided into two major phases: +Method lookup is divided into two major phases: 1. Probing ([`probe.rs`][probe]). The probe phase is when we decide what method to call and how to adjust the receiver. @@ -38,8 +38,8 @@ cacheable across method-call sites. Therefore, it does not include inference variables or other information. [UFCS]: https://github.com/rust-lang/rfcs/blob/master/text/0132-ufcs.md -[probe]: https://github.com/rust-lang/rust/blob/master/src/librustc_typeck/check/method/probe.rs -[confirm]: https://github.com/rust-lang/rust/blob/master/src/librustc_typeck/check/method/confirm.rs +[probe]: https://doc.rust-lang.org/nightly/nightly-rustc/rustc_typeck/check/method/probe/ +[confirm]: https://doc.rust-lang.org/nightly/nightly-rustc/rustc_typeck/check/method/confirm/ ## The Probe phase @@ -51,7 +51,7 @@ until it cannot be deref'd anymore, as well as applying an optional "unsize" step. So if the receiver has type `Rc<Box<[T; 3]>>`, this might yield: -```rust +```rust,ignore Rc<Box<[T; 3]>> Box<[T; 3]> [T; 3] @@ -99,9 +99,10 @@ So, let's continue our example. Imagine that we were calling a method that defines it with `&self` for the type `Rc` as well as a method on the type `Box` that defines `Foo` but with `&mut self`. Then we might have two candidates: - - &Rc> from the impl of `Foo` for `Rc` where `U=Box - &mut Box<[T; 3]>> from the inherent impl on `Box` where `U=[T; 3]` +```text +&Rc<Box<[T; 3]>> from the impl of `Foo` for `Rc<U>` where `U=Box<[T; 3]>` +&mut Box<[T; 3]> from the inherent impl on `Box<U>` where `U=[T; 3]` +``` ### Candidate search diff --git a/src/mir-passes.md b/src/mir-passes.md index cd05edfb8..64e72f06e 100644 --- a/src/mir-passes.md +++ b/src/mir-passes.md @@ -52,13 +52,13 @@ fn main() { The files have names like `rustc.main.000-000.CleanEndRegions.after.mir`. 
These names have a number of parts: -``` +```text rustc.main.000-000.CleanEndRegions.after.mir ---- --- --- --------------- ----- either before or after | | | name of the pass | | index of dump within the pass (usually 0, but some passes dump intermediate states) | index of the pass - def-path to the function etc being dumped + def-path to the function etc being dumped ``` You can also make more selective filters. For example, `main & CleanEndRegions` @@ -159,7 +159,7 @@ ensuring that the reads have already happened (remember that [queries are memoized](./query.html), so executing a query twice simply loads from a cache the second time): -``` +```text mir_const(D) --read-by--> mir_const_qualif(D) | ^ stolen-by | @@ -172,6 +172,6 @@ This mechanism is a bit dodgy. There is a discussion of more elegant alternatives in [rust-lang/rust#41710]. [rust-lang/rust#41710]: https://github.com/rust-lang/rust/issues/41710 -[mirtransform]: https://github.com/rust-lang/rust/tree/master/src/librustc_mir/transform/mod.rs -[`NoLandingPads`]: https://github.com/rust-lang/rust/tree/master/src/librustc_mir/transform/no_landing_pads.rs +[mirtransform]: https://doc.rust-lang.org/nightly/nightly-rustc/rustc_mir/transform/ +[`NoLandingPads`]: https://doc.rust-lang.org/nightly/nightly-rustc/rustc_mir/transform/no_landing_pads/struct.NoLandingPads.html [MIR visitor]: mir-visitor.html diff --git a/src/mir-regionck.md b/src/mir-regionck.md index 529b42395..4158b7d38 100644 --- a/src/mir-regionck.md +++ b/src/mir-regionck.md @@ -34,7 +34,7 @@ The MIR-based region analysis consists of two major functions: are used. - More details to come, though the [NLL RFC] also includes fairly thorough (and hopefully readable) coverage. 
- + [fvb]: appendix-background.html#free-vs-bound [NLL RFC]: http://rust-lang.github.io/rfcs/2094-nll.html @@ -82,7 +82,7 @@ The kinds of region elements are as follows: corresponds (intuitively) to some unknown set of other elements -- for details on skolemization, see the section [skolemization and universes](#skol). - + ## Causal tracking *to be written* -- describe how we can extend the values of a variable @@ -97,7 +97,7 @@ The kinds of region elements are as follows: From time to time we have to reason about regions that we can't concretely know. For example, consider this program: -```rust +```rust,ignore // A function that needs a static reference fn foo(x: &'static u32) { } @@ -122,10 +122,12 @@ stack, for example). But *how* do we reject it and *why*? When we type-check `main`, and in particular the call `bar(foo)`, we are going to wind up with a subtyping relationship like this one: - fn(&'static u32) <: for<'a> fn(&'a u32) - ---------------- ------------------- - the type of `foo` the type `bar` expects - +```text +fn(&'static u32) <: for<'a> fn(&'a u32) +---------------- ------------------- +the type of `foo` the type `bar` expects +``` + We handle this sort of subtyping by taking the variables that are bound in the supertype and **skolemizing** them: this means that we replace them with @@ -135,8 +137,10 @@ regions" -- they represent, basically, "some unknown region". Once we've done that replacement, we have the following relation: - fn(&'static u32) <: fn(&'!1 u32) - +```text +fn(&'static u32) <: fn(&'!1 u32) +``` + The key idea here is that this unknown region `'!1` is not related to any other regions. 
So if we can prove that the subtyping relationship is true for `'!1`, then it ought to be true for any region, which is @@ -147,7 +151,9 @@ subtypes, we check if their arguments have the desired relationship (fn arguments are [contravariant](./appendix-background.html#variance), so we swap the left and right here): - &'!1 u32 <: &'static u32 +```text +&'!1 u32 <: &'static u32 +``` According to the basic subtyping rules for a reference, this will be true if `'!1: 'static`. That is -- if "some unknown region `!1`" lives @@ -168,7 +174,7 @@ put generic type parameters into this root universe (in this sense, there is not just one root universe, but one per item). So consider this function `bar`: -```rust +```rust,ignore struct Foo { } fn bar<'a, T>(t: &'a T) { @@ -185,7 +191,7 @@ Basically, the root universe contains all the names that Now let's extend `bar` a bit by adding a variable `x`: -```rust +```rust,ignore fn bar<'a, T>(t: &'a T) { let x: for<'b> fn(&'b u32) = ...; } @@ -195,7 +201,7 @@ Here, the name `'b` is not part of the root universe. Instead, when we "enter" into this `for<'b>` (e.g., by skolemizing it), we will create a child universe of the root, let's call it U1: -``` +```text U0 (root universe) │ └─ U1 (child universe) @@ -207,7 +213,7 @@ with a new name, which we are identifying by its universe number: Now let's extend `bar` a bit by adding one more variable, `y`: -```rust +```rust,ignore fn bar<'a, T>(t: &'a T) { let x: for<'b> fn(&'b u32) = ...; let y: for<'c> fn(&'b u32) = ...; @@ -218,7 +224,7 @@ When we enter *this* type, we will again create a new universe, which we'll call `U2`. Its parent will be the root universe, and U1 will be its sibling: -``` +```text U0 (root universe) │ ├─ U1 (child universe) @@ -257,11 +263,11 @@ children, that inference variable X would have to be in U0. And since X is in U0, it cannot name anything from U1 (or U2). 
This is perhaps easiest to see by using a kind of generic "logic" example: -``` +```text exists<X> { forall<Y> { ... /* Y is in U1 ... */ } forall<Z> { ... /* Z is in U2 ... */ } -} +} ``` Here, the only way for the two foralls to interact would be through X, @@ -290,8 +296,10 @@ does not say region elements **will** appear. In the region inference engine, outlives constraints have the form: - V1: V2 @ P - +```text +V1: V2 @ P +``` + where `V1` and `V2` are region indices, and hence map to some region variable (which may be universally or existentially quantified). The `P` here is a "point" in the control-flow graph; it's not important @@ -338,8 +346,10 @@ for universal regions from the fn signature.) Put another way, the "universal regions" check can be considered to be checking constraints like: - {skol(1)}: V1 - +```text +{skol(1)}: V1 +``` + where `{skol(1)}` is like a constant set, and V1 is the variable we made to represent the `!1` region. @@ -348,30 +358,40 @@ made to represent the `!1` region. OK, so far so good. Now let's walk through what would happen with our first example: - fn(&'static u32) <: fn(&'!1 u32) @ P // this point P is not imp't here +```text +fn(&'static u32) <: fn(&'!1 u32) @ P // this point P is not imp't here +``` The region inference engine will create a region element domain like this: - { CFG; end('static); skol(1) } - --- ------------ ------- from the universe `!1` - | 'static is always in scope - all points in the CFG; not especially relevant here +```text +{ CFG; end('static); skol(1) } + --- ------------ ------- from the universe `!1` + | 'static is always in scope + all points in the CFG; not especially relevant here +``` It will always create two universal variables, one representing `'static` and one representing `'!1`. Let's call them Vs and V1. 
They will have initial values like so: - Vs = { CFG; end('static) } // it is in U0, so can't name anything else - V1 = { skol(1) } - +```text +Vs = { CFG; end('static) } // it is in U0, so can't name anything else +V1 = { skol(1) } +``` + From the subtyping constraint above, we would have an outlives constraint like - '!1: 'static @ P +```text +'!1: 'static @ P +``` To process this, we would grow the value of V1 to include all of Vs: - Vs = { CFG; end('static) } - V1 = { CFG; end('static), skol(1) } +```text +Vs = { CFG; end('static) } +V1 = { CFG; end('static), skol(1) } +``` At that point, constraint propagation is complete, because all the outlives relationships are satisfied. Then we would go to the "check @@ -385,34 +405,44 @@ In this case, `V1` *did* grow too large -- it is not known to outlive What about this subtyping relationship? - for<'a> fn(&'a u32, &'a u32) - <: - for<'b, 'c> fn(&'b u32, &'c u32) - -Here we would skolemize the supertype, as before, yielding: +```text +for<'a> fn(&'a u32, &'a u32) + <: +for<'b, 'c> fn(&'b u32, &'c u32) +``` + +Here we would skolemize the supertype, as before, yielding: + +```text +for<'a> fn(&'a u32, &'a u32) + <: +fn(&'!1 u32, &'!2 u32) +``` - for<'a> fn(&'a u32, &'a u32) - <: - fn(&'!1 u32, &'!2 u32) - then we instantiate the variable on the left-hand side with an existential in universe U2, yielding the following (`?n` is a notation for an existential variable): - fn(&'?3 u32, &'?3 u32) - <: - fn(&'!1 u32, &'!2 u32) - +```text +fn(&'?3 u32, &'?3 u32) + <: +fn(&'!1 u32, &'!2 u32) +``` + Then we break this down further: - &'!1 u32 <: &'?3 u32 - &'!2 u32 <: &'?3 u32 - +```text +&'!1 u32 <: &'?3 u32 +&'!2 u32 <: &'?3 u32 +``` + and even further, yield up our region constraints: - '!1: '?3 - '!2: '?3 - +```text +'!1: '?3 +'!2: '?3 +``` + Note that, in this case, both `'!1` and `'!2` have to outlive the variable `'?3`, but the variable `'?3` is not forced to outlive anything else. 
Therefore, it simply starts and ends as the empty set @@ -430,15 +460,17 @@ common lifetime of our arguments. -nmatsakis) [ohdeargoditsallbroken]: https://github.com/rust-lang/rust/issues/32330#issuecomment-202536977 -## Final example +## Final example Let's look at one last example. We'll extend the previous one to have a return type: - for<'a> fn(&'a u32, &'a u32) -> &'a u32 - <: - for<'b, 'c> fn(&'b u32, &'c u32) -> &'b u32 - +```text +for<'a> fn(&'a u32, &'a u32) -> &'a u32 + <: +for<'b, 'c> fn(&'b u32, &'c u32) -> &'b u32 +``` + Despite seeming very similar to the previous example, this case is going to get an error. That's good: the problem is that we've gone from a fn that promises to return one of its two arguments, to a fn that is promising to return the @@ -446,45 +478,59 @@ first one. That is unsound. Let's see how it plays out. First, we skolemize the supertype: - for<'a> fn(&'a u32, &'a u32) -> &'a u32 - <: - fn(&'!1 u32, &'!2 u32) -> &'!1 u32 - +```text +for<'a> fn(&'a u32, &'a u32) -> &'a u32 + <: +fn(&'!1 u32, &'!2 u32) -> &'!1 u32 +``` + Then we instantiate the subtype with existentials (in U2): - fn(&'?3 u32, &'?3 u32) -> &'?3 u32 - <: - fn(&'!1 u32, &'!2 u32) -> &'!1 u32 - +```text +fn(&'?3 u32, &'?3 u32) -> &'?3 u32 + <: +fn(&'!1 u32, &'!2 u32) -> &'!1 u32 +``` + And now we create the subtyping relationships: - &'!1 u32 <: &'?3 u32 // arg 1 - &'!2 u32 <: &'?3 u32 // arg 2 - &'?3 u32 <: &'!1 u32 // return type - +```text +&'!1 u32 <: &'?3 u32 // arg 1 +&'!2 u32 <: &'?3 u32 // arg 2 +&'?3 u32 <: &'!1 u32 // return type +``` + And finally the outlives relationships. 
Here, let V1, V2, and V3 be the variables we assign to `!1`, `!2`, and `?3` respectively:
- V1: V3
- V2: V3
- V3: V1
-
+```text
+V1: V3
+V2: V3
+V3: V1
+```
+
Those variables will have these initial values:
- V1 in U1 = {skol(1)}
- V2 in U2 = {skol(2)}
- V3 in U2 = {}
-
+```text
+V1 in U1 = {skol(1)}
+V2 in U2 = {skol(2)}
+V3 in U2 = {}
+```
+
Now because of the `V3: V1` constraint, we have to add `skol(1)` into `V3` (and indeed it is visible from `V3`), so we get:
- V3 in U2 = {skol(1)}
-
+```text
+V3 in U2 = {skol(1)}
+```
+
then we have this constraint `V2: V3`, so we wind up having to enlarge `V2` to include `skol(1)` (which it can also see):
- V2 in U2 = {skol(1), skol(2)}
-
+```text
+V2 in U2 = {skol(1), skol(2)}
+```
+
Now constraint propagation is done, but when we check the outlives relationships, we find that `V2` includes this new element `skol(1)`, so we report an error.
diff --git a/src/mir-visitor.md b/src/mir-visitor.md
index 3a8b06c54..265769b34 100644
--- a/src/mir-visitor.md
+++ b/src/mir-visitor.md
@@ -7,13 +7,13 @@ them, generated via a single macro: `Visitor` (which operates on a `&Mir` and gives back shared references) and `MutVisitor` (which operates on a `&mut Mir` and gives back mutable references).
-[m-v]: https://github.com/rust-lang/rust/tree/master/src/librustc/mir/visit.rs
+[m-v]: https://doc.rust-lang.org/nightly/nightly-rustc/rustc/mir/visit/index.html
To implement a visitor, you have to create a type that represents your visitor. Typically, this type wants to "hang on" to whatever state you will need while processing MIR:
-```rust
+```rust,ignore
struct MyVisitor<...> {
tcx: TyCtxt<'cx, 'tcx, 'tcx>,
...
@@ -22,10 +22,10 @@ struct MyVisitor<...> {
and you then implement the `Visitor` or `MutVisitor` trait for that type:
-```rust
+```rust,ignore
impl<'tcx> MutVisitor<'tcx> for NoLandingPads {
fn visit_foo(&mut self, ...) {
- // ...
+ ...
self.super_foo(...);
}
}
@@ -41,7 +41,7 @@ A very simple example of a visitor can be found in [`NoLandingPads`]. That visitor doesn't even require any state: it just visits all terminators and removes their `unwind` successors.
-[`NoLandingPads`]: https://github.com/rust-lang/rust/tree/master/src/librustc_mir/transform/no_landing_pads.rs
+[`NoLandingPads`]: https://doc.rust-lang.org/nightly/nightly-rustc/rustc_mir/transform/no_landing_pads/struct.NoLandingPads.html
## Traversal
@@ -50,6 +50,6 @@ contains useful functions for walking the MIR CFG in [different standard orders][traversal] (e.g. pre-order, reverse post-order, and so forth).
-[t]: https://github.com/rust-lang/rust/tree/master/src/librustc/mir/traversal.rs
+[t]: https://doc.rust-lang.org/nightly/nightly-rustc/rustc/mir/traversal/index.html
[traversal]: https://en.wikipedia.org/wiki/Tree_traversal
diff --git a/src/mir.md b/src/mir.md
index 58479cf96..da468acf4 100644
--- a/src/mir.md
+++ b/src/mir.md
@@ -69,12 +69,12 @@ fn main() { You should see something like:
-```
+```mir
// WARNING: This output format is intended for human consumers only
// and is subject to change without notice. Knock yourself out.
fn main() -> () {
...
-}
+}
```
This is the MIR format for the `main` function.
@@ -82,7 +82,7 @@ This is the MIR format for the `main` function. **Variable declarations.** If we drill in a bit, we'll see it begins with a bunch of variable declarations. They look like this:
-```
+```mir
let mut _0: (); // return place
scope 1 {
let mut _1: std::vec::Vec<i32>; // "vec" in scope 1 at src/main.rs:2:9: 2:16
@@ -107,8 +107,8 @@ program (which names were in scope when). it may look slightly different when you view it, and I am ignoring some of the comments):
-```
-bb0: {
+```mir
+bb0: {
StorageLive(_1);
_1 = const <std::vec::Vec<i32>>::new() -> bb2;
}
@@ -117,7 +117,7 @@ A basic block is defined by a series of **statements** and a final **terminator**.
In this case, there is one statement:
-```
+```mir
StorageLive(_1);
```
@@ -129,7 +129,7 @@ allocate stack space. The **terminator** of the block `bb0` is the call to `Vec::new`:
-```
+```mir
_1 = const <std::vec::Vec<i32>>::new() -> bb2;
```
@@ -142,8 +142,8 @@ possible, and hence we list only one successor block, `bb2`. If we look ahead to `bb2`, we will see it looks like this:
-```
-bb2: {
+```mir
+bb2: {
StorageLive(_3);
_3 = &mut _1;
_2 = const <std::vec::Vec<i32>>::push(move _3, const 1i32) -> [return: bb3, unwind: bb4];
@@ -153,13 +153,13 @@ Here there are two statements: another `StorageLive`, introducing the `_3` temporary, and then an assignment:
-```
+```mir
_3 = &mut _1;
```
Assignments in general have the form:
-```
+```text
<place> = <rvalue>
```
@@ -169,7 +169,7 @@ value: in this case, the rvalue is a mutable borrow expression, which looks like `&mut <place>`. So we can kind of define a grammar for rvalues like so:
-```
+```text
<rvalue>  = & (mut)? <place>
          | <operand> + <operand>
          | <operand> - <operand>
@@ -178,7 +178,7 @@ rvalues like so:
<operand> = Constant
          | copy Place
          | move Place
-```
+```
As you can see from this grammar, rvalues cannot be nested -- they can only reference places and constants. Moreover, when you use a place,
@@ -188,7 +188,7 @@ for a place of any type). So, for example, if we had the expression `x = a + b + c` in Rust, that would get compiled to two statements and a temporary:
-```
+```mir
TMP1 = a + b
x = TMP1 + c
```
@@ -214,14 +214,14 @@ but [you can read about those below](#promoted)). we pass around `BasicBlock` values, which are [newtype'd] indices into this vector.
- **Statements** are represented by the type `Statement`.
-- **Terminators** are represented by the `Terminator`.
+- **Terminators** are represented by the `Terminator`.
- **Locals** are represented by a [newtype'd] index type `Local`. The data for a local variable is found in the `Mir` (the `local_decls` vector). There is also a special constant `RETURN_PLACE` identifying the special "local" representing the return value.
- **Places** are identified by the enum `Place`.
There are a few variants:
- Local variables like `_1`
- - Static variables `FOO`
+ - Static variables `FOO`
- **Projections**, which are fields or other things that "project out" from a base place. So e.g. the place `_1.f` is a projection, with `f` being the "projection element" and `_1` being the base
diff --git a/src/miri.md b/src/miri.md
index 4b0a600be..be9587408 100644
--- a/src/miri.md
+++ b/src/miri.md
@@ -14,7 +14,7 @@ placed into metadata. Once you have a use-site like
-```rust
+```rust,ignore
type Foo = [u8; FOO - 42];
```
@@ -24,7 +24,7 @@ create items that use the type (locals, constants, function arguments, ...). To obtain the (in this case empty) parameter environment, one can call `let param_env = tcx.param_env(length_def_id);`. The `GlobalId` needed is
-```rust
+```rust,ignore
let gid = GlobalId {
promoted: None,
instance: Instance::mono(length_def_id),
@@ -112,7 +112,7 @@ to a pointer to `b`. Although the main entry point to constant evaluation is the `tcx.const_eval` query, there are additional functions in
-[librustc_mir/interpret/const_eval.rs](https://github.com/rust-lang/rust/blob/master/src/librustc_mir/interpret/const_eval.rs)
+[librustc_mir/interpret/const_eval.rs](https://doc.rust-lang.org/nightly/nightly-rustc/rustc_mir/interpret/const_eval/)
that allow accessing the fields of a `Value` (`ByRef` or otherwise). You should never have to access an `Allocation` directly except for translating it to the compilation target (at the moment just LLVM).
diff --git a/src/query.md b/src/query.md
index 500b9dec8..2c518ee55 100644
--- a/src/query.md
+++ b/src/query.md
@@ -41,7 +41,7 @@ To invoke a query is simple. The tcx ("type context") offers a method for each defined query. So, for example, to invoke the `type_of` query, you would just do this:
-```rust
+```rust,ignore
let ty = tcx.type_of(some_def_id);
```
@@ -59,7 +59,7 @@ better user experience. In order to recover from a cycle, you don't get to use the nice method-call-style syntax.
Instead, you invoke using the `try_get` method, which looks roughly like this: -```rust +```rust,ignore use ty::maps::queries; ... match queries::type_of::try_get(tcx, DUMMY_SP, self.did) { @@ -87,7 +87,7 @@ will be reported due to this cycle by some other bit of code. In that case, you can invoke `err.cancel()` to not emit any error. It is traditional to then invoke: -``` +```rust,ignore tcx.sess.delay_span_bug(some_span, "some message") ``` @@ -126,7 +126,7 @@ on how that works). Providers always have the same signature: -```rust +```rust,ignore fn provider<'cx, 'tcx>(tcx: TyCtxt<'cx, 'tcx, 'tcx>, key: QUERY_KEY) -> QUERY_RESULT @@ -146,7 +146,7 @@ When the tcx is created, it is given the providers by its creator using the `Providers` struct. This struct is generated by the macros here, but it is basically a big list of function pointers: -```rust +```rust,ignore struct Providers { type_of: for<'cx, 'tcx> fn(TyCtxt<'cx, 'tcx, 'tcx>, DefId) -> Ty<'tcx>, ... @@ -163,7 +163,7 @@ throughout the other `rustc_*` crates. This is done by invoking various `provide` functions. These functions tend to look something like this: -```rust +```rust,ignore pub fn provide(providers: &mut Providers) { *providers = Providers { type_of, @@ -180,7 +180,7 @@ before.) So, if we want to add a provider for some other query, let's call it `fubar`, into the crate above, we might modify the `provide()` function like so: -```rust +```rust,ignore pub fn provide(providers: &mut Providers) { *providers = Providers { type_of, @@ -189,7 +189,7 @@ pub fn provide(providers: &mut Providers) { }; } -fn fubar<'cx, 'tcx>(tcx: TyCtxt<'cx, 'tcx>, key: DefId) -> Fubar<'tcx> { .. } +fn fubar<'cx, 'tcx>(tcx: TyCtxt<'cx, 'tcx>, key: DefId) -> Fubar<'tcx> { ... } ``` N.B. 
Most of the `rustc_*` crates only provide **local @@ -216,9 +216,9 @@ the big macro invocation in changed by the time you read this README, but at present it looks something like: -[maps-mod]: https://github.com/rust-lang/rust/blob/master/src/librustc/ty/maps/mod.rs +[maps-mod]: https://doc.rust-lang.org/nightly/nightly-rustc/rustc/ty/maps/index.html -``` +```rust,ignore define_maps! { <'tcx> /// Records the type of every item. [] fn type_of: TypeOfItem(DefId) -> Ty<'tcx>, @@ -229,7 +229,7 @@ define_maps! { <'tcx> Each line of the macro defines one query. The name is broken up like this: -``` +```rust,ignore [] fn type_of: TypeOfItem(DefId) -> Ty<'tcx>, ^^ ^^^^^^^ ^^^^^^^^^^ ^^^^^ ^^^^^^^^ | | | | | @@ -270,7 +270,7 @@ Let's go over them one by one: of `Steal` for more details. New uses of `Steal` should **not** be added without alerting `@rust-lang/compiler`. -[dep-node]: https://github.com/rust-lang/rust/blob/master/src/librustc/dep_graph/dep_node.rs +[dep-node]: https://doc.rust-lang.org/nightly/nightly-rustc/rustc/dep_graph/struct.DepNode.html So, to add a query: @@ -288,7 +288,7 @@ describing the query. Each such struct implements the key/value of that particular query. Basically the code generated looks something like this: -```rust +```rust,ignore // Dummy struct representing a particular kind of query: pub struct type_of<'tcx> { phantom: PhantomData<&'tcx ()> } @@ -306,7 +306,7 @@ this trait is optional if the query key is `DefId`, but if you *don't* implement it, you get a pretty generic error ("processing `foo`..."). You can put new impls into the `config` module. 
They look something like this: -```rust +```rust,ignore impl<'tcx> QueryDescription for queries::type_of<'tcx> { fn describe(tcx: TyCtxt, key: DefId) -> String { format!("computing the type of `{}`", tcx.item_path_str(key)) diff --git a/src/rustc-driver.md b/src/rustc-driver.md index 23a036e73..af3c3c099 100644 --- a/src/rustc-driver.md +++ b/src/rustc-driver.md @@ -67,10 +67,10 @@ pervasive lifetimes. The `rustc::ty::tls` module is used to access these thread-locals, although you should rarely need to touch it. -[`rustc_driver`]: https://github.com/rust-lang/rust/tree/master/src/librustc_driver -[`CompileState`]: https://github.com/rust-lang/rust/blob/master/src/librustc_driver/driver.rs -[`Session`]: https://github.com/rust-lang/rust/blob/master/src/librustc/session/mod.rs -[`TyCtxt`]: https://github.com/rust-lang/rust/blob/master/src/librustc/ty/context.rs -[`CodeMap`]: https://github.com/rust-lang/rust/blob/master/src/libsyntax/codemap.rs +[`rustc_driver`]: https://doc.rust-lang.org/nightly/nightly-rustc/rustc_driver/ +[`CompileState`]: https://doc.rust-lang.org/nightly/nightly-rustc/rustc_driver/driver/struct.CompileState.html +[`Session`]: https://doc.rust-lang.org/nightly/nightly-rustc/rustc/session/struct.Session.html +[`TyCtxt`]: https://doc.rust-lang.org/nightly/nightly-rustc/rustc/ty/struct.TyCtxt.html +[`CodeMap`]: https://doc.rust-lang.org/nightly/nightly-rustc/syntax/codemap/struct.CodeMap.html [stupid-stats]: https://github.com/nrc/stupid-stats [Appendix A]: appendix-stupid-stats.html \ No newline at end of file diff --git a/src/rustdoc.md b/src/rustdoc.md index e075087fc..36195c3a5 100644 --- a/src/rustdoc.md +++ b/src/rustdoc.md @@ -70,7 +70,7 @@ The main process of crate crawling is done in `clean/mod.rs` through several implementations of the `Clean` trait defined within. 
This is a conversion trait, which defines one method:
-```rust
+```rust,ignore
pub trait Clean<T> {
fn clean(&self, cx: &DocContext) -> T;
}
diff --git a/src/tests/adding.md b/src/tests/adding.md
index f777a458c..ab5a0adc1 100644
--- a/src/tests/adding.md
+++ b/src/tests/adding.md
@@ -100,7 +100,7 @@ are normally put after the short comment that explains the point of this test. For example, this test uses the `// compile-flags` command to specify a custom flag to give to rustc when the test is compiled:
-```rust
+```rust,ignore
// Copyright 2017 The Rust Project Developers. blah blah blah.
// ...
// except according to those terms.
@@ -198,7 +198,7 @@ incremental, though incremental tests are somewhat different). Revisions allow a single test file to be used for multiple tests. This is done by adding a special header at the top of the file:
-```
+```rust
// revisions: foo bar baz
```
@@ -211,7 +211,7 @@ You can also customize headers and expected error messages to a particular revision. To do this, add `[foo]` (or `bar`, `baz`, etc) after the `//` comment, like so:
-```
+```rust
// A flag to pass in only for cfg `foo`:
//[foo]compile-flags: -Z verbose
@@ -284,7 +284,7 @@ between platforms, mainly about filenames: Sometimes these built-in normalizations are not enough. In such cases, you may provide custom normalization rules using the header commands, e.g.
-```
+```rust
// normalize-stdout-test: "foo" -> "bar"
// normalize-stderr-32bit: "fn\(\) \(32 bits\)" -> "fn\(\) \($$PTR bits\)"
// normalize-stderr-64bit: "fn\(\) \(64 bits\)" -> "fn\(\) \($$PTR bits\)"
@@ -298,7 +298,7 @@ default regex flavor provided by the `regex` crate. The corresponding reference file will use the normalized output to test both 32-bit and 64-bit platforms:
-```
+```text
...
| = note: source type: fn() ($PTR bits) diff --git a/src/tests/running.md b/src/tests/running.md index 4e8c71590..03eb8ccc4 100644 --- a/src/tests/running.md +++ b/src/tests/running.md @@ -3,8 +3,8 @@ You can run the tests using `x.py`. The most basic command -- which you will almost never want to use! -- is as follows: -``` -./x.py test +```bash +> ./x.py test ``` This will build the full stage 2 compiler and then run the whole test @@ -17,7 +17,7 @@ The test results are cached and previously successful tests are `ignored` during testing. The stdout/stderr contents as well as a timestamp file for every test can be found under `build/ARCH/test/`. To force-rerun a test (e.g. in case the test runner fails to notice -a change) you can simply remove the timestamp file. +a change) you can simply remove the timestamp file. ## Running a subset of the test suites @@ -27,7 +27,7 @@ test" that can be used after modifying rustc to see if things are generally working correctly would be the following: ```bash -./x.py test --stage 1 src/test/{ui,compile-fail,run-pass} +> ./x.py test --stage 1 src/test/{ui,compile-fail,run-pass} ``` This will run the `ui`, `compile-fail`, and `run-pass` test suites, @@ -37,7 +37,7 @@ example, if you are hacking on debuginfo, you may be better off with the debuginfo test suite: ```bash -./x.py test --stage 1 src/test/debuginfo +> ./x.py test --stage 1 src/test/debuginfo ``` **Warning:** Note that bors only runs the tests with the full stage 2 @@ -51,8 +51,8 @@ Another common thing that people want to do is to run an **individual test**, often the test they are trying to fix. One way to do this is to invoke `x.py` with the `--test-args` option: -``` -./x.py test --stage 1 src/test/ui --test-args issue-1234 +```bash +> ./x.py test --stage 1 src/test/ui --test-args issue-1234 ``` Under the hood, the test runner invokes the standard rust test runner @@ -62,8 +62,8 @@ filtering for tests that include "issue-1234" in the name. 
Often, though, it's easier to just run the test by hand. Most tests are just `rs` files, so you can do something like -``` -rustc +stage1 src/test/ui/issue-1234.rs +```bash +> rustc +stage1 src/test/ui/issue-1234.rs ``` This is much faster, but doesn't always work. For example, some tests diff --git a/src/the-parser.md b/src/the-parser.md index 456f0a9ea..623a38e67 100644 --- a/src/the-parser.md +++ b/src/the-parser.md @@ -34,9 +34,9 @@ all the information needed while parsing, as well as the `CodeMap` itself. [libsyntax]: https://github.com/rust-lang/rust/tree/master/src/libsyntax [rustc_errors]: https://github.com/rust-lang/rust/tree/master/src/librustc_errors [ast]: https://en.wikipedia.org/wiki/Abstract_syntax_tree -[`CodeMap`]: https://github.com/rust-lang/rust/blob/master/src/libsyntax/codemap.rs -[ast module]: https://github.com/rust-lang/rust/blob/master/src/libsyntax/ast.rs +[`CodeMap`]: https://doc.rust-lang.org/nightly/nightly-rustc/syntax/codemap/struct.CodeMap.html +[ast module]: https://doc.rust-lang.org/nightly/nightly-rustc/syntax/ast/index.html [parser module]: https://github.com/rust-lang/rust/tree/master/src/libsyntax/parse -[`Parser`]: https://github.com/rust-lang/rust/blob/master/src/libsyntax/parse/parser.rs -[`StringReader`]: https://github.com/rust-lang/rust/blob/master/src/libsyntax/parse/lexer/mod.rs -[visit module]: https://github.com/rust-lang/rust/blob/master/src/libsyntax/visit.rs +[`Parser`]: https://doc.rust-lang.org/nightly/nightly-rustc/syntax/parse/parser/struct.Parser.html +[`StringReader`]: https://doc.rust-lang.org/nightly/nightly-rustc/syntax/parse/lexer/struct.StringReader.html +[visit module]: https://doc.rust-lang.org/nightly/nightly-rustc/syntax/visit/index.html diff --git a/src/trait-caching.md b/src/trait-caching.md index 4c728d521..4b7d7e096 100644 --- a/src/trait-caching.md +++ b/src/trait-caching.md @@ -26,7 +26,7 @@ possible impl is this one, with def-id 22: [selection process]: ./trait-resolution.html#selection 
-```rust
+```rust,ignore
impl Foo for usize { ... } // Impl #22
```
diff --git a/src/trait-hrtb.md b/src/trait-hrtb.md
index d677db2c5..7f77f274c 100644
--- a/src/trait-hrtb.md
+++ b/src/trait-hrtb.md
@@ -18,14 +18,14 @@ trait Foo { Let's say we have a function `want_hrtb` that wants a type which implements `Foo<&'a isize>` for any `'a`:
-```rust
+```rust,ignore
fn want_hrtb<T>() where T : for<'a> Foo<&'a isize> { ... }
```
Now we have a struct `AnyInt` that implements `Foo<&'a isize>` for any `'a`:
-```rust
+```rust,ignore
struct AnyInt;
impl<'a> Foo<&'a isize> for AnyInt { }
```
@@ -71,7 +71,7 @@ set for `'0` is `{'0, '$a}`, and hence the check will succeed. Let's consider a failure case. Imagine we also have a struct
-```rust
+```rust,ignore
struct StaticInt;
impl Foo<&'static isize> for StaticInt;
```
diff --git a/src/trait-resolution.md b/src/trait-resolution.md
index 7494d969a..5bf8f8716 100644
--- a/src/trait-resolution.md
+++ b/src/trait-resolution.md
@@ -13,13 +13,13 @@ see [*this* traits chapter](./traits.html). Trait resolution is the process of pairing up an impl with each reference to a trait. So, for example, if there is a generic function like:
-```rust
-fn clone_slice<T: Clone>(x: &[T]) -> Vec<T> { /*...*/ }
+```rust,ignore
+fn clone_slice<T: Clone>(x: &[T]) -> Vec<T> { ... }
```
and then a call to that function:
-```rust
+```rust,ignore
let v: Vec<isize> = clone_slice(&[1, 2, 3])
```
@@ -30,7 +30,7 @@ Note that in some cases, like generic functions, we may not be able to find a specific impl, but we can figure out that the caller must provide an impl. For example, consider the body of `clone_slice`:
-```rust
+```rust,ignore
fn clone_slice<T: Clone>(x: &[T]) -> Vec<T> {
let mut v = Vec::new();
for e in &x {
@@ -143,7 +143,7 @@ otherwise the result is considered ambiguous. This process is easier if we work through some examples.
Consider the following trait:
-```rust
+```rust,ignore
trait Convert<Target> {
fn convert(&self) -> Target;
}
@@ -154,14 +154,14 @@ converts from the (implicit) `Self` type to the `Target` type. If we wanted to permit conversion between `isize` and `usize`, we might implement `Convert` like so:
-```rust
-impl Convert<usize> for isize { /*...*/ } // isize -> usize
-impl Convert<isize> for usize { /*...*/ } // usize -> isize
+```rust,ignore
+impl Convert<usize> for isize { ... } // isize -> usize
+impl Convert<isize> for usize { ... } // usize -> isize
```
Now imagine there is some code like the following:
-```rust
+```rust,ignore
let x: isize = ...;
let y = x.convert();
```
@@ -186,7 +186,7 @@ inference? But what happens if there are multiple impls where all the types unify? Consider this example:
-```rust
+```rust,ignore
trait Get {
fn get(&self) -> Self;
}
@@ -224,11 +224,11 @@ the same trait (or some subtrait) and which can match against the obligation. Consider this simple example:
-```rust
+```rust,ignore
trait A1 {
fn do_a1(&self);
}
-trait A2 : A1 { /*...*/ }
+trait A2 : A1 { ... }
trait B {
fn do_b(&self);
@@ -256,13 +256,13 @@ values found in the obligation, possibly yielding a type error. Suppose we have the following variation of the `Convert` example in the previous section:
-```rust
+```rust,ignore
trait Convert<Target> {
fn convert(&self) -> Target;
}
-impl Convert<usize> for isize { /*...*/ } // isize -> usize
-impl Convert<isize> for usize { /*...*/ } // usize -> isize
+impl Convert<usize> for isize { ... } // isize -> usize
+impl Convert<isize> for usize { ... } // usize -> isize
let x: isize = ...;
let y: char = x.convert(); // NOTE: `y: char` now!
```
@@ -296,11 +296,11 @@ everything out. Here is an example:
-```rust
-trait Foo<X> { /*...*/ }
-impl<X, T: Bar<X>> Foo<X> for Vec<T> { /*...*/ }
-impl Bar<u32> for isize { /*...*/ }
+```rust,ignore
+trait Foo<X> { ... }
+impl<X, T: Bar<X>> Foo<X> for Vec<T> { ... }
+
+impl Bar<u32> for isize { ...
}
```
After one shallow round of selection for an obligation like `Vec
diff --git a/src/traits-associated-types.md b/src/traits-associated-types.md
index c91dc255f..e2dd94d5a 100644
--- a/src/traits-associated-types.md
+++ b/src/traits-associated-types.md
@@ -29,10 +29,10 @@ that is, simplified -- based on the types given in an impl. So, to continue with our example, the impl of `IntoIterator` for `Option<T>` declares (among other things) that `Item = T`:
-```rust
+```rust,ignore
impl<T> IntoIterator for Option<T> {
type Item = T;
- ..
+ ...
}
```
@@ -51,9 +51,11 @@ In our logic, normalization is defined by a predicate impls. For example, the `impl` of `IntoIterator` for `Option<T>` that we saw above would be lowered to a program clause like so:
- forall<T> {
- Normalize(<Option<T> as IntoIterator>::Item -> T)
- }
+```text
+forall<T> {
+ Normalize(<Option<T> as IntoIterator>::Item -> T)
+}
+```
(An aside: since we do not permit quantification over traits, this is really more like a family of predicates, one for each associated
@@ -67,7 +69,7 @@ we've seen so far. Sometimes however we want to work with associated types that cannot be normalized. For example, consider this function:
-```rust
+```rust,ignore
fn foo<T: IntoIterator>(...) { ... }
```
@@ -99,20 +101,24 @@ consider an associated type projection equal to another type?": We now introduce the `ProjectionEq` predicate to bring those two cases together. The `ProjectionEq` predicate looks like so:
- ProjectionEq(<T as IntoIterator>::Item = U)
+```text
+ProjectionEq(<T as IntoIterator>::Item = U)
+```
and we will see that it can be proven *either* via normalization or skolemization.
As part of lowering an associated type declaration from some trait, we create two program clauses for `ProjectionEq`:
- forall<T, U> {
- ProjectionEq(<T as IntoIterator>::Item = U) :-
- Normalize(<T as IntoIterator>::Item -> U)
- }
+```text
+forall<T, U> {
+ ProjectionEq(<T as IntoIterator>::Item = U) :-
+ Normalize(<T as IntoIterator>::Item -> U)
+}
- forall<T> {
- ProjectionEq(<T as IntoIterator>::Item = (IntoIterator::Item)<T>)
- }
+forall<T> {
+ ProjectionEq(<T as IntoIterator>::Item = (IntoIterator::Item)<T>)
+}
+```
These are the only two `ProjectionEq` program clauses we ever make for any given associated item.
@@ -124,7 +130,9 @@ with unification. As described in the [type inference](./type-inference.html) section, unification is basically a procedure with a signature like this:
- Unify(A, B) = Result<(Subgoals, RegionConstraints), NoSolution>
+```text
+Unify(A, B) = Result<(Subgoals, RegionConstraints), NoSolution>
+```
In other words, we try to unify two things A and B. That procedure might just fail, in which case we get back `Err(NoSolution)`. This
diff --git a/src/traits-canonical-queries.md b/src/traits-canonical-queries.md
index b291a2898..3c4bb1bfe 100644
--- a/src/traits-canonical-queries.md
+++ b/src/traits-canonical-queries.md
@@ -19,12 +19,16 @@ In a traditional Prolog system, when you start a query, the solver will run off and start supplying you with every possible answer it can find. So given something like this:
- ?- Vec<i32>: AsRef<?U>
+```text
+?- Vec<i32>: AsRef<?U>
+```
The solver might answer:
- Vec<i32>: AsRef<[i32]>
- continue? (y/n)
+```text
+Vec<i32>: AsRef<[i32]>
+ continue? (y/n)
+```
This `continue` bit is interesting. The idea in Prolog is that the solver is finding **all possible** instantiations of your query that
@@ -35,34 +39,42 @@ response with our original query -- Rust's solver gives back a substitution instead). If we were to hit `y`, the solver might then give us another possible answer:
- Vec<i32>: AsRef<Vec<i32>>
- continue? (y/n)
+```text
+Vec<i32>: AsRef<Vec<i32>>
+ continue? (y/n)
+```
This answer derives from the fact that there is a reflexive impl (`impl<T> AsRef<T> for T`) for `AsRef`.
If we were to hit `y` again, then we might get back a negative response:
- no
+```text
+no
+```
Naturally, in some cases, there may be no possible answers, and hence the solver will just give me back `no` right away:
- ?- Box<i32>: Copy
- no
+```text
+?- Box<i32>: Copy
+ no
+```
In some cases, there might be an infinite number of responses. So for example if I gave this query, and I kept hitting `y`, then the solver would never stop giving me back answers:
- ?- Vec<?U>: Clone
- Vec<i32>: Clone
- continue? (y/n)
- Vec<Box<i32>>: Clone
- continue? (y/n)
- Vec<Box<Box<i32>>>: Clone
- continue? (y/n)
- Vec<Box<Box<Box<i32>>>>: Clone
- continue? (y/n)
+```text
+?- Vec<?U>: Clone
+ Vec<i32>: Clone
+ continue? (y/n)
+ Vec<Box<i32>>: Clone
+ continue? (y/n)
+ Vec<Box<Box<i32>>>: Clone
+ continue? (y/n)
+ Vec<Box<Box<Box<i32>>>>: Clone
+ continue? (y/n)
+```
As you can imagine, the solver will gleefully keep adding another layer of `Box` until we ask it to stop, or it runs out of memory.
@@ -70,12 +82,16 @@ layer of `Box` until we ask it to stop, or it runs out of memory. Another interesting thing is that queries might still have variables in them. For example:
- ?- Rc<?T>: Clone
+```text
+?- Rc<?T>: Clone
+```
might produce the answer:
- Rc<?T>: Clone
- continue? (y/n)
+```text
+Rc<?T>: Clone
+ continue? (y/n)
+```
After all, `Rc<?T>` is true **no matter what type `?T` is**.
@@ -132,7 +148,7 @@ impls; among them, there are these two (for clarity, I've written the
[borrow]: https://doc.rust-lang.org/std/borrow/trait.Borrow.html
-```rust
+```rust,ignore
impl<T> Borrow<T> for T where T: ?Sized
impl<T> Borrow<[T]> for Vec<T> where T: Sized
```
@@ -140,7 +156,7 @@ impl<T> Borrow<[T]> for Vec<T> where T: Sized **Example 1.** Imagine we are type-checking this (rather artificial) bit of code:
-```rust
+```rust,ignore
fn foo<A, B>(a: A, vec_b: Option<B>) where A: Borrow<B> { }
fn main() {
@@ -185,7 +201,7 @@ other sources, in which case we can try the trait query again.
**Example 2.** We can now extend our previous example a bit, and assign a value to `u`:
-```rust
+```rust,ignore
fn foo<A, B>(a: A, vec_b: Option<B>) where A: Borrow<B> { }
fn main() {
@@ -210,11 +226,15 @@ Let's suppose that the type checker decides to revisit the Borrow`. `?U` is no longer an unbound inference variable; it now has a value, `Vec<u32>`. So, if we "refresh" the query with that value, we get:
- Vec<u32>: Borrow<Vec<u32>>
+```text
+Vec<u32>: Borrow<Vec<u32>>
+```
This time, there is only one impl that applies, the reflexive impl:
- impl<T> Borrow<T> for T where T: ?Sized
+```text
+impl<T> Borrow<T> for T where T: ?Sized
+```
Therefore, the trait checker will answer:
diff --git a/src/traits-canonicalization.md b/src/traits-canonicalization.md
index 9f3af36c9..fa39151d7 100644
--- a/src/traits-canonicalization.md
+++ b/src/traits-canonicalization.md
@@ -42,14 +42,18 @@ This query contains two unbound variables, but it also contains the lifetime `'static`. The trait system generally ignores all lifetimes and treats them equally, so when canonicalizing, we will *also* replace any [free lifetime](./appendix-background.html#free-vs-bound) with a
-canonical variable. Therefore, we get the following result:
+canonical variable. Therefore, we get the following result:
- ?0: Foo<'?1, ?2>
-
-Sometimes we write this differently, like so:
+```text
+?0: Foo<'?1, ?2>
+```
+
+Sometimes we write this differently, like so:
+
+```text
+for<T, L, T> { ?0: Foo<'?1, ?2> }
+```
- for<T, L, T> { ?0: Foo<'?1, ?2> }
-
This `for<>` gives some information about each of the canonical variables within. In this case, each `T` indicates a type variable, so `?0` and `?2` are types; the `L` indicates a lifetime variable, so
@@ -57,8 +61,10 @@ so `?0` and `?2` are types; the `L` indicates a lifetime variable, so `CanonicalVarValues` array OV with the "original values" for each canonicalized variable:
- [?A, 'static, ?B]
-
+```text
+[?A, 'static, ?B]
+```
+
We'll need this vector OV later, when we process the query response.
## Executing the query
@@ -70,18 +76,24 @@ we create a substitution S from the canonical form containing a fresh inference variable (of suitable kind) for each canonical variable. So, for our example query:
- for<T, L, T> { ?0: Foo<'?1, ?2> }
+```text
+for<T, L, T> { ?0: Foo<'?1, ?2> }
+```
the substitution S might be:
- S = [?A, '?B, ?C]
-
+```text
+S = [?A, '?B, ?C]
+```
+
We can then replace the bound canonical variables (`?0`, etc) with these inference variables, yielding the following fully instantiated query:
- ?A: Foo<'?B, ?C>
-
+```text
+?A: Foo<'?B, ?C>
+```
+
Remember that substitution S though! We're going to need it later. OK, now that we have a fresh inference context and an instantiated
@@ -93,7 +105,7 @@ created. For example, if there were only one impl of `Foo`, like so:
[cqqr]: ./traits-canonical-queries.html#query-response
-```
+```rust,ignore
impl<'a, X> Foo<'a, X> for Vec<X>
where X: 'a
{ ... }
@@ -123,39 +135,49 @@ result substitution `var_values`, and some region constraints. To create this, we wind up re-using the substitution S that we created when first instantiating our query. To refresh your memory, we had a query
- for<T, L, T> { ?0: Foo<'?1, ?2> }
+```text
+for<T, L, T> { ?0: Foo<'?1, ?2> }
+```
for which we made a substitution S:
- S = [?A, '?B, ?C]
-
+```text
+S = [?A, '?B, ?C]
+```
+
We then did some work which unified some of those variables with other things. If we "refresh" S with the latest results, we get:
- S = [Vec<?E>, '?D, ?E]
-
+```text
+S = [Vec<?E>, '?D, ?E]
+```
+
These are precisely the new values for the three input variables from our original query. Note though that they include some new variables (like `?E`). We can make those go away by canonicalizing again!
We don't just canonicalize S, though, we canonicalize the whole query response QR:
- QR = {
- certainty: Proven, // or whatever
- var_values: [Vec<?E>, '?D, ?E] // this is S
- region_constraints: [?E: '?D], // from the impl
- value: (), // for our purposes, just (), but
- // in some cases this might have
- // a type or other info
- }
+```text
+QR = {
+ certainty: Proven, // or whatever
+ var_values: [Vec<?E>, '?D, ?E] // this is S
+ region_constraints: [?E: '?D], // from the impl
+ value: (), // for our purposes, just (), but
+ // in some cases this might have
+ // a type or other info
+}
+```
The result would be as follows:
- Canonical(QR) = for<T, L, T> {
- certainty: Proven,
- var_values: [Vec<?2>, '?1, ?2]
- region_constraints: [?2: '?1],
- value: (),
- }
+```text
+Canonical(QR) = for<T, L, T> {
+ certainty: Proven,
+ var_values: [Vec<?2>, '?1, ?2]
+ region_constraints: [?2: '?1],
+ value: (),
+}
+```
(One subtle point: when we canonicalize the query **result**, we do not use any special treatment for free lifetimes. Note that both
@@ -172,20 +194,26 @@ In the previous section we produced a canonical query result. We now have to apply that result in our original context. If you recall, way back in the beginning, we were trying to prove this query:
- ?A: Foo<'static, ?B>
-
+```text
+?A: Foo<'static, ?B>
+```
+
We canonicalized that into this:
- for<T, L, T> { ?0: Foo<'?1, ?2> }
+```text
+for<T, L, T> { ?0: Foo<'?1, ?2> }
+```
and now we got back a canonical response:
- for<T, L, T> {
- certainty: Proven,
- var_values: [Vec<?2>, '?1, ?2]
- region_constraints: [?2: '?1],
- value: (),
- }
+```text
+for<T, L, T> {
+ certainty: Proven,
+ var_values: [Vec<?2>, '?1, ?2]
+ region_constraints: [?2: '?1],
+ value: (),
+}
+```
We now want to apply that response to our context. Conceptually, how we do that is to (a) instantiate each of the canonical variables in the result with a fresh inference variable, (b) unify the values in the result with the original values, and then (c) record the region constraints for later.
Doing step (a) would yield a result of -``` +```text { certainty: Proven, var_values: [Vec<?C>, '?D, ?C] ^^ ^^^ fresh inference variables region_constraints: [?C: '?D], value: (), -} +} ``` Step (b) would then unify: -``` +```text ?A with Vec<?C> 'static with '?D ?B with ?C diff --git a/src/traits-goals-and-clauses.md b/src/traits-goals-and-clauses.md index 8e5e82d01..5e8ee1469 100644 --- a/src/traits-goals-and-clauses.md +++ b/src/traits-goals-and-clauses.md @@ -12,22 +12,24 @@ a few new superpowers. In Rust's solver, **goals** and **clauses** have the following forms (note that the two definitions reference one another): - Goal = DomainGoal // defined in the section below - | Goal && Goal - | Goal || Goal - | exists<K> { Goal } // existential quantification - | forall<K> { Goal } // universal quantification - | if (Clause) { Goal } // implication - | true // something that's trivially true - | ambiguous // something that's never provable - - Clause = DomainGoal - | Clause :- Goal // if can prove Goal, then Clause is true - | Clause && Clause - | forall<K> { Clause } - - K = <type> // a "kind" - | <lifetime> +```text +Goal = DomainGoal // defined in the section below + | Goal && Goal + | Goal || Goal + | exists<K> { Goal } // existential quantification + | forall<K> { Goal } // universal quantification + | if (Clause) { Goal } // implication + | true // something that's trivially true + | ambiguous // something that's never provable + +Clause = DomainGoal + | Clause :- Goal // if can prove Goal, then Clause is true + | Clause && Clause + | forall<K> { Clause } + +K = <type> // a "kind" + | <lifetime> +``` The proof procedure for these sorts of goals is actually quite straightforward. Essentially, it's a form of depth-first search. The @@ -47,8 +49,10 @@ To define the set of *domain goals* in our system, we need to first introduce a few simple formulations. 
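The depth-first proof procedure for the goal/clause grammar above can be sketched in a few lines. This is a toy propositional version (no inference variables, quantifiers, or implication; all names are invented for illustration), but it captures the shape of the search:

```rust
// Toy propositional version of the Goal/Clause grammar; illustrative only.
enum Goal {
    Domain(&'static str),      // an opaque DomainGoal, e.g. "Clone(usize)"
    And(Box<Goal>, Box<Goal>), // Goal && Goal
    Or(Box<Goal>, Box<Goal>),  // Goal || Goal
    True,
}

struct Clause {
    head: &'static str,
    body: Vec<Goal>, // conjunction of subgoals; empty body means a fact
}

/// Depth-first search: a domain goal is proven if some clause's head
/// matches and every goal in that clause's body can be proven in turn.
/// (No cycle detection here, so only acyclic programs terminate.)
fn prove(goal: &Goal, clauses: &[Clause]) -> bool {
    match goal {
        Goal::True => true,
        Goal::And(a, b) => prove(a, clauses) && prove(b, clauses),
        Goal::Or(a, b) => prove(a, clauses) || prove(b, clauses),
        Goal::Domain(name) => clauses
            .iter()
            .any(|c| c.head == *name && c.body.iter().all(|g| prove(g, clauses))),
    }
}

fn main() {
    let clauses = vec![
        Clause { head: "Clone(usize)", body: vec![] },
        Clause {
            head: "Clone(Vec<usize>)",
            body: vec![Goal::Domain("Clone(usize)")],
        },
    ];
    // Proven by recursing down to the fact Clone(usize).
    assert!(prove(&Goal::Domain("Clone(Vec<usize>)"), &clauses));
    // No applicable clause, so the proof fails.
    assert!(!prove(&Goal::Domain("Clone(Bar)"), &clauses));
}
```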
A **trait reference** consists of the name of a trait along with a suitable set of inputs P0..Pn: - TraitRef = P0: TraitName<P1..Pn> - +```text +TraitRef = P0: TraitName<P1..Pn> +``` + So, for example, `u32: Display` is a trait reference, as is `Vec<u32>: IntoIterator`. Note that Rust surface syntax also permits some extra things, like associated type bindings (`Vec<u32>: IntoIterator<Item = u32>`), that are not part of a trait reference. A **projection** consists of an associated item reference along with its inputs P0..Pm: - Projection = <P0 as TraitName<P1..Pn>>::AssocItem<Pn+1..Pm> +```text +Projection = <P0 as TraitName<P1..Pn>>::AssocItem<Pn+1..Pm> +``` Given that, we can define a `DomainGoal` as follows: - DomainGoal = Implemented(TraitRef) - | ProjectionEq(Projection = Type) - | Normalize(Projection -> Type) - | FromEnv(TraitRef) - | FromEnv(Projection = Type) - | WellFormed(Type) - | WellFormed(TraitRef) - | WellFormed(Projection = Type) - | Outlives(Type, Region) - | Outlives(Region, Region) +```text +DomainGoal = Implemented(TraitRef) + | ProjectionEq(Projection = Type) + | Normalize(Projection -> Type) + | FromEnv(TraitRef) + | FromEnv(Projection = Type) + | WellFormed(Type) + | WellFormed(TraitRef) + | WellFormed(Projection = Type) + | Outlives(Type, Region) + | Outlives(Region, Region) +``` - `Implemented(TraitRef)` -- true if the given trait is implemented for the given input types and lifetimes @@ -104,9 +112,11 @@ Given that, we can define a `DomainGoal` as follows: Most goals in our system are "inductive". In an inductive goal, circular reasoning is disallowed. Consider this example clause: +```text Implemented(Foo: Bar) :- Implemented(Foo: Bar). - +``` + Considered inductively, this clause is useless: if we are trying to prove `Implemented(Foo: Bar)`, we would then recursively have to prove `Implemented(Foo: Bar)`, and that cycle would continue ad infinitum @@ -130,8 +140,10 @@ struct Foo { The default rules for auto traits say that `Foo` is `Send` if the types of its fields are `Send`. 
Therefore, we would have a rule like - Implemented(Foo: Send) :- - Implemented(Option<Box<Foo>>: Send). +```text +Implemented(Foo: Send) :- + Implemented(Option<Box<Foo>>: Send). +``` As you can probably imagine, proving that `Option<Box<Foo>>: Send` is going to wind up circularly requiring us to prove that `Foo: Send` diff --git a/src/traits-lowering-module.md b/src/traits-lowering-module.md index c47b8fbe8..fbf1d6425 100644 --- a/src/traits-lowering-module.md +++ b/src/traits-lowering-module.md @@ -4,7 +4,7 @@ The program clauses described in the [lowering rules](./traits-lowering-rules.html) section are actually created in the [`rustc_traits::lowering`][lowering] module. -[lowering]: https://github.com/rust-lang/rust/tree/master/src/librustc_traits/lowering.rs +[lowering]: https://doc.rust-lang.org/nightly/nightly-rustc/rustc_traits/lowering/ ## The `program_clauses_for` query @@ -26,7 +26,7 @@ Unit tests are located in [`src/test/ui/chalkify`][chalkify]. A good example test is [the `lower_impl` test][lower_impl]. At the time of this writing, it looked like this: -```rust +```rust,ignore #![feature(rustc_attrs)] trait Foo { } @@ -49,7 +49,7 @@ standard [ui test] mechanisms to check them. In this case, there is a need only be a prefix of the error), but [the stderr file] contains the full details: -``` +```text error: Implemented(T: Foo) :- ProjectionEq(<T as std::iter::Iterator>::Item == i32), TypeOutlives(T \ : 'static), Implemented(T: std::iter::Iterator), Implemented(T: std::marker::Sized). --> $DIR/lower_impl.rs:15:1 diff --git a/src/traits-lowering-rules.md b/src/traits-lowering-rules.md index 544ae41b3..f7433221e 100644 --- a/src/traits-lowering-rules.md +++ b/src/traits-lowering-rules.md @@ -67,7 +67,7 @@ but Chalk isn't modeling those right now. Given a trait definition -```rust +```rust,ignore trait Trait<P1..Pn> // P0 == Self where WC { @@ -87,10 +87,12 @@ relationships between different kinds of domain goals. 
The first such rule from the trait header creates the mapping between the `FromEnv` and `Implemented` predicates: - // Rule Implemented-From-Env - forall { - Implemented(Self: Trait) :- FromEnv(Self: Trait) - } +```text +// Rule Implemented-From-Env +forall { + Implemented(Self: Trait) :- FromEnv(Self: Trait) +} +``` @@ -101,17 +103,19 @@ The next few clauses have to do with implied bounds (see also [RFC 2089]: https://rust-lang.github.io/rfcs/2089-implied-bounds.html - // Rule Implied-Bound-From-Trait - // - // For each where clause WC: - forall { - FromEnv(WC) :- FromEnv(Self: Trait { + FromEnv(WC) :- FromEnv(Self: Trait // P0 == Self where WC { @@ -190,39 +194,43 @@ where WC We will produce a number of program clauses. The first two define the rules by which `ProjectionEq` can succeed; these two clauses are discussed -in detail in the [section on associated types](./traits-associated-types.html),, +in detail in the [section on associated types](./traits-associated-types.html), but reproduced here for reference: - // Rule ProjectionEq-Normalize - // - // ProjectionEq can succeed by normalizing: - forall { - ProjectionEq(>::AssocType = U) :- - Normalize(>::AssocType -> U) - } - - // Rule ProjectionEq-Skolemize - // - // ProjectionEq can succeed by skolemizing, see "associated type" - // chapter for more: - forall { - ProjectionEq( - >::AssocType = - (Trait::AssocType) - ) :- - // But only if the trait is implemented, and the conditions from - // the associated type are met as well: - Implemented(Self: Trait) - && WC1 - } +```text + // Rule ProjectionEq-Normalize + // + // ProjectionEq can succeed by normalizing: + forall { + ProjectionEq(>::AssocType = U) :- + Normalize(>::AssocType -> U) + } + + // Rule ProjectionEq-Skolemize + // + // ProjectionEq can succeed by skolemizing, see "associated type" + // chapter for more: + forall { + ProjectionEq( + >::AssocType = + (Trait::AssocType) + ) :- + // But only if the trait is implemented, and the conditions from + // 
the associated type are met as well: + Implemented(Self: Trait) + && WC1 + } +``` The next rule covers implied bounds for the projection. In particular, the `Bounds` declared on the associated type must be proven to hold to show that the impl is well-formed, and hence we can rely on them elsewhere. - // XXX how exactly should we set this up? Have to be careful; - // presumably this has to be a kind of `FromEnv` setup. +```text +// XXX how exactly should we set this up? Have to be careful; +// presumably this has to be a kind of `FromEnv` setup. +``` ### Lowering function and constant declarations @@ -234,7 +242,7 @@ values below](#constant-vals) for more details. Given an impl of a trait: -```rust +```rust,ignore impl Trait for A0 where WC { @@ -245,10 +253,12 @@ where WC Let `TraitRef` be the trait reference `A0: Trait`. Then we will create the following rules: - // Rule Implemented-From-Impl - forall { - Implemented(TraitRef) :- WC - } +```text +// Rule Implemented-From-Impl +forall { + Implemented(TraitRef) :- WC +} +``` In addition, we will lower all of the *impl items*. @@ -258,7 +268,7 @@ In addition, we will lower all of the *impl items*. Given an impl that contains: -```rust +```rust,ignore impl Trait for A0 where WC { @@ -268,13 +278,15 @@ where WC We produce the following rule: - // Rule Normalize-From-Impl - forall { - forall { - Normalize(>::AssocType -> T) :- - WC && WC1 - } - } +```text +// Rule Normalize-From-Impl +forall { + forall { + Normalize(>::AssocType -> T) :- + WC && WC1 + } +} +``` Note that `WC` and `WC1` both encode where-clauses that the impl can rely on. diff --git a/src/traits-lowering-to-logic.md b/src/traits-lowering-to-logic.md index 4e4e7cae9..54b3473d4 100644 --- a/src/traits-lowering-to-logic.md +++ b/src/traits-lowering-to-logic.md @@ -30,7 +30,7 @@ impl Clone for Vec where T: Clone { } We could map these declarations to some Horn clauses, written in a Prolog-like notation, as follows: -``` +```text Clone(usize). 
Clone(Vec<?T>) :- Clone(?T). @@ -51,18 +51,18 @@ so by applying the rules recursively: - `Clone(Vec<Vec<usize>>)` is provable if: - `Clone(Vec<usize>)` is provable if: - `Clone(usize)` is provable. (Which it is, so we're all good.) - + But now suppose we tried to prove that `Clone(Vec<Bar>)`. This would fail (after all, I didn't give an impl of `Clone` for `Bar`): - `Clone(Vec<Bar>)` is provable if: - `Clone(Bar)` is provable. (But it is not, as there are no applicable rules.) - + We can easily extend the example above to cover generic traits with more than one input type. So imagine the `Eq` trait, which declares that `Self` is equatable with a value of type `T`: -```rust +```rust,ignore trait Eq<T> { ... } impl Eq<usize> for usize { } impl<T: Eq<U>, U> Eq<Vec<U>> for Vec<T> { } @@ -70,12 +70,12 @@ impl<T: Eq<U>, U> Eq<Vec<U>> for Vec<T> { } That could be mapped as follows: -``` +```text Eq(usize, usize). Eq(Vec<?T>, Vec<?U>) :- Eq(?T, ?U). ``` -So far so good. +So far so good. ## Type-checking normal functions @@ -90,7 +90,7 @@ that we need to prove, and those come from type-checking. Consider type-checking the function `foo()` here: -```rust +```rust,ignore fn foo() { bar::<usize>() } fn bar<U: Eq<U>>() { } ``` @@ -105,7 +105,7 @@ If we wanted, we could write a Prolog predicate that defines the conditions under which `bar()` can be called. We'll say that those conditions are called being "well-formed": -``` +```text barWellFormed(?U) :- Eq(?U, ?U). ``` @@ -113,7 +113,7 @@ Then we can say that `foo()` type-checks if the reference to `bar::<usize>` (that is, `bar()` applied to the type `usize`) is well-formed: -``` +```text fooTypeChecks :- barWellFormed(usize). ``` @@ -122,7 +122,7 @@ If we try to prove the goal `fooTypeChecks`, it will succeed: - `fooTypeChecks` is provable if: - `barWellFormed(usize)`, which is provable if: - `Eq(usize, usize)`, which is provable because of an impl. - + Ok, so far so good. Let's move on to type-checking a more complex function. 
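The recursive proof search just described is exactly what the compiler performs when it checks a `T: Clone` bound. A self-contained illustration (the function name `clone_it` is invented, not from the guide):

```rust
// Calling this function obligates the compiler to prove Clone(T);
// with T = Vec<Vec<usize>> the search recurses down to Clone(usize).
fn clone_it<T: Clone>(x: &T) -> T {
    x.clone()
}

fn main() {
    let nested: Vec<Vec<usize>> = vec![vec![1, 2], vec![3]];
    // Type-checks because Clone(Vec<Vec<usize>>) is provable.
    assert_eq!(clone_it(&nested), nested);

    // By contrast, a type with no Clone impl fails the bound at
    // compile time, mirroring the failed proof above:
    // struct Bar;
    // clone_it(&Bar); // error: Bar does not implement Clone
}
```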
## Type-checking generic functions: beyond Horn clauses @@ -134,7 +134,7 @@ a generic function, it turns out we need a stronger notion of goal than Prolog can provide. To see what I'm talking about, let's revamp our previous example to make `foo` generic: -```rust +```rust,ignore fn foo<T: Eq<T>>() { bar::<T>() } fn bar<U: Eq<U>>() { } ``` @@ -144,7 +144,7 @@ To type-check the body of `foo`, we need to be able to hold the type type-safe *for all types `T`*, not just for some specific type. We might express this like so: -``` +```text fooTypeChecks :- // for all types T... forall<T> { diff --git a/src/ty.md b/src/ty.md index 1fd86a6bd..44017dd5b 100644 --- a/src/ty.md +++ b/src/ty.md @@ -10,7 +10,7 @@ The `tcx` ("typing context") is the central data structure in the compiler. It is the context that you use to perform all manner of queries. The struct `TyCtxt` defines a reference to this shared context: -```rust +```rust,ignore tcx: TyCtxt<'a, 'gcx, 'tcx> // -- ---- ---- // | | | @@ -47,7 +47,7 @@ for the `'gcx` and `'tcx` parameters of `TyCtxt`. Just to be a touch confusing, we tend to use the name `'tcx` in such contexts. Here is an example: -```rust +```rust,ignore fn not_in_inference<'a, 'tcx>(tcx: TyCtxt<'a, 'tcx, 'tcx>, def_id: DefId) { // ---- ---- // Using the same lifetime here asserts @@ -59,7 +59,7 @@ fn not_in_inference<'a, 'tcx>(tcx: TyCtxt<'a, 'tcx, 'tcx>, def_id: DefId) { In contrast, if we want code that can be usable during type inference, then you need to declare a distinct `'gcx` and `'tcx` lifetime parameter: -```rust +```rust,ignore fn maybe_in_inference<'a, 'gcx, 'tcx>(tcx: TyCtxt<'a, 'gcx, 'tcx>, def_id: DefId) { // ---- ---- // Using different lifetimes here means that @@ -74,7 +74,7 @@ Rust types are represented using the `Ty<'tcx>` defined in the `ty` module (not to be confused with the `Ty` struct from [the HIR]). 
This is in fact a simple type alias for a reference with `'tcx` lifetime: -```rust +```rust,ignore pub type Ty<'tcx> = &'tcx TyS<'tcx>; ``` @@ -89,7 +89,7 @@ the rustc arenas (never e.g. on the stack). One common operation on types is to **match** and see what kinds of types they are. This is done by doing `match ty.sty`, sort of like this: -```rust +```rust,ignore fn test_type<'tcx>(ty: Ty<'tcx>) { match ty.sty { ty::TyArray(elem_ty, len) => { ... } @@ -111,7 +111,7 @@ To allocate a new type, you can use the various `mk_` methods defined on the `tcx`. These have names that correspond mostly to the various kinds of type variants. For example: -```rust +```rust,ignore let array_ty = tcx.mk_array(elem_ty, len * 2); ``` @@ -158,7 +158,7 @@ module. Here are a few examples: Although there is no hard and fast rule, the `ty` module tends to be used like so: -```rust +```rust,ignore use ty::{self, Ty, TyCtxt}; ``` diff --git a/src/type-checking.md b/src/type-checking.md index 8148ef627..cb6d346e4 100644 --- a/src/type-checking.md +++ b/src/type-checking.md @@ -17,9 +17,9 @@ similar conversions for where-clauses and other bits of the function signature. To try and get a sense for the difference, consider this function: -```rust +```rust,ignore struct Foo { } -fn foo(x: Foo, y: self::Foo) { .. } +fn foo(x: Foo, y: self::Foo) { ... } // ^^^ ^^^^^^^^^ ``` @@ -39,6 +39,6 @@ type *checking*). For more details, see the [`collect`][collect] module. [queries]: query.html -[collect]: https://github.com/rust-lang/rust/blob/master/src/librustc_typeck/collect.rs +[collect]: https://doc.rust-lang.org/nightly/nightly-rustc/rustc_typeck/collect/ **TODO**: actually talk about type checking... diff --git a/src/type-inference.md b/src/type-inference.md index 45f9df18a..152bbd9da 100644 --- a/src/type-inference.md +++ b/src/type-inference.md @@ -21,7 +21,7 @@ signature, such as the `'a` in `for<'a> fn(&'a u32)`. 
A region is You create and "enter" an inference context by doing something like the following: -```rust +```rust,ignore tcx.infer_ctxt().enter(|infcx| { // Use the inference context `infcx` here. }) @@ -88,7 +88,7 @@ The most basic operations you can perform in the type inferencer is recommended way to add an equality constraint is to use the `at` method, roughly like so: -```rust +```rust,ignore infcx.at(...).eq(t, u); ``` @@ -159,7 +159,9 @@ is to first "generalize" `&'a i32` into a type with a region variable: `&'?b i32`, and then unify `?T` with that (`?T = &'?b i32`). We then relate this new variable with the original bound: - &'?b i32 <: &'a i32 +```text +&'?b i32 <: &'a i32 +``` This will result in a region constraint (see below) of `'?b: 'a`. @@ -176,12 +178,16 @@ eagerly unifying things, we simply collect constraints as we go, but make (almost) no attempt to solve regions. These constraints have the form of an "outlives" constraint: - 'a: 'b +```text +'a: 'b +``` Actually the code tends to view them as a subregion relation, but it's the same idea: - 'b <= 'a +```text +'b <= 'a +``` (There are various other kinds of constraints, such as "verifys"; see the `region_constraints` module for details.) @@ -189,7 +195,9 @@ the `region_constraints` module for details.) There is one case where we do some amount of eager unification. If you have an equality constraint between two regions - 'a = 'b +```text +'a = 'b +``` we will record that fact in a unification table. You can then use `opportunistic_resolve_var` to convert `'b` to `'a` (or vice diff --git a/src/variance.md b/src/variance.md index 16b4a7518..527c2745c 100644 --- a/src/variance.md +++ b/src/variance.md @@ -54,7 +54,7 @@ constraints will be satisfied. 
As a simple example, consider: -```rust +```rust,ignore enum Option<A> { Some(A), None } enum OptionalFn<B> { Some(|B|), None } enum OptionalMap<C> { Some(|C| -> C), None } @@ -62,19 +62,23 @@ enum OptionalMap<C> { Some(|C| -> C), None } Here, we will generate the constraints: - 1. V(A) <= + - 2. V(B) <= - - 3. V(C) <= + - 4. V(C) <= - +```text +1. V(A) <= + +2. V(B) <= - +3. V(C) <= + +4. V(C) <= - +``` These indicate that (1) the variance of A must be at most covariant; (2) the variance of B must be at most contravariant; and (3, 4) the variance of C must be at most covariant *and* contravariant. All of these results are based on a variance lattice defined as follows: - * Top (bivariant) - - + - o Bottom (invariant) +```text + * Top (bivariant) +- + + o Bottom (invariant) +``` Based on this lattice, the solution `V(A)=+`, `V(B)=-`, `V(C)=o` is the optimal solution. Note that there is always a naive solution which @@ -85,8 +89,10 @@ is that the variance of a use site may itself be a function of the variance of other type parameters. In full generality, our constraints take the form: - V(X) <= Term - Term := + | - | * | o | V(X) | Term x Term +```text +V(X) <= Term +Term := + | - | * | o | V(X) | Term x Term +``` Here the notation `V(X)` indicates the variance of a type/region parameter `X` with respect to its defining class. `Term x Term` @@ -101,7 +107,7 @@ represents the "variance transform" as defined in the paper: If I have a struct or enum with where clauses: -```rust +```rust,ignore struct Foo { ... } ``` @@ -170,9 +176,11 @@ another. To see what I mean, consider a trait like so: - trait ConvertTo<A> { - fn convertTo(&self) -> A; - } +```rust +trait ConvertTo<A> { + fn convertTo(&self) -> A; +} +``` Intuitively, if we had one object `O=&ConvertTo<Object>` and another `S=&ConvertTo<String>`, then `S <: O` because `String <: Object` @@ -200,20 +208,24 @@ But traits aren't only used with objects. They're also used when deciding whether a given impl satisfies a given trait bound. 
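The covariant and contravariant cases described above can be observed directly in compiling Rust. A self-contained illustration (function names are invented; this is not compiler code): references `&'a T` are covariant in `'a`, while function pointers are contravariant in their argument types.

```rust
// Covariance: &'static str may be used where &'a str is expected.
fn covariant<'a>(x: &'a str) -> usize {
    x.len()
}

// Contravariance: a fn accepting *any* lifetime may be used where a
// fn accepting only &'static str is expected.
fn contravariant(f: fn(&'static str) -> usize) -> usize {
    f("hi")
}

fn main() {
    // &'static str <: &'a str (covariance of &T in its lifetime).
    let s: &'static str = "hello";
    assert_eq!(covariant(s), 5);

    // for<'a> fn(&'a str) <: fn(&'static str) (contravariance of fn
    // pointers in their argument types).
    let f: for<'a> fn(&'a str) -> usize = covariant;
    assert_eq!(contravariant(f), 2);
}
```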
To set the scene here, imagine I had a function: - fn convertAll<A, T: ConvertTo<A>>(v: &[T]) { - ... - } +```rust,ignore +fn convertAll<A, T: ConvertTo<A>>(v: &[T]) { ... } +``` Now imagine that I have an implementation of `ConvertTo` for `Object`: - impl ConvertTo<i32> for Object { ... } +```rust,ignore +impl ConvertTo<i32> for Object { ... } +``` And I want to call `convertAll` on an array of strings. Suppose further that for whatever reason I specifically supply the value of `String` for the type parameter `T`: - let mut vector = vec!["string", ...]; - convertAll::<i32, String>(vector); +```rust,ignore +let mut vector = vec!["string", ...]; +convertAll::<i32, String>(vector); +``` Is this legal? To put another way, can we apply the `impl` for `Object` to the type `String`? The answer is yes, but to see why @@ -222,11 +234,9 @@ we have to expand out what will happen: - `convertAll` will create a pointer to one of the entries in the vector, which will have type `&String` - It will then call the impl of `convertTo()` that is intended - for use with objects. This has the type: - - fn(self: &Object) -> i32 + for use with objects. This has the type `fn(self: &Object) -> i32`. - It is ok to provide a value for `self` of type `&String` because + It is OK to provide a value for `self` of type `&String` because `&String <: &Object`. OK, so intuitively we want this to be legal, so let's bring this back @@ -238,11 +248,15 @@ Maybe it's helpful to think of a dictionary-passing implementation of type classes. In that case, `convertAll()` takes an implicit parameter representing the impl. In short, we *have* an impl of type: - V_O = ConvertTo<i32> for Object +```text +V_O = ConvertTo<i32> for Object +``` and the function prototype expects an impl of type: - V_S = ConvertTo<i32> for String +```text +V_S = ConvertTo<i32> for String +``` As with any argument, this is legal if the type of the value given (`V_O`) is a subtype of the type expected (`V_S`). So is `V_O <: V_S`? The answer will depend on the variance of the various parameters. 
In this case, because the `Self` parameter is contravariant and `A` is covariant, it means that: - V_O <: V_S iff - i32 <: i32 - String <: Object +```text +V_O <: V_S iff + i32 <: i32 + String <: Object +``` These conditions are satisfied and so we are happy. @@ -263,7 +279,9 @@ expressions -- must be invariant with respect to all of their inputs. To see why this makes sense, consider what subtyping for a trait reference means: - <T as Trait> <: <U as Trait> +```text +<T as Trait> <: <U as Trait> +``` means that if I know that `T as Trait`, I also know that `U as Trait`. Moreover, if you think of it as dictionary passing style, @@ -279,7 +297,7 @@ Another related reason is that if we didn't make traits with associated types invariant, then projection is no longer a function with a single result. Consider: -``` +```rust,ignore trait Identity { type Out; fn foo(&self); } impl<T> Identity for T { type Out = T; ... } ``` @@ -287,9 +305,11 @@ impl<T> Identity for T { type Out = T; ... } Now if I have `<&'static () as Identity>::Out`, this can be validly derived as `&'a ()` for any `'a`: - <&'a () as Identity> <: <&'static () as Identity> - if &'static () <: &'a () -- Identity is contravariant in Self - if 'static : 'a -- Subtyping rules for relations +```text +<&'a () as Identity> <: <&'static () as Identity> +if &'static () <: &'a () -- Identity is contravariant in Self +if 'static : 'a -- Subtyping rules for relations +``` This change on the other hand means that `<&'static () as Identity>::Out` is always `&'static ()` (which might then be upcast to `&'a ()`,