Rust in 2016 (rust-lang.org)
371 points by aturon on Aug 14, 2015 | hide | past | favorite | 149 comments


I had the opportunity to work with Rust >1.0 recently, implementing an image processing algorithm. Knowing that the compiler was looking out for me, a wingman of sorts, was quite the pleasant experience. When coding in C I have a very paranoid mentality, constantly questioning every line of code and its impact on program state/memory. It results in my C code being almost always free of memory related bugs, but the work is absolutely _exhausting_. Rust was great in this regard, dramatically reducing the amount of mental capacity expended while coding. Either the compiler would catch the bugs, or worst-case a run-time assert would catch it and point me directly to the problem.

The major criticism I came away with, due in part to the type of program I was coding, was for Rust's lack of implicit type casting (more specifically, widening). What I mean is, adding a u8 and a u16 is an error in Rust. Rust will refuse to implicitly cast the u8 to a u16. These situations came up very frequently while implementing my program because I had to do a lot of optimized, low-level math. The scattering of type casts throughout the program resulted in clutter without any obvious benefit.
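A minimal sketch of what this looks like in practice (the values are illustrative):

```rust
fn main() {
    let a: u8 = 200;
    let b: u16 = 1000;
    // let sum = a + b; // error[E0308]: mismatched types
    let sum = a as u16 + b; // the widening cast must be written out
    println!("{}", sum); // 1200
}
```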

When I looked into the problem, the arguments I saw against it were often explanations that Rust is meant to be explicit and non-magical. But Rust, for example, already has type inference which I classify as "magical". Implicit type widening is hardly magical. And I don't see how it would be confusing or result in bugs, as long as only safe widening is done implicitly.

I think those involved in the Rust project were just scared off from it because of C's bizarre implicit type casting rules which result in bugs for typical programmers. I can understand that, but it's not like it can't be done better in Rust. Besides, if Rust is meant to be a system programming language, won't math between differing types come up often? And wouldn't handling those cases gracefully be a boon to productiveness in Rust?


I definitely agree, this is a pain point in Rust today, and it's something the core team would like to address. You might glance at the thread here for some of the tradeoffs: https://internals.rust-lang.org/t/implicit-widening-polymorp...

If we can reach consensus on a design, such a feature can be easily implemented. I'd love to see an RFC on the topic (https://github.com/rust-lang/rfcs)!


Here's why you might not want to have type conversion. I ran into this today:

    class String {
      int find(char needle, int startIdx);
    };

I ended up calling:

   int idx = find(startIdx, ';');
C++ happily converted my char to an int, and my int to a char, no warning or anything. Surprisingly, the code appeared to work for months, and then when I upgraded Xcode to the beta, it started failing. Weird.

I end up casting stuff when I do math anyway, because I can never remember whether 'int op float' and 'float op int' result in an int or a float, and every time I don't it ends up as a truncated int when I wanted a float in [0, 1]. So, a little more painful to have to cast stuff around, but at least you find out quickly about my find() bug and you don't end up with something like

   if (secondsFromMidnightInFloat / 86400 > .5) {...}
always fails. I can't tell you how much time I've wasted tracking something like that down, despite my diligence in adding (float). That and that stupid "->" for pointers drive me nuts. Even more than header files.


From the discussion linked[1] in another reply at your level, I believe the proposed automatic type conversions are only for when there's no data loss, and there needs to be explicit conversions whenever loss of information is possible. That is, it's implicit for i32 -> u32, or i32 -> i64, but not for something like i32 -> i8 (such as the int to char conversion you mentioned), which would need to be explicitly converted.

1: https://internals.rust-lang.org/t/implicit-widening-polymorp...


i32 -> u32 is not lossless, since u32 cannot represent negative numbers. And indeed that's not listed in the implicit conversion table in the page you linked to.
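To illustrate the point (a small sketch): an `as` cast from i32 to u32 reinterprets the bits rather than preserving the value, so negative numbers are silently lost:

```rust
fn main() {
    let x: i32 = -1;
    let y = x as u32; // not lossless: the sign is gone
    println!("{}", y); // 4294967295
}
```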


You're right, I wasn't thinking clearly about that. Thanks.


The first case will be detected by -Wconversion, however you have to enable it.

For the second case, correct me if I'm wrong, but I think both float / int and int / float have a float result. In fact, pretty much all operations involving an int and a float will have a float result.


How to address this problem?

    fn foo(x: u32) { }
    let x: u16;
    let y: u16;
    foo(x * y);
`x` and `y` would have to be multiplied before they are widened to `u32`. This creates an overflow bug that may only be visible in the released code (release mode has arithmetic overflow checking off by default).
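A sketch of the hazard (the concrete values are mine, chosen for illustration): if the u16 multiplication happens before the widening, the product wraps, whereas widening first preserves it:

```rust
// Hypothetical function mirroring the parent's `foo`.
fn foo(x: u32) -> u32 { x }

fn main() {
    let x: u16 = 300;
    let y: u16 = 300;
    // Multiplying at u16 first: 90_000 wraps to 24_464
    // (this is what release mode would silently do).
    assert_eq!(x.wrapping_mul(y), 24_464);
    // Widening before multiplying gives the intended result.
    assert_eq!(foo((x as u32) * (y as u32)), 90_000);
}
```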


A smart solution could be to have arithmetic operators automatically coerce the types to the type of their result, if the result type is already known.


Indeed, that is my intuition as well. This is the safest default behavior from the compiler, and even solves crazy expressions like:

    u64 = ((u8 + u16) * u32) / u8;
It's hard to reason about what a programmer would want that statement to do. Coercing everything to u64 is the safest option. The idea, though, is to allow the programmer to use explicit casts to define exactly what they want when the need arises. So:

    u64 = (((u8 + u16) as u16) * u32) / u8;
Would mean u16 addition, u64 multiplication and u64 division. So you get the benefit of safe, implicit type widening without losing the ability to micro-optimize when you want to.


The annoying thing is that (because of type inference) parts of the expression could be within different expressions:

    fn required_bytes(width: u16, height: u16) -> u64 {
        let size = width * height; // what's the type of size?
        size + 12
    }


True, type inference would make an ideal solution difficult. But I'm totally fine with the compiler failing to apply implicit casting when type inference is involved. This code is just as readable, if not better:

    fn required_bytes(width: u16, height: u16) -> u64 {
        let size: u64 = width * height;
        size + 12
    }
I just really don't want to have to write this all the time:

    fn required_bytes(width: u16, height: u16) -> u64 {
        let size = (width as u64) * (height as u64);
        size + 12
    }


I can sympathize, but the last thing I would want in a language is for benign-looking refactorings to change meaning. E.g.

    let a: u16;
    let b: u16;
    fn f(x: u32) -> ...

    f(a*b) // 32 bit result
    
    let x = a*b; // x is u16
    f(x)
Now there is a sane answer to this: define multiplication to always result in larger integer types, and require some explicit downcasting. But I'm not sure anyone will go for this.


If there is a type inferred, single character operator which can cast one numeric type to another, both sides can be happy.


This way of thinking leads to ASCII spaghetti as features are added over time.


Should be u16 at first (or u32 for overflow?) and promoted to u64 when returning from the function.


If the result type is already known and the result type is wider.


That same bug exists even if the function takes a u16.


It's not the same bug


In both cases the mistake happened when two 16-bit integers were multiplied and the result was truncated when stored to a 16-bit number. This is the point at which the information was lost, and this is the point that with different tradeoffs (note: I am not advocating for these alternatives) would either result in a 32-bit number or a runtime overflow check, leading there to no longer be a bug. The same bug would exist with a fully generic function, or even without a function at all. Passing a number that has already been truncated to a function that could have accepted a wider number is not a bug: the bug already happened.


Very interesting remark. How about: Disallow what you wrote, but allow

  fn foo(x: u32) { }
  let x: u16;
  foo(x);
This way you get the 'best of both world'?


As a hardware guy who has never touched rust, and I know nobody will agree with me: the solution is to not have type inference. You knew you wanted u16 or whatever to begin with. For low level programming I think explicitly defining type makes so much more sense.


Not having type inference is a non-starter in Rust. Lifetimes would be totally impractical, and you couldn't use closures.


I'm not sure that's the solution

  int8 x = 12;
  int8 y = 84;
  int16 z = x + y;       // ERROR: compiler is stupid


Wait, did you mean

    int16 z = (x + y) as int16;
or

    int16 z = (x as int16) + (y as int16);

?


I think with C# the addition is mostly or always promoted to an int. So the following gives an error.

    byte foo = 4;
    byte boo = 86;
    byte arglebargle = foo + boo; // cast monkey! cast!

I remember old assembly language guys annoyed at the types of math available with high level languages. I think they were a bit crusty but still they had a point about inability to manage precision without big hammers.

At least the C# way forces you to think about whether you're losing precision. But I dislike having to add casts. I use casts a lot, but I distrust them because they hide errors. I find myself really wanting safe operators and unsafe ones. Safe as in, overflow results in a hard fault that I can catch. Unsafe means overflow is silent.
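For what it's worth, Rust's standard library already exposes roughly this split on its integer types: `checked_*` methods report overflow via `Option`, while `wrapping_*` methods are explicitly, silently modular. A small sketch:

```rust
fn main() {
    let a: u8 = 200;
    let b: u8 = 100;
    // "Safe": overflow yields None instead of a silent wrap.
    assert_eq!(a.checked_add(b), None);
    // Explicitly silent: modular arithmetic, by request.
    assert_eq!(a.wrapping_add(b), 44);
}
```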


I think that although the commenter above used `as`, he meant it as pseudocode and not C#, especially since C# is rarely used for low-level or performance-critical stuff where you would use int8 instead of int16 for optimization (in my experience).


I was just using C# as an example of an alternative way of handling calculations.

The comment about assembly comes from remembering a conversation with an older firmware guy. In his world, multiplying two 32-bit numbers resulted in a 64-bit result. And division was 64 bits divided by 32 bits, yielding a 32-bit quotient plus a 32-bit remainder.

I think his thoughts on C's 32-bit number × 32-bit number => 32-bit result can be summed up in a single word: gah!


That depends largely on language semantics. Ideally, a language would either guarantee that overflows can't happen (via dependent/refined types), make sure that addition is `(int8, int8) -> int16`, or guarantee modular arithmetic. In any case, the second interpretation looks overall superior.


> As a hardware guy who has never touched rust, and I know nobody will agree with me: the solution is to not have type inference.

How is that a solution to OP's problem?


It's not a solution- it's saying that what the OP characterized as a problem is a feature, not a bug. I don't know if I agree (maybe, maybe not, maybe "it depends"), but that's the answer to your question.


As a Rust newbie wrapping a C API, I ran into a baffling error for something like wstr.s[0 .. wstr.len], where the s field is an array, and the len field is a u16.
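A reduced sketch of that error (hypothetical values standing in for the C struct's fields): slice indices must be `usize`, so the `u16` length needs an explicit cast:

```rust
fn main() {
    let s = [10u8, 20, 30, 40]; // stand-in for the C array field
    let len: u16 = 3;           // stand-in for the u16 len field
    // let slice = &s[0 .. len]; // error: expected usize, found u16
    let slice = &s[0 .. len as usize]; // indices must be usize
    println!("{:?}", slice); // [10, 20, 30]
}
```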

I like static typing with inference, but I'm still getting the hang of the Rust type system. In this case, a little leniency would have been nice.


I would easily pay 3x the Xamarin price for a Xamarin-like platform for Rust. And I'm pretty sure I'm not alone.

It's the perfect language for a mobile platform, and I would love to use the zero-runtime-cost abstractions without resorting to the C++ hand-grenade roulette. The fact that it has an ML heritage, with all the goodies that entails (ADTs, pattern matching, type inference, etc), is even better.


I think people overestimate Rust as an application programming language.

It has some nice features but at the end of the day their focus is safety above all else - this actually creeps up all over the place - requiring you to be explicit, APIs requiring you to deal with every possible error type (even the ones you don't really want to deal with), etc. Error handling was still very messy last time I checked.

You know what makes you productive with high-level (often dynamically typed) languages? Optimistic programming - you write some code and do the least amount of work to run it and see how far it gets you. Maybe a part of it fails under some condition, maybe some default assumption turns out to be wrong, etc. ... You can live with that because you got something working in a fraction of the time it would take to handle all the edge cases and type out all the bits and pieces up front. The sooner you get to run, the sooner you get to iterate.

And that's ignoring the cost to iteration time because of compile time.

Rust has its design goals - its trade-offs are decent for the context they are made in - but it isn't my idea of a high-productivity application programming language.


My very first use case for rust was to replace a ruby script that was running too slowly. The script did some free form text parsing and mathematical postprocessing, and then dumped out a huge load file for SQLite. It took me much longer to initially write than the ruby script...but after throwing in all the debugging and error handling after the fact with the ruby script, I spent about half as much time writing it in rust. And that was with 2 years experience with Ruby and zero experience with rust. There is a lot to be said about a language that is strict, strongly typed, and with an emphasis on safety. If I'm shipping a product with my brand name attached to it, I would so much rather catch easy bugs at compile time than run time.


You did the hard part (developing the solution to your problem) in a high productivity language and then ported it over to Rust and it still took you longer.

I mean you're completely ignoring the fact that Ruby lets you fire up the interpreter - write out some formulas to test if they work with some data - fix that logic and iterate over various stuff instantaneously. Consider how much time you would spend in Rust just recompiling for every iteration - not to mention dealing with a bunch of pointless typing and error handling as you're iterating over (or worse - ignoring all errors defeating their purpose in the first place).

There is making the thing correct, robust and maintainable, and there is getting something to work - and despite the common theme among dev forums, the latter is much more important, because without it you might never get to the first part.


There are higher-productivity, strongly/statically typed languages out there that are probably more reasonable for typical applications programming


With rust though, you get two things that you can't with the other strong/static languages: 1) extremely low resource usage (very helpful on a phone/tablet with limited resources), without an embedded runtime, and 2) native portability almost on the same level as C/C++.

I admit, I have higher productivity with Scala and F#, but the productivity gap isn't insurmountable. The borrow checker isn't a huge problem once you get used to it, and that seems to be the biggest hurdle that others have. The biggest thing I find lacking in rust is a compelling asynchronous IO solution. I would love an async/await capability, or even better, something along the lines of F#'s computation expressions. Mio is making progress, but is extremely immature in comparison.


Yup, put me in that camp as well.


It's going to happen, and it's going to be glorious!


Please tell me more!


OK, imagine...

You run a single command to instantiate a family of cross-platform Cargo projects, one for your platform-agnostic backend, and one for each of your target platforms: Linux, Windows, Mac, Android, iOS, of course, but also the entire impending IoT down to the smallest real-time device. Your platform-specific projects come with safe Rust bindings to the system platform (ala Xamarin).

Another single command builds and tests this from a single host system on an armada of virtual machines, emulators and cloud systems. Your tests run on a wide variety of standardized machine images, including those used for upstream Rust's own integration testing.

Another single command packages this for various Linuxes and app stores, deploys to your devices and cloud services.

Three commands, total world domination :)


Take my money!


How are you going to bridge between rust and the Android Java apis?


There's already quite a few JNI wrappers out there for pure-native activities.


But you need to automate it, you need something that generate JNI wrappers and that it generates Rust wrappers around it and vice versa.

With ObjC, you have similar problems and same goes for C#. It would be nice but I don't think it will happen anytime soon.


How is nobody talking about push-button cross-compilation? It could be huge! The only language that I'm aware of that does it at all well is C and that's because the only build tool that does it at all well is automake. But even then the library situation is very hit-and-miss. If rust nails this it's going to be awesome.


> push-button cross-compilation? It could be huge! The only language that I'm aware of that does it at all well is C

I'm not familiar with the term 'push-button', but Go also does cross-compilation very well.

(In fact, on Go, technically all compilations are cross-compilations; it just happens that the target platform and architecture may match the platform and architecture of the current machine.)

I'm not sure if that's still the case in 1.5 onwards, since I know they changed some stuff to make it even easier to target other platforms and architectures, but it was the case previously.


I'm pretty sure that the range of platforms for which Go does cross compilation "very well" is quite small. Go has a sizeable runtime, and you need to have a version of the runtime that works on each platform (for exotic platforms you would probably have to write some of the building blocks from scratch). This isn't push-button, this is just "the distribution supports many common platforms"

For example, it seems like compiling to MIPS is not push-button for Go. On the other hand, Rust supports all platforms that LLVM supports, though you need a version of the stdlib compiled for that platform (you don't have to write any code like in the Go case, you just have to compile the sources with the right invocation). For push-button compilation, we just need to distribute these binaries.


I imagine the vast majority of cross-compiles for the core tasks that Go targets will be to x86 or ARM, which it does support well (from everything I've heard).


Don't things get more complicated than that if you are expecting certain architecture features like SIMD?


I remember some comments here on HN a few months back (maybe longer) about how there isn't really an "ARM" to target, as since there's such diversity in what an ARM chip can provide that it's hard to make assumptions about what you can rely on from just "ARM". I believe it was in response to what is available to help with booting the system, but I imagine it extends into what exactly your runtime can rely on.


> I'm pretty sure that the range of platforms for which Go does cross compilation "very well" is quite small

Given that the number of popular platforms is approximately 2 (Intel and ARM), I don't see that as a huge problem.


Well, the OS and bit width matters too. {x86-32, x86-64, 32-bit ARM, AArch64} on {Windows, Mac, Linux, Android, iOS} ends up describing a decently large set of targets.

Also, you'd be surprised how often people request minor platforms. It's enough that we regularly get people asking for a C backend because they only have a C compiler for that target (though I think a C backend ends up usually not being what those people actually want).


Hm... I probably don't know enough about compilers, but why does Windows/Mac/... matter for a compiler (I see why it would matter for the standard library)? AFAIK, the main differences between these platforms are calling conventions (and possibly exception mechanisms), which LLVM should abstract away.


You're missing at least (a) structure padding/alignment; (b) debug info; (c) name mangling. LLVM abstracts over some of the differences but definitely not all.

In any case, just having a Rust compiler isn't very useful; you need a standard library (or at least libcore) to do anything interesting with the language, and that's where most of the porting effort comes in.


Actually, for compilers, platform is typically not just hardware, but the operating system as well. Windows and Linux are night-and-day different for major platform requirements (e.g., filesystems, I/O), and even Linux and FreeBSD can be important target differences in some platform areas.

So the number of platforms is approximately {x86-32, x86-64, ARM, AArch64}×{Windows, OS X, Linux, iOS, Android} (- a few combos) ~= 17 (I think). MIPS, Sparc, and PowerPC can be useful in some scenarios, as can even x86-16 (hey, wanna write the initial startup code on x86?), and platforms like PNaCl, Emscripten, or WebAssembly are effectively their own platforms as well.


Even PowerPC can be both 32-bit and 64-bit, and has operating systems like AIX and Wii U. I'm guessing one of those ports would be a bit harder.


And PPC has big and little endian variants, to add to the matrix.


You're missing PIC, AVR and quite a few other architectures that exist in the uC space(and Rust could be a great target for).


In that case Rust has exactly the same cross compilation story. You're lowering the bar for Go but not for Rust if you're making that comparison.


This indeed would be very cool! Right now things work relatively well with the biggest pain being the acquisition of std for your target.

If std and core could be made into first-class crates, then you could get rid of the need to find a pre-built version of libstd for your target. It would just be built when needed by a project.

This is what we currently do in zinc for core via the hack here: https://github.com/hackndev/zinc/blob/master/Cargo.toml#L27. Zinc doesn't use std (for obvious reasons). Right now, targeting a Raspberry Pi is much harder than it should be as a result of having to find an appropriate cross-compiled std.

That is, I want the steps for cross-compilation to be:

1. Execute `cargo build --target=foo`.

2. There is no step 2.


It would absolutely be huge! Writing some ARM firmware in Rust wasn't as straightforward as it could be.


As someone looking to use Rust primarily on the Raspberry Pi and other embedded systems, this is a big deal.


Maybe because many of us used cross compilers across multiple programming languages?


Glad to see the language evolve.

It feels great, once one gets used with the borrow checker messages.

One thing that could make the language better (and was mentioned in the post) is faster compilation.

Having programmed in Go, this may be one of its best points: just have a watcher that recompiles the program on change (and maybe runs the unit tests). Though it can be argued that not all types of programs benefit from such a workflow, it's still one of my favorite things.


Yes, compilation speed is one of the most important things for us. It's a lot harder for Rust than it is for Go, because Rust has a much more sophisticated suite of optimizations (necessary for Rust's goals) and zero-cost abstractions. You can do a lot better if you're willing to trade runtime performance for compilation speed, and we aren't. But I'm confident that we can get there with incremental compilation and more codegen/typechecking improvements.


There are plenty of situations where trading compilation speed for performance is useful as a workflow tool. Not every build needs to be optimized to fullest extent possible, either by Rust or by llvm.


Of course! But:

1. The language is designed for zero-cost abstraction, which means we have more work to do to make it compile fast. For example, any Rust compiler must solve subtyping constraints on lifetimes, regardless of the optimization level.

2. Getting an optimizing backend and a non-optimizing backend to work well is more work than just getting a non-optimizing backend to work well.


IIRC, you usually claim that the majority of the compilation time is actually LLVM optimizations, not Rust typechecking. If that's true, it should be trivial to simply turn off LLVM optimization passes for DEV builds.


Sure, turning off LLVM optimization passes helps a lot, usually speeding up the compile by more than 2x. Though note:

1. There are some technical issues in LLVM that prevent Rust's -O0 from being truly -O0 (LLVM's FastISel not supporting the invoke instruction being the main one here).

2. Go's compiler doesn't have much of an optimization IR at all. From what I understand, it compiles straight from the AST to Plan 9 assembly. The equivalent in Rust would be a backend that went straight from the AST to LLVM MachineInstrs (like the Baseline JIT in SpiderMonkey). Such a backend would be the ideal way to get fast whole-program compilation but would be a non-starter for optimization, so nobody has focused on it given how much work it would be. Incremental compilation would be a better use of people's time than maintaining an alternate backend, because only incremental compilation gives you algorithmic speedups (O(size of your crate) → O(size of the functions that changed since your last rebuild)).

It still takes longer to compile Rust code than many people would like. That's why it's being worked on.


When the MIR work is done creating a new translator would certainly be easier.


At -O0, Rust is _very_ slow. It's a common meme that people jump into IRC or the users' forum and ask "Why is this Rust code slower than Ruby?" Considering Rust is often chosen specifically for performance, this can sometimes make your application unusable.


Yeah, definitely. The compiler has a few features recognising this already, e.g. there's various levels of optimisation (0 through 3) and, more significantly, there's parallel codegen where the compiler will internally divide up a single crate into multiple compilation units and optimise/run code-generation on them in parallel (this reduces how much each compilation unit sees and so reduces optimisations like inlining etc.).


While not every build needs to be optimized to the fullest extent possible, every build needs to be checked for correctness to the fullest extent possible. And since Rust does so much more than Go in terms of correctness, it's probably always going to be a no-contest between the two.


These kinds of checks take up very little of the overall build-time. Frankly, everything is dwarfed by LLVM optimization passes, which usually takes up about 50% of the time itself.


I think it's overreaching to say very little.

It's certainly true that the Rust compiler does quite a lot of analysis (more than Go), e.g. non-trivial type inference and borrow checking. In fact, a no-op build of libcore takes 12s for me, with 5s in type checking, 0.8s in borrow checking and less than 3s interacting with LLVM (~1s of which are LLVM actually running). Turning on optimisations pushes the LLVM time out to 4s. Similarly, libstd takes 8s to build without optimisations and LLVM only runs for 1.5s (type checking itself is about the same).

(The plan to make the compiler more parallel and more incremental will improve these parts without affecting the quality of the generated code at all.)


Fair enough, and also, all of this changes every release, so it's possible the kinds of numbers I'm seeing are due to my projects and the time at which I last paid attention to this.


To be fair, core has lots of generics which don't get translated by LLVM at that stage.


> And you can do so in contexts you might not have before, dropping down from languages like Ruby or Python, making your first foray into systems programming.

I guess I'm one of those programmers who is quite alienated from systems programming - probably due to my daily work in Python / JS. The Rust lang book is quite good (great job @steveklabnik et al) but from my past experience I've found it easier to stay committed to learning a new programming language when I have a project that I can work on.

Can someone suggest a few "getting started" but useful systems programming projects that I can use as a test bed for learning Rust?


Try re-implementing some of the GNU Core Utils in Rust [0]

[0]: http://www.gnu.org/software/coreutils/coreutils.html


Or contribute to the project that's already doing so: https://github.com/uutils/coreutils/

(I myself wrote a little wc for fun a month ago, still missing some compatibility things https://github.com/steveklabnik/rwc/blob/master/src/main.rs )


When I read it, I think two things :

1- Great job. This is both innovative and powerful. Like the idea to test nightlies on every crate available on GitHub. I am sure no other language does it.

2- So many features may be a little overwhelming. Take specialization. It may be interesting, but I don't even understand what it is. And I am not a beginner anymore! Don't you fear that, by adding more and more features, Rust will become like the language it is aiming to replace (C++): a huge mess of features?

That being said, I am definitely a rust enthusiast (I bought the book https://www.kickstarter.com/projects/1712125778/rust-program...). Carry on !


  > Take specialization. It may be interesting, but I don't even
  > understand what it is. 
That's fair, it's hard to put a full motivation in these kinds of posts. The RFC contains the full proposal, as well as a motivations section that hopefully makes it more clear: https://github.com/rust-lang/rfcs/pull/1210

TL;DR: specialization lets you also implement a more specific version of something that's generic. This lets you take advantage of this extra detail for various ends, like making a particular implementation more efficient than a general one could be.


It's a very detailed but also complicated proposal. It seems like it aims at some kind of inheritance as well? I would prefer if it were simpler.


It's going to end up being a key part of the future inheritance proposal, but that's not a part of this RFC at all.

  > I would prefer if it were simpler.
If you have thoughts about how to simplify it, please get involved in that thread! We don't actively try to introduce complexity, but sometimes, features are just inherently complex. There's also the case that sometimes features are easier to use than they are to define. I think this will be one of those kinds of features. At its core it just means that if you have a Vec<i32> and you also have

    impl<T> Foo for Vec<T>
you'll be able to define an additional

    impl Foo for Vec<i32>
and your Vec<i32> will use that one instead of the more general Vec<T> one.


Now I get it. It is not that complicated after all.


How long does Crater take to compile all (2792!) crates on crates.io? It ought to be embarrassingly parallelizable. Is there a dashboard page showing the Crater results for Rust nightlies?


At the moment we are using 60 r3.xlarge spot instances each running a single build at a time, and as of a few weeks ago it took maybe two hours. It hasn't been optimized.

There is no web dashboard yet (the website[1] - which runs Rust! - is quite minimal), but it's coming.

[1]: https://crater.rust-lang.org/


Where does the money come from to pay for these instances? Is it Mozilla?


Yes, Mozilla currently foots the bill for Rust's infrastructure. There's currently no way to fund Rust development directly, though we'd happily consider accepting donations of services from companies that provide these sorts of things. :)


I wonder if you can use Buck with its distributed cache to make this go way faster?


If they add 'reuse' of old compilation intermediate results, the test for whether the source has changed should not be timestamp-based. That never works reliably, which is why "make clean; make" is so common. The source files must be compared by some cryptographic hash.


Content-sensitivity is actually required for this to be most useful with Rust. The file isn't a hard organisational boundary with Rust, incremental compilation would work either at the module level or the function level (i.e. the goal is to be incremental on the actual structures the compiler works with).

In any case, it has so many more benefits than just being actually-correct (as you say), e.g. one can edit comments/whitespace without forcing rebuilds.


Is that how they're doing it now? (I have experienced some trouble with cargo detecting changes to source files on my Mac, but I can't reproduce it reliably enough to file a bug.)


The compiler itself (rustc) does no form of incremental compilation: it just takes an input crate and compiles it entirely, unconditionally.

I believe cargo (which calls out to rustc) is using timestamps at the moment.


Oh, interesting. That definitely sounds like the source of my problem. I'll have to look into it. Thanks.


For anyone interested in the talks from the recent RustCamp, the videos are now available: http://confreaks.tv/events/rustcamp2015


Certainly looking forward to the borrow checker improvements as it's quite tedious to work around the match borrowing problem.


What's interesting to me about SEME regions/non-lexical borrows is that it's a great example of a tradeoff:

Right now, the borrow checker's rules are very straightforward: references live for the entire lexical scope in which they're created. However, the downside of this is that sometimes you want the borrow to end early, and so you have to code around it.

With non-lexical borrows, the rules get much more complex: borrows are determined by the control flow graph, not lexical scope. However, you no longer need to code around those edge cases.

So the question of which is better really comes down to a question: do programmers find reasoning about lexical scope or the CFG more intuitive? It would appear the latter is the case, which is kind of surprising for me, but maybe it shouldn't be.


Remember that Rust CFGs are much simpler than C++ CFGs (no goto, for one), and in general CFG analysis is simpler due to ownership. It's easier to reason about Rust control flow than it is about C++.


Can't you think of it as still being scoped - just more fine grained? Pieces of structs can be borrow checked (what you are using) instead of the whole struct?


We already do that. (See LoanPath in the compiler.) It's not a problem of reasoning about the structure of data--it's a problem of defining a notion of "overlapping control flow regions" that is simultaneously intuitive to the programmer, easy to compute, sound, and satisfies the ordering constraints we need (well-defined GLB, LUB, and partial order).


Is there a real trade off between intuitive and easy to compute, versus sound and satisfies the ordering constraints, or can you just compute the most precise sound solution?


Maybe; it's unclear. The hardest part is getting the GLB/LUB/subtyping right.


Yes, it's a scope, but not a lexical scope, hence the name :)

I believe that 'borrowing part of a struct' is a different extension to the system, actually, though maybe they'll be put together. I haven't been involved in the pre-RFC myself.


Same here; the borrow checker in its current incarnation left a sour taste in my mouth. What made it worse is that the language's marketing keeps plugging the borrow checker as this great, fully baked feature. And (like you said) in real life it's fairly easy to run into cases where the borrow checker has false positives that are hard to work around.

It was disheartening to see that tackling it was not a priority. There are long-standing bugs (2+ years old) filed against it on GitHub, and so far the answer has been: it's hard to fix, so we're not going to do it yet.

I'm really happy to see this development. I hope it lands sooner rather than later (though we have to wait until 2016).


They should put having technical books on Rust as a goal.

I learn via technical books btw.

Does anybody know if there's a rust book coming out?

I mean, Julia has a book coming from Manning and it's not even at version 1.


There's an O'Reilly book in the works: http://www.amazon.com/Programming-Rust-Jim-Blandy/dp/1491927...

And The Rust Programming Language (https://doc.rust-lang.org/book) is on its way to paper publication.

The newly minted Rustonomicon (https://doc.rust-lang.org/nightly/nomicon/) that covers deeper aspects of Rust is hopefully destined for the same.


It's probably a niche interest, but do you know if anyone's working on a Rust equivalent of Stroustrup's Design and Evolution of C++? I found that book a huge help in grokking the "why"s behind the language, and gaining some degree of mechanical sympathy with it.

I appreciate that the vast majority of Rust's D&E happened in the open, so most of it's probably still available, but it's been a long and twisty road since Graydon's initial public post and picking out a coherent narrative from umpteen separate slowly-linkrotting fora would be no small task.


I dream of doing such a thing someday. I'm not aware of anyone that's actively working on it.

I did a talk at FOSDEM 2015 along these lines, but the recording apparently got lost along with a lot of the others :/


All of these are great (really, not being smug), but FWIW neither TRPL nor the Rustonomicon addresses my needs (Rust by Example is probably the closest). It may be related to the weird set of skills and needs I have, but hey, I'll post about my experience anyway in case someone shares my POV. ;)

My background includes OS development (Windows COM), driver development (C for kernel mode driver and C for HW firmware development), web development (C#/ASP.NET and JavaScript - both in browser and Node.js), and most recently HW simulation using C++/SystemC. Mixed bag, I know. What I want to use Rust for is desktop app development. I could go C, but that's a lot of manual labor. I could go C# but I don't like paying cost of the runtime on desktop. I could go C++ but I don't like getting my soul crushed. I tested viability of Node (and Python, actually) for desktop apps but it just didn't fit my needs.

So I want to love Rust. And I kinda do. Initially I didn't like certain things (bits of syntax mostly) but I've learned to love them. The problem is: none of the resources tell me interesting stuff all the way through the system. I feel like knowledge could be laid out with all the details from "this is syntax" to "this is what happens" to "this is what you can/cannot do" to "this is why this happens" to "this is what this means to the compiler" to "this is what happens under the hood/in asm".

I'd also[1] love to have something that roughly maps Rust to C and gives me a clear explanation of the pros/cons of various approaches. It wasn't immediately obvious to me whether I can have static methods. Or whether it's important to have self mutable (or not) as a method param. Or how to organize structures now that I have to keep mutability in mind. I know this comes with time but I'd love a kick of sorts. :) The best ideas tend to come from reading existing pieces of code, but one rarely knows if the author is worth mimicking. :S

Either way - I'm invested and I like what I'm seeing. Great work guys!

[1] or these could become one thing, dunno


Fundamentally, Rust just doesn't define things well enough to answer the "this is what it compiles to" question.

There are some things, like `Option<&T>`, which we guarantee, but mostly it's a lot of "man, if the compiler were smart enough...". Even then, a stray annotation can wall off LLVM and kill any chance of perf (e.g. if the function is not a candidate for cross-crate inlining).


Oh, and on a semi-related note - I get sad every time I find an interesting-looking link just to end up with a 404 on the other side. This - for whatever reason - happens far too often with Rust-related resources. Trying to get to "The Advanced Rust Programming Language Book Draft" - no dice. The page 404s, and so does the repo. Lifetime of links does matter, guys. ;(


That book is now https://doc.rust-lang.org/nightly/nomicon/ . It was published at .../adv-book/ for approximately 1 day, but then was quickly moved.


The "doesn't define things well enough" thing bothers me a little bit. I actually care in my real job about what happens after compilation and why. Not being able to guesstimate what happens in the machine by looking at source is no deal breaker for what I want to use it for, but I'd imagine this would be an issue for stuff like OS/driver/firmware development.

But I don't think this is the most important thing I want to know. What I want Rust to be is the opinionated low cost, higher level language. "Do things this way, dummy!" is what I need the most (at this point of my familiarity with the language at least).


I had a conversation with two developers at work who were trying to convince me that C++11/14 feels like a totally different language and is a lot more enjoyable to work with. While I wasn't sold, I did make a mental note to take a look with an open mind.


It's true. But the bad stuff is still there for you to (ab)use. I don't like C++ because (at this point at least) it's not opinionated enough. 10 people can write 10 completely foreign (in terms of feel) pieces of code and all of them would be correct in some sense of the word.


There is an official book: https://doc.rust-lang.org/stable/book/


I have been one of the backers of this Kickstarter project for a Rust book: Rust Programming Concepts

https://www.kickstarter.com/projects/1712125778/rust-program...


Are compiler features to support writing something like libeigen still on the roadmap? Rust is IMHO a bit of a non-starter for many engineering fields until it has a really good story for array math.


Those are mostly type-system features, and they are still on the roadmap, but we're focusing on work that will make them easier to build later rather than going straight to them.


Ok, thanks!


"We plan to extend the compiler to permit deeper integration with IDEs and other tools; the plan is to focus initially on two IDEs, and then grow from there."

Any ideas which IDEs will be chosen?


Visual Studio and another yet-to-be-determined one.


Sounds wonderful! I especially can't wait for incremental compilation. I can't understand how others do any work without it.


Since crates are the compilation boundary, and you can make projects out of multiple crates, larger Rust projects tend to be split up, even internally.

Take Servo, for instance: https://github.com/servo/servo/tree/master/components or rustc itself: https://github.com/rust-lang/rust/tree/master/src

Each of these subdirectories is a crate (well, basically; a few aren't), so you'll only be recompiling stuff in the subdirectory you're editing. Incremental compilation will still help with projects like this, of course.


Is it possible for rust to optimize across crate boundaries?


Yep, it's possible to enable link time optimizations.


With LTO, yes. I always forget whether #[inline] does or not.


#[inline] is all about cross-crate work; inside a crate, items not marked #[inline] may still be inlined by the optimiser, but it doesn’t do cross-crate inlining without #[inline] or LTO.


Not *all* about: it also gives an inline hint to LLVM, causing the function to be inlined more eagerly (even within a single crate) than the default.


Are generics not inlined when appropriate even without #[inline] or LTO?


You don't need the #[inline] attribute when generics are involved, because Rust already has to cross-crate-export function metadata when generics are involved because otherwise it would be impossible to monomorphize. At that point, LLVM will inline the monomorphized functions as it deems fit.


Yes, #[inline] works across crates


> Rust’s greatest potential is to unlock a new generation of systems programmers. And that’s not just because of the language; it’s just as much because of a community culture that says “Don’t know the difference between the stack and the heap? Don’t worry, Rust is a great way to learn about it, and I’d love to show you how.”

Wonderful stuff.


Is there an RFC or thread covering the IDE integration plans?


Not yet, but Nick Cameron (@nrc) has been holding talks with many IDE makers and plans to launch a "Rust IDE initiative" very soon.


It would be nice to have something for Atom. Like http://nuclide.io/ with a built-in build pipeline, syntax highlighting, smart code completion, error checking, and some basic support for refactoring.

I would prefer that over a plugin for Eclipse or IntelliJ IDEA.


It would be fantastic if Jetbrains was building an IDE for Rust somewhere in their basement. One can only hope...


They are currently tackling C++ (CLion), which leads me to believe it is within the realm of possibility if they were so inclined.


Except that Jetbrains probably makes most of its revenue from enterprise, and Rust has yet to see big enough adoption there (although there is some AFAIK). I'm not sure it's worth the investment yet.

On the other hand this is a chicken and egg problem. A powerful IDE like the ones Jetbrains builds would increase Rust adoption a lot, and eventually Jetbrains would completely dominate that market.


I doubt they'll be inclined in less than 2-3 years.

Still, someone could create a plugin (AFAIK there was one, but it was abandoned).


I am very excited to see the focus on tooling. Rust is a kick ass language, but imagine how awesome it would be to have an IDE to help you with lifetimes, inline documentation, etc...

C#'s best feature is its integration with visual studio. Rust would greatly benefit from something similar.


Aaron Turon and Niko Matsakis gave a talk on this for their Rust Camp keynote, you can see slides on it here if you prefer that format:

http://rustcamp.com/schedule.html


It will be interesting to see Rust and Swift evolve as Swift moves to the server side (at least on Linux). Both are modern languages although each has its own target users.


There are 5 hits on dice.com for "rust".


Yes, there are few dedicated Rust jobs yet. Most organizations that are using Rust in production are using programmers they've already hired rather than hiring new ones specifically for Rust.

It's a newly stable language, these things take time.


And only the first actually refers to the language.



