There's a lot of talk in general about how Rust has a steep learning curve, mainly due to the borrow checker and lifetimes - but is this as hard as it gets?
Everything here seems logical and fairly self-explanatory, the only slightly alien thing is the lifetime annotation.
Are there advanced gotchas that can actually trip up an experienced coder? Are there some aspects of real work that Rust makes difficult, or impossible?
I mostly code in C# (which in my opinion is an excellent and productive ecosystem), but Rust seems to promise a greater level of confidence in the code, which is very alluring. I'd like to take the plunge; I'm just a bit worried about smashing into hidden rocks at the deep end.
I found a few things a little difficult. First, the compiler is not actually as smart about / accepting of valid borrow patterns as should abstractly be possible - things like passing borrows of different, non-conflicting subfields of a struct to different subroutines. This got a lot better with non-lexical lifetimes, but I think it's still a problem.
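A minimal sketch of the field-borrow case (`Player` and `level_up` are made-up names for illustration):

```rust
struct Player {
    health: u32,
    inventory: Vec<String>,
}

fn level_up(p: &mut Player) -> (u32, usize) {
    // Two simultaneous &mut borrows of *different* fields are accepted,
    // because the compiler sees the disjoint field paths directly:
    let h = &mut p.health;
    let inv = &mut p.inventory;
    *h += 10;
    inv.push("sword".to_string());
    // But if `health` were only reachable through a method like
    // `fn health_mut(&mut self) -> &mut u32`, that call would borrow all
    // of `p` and conflict with the borrow of `inventory`.
    (*h, inv.len())
}

fn main() {
    let mut p = Player { health: 90, inventory: Vec::new() };
    assert_eq!(level_up(&mut p), (100, 1));
}
```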
Two: splitting mut borrows. It’s reasonable to take a mut borrow of some resource and divide it into two independent borrows of exclusive components - think non-overlapping subarrays, or independent struct members. The language does not really provide this for you, but it is safe. (There are some APIs like this, but last time I checked they didn’t cover all use cases.)
Structs containing a borrow are just weird and annoying to work with. Lifetime parameterization everywhere is verbose and it’s difficult to determine where you’ve made a mistake. The syntax for describing lifetime relationships is non-obvious to me.
let (first_42, rest) = mutable_slice.split_at_mut(42);
but you can see that the inner logic is just a bit of pointer twiddling[2]
let len = self.len();
let ptr = self.as_mut_ptr();
// SAFETY: Caller has to check that `0 <= mid <= self.len()`
unsafe { (from_raw_parts_mut(ptr, mid), from_raw_parts_mut(ptr.add(mid), len - mid)) }
Tl;dr Rust can't distinguish between a `Vec` whose length is mutable (which might need to reallocate) and one whose elements are mutable (which can provide mutable references to its elements but will never have to move them), so it prevents mutation in a case where it would be safe.
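A small illustration of that restriction (`bump_first` is a made-up helper):

```rust
fn bump_first(v: &mut Vec<i32>) -> i32 {
    let first = &mut v[0]; // mutable borrow of one element
    // v.push(4); // error: cannot borrow `*v` as mutable more than once at a time
    //            // The push *might* reallocate and move the elements out from
    //            // under `first`, so the whole Vec stays borrowed while
    //            // `first` is live - even for mutations that wouldn't move anything.
    *first += 10;
    *first
}

fn main() {
    let mut v = vec![1, 2, 3];
    assert_eq!(bump_first(&mut v), 11);
    assert_eq!(v, vec![11, 2, 3]);
}
```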
Take a look at the standard hashmap implementation (https://doc.rust-lang.org/src/std/collections/hash/map.rs.ht...). Notice the lifetime annotations everywhere. This is a relatively basic data structure, but it has a ton of visual noise due to the constraints of the lifetime and borrow model, and programmers must know, understand and be able to reason about these constraints when using this data structure.
Now imagine a case where you need to operate on values with two (or more) different lifetimes (by definition, all must live at least as long as the one with smallest lifetime, but in practice they all can and will have different lifetimes). Now you have `'a` and `'b` (and perhaps more) everywhere *and* developers must now reason about that. (random example pulled from a popular networking crate: https://github.com/tokio-rs/tokio/blob/718d6ce8cac9f2e081c0a...)
It isn't trivial to pick up from scratch, especially if one comes from a much higher level language (C#, Java, python, etc.). This is true even for people who are very experienced writing high-quality, durable, and safe code in other languages (e.g., C++) without the help of a compiler, because the ownership and lifetime model are both different and compiler-enforced.
> Take a look at the standard hashmap implementation (https://doc.rust-lang.org/src/std/collections/hash/map.rs.ht...). Notice the lifetime annotations everywhere. This is a relatively basic data structure, but it has a ton of visual noise due to the constraints of the lifetime and borrow model, and programmers must know, understand and be able to reason about these constraints when using this data structure.
I'm honestly not sure what you're referring to with this example. The lines you linked to have no lifetime annotations, just generic type parameters (which I expect you'd see in most statically typed languages for a type like this), and from scanning down the file, the only places I see explicit lifetime annotations other than the anonymous lifetime (i.e. '_) are in documentation comments, until the iterator implementations.

I already think the somewhat common argument that Rust makes implementing data structures hard isn't really that meaningful (despite hashmaps and linked lists being common CS homework assignments, implementing them really isn't something most programmers do at their jobs), but I'm not sure you can even make that point with regard to lifetimes here. If anything, being able to write almost all of the hashmap implementation without explicitly naming a single lifetime makes the opposite point: you can go surprisingly far without using lifetimes at all, and even an intermediate-level understanding of them will cover a large portion of the cases where you absolutely can't avoid them.
Lifetimes can definitely be confusing, and sometimes the syntax for them is verbose enough that it can make it hard to understand what's going on, but I don't think they're quite as big a roadblock as they might sound from your comment. Even when you do have to use them, there's a dirty hack you can use if you're absolutely stuck (assuming you're writing safe Rust; unsafe is a whole other can of worms that I think you can make a good case about with regard to data structure implementation, but it's even less likely that you'll need it than explicit lifetimes).

The compiler only requires explicit lifetimes because brute-forcing the potential lifetime combinations to see if any is valid would be extremely inefficient, but you can always try the different combinations yourself. Because the whole point of providing explicit lifetimes is that the borrow checker can easily verify whether the ones you provided are correct, you don't actually have to be able to tell yourself! By definition, any lifetimes the compiler doesn't reject are correct, so if you find something that works, you can move on (and maybe ask online for someone to explain why it works if you're curious, or come back when you feel more confident in your understanding).
> Now you have `'a` and `'b` (and perhaps more) everywhere
Only if you're a bad Rust programmer. Lifetime variables can and should be descriptive identifiers, just like literally any other variable. Complaining that there's lots of 'a and 'b around is like complaining that your code has lots of x, y, z variables and you can't keep them all straight... It's not a language problem.
1) They are used pervasively in the standard library and its documentation.
2) Due to lifetime subtyping and implicit bounds, descriptive names may be inaccurate. For example, if I have an Ast that contains slices with the lifetime of the source code, I might decide to write Ast<'source>. However, what do I do if I then have a reference to such a thing? Do I add another lifetime parameter, e.g. &'borrow Ast<'source>? If I am not going to be returning any of the 'source-lifetime slices, this additional parameter may be redundant. In such a case, do I write &'source Ast<'source> or &'borrow Ast<'borrow>? Both of those names are incorrect. Do I keep around an extra lifetime parameter to get the correct names (and be more future-proof), or do I just suck it up and use 'a?
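For illustration, here's the two-lifetime shape in compilable form ('source and 'borrow are just descriptive name choices; Ast and first_token are made up):

```rust
struct Ast<'source> {
    tokens: Vec<&'source str>,
}

// A borrow of the Ast gets its own (usually shorter) lifetime 'borrow,
// distinct from the 'source lifetime of the slices stored inside it:
fn first_token<'borrow, 'source>(ast: &'borrow Ast<'source>) -> &'source str {
    ast.tokens[0] // the returned slice outlives the borrow of `ast` itself
}

fn main() {
    let source = String::from("let x = 1;");
    let ast = Ast { tokens: source.split(' ').collect() };
    assert_eq!(first_token(&ast), "let");
}
```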
This is a non-problem, or at least a non-unique problem. It is the exact same thing as using variables as function parameters. Sometimes what the variable means in one place is different from another place, so... you use different names. This surprises no one except maybe first-time programmers. Sometimes a variable means nothing and we can give it a single-character name, but then we shouldn't have a bunch of them to get confused by.
It's a "problem" in the sense that it's additional syntax with critically important meaning to the understanding of the code. The descriptiveness of the labels may help, but it won't eliminate the additional cognitive overhead required to mentally parse and reason about the code.
Imho, Rust is not hard as in "high complexity hard", but it is hard as in "must practice to get good at".
All the little rules and new concepts add up, and in the beginning it can be frustrating to have the compiler complaining. On the plus side, it's like a motorcycle - it doesn't take long before you can ride it safely & efficiently.
Learning Rust is fun, I wholeheartedly suggest it even if you don't end up using it at work.
I don't think motorcycles are a great analogy. They're way more dangerous for the driver than a car and you have to be constantly vigilant about your surroundings.
Yeah. If you start to get a bit too confident for your level, you'll just get yelled at by the compiler instead of ending up in the ER (if you're still breathing).
But I suppose the analogy is more along the lines of "it can be intimidating at first, but you can quickly figure out how to use it reasonably well".
You are right, I should have found a better one. I was between "bicycle" and "motorcycle" and I felt that bicycle would imply that learning is a breeze, so I went with motorcycle.
The ownership rules make it very difficult to write data structures.
Even the most trivial singly linked lists have caused innumerable blog articles to be penned on how exactly to do that in the most idiomatic Rusty way.
Personally, I don't see any problem with using unsafe Rust in these instances, because linear types are not an appropriate model when you don't have a single source and sink.
Any data structure where chunks of heap memory are owned by a single pointer (red black trees, singly-linked lists, vectors, deques, circular queues, etc) can be represented easily in safe Rust.
Implementing a double-linked list is where things get tricky, as the nodes don't have a singular owner.
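A singly linked list in entirely safe Rust, for illustration - each node has exactly one owner (its predecessor's `next`, or the list head):

```rust
struct Node<T> {
    value: T,
    next: Option<Box<Node<T>>>, // the Box is the node's single owner
}

// Prepending takes ownership of the old head and hands it to the new node:
fn push_front<T>(head: Option<Box<Node<T>>>, value: T) -> Option<Box<Node<T>>> {
    Some(Box::new(Node { value, next: head }))
}

fn main() {
    let list = push_front(push_front(None, 2), 1);
    assert_eq!(list.as_ref().map(|n| &n.value), Some(&1));
    // A doubly linked version needs each node reachable from two places
    // (prev and next), which single ownership forbids - hence Rc/RefCell
    // or unsafe.
}
```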
Yep. It’s even why the standard library implementation of a doubly linked list uses unsafe! Which I don’t think is a problem. If you’re writing a data structure, it will be under heavy scrutiny, review, and testing, so we should be okay with bypassing the compiler checks.
As I understand it, in low-level languages the concept of ownership exists anyway, whether or not the language lets you declare it explicitly. Rust just makes you write it out explicitly.
The only concept of ownership that exists in low level languages is that the kernel owns memory and gives your program a chunk of it. Aside from that I think the closest equivalent concept is the system break, but that just determines where in memory your stack and heap are divided.
Ownership doesn't exist in the C language per se, but it is still an important concept you have to reason about when manually managing memory. I think that's what GP meant.
E.g if you add some object to a generic hash table as a key by pointer, you better treat that pointer as being owned by the hash table and not mutate it, or interesting stuff will happen.
No one uses "low level language" to mean only assembler anymore. Besides, you also have to reason about ownership in assembler. It's not something you can really get away from at any level of the stack.
I think using data structures as an example is sort of missing the forest for the trees. It's definitely harder to do, but you generally don't need to as there's an excellent standard library and wider ecosystem of libraries that do this for you.
And implementing data structures in any language is hard. It's just a harder thing to do than most things, because there are always going to be a lot of gnarly implementation details and edge cases.
If you don’t need or want that, you’re probably better off sticking to other languages. Although hopefully, not something like C or C++ which the US government now advises against using for security reasons.
As someone coming from C I find the stdlib is fine. It has all the datastructures you need for the vast majority of code. It doesn't have very domain specific things like the kind of structures a text editor might use, for instance. That's fine.
As for the quality of crates, yes, this is the case in any language. Using a library includes the responsibility of making sure the quality is up to scratch. This is not unique to Rust.
I came to rust mostly from higher level languages, and I didn’t have as much of a hard time with lifetimes as I did with dynamic dispatch via trait objects. Lifetimes can certainly get hairy, but it’s easy to get around them when you’re learning by cloning a value, wrapping it in an Arc, or whatever. I have had multiple times where I wanted to use dynamic dispatch and discovered after a few days of work that my idea wouldn’t work for one reason or another. Mostly this came down to some limitations on what trait objects can and can’t do, but it has mostly ceased to be a problem now that I understand those limitations better.
No, there’s no argument. The statement was that lifetimes aren’t that hard, and that when you do get stuck there are escape hatches. The escape hatches are especially useful when you’re still learning.
FWIW I write Rust professionally now, and in our entire codebase we have maybe four or five data structures containing Arcs, specifically where objects are spawned as Futures on other threads.
I used ripgrep to find occurrences of `Arc::` (to exclude type signatures and only find constructors) and then just gave it a quick manual look to exclude tests and benchmarks.
This leaves me with:
- one data structure that contains general application state and contextual information, which is created at application startup and shared across all tasks/threads
- one data structure for accumulating results that is shared across threads. This is an Arc<Mutex<(u64, SomeEnum)>>. The less common of the two enum variants is an Arc<BTreeMap>.
- one data structure used to pass a database connection in to a context where it'll be used to spawn async tasks
All told, we've only got four places in production paths where we construct a new Arc. One of those four is only called once in the life cycle of the application, while the others are called for any given invocation.
Cloning values and wrapping structs in Arc can carry a huge performance cost. It's pretty easy to write correct Rust. It's a lot trickier to write fast Rust.
That's where the learning curve started to get very steep for me.
Yes. Most of the production use cases for Rust are aimed at C, C++, and Go users. In my past experience, even poorly-written Go tends to outperform poorly-written Rust for many tasks, not to mention C or C++.
You're not exactly the target market if you're writing python.
But that is kind of my point. There are plenty of reasons to use Rust that have nothing to do with performance. My brief toe-dip into the Rust world loves the static types, good dependency management, and single executable deployment. That it is faster than my standard language is just the cherry on top.
Yeah, Rust is a really fun language to write in my opinion. If your use-cases are tolerant of non-hyper-optimized Rust, there’s no reason to make it too hard on yourself while you’re learning. As you use it, I think you’ll naturally gravitate towards writing more optimized code, because the language guides you in that direction.
I don’t know why this would be an unfortunate reality. People should use whatever language they want.
Rust gives you a lot of ways to express yourself, but now that I am quite familiar with it, I can write Rust just as quickly as I can write Python or Go.
You can spend all day code golfing in any language.
This conversation was about learning Rust, not about writing maximally optimized Rust. Clones and Arcs can make learning a lot easier, since they let you get stuff done without needing to figure out every obscure lifetime error.
For production contexts, we take more care to optimize for performance.
You and I have very different definitions of "maximally optimized." I think of eliminating all possible wasted CPU cycles when I hear those words. In comparison, being able to remove spurious copies and atomic accesses is table stakes for claiming that you know a systems programming language. Most use cases of C and C++ today are in that state.
Well, clones can sometimes make it faster (e.g. when you use multiple threads that can work independently, synchronization will have a much higher overhead than a literally insanely fast, predictable memory-to-memory copy). And then the compiler may very well be able to elide the copy, it only needs it for semantics.
You’re right, I was exaggerating for effect. But the point is that we’re talking about learning the language. You don’t have to get it perfect on your first try or do everything the best way when you’re just getting started.
Yeah, but Python is the slowest language ever. I think that a decent language like D or Common Lisp would outperform poorly written Rust, and they're easier to handle.
It really depends on what you're building whether that perf cost is problematic or not, and usually it isn't. If 90% of your code is clean Rust and the last 10% outside of the critical path is a straightforward clone or Arc, then I see no reason not to go that way.
Huge is maybe a strong term. The cost of a clone is hugely dependent on the size of the data being cloned. Avoiding clones of large data structures is important, but even that is unlikely to be a bottleneck outside of a hot path.
Arcs can be expensive, but once you’ve got the sense for lifetimes, they aren’t that hard to avoid.
> Mostly this came down to some limitations on what trait objects can and can’t do, but it has mostly ceased to be a problem now that I understand those limitations better.
Care to recount what some of those misunderstandings were? I’m casually interested in Rust but only really observe from afar, since most of my day job is Swift. Don’t get me wrong, Swift has some odd limitations around protocols (closest equivalent to traits) that may be similar, but I’m curious to see what some common pitfalls may be with Rust traits.
I’m sorry I don’t have specific examples because it’s been a while, but IIRC generally the issues came from trying to mix compile-time dynamism with runtime dynamism. Things like depending on methods that made traits non-object-safe and so on.
I think it can be quite confusing initially how traits are both the unit of compile-time generics AND can be used for dynamic dispatch at runtime, given that there are separate rules about what can be used in which context.
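A small illustration of those rules - the same trait used both as a compile-time generic bound and as a runtime trait object (`Speak` and `Dog` are made-up names):

```rust
trait Speak {
    fn speak(&self) -> String;   // fine for dynamic dispatch
    // fn duplicate(&self) -> Self; // returning Self (or using generics)
    //                              // would make the trait unusable as
    //                              // `dyn Speak` ("not object safe")
}

struct Dog;
impl Speak for Dog {
    fn speak(&self) -> String { "woof".to_string() }
}

// Compile-time generics: monomorphized per concrete type.
fn speak_static<T: Speak>(t: &T) -> String { t.speak() }

// Runtime dispatch: one function, vtable lookup per call.
fn speak_dyn(t: &dyn Speak) -> String { t.speak() }

fn main() {
    let animals: Vec<Box<dyn Speak>> = vec![Box::new(Dog)];
    assert_eq!(speak_static(&Dog), "woof");
    assert_eq!(speak_dyn(animals[0].as_ref()), "woof");
}
```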
I think Rust is a lot easier when you already have decent experience with C. In C you have to reason about ownership and lifetimes too, the only difference is it's all implicit. Knowing C also helps you a great deal in the cases where unsafe is either necessary or just the best solution. Programmers who haven't touched C might be very hesitant to even consider using unsafe.
Other than that, it's been my experience that lifetimes come up very rarely or even never for a large range of programs. But someone coming from Java/C# where almost everything is a reference might struggle more than others because these languages lend themselves to constructing large object graphs that are annoying to do in Rust. You have to take a different approach, keeping object graphiness to a minimum where possible.
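One common way to keep the object-graphiness down - an illustrative sketch using indices into a Vec instead of references between nodes (`Graph` and its methods are made up):

```rust
// Nodes live in one Vec; edges refer to them by index. No lifetimes, no
// Rc cycles - the Vec owns everything.
struct Graph {
    names: Vec<String>,
    edges: Vec<(usize, usize)>, // indices into `names`
}

impl Graph {
    fn add_node(&mut self, name: &str) -> usize {
        self.names.push(name.to_string());
        self.names.len() - 1
    }

    fn add_edge(&mut self, a: usize, b: usize) {
        self.edges.push((a, b));
    }

    fn neighbors(&self, n: usize) -> Vec<usize> {
        self.edges
            .iter()
            .filter_map(|&(a, b)| {
                if a == n { Some(b) } else if b == n { Some(a) } else { None }
            })
            .collect()
    }
}

fn main() {
    let mut g = Graph { names: Vec::new(), edges: Vec::new() };
    let a = g.add_node("a");
    let b = g.add_node("b");
    g.add_edge(a, b);
    assert_eq!(g.neighbors(a), vec![b]);
}
```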
Unless you're doing something tricky, this is basically it. If you want to implement data structures like cyclic graphs or doubly linked lists things could be trickier. Sometimes people struggle with async because they dive right into that early and it can be a bit tricky if you use async + closures and don't know what you're doing.
I think this exactly it: the rules on their own are simple but their implications are far reaching and hard to recognize ahead of time until you practice writing code.
Having to clone a String every time you send it around is “confusing” if you come from a C char* pointer background.
You can dereference it to &str but I’m pretty sure that introduces weird “lifetime” &’a or whatever errors in most places (when you are first learning to write Rust that is)
same for structs, I typically just derive Clone. Kind of gross but for hobbyist projects where I want to “move fast” (coming from node.js so it’s hard to not treat Rust like a “scripting” language), it suffices.
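For what it's worth, the common case of passing a String into a function works without clones and without naming any lifetime - a minimal sketch (`shout` is a made-up function):

```rust
// Taking &str borrows the String only for the duration of the call, so no
// clone is needed and lifetime elision handles the annotation for you:
fn shout(s: &str) -> String {
    s.to_uppercase()
}

fn main() {
    let owned = String::from("hello");
    assert_eq!(shout(&owned), "HELLO"); // &String coerces to &str
    assert_eq!(owned, "hello");         // `owned` is still usable afterwards
}
```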
That is pretty much it; it is way harder than you might assume to learn to fit within those constraints, and it has some very unintuitive consequences that won't appear in 99% of your code but are very hard to deal with in the other 1%.
Interesting, I feel like "share vector across threads" is an area that rust excels in. There's `split_at_mut` and `rayon`, and now with GATs you should be able to create a lending iterator too.
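Sticking to the standard library, a sketch of the split_at_mut approach combined with scoped threads (`double_in_parallel` is a made-up helper):

```rust
use std::thread;

// Each half of the split gets its own exclusive &mut, so two threads can
// mutate the same Vec's elements concurrently without any locking:
fn double_in_parallel(data: &mut [u32]) {
    let mid = data.len() / 2;
    let (left, right) = data.split_at_mut(mid);
    thread::scope(|s| {
        s.spawn(|| for x in left.iter_mut() { *x *= 2 });
        s.spawn(|| for x in right.iter_mut() { *x *= 2 });
    }); // scope waits for both threads before the borrows end
}

fn main() {
    let mut data = vec![1, 2, 3, 4];
    double_in_parallel(&mut data);
    assert_eq!(data, vec![2, 4, 6, 8]);
}
```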
Borrow checking (lifetimes and references) is fairly easy for a beginner to Rust to grasp; I think it is basically a meme at this point that this stuff is hard. It's an entirely different thing if you are making some kind of async library, though - that is when you will run into advanced gotchas.
Rather than a nice steep curve, it's more like a threshold function.
Maybe. I found it very easy, but as you've seen some people really struggle. I think it matters a lot how you've mentally modelled what's going on, Rust fit how I assumed everything actually works anyway. I think C programs I wrote 20 years ago do many of the things I do in Rust today. I had lots of C background (enough to at least cosplay as an expert), lots of Java, some Go, and a whole bunch of other languages in my 30+ years of programming before I decided to learn Rust.
I learned C# after Rust, you say "mostly" C# but I guess it depends what else you've some experience in. If you have only worked in managed memory languages, Java, Javascript, Python, that sort of thing, then you should expect some struggle in Rust just because it isn't a managed language and so you are now responsible for making sure resource management happens.
In most of the unmanaged languages if you screw up your "punishment" is that the program has mysterious bugs, good luck - but in Rust the compiler will yell at you. So, the upside is obviously at least you know there's a problem, but the downside is that you're responsible for fixing the problem, and in managed languages you were not.
As to picking it up, well, it's Advent of Code starting December 1st, which is often taken as an excuse to either learn a new language or practice a preferred one you don't get to use, if you write software as a fun activity rather than only for $$$. You can assume some people will post Rust solutions you can read for inspiration, so that's an option. There are various free tutorials, I won't recommend any because that's not how I learned and our experience is likely very different.
One thing this article doesn't mention at all that might jump out at a C# programmer depending on other background, Rust has Move assignment semantics. This might work out to be something you didn't know you missed in C# but on the other hand you may find it a surprise, hard to say until you try it.
In Rust if I have a variable a_1, with a Thing, in it, and I make a new variable b_2 and I assign b_2 = a_1, that Thing from a_1 is moved to b_2. If I try to just use a_1 later, Rust won't let me, the Thing is gone, it was moved to b_2. I can put a different Thing in a_1 if I want, Rust is fine with that, but you don't get copies of the same Thing in different variables by default. If that makes sense, b_2 = a_1.clone() asks the Thing to provide a deep copy of itself.
Types can opt in to having Copy semantics like you'd see in most languages, but only if the type is literally just some bits, and in this case Rust will of course just copy the bits on assignment so that does what you'd expect. Basic types like the integers are Copy, whereas something like a String is not.
So this prevents that problem where you've got a Thing and you change some stuff about it and then put it in a Collection, and then you change more stuff, put it in the collection again, repeat until there are a dozen things in the collection - and then whoops, the Collection doesn't have 12 different Things in it, it has the same thing, Twelve times, with your last set of changes. In C# it's not difficult to do this (I wrote this bug last month without thinking) but in Rust you can't because once the Thing was moved into the Collection it's gone, so you can't change it, let alone add it to the Collection again.
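The move described above, in code (variable names match the description; the commented-out line shows the compiler error you'd get):

```rust
fn main() {
    let a_1 = String::from("Thing");
    let b_2 = a_1; // the String moves out of `a_1` into `b_2`
    // println!("{a_1}"); // error[E0382]: borrow of moved value: `a_1`

    let copy = b_2.clone(); // an explicit deep copy: both remain usable
    assert_eq!(b_2, copy);

    let x: i32 = 5;
    let y = x; // i32 is Copy: the bits are copied and `x` stays valid
    assert_eq!(x + y, 10);
}
```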
> Traditionally, you either have to manage the memory yourself (à la C), or pass the burden down to a run-time feature of the language – heroically called “the garbage collector“.
I think it's worth mentioning that C++ manages heap memory the same way Rust does, with destructors (Drop, in Rust). The story is a bit complicated with the new/delete operators only being deprecated recently and lots of legacy code sitting around, but vector and string work the same way, and those have been in common use for decades. The difference isn't the memory management strategy itself, but rather that Rust catches all our mistakes when we retain pointers to destroyed values. (Move semantics are also quite different, but that's a separate idea I think.)
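A minimal sketch of that deterministic destruction via Drop (`Tracked` is a made-up type; its Drop impl plays the role of a C++ destructor):

```rust
use std::cell::Cell;
use std::rc::Rc;

// Counts how many times it has been dropped, via a shared counter:
struct Tracked(Rc<Cell<u32>>);

impl Drop for Tracked {
    fn drop(&mut self) {
        self.0.set(self.0.get() + 1); // runs deterministically at scope end
    }
}

fn main() {
    let drops = Rc::new(Cell::new(0));
    {
        let _t = Tracked(Rc::clone(&drops));
        assert_eq!(drops.get(), 0); // not dropped yet
    } // `_t` leaves scope: drop runs right here, no collector involved
    assert_eq!(drops.get(), 1);
}
```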
A tongue-in-cheek exploration of how Rust achieves memory safety & performance, for people interested in Rust coming from high-level (managed) languages.
You can't make typical Python programs two orders of magnitude faster by rewriting them in C, but that's because those use many bits that are already implemented in C (data structure implementations, regex engine, I/O, databases etc.).
If however you would rewrite those bits in Python, which is what OP said ("that Python is now the only legal programming language in the world to code with"), things would become two orders of magnitude slower (when using the current CPython implementation to run the Python program)! A program doing low-level work (e.g. a B-tree implementation) is that much faster when written in C over Python (if you don't cheat and write bits of your Python program in C), assuming that the C program does take advantage of the optimizations that you can do (and typically do) in C. It might be more like a factor of 60, but on a logarithmic scale that's much closer to two magnitudes than one.
This is assuming CPython, not PyPy or one of the subset-of-Python compilers that work more like C.
PS. OTOH, I think a 386DX is more like 1/1000 of a modern CPU's speed (maybe 1/10000 if counting multiple cores and SIMD).
I think the Java image should be associated with the paragraph before...
> In general, the garbage collector is an inefficient beast.
The article has a tongue-in-cheek style, does the author really need to link to Benchmarks Game and CPU benchmarks?? Modern systems are way more than 100x faster than 386DX anyway.
You should be able to vaguely claim that Python and Java are inherently slower than Rust/C/Whatever. I agree claiming that it is 100x slower is a bit unfounded, but it's pretty easy to go find existing benchmarks that show results like this.
I did search for benchmark comparisons and bumped into one that had 2 orders of magnitude difference.
But naturally, isolated benchmarks can't encapsulate the whole picture because there's a world of difference between testing how fast a loop runs and performance in production.
I guess that arithmetic operations are not very slow in Python, but if you use objects, then Python performs complicated lookups for every field or method access.
If I recall correctly, the first time I used Java it was way back in JDK 1.1. It was amazingly slow. Of course, today's Java is fast enough for its domain, but I still enjoy throwing a jab at it here and there :)
Well, then you are just uninformed. Java can JIT-compile hot loops to C speed. There is hardly anything it is not fast enough for, and where it isn't (a certain kind of HFT), neither is a general-purpose CPU - those shops use FPGAs.
> The biggest of said problems is that the garbage collector has the annoying habit to “pause the world”.
I think the main problem of garbage-collected programs is that they usually allocate several times more RAM than they use. This causes excessive swapping, which produces more noticeable lags than the garbage collection itself. I vaguely remember reading somewhere that the difference can be 6x, but I am not sure that's the correct number. This means that garbage-collected software is great for large, expensive servers stuffed with RAM sticks, but not so great for a personal computer with just 4 or 2 GB.
On a more serious note, I think lighthearted content is what's needed when it comes to intro stuff. There are no puns in the Rust Book ( https://doc.rust-lang.org/book/ ) ;)
Ignoring (or hiding) the images, the explanation is actually the best one I have seen so far and well worth the read. I guess some people like the fun images too. /shrug
I found that not to be the case, and that the explanations were clear. I thought the images were contextually amusing. What specifically was problematic for you to read it?
Would there be any value in a GC'd language where you had the option to explicitly call do_gc_now()? Is this functionality already out there and just not popular? I haven't spent a ton of time in these languages, so forgive the naivety. Again, naively, this seems like it would be a good compromise between what GC offers but near-determinism when you want it.
I guess Rust and C++ also "pause the world", it's just predictable when it happens. It seems kind of arbitrary in some ways, like maybe I don't want the "world pause" at the end of this scope. There are ways around it obviously, but you start language-wrestling at that point.
Rust does not ship a garbage collector with your binary, so there is nothing to pause your code.
When the compiler adds those drop() statements, they just free the respective memory at runtime. There is no need to pause your application to do that.
The garbage collector needs to pause the application because it takes time to calculate in runtime which parts of the application memory are safe to clean. If the application was running in the meantime, there's no guarantee that the blocks it marked as "safe" are actually still so.
"pause the world" usually refers to garbage collectors having to stop all threads, whereas in Rust and C++ allocation/freeing of memory does not have to make any threads other than the current one wait (unless your memory allocator has global locks and multiple threads are trying to use it)
Virtually all GC languages have this option, but it is rarely better overall than the default algorithm for deciding when to do it.
The biggest problem with GC is that it is often global, and a simple call to invoke the global GC doesn't usually help from a local scope. There are some arena-based allocators (I believe OCaml's GC uses this strategy) where this may be more beneficial.
But the question always looms: if there is not enough memory to serve an allocation invoked from a hot loop, what should the program do?
Note also that in advanced GCs, such as .NET or the JVM, a hot loop that doesn't allocate will often not need to be stopped by the GC at all. There are even GCs (all proprietary as far as I know) that guarantee a fixed pause time, so that they can be used in real-time workloads - Azul's JVM GC has this option for example.
You can often do this, although some runtimes treat it only as a hint and may ignore it (usually what you want). Go has some pretty strong guarantees on how long it will stop the world for, which makes it much less of an issue. But the best pause is still no pause, which means no GC.
> Is this functionality already out there and just not popular?
Yep. Node.js has this for instance (you have to enable it with a command line flag, but it's easy enough to do). I imagine most GC languages have a similar option.