> Well, extremely low-overhead of just about anything is certainly achievable with modern JITs[1].
Can't read twitter where I am, but to be clear I was primarily thinking of the code-readability overhead.
> Why is that necessary? You might as well presuppose the necessity of types :)
Well for me the value of using a monad to capture an effect is to be able to verify that I've sequenced that effect correctly at compile time. E.g. I use a monad to represent a database action that has to happen inside a transaction, and the type system can enforce that I wrap it in a transaction. In a language without types it would be technically possible to do that, but entirely pointless, because if I make a mistake it's a runtime failure on that codepath either way. Maybe that was your point, but while monads are technically possible in dynamic languages, the technique that yields a lot of the value from using monads isn't.
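To make the transaction example concrete, here's a rough Python sketch of the pattern (all names hypothetical; Python only enforces it by convention or via a type checker, which is exactly the dynamic-language caveat above) -- the only way to execute a `DbAction` is through `run_in_transaction`:

```python
from dataclasses import dataclass
from typing import Callable, Generic, TypeVar

A = TypeVar("A")

class Connection:
    def __init__(self) -> None:
        self.in_tx = False

@dataclass
class DbAction(Generic[A]):
    # only run_in_transaction should call this thunk
    run: Callable[[Connection], A]

def run_in_transaction(conn: Connection, action: DbAction[A]) -> A:
    conn.in_tx = True                # BEGIN
    try:
        return action.run(conn)      # COMMIT on normal exit
    finally:
        conn.in_tx = False           # ROLLBACK/END either way

# a "database action" that checks it really is inside a transaction
insert_row: DbAction[bool] = DbAction(lambda conn: conn.in_tx)
```

In a typed language the `DbAction[A]` wrapper plus a single runner is what lets the compiler reject an un-wrapped action; here it's only a convention.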
> or, alternatively, say that you need a type system to verify that your monads are truly monads (i.e. obey the monad laws) at compile time, something Haskell doesn't do, either.
If I have a given defect rate in the sequencing of some effect, then isolating the fallible heart of that sequencing in a short piece of code is a big win (and I can e.g. flag that critical section for additional review). I do prefer to have a type system which verifies the monad laws at compile time, and I do think that the stronger the enforcement around the monad system, the more worthwhile it becomes to manage a given effect explicitly.
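Even without compile-time verification, the laws can at least be spot-checked. A minimal sketch with a Maybe-style monad in Python (`None` playing Nothing; a hypothetical encoding where `unit` is identity):

```python
# Tiny Maybe monad: a "Just x" is just x, Nothing is None.
def unit(x):
    return x

def bind(m, f):
    return None if m is None else f(m)

f = lambda x: x + 1 if x < 10 else None   # a fallible step
g = lambda x: x * 2

x = 3
# left identity:  unit(x) >>= f  ==  f(x)
assert bind(unit(x), f) == f(x)
# right identity: m >>= unit  ==  m
assert bind(unit(x), unit) == unit(x)
# associativity:  (m >>= f) >>= g  ==  m >>= (\y -> f(y) >>= g)
assert bind(bind(unit(x), f), g) == bind(unit(x), lambda y: bind(f(y), g))
```

A property-based test over many inputs would be the runtime stand-in for what a law-checking type system does once and for all.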
> Well for me the value of using a monad to capture an effect is to be able to verify that I've sequenced that effect correctly at compile time.
> I do prefer to have a type system which verifies the monad laws at compile time, and I do think that the stronger the enforcement around the monad system, the more worthwhile it becomes to manage a given effect explicitly.
That's an advantage of having a type system, not of having monads. There are other ways to type-check effects without monads (at least not explicit monads, and the monad laws don't need to be checked): http://blog.paralleluniverse.co/2015/08/07/scoped-continuati...
But effects do need to satisfy the laws if the "natural" way of writing them is to make sense, so I'd rather that were enforced (compare the confusing results one gets when using Set in a Scala for/yield, because it doesn't obey the monad laws). Unless you're restricting the "interpreters" such that the implementation of a given action is necessarily monadic?
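For anyone who hasn't hit the Set issue, the same surprise can be reproduced with Python comprehensions (a rough analogue of the Scala behaviour, not its mechanics): mapping a non-injective function over a set silently collapses elements.

```python
xs = [1, 2, 3, 4]

# "the same" computation over a list and over a set disagrees:
as_list = sum(x % 2 for x in xs)         # 1 + 0 + 1 + 0 = 2
as_set  = sum({x % 2 for x in set(xs)})  # collapses to {0, 1} -> 1

assert as_list == 2
assert as_set == 1
```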
(Have you seen the "freer monad, more extensible effects" paper? I think they end up in a similar place, though from the opposite direction - using a single structure typed by a (type-level) list of possible effects to represent an effectful operation, and then you supply an interpretation for each effect to run it)
> But effects do need to satisfy the laws if the "natural" way of writing them is to make sense, so I'd rather that were enforced (compare the confusing results one gets when using Set in a scala for/yield, because it doesn't obey the monad laws). Unless you're restricting the "interpreters" such that the implementation of a given action is necessarily monadic?
I haven't thought too much about it, but I believe that if you structure effects as continuations, then they are monadic by construction (though I may well be wrong) because the continuations themselves are monadic. But I got interested in continuations because I don't like Haskell's purity, and I wondered if there is an imperative construct with the same expressive power as the PFP monad; it turns out not only that there is one, but that it composes much better than monads.
> Have you seen the "freer monad, more extensible effects" paper? I think they end up in a similar place, though from the opposite direction - using a single structure typed by a (type-level) list of possible effects to represent an effectful operation, and then you supply an interpretation for each effect to run it
Yes, and it isn't a coincidence. Oleg Kiselyov started out trying to bring the composability of continuations to Haskell (he built at least one continuation library for OCaml). As I wrote in that blog post, this was an open question posed by Phil Wadler: whether monads could be made as composable as continuations, and Kiselyov showed that they could.
> I haven't thought too much about it, but I believe that if you structure effects as continuations, then they are monadic by construction (though I may well be wrong) because the continuations themselves are monadic. But I got interested in continuations because I don't like Haskell's purity, and I wondered if there is an imperative construct with the same expressive power as the PFP monad, and it turns out that not only that there is one, but that it composes much better than monads.
I don't see it as imperative - if anything, passing continuations around is more functional. They were originally a Scheme thing, no?
How does the stack aspect work? It's a continuation style so you never return, so if you keep the stack frames around it'll overflow very quickly, right? The usual solution to that is tail call elimination, but then you don't have even the immediate stack trace. Are you doing some kind of dynamic "keep the last x stack frames"? (I can imagine that being useful for a trampolined monad style too).
I think that's the really interesting part. As you say the styles are equivalent (even clearer when one's using something like scalaz-stream), so it's a question of which is easier to read/understand. I do think the monadic style as implemented in scala has a big runtime-inspectability problem (one idea I keep toying with is a free monad style where you represent every step with a named object or class, purely so that debuggers handle them correctly). I find call/cc style incomprehensible at the code-reading level - I'd rather have the control flow captured in a value I can inspect in a debugger than as what are effectively gotos in the code text. Maybe if you're more used to macro-based programming then it makes more sense.
Sure, but Scheme is an imperative-functional language (as is OCaml).
> It's a continuation style so you never return
No, I'm talking about delimited continuations. No need for TCO. The continuation contains only a portion of the stack (off the top).
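A one-shot approximation of the idea, sketched with a Python generator (`yield` playing shift, the generator boundary playing reset; this is an illustration, not a full delimited-continuation implementation): the continuation captures only the rest of the function, and a handler resumes it with a value.

```python
def program():
    a = yield "read"          # "block" on a read effect
    return "hello " + a

k = program()                 # the delimited continuation, suspended
request = next(k)             # run up to the first effect
assert request == "read"
try:
    k.send("world")           # the handler resumes the continuation
except StopIteration as done:
    assert done.value == "hello world"
```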
> As you say the styles are equivalent
They're only equivalent in the sense that they have the same expressive power. It's been proven that continuations can be implemented with monads and that any monad can be expressed as a continuation, but continuations really do compose better (unless you use Kiselyov's effect handlers).
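The monads-to-continuations direction can be sketched in a few lines. Here is the continuation monad in Python (illustrative encoding): a value of type `a` becomes a function expecting a callback.

```python
def unit(x):
    return lambda k: k(x)

def bind(m, f):
    # run m, feed its result to f, and thread the final continuation through
    return lambda k: m(lambda x: f(x)(k))

# 1 + 2, written in continuation-passing style
prog = bind(unit(1), lambda a: bind(unit(2), lambda b: unit(a + b)))
assert prog(lambda x: x) == 3
```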
> I find call/cc style incomprehensible at the code-reading level
Right, but we're talking about delimited continuations, i.e. shift/reset, not call/cc. You can simply treat them as threads. They're not at all like gotos (you can read my blog post). My claim is that they fit much better with imperative (including functional-imperative) languages than monads.
> No, I'm talking about delimited continuations. No need for TCO. The continuation contains only a portion of the stack (off the top).
I don't think that resolves it. It's quite normal to want to chain together a few thousand or a few million monadic operations even within a single method (e.g. making a number of network calls, or writing lines of output if you're managing I/O monadically). If doing that results in a stack overflow then that severely limits the utility of this approach.
> Right, but we're talking about delimited continuations, i.e shift/reset, not call/cc. You can simply treat them as threads. They're not at all like gotos (you can read my blog post). My claim is that they fit much better with imperative (including functional-imperative) languages than monads.
I did read it; I'm sorry if I've got the terminology wrong. I found e.g. the "class Foo" example really hard to follow the control flow; in that sense it seemed goto-like to me. I find threads really hard to reason about, so "it's as easy as threads" doesn't really appeal.
I've not seen the term "functional-imperative" used that way, and usually the two are contrasted - can you clarify what class of languages you're talking about?
> If doing that results in a stack overflow then that severely limits the utility of this approach.
There is no stack overflow. A chain of monadic operations translates to the continuation blocking. The familiar imperative sequence of blocking operations is an instance of the continuation approach, and it isn't susceptible to stack overflows.
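The no-overflow claim can be sanity-checked with the generator analogue: resuming is a loop, not a nested call, so a million "blocking" steps leave the stack depth constant (Python sketch).

```python
def counter(n):
    total = 0
    for _ in range(n):
        total += yield "tick"   # block once per step
    return total

k = counter(1_000_000)
next(k)                         # run to the first block
result = None
while result is None:
    try:
        k.send(1)               # the handler resumes; no stack growth
    except StopIteration as done:
        result = done.value
assert result == 1_000_000
```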
> I find threads really hard to reason about, so "it's as easy as threads" doesn't really appeal.
Let me make it clearer: it's not "threads", as in multiple threads, but "as easy as threads" as in the simple, familiar blocking thread. There is no concurrency involved. If you can understand:
a = read()
print("hello " + a)
you understand how to work with delimited continuations (that code is actually written as a continuation, which blocks on read()); there's nothing more to it. It is simply a formalization of what it means to block a thread, and a mathematical name for what we've always done.
When you block a kernel thread, the OS takes care of all the continuation stuff -- but it does exactly that (the OS does run your single-threaded program in a continuation). Just as the OS mechanics of continuations don't get in the way of your understanding of the above code, language-level implementation of continuations shouldn't either. It is completely transparent to you -- your thread blocks, some handler does something, and then your thread resumes.
> I've not seen the term "functional-imperative" used that way, and usually the two are contrasted - can you clarify what class of languages you're talking about?
Well, it's sort of my made-up term that I use because "functional" doesn't really have a definition (try to come up with a definition that encompasses all of: Lisp, ML, Haskell, and Erlang but none of Java, Ruby, Smalltalk, JS and you'll see why), so I use the term to denote languages which we call functional (whatever exactly that means) which aren't pure, so: the Lisps, MLs, Erlang.
> There is no stack overflow. A chaining of monads translate to a blocking of the continuation. The familiar imperative sequence of blocking operations is an instance of the continuation approach, and it isn't susceptible to stack overflows.
So wait, you are doing TCO? When my code "unblocks" it will look like it's continued directly from the previous line, when actually the previous line called into some other code which then called into a continuation for the "tail" of this method, right?
> When you block a kernel thread, the OS takes care of all the continuation stuff -- but it does exactly that (the OS does run your single-threaded program in a continuation). Just as the OS mechanics of continuations don't get in the way of your understanding of the above code, language-level implementation of continuations shouldn't either. It is completely transparent to you -- your thread blocks, some handler does something, and then your thread resumes.
If it's the kind of thing that can be implemented transparently then I don't think I'd want to be managing it at all (and automatic CPS-ization is a valid alternative to threading (indeed as you say threading can be viewed as just a crude form of it) that can have performance advantages, but if it looks the same at the code level then it's "just" a performance optimization). The point of the monadic style is for when I do want to manage the sequencing explicitly.
More concretely I think the big advantage is that a monadic effect can be captured as a value. So if I want to e.g. perform some effectful operation on a list, and I don't know about traverse, I can just do the operation in the normal way with map, and then as long as I use those values in a way that makes sense according to the normal rules of the language then I'll do something that makes sense. You draw an analogy with checked exceptions in Java - but most Java developers avoid using them, regard them as a failure, and I think a big part of that is you can't safely factor out the result of a call to a possibly-exception-throwing method as a value.
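A sketch of that workflow in Python (hypothetical `("ok"/"err", ...)` encoding of Either): map the fallible operation over the list in the ordinary way, then combine the resulting values as values.

```python
def parse(s):
    # a fallible operation whose result is a value, not a thrown exception
    return ("ok", int(s)) if s.isdigit() else ("err", "bad: " + s)

def sequence(eithers):
    # a hand-rolled after-the-fact `traverse`: first error wins
    out = []
    for tag, v in eithers:
        if tag == "err":
            return ("err", v)
        out.append(v)
    return ("ok", out)

results = [parse(s) for s in ["1", "2", "3"]]   # a plain map, nothing special
assert sequence(results) == ("ok", [1, 2, 3])
assert sequence([parse(s) for s in ["1", "x"]]) == ("err", "bad: x")
```

Because each result is just data, "I didn't know about traverse" costs nothing: the normal rules of the language got me something sensible.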
> Well, it's sort of my made-up term that I use because "functional" doesn't really have a definition (try to come up with a definition that encompasses all of: Lisp, ML, Haskell, and Erlang but none of Java, Ruby, Smalltalk, JS and you'll see why), so I use the term to denote languages which we call functional (whatever exactly that means) which aren't pure, so: the Lisps, MLs, Erlang.
Do you just mean non-Haskell languages?
I don't think "pure" is binary in that sense: all languages have some things that are managed implicitly by the compiler/runtime/infrastructure and some things that the developer is expected to manage explicitly. Haskell explicitly sequences I/O but the infrastructure manages laziness, whereas in most MLs it's the other way around. Which is more pure is a matter of perspective.
> So wait, you are doing TCO? When my code "unblocks" it will look like it's continued directly from the previous line, when actually the previous line called into some other code which then called into a continuation for the "tail" of this method, right?
No TCO. Your code doesn't "unblock"; it unblocks (no quotes). The handler code doesn't "call into the continuation" (i.e., there is no stack-growing call), it just resumes the continuation. Like unparking a thread (or not "like"; unparking a thread is resuming a continuation). There is a relationship between continuations and TCO in that they both deal with stacks, but in this case it only serves to confuse.
> The point of the monadic style is for if I do want to manage the sequencing explicitly.
But how is `a = read(); print(a)` not explicit?
> More concretely I think the big advantage is that a monadic effect can be captured as a value.
Continuations are the same in that they are also a "value", except that it is a mutable value. A continuation has a state (its current program counter). The calls to read and print trigger an OS handler. When working with continuations, those are programmable handlers, so their function is pluggable. You can run the continuation calling read and print with a handler that actually does IO, or with one that does something else -- just like with monads.
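Using the generator analogue again, that pluggability looks roughly like this (illustrative sketch, not a real handler API): the same suspended computation run under a "real" handler and under a canned test handler.

```python
def program():
    name = yield ("read",)
    yield ("print", "hello " + name)

def run(gen, reader):
    # the handler interprets each effect request the continuation emits
    printed = []
    try:
        req = next(gen)
        while True:
            if req[0] == "read":
                req = gen.send(reader())     # supply the read result
            else:                            # ("print", text)
                printed.append(req[1])
                req = next(gen)
    except StopIteration:
        return printed

# same program, two interpretations
assert run(program(), lambda: "world") == ["hello world"]
assert run(program(), lambda: "test") == ["hello test"]
```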
> You draw an analogy with checked exceptions in Java
That's only to explain the typing. Java's checked exceptions are an effect system, and the list of thrown exceptions can be viewed as a list of effects a function may have. That's how you can ensure that a function calling read and print will only be called within a context that implements a handler for those effects.
> most Java developers avoid using them, regard them as a failure
Off-topic, but that is a myth (there was an unscientific poll about this at the last JavaOne conference). If there is a failure, it is not with the concept but with the standard Java libraries' overuse of checked exceptions (this may be addressed). Most Java developers like the concept (according to the poll).
> and I think a big part of that is you can't safely factor out the result of a call to a possibly-exception-throwing method as a value.
Don't think of checked exceptions in this context as actual exceptions, but as a list of effects. The same "problem" applies to monads, and it is actually a good thing -- as you said yourself -- to have the type system ensure that monadic methods are called within the appropriate monad only. This kind of effect system also ensures that a function with effects is only called when the needed handlers are in context. The difference between continuation-effects and monads is that a function making use of continuations can use multiple effects without requiring nasty monad-transformers (and that's exactly the problem Kiselyov wanted to solve with his handler system).
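A rough sketch of "multiple effects, one handler, no transformers" (Python generator analogue again, with hypothetical effect tags): state and error handled by adding cases to one interpreter loop, not by stacking wrappers.

```python
def program():
    n = yield ("get",)            # state effect
    if n > 2:
        yield ("fail", "too big") # error effect
    yield ("put", n + 1)          # state effect

def run(gen, state):
    try:
        req = next(gen)
        while True:
            if req[0] == "get":
                req = gen.send(state)
            elif req[0] == "put":
                state = req[1]
                req = next(gen)
            else:                 # ("fail", msg) aborts the computation
                return ("err", req[1], state)
    except StopIteration:
        return ("ok", state)

assert run(program(), 1) == ("ok", 2)
assert run(program(), 5) == ("err", "too big", 5)
```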
> Do you just mean non-Haskell languages?
Basically yes (but Idris, Agda and Coq are also pure).
> I don't think "pure" is binary in that sense: all languages have some things that are managed implicitly by the compiler/runtime/infrastructure and some things that the developer is expected to manage implicitly.
I certainly agree in theory: what constitutes an effect (except IO) is indeed up to the language to define, but those languages I listed allow all effects (including IO, which is an "absolute" effect) without declaring them explicitly, hence they are not pure.
BTW, there is another absolute effect (i.e. an effect which is externally observable, and thus must be considered an effect regardless of the language's preferences) -- time. Haskell (and Idris etc.) don't have a good mechanism for handling that effect, which is one of the reasons I am not fond of Haskell-style pure-FP.
As a nice demonstration of that problem, there are two ways in Haskell to write a function that sleeps for a second: one that is effectful (monadic) and one that is pure. Also, many people believe that Haskell has only one function inhabiting the type `forall a. a -> a` (a notion that is at the core of Haskell's equationality myth), when in fact there are infinitely many, all different from one another.
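The point can be made concrete in any language: here are two functions with the same type `a -> a`, observably different only through the time effect (Python sketch).

```python
import time

def ident(x):
    return x

def slow_ident(x):
    time.sleep(0.05)   # the "time effect": invisible in the type
    return x

t0 = time.monotonic()
assert ident(42) == slow_ident(42) == 42   # extensionally "equal"
assert time.monotonic() - t0 >= 0.04       # yet observably different
```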
It doesn't tell me which of the calls is effectful and which is pure. It's clear enough at the direct level, but if I'm calling methods that I don't know the implementation of then the =/<- distinction when using monadic style is very helpful.
> That's only to explain the typing. Java's checked continuations are an effect system, and the list of thrown exceptions can be viewed as a list of effects a function may have. That's how you can ensure that a function calling read and print will only be called within a context that implements a handler for those effects.
So if I do "a = read()" what is the type of a? "(String effects IO)" or some such? If it's "string, but only allowed inside the scope of a method with IO as an effect" then it's the same problem as checked exceptions.
> Off-topic, but that is a myth (there was an unscientific poll about this in the last JavaOne conference).
Hmm. I wouldn't expect conference-goers to be a representative sample, and it doesn't match my experience.
> The difference between continuation-effects and monads is that a function making use of continuations can use multiple effects without requiring nasty monad-transformers (and that's exactly the problem Kiselyov wanted to solve with his handler system).
Since you're emphasizing it I'll ask: how do you express effects that don't commute? EitherT[Writer[Log, ?], Error, Value] behaves very differently from WriterT[Either[Error, ?], Log, Value].
> It doesn't tell me which of the calls is effectful and which is pure.
But it does because their type says so:
String read() effect IO
(where `effect` is just like Java's checked `throws`)
> what is the type of a? "(String effects IO)" or some such?
Right. It is as I said, a type on read, not on a. `a` is still `String`.
> only allowed inside the scope of a method with IO as an effect then it's the same problem as checked exceptions.
But that's the opposite of a problem. That's like using `IO a` outside of an IO monad. You want the function to only be called in the right context (an appropriate handler is in effect).
Sure. But I remember doing this with checked exceptions, mousing over each method call one by one to figure out which ones could or couldn't throw. It's much nicer if you can see which methods are the throwey ones immediately - =/<- is a real sweet spot in terms of having the difference be visible but not overly intrusive.
> But that's the opposite of a problem. That's like using `IO a` outside of an IO monad. You want the function to only be called in the right context (an appropriate handler is in effect).
Disagree. I want to be able to have a value of type IO a outside a method - e.g. in a static field. More generally it's really huge to be able to just treat these things as ordinary values that I can manipulate using the ordinary rules of the language.
> I'm not sure if the same effects would make sense, though. For example, Either is a monadic value which doesn't make sense in an imperative setting.
The two effects are "possibly skip the rest of the computation, returning an Error instead" and "record a programmatically-accessible Log" - I think those both make sense. The point is the reason you have to use the horrible monad transformers is that the order in which you evaluate these effects matters - if there's an early error, are later log statements recorded, or not? There are more complicated cases when e.g. catching I/O errors.
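A toy rendering of the two nestings (hypothetical encodings; this shows the resulting semantics, not the transformer machinery): Either-over-Writer keeps log entries written before the failure, Writer-over-Either discards them.

```python
# a program as a list of effect steps
program = [("log", "started"), ("fail", "boom"), ("log", "never reached")]

def either_over_writer(steps):
    # the log lives outside the error: entries before the failure survive
    log = []
    for op, arg in steps:
        if op == "log":
            log.append(arg)
        else:
            return ("err", arg), log
    return ("ok", None), log

def writer_over_either(steps):
    # the (value, log) pair lives inside the error: failure drops the log
    result, log = either_over_writer(steps)
    return result if result[0] == "err" else (result, log)

assert either_over_writer(program) == (("err", "boom"), ["started"])
assert writer_over_either(program) == ("err", "boom")   # the log is gone
```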
> It's much nicer if you can see which methods are the throwey ones immediately - =/<- is a real sweet spot in terms of having the difference be visible but not overly intrusive.
Well, I guess that's a matter of taste, but 1/ what information does "effectful" convey if you don't know what the effect is until you mouse-over? 2/ I think that's a very, very low price (if it is a price at all) to pay for something that fits much better with non-pure languages.
> I want to be able to have a value of type IO a outside a method - e.g. in a static field
But continuation effects don't have `IO a`, just `a`. The effect comes with the context. An `IO a` would translate to a lazy value, namely a function, which could be a continuation. There's nothing stopping you from storing a continuation in a field. You can manipulate the `a` just like any value, and the "IO" with handlers -- just as you do with monads.
> possibly skip the rest of the computation, returning an Error instead
Yes, but that's not all Either says. It says "return an error or a value of type a". In imperative code that makes less sense. What you want to say is: "throw an exception or don't (and return whatever value it is that the function returns)". The type of the normal return value is not recorded in the Either.
> The point is the reason you have to use the horrible monad transformers is that the order in which you evaluate these effects matters
You need monad transformers not because of ordering, but because monads don't generally compose. Or it's a bit reversed: because monads don't compose they are sensitive to accidental ordering (whether you care about the order or not), which makes monad transformers tricky. With continuations you have the choice of when to be sensitive to order and when not to.
For example, in the imperative case, you could encode the information "if there's an early error, are later log statements recorded, or not?" directly in the type, (I think) but I don't see any reason to. In Haskell you must because the types need to be nested in one particular way according to the transformer.
In any case, my claim is as follows: if you generally prefer the imperative way of doing effects (like in Java, OCaml, F#, Scheme, JS, Python, Clojure, Erlang etc.) -- as I do -- then continuations are far more natural than monads. If you prefer the PFP way, you should stick to monads (or if using Haskell, must stick to monads).
> An `IO a` would translate to a lazy value, namely a function, which could be a continuation. There's nothing stopping you from storing a continuation in a field. You can manipulate the `a` just like any value, and the "IO" with handlers -- just as you do with monads.
I think the Java world is finally accepting that methods are not a substitute for first-class values; one of the biggest remaining problems with Java lambdas is that it's awkward to use them with checked exceptions (e.g. do you have to create a named @FunctionalInterface for each input/output/exception combination you want to store? You can use generics but they don't let you abstract over arity). Handling effects this way would presumably have the same problems?
Put it this way: Either[MyError, A] is much more useful to me than a checked MyException, because I can use the former as a value whereas the latter I can only throw from methods. That's the difference I'm worried about here.
> You need monad transformers not because of ordering, but because monads don't generally compose. Or it's a bit reversed: because monads don't compose they are sensitive to accidental ordering (whether you care about the order or not), which makes monad transformers tricky. With continuations you have the choice of when to be sensitive to order and when not to.
What does the choice look like in code then? Under monads I'd write:
> Handling effects this way would presumably have the same problems?
First, I didn't suggest actually using checked exceptions as they are now to model "typesafe continuations" in Java. I'm perfectly fine with continuations not being fully type-checked (I like types, but I'm far from a type zealot).
But more to the point, such an effect system (or actually even Java's checked exceptions) won't pose such a problem at all. On the contrary. Just like you can't store a monadic value in a variable of a "plain" value, it makes sense to reject `int foo() throws IO` from a computation that doesn't support IO. It is a problem in Java with checked exceptions because `int foo() throws InterruptedException` can't be used where `int f()` is expected, and having it rejected (and requiring a wrapper) is indeed a nuisance. That's not the same issue if effects are involved.
> I can use the former as a value whereas the latter I can only throw from methods. That's the difference I'm worried about here.
I'm not sure I understand the distinction. You can always use something like Optional (Maybe) if you want a value. Alternatively, no one is stopping you from using the monadic Either even in an imperative context (although you may lose the stack trace). You can still use it this way if that's how you like doing things. I don't.
> then which semantics do I get, and how do I get the other one?
would log the exception, as in your second example.
You don't even need to think about it. It works just like plain imperative code. Continuations are the general (mathematical if you will) formulation of how imperative code already behaves.
> I'm not sure I understand the distinction. You can always use something like Optional (Maybe) if you want a value.
I mean that the "value or error" that I get back from a function that returns Either is itself a value, that I can use in the normal way I would use a value. I can pull it out into a static field. More importantly I can pass it back and forth through generic functions without them having to do anything special about it (e.g. as a database load callback), because it's just a value. I notably can't do this with exceptions (or e.g. if I'm emulating Writer with a global or ThreadLocal variable), though checked exceptions will at least fail this at compile time.
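A small sketch of that point (hypothetical Either encoding): the failure flows through generic plumbing -- here a memoizing cache -- precisely because it is just a value; a thrown exception could not be stored this way.

```python
def load(key):
    # a fallible load whose failure is data, not a thrown exception
    return ("ok", key.upper()) if key.isalpha() else ("err", "bad key")

cache = {}   # generic container; knows nothing about errors

def cached_load(key):
    if key not in cache:
        cache[key] = load(key)   # success *or* failure is memoized alike
    return cache[key]

assert cached_load("abc") == ("ok", "ABC")
assert cached_load("1!") == ("err", "bad key")
assert cache["1!"] == ("err", "bad key")   # the failure sat in a field
```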
> would never call the writer (like your first example, I believe)
Hmm. If you're willing to fix the order in which effects happen (and explicitly handle when you want to change it, as in your example) then you can write something much simpler than the usual monad transformers, so make sure you're comparing like with like.
> I can pull it out into a static field. More importantly I can pass it back and forth through generic functions without them having to do anything special about it (e.g. as a database load callback), because it's just a value. I notably can't do this with exceptions (or e.g. if I'm emulating Writer with a global or ThreadLocal variable), though checked exceptions will at least fail this at compile time.
I understand, and as I said, I normally prefer not working this way. Others -- like yourself -- do. Both ways are perfectly valid. I prefer exceptions not to be values, as, to me, they capture a failed computation, not a missing result. For a missing value, I can use null/Optional etc. But in any case, it's not an either/or thing. You can always catch exceptions and store them if you want; you can even use Either if that's how you prefer to work.
> If you're willing to fix the order in which effects happen (and explicitly handle when you want to change it, as in your example) then you can write something much simpler than the usual monad transformers
How? You need to chain functions, each emitting some set of effects, and you don't control in advance the order of the effects in each function.
I am not fixing the order of effects any more than in your examples. The reason the examples look different is because I've chosen to represent an error not as a value but as a control-flow construct, but I don't have to, so let's look at other effects:
The whole point of composition is that foo(bar()) is foo(x) where x = bar(), which means that you can understand the combined program by understanding foo and bar independently. If you can't represent the result of bar() as a value x then that breaks down, in which case I don't see what this whole project is about? I mean being able to implicitly pass down a handler is cool, and there's some value in being able to swap out e.g. a "real" and "test" version, but it sounds like the effect sequencing is still directly coupled to the code operation.
> With monads you need to make sure that they both return the same nesting of Log[Writer] or Writer[Log], but I don't need to care.
You only don't need to care if the effects commute - and in that case you can actually automatically move them past each other in the monad world (my scalaz-transfigure library does this, at least at the proof-of-concept level). For noncommutative effects you can do the same thing if you're willing to fix the nesting order (e.g. say that an exception always trumps a later log, or always doesn't).
> which means that you can understand the combined program by understanding foo and bar independently. If you can't represent the result of bar() as a value x then that breaks down
Can you not understand this program by understanding both statements separately?
print("hi") ; print("there")
Why is that more understandable if the result of print is a value? (obviously, the result of read() is). In any case, describing everything as a value is a PFP idea (note that neither the Lisps nor the MLs do it), that some people like and others (like me) don't. In fact, I find the notion of treating every computation as a value (rather than possibly wrapping it as one) not too great an idea. Values are equational, computations are not (even pure ones). It's a nice abstraction to have at your disposal, but enforcing it everywhere is too much.
> You only don't need to care if the effects commute - and in that case you can actually automatically move them past each other in the monad world
You could, obviously, but when working with continuations, i.e. imperatively, or with Kiselyov's freer monads (which are inspired by continuations) you don't have to.
> Can you not understand this program by understanding both statements separately?
Maybe that one - I/O is a bad example, I don't really see the value of monads for I/O and I don't tend to use an I/O monad in my programs. Programs I can't understand by understanding statements separately are something like:
and these cases (database transaction, writer) are things I do find it useful to manage with a monad.
> Values are equational, computations are not (even pure ones). It's a nice abstraction to have at your disposal, but enforcing it everywhere is too much.
You need some way to encapsulate/isolate the relevant state of a program partway through running - I think imperative programmers agree that global mutable state is bad. Mostly I see monads as a way to turn global mutable state (whether in the form of global variables, control flow, or something external to the program) into local state, so that you can see which bits of state you need to understand to understand a given line of code.
I think that state has to be equational if we're to have any hope of being able to understand programs - if you can't say whether this time at line 20 the program is in the same state it was last time at line 20, or a different state from last time, how can you even begin to debug?
> You could, obviously, but when working with continuations, i.e. imperatively, or with Kiselyov's freer monads (which are inspired by continuations) you don't have to.
But will those approaches stop me from tripping myself up when effects don't commute? In scalaz-transfigure I do that with the Distributive typeclass - if one effect distributes over another then there will be an instance and you can freely move them past each other, if not then you get a compile error when you try and have to clarify what you want to happen.
> these cases (database transaction, writer) are things I do find it useful to manage with a monad.
Yeah, I know, but some of us don't :)
> You need some way to encapsulate/isolate the relevant state of a program partway through running
That's exactly what continuations do. The question of global mutable state is an orthogonal one.
> I think that state has to be equational if we're to have any hope of being able to understand programs - if you can't say whether this time at line 20 the program is in the same state it was last time at line 20, or a different state from last time, how can you even begin to debug?
Well, we have been successfully writing programs like that for a long time. Now, don't get me wrong -- the equational abstraction is often very useful, but it has its price, and it's not a complete representation of the computation.
> But will those approaches stop me from tripping myself up when effects don't commute?
I don't understand. We don't trip over ourselves when writing imperative code, and when we do, it mostly has to do with IO or other OS state, and monads won't help you there. Linear types ("typestate") might, but even they are limited.