You seem to be describing the ideas underlying Smalltalk, which uses the word 'objects' instead of 'discrete modules'. Every piece of functionality is an object, any object can send messages to any other object, etc.
If you're proposing something different, can you contrast your ideas with Smalltalk-style OOP?
This may be. We should also step back and consider whether building functional abstractions over data is a good way to model all of computation itself, or whether something like 'smaller virtual computers all the way down' (i.e. Smalltalk) is a better model. Decomposing a thing into smaller things of the same kind seems cleaner from a certain perspective, at least.
In Haskell, higher level functions can be composed from other functions, which is very clean. But the state isn't inside the function (as it is in a real computer), so it doesn't model a computer completely in my mind - it models one aspect (the transformation).
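To make that concrete, here's a rough sketch (names made up for illustration): the "machine" below is just a pure step function, so its state has to be threaded in and out by the caller, while composition only combines transformations.

    -- A tiny "counter machine": the state lives outside the function and is
    -- passed in and returned explicitly; the function itself holds nothing.
    step :: Int -> (Int, Int)          -- old state -> (output, new state)
    step n = (n * 2, n + 1)

    -- Composition is clean, but it composes transformations, not machines:
    doubleThenShow :: Int -> String
    doubleThenShow = show . (* 2)

    main :: IO ()
    main = do
      let (out1, s1) = step 0          -- state 0 goes in, state 1 comes out
          (out2, _)  = step s1
      print (out1, out2)               -- (0,2)
      putStrLn (doubleThenShow 21)     -- "42"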
Another perspective is thinking about building large-scale systems and the coupling of modules (how do you abstract over data from a third-party library when you don't have the source code?). I kind of get Smalltalk's answer here (an object scales up to become a distributable module), but I don't know if there is a good answer in a fully Haskell-based world.
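The nearest thing I can picture on the Haskell side is hiding the third-party type behind your own interface (a typeclass), though I'm not sure that scales the way a Smalltalk object does. A rough sketch, with all names invented:

    -- Stand-in for an opaque type from a third-party library we can't modify.
    newtype VendorConfig = VendorConfig [(String, String)]

    -- Our own interface; the rest of the codebase depends on this class,
    -- not on VendorConfig directly.
    class ConfigSource a where
      lookupKey :: String -> a -> Maybe String

    -- A thin adapter is the only place that knows about the vendor type.
    instance ConfigSource VendorConfig where
      lookupKey k (VendorConfig kvs) = lookup k kvs

    main :: IO ()
    main = print (lookupKey "host" (VendorConfig [("host", "localhost")]))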
(BTW, I'm not saying one is better, I'm still thinking these things through.)
> In Haskell, higher level functions can be composed from other functions, which is very clean. But the state isn't inside the function
> ... Another perspective is thinking about building large scale systems and coupling of modules
Both models have been proven formally equivalent, so it doesn't matter which one you use as the base computation model "all the way down"; you can always transform one into the other.
So in practice you end up using the one that best represents the problem domain you're solving. State-based object orientation works best for representing simulations of the world where previous states are not needed. Functional works best when you need to reason about the properties of the system, since pure functions let you keep and access any present or past state of the computation.
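As a tiny illustration of that last point (made-up example): when the transition is a pure function, the entire history of states is just a value you can keep around and inspect.

    -- A pure transition function: new state from old state.
    next :: Int -> Int
    next n = n * 2 + 1

    -- Because 'next' is pure, every past state remains available as data:
    -- 'iterate' gives the (lazy, infinite) list of all states ever reached.
    history :: [Int]
    history = iterate next 1          -- [1, 3, 7, 15, 31, ...]

    main :: IO ()
    main = do
      print (take 5 history)          -- the first five states
      print (history !! 3)            -- reach back to any earlier state: 15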
The other neat thing about modular and functional programming is that since modules/functions are discrete, you can drop in any replacement you want, and the rest of the codebase won't even notice (except during the build process).
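Just as a sketch of the idea (function-level here for brevity, but the same applies to whole modules): as long as the replacement has the same type, swapping it in is a one-line change.

    -- Two interchangeable implementations with the same type.
    sortIntsInsertion :: [Int] -> [Int]
    sortIntsInsertion = foldr insert []
      where insert x [] = [x]
            insert x (y:ys) | x <= y    = x : y : ys
                            | otherwise = y : insert x ys

    sortIntsNaiveQuick :: [Int] -> [Int]
    sortIntsNaiveQuick []     = []
    sortIntsNaiveQuick (p:xs) =
      sortIntsNaiveQuick [x | x <- xs, x < p] ++ [p] ++ sortIntsNaiveQuick [x | x <- xs, x >= p]

    -- Everything else refers only to this name, so this is the only line
    -- that changes when you swap implementations.
    sortInts :: [Int] -> [Int]
    sortInts = sortIntsInsertion

    main :: IO ()
    main = print (sortInts [3, 1, 2])   -- [1,2,3] either way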
As a real example, you can use Haskell's Foreign Function Interface (FFI) to call C functions and wrap C structs; the rest of your Haskell code doesn't even need to know about it. You can use the same interface with a variety of languages in place of C. Rust even has a specific compiler option (--crate-type=cdylib) for generating C-compatible shared object/DLL files.
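For instance, a minimal binding might look like this (using a C standard library function so nothing extra needs to be linked):

    {-# LANGUAGE ForeignFunctionInterface #-}
    module Main where

    import Foreign.C.Types (CDouble)

    -- Bind the C library's cos; to the rest of the Haskell code this is
    -- just an ordinary function with an ordinary type.
    foreign import ccall unsafe "math.h cos"
      c_cos :: CDouble -> CDouble

    main :: IO ()
    main = print (c_cos 0)   -- 1.0, computed by the C implementation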
That's the key point though. In a large-scale system (think multiple systems written by multiple groups, running on multiple machines), when you modify a shared data type, how do you update the systems that depend on it?