I remember old assembly-language guys being annoyed at the kinds of math available in high-level languages. I think they were a bit crusty, but they still had a point about the inability to manage precision without big hammers.
At least the C# way forces you to think about whether you're losing precision. But I dislike having to add casts. I use casts a lot, but I distrust them because they hide errors. I find myself really wanting safe operators and unsafe ones: safe as in, overflow results in a hard fault that I can catch; unsafe means overflow is silent.
I think that although the commenter above used `as`, he meant it as pseudocode and not C#, especially since C# is rarely used for the low-level or performance-critical work where you would use int8 instead of int16 as an optimization (in my experience).
I was just using C# as an example of an alternative way of handling calculations.
The comment about assembly comes from remembering a conversation with an older firmware guy. In his world, multiplying two 32-bit numbers produced a 64-bit result, and division took a 64-bit dividend and a 32-bit divisor, producing a 32-bit quotient plus a 32-bit remainder.
I think his thoughts on C's 32-bit × 32-bit => 32-bit result can be summed up in a single word: gah!
That depends largely on language semantics. Ideally, a language would either guarantee that overflows can't happen (via dependent/refined types), make addition `(int8, int8) -> int16`, or guarantee modular arithmetic. In any case, the second interpretation looks superior overall.