I can sympathize, but the last thing I would want in a language is for benign-looking refactorings to change meaning. E.g.
let a: u16;
let b: u16;
fn f(x: u32) -> ...
f(a*b) // 32-bit result, inferred from f's parameter type
let x = a*b; // x is u16, so the product is truncated
f(x) // different result
Now there is a sane answer to this: define multiplication to always produce a larger integer type, and require explicit downcasting. But I'm not sure anyone would go for this.
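For contrast, a sketch of how actual Rust handles this today: `a * b` on two u16 values is always u16 (panicking on overflow in debug builds), and both widening and narrowing must be written out, so the refactoring above can't silently change the result. The function `f` and the values here are just illustrative:

```rust
fn f(x: u32) -> u32 {
    x + 1
}

fn main() {
    let a: u16 = 300;
    let b: u16 = 300;
    // In Rust, `a * b` would have type u16 and overflow (panic in debug,
    // wrap in release), so the compiler makes you widen explicitly:
    let wide = u32::from(a) * u32::from(b); // 90_000, exact
    println!("{}", f(wide)); // prints 90001
    // Narrowing back down is likewise explicit:
    let narrow = wide as u16; // truncates: 90_000 mod 65_536 = 24_464
    println!("{}", narrow);
}
```

Because the widening is spelled out at the multiplication site, hoisting `u32::from(a) * u32::from(b)` into a `let` binding preserves its type and value.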