Might also cause some other weird side effects. Some older versions of C# (or more specifically the .NET CLR) had issues with the heap space for large objects when it was used by external code. I believe the reason was that data allocated by external libraries had to be placed sequentially on the heap due to some marshalling constraints (or maybe the LOH always allocates sequentially?). After a while the heap got fragmented, and you could get into a situation where every memory check told you that you could still allocate another large chunk, but the subsequent allocation crashed in a not-so-graceful manner because the memory wasn't available contiguously.
It took quite a while to even understand what was happening. I never really followed up on how the CLR developers solved this issue; it was ages ago and has since been fixed. Certainly an interesting problem.
C# has this awesome `using` statement, so you can at least indirectly influence when garbage collection occurs and ensure objects are as short-lived as possible.
I don't think there is one golden road to Rome. Personally I think keeping the lifetime of any object as short as possible is still the way to go, and it helped with our problem. The C# GC is aggressive towards young objects that it assumes to be short-lived. But I would never recommend writing code with the GC in mind until it starts to be an issue. That might be right from the start in game development, but otherwise I don't really care too much about keeping temporary copies.
`using` is for resource management scopes (disposal), not garbage collection.
Some disposal actions will null out references (`this.thing = null`) and thus add garbage, and therefore garbage pressure for a GC run to clean up, so there is an indirect influence, but it is very indirect, and not what `using` and `IDisposable` are meant to do.
You shouldn't rely on `using` for GC management, and making things that don't manage resources (such as file handles, sockets, etc) `IDisposable` isn't a good way to ensure they are short-lived (and doesn't add any additional signal to the GC that the objects are short-lived; some `IDisposable` objects are incredibly long-lived).
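To make the distinction concrete, here is a minimal sketch (the `TempFile` type is a hypothetical example, not a real API) showing what `using` actually guarantees: a deterministic `Dispose()` call at scope exit, roughly equivalent to a `try`/`finally`. It does not trigger or schedule garbage collection.

```csharp
using System;

class TempFile : IDisposable
{
    public void Dispose()
    {
        // Release the resource (file handle, socket, ...) deterministically.
        Console.WriteLine("resource released");
    }
}

class Program
{
    static void Main()
    {
        using (var f = new TempFile())
        {
            // work with f
        } // Dispose() runs here, deterministically, as if in a finally block...

        // ...but the TempFile *object* itself still sits on the managed heap
        // until some future GC cycle happens to collect it.
    }
}
```

In other words, `using` controls *resource* lifetime, while the GC controls *memory* lifetime on its own schedule; the two are independent.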
Better tips to managing short-lived objects are to keep them as small as possible, avoid unnecessary references to/from other objects, avoid accidental captures into lambda closures, and consider if stack allocation makes sense for your scenario.
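Two of those tips sketched in code (names like `MakeCounter` and `SumStack` are made up for illustration): avoiding accidental closure captures, and using stack allocation where it fits.

```csharp
using System;

class Example
{
    static Func<int> MakeCounter()
    {
        var bigBuffer = new byte[1024 * 1024];
        int count = 0;
        // If this lambda mentioned bigBuffer, the closure would keep the
        // whole 1 MB array alive for as long as the delegate lives.
        // Capturing only `count` keeps the closure small.
        return () => ++count;
    }

    static int SumStack()
    {
        // stackalloc + Span<T>: the buffer lives on the stack, so there is
        // no heap object and nothing for the GC to track at all.
        Span<int> values = stackalloc int[] { 1, 2, 3, 4 };
        int sum = 0;
        foreach (var v in values) sum += v;
        return sum;
    }
}
```

The closure point is easy to miss because the capture is invisible at the call site; the compiler hoists every captured local into one hidden class per scope, so one long-lived lambda can pin unrelated locals from the same scope.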