Very cool! A fairly popular terrain generation tool in gamedev which uses simulated erosion is World Creator: https://www.world-creator.com/
An additional (and tricky) requirement for procedural universe simulation games like Elite or No Man's Sky is that the terrain needs to be generated on the fly from a seed value, because there's not enough storage in the real world for pre-generated heightmaps in an infinite universe. Some additional notes on how the presented algorithms deal with this requirement would be nice.
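For reference, the usual trick is to make height a pure function of (seed, coordinates), so any region can be regenerated bit-for-bit without storing it. A minimal sketch of that idea (hash-based value noise; all names and constants here are illustrative, not from the article):

    import math

    def hash2(seed: int, x: int, y: int) -> float:
        """Deterministic pseudo-random value in [0, 1) for an integer lattice point."""
        h = (x * 374761393 + y * 668265263 + seed * 2654435761) & 0xFFFFFFFF
        h = ((h ^ (h >> 13)) * 1274126177) & 0xFFFFFFFF
        return (h ^ (h >> 16)) / 2**32

    def value_noise(seed: int, x: float, y: float) -> float:
        """Bilinearly interpolated lattice noise; smooth and fully seed-stable."""
        xi, yi = math.floor(x), math.floor(y)
        tx, ty = x - xi, y - yi
        # smoothstep fade so cell borders have no visible creases
        tx, ty = tx * tx * (3 - 2 * tx), ty * ty * (3 - 2 * ty)
        v00 = hash2(seed, xi, yi)
        v10 = hash2(seed, xi + 1, yi)
        v01 = hash2(seed, xi, yi + 1)
        v11 = hash2(seed, xi + 1, yi + 1)
        return (v00 * (1 - tx) + v10 * tx) * (1 - ty) + (v01 * (1 - tx) + v11 * tx) * ty

    def height(seed: int, x: float, y: float, octaves: int = 5) -> float:
        """Fractal sum of octaves -- the usual fBm heightmap."""
        total, amp, freq = 0.0, 1.0, 1.0 / 64.0
        for i in range(octaves):
            total += amp * value_noise(seed + i, x * freq, y * freq)
            amp *= 0.5
            freq *= 2.0
        return total

Because height(seed, x, y) has no state, a client can evaluate it for whatever region the camera is near and throw the result away afterwards; only the seed ever needs to be saved.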
I've always thought on-the-fly terrain generation for persistent worlds was a tricky approach, because once you ship the game and have player-created structures (bases, etc.), it's difficult to improve the algorithm. You're locked in, so to speak. As a game designer, you have three choices: call it done for the lifespan of the game; change it and disrupt the player base, maybe retconning with some kind of "apocalypse" event; or introduce algorithm versioning for new regions, meaning original areas will look more dated and simplistic as time goes on.
With instanced games like Minecraft, it's less of a problem, because players can choose when to move to a new world and get the advantages of new terrain algorithms.
Maybe not the end of the world (ha ha) but it's an interesting design issue and I'm curious how various teams will handle it.
As far as I understand, Minecraft stores the chunks a player visits, so that if the generation changes, the chunks you've already visited stay, and it becomes an issue of solving boundaries. I don't know what Minecraft does there, but a plausible method would be to generate past the boundaries and apply some merging process to make it mesh somewhat predictably if the generation changes.
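For instance, a merging process could look like this (purely a sketch of the idea, not what Minecraft actually does): regenerate a strip past the stored boundary with the new generator, then ramp stored heights into regenerated heights across a transition band.

    def blend_boundary(stored, regenerated, band=8):
        """stored/regenerated: equal-size 2D height grids; band: transition width.
        Returns a grid equal to `stored` at the left edge and `regenerated`
        past the band, with a smoothstep ramp in between."""
        out = [row[:] for row in regenerated]
        for y in range(len(out)):
            for x in range(min(band, len(out[y]))):
                t = x / band                 # 0 at the stored edge, 1 at the band's end
                t = t * t * (3 - 2 * t)      # smoothstep for a soft seam
                out[y][x] = stored[y][x] * (1 - t) + regenerated[y][x] * t
        return out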
An idea I'd love to see implemented (maybe someone has) is to apply a time element to the generator function, where things change towards whatever the generator expects: e.g., new types of trees start growing, deserts spread or recede, etc.
You may then want to make player-placed blocks more resistant, but otherwise apply transformation functions to slowly migrate the terrain. You can apply that more aggressively to chunks the player rarely (if ever) visits: erode land that's too high, have trees grow, etc. Some features will certainly be noticeably "wrong" (e.g. a mountain suddenly growing), so maybe you make exceptions and only eat away at the differences you can plausibly change. Even if you can't change everything, you can at least use this to make things blend better. Couple it with running the old generator slightly further out from the player, and you can mark those outer chunks as "unseen" and blend much more aggressively there.
The more fine-grained the information you store about what the player has actually seen, the more aggressively you can apply this.
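A minimal sketch of that migration loop (the names, rates, and caps here are illustrative assumptions, not anything established in the thread): each tick, nudge stored heights toward what the current generator would produce, harder for staler chunks.

    def migrate_chunk(stored, target, ticks_since_seen, protected):
        """stored/target: equal-size 2D height grids; protected: same-shape bool
        grid marking player-built cells; ticks_since_seen: chunk staleness."""
        # rate grows with staleness, capped so recently seen areas change slowly
        rate = min(0.2, 0.0001 * ticks_since_seen)
        for y in range(len(stored)):
            for x in range(len(stored[y])):
                if protected[y][x]:
                    continue  # player-placed blocks resist migration
                stored[y][x] += rate * (target[y][x] - stored[y][x])
        return stored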
> As far as I understand, Minecraft stores the chunks a player visits, so that if the generation changes, the chunks you've already visited stay, and it becomes an issue of solving boundaries. I don't know what Minecraft does there...
Last time I played in a world spanning multiple Minecraft versions, there were clear discontinuities (steep cliffs, minor water and lava floods) between areas that used different generators.
Obviously this doesn't apply to infinite or "practically" infinite scenarios (like ED), but I was under the impression that landscapes are generally not that costly in terms of storage.
AFAIK, with systems like Unreal's landscape feature, the landscape is not stored to disk as meshes; instead, the heightmap and layer masks (for terrain coloration) are stored as textures, which are looked up at runtime to render the landscape.
I'd assume this means it's relatively lightweight to store large amounts of terrain to disk, compared to storing it as textured meshes.
The only scenario where this wouldn't be true is if your landscape texture is, for example, satellite imagery instead of a combination of texture samples blended together at runtime with masks. With Unreal Engine 5's Nanite, there are also the baked Hierarchical Level of Detail meshes, which would add to the storage cost.
Notably, landscape is excluded from Nanite, since the path Epic wants to go down is to keep generating the landscape mesh at runtime.
Would love to get some more insight into landscapes in general in game engines :)
Probably relevant to note that Minecraft stores the entire chunk once it's generated. (Of course it has to -- it's malleable.) And that's a lot more information than a height map and texture.
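For scale, a rough back-of-envelope (assuming the modern 16 x 16 x 384-block chunk volume and ignoring Minecraft's actual palette compression):

    16 x 16 x 384 = 98,304 blocks per chunk
    at even 1 byte per block ~ 96 KB uncompressed
    vs. a 16 x 16 heightmap at 2 bytes/sample = 512 bytes

So full-volume chunk storage is a couple of orders of magnitude heavier than heightmap storage before compression does its work.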
It looks like Star Citizen did something like procedural planet generation at different levels of detail depending on the camera. I'm assuming with a chosen, stable seed. And then a combination of randomly- and purposely-placed distribution of assets on the planet.
Star Citizen does the textures in Substance Designer, I would guess they're included in the game files and not regenerated at runtime.
I believe in addition to the terrain height/color maps, they generate textures that feed into object scattering so that rocks/trees/etc can be designed into the environments by the artists, but regenerated clientside to avoid needing to store millions of rock locations.
The low detail "from space" version used to be painted separately and then blend into the ground detail as you approach, but I believe the space view and the ground ecosystem mapping come off of the same map now.
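A sketch of how that kind of deterministic scattering can work in general (illustrative names only, not Star Citizen's actual pipeline): every client derives the same placements from a shared per-tile seed plus the artist-authored density texture, so individual rock positions never need to be stored or transmitted.

    import random

    def scatter_rocks(density, tile_seed, attempts=256):
        """density: 2D grid of [0, 1] spawn probabilities painted by artists;
        tile_seed: deterministic per-tile seed. Returns (x, y) placements."""
        rng = random.Random(tile_seed)  # same seed -> same placements on every client
        h, w = len(density), len(density[0])
        placements = []
        for _ in range(attempts):
            x, y = rng.randrange(w), rng.randrange(h)
            if rng.random() < density[y][x]:  # rejection-sample against the mask
                placements.append((x, y))
        return placements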
Depends on the scale. It might cost on the order of several MBs to store a km^2 at a moderate level of detail. For a small area that's relatively light, but becomes prohibitively large if you multiply by the hundreds of millions of square kilometers of a planetary surface...
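Back-of-envelope, assuming 16-bit heights at 1 m spacing:

    1 km^2 = 1000 x 1000 samples x 2 bytes ~ 2 MB
    Earth-sized surface ~ 5.1 x 10^8 km^2 x 2 MB ~ 1 PB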
Playing with terrain generation can be a very fun topic to explore even if you aren’t a gamedev (yet).
Unity’s terrain system is a toy compared to Unreal’s, but that also means it’s easier to hack around in. I’d recommend this basic course for an intro to it:
Unity doesn't have as much built in, but the asset store is miles ahead of Unreal's. You can build almost any game you can imagine (within reason; MMOs are hard) fairly quickly. I am biased, since I find C# to be much, much easier than C++ and I don't like visual scripting.
You can build any prototype you can think of quickly, but when taking the game across the finish line I’ve found myself many times pining for some of the mature tooling Unreal offers in the art, character, and netcode areas.
The asset store can be great in general, but it’s rare I’ve used an asset that I didn’t have to modify heavily. I don’t mind that, but code quality varies greatly and sometimes you end up in a spot where you have to go write your own solution anyways for a production project. Art resources are also great for a prototype, but if you want a unique look and feel you’ll still end up needing significant art investment. There’s no free lunch.
All that said, I’d quit gamedev before working in Unreal. It’s great for an artist or a designer, but it’s hell on earth for an engineer. C++ and blueprint spaghetti, no thanks. I can’t tell you how many Unreal devs I’ve interviewed who have expressed this exact sentiment.
I enjoy writing my own small tools and slightly modifying some assets. I do hate that much of my current game is a black box since I don't know exactly how the asset actually works.
But I have fun working with Unity; Unreal always felt very hostile for some reason. Still can't get the C++ compiler to work.
When you look at Perseverance rover photos, a lot of them look like they're straight out of Terragen: simplistic, artificial. I think it's because the underlying mechanism that created those sediments was simple, like simple chemistry, not complex microbial life.
I'm a bit confused: what sort of features are missing there? Are there any example pictures where you can see the impact of microbial life, for comparison?
I'm also confused by the fact that the rock looks like it's maybe a few feet across (admittedly difficult to judge the scale), so the effects and development are, I would imagine, quite different from the large-scale features discussed in the article, or from what I associate Terragen with.
Once you generate a height (displacement) map and color map, is there an easy way to "shade" the color map?
NASA has some pretty decent displacement/color maps of the moon, but the color map is evenly lit (no shadows spilling into craters). To render this quickly, I want to pre-render the shadows (maybe into the color map itself).
The easiest way is to compute the directional derivative of the height map along some fixed direction (acting as the light direction). For example u_x + u_y, which amounts to convolving the height map with a 3x3 kernel with the following coefficients:
     0 -1  0
    -1  0  1
     0  1  0
You may want to smooth the result if you see too much noise.
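For anyone who wants to try it, here's a small numpy sketch of both the kernel approach above and a simple Lambertian hillshade. The light direction and parameter names are my own assumptions; note this gives slope shading only, not the cast shadows inside craters (those need horizon testing or raytracing).

    import numpy as np

    def derivative_shade(height):
        """Brightness ~ u_x + u_y via central differences.
        Equivalent (up to a constant factor) to the 3x3 kernel above."""
        gy, gx = np.gradient(height.astype(float))
        return gx + gy

    def lambert_shade(height, azimuth_deg=315.0, altitude_deg=45.0):
        """Dot the per-pixel surface normal with a unit light vector."""
        gy, gx = np.gradient(height.astype(float))
        az, alt = np.radians(azimuth_deg), np.radians(altitude_deg)
        lx = np.cos(alt) * np.cos(az)
        ly = np.cos(alt) * np.sin(az)
        lz = np.sin(alt)
        # the unnormalized surface normal of a heightfield is (-gx, -gy, 1)
        norm = np.sqrt(gx**2 + gy**2 + 1.0)
        return np.clip((-gx * lx - gy * ly + lz) / norm, 0.0, 1.0)

    # To bake the lighting into an evenly lit color map:
    # shaded_rgb = color_rgb * lambert_shade(height)[..., None]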
The linked post has some good comparisons with derivative-based shading; it does sorta work, but you can see it comes up short in conveying actual 3D shape compared to the raytraced version.
Check out the rayshader R package for lots of methods for hillshading terrain maps (the original HN post is mentioned there as well, but the package has evolved well beyond that original post).