The game currently has a Mixed (65%) rating on Steam. Granted, some negative reviews are shallow, but some mention important issues. Regardless, a Minecraft clone is not exactly groundbreaking in terms of gameplay.
This is to say that technical merits are rarely good indicators of a good game. As a gamer, I don't really care about the game engine, and even less about the language it's written in. Good programmers often obsess about these details, but it's easy to miss the forest for the trees, which is what I think happened here. Game design is a separate skill from game development, and not many people excel at both.
Still, it's great seeing this here, as the technical achievements are quite remarkable.
It's not intended to be a Minecraft clone. If you look a bit closer, beyond the initial visual impression, you'll see there are many differences in gameplay.
As for the rating, yes we had a rough initial launch, but we're fixing all these things. Note that it is 65% out of only 63 user reviews, so statistically not set in stone yet.
65% positive reviews doesn’t tell you much about whether the game is good or not. At most it tells you that the game wasn’t great at communicating what people should expect.
This is just being intentionally obtuse. Everyone knows that 65% generally means the game doesn't even run on most devices, crashes constantly, or has some other serious flaw. Ridiculous.
> I played this game for 3 hours and I can confirm it is not in a playable state. There were several bugs within the first few maps that deleted needed items, causing us to reset the entire world, several times. Don't waste your time...
In what world do these new tools help with "laying bricks", but not with ensuring that the structure does not collapse? How is that work any more difficult than producing the software in the first place? It wasn't that long ago that these tools could barely produce a simple program. If you're buying into the promises of this tech, then what's stopping it from also being able to handle those managerial tasks much better than a human?
The seemingly profound points of your marketing slop article ignore that these new tools are not a higher level of abstraction, but a replacement of all cognitive work. The tech is coming for your job just as it is coming for the job of the "bricklayer" you think is now worthless. The work you're enjoying now is just a temporary transition period, not an indication of the future of this industry.
If you enjoy managing a system that hallucinates solutions and disregards every other instruction, that's great. When you reach a dead end with that approach, and the software is exposing customer data, or failing in unpredictable ways, hopefully you know some good "bricklayers" that can help you with that.
The future you're concerned with defending includes bots being a large part of this community, potentially the majority. Those bots will not only submit comments autonomously, but also create these projects and Show HN threads. I.e. there will be no human in the loop.
This is not unique to this forum; it applies to the internet at large. We're drowning in bot-generated content, and now it is fully automated.
So the fundamental question is: do you want to treat bots as human users?
Ignoring the existential issue, whatever answer you choose, it will inevitably alienate a portion of existing (human) users. It's silly I have to say this, but bots don't think, nor "care", and will keep coming regardless.
To me the obvious answer is "no". All web sites that wish to preserve their humanity will have to do a complete block of machine-generated content, or, at the very least, filter and categorize it correctly so that humans who wish to ignore it, can. It's a tough nut to crack, but I reckon YC would know some people capable of tackling this.
It's important to note that this state of a human driving the machine directly is only temporary. The people who think these are tools as any other are sorely mistaken. This tool can do their minimal effort job much more efficiently, cheaper, and with better results, and it's only a matter of time until the human is completely displaced. This will take longer for more complex work, of course, but creating regurgitated projects on GitHub and posting content on discussion forums is a very low bar activity.
The layers of stupidity on this shit cake are staggering. I don't even know where to start...
Let it be known that this rotten industry brought us here, and that all people working for these companies are complicit with what is happening, and with what is yet to come. This is just the beginning.
After nearly 30 years of tech life myself, I've come to the realization that the best UIs are not graphical. They can have graphical elements mostly for visualization purposes, but all of them should be as minimal and unobtrusive as possible. Any interactivity should be primarily keyboard-driven, and mouse input should be optional.
Forcing users to click on graphical elements presents many challenges: what constitutes an "element"; what are its boundaries; when is it active, inactive, disabled, etc.; if it has icons, what do they mean; are interactive elements visually distinguishable from non-interactive elements; and so on.
A good example of bad UI that drives me mad today on Windows 11 is something as simple as resizing windows. Since the modern trend is to have rounded corners on everything, it's not clear where the "grab" area for resizing a window exists anymore. It seems to exist outside of the physical boundary of the window, and the actual activation point is barely a few pixels wide. Apparently this is an issue on macOS as well[1].
Like you, I do have a soft spot for the Windows 2000 GUI in particular, and consider it the pinnacle of Microsoft's designs, but it still feels outdated and inefficient by modern standards. This is because it follows the visual trends of its era, and it can't accommodate some of the UX improvements newer GUIs have (universal search, tiled/snappable windows, workspaces, etc.).
So, my point is that eschewing graphics as much as possible, and relying on keyboard input to perform operations, gets rid of the graphical ambiguities, minimizes trend-following (making the UI feel timeless), and puts the user more in command of their experience, making them quicker and more efficient.
This UI doesn't have to be some inaccessible CLI or TUI, although that's certainly an option for power users, but it should generally only serve to enable the user to do their work as easily as possible, and get out of the way the rest of the time. Unfortunately, most modern OSs have teams of designers and developers that need to justify their salary, and a UI that is invisible and rarely changes won't get anyone promoted. But it's certainly possible for power users to build out this UI themselves using some common and popular software. It takes a bit of work, but the benefits far outweigh the time and effort investment.
The issue with this type of design is that it completely tanks discoverability. Every visual UI element trimmed is another pit of confusion for less-technical computer users.
Modern UIs aren't great with discoverability either, however, and are not an example that should be followed.
That's not necessarily the case. In fact, if implemented well, keyboard/command-driven UIs can be much easier to discover than GUIs.
Consider the "Command Palette" and similar features that are part of many UIs (VS Code, Obsidian, Vim, Emacs, etc.). Such a palette allows the user to search all possible actions using natural language, and to see or assign key bindings to them, so that they can get to their most commonly used actions faster. This search can be global for the entire program, or contextual to the current view.
It is far easier to search for what you want to do than to learn what action every GUI element is associated with, or to navigate arbitrarily nested menu hierarchies. This does require the user to be somewhat familiar with the domain language in order to know what to search for, but this too can be simplified: actions can have multiple names, etc. It also makes the program more accessible for speech navigation, screen readers, and so on.
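To make the idea concrete, here is a minimal sketch of that pattern (all names are hypothetical, for illustration only; real editors add fuzzy matching, ranking, and key-binding display on top of this): a palette is essentially a searchable map from human-readable action names to callables.

```python
# Minimal command-palette sketch: actions are named callables, and the
# palette is a case-insensitive substring search over the action names.
# Action names here are made up for illustration.

def make_palette(actions):
    """actions: dict mapping human-readable name -> callable."""
    def search(query):
        q = query.lower()
        # Return matching action names; dicts preserve insertion order,
        # so results come back in registration order.
        return [name for name in actions if q in name.lower()]
    return search

actions = {
    "File: Save": lambda: "saved",
    "File: Save As...": lambda: "saved as",
    "View: Toggle Sidebar": lambda: "toggled",
}
search = make_palette(actions)

print(search("save"))    # matches both "File: Save" and "File: Save As..."
print(search("toggle"))  # matches "View: Toggle Sidebar"
```

A contextual palette is the same thing with the `actions` dict swapped out per view, and key bindings are just a second map from shortcut to the same callables.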
> The issue with this type of design is that it completely tanks discoverability.
There are still ways to help, such as having a menu bar, and having good documentation. (Documentation is more important, in my opinion; but both are helpful.)
Pointers are still very useful for many paradigms. Think about something like Blender or a game level editor: there can be _a lot_ of controls visible at once, and trying to navigate them all with the keyboard is just infeasible. And doing a fully context-sensitive setup to limit visible controls, like the MS Office Ribbon, is also infeasible, because the changes would be happening almost continually as different objects are selected and modes are chosen.
Your bad UI example of resizing windows is way less about the round corners or lack of an obvious grab area (handle) than about the handle being way too small. It's a couple of pixels (maybe just one?) wide/tall on screens that are thousands of pixels wide! It's just too easy to overshoot. I'd say it comes from the obsession with minimalism and flat design, such that there is almost no visible separate border to act as a target. Combined with trying to remove ambiguity as to which window the click should go to (if you click two pixels "outside" a window, should the click go to the window beneath, or be interpreted as grabbing the border?), the grab handles are tiny, almost matching the actual (lack of) pixels of the border, instead of being a usable click target.
To me it points to a lack of usability testing, or at least a lack of generalized usability testing, i.e. they tested their own workflows, which seem to involve always leaving windows as the OS creates them, or maximizing everything, with not much resizing at all. Similarly, thoroughly testing a [mostly] keyboard interface is tough without providing a thorough cheat sheet. You know the commands because you made them, so it's easy to test how you work, but others need to learn them first.
> After nearly 30 years of tech life myself, I've come to the realization that the best UIs are not graphical. They can have graphical elements mostly for visualization purposes, but all of them should be as minimal and unobtrusive as possible. Any interactivity should be primarily keyboard-driven, and mouse input should be optional.
I agree, that the interactivity should be primarily keyboard-driven. However, mouse input is useful for many things as well; if there are many things on the screen, the mouse can be a useful way to select one, even if the keyboard can also be used (if you already know what it is, you can type it in without having to know where on the screen it is; if you do not know what it is, you can see it on the screen and select it by mouse).
> Forcing users to click on graphical elements presents many challenges: what constitutes an "element"; what are its boundaries; when is it active, inactive, disabled, etc.; if it has icons, what do they mean; are interactive elements visually distinguishable from non-interactive elements; and so on.
At least older versions of Windows had a more consistent way of indicating some of these things; although sometimes they did not work very well, often they worked OK. (The conventions for doing so might have been improved, although at least there were some that, at least partially, worked.)
> A good example of bad UI that drives me mad today on Windows 11 is something as simple as resizing windows. ... it's not clear where the "grab" area for resizing a window exists anymore
I have just used ALT+SPACE to do stuff such as resize, move, etc. I have not used Windows 11, so I don't know if it works there, but I would hope that it does if Microsoft wants to avoid confusing people. (On other older versions of Windows, even when they moved everything around, I was still able to use it, because most of the keyboard commands still work the same as in older versions of Windows, and that is helpful. For example, you can still push ALT+TAB to switch between full-screen programs, ALT+F4 to close a full-screen program, etc.; I don't know whether or not there is any other way to do such things. However, many of the changes will cause confusion despite this, or will cause other problems, since they removed useful stuff in favor of less useful or more worthless stuff.)
> Forcing users to click on graphical elements presents many challenges: what constitutes an "element"; what are its boundaries; when is it active, inactive, disabled, etc.; if it has icons, what do they mean; are interactive elements visually distinguishable from non-interactive elements; and so on.
There are standards and common conventions for a lot of this in the Windows 9X/2000 design language, and even in basic HTML. These challenges could have been solved (for some values of "solved") by using them consistently, and I think we might have been there for a little while, at least within the Windows bubble. The fact that we threw all of those out the window with new and worse design, then did that again a few more times just to make sure users learned never to bother actually learning the UI (since it will just change on them anyway), doesn't mean this is an unsolvable problem. Well, it might be now, but I doubt it was back in 1995.
> Like you, I do have a soft spot for the Windows 2000 GUI in particular, and consider it the pinnacle of Microsoft's designs, but it still feels outdated and inefficient by modern standards. This is because it follows the visual trends of its era, and it can't accommodate some of the UX improvements newer GUIs have (universal search, tiled/snappable windows, workspaces, etc.).
I fail to see why any of these features couldn't be implemented within the design constraints of the Windows 9X/2000 design language. There are certainly technical constraints, but I can't see any design constraints. They were never implemented at the time, and those features didn't become relevant until we'd gone through several rounds of different designs, so we never had the opportunity to see how it would have worked out.
> There are standards and common conventions for a lot of this in the Windows 9X/2000 design language, and even in basic HTML. These challenges could have been solved (for values of) by using them consistently [...]
The thing is that GUIs naturally have to evolve to cater to their user base. The "office" metaphor was useful in the 1980s and 90s for making computing familiar to people who were used to "desktops", "folders", "files", etc. Some of these terms still exist today, but the vast majority of users can't relate to it, so it's meaningless to them.
This is why GUIs will always have to change and adapt to trends, which will always cause friction for existing users.
My point is that by minimizing the amount of graphical elements (note: not completely eliminate them), we minimize the amount of this friction. The difficult thing is, of course, maintaining the appropriate balance of all elements while optimizing for usability, which is ultimately very subjective.
But consider that CLIs are effectively timeless. The friction comes from their lack of discoverability, arcane I/O, every program can have a different UI, etc. And yet this interface has persisted and has largely remained the same for decades. Most programs rarely change their CLI, so the user only needs to learn a few commands to be productive.
So I think that the most usable UI is somewhere in the middle. It should avoid the constant churn of GUIs, and be more accessible than CLIs. This is possible to build for power users, but it can also be made approachable for less technical users.
> I fail to see why any of these features couldn't be implemented within the design constraints of the Windows 9X/2000 design language.
That's true. But then again, what exactly is the Windows 9x/2000 design language, and what makes it better than the modern Windows GUI? Is it the basic Start Menu? The taskbar with labeled blocks for each window instead of icons? The square instead of round windows? The lack of smooth transitions, transparency, and graphical effects? The overall brutalist theme?
We can certainly add all the features I mentioned to Windows 9x/2000, and we had some of them even back then via 3rd party tools, but isn't that essentially what modern Windows has become? There are ways to revert some Windows 11 features today with alternative shells and such, so is that the ideal UI then?
When I think of Win2k, I think of the overall simplicity. This is more due to nostalgia than any practical reason. I'm sure I couldn't stand using its barebones UI today, as much as I would enjoy the simplicity for a brief moment.
> The thing is that GUIs naturally have to evolve to cater to their user base. The "office" metaphor was useful in the 1980s and 90s for making computing familiar to people who were used to "desktops", "folders", "files", etc. Some of these terms still exist today, but the vast majority of users can't relate to it, so it's meaningless to them.
We still 'dial' with our phones, even though phones haven't had dials in over 50 years by this point. Nobody would even explain phones using that metaphor anymore. Even just having a foundation of common terminology is helpful in teaching people new systems.
> This is why GUIs will always have to change and adapt to trends, which will always cause friction for existing users.
I fail to see the connection.
> My point is that by minimizing the amount of graphical elements (note: not completely eliminate them), we minimize the amount of this friction. The difficult thing is, of course, maintaining the appropriate balance of all elements while optimizing for usability, which is ultimately very subjective.
This is true in today's world, but not necessarily in a world where the UI language of computers is stable and users can trust their computers not to change and upend their understanding of the system from underneath them. If all buttons had the same hints to tell a user "I'm a button", in the same way default HTML links tell users "I'm a link", then we could trust users to have this understanding.
> But consider that CLIs are effectively timeless. The friction comes from their lack of discoverability, arcane I/O, every program can have a different UI, etc. And yet this interface has persisted and has largely remained the same for decades. Most programs rarely change their CLI, so the user only needs to learn a few commands to be productive.
It's remained true in a small niche of power users, while for the rest of the world this environment might as well not exist (beyond the functionality it provides them after it's been filtered through several layers). CLIs are an irrelevant dead end in the story of user-accessible design; one there are probably some lessons to take from, but not one to entertain in any serious manner.
> That's true. But then again, what exactly is the Windows 9x/2000 design language, and what makes it better than the modern Windows GUI? Is it the basic Start Menu? The task panel with blocks for each window instead of icons? The square instead of round windows? The lack of smooth transitions, transparency, and graphical effects? The overall brutalist theme?
Yes.
> We can certainly add all the features I mentioned to Windows 9x/2000, and we had some of them even back then via 3rd party tools, but isn't that essentially what modern Windows has become? There are ways to revert some Windows 11 features today with alternative shells and such, so is that the ideal UI then?
The classic theme survived up until Windows 7, and I'll give that a pass, since although there still are holes where the newer design language of Windows peeks through, it's stayed mostly consistent, and even managed to add new features without breaking the design language to fit them.
Then that died with Windows 8, and there's been no hope for consistency in UI language since. The dream of a casual user being able to learn a UI and stick to it is dead, since even if they do, it will just change out from underneath them. That's why they don't even bother. Heck, even I barely bother.
> I'm sure that I couldn't stand using its barebones UI today, as much as I would enjoy the simplicity for a brief moment.
I disagree. I don't use many modern UI features, and the few that I do use, like snappable windows, are things I can imagine working within the old design language. I still write documents using a copy of Word 2000 in a Win2K VM every now and then, and when I don't use that, I use LibreOffice, a program many people refuse to use because it looks ancient to them. That's a feature for me. It not changing and thus not breaking my workflow is a huge feature that nothing in Windows 11 can even hope to compare with.
This might work well for power users, but common users may get frustrated. I do think the classic UI plus keyboard-driven menus (e.g. ALT + some key combination) is the best case, where power users can mostly use keystroke combinations to navigate the menus, while common users can click with a mouse.
Whatever the UI is, being consistent is the most important thing. But sadly, as you said, UI designers need to eat, too.
> It's incredible how quickly we've devolved into full-blown sci-fi dystopia.
How so? The steps towards where we are now have been gradual over the last 2 decades, at least. This recent step has opened the door for those in power to grab onto even more power and wealth, and they're naturally seizing it. All of this was comically predictable. Oh, and BTW, the people on this very website have brought us here. :)
You know what will happen next? Absolutely nothing. A vocal minority will make a ruckus that will be ignored, partly because nobody will hear it due to our corrupted media channels, and partly because the vast majority don't care and are too amused by their shiny toys and way of life.
This dystopia is only different from fictional ones in that those in power have managed to convince the majority of people that they're not living in a dystopia. It's kind of a genius move.
The employees of these companies are complicit in creating the greatest data harvesting and manipulation machine ever built, whose use cases have yet to be fully realized, yet when the government wants to use it for what governments do best—which was reasonable to expect given the corporate-government symbiosis we've been living in for decades—then it's a step too far?
Give me a fucking break. Stop the performative outrage, and go enjoy the fruits of your labor like the rest of the elites you're destroying the world with.
This is not something I would ever use. The idea of giving a probabilistic model the permission to run commands with full access to my filesystem, and at the very least not reviewing and approving everything it does, is bonkers to me.
But I'm amused by the people asking for the source code. You trust a tool from a giant corporation with not only your local data, but with all your data on external services as well, yet trusting a single developer with a fraction of this is a concern? (:
I don’t think that’s as crazy as you do. Corporations are supposed to have checks and balances in place, safeguards, policies. Individuals might have none of these.
> And if we switch to a payment model, then the internet becomes another system where the poor are naturally disadvantaged and the rich get unlimited benefit
As opposed to the current system where everyone is disadvantaged and the rich get richer?
Every business transaction in history has had a producer and a consumer, where both parties are in direct contact. Advertisers, on the other hand, insert themselves in the middle, promising to help both sides, while actually being a leech without doing any of the work. It is a despicable industry based on psychological manipulation, responsible for countless deaths, the corruption of every form of media ever invented, and of democratic processes throughout the world.
Sane business models are possible on the internet. Some of them exist already. But it's too late now for any of them to gain traction when advertisers are the same corporations that control it, and they have convinced the world that their products are "free".