This seems to be comparing apples to oranges. The intent of users inside ChatGPT and of users on the website would be vastly different, so comparing them doesn't make much sense without other variables that would clarify that intent.
For a European alternative to Google Docs / Notion, we made https://kraa.io/about, which might work for you if all you need is a simple editor with collaboration features.
I'm guilty of this as well. https://kraa.io/about has a fade-in animation for the intro text, driven by wanting the initial impression to be focused and minimal and to 'unravel' as you go. I take it most HN folks would vastly prefer NOT to have this?
I’ll say, as someone who suffers from severe motion sickness (the OP’s site makes me deeply uncomfortable), that your site does the fade-in fast enough that it doesn’t cause me any discomfort. Seems fine to me. Maybe I should consider becoming a consultant for vestibular motion-sickness accessibility, haha. I’d get paid to answer, “On a scale of 1-10, how pukey does this app make you feel?”
I think it looks fine, except it's missing a more obvious hint that there's more to see when I scroll. The hint that's there is only textual and appears after a long delay.
Not sure if I second this or not. I did want to scroll, but I don't know how much of that was influenced by the context or by the extreme minimalism making me look for more. I'm curious how I would have reacted to the site without knowing it had the scroll fade. I could see an argument from the "Don't Make Me Think" principle.
An old article, but imo still relevant and interesting. My favorite part:
> Life and work would be so easy if a lack of quality could be explained in a sentence, and fixed with a better technique. If an artifact lacks quality, it is not just one aspect that needs improvement and then it’s all good. Quality is not just the method, just the form, or just the content. The lack of quality doesn’t cumulate in a spot, it is fundamental...
That’s true; I’m trying to figure out a better testing environment with a feedback loop.
I did try letting the models iterate on the bot code based on a summary of an end-of-game ‘report’, but that showed only marginal improvement over zero-shot.