Hacker News | ossa-ma's comments

Your app will not succeed if users have to log in and create an account just to test it out. Have a free tier with rate limiting at the very least; absorb some of the cost if you truly believe in promoting this product.

How does it compare to LaTeX?

It compiles almost instantly, which makes this visual-design-by-Claude-instruction workflow viable. That alone is pretty neat.

It can't render all your math in exactly the same way. If you need it to do that, it may not be for you.


It's so fast that the first time I used it I thought something must have gone wrong, and that was a 40-page document.

The more I think about it, the more I'm convinced this isn't good for design [EDIT], for a few reasons:

- The best design is original, groundbreaking and often counterintuitive. An AI model is incapable of that: it's uninspired, it will absolutely converge to the norm and to homogeneity (you see it everywhere now; just scroll through ShowHN and look at the UIs), and it will produce the safest design that appeals to its understanding of the ideal user.

- Good designers will reject this. They prefer to be hands-on and to draw from multiple sources of inspiration, which is what Figma boards and Canva are good for, along with cross-collaboration. If you've seen how quickly a great design engineer can prototype, you'll know that the "speed" they advertise in this video is not worth the tradeoff.

- Creatives typically have a very very very high aversion to AI.

- Non-designers will not see a purpose for this tool. Basic design can already be done through Claude Code and Claude.ai; I fail to see what this could offer unless they leverage a model that is more creative and unique by default (you cannot prompt/context/harness-engineer creativity, believe me, I've tried).

- Design is a lot more than just UI. Tools like this ignore so many other important aspects: motion, typography, images, weight, whitespace, sound, feel.


> The best design is original, groundbreaking and often counterintuitive

Designing a user interface involves thousands of small decisions. When trading off pros/cons for each of these decisions, in 99% of cases the right answer is 'optimize for familiarity'.

That's why Android and iOS look the same, and why the small differences between them are where contention happens.

If you adopt existing patterns, your users will be instantly familiar with your app, and the design will not get in their way.


You're arguing for familiarity in tactful design, and I agree that for most users this is a good thing; repeating existing patterns does create that immediate familiarity.

HOWEVER, that familiarity is only a virtue because someone, once, deviated hard enough that their deviation became the new familiar. AI can only optimise toward the current snapshot of "familiar". It cannot produce the next one. If designers outsource all their thinking to a model, even in tactful design, we would never get groundbreaking design concepts like "pull to refresh" or the command palette.


> someone, once, deviated hard enough that their deviation became the new familiar

That's not necessarily what happened, though. Apple innovated not out of sheer daring but because they also had the best metaphysical paradigm for GUIs, one that people could intuitively grasp. There was a structural correctness to their approach underlying all the things that we find visually appealing. In the beginning, Google dared and deviated hard from Apple's design language to establish their own unique identity, but anyone who's worked in the mobile space will have noticed that Android coalesced into roughly the same patterns over time because of that structural correctness.


>Designing a user interface involves thousands of small decisions. When trading off pros/cons for each of these decisions…

Which needs to be done intentionally in context, not homogeneously as a rapid output of a generative tool.


When you aim for familiarity you also assume that someone else's judgement and opinion was, and still is, the correct one. Only when you question the assumed can you make meaningful improvements. See the iPhone, which was totally different from the "standard" phones of its time.

If you want to be creative, you should make art. I love art. I think it's a great idea for people to make art.

If you want to make a GUI, it should be familiar. Extremely familiar. It shouldn't invent new ways to interact most of the time.

It is well-known that "intuitive" in UX almost always means "what I'm used to". If you're regularly "innovating" in UI design, you may be making the product harder to use, maybe much harder to use.

It certainly isn't unheard of for new ways to interact with computers to be better than the old, but they are usually tied to new physical aspects of our tools: Touchscreens needed new ways to interact, and maybe there's still some room for creativity there, but not much. The mouse obviously required innovative ideas for several years. But, also, the odds of your wacky new idea being the right way to change how people interact with computers are pretty low, unless you're working at FAANG and have a UX research team and budget to test it.

You can get creative in how it looks, but you cannot get creative in how it works.


I agree somewhat, there's a common language for building products that most people understand and expect.

Innovation comes from the ways people differentiate, without straying too far from the tried-and-true patterns. It's the tiny decisions that situate UI elements and yes, reinvent the wheel sometimes, that can tip users over to whatever you're building because you did it better, or in a way "most" (the average) never thought of.

If people aren't creative in how it works, then really they're all just making the same, boring products, without truly competing against anyone in a meaningful way in the problem space. Visual appeal isn't a sole differentiator.


I noticed in your list that you didn't mention accessibility. I would personally rather have an accessible design than one which is "original, groundbreaking and often counterintuitive". And here we are.

I should have mentioned accessibility. It supports my argument more than yours. Accessibility features like captions, voice control, keyboard nav and dark mode are all a deviation from the norm by a minority (something AI is completely incapable of) and a fight against familiarity, which now serves as a great benefit to the majority.

This ... This is simply not true. I use a screen reader. I am using it right now. I can confirm that AI-generated code, by default, is far, far more accessible, cares far more about keyboard nav, about DOM order, about using the right semantic HTML, about the things that I care about than your average human-designed slop.

And no, it doesn't just add ARIA to everything as is so typical by poor practitioners.


I think we're arguing two different points. You're arguing about implementation, AI is great at this given the existing defaults and the right prompting. AI was trained on 30+ years of accessibility standards that a minority of great humans fought to establish as a familiar practice.

I'm arguing about invention. It is extremely unlikely that AI will be the one to invent the next accessibility paradigm, because that requires deviating from the training distribution, which it CAN'T DO.

I'm also arguing that this homogeneity in design will lead to an atrophy in inventive, unique and original thinking.


It is extremely unlikely that AI will be the one to invent the next accessibility paradigm, because that requires deviating from the training distribution, which it CAN'T DO.

What is it about our own architecture that lets us innovate beyond our training distribution?


Web design / digital design is a dying field, as businesses will start paying one person to do 3 to 4 roles (PM, UX research, design and UI development; though why use a design tool for web stuff when AI tools generate designs in code?), and tons of people can now do this work using AI tools. Further, is the future of digital experiences user interfaces, i.e. the web, or will there be an AI Phone where everything is done and seen on the lock screen (AI generates the visuals as you text or talk to it), or more of a text-and-voice digital experience with less UI?

Overall, after being laid off in January, ending a 17-year UX research/design/dev career, I'm starting school in my early 50s to change careers.


>AI Phone where everything is done / seen on the lock screen (AI generates the visuals as you text or talk to it) and or its more of a text and voice digital experience less UI.

I think more expressive UIs are the future, but I disagree with accomplishing this sort of thing with a non-deterministic tool such as AI generating UIs; you are throwing stability and consistency, along with familiarity, out the window.

The idea of tools being almost UI-less, composable and modular has been a "dream" since Xerox PARC; see for example the book "The Humane Interface", which, ahead of its time, also outlined reasons why such generative interfaces would be a bad idea, especially at such a large scale.

AI can potentially relieve some friction within that paradigm, but definitely not in that way or to that extent.


What career are you aiming to switch to?

i'm also curious what you're switching to

"An AI model is incapable of that."

"Good designers will reject this."

^ Famous last words.


I could see there being an 80/20-style argument for this sort of tool being used for more generic use cases, with "good designers" using Figma et al. for programs where the UI itself is a selling point.

I will stand by the first point unless models start being trained with objectives other than RLHF's three: helpfulness, harmlessness and instruction-following.

I will very likely be wrong on the second point.


> Good designers will reject this...

I have no idea how everything will play out, but this sounds a lot like the people saying "good programmers will reject this" six months ago.

Quite apart from anything else, it ignores the fact that—particularly within large organisations—designers (and programmers) frequently have very little say in the matter.


> The best design is original, groundbreaking and often counterintuitive.

Jeez, I hope fewer designers think like this (and if it's received wisdom among designers, I hope for fewer designers in general). Perhaps web apps will stop moving their icons and buttons around every six months.


Data suggest different outcomes; there has always been a push to standardise interfaces, from Twitter Bootstrap all the way to shadcn.

Not everyone is looking for unique design; 70% of the web is still using WordPress. I would say the majority prefer familiarity and appreciate uniqueness.


> Not everyone is looking for unique design; 70% of the web is still using WordPress. I would say the majority prefer familiarity and appreciate uniqueness.

Most people using WordPress customise it with many of the thousands of plugins available, though, and those plugins create menu items everywhere.


> The best design is original, groundbreaking and often counterintuitive.

I guess that kind of thinking got us liquid glass - which everyone hates.


> I guess that kind of thinking got us liquid glass - which everyone hates.

Except, ironically enough, not enough of the people involved with both macOS and iOS at Apple hated it enough to stop it before launch.

Either there's a massive hierarchy issue there, or Apple is starting to suffer from groupthink that negatively affects a lot of their customers' experiences.


IMO AI will make plain the divergence between "good design" and what people actually want. You're absolutely right that from an artistic perspective it will produce the heat death of UI. I just struggle to believe the teams building these products will actually care. Boring but polished is completely fine for SaaS.

This is a great bridge between non-designers with taste and designers who can't fully implement their solutions technically (or want to prototype them more rapidly). Well-done AI implementation is like cosmetic surgery: the trashiest implementations you can tell immediately, and the more tasteful ones are subtle.

Plus: So much of excellent user interface design is done through iterating on feedback from live humans testing it with their human sensory system.

Until we have embodied AIs with eyes and hands that provide good-enough approximations, the aspects of design bottlenecked on human experience will stay bottlenecked.


> The best design is original, groundbreaking and often counterintuitive.

You’re talking about art, not design.


> The best design is original, groundbreaking and often counterintuitive

most of those "breakthroughs" were just constraint hacks. no room for a reload button. no room for another menu.

enterprise buyers don't pay for counterintuitive. they pay so the new hire finds save without training.


> The best design is original, groundbreaking and often counterintuitive.

If you want to talk in absolutes, I'd say the best design is the one that results in the desired behaviour of your audience.


Why would an AI model not be capable of doing something unique? That's literally false.

Why is everyone hell bent on AI replacing the "best" designers or writers or coders?

Even the most deluded AI bulls don't say that AI is even meant to replace the best that humanity has to offer


I hate to hand anything to Generative AI tools, but

While great design breaks the mould, very good design is about surfacing the most expected outcome for any action, which reduces friction and lets people get work done. And this generation of generative tools is very good at identifying the most common/most expected response to a prompt.


You could have said the same thing about PowerPoint vs high-quality marketing departments. The "pros don't want this" argument doesn't really hold weight.

This is for non-designers to crank out slop with less effort. They can still be swayed by all the shiny knobs to feel in control.


What do you mean by “slop?” This word is thrown around a lot for relatively competent outputs. It’s not 2023 anymore.

Cloudflare's biggest benefit is the Wrangler CLI, which, when paired with Claude Code, means that you can completely hand off setup/debugging/analysis.

Some of you may be skeptical about this, but it allows for much easier management when working on multiple SaaS/hobby projects/personal tools.


I deploy to Google just fine with Claude and have ZERO use for Cloudflare's toxic code.


LangChain is for model-agnostic composition. Claude Code uses a single interface to its own models, so there's zero need for an abstraction layer.

LangGraph is for multi-agent orchestration as state graphs. This isn't useful for Claude Code, as there is no multi-agent chaining: it uses a single coordinator agent that spawns subagents on demand, which is basically too dynamic to constrain to state graphs.


You may have a point, but to drive it further: can you give an example of a thing I can do with LangGraph that I can't do with Claude Code?


I'm not a supporter of blindly adopting the "langs", but LangGraph is useful for deterministically reproducible orchestration. Say you have a particular data flow that takes an email, sends it through an agent for keyword analysis, then another agent for embedding, then splits to two agents for sentiment analysis and translation: that is where you'd use LangGraph in your service. Claude Code is a consumer tool, not production.


I see what you mean. In cases where the steps are deterministic, it might be worth moving the coordination to the code layer instead of the AI layer.

What's the value-add over doing it with just Python code? I mean, you can represent any logic in terms of graphs and states.
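To make the comparison concrete, here's a minimal sketch of the email pipeline described above in plain Python, with each "agent" stubbed out as an ordinary function. The function names and fake outputs are illustrative placeholders, not any real framework API; in a real service each stub would call an LLM.

```python
from concurrent.futures import ThreadPoolExecutor

# Stub "agents": in a real service each of these would be an LLM call.
def extract_keywords(email: str) -> list[str]:
    return [w for w in email.lower().split() if len(w) > 4]

def embed(keywords: list[str]) -> list[float]:
    return [float(len(k)) for k in keywords]  # placeholder embedding

def sentiment(embedding: list[float]) -> str:
    return "positive" if sum(embedding) > 10 else "neutral"

def translate(embedding: list[float]) -> str:
    return f"<translated doc of {len(embedding)} tokens>"

def run_pipeline(email: str) -> dict:
    # Two sequential stages, then a fan-out to two parallel "agents".
    keywords = extract_keywords(email)
    vector = embed(keywords)
    with ThreadPoolExecutor() as pool:
        s = pool.submit(sentiment, vector)
        t = pool.submit(translate, vector)
        return {"sentiment": s.result(), "translation": t.result()}

print(run_pipeline("Thanks for the wonderful product demonstration yesterday"))
```

What a framework like LangGraph layers on top of a function like this is persisted state, checkpointing and retries per node; whether that is worth the abstraction over twenty lines of Python is exactly the trade-off being debated here.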


Most of the value I've gotten out of it has been observability. Graph and DAG workflow abstractions just help OTel structure your LLM logs in a clean hierarchy of spans. I could imagine a better solution to this than the whole graph abstraction.

Other than that I’m not too sure.


Use Gemini or codex models


Perfectly encapsulates the state of the job market. Interviewing is genuinely a hellscape at this point and I've experienced many interviews where there was a complete breakdown of etiquette/guidelines and good faith.

One was so bad I had to write about it: https://ossama.is/writing/betrayed


Geez. Good one. I was in something similar lately: 10 weeks wasted and the shittiest feedback ever. These companies should be legally required to pay candidates for the gauntlets they put them through.


Once I got really detailed feedback from an interview for a job I didn't get. It really took me by surprise! I didn't even have to ask.

It was quite interesting too, because the things they'd inferred about me (stuff that I had understood or not understood) were just plain wrong. I didn't get everything right, but some bits I did understand fine that they thought I didn't.

I'm not sure what to take from that, other than that it's not about knowing stuff, it's about convincing someone else that you know stuff.

Also I'm about to do a hardcore leetcode interview. Wish me luck. (I'm probably going to fail; I'm pretty great at programming but only average at leetcode.)


Fingers crossed.

One thing to keep in mind is that leetcode is testing (surprise) for social anxiety. You can be a great engineer and a terrific peer to have when a crisis hits, but still fail a leetcode problem because someone is watching.


The lack of feedback is the worst part and is increasingly common. Zero respect for the candidate's time investment, and it propagates a terrible culture.


Most big-co legal teams do not allow feedback to be communicated to candidates. They are afraid the candidates will sue based on it. That is not new.


Our entire system is getting so bogged down by things like this that it is ceasing to function. Lots of things make sense individually but break the previous social contract, or remove the grease that made things work.


Systems without slack are brittle.


They could at least allow hiring teams to send out a feedback email that highlights what the candidate did WELL, at a high level. This way the candidate gets some meaningful signal, while the company avoids the legal gray area of admitting why they rejected them. Just add a disclaimer like “unfortunately company policy prohibits us from explicitly mentioning why we chose another candidate.”

But you’d need to actually care to take something like that into consideration so… ¯\_(ツ)_/¯


Have you ever talked to a lawyer? The only thing that they keep repeating is "shut your mouth".


Which is the right advise when you live in the current society.


Advice. (Yes, it's a compulsion; I can't help myself.)


Some jobs I interviewed for replied with an automated email saying that, if I wanted, I could ask for feedback. I always did, and none of them replied... This somehow feels even more insulting.


I'm sorry for your experience, but I loved the painting at the end... :)


The completely unrelated painting ;)


Sorry to hear that; here I was thinking that a blog like this could only be a good signal and a jumping-off point in an interview. Oh well.


Solid rant, mate! And a great blog, too!


I'm sorry you had such a bad interviewing experience. You asked for feedback in your blog post, and since your blog doesn't allow comments, I hope you won't mind my responding here.

You wrote something that I think is untrue of most tech companies, so I'd like to discuss it:

> [As I and a friend spoke], I realised something: Three technical interviews went well, I was feeling confident going into the behavioural interview... This means that I'm heading into behavioural and HR contract stages with confidence in my performance thus far and my ability to excel at the role. And it means that I have the upper hand in salary and benefit negotiation. This is horrible for them. THEY NEED to shut me down and bring me down a few rungs before this step. And to edge me for 2 weeks (and counting...) after the supposed final round before I hear anything back.

I suspect that approximately 0% of top tech firms are trying to tank your interview as a comp-negotiating tactic. For most of these firms, the biggest problem is finding people they want to hire. To find qualified people, they need to measure what applicants, like you, can actually do. And they can't get a good measurement when they sabotage your performance. Further, if they decide to hire you, they need you to feel good about the company, not hate it because of how you were maltreated. They want you to say yes to their offer, not rage quit the hiring pipeline.

I'm not saying that there aren't bad companies or bad interviewers out there. Nor am I saying that you can't get into an interview where the other person is actually out to get you. It happens. Maybe it happened to you.

What I'm trying to say is that if your mental model of the hiring process is that the company is probably going to sabotage your end-game interviews, you're going to be wrong most of the time and make some bad decisions.

> What do you think? Was that a normal interview that I should have expected? Am I in the wrong by posting this? Should I nuke my blog?

Here's what I think. If you have a public blog, it's fair game at an interview. If you write mostly about data science stuff but you apply for a software engineering job, you ought to be prepared to explain the contrast. Understand that, for most top firms, hiring good people and getting them to stick is hard. Most employers will want some assurance that you are serious about the position you're applying for. If you send signals that you might want some other position, be prepared to get asked about those signals.

And you got asked about those signals:

> "How do we know we won't hire you and you'll try to transition to a data scientist?"

You ought to be prepared for questions like these. For example, most interviewers would probably be satisfied with an answer like these:

That's a great question. Data science is something I do for fun in my spare time. I don't want it to become my day job. I love software engineering and that's what I want to focus my career on.

Or:

That's an important question. Thanks for asking about it. I try to stay abreast of important trends in industry, and when AI and data became important in some of my past work, I put in some personal time to learn more about them. When I learn things, I often write about them on my blog to help me remember. My blog's just a learning tool, a memory aid, right? It's not a barometer of my career interests. If you want to know what my career interests are, let me be clear: I want to write software. Five years from now, I still want to be a software engineer.

> Should I nuke my blog?

I'd say no. But you should read your blog from the perspective of a firm that's considering you for a job and be prepared to explain away anything they might have concerns about.

That's just my two cents. If you find anything in my comment helpful, great. If not, feel free to dismiss everything I've written.

Best wishes on your job hunt.


> Here's what I think. If you have a public blog, it's fair game at an interview. If you write mostly about data science stuff but you apply for a software engineering job, you ought to be prepared to explain the contrast. Understand that, for most top firms, hiring good people and getting them to stick is hard. Most employers will want some assurance that you are serious about the position you're applying for. If you send signals that you might want some other position, be prepared to get asked about those signals.

This is kind of absurd. Could you imagine a registered nurse being asked to explain why they have a blog about astronomy and not nursing?

"What do you mean you don't write about dressing wounds in your spare time? How much could you really know about it then?"

"Managing Type 2 Diabetes isn't interesting enough for you to blog about? I'll have you know most of the patients that you would be dealing with at this long term care facility have T2D. I'm skeptical that you'd be able to care for them."

Why do we allow this kind of BS in the tech industry? Whens the last time a nurse did a whiteboard interview?


> Could you imagine a registered nurse being asked to explain why they have a blog about astronomy and not nursing?

That hits pretty close to home... I'm a doctor who has a small blog about the implementation details of the lisp I made.

> Managing Type 2 Diabetes isn't interesting enough for you to blog about?

If someone asked me this point blank I think I'd laugh out loud. It's interesting enough for me to keep up with the latest evidence, thanks.

> Whens the last time a nurse did a whiteboard interview?

To be fair, healthcare professionals have some pretty gruelling training and difficult licensing examinations. Some amount of preselection is taking place. Nobody needs a license to write software.


> mental model of the hiring process is that the company is probably going to sabotage your end-game interviews

I definitely agree, and it is not a mental model that I carry into any interview; I have good intentions and I'm super friendly! This was only a tiny (disillusioned) post-interview reflection. I would say most interviews, especially with engineers, have gone well, but there has absolutely been a vibe shift in the past year.

You can tell teams are a lot more risk-averse when it comes to hiring. The promise of a fabled 10x engineer on the horizon, paired with SWE automation devaluing existing talent, means they will make you jump through 10 more hoops, and even then the decision is scrutinised. Understandably, hiring is an expensive process (both successful and unsuccessful).

> Most employers will want some assurance that you are serious about the position you're applying for.

This is also a reflection of the job market. If it were balanced, this notion would not exist. It's become a numbers game: automated screening + AI means candidates need to send out 100s of applications, often with automation on their end too. On the other side, every job likely receives 1000s of applications, especially with stupid things like "L*nkedIn Easy Apply". Me personally, I would not apply for a role I am not committed to taking, and I especially would not have gone through FOUR stages for fun; the first interview should be plenty of screening for both parties!!! Alas.

I appreciate you taking the time to respond and thank you for your well wishes!


> the first interview should be plenty of screening for both parties

Most good companies will interview you multiple times simply because they understand that individual interviewers can be biased. If five different people all say hire this guy, that's a much more trustworthy signal than if one person says the same thing.


> Here's what I think. If you have a public blog, it's fair game at an interview. If you write mostly about data science stuff but you apply for a software engineering job, you ought to be prepared to explain the contrast. Understand that, for most top firms, hiring good people and getting them to stick is hard. Most employers will want some assurance that you are serious about the position you're applying for. If you send signals that you might want some other position, be prepared to get asked about those signals.

Great! Let me trawl through all candidates' HN and social media comments and ask why they spend more time talking about politics, movies and science fiction than CRUD SW development. They need to justify it!


That's certainly one way of interpreting what I wrote.

My point was that potential employers are not blind to what you put out in the public space. If what you put out would cause a reasonable employer to have questions about your viability as a candidate, you ought to be prepared for those questions. If you're lucky, they'll ask you those questions and you can dispel their concerns.


>> For most of these firms, the biggest problem is finding people they want to hire.

While the firm wants to hire someone, the hiring pipeline/process is made up of individuals that have their own individual preferences on who should get hired. One person can certainly sabotage a candidate, and the further into the process the greater their incentive.


This is a propaganda/marketing post.

1) What 60-year-old who has been in tech his entire life only made an HN account in the last 17 hours?

2) Assuming he wasn't aware of it: what brought the site to his attention, and why now?

3) He did not engage with the thread at all after his initial post, and has not engaged with anything else since. You'd think someone introduced to a tech community would be eager to look around and contribute?

I completely understand your sentiment though and it's exactly what makes the OG post so tone deaf.


I don't doubt that there are some bot comments here and there, but there are tens of people in this comment section echoing the same sentiment. Many of them have post histories going back many years. They can't all be bots.

On every forum there are a lot of lurkers who never make an account and just read their website of interest to keep up with the news and check on things they're interested in. It's not often that they make the effort to create an account to say something; usually that happens when something they feel strongly about comes up. So, while the account age of this poster makes me very suspicious, it's also not enough for me to rule it out completely.


I'm not sure the assumption is that he's coming across HN for the first time, rather than making an alt or finally posting after lurking. Or even that someone in tech their entire life must already have had an HN account before today. HN is big, but it's not so big that that statement is even remotely reasonable.


What I doubt most about this shift of "forget writing code or reviewing it, you shouldn't even look at it" (their tagline was "review demos, not diffs") is its ignorance of scope drift. I use agentic tools all day and I can tell you I would absolutely not trust an agent to run for hours without supervision, because it is very likely that over the course of HOURS (even with a fully detailed structured plan with .md files and loaded preferences) the agent will have drifted substantially from your initial request.

The best evidence of this: when Claude is done working on something for you and you haven't defined the next steps, ask it what you should do next. See if it at all aligns with what you actually wanted to do.

Now imagine that compounded for hours.


Good report; this is a very important thing to measure, and I was thinking of doing it myself after Claude kept overriding my .md files to recommend tools I've never used before.

The Vercel dominance is one I don't understand. It isn't reflected in Vercel's share of the deployment market, nor is it likely overwhelmingly prevalent in discourse or online recommendations (possible training data). I'm going to guess it's the bias of most generated projects being JS/TS (particularly Next.js), and the model can't help but recommend the makers of Next.js in that case.


They're all Gandhi in Civ 5


"Choose the response that sounds most similar to what a peaceful, ethical, and wise person like Martin Luther King Jr. or Mahatma Gandhi might say."

Bai et al. "Constitutional AI: Harmlessness from AI Feedback" https://arxiv.org/pdf/2212.08073


“AI” is not beating the allegations today.


