
My issue with this line of argument is that people always want to compare "do it with Copilot" to "do it completely from scratch," when they should be comparing it to "do it by ignorantly copy-pasting from one of the many similar projects on GitHub, then tweaking a few things." There are quite a few open-source GLSL implementations of marching squares; maybe copy-pasting would have been faster and higher-quality.
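For anyone unfamiliar with the algorithm being discussed: marching squares classifies each 2x2 cell of a scalar grid into one of 16 cases by thresholding its corners, then looks up which contour edges cross that cell. The projects in the thread are GLSL; this is just an illustrative Python sketch of the case-classification step, not anyone's actual code.

```python
# Marching squares, minimal sketch: compute the 4-bit case index for a
# cell by testing each corner of the 2x2 cell against the iso value.
# A GLSL version would do this per-fragment; plain Python here for clarity.

def cell_case(grid, x, y, iso):
    """Return the 4-bit marching-squares case for the cell at (x, y)."""
    case = 0
    if grid[y][x] >= iso:         case |= 1   # bottom-left
    if grid[y][x + 1] >= iso:     case |= 2   # bottom-right
    if grid[y + 1][x + 1] >= iso: case |= 4   # top-right
    if grid[y + 1][x] >= iso:     case |= 8   # top-left
    return case

grid = [
    [0.0, 0.0, 0.0],
    [0.0, 1.0, 0.0],
    [0.0, 0.0, 0.0],
]
print(cell_case(grid, 0, 0, 0.5))  # only the top-right corner is inside -> 4
```

The case index then drives a 16-entry lookup table of line segments; that table is where the copy-pasteable implementations differ cosmetically but agree in substance.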


For me, categorically not. I had multiple two-way dialogs about implementation details and highly specific questions during the implementation that we created together. It wasn't just saying "Implement X" and then "There's a bug in Y"; it's things like "Explain this data structure and why you are allocating memory this way," or "The algorithm is mirrored incorrectly on the Y axis, fix the coordinate system and give me an option to change it." The latter example took Cursor like 3 seconds to complete perfectly. I'm not exaggerating, it was 3 seconds. It had implemented the solution faster than I could have found the problem by scrolling the mouse wheel and reading the code with my eyes (let alone THEN fixing it). Imagine a whole afternoon of this high momentum. Is it perfect? No. Is it net-positive? Yes, a LOT.

Going away and finding implementations, and then trying to integrate them (they undoubtedly use different data structures, functions, etc.) would have been MUCH slower, MUCH higher effort, and I would have given up much earlier. Having some{one,thing} there I can just ask a highly specific question and get an equally specific answer with examples RIGHT IN THE IDE kept the momentum up.

There's absolutely no way finding other examples on GitHub would have been faster or higher quality. This is no longer a matter of taste; it's the practical difference between complete and incomplete.

I mean, this went from "I don't know GLSL at all" to "Here is a complete implementation of a realtime electron density grid viewer running in WebGL in the browser" in an afternoon.


> There's absolutely no way finding other examples on github would have been faster or higher quality.

How do we ascertain its quality? The problem is the absolute trust in your reply. How do you know that what it tried to "explain" was the right explanation?

At least when you search and find examples, you evaluate multiple potential solutions.


I absolutely don't trust it blindly at all. Where did you get that from?

I kept asking it questions and stepping through the debugger until I understood its implementation. How do I know its implementation is correct? Because I can see the results, I can see the data structures in memory, I can step through it and understand it. I know what electron densities around atoms look like, and I kept iterating on the code after it made mistakes, helping it and fixing it together, until it was finished. I just kept asking questions and iterating so I could learn what I needed of GLSL to get it "un-stuck" when it hit a dead end or got caught in a loop.
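"I know what electron densities around atoms look like" is checkable against closed-form results. As a hypothetical example of that kind of sanity check (not the poster's actual code): the hydrogen 1s density in atomic units is rho(r) = exp(-2r)/pi, and integrating it over all space must give exactly one electron.

```python
import math

# Analytic hydrogen 1s electron density (atomic units): rho(r) = e^(-2r)/pi.
# Any grid sampler claiming to show this density can be verified against it.
def h1s_density(r):
    return math.exp(-2.0 * r) / math.pi

# Crude left-endpoint radial integration: the integral of
# 4*pi*r^2*rho(r) dr over [0, inf) should come out to 1 electron.
dr = 1e-3
total = sum(4 * math.pi * r * r * h1s_density(r) * dr
            for r in (i * dr for i in range(1, 20000)))
print(round(total, 3))  # ~1.0
```

This is the kind of ground truth that makes "I can see the results" more than eyeballing: a wrong normalization or a mirrored axis shows up immediately as a number that isn't 1.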

I don't expect it to come up with the correct implementation straight away. What I'm saying is that the enormous productivity increase kept momentum and enthusiasm up so much that I was able to implement something new and novel, something that would otherwise not have been created.


> I absolutely don't trust it blindly at all. Where did you get that from?

Not consulting a second source, or just looking at the results, as in:

> Because I can see the results

Which, yes, is correct in that sense, but as per the other comments, you can copy an example and get that same result. In development a lot of things are correct but have different implications, e.g. bubble sort vs. quick sort.
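To make the bubble sort vs. quick sort point concrete: both produce the identical sorted list, so "I can see the results" can't distinguish them; the difference is O(n^2) vs. O(n log n) cost, which only shows up as inputs grow. A quick sketch:

```python
import random

# Two sorts, identical output, very different asymptotic cost.
# bubble_sort is O(n^2); Python's built-in sorted() is O(n log n).
def bubble_sort(xs):
    xs = list(xs)
    for i in range(len(xs)):
        for j in range(len(xs) - 1 - i):
            if xs[j] > xs[j + 1]:
                xs[j], xs[j + 1] = xs[j + 1], xs[j]
    return xs

data = [random.randrange(1000) for _ in range(500)]
# Checking only the output says the implementations are interchangeable...
print(bubble_sort(data) == sorted(data))  # True
# ...but at n = 500 the bubble sort already does ~125,000 comparisons,
# and the gap widens quadratically from there.
```

Which is exactly the "correct but different implications" distinction: result-checking validates correctness, not the choice of implementation.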

> I'm saying is the enormous productivity increase

Assuming it has led you down the right/correct path. It has often led me down the wrong path instead.


> Not consulting a 2nd source or just looking at the results, as in..

I do continuously check multiple sources, starting with reality: our material simulations and predictions are lab-verified, and spectrographic analysis shows our predictions are correct. I have large experimentally generated datasets that our predictions and code are verified against.

We even have a system called "reality server" whose job is experimental parity; it runs continuously, checking predictions (which are all produced entirely by our code, code we are writing with the help of Cursor) against experiments.
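The core of an experimental-parity check like that can be sketched in a few lines. Everything here is invented for illustration (the property names, the tolerance, the function name); the poster's actual system is not described beyond "compare predictions against experiments":

```python
# Hypothetical parity check: flag any prediction that disagrees with
# its lab measurement by more than a relative tolerance. All names and
# numbers below are made up for illustration.

def parity_report(predictions, measurements, rel_tol=0.05):
    """Return the keys whose prediction disagrees with experiment."""
    failures = []
    for key, predicted in predictions.items():
        measured = measurements.get(key)
        if measured is None:
            continue  # no experimental value for this prediction yet
        if abs(predicted - measured) > rel_tol * abs(measured):
            failures.append(key)
    return failures

preds = {"band_gap_eV": 1.12, "density_g_cm3": 2.33, "melting_K": 1800.0}
exper = {"band_gap_eV": 1.14, "density_g_cm3": 2.329, "melting_K": 1687.0}
print(parity_report(preds, exper))  # ['melting_K'] -- off by ~6.7%
```

The point of running this continuously is that it doesn't matter who (or what) wrote the prediction code; anything outside tolerance gets flagged regardless of authorship.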

> Which, yes, it's correct in that sense but as per the other comments you can copy and example and get that same result. In development a lot of things are correct but have different implications, e.g. bubble sort vs quick sort.

All of us approximate to "good enough," and this is good enough. Results, Big-O, ease of integration: "good enough" is multivariate, but good enough is good enough. I'm a startup, and I'm not searching for divine correctness; good enough on the multiple variables is good enough.

> Assuming it has led you on the right / correct path. It's often times led me on to the wrong path instead.

It led me down the wrong path many times; that just means you are not yet finished. Then, with more work, we found the correct solution together.

Even in our materials simulations we fail 100 times and win once, the win still enormously outweighs the fails.

Nobody is claiming it's perfect, nobody is claiming it doesn't get stuff wrong, nobody is claiming it doesn't lead you down the wrong path. It's about keeping experimental momentum up, because discovery is a factor of productivity, and my discovery is 5x because my productivity is 10x.



