Hacker News | sebazzz's comments

With UnsafeAccessor you can often avoid reflection.

I wonder how the voting components are protected from integrity failures?

They are quite expensive and there is nothing similar on the market (not even from the house brands of Aldi, Lidl, etc.).

Cadbury have Time Out, but it's not quite the same (it's lighter, with less chocolate, and less dense).

Clearly intended as the direct competitor though, since "Have a break, have a KitKat" is the KitKat slogan, and a time out is also a break.

https://www.tesco.com/groceries/en-GB/products/316651552


You still have things like git squash etc.


That doesn't make any sense. There are 10,000+ lines of code; there shouldn't be a single "Initial commit". I'm fine with squashing some commits and creating a clean history, but this isn't a clean history, it's an obfuscated one.


I do this all the time. I’ll spend weeks or months on a project, with thousands of wip commits and various fragmented branches. When ready, I’ll squash it all into a single initial commit for public consumption.
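A minimal sketch of that squash-to-one-commit workflow, using a throwaway repo (file names, branch names, and commit messages here are made up):

```shell
# Demo in a throwaway repo: a messy wip history collapsed into one public commit.
cd "$(mktemp -d)"
git init -q
git config user.email dev@example.com && git config user.name dev
echo v1 > code.c && git add -A && git commit -qm "wip"
echo v2 > code.c && git add -A && git commit -qm "more wip, fix later"

# Publish: an orphan branch keeps the current tree but drops all history.
git checkout -q --orphan public
git commit -qm "Initial commit"

git rev-list --count HEAD    # prints 1: a single root commit
```

`git merge --squash` onto a fresh branch achieves much the same thing if you want to keep the private history around locally.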


I also do this. Lots of weird commit messages because fuck that, I'm busy. Commits that are just there to put some stuff aside, things like that. I don't owe it to anyone to show how messy my kitchen is.


Does your makefile also do this? https://github.com/xtellect/spaces/blob/422dbba85b5a7e9a209a...

This repo is full of so many strange and hilarious things. Look, I'm a lisper, and this is too many parentheses even for me https://github.com/xtellect/spaces/blob/master/spaces.c#L471...


On the other hand, others don't have to adopt, use, or like your stuff, which would be the reasons to publish it in the first place.

One big commit definitely doesn’t help with creating confidence in this project.


> I don't owe it to anyone to show how messy my kitchen is.

There was once a time when sharing code had a social obligation.

This attitude you have isn't in the same spirit. GitHub (or any forge) was never meant to be a garbage dumping ground for whatever idea you cooked up at 3AM.


Never happened. My projects start with me goofing around and playing with things, accidentally committing my editor config or a logfile, etc. The first commit on my public release is a snapshot of the first working version, minus all the dumb typos and malcommits I made along the way.

I don’t owe it to anyone to show how the sausage was made. Once it’s out the door and public, things are different. But before then? No one has the moral right to see all my mistakes leading up to the first release.


Explain why you think making a single commit is related to any source code sharing obligation? You completely failed to establish why making a single commit is indicative of it being garbage. Your statements are a series of non-sequiturs so far and thus I can't take you seriously.


> Explain why you think making a single commit is related to any source code sharing obligation?

When you share code it's presumably for people to use. It is often useful to have commit history to establish a few things (trust in the author, see their thought process, debug issues, figure out how to use things, etc).

> You completely failed to establish why making a single commit is indicative of it being garbage.

A single commit doesn't mean it's garbage. It erodes trust in the author and the project. It makes it hard for me to use the code, which is presumably why you share code.

My garbage code response was in regards to the growing trend to code (usually with ai) some idea, slap an initial commit on it and throw it on GitHub (like using a napkin and tossing it in the rubbish bin).


Here's the thing: get used to single big commits. Eventually, somebody is going to try to train on specific change sets. This'll enable models to learn specific authors' mannerisms, idiosyncrasies, etc. Single large commits create an information-asymmetry boundary, which is about the only defense a creator has in a world of willful infringement, where algorithms are trained to replace or devalue them in the market. It sucks... but this is the world we're growing into now.

It requires self-discipline to stay organized. A VCS is just a tool. I'm never organized; my brain just works that way. Whatever the tool, I'll create a mess with it. So as long as the project structure and its code are all good, I don't care about anything else.


that world never existed


I have done "Initial commit"s after having almost finished something. Sometimes after >10k lines. Totally unrelated to LLMs; I was doing it years ago as well. I see why you would think what you do, but it does not logically follow.


It may have been released with a new repo created, losing all the previously-private history.


Yes and no.

Have you looked at the code? It was clearly generated in one form or another (see the other comments).

The author created a new GitHub account and this is their first repository. It looks to be generated from another code base as a sort of amalgamation (either through code generation, AI, or other means).

We're supposed to implicitly trust this person (new GitHub account, first repository, no commit history, 10k+ lines of complicated code).

Jia Tan worked way too hard, all they had to do was upload a few files and share on HN :)


> We're supposed to implicitly trust this person

That would be rather foolish even with a fully viewable history.

I don't understand why you're so worked up about this—nobody is forcing you to use the code.


I think there are three levels at play here. One is code as curation, a model I'm not particularly interested in. Clearly the publisher, despite not being paid, is a supplicant, and as a curator I'm as much or more interested in the process being used and the longevity of the code base.

The second is code as artifact. Is this code useful, performant, with a reasonable API.

The third is code as concept, or architecture. This is really what interests me here. I use explicit allocators any time I can get away with it, and they're an excellent tool for involved systems projects. I'm not really interested in using this code, but having implemented these things many times, looking at how other people made the various tradeoffs, how it all came together, is really valuable input for when I'm going to do this again. Maybe there are some really brand new ideas here.

While I'm unsympathetic to the first perspective, it's valid. But I don't think it's fair to castigate someone who put something on GitHub for not meeting someone's adoption criteria.


If you have a need for vetted & customizable & extensible allocators, I recommend https://github.com/emeryberger/Heap-Layers

In the meantime, I don't see much value from your criticism of this particular project. I don't think this is a great example of AI slop even if it is generated, and you haven't clearly articulated harm.


> no commit history, 10k+ lines of complicated code

This kind of pattern is incredibly common when e.g. a sublibrary of a closed-source project is extracted from a monorepository. Search for "_LICENSE" in the source code and you'll see leftover signs that this was indeed at one point limited to "single-process-package hardware" for rent-extraction purposes.

Now, for me, my bread-and-butter monorepos are Perforce based, contain 100GB+ of binaries (gamedev - so high-resolution textures, meshes, animation data, voxely nonsense, etc.) which take an hour+ to check out at the latest revision, and frequently have mishandled bulk file moves (copied and deleted, instead of explicitly moved through p4/p4v), which might mean terabytes of bandwidth used over days if trying to create a git equivalent of the full history... all to mostly throw it away and then give yourself the added task of scrubbing said history to ensure it contains no code signing keys, trade secrets, unprofessional easter eggs, or other such nonsense.

There are times when such attention to detail and extra work make sense, but I have no reason to suspect this is one of them. And I've seen monocommits of much worse - typically ingested from .zip or similar dumps of "golden master" copies, archived for the purposes of contract fulfillment, without full VCS history.

Even Linux, the project git was created for, has some of these shenanigans going on. You need to resort to git grafts to go earlier than the Linux-2.6.12-rc2 dump, which is significantly girthier.

https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/lin...

https://github.com/torvalds/linux/commit/1da177e4c3f41524e88...

0 parents.

> It looks to be generated from another code base as a sorta amalgamation (either through code generation, ai, or another means).

I'm only skimming the code, but other posters point out some C macros may have been expanded. The repeated pattern of `(chunk)->...` reminds me of a C-ism where you defensively parenthesize macro args in case they're something complex like `a + b`, so it expands to `(a + b)->...` instead of `a + b->...`.

One explanation for that would be stripping "out of scope" macros that the sublibrary depends on but wishes to avoid including.

> We're supposed to implicitly trust this person

Not necessarily, but cleaner code, git history, and a previously more active account aren't necessarily meant to suggest trust either.


> One explanation for that would be stripping "out of scope" macros that the sublibrary depends on but wishes to avoid including.

Another explanation would be the original source being multi-file, with the single-file variant being generated. E.g. duktape ( https://github.com/svaarala/duktape ) generates src-custom/duktape.c from src-input/*/*.c ( https://github.com/svaarala/duktape/tree/master/src-input ) via a Python script, as documented in the Readme:

https://github.com/svaarala/duktape/tree/master?tab=readme-o...


Ouch, reminds me of hours spent debugging the OAuth2 implementation in my Surface 1 app for Twitter because the nonce or some other checksum was not calculated correctly.


If I'm not mistaken, early-boot applications must be set to use the native subsystem.


Apparently they are recruiting from Kenya too, promising great pay, but in reality the recruits are being abused as cannon fodder.


Also, take the randomness out of it. Sometimes the agent executes tests one way, sometimes the other.


I've found https://github.com/casey/just to be very, very useful. It lets you bind common commands to simple, smaller commands that can be easily referenced. Good for humans too.
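A minimal justfile sketch of what that binding looks like (recipe names and the cargo commands are hypothetical; recipes use make-like syntax, invoked with `just build` or `just test`):

```make
# justfile — `just --list` shows available recipes
build:
    cargo build --release

# `test` depends on `build`, so `just test` runs both
test: build
    cargo test
```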


> > so you need to tell them the specifics
>
> That is the entire point, right?

Honestly it is a problem with using GPT as a coding agent. It would literally rewrite the language runtime to make a bad formula or specification work.

That's what I like about Factory.ai's Droid: making the spec with one agent and implementing it with another.


> It would literally rewrite the language runtime

If you let the agent go down this path, that's on you, not the agent. Be in the loop more.

> making the spec with one agent and implementing it with another agent

You don't need a specialized framework to do this, just read/write tools. I do it this way all the time


> US software providers can shut down their software services in an instant, paralyzing European societies.

And no US SaaS or cloud provider would ever be trusted again, instantly cutting off a part of the US economy.


Lots of the world already doesn't, given their current and past actions, especially under the current administration. Google now hands over your personal and financial info to ICE without a judicial warrant.

