Cantor is mentioned, but I'd also mention the idea that some infinities are the same size (e.g. the integers and the rationals), while others are not (the rationals and the real numbers).
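In symbols, the picture Cantor gives (the countable sets share cardinality aleph-null, and the reals are strictly bigger by the diagonal argument):

```latex
|\mathbb{Z}| \;=\; |\mathbb{Q}| \;=\; \aleph_0 \;<\; 2^{\aleph_0} \;=\; |\mathbb{R}|
```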
We use SQLite IMMEDIATE transactions, which lock the database file for writes for a few milliseconds while committing data to the file. This is not a problem in practice until you have dozens of concurrent writers. StorX configures a default busy timeout of 1.5s, but it can be configured to suit your needs. You can also get a lot more out of it by being smart about how you spread your data over DB files (e.g. one file per user instead of one for multiple/all users), and by considering when you call openFile() and closeFile() (e.g. keep write transactions short, don't leave a file handle open while running long calculations).
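A minimal sketch of those habits using Python's stdlib `sqlite3` (StorX's actual `openFile()`/`closeFile()` API isn't shown; `write_event` and the schema here are made up for illustration):

```python
import sqlite3

def write_event(db_path: str, user_id: str, payload: str) -> None:
    # Open late, close early: don't hold the handle across long computations.
    # timeout=1.5 is SQLite's busy timeout in seconds (matching StorX's default).
    con = sqlite3.connect(db_path, timeout=1.5, isolation_level=None)
    try:
        con.execute("CREATE TABLE IF NOT EXISTS events (user_id TEXT, payload TEXT)")
        # BEGIN IMMEDIATE takes the write lock up front, so contention
        # surfaces here (bounded by the busy timeout) rather than at COMMIT.
        con.execute("BEGIN IMMEDIATE")
        con.execute("INSERT INTO events VALUES (?, ?)", (user_id, payload))
        con.execute("COMMIT")  # keep the write transaction short
    finally:
        con.close()

# Spreading data over files (e.g. one DB per user) spreads writers over locks.
write_event("user_alice.db", "alice", "logged_in")
```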
I’ve always meant to write a post about this. Bun is pretty similar and has the `$` helper from dax built in. In the past I would have used Python for scripts that were too complicated for Bash, but the type system in Python is still not great. TypeScript’s is great: flexible, intuitive, with powerful inference so you don’t have to write many annotations. And Deno with URL imports means you can have a single-file script with external dependencies and it just works. (Python does this now too, with inline dependencies and `uv run`.) Deno and Bun also come with decent APIs that are not quite a standard library but help a lot. Deno has a stdlib too.
You can see in my other scripts in my dotfiles that between dax for shelling out and cliffy or commander.js as a CLI builder, TS is a great language for building little CLIs.
I believe rerere is a local cache, so you'd still have to resolve the conflicts again on another machine. The recursive merge doesn't have this issue — the conflict resolution inside the merge commits is effectively remembered (although due to how Git operates it actually never even considers it a conflict to be remembered — just a snapshot of the closest state to the merged branches)
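For reference, a quick sketch in a throwaway repo: `rerere` is enabled per-clone, and its recorded resolutions live under `.git/rr-cache`, which is why they don't travel to other machines:

```shell
# scratch repo to demonstrate; rerere state is strictly per-clone
repo=$(mktemp -d)
cd "$repo"
git init -q

# record and replay merge-conflict resolutions in this clone
git config rerere.enabled true
git config rerere.autoupdate true  # optional: auto-stage replayed resolutions

# recorded resolutions land in .git/rr-cache and are never pushed or fetched
```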
Are people repeatedly handling merge conflicts on multiple machines?
If there were a better way to handle "I needed to merge in the middle of my PR work" without permanently introducing reverse merges into the history, I wouldn't mind merge commits.
But tools will sometimes skip over others' work when you `git pull` a change into your local repo, because they get confused about which leg of the merge to follow.
One place where it mattered was when I was working on a large PHP web site, where backend devs and frontend devs would work in the same branch. That way you don't have to go back and forth to get the new API, and this workflow was quite unusual and, in my mind, quite efficient. The branches could also live for some time (e.g. in case of large refactorings), and it's a good idea to merge in the master branch frequently, so recursive merge was really nice. Nowadays, of course, you design the API for your frontend, mobile, etc. upfront, so there's little reason to do that anymore.
Honestly, if the tooling were better at keeping upstream on the left I wouldn't mind as much, but IIRC `git pull` puts your branch on the left, which means walking history requires analysing each merge commit to figure out where the real history is vs. where a temporary branch is.
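A throwaway-repo sketch of the first-parent issue (branch and commit names are made up): whichever branch you are on when you merge becomes the first parent, and `git log --first-parent` then follows only that leg:

```shell
repo=$(mktemp -d); cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name demo
echo a > f; git add f; git commit -qm "mainline 1"
git checkout -qb feature
echo b > g; git add g; git commit -qm "feature work"
git checkout -q -            # back to the original branch
echo c > h; git add h; git commit -qm "mainline 2"
git merge -q --no-ff -m "merge feature" feature

# --first-parent walks only the leg you merged *into* (the mainline here),
# skipping "feature work"; a pull-created merge puts YOUR branch on that leg
git log --first-parent --oneline
```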
That is my main problem with merge, I think the commit ballooning is annoying too but that is easier to ignore.
Rerere is dangerous and counterproductive - it tries to give rebase the same functionality that merge has, but since rebase is fundamentally wrong it only stacks the wrongness.
I don't know which one it was, but Dr. Jorge Diaz has an excellent video on Lagrangian mechanics as part of a series on quantum mechanics (the video in question just covers the formalism as it applies classically).
I don't remember the specific video, but it was pretty elementary and got across the point that I had missed: you're not looking for a global optimum through some fancy operations on function spaces, you're just doing the old-fashioned calculus thing of finding a maximum by setting a derivative to zero. Except you are doing that only at one endpoint of the mystery function, where its value (the boundary value) and its derivative (zero) are known, and you can work out the ODE that continues the solution. That's the Euler-Lagrange equation, and suddenly everything makes sense.
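For reference, the stationarity condition being described: demanding that the first variation of the action vanish yields the Euler-Lagrange equation.

```latex
S[q] = \int_{t_0}^{t_1} L(q, \dot{q}, t)\,dt,
\qquad
\delta S = 0
\quad\Longrightarrow\quad
\frac{d}{dt}\frac{\partial L}{\partial \dot{q}} - \frac{\partial L}{\partial q} = 0
```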
You can test that out: Ollama lets you run open-source models at home. I've been playing around with it a bit lately and have been really enjoying it.
On my to-do list: running two models at once and building a middle layer for them to interact.
One of my fun experiments recently has been putting ChatGPT in conversation mode when I go for a walk. I recently had a 45-minute conversation where "we" fleshed out a multi-agent platform. I think a key is that you need to give each agent an "inner conversation" and criteria for when output from it gets copied to the other agents and the main chat, coupled with a process to compact the conversation regularly. I intend to set up a test system I want to run continuously, and given I enjoy working on compilers, maybe I'll see how much cheaper you can do something like what OP did if you orchestrate a few agents with domain knowledge in specific areas.
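A toy sketch of that shape in Python (all the names and the gating rule here are mine, not any real framework): each agent keeps a private inner log, a predicate decides what gets promoted to the shared chat, and a compaction step summarizes older shared messages:

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    name: str
    inner: list[str] = field(default_factory=list)  # private "inner conversation"

    def think(self, prompt: str) -> str:
        # stand-in for a model call (e.g. against a local Ollama model)
        thought = f"{self.name} considered: {prompt}"
        self.inner.append(thought)
        return thought

    def should_share(self, thought: str) -> bool:
        # criteria for promoting inner output to the shared chat;
        # a toy rule here: share anything mentioning "decision"
        return "decision" in thought.lower()

def compact(log: list[str], keep: int = 5) -> list[str]:
    # crude compaction: collapse everything but the most recent messages
    if len(log) <= keep:
        return log
    summary = f"[summary of {len(log) - keep} earlier messages]"
    return [summary] + log[-keep:]

def step(agents: list[Agent], shared: list[str], prompt: str) -> list[str]:
    # one round: every agent thinks privately, gated output goes to the chat
    for agent in agents:
        thought = agent.think(prompt)
        if agent.should_share(thought):
            shared.append(f"{agent.name}: {thought}")
    return compact(shared)

agents = [Agent("planner"), Agent("critic")]
shared = step(agents, [], "decision: which parser to use?")
```

Swapping `think()` for a real model call and making `should_share` model-driven (each agent judging its own output) would be the interesting part.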
I think I'd want to test a state of the art model, but it'd be fascinating to see how far you can get with Ollama as well - especially whether you can compensate for less smarts by just giving it far more runtime than I could afford with e.g. Claude.
https://en.wikipedia.org/wiki/Cantor%27s_paradise