Hacker News | Grosvenor's comments

SEEKING WORK | Data Scientist / Consultant | Canada/Remote Worldwide

I'm a data scientist with over 20 years of experience specializing in consulting and fractional leadership. I do the data science that AIs can't do. I thrive on gnarly problems where standard off-the-shelf solutions fall short, and lately on problems where AI just can't complete the task even when walked through it. My track record includes saving a German automaker from lemon-law recalls and helping a major cloud vendor predict server failures to enable load shedding.

I've tackled a wide range of challenges across various industries, including oil reservoir and well engineering forecasting, automotive part failure prediction, and shipping piracy risk to route ships away from danger. My technical work extends to realtime routing (CVRP-PD-TW) for on-demand delivery, legal entity and contract term extraction, and wound identification with tissue classification. I also work with the current wave of LLMs and agents, and make them do magic.

I've worked with the standard stacks you’d expect: Python, PyTorch, Spark/Ray, AWS, agentic engineering, etc. But I believe the solution must be driven by the problem, not the tools. I bring years of experience helping companies plan, prototype, and productionize sane data science solutions.

Please reach out if you have a difficult problem to solve. I do love stuff in physical meat-space.

NB: Please do not contact me if you are working on ads, gambling, or "enshittification". I prefer to sleep at night.


My bona fides: I've written my own Mathematica clone at least twice, maybe three times. Each time I get it parsing expressions and doing basic math, up to basic calculus. Then I look up at the sheer cliff face in front of me and think better of the whole thing.

There is an architectural flaw in Woxi that will sink it hard. Looking through the codebase, things like polynomials are implemented in the Rust code, not in woxilang. This will kill you long term.

The right approach is to have a tiny core interpreter, maybe going to a JIT at some point if you can figure that out. Then implement all the functionality in woxilang itself. That means addition, subtraction, calculus, etc. are term-rewriting rules written in woxilang, not Rust code.
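To make that architecture concrete, here is a toy sketch (Python, purely illustrative; none of these names come from Woxi) of the small-core idea: the interpreter only knows how to match and rewrite terms, while differentiation and constant folding live entirely in rules, the way they would be declared in woxilang itself.

```python
# A term is a (head, args...) tuple; symbols are strings, numbers are ints.
def rewrite(term, rules):
    """Rewrite bottom-up until no rule applies (a fixed point)."""
    if not isinstance(term, tuple):
        return term
    head, *args = term
    term = (head, *(rewrite(a, rules) for a in args))
    for matches, rebuild in rules:
        if matches(term):
            return rewrite(rebuild(term), rules)
    return term

# The "math" is data: (match, rebuild) pairs standing in for woxilang rules.
rules = [
    # D[x, x] -> 1
    (lambda t: t == ("D", "x", "x"), lambda t: 1),
    # D[c, x] -> 0 for a numeric constant c
    (lambda t: t[0] == "D" and isinstance(t[1], int), lambda t: 0),
    # D[f + g, x] -> D[f, x] + D[g, x]  (linearity of the derivative)
    (lambda t: t[0] == "D" and isinstance(t[1], tuple) and t[1][0] == "Plus",
     lambda t: ("Plus", ("D", t[1][1], t[2]), ("D", t[1][2], t[2]))),
    # Plus with all-numeric arguments folds to a constant
    (lambda t: t[0] == "Plus" and all(isinstance(a, int) for a in t[1:]),
     lambda t: sum(t[1:])),
]

print(rewrite(("D", ("Plus", "x", 3), "x"), rules))  # D[x + 3, x] -> 1
```

Any improvement to `rewrite` (indexing rules by head, memoization, a JIT) then speeds up every rule at once, which is the point of keeping the core tiny.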

This frees you up in the interpreter. Any improvement you make there immediately shows up across the entire language. Woxilang is also a better language to implement symbolic math in than Rust.

It also means contributors only need to know one language: woxilang. No need to split between Rust and woxilang.


I noticed the same thing, having also written an interpreter for the Wolfram language that focused on the core rule/rewriting/pattern language. At its heart it’s more or less a Lisp-like language, where the core can be quite small and a lot of the functionality is built via pattern matching and rewriting atop that. Aside from the sheer scale of WL, I ended up setting aside my experiments replicating it when I did performance comparisons and realized how challenging it would be to match WL not just in functionality but in performance.

Woxi reminds me of some experiments I did to see how far vibe coding could get me on similar math and symbolic reasoning tools. It seems like unless you explicitly and very actively force a design with a small core, the models tend towards building out a lot of complex, hard-coded logic that ultimately is hard to tune, maintain, or reason about in terms of correctness.

Woxi is an interesting exercise in terms of what vibe coding can produce. I'm not sure about it as a WL implementation, though.

(For context, I write compiler/interpreter tools for a living - have been for a couple decades)


I’ve personally had luck at correcting the complex one-off logic the agents produce with the right prompting.

And when I say prompting, I just mean code review feedback. All of this is engineering management. I review code. I'll point out architectural flaws if they matter, and I use judgement to determine whether they matter. Code debt is a choice; you can afford it in some situations but not others. We don't nit over style because we have a linter. Better documentation results in better contribution quality. Etc.

Agent coordination? Gastown? All I hear is organizational design and cybernetics.


Hm, I thought about this a little and actually came to exactly the opposite conclusion: implement as much as possible in Rust to get the fastest code possible. Do you have any more insight into why that would be unworkable or unsustainable?


You have two distinct products: 1) an interpreter, and 2) a math language. Don't write your math in some funny imperative computer language.

Keep the interpreter's surface area as small as possible. Do some work to make sure you can accelerate numerics, and JIT/compile functions down to something as close to native as you can.
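One way to picture "compile functions down to something close to native": once a user function has been rewritten into a plain numeric expression, lower it from the term representation into host code once, instead of re-walking the tree on every call. A toy sketch (Python standing in for the host language; all names are illustrative, not Woxi's):

```python
import math

def compile_term(term):
    """Lower a numeric (head, args...) term to a host-native callable of x."""
    if term == "x":
        return lambda x: x
    if isinstance(term, (int, float)):
        return lambda x, c=term: c  # capture the constant once
    head, *args = term
    compiled = [compile_term(a) for a in args]  # lower children first
    if head == "Plus":
        return lambda x: sum(f(x) for f in compiled)
    if head == "Times":
        return lambda x: math.prod(f(x) for f in compiled)
    raise ValueError(f"cannot lower {head!r}")

# f[x_] := 13 x + 1, lowered once, then called like ordinary host code.
f = compile_term(("Plus", ("Times", 13, "x"), 1))
print(f(2))  # 27
```

A real JIT would emit machine code rather than closures, but the shape is the same: pay the lowering cost once per function, not once per evaluation.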

Wolfram, and Taliesin Beynon, have both said Wolfram was working internally to get a JIT into the interpreter loop. Keep the core small, and do that now while it's easy.

Also, it's just easier to write in Mathematica. It's probably 10x smaller than the equivalent Rust code:

    f[x_Integer] := 13*x;
    f::usage = "Multiplies x by 13, in case you needed an easy function for that.";
EDIT: Another important thing to note is that the people who really deeply know specific subjects in math won't be the best, or even good, Rust programmers. Letting them program in woxilang gives them an opportunity to contribute which they wouldn't otherwise have had.


I'm not a PL expert but isn't building a decent JIT a massive undertaking? I guess you're saying that the JIT itself would be what makes a project like this worth using in the first place?


It's like most things in software: if you constrain the problem enough, focus on the problems you actually have, and make some smart choices early on, it can be a very modest lift, on the order of a week or two for a 90% solution. On the other end of the spectrum, it's a lifetime of work for a team of hundreds...


Symbolic manipulation?


Sorry, perhaps, a dumb question:

Isn't it the case that Mathematica, and most of the Wolfram innovation, is about a smart way of applying rule-based inference? I think of it as parametrized PROLOG rules with a large library. So term rewriting all the way to the end; correct me if I'm wrong.

Where does the mini-core+JIT come into this?

Thanks for taking time to answer.


The interpreter / JIT is the one actually applying the rules.


So it's the tokenizer and rule expansion that get JIT'd, right? I mean, there's no secondary process running on top of the rule expansion?


Implementing addition in woxilang itself?? That's got to be terribly slow. Am I missing something?


Mathematica has symbolic and infinite-precision addition, so you can't automatically take advantage of obvious compiled code.


What? Arbitrary precision arithmetic implemented in a compiled language will be faster than the alternative. This is no great mystery. The same is true of essentially all low-level symbolic or numerical math algorithms. You need to get to a fairly high level before this stops being true.


Of course. The point is whether you interpret a call to arbitrary_precision_add or compile the call doesn't matter much.


You are missing the term "JIT", which would enable a host of runtime optimizations which include generating calls to some static piece of native code which performs addition.


But surely you can have a "fast path" that is implemented in the host language, right?
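The fast-path idea being discussed could be sketched like this (Python standing in for the host language; hypothetical names, not Woxi's actual API): dispatch on the runtime types, hit native arithmetic when the arguments allow it, and fall back to an unevaluated symbolic term otherwise.

```python
def plus(a, b):
    # Fast path: both arguments are machine/bignum numbers, so the host
    # language (Rust, in Woxi's case) can add them directly.
    if isinstance(a, int) and isinstance(b, int):
        return a + b
    # Slow path: at least one argument is symbolic; return an unevaluated
    # Plus[...] term for the rewrite engine to work on.
    return ("Plus", a, b)

print(plus(2, 3))    # 5
print(plus("x", 3))  # ('Plus', 'x', 3)
```

This keeps symbolic generality without paying the interpreter tax on the purely numeric case.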


I am confused for the same reason you are. Isn't the Rust code essentially just "pre-JITted" code, i.e. hand-optimized? You are going to want to hand-optimize some functions in cases where the JIT cannot do a good job in its current form. You probably also want a benchmarking system that compares the JITted code to the hand-optimized code, to prove to yourself that the hand-optimized code is still worth keeping after any automatic JIT improvements you make. And if you don't want the runtime overhead of the JIT, you can pre-JIT certain functions and distribute them as part of the binary's executable code.


Switching out to an interpreted language has got to be anathema to a rewrite-it-in-Rust project.


Can you give me your recruiters number?


They sent out an email about raising their prices earlier today. Perhaps it was everyone running for the doors.


and a sausage fest.

Edit: picture is from a Vienna meet up. Not OpenAI.


Nah. You seem like a crumpet man.


I heard back from them when I reached out to them in the past. We weren't the right fit, but Sam was professional and communicated well.

They're probably just busy.


> Not a problem for me, but every SGI out there is fixed function only.

Is that true? I remember SGI had a shader library for modeling light, aimed at the automotive market. All the demos and examples were showing off car paint colours in different environments.


SGIs that matter (MIPS, etc)

IRIX only supports about OpenGL 1.2. It does have a fragment shading extension though:

https://tech-pubs.net/reputable-archive/fragment_lighting.tx...


They got the N64 running on the MiSTer, so an Indy should be possible; they're closely related systems.

I'd love an Onyx/RE on an FPGA someday, next to my FPGA Cray.


The CPUs are close, but the Indy is otherwise pretty different from the N64. Totally different graphics architecture, and - relevant to getting it on MiSTer - it’s a workstation rather than a video game console, necessitating quite a bit more complexity. I’d be really surprised if it could be squeezed on.

(Though, full disclosure, I said the same thing about the N64 before the core for it came out - the folks working on MiSTer are incredible.)


Huh. I had thought the N64 was basically an Indy with XZ graphics. What was the RCP closest to?

I was always confused about why SGI didn't throw the RCP on a PCI card and dominate the PC graphics market.


To my knowledge - and I'm not an expert here - the N64 hardware is pretty unique and doesn't really resemble any of SGI's other chipsets. Not in precise capabilities - the XZ, for instance, didn't even support hardware texture mapping - and not in overall technical design.

It does seem a little bit like an ultra-simplified, integrated version of the RealityEngine [0]. The RealityEngine had "6, 8, or 12 Geometry Engines" split out across three to six boards, each powered by an Intel i860XP, that then passed their work along to Fragment Generators. This roughly corresponds to the RSP, which was just another MIPS core (with vector/matrix extensions), passing its work along to the RDP on the N64. I'm not sure how programmable the RealityEngine's pipeline was compared to the surprisingly flexible RSP.

Remember, the constraints for a graphics workstation are really different than for a game console - especially on the low-end, totally different corners are going to be cut. An Indy still needed to be able to generate a high resolution display and allow modelling complex scenes for film and TV; but while some degree of real-time 3D was important, it was expected that artists could be modelling using wireframe or simplified displays. A game console was displaying low-resolution and relatively low-detail scenes, but they still wanted them to look aesthetically "complete" - shading, textures, fog, lighting, particles - while running at real-time speeds. SGI used their expertise and built something custom-fit for the job at hand, rather than just reusing an existing solution.

[0] https://cseweb.ucsd.edu/~ravir/274/15/papers/p109-akeley.pdf


I would have loved to have that paper when I was learning 3D and OpenGL.


Nay, the N64 is pretty unique hardware-wise. Conceptually it's vaguely similar to the O2, the RCP is an R4000 fixed function CPU with some extra graphics instructions IIRC.


Yes. There were early models available with 320x200 or 640x480 resolution.

They didn't have accelerometers, so it was just a dumb screen on your face.

