I recently talked with a PI from a well-known university lab, and asked why they were doing a startup, given the ML research problems they were working on.
They said a company was the only way to get access to the compute power they needed for that research.
A startup sounds like a good solution, if they get paired with the right product- and business-minded people and together find a winning collaboration. (Edit: Or if they get acquired rapidly in the AI boom, and negotiate the right deal to enable their research longer-term.)
When I saw this the other day -- and it just went on and on, in a way a good human author writing this kind of story probably wouldn't -- I looked for a note that it was AI-generated, and I didn't find one.
All I found was a human name given as the author.
We might generously call the AI a ghostwriter, and the piece an unattributed collaboration, which IIUC is sometimes considered OK within the field of writing. But LLMs carry additional ethical baggage in the minds of writers. I don't think you'll find a sympathetic ear among professional writers on this.
I understand enthusiasm about tweaking AI, and/or enthusiasm about the commercial potential of that right now. But I'm disappointed to find an AI-generated article pushed on HN under the false pretense of being human-written. Especially an article that requires considerable investment of time even to skim.
I continue to resonate with the Oxide take when I hear this kind of sentiment expressed about AI prose:
"... LLM-generated prose undermines a social contract of sorts: absent LLMs, it is presumed that of the reader and the writer, it is the writer that has undertaken the greater intellectual exertion. (That is, it is more work to write than to read!) For the reader, this is important: should they struggle with an idea, they can reasonably assume that the writer themselves understands it — and it is the least a reader can do to labor to make sense of it.
If, however, prose is LLM-generated, this social contract becomes ripped up: a reader cannot assume that the writer understands their ideas because they might not so much have read the product of the LLM that they tasked to write it. If one is lucky, these are LLM hallucinations: obviously wrong and quickly discarded. If one is unlucky, however, it will be a kind of LLM-induced cognitive dissonance: a puzzle in which pieces don’t fit because there is in fact no puzzle at all. This can leave a reader frustrated: why should they spend more time reading prose than the writer spent writing it?"
I sadly agree with this sentiment. But to add my own thoughts, I wonder if our “human generation” (everyone consciously alive today) is simply the dinosaurs. In three decades we’ll have a society that has known LLMs from birth.
As such, we can’t comprehend the world they live in. A world in which you ask your device for any story and it gives you an entire book to read. I’d like to think that as humans we inevitably want whatever is next. So I’d like to think this future generation will learn not only to control it, but to be more creative than current people can even imagine.
Did people who used typewriters imagine a world with iPhones? Did people flying planes imagine self landing rockets? Did people riding horses imagine electric cars? Did people living in caves imagine ocean crossing ships?
I honestly can’t tell if you missed my point. As much as past writers and readers could imagine a version of our present, I also imagine that if they were transported here they would still be in awe of what they saw.
I agree. I imagine that a writer who predicted modern technology would still be in awe to see smartphone videoconferencing halfway around the globe finally realized.
And also be surprised by some of the uses to which it's put. And horrified by some of the societal backsliding despite what should be utopian technology.
That commoditization already happened for software developers, years ago. (Just look at the big-tech commodity worker interview process that even startups now mimic.)
Kudos to design studios who can still avoid that, and shine as unique talent.
Businesses naturally see their "suppliers" and "resources" as exchangeable. And to a degree, they really are, at the end of the day.
But it's still a non-trivial activity with long feedback loops that requires a level of expertise.
Making workers easily exchangeable requires processes that ultimately underutilise their abilities, settling on the lowest common denominator. Some businesses clearly can afford that, and want to. Pretty much by definition, that leads to mediocre work.
From what I gather, a good chunk, if not the majority, of agency work serves that particular need. But there are plenty of clients out there who want something else. Like all of mine.
Debian's interest, whether they know it or not, is for the government not to be able to mandate what features must be present in their open source software. They should be happy to have such a vocal advocate involved in this important fight.
Scene. Ext. Town street. Night. Invader military vehicles patrolling, announcing curfew through loudspeakers.
TEEN: *runs at invaders* Hey, you thugs! You can't make me obey! I support Bob, over there! *points at Bob's house*
THUGS: Grrr! Thugs smash!
BOB: Please! I have done nothing! I don't know who that teen is!
JOE: You should be happy to have such a vocal advocate in this important fight.
NARRATOR: Ironically, Bob and Jane were quietly plotting strategy and tactics for the Resistance. Until they and their children were dragged out into the street that night.
I think this site is either satire, or serious but with a certain kind of humor in which both they and the reader know they're lying (but it's in everyone's interest to play along).
They do say this:
> Is this legal? / our clean room process is based on well-established legal precedent. The robots performing reconstruction have provably never accessed the original source code. We maintain detailed audit logs that definitely exist and are available upon request to courts in select jurisdictions.
Unless they're rejecting almost all of the open source packages submitted by customers, on the grounds that those packages are in the training set of the foundation model they use, this is really the opposite of a cleanroom.
> By convention, the client looks under /satellite/ by default. If that path is already taken, place a satproto_root.json file at the domain root containing { "sat_root": "my-custom-repo" } — the client checks this first.
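The quoted discovery order can be sketched roughly like this. This is a minimal, speculative Python sketch: only `satellite`, `satproto_root.json`, and `sat_root` come from the quote; the function names and the injected `fetch` callable are my own illustration, not part of any real client.

```python
import json
from typing import Callable, Optional

# Default prefix per the quoted convention
DEFAULT_ROOT = "satellite"

def root_from_config(body: Optional[str]) -> str:
    """Pick the repo prefix from an optional satproto_root.json body.

    Falls back to the default prefix when the file is absent,
    malformed, or missing the "sat_root" key.
    """
    if body is None:
        return DEFAULT_ROOT
    try:
        config = json.loads(body)
    except json.JSONDecodeError:
        return DEFAULT_ROOT
    if isinstance(config, dict):
        return config.get("sat_root", DEFAULT_ROOT)
    return DEFAULT_ROOT

def resolve_root(domain: str, fetch: Callable[[str], Optional[str]]) -> str:
    """Check the domain-root config first, as the quote describes.

    `fetch` returns the response body for a URL, or None on 404.
    """
    return root_from_config(fetch(f"https://{domain}/satproto_root.json"))
```

The point of the sketch is just the lookup order: the root-level JSON file, if present, overrides the `/satellite/` default.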
Unfortunately. It's a great solution to a problem lots of tools face. A pity that people trying to establish new standards apparently aren't aware of it.
Ah, just like AT Proto when it was released, introducing compatibility hazards and security vulnerabilities by putting stuff in the root rather than in .well-known. Sigh.
No. That is for the host/domain as a whole, not a specific stream.
I might want several directories in the future, and even if I don't, I might want it separate from my .well-known entries and robots.txt. Many, many reasons I can think of not to blend these.
Even the glossy hype intro aside, the laughing enthusiasm in parts of this video -- such as when mentioning a bombing, and advances in technological capability in connection with it -- hits a note that should be called out.
This is one of many recent occasions to remind ourselves: War is not entertainment.
War is horrific. It's lives and families ruined. Misery, and destruction.
Professionals in quiet rooms may have moments of dark humor about some of the finer details, which they keep to themselves.
Everyone else should be universally horrified. Except for moments of noting genuine goodness in the face of the horror.
Historically, Smalltalk has had many browsers (views). This System Browser is one of many, and the busiest-looking.
You can browse within it, and also spawn off other kinds of browsers from it.
And the set of browsers is itself extensible. As someone new to Smalltalk, I was pretty easily able to add a visual class hierarchy browser to this environment:
Half the things we know or think about in HCI, the people at PARC figured out before we were born, and sometimes before the hardware to test it existed.