This is just CRDT merges and better diffs?? I think the future of version control is much, much weirder than this. Like if you have CRDTs why not have ephemeral branches with real-time collaborative editing and live CI as you type
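For a taste of why CRDTs make "ephemeral branches" plausible, here's a toy grow-only counter (not any real VCS's data structure, just the textbook example): merge is commutative and idempotent, so replicas can sync in any order and always converge with no conflicts.

```python
# Minimal grow-only counter CRDT (G-Counter). Merge is an element-wise
# max, so it never conflicts regardless of the order replicas sync in.

class GCounter:
    def __init__(self):
        self.counts = {}  # replica id -> count

    def increment(self, replica, n=1):
        self.counts[replica] = self.counts.get(replica, 0) + n

    def merge(self, other):
        merged = GCounter()
        for r in set(self.counts) | set(other.counts):
            merged.counts[r] = max(self.counts.get(r, 0),
                                   other.counts.get(r, 0))
        return merged

    def value(self):
        return sum(self.counts.values())

# Two replicas edit concurrently, then merge; order doesn't matter.
a, b = GCounter(), GCounter()
a.increment("alice", 3)
b.increment("bob", 2)
assert a.merge(b).value() == b.merge(a).value() == 5
assert a.merge(a).value() == 3  # idempotent
```

Real collaborative-editing CRDTs replace the counter with a sequence type, but the merge properties that would make live branches workable are the same.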
I used to work at a startup that was trying to replace ads as the funding source for news (we failed, obviously)
but the crazy thing we discovered is that the people who run news websites mostly don’t know where their ads are coming from, have forgotten how the ad system was installed in the first place, and cannot turn them off if they try
we actually shipped a server-side ad blocker, for a partner who had so completely lost control of their own platform that it was the only way to make the ads stop
Maybe there are some details missing here, but asking for more detailed or tailored feedback makes it seem like he cares and was willing to hear you out. Sometimes people are in their own industry for so long that they forget what their industry and tools look like to outside eyes. As a quick example, a menu that seems simple to him could have been overwhelming to you.
I ran into this a while back at a talk when the speaker used the phrase "perfectly ordinary sodium iodide gamma ray spectrometer". I pointed out to him afterwards that that's not something that most people would expect to follow "perfectly ordinary" in a sentence, and he explained that, yes, today you'd be using thallium-doped CsI or NaI scintillators instead.
If the response is an exact quote, the tone is "you must be stupid." It doesn't convey caring and willingness to hear things, and if they can't understand that before sending it, it makes perfect sense that the product sucks, and it will only get worse.
It’s not the tone. It’s how you perceive the tone. Be careful, especially in a culturally diverse and international environment. Plenty of cultures cringe when they receive overly friendly phrasing, as it will not sound honest and curious to them but condescending and fake (in this context it may be perceived as sarcasm); whereas others will experience and mean it as straightforward openness.
Communication is hard. Even harder in writing. An approach that usually works is to assume friendliness.
> we actually shipped a server-side ad blocker, for a partner who had so completely lost control of their own platform that it was the only way to make the ads stop
I'll bolster it. I've worked on a site-you-have-heard-of. They were struggling and as a response they would change marketing leadership basically every year to try to find a new way to reach a new or different demographic. Every year the new marketing leader would say "we're not doing any of that previous idiot's strategy, as I am the one who knows best". And as each marketer tried to make their mark, 50 new Google Tag Manager script injects would appear.
Now, whose job was it to remove the previous 200 Tag Manager scripts? Obviously the last guy's, because those were his experiments and he was in charge at the time so new guy was clearly not responsible for it. And at the end of the year, 250 Tag Manager injections would now exist and we would turn the page to reveal a new CMO.
And thus ends the parable of how I put a wrapping feature flag on the code that added Google Tag Manager to the site so that I could display the effects of the insanity and demonstrate why the PageSpeed metrics were ass and why engineering couldn't fix it (in a way they would permit, anyways).
Somewhere in the net of tubes of our AC we have a machine that produces rocks. They randomly shoot out of the air vents; please install ballistic shields in front of the vents to stop them from hitting our customers.
Which sounds insane until you realise that you’ve just described in outline something very like the iron dome missile defence system, which actually exists in reality.
(And of course you’ll get no argument from me that it’s insane that such things need to exist at all, but such is the world we live in.)
Thank you for this insight. Even as a developer, I can easily lose track of all the trackers I've included in a webpage. Usually, if I see a tracker in the code, it's already obfuscated and I provide the benefit of the doubt to leave it in.
It's only when I jump back into the ads management page where I'm able to get a better idea. Even then, the specific trackers are hidden behind a variety of menu items that can change every time. This post made me realize that I need a better strategy as things are getting ridiculous with ads.
I used to be someone who didn't use ad blockers because some of them are botnets. It's just not the same anymore, as I would trust the botnets with my data over the advertisers.
Was your company called Scroll by chance, the one that Twitter acquired?
When I ran Android Police, we were one of the largest Scroll users in the beginning and I was pretty upset when Scroll shut down.
However, it never amounted to any meaningful revenue and was just a nice way to implement ad-free subscriptions across various sites. Other big sites used it too, like The Verge and Gizmodo and I thought it had some potential.
> the people who run news websites mostly don’t know where their ads are coming from, have forgotten how the ad system was installed in the first place, and cannot turn them off if they try
I think this might be selection bias in your customer base. I've had some friends who worked at a local news outlet. The ads on their website were a big deal and they had a full-time position dedicated to managing internet advertising.
Oh Lord you need to take on some non-tech companies as clients if this surprises you. I've had clients who forgot they had a website and thought that monthly hosting bill was just for something to do with the back-office Internet connection.
It was a janitorial temp company and they didn't really care about computers. Whoever had been their IT guy before me had made a pretty neat website that would let clients book cleaning staff and give them a bird's-eye view of upcoming staffing needs. It was marginally better than their existing phone and email based system but not enough to make them change it, and over years of saying "let's try it next quarter" eventually everybody forgot about it.
> How the hell did they end up not knowing how to manage the content on their site?
The knowledge atrophied. To me the harder problem is keeping knowledge off the bus… it gets on of its own accord and then boom: knowledge lost. People leave the company, and with them, lessons. People are in constant crunch time, and don't have time for the last 2% of the work that takes 98% of the time, like adequately documenting the weird bits of the system. Half the time the corp site is an afterthought to main engineering, relegated to some CMS that marketing can have, and trust me marketing is not writing docs.
Company leadership at nigh every job I have worked on encourages the company, collectively, to forget. Dev turnovers at most places I've worked average around 2y… that's knowledge, just walking out the door.
hi, I'm a dev who was working in journalism around thirty years ago and still has some connections.
The entire industry is run by actual journalists, it's one of the few industries where people who know how to do the job still rise to the top. Unlike most other industries, where the top brass are MBAs who don't actually know how to do things like build airplanes or write software or what have you. Which is honestly great except when it's not.
The web has never found a way to make journalism as profitable as it was back in the print days, so they mostly see technologists as people who get in their way, as disposable or replaceable.
So imagine the state of their tech stack — CMS's integrated with the front end, if not Wordpress then something like that, nothing headless. "Hey, you should remove this plugin." What's a plugin? "Look... this Bonzai Buddy, who installed it?" Some guy who left twenty years ago. And it's not in a template, it's in the articles and executed by an eval().
They have no motivation to fix any of it, because again, web sites for newspapers aren't profitable. Subscriptions are profitable. I think the real reason why Substack is successful is not that email is a good format for journalism — in fact it’s terrible — but because you generally cannot inject javascript into it. Which comes back to Gruber’s point — javascript was a disaster for the web as a document standard.
(personally, I haven't read news on the web in something like twenty years — RSS ftw)
You would be correct, but...and I say this as a subscriber to Apple's "all-in-one" package...Apple News+ is in many ways garbage. Low-rent articles from publications whose time has long passed (looking at you, Popular Mechanics), with Taboola-grade ads interspersed (as Gruber said recently, how many 30-something blonde women need hearing aids?).
That said, stay away from the front page and go straight to your selected publications, and it's a good deal with access to WSJ, LA Times, and what have you. You still get crappy ads (which I can't seem to find a way to block with PiHole), but the content is there. For all my bitching, I'd still recommend it.
I agree that Apple News+ is bad, but I think this is an example of why these plans always fail:
Someone says "I would pay good money for a service that does..." and then the service that does the thing appears and the goalposts keep moving as people realize their threshold for wanting to pay for something is higher than they originally thought.
Popular Mechanics is so sad these days. Like the Discovery Channel, they just had to take something that was good and intentionally turn it into garbage for some coin.
"For all my bitching, I'd still recommend it" has been my take since I got it sometime last year. It's kind of remarkable -- the ads are absolute trash and the apps, while not bad, are a little weird in hard-to-define ways other than "Apple used to do better at this whole UI thing". But if you want just a handful of the paywalled publications it unlocks for you, it's a great deal.
I pay for Apple One and yet the apple news app on my phone is still riddled with ads with weird AI generated people and horrible articles from crappy publishers pushing some other sensationalist garbage.
I would gladly pay an extra $20/m for a Disney style internet fast pass where I can browse any site that is subscribed to the service without ads, cookie preferences already set, no login or login managed by the extension for the fast pass service, and maybe a search provider that allows me to filter out SEO spam sites and adwhores like Meta and Google, and where some significant portion of my monthly pay is sent to the participating sites I browse.
My overriding concern is that, given how every other web service has gone, once they have sufficient ownership of the space they will increase the cost, likely significantly, and then likely add their own ads on top of everything else.
It will take a literal once in a century genius to make something like this that actually works and that companies will latch onto.
There are enormous piles of money looming around every corner seeking a return on investment. If you have users that are enjoying a service, one of those piles of money can buy out the owner, double the price, implement ads, and sell all the private data. The bet they are making is it will take longer for the userbase to quit than it will take to make back their investment.
Every popular / beloved service is a target for these giant piles of cash. The fact that lots of people like it is de facto proof that it's underpriced, or over-resourced, or coddles its users with too much content. According to the finance industry, a stable business relationship should have the userbase reluctantly concluding that they have no other option, gritting their teeth and opening their wallet - and that's the sort of maximally profitable entity that a giant pile of cash will leave alone, letting it just exist, as a business.
I think Kagi is kind of making this happen currently with search. Not sure how their adoption numbers are going, but people are willing to pay $$ for better search with no "sponsored content" rising to the top.
I'm hesitant about a lot of this stuff because it's very easy to get to a place where we let net neutrality degrade even more than it already has. Part of the way that platforms indoctrinate us to accept that paying extra for quality of service or "fast lanes" for specific content types are "necessary" is to degrade the existing experience so much that it seems inevitable.
Good catch. I didn't even think about the fast-lanes fiasco. I don't know why businesses have decided that, just because they've connected to the internet, the internet owes them something.
It should be a public utility. It should be as ad free as reasonable. It should not track you.
The internet should be a lot of things that it currently isn't all because rent-seeking money and power grubbing bastards have too many of the strings and love pulling them like they're pulling their puds.
Then there's the TV streaming problem where the three shows (or sites) you're interested in viewing regularly belong to three different subscription services, and they're jealously set against uniting. I guess that's like the same problem as individual paywalled sites, but bigger.
Everyone who thinks that some kind of subscription service will replace ads, needs to take a look at history. Cable TV, satellite TV, etc., might have started ad free, but they soon adopted ads. So you ended up paying for a subscription in addition to high numbers of ads.
I think that cable represents a lot of failures that don't need to be repeated. If someone were serious about starting an ad-free subscription service, there are things they can do to help ensure it stays ad-free. An easy one would be contract provisions requiring the company to make massive payouts to customers if ads are ever introduced to the service. That kind of provision doesn't cost an ad-free company anything to include, but when somebody gets greedy and starts considering adding ads, it would make the idea much less attractive and could force them to look at other ways to enshittify their product.
> contract provisions that would require the company to
IANAL but I suspect bankruptcy law is a subtle and chronic bad influence here.
If a well-behaved company has financial trouble, formerly-binding promises around privacy or ethics may get voided in the name of somehow turning the whole mess into money for creditors. Then the new ownership may be able to do whatever they want with the data.
If the prior management deleted everything before the sale, they could get into legal trouble for destroying "valuable assets" and wrongly prioritizing customers over creditors.
Cable didn't start ad free. It started because some valley communities couldn't get a signal at all, so they put one community antenna on high ground and ran a cable to houses to deliver normal broadcast TV, with ads, to each house. A few ad-free stations came later.
Others mention Apple news+ but there are actually a bunch of services that do this. Zinio is one that I've encountered, but a quick search shows that there are also Magzter, Readly, Flipboard, etc etc. I can't speak to their relative merits/range of content/user hostility. In the early 2010s I used to use one where you bought credits and paid per article (usually on the order of $1-$2 iirc, but depended on the source/length). Can't remember what it was called and I don't see it on any of these lists, so maybe it no longer exists or was bought up.
Anyway, this is something you can have if you actually want it.
What frustrates me to no end is that Youtube makes about $2 per user per month from ads. Yet if I want to go ad free, they expect me to pay $14 per month.
Why in the hell would they not just sell it to me at cost for $2. Heck, I'll even say I'll be a customer for the REST OF TIME if they did that. I understand why Netflix and other vendors charge $12 - $20 because it has to pay for the copyright. But Youtube does NOT. It's a fucking scam to make us pay a premium.
I refuse to buy Youtube ad free until they drop the price to something $3 or below...
My guess is that the $2/user/month thing is an average across all of the users, and the fact that you use YouTube enough to even consider to pay to go ad free puts you in the much higher range of dollars-per-month users such that $14/month may even lose them money.
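A back-of-envelope illustration of that averaging argument, with invented numbers: if 90% of users are light viewers worth $0.50/month and the blended average is $2, the heavy 10% must be worth about $15.50 each, which lands right around the Premium price.

```python
# Invented numbers: solve for what a heavy viewer must be worth if
# the oft-quoted $2/user/month is an average dominated by light users.
light_share, light_value = 0.90, 0.50   # 90% of users, $0.50/month each
blended_avg = 2.00                      # the average across everyone
heavy_share = 1.0 - light_share
heavy_value = (blended_avg - light_share * light_value) / heavy_share
print(round(heavy_value, 2))  # → 15.5
```

The exact split is a guess; the point is just that the people motivated enough to pay are the ones whose ad impressions were worth far more than the average.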
FWIW, YouTube Premium Lite is $8/month. It removes ads from most content, just not music, and doesn’t include YouTube Music. For me it’s well worth it.
Presumably because advertisers won't continue to pay $2/user/month for a pool of users that has been denuded of all the users with three bucks a month to rub together for ad-free YouTube.
Unless you have some first-hand information that I don't, you are more than 10x off.
> I refuse to buy Youtube ad free until they drop the price to something $3 or below...
Therein lies the problem. Your eyeballs (assuming you're well employed with disposable income) are worth another 10x to advertisers.
If I were to make a guess, Youtube for sure will lose money at $14/month on your specific browser.
You are literally subsidizing internet for, let us say for arguments sake, some zip code in rural america or <sub any rural part of the world> 's Youtube streaming needs.
At least in my case, I had Youtube Red and would watch a few hours of content per day. Then I canceled and found the ads so unreasonable that I just stopped using youtube altogether. Now they make no money from me.
There is a comment somewhere on HN where a person described implementing ads for a small, hobby website.
Users complained about the price to go ad-free (something like $25 per year).
The commenter revealed that the actual revenue from ads was much more than $25 per year. Every person who purchased the ad-free option actually cost them money.
-----
The lesson I took away is that ads pay more than we expect, though I didn't know the specifics of YouTube.
By providing an ad-free option, they are really allowing the user to out-bid the advertiser.
I think for most people, they would not be willing to pay more to avoid the ad than the ad seller is willing to pay to show it. It's a weird conundrum--but people are very cheap.
I think that's the angle I'm going for. If Youtube was $25 per year or even $50 per year, it would be a no-brainer for me to pay that. Even if $50 does NOT outbid the advertiser, wouldn't YT rather have guaranteed income than constantly chase high bidders?
Youtube claims "we’ve reached 125 million YouTube Music and Premium subscribers globally, including trials"
And I bet most of that is trials and it's probably cumulative rather than right now. I bet that 500m people paying $50 /year would actually make them real money that is dependable - since most people would pay for it again next year to avoid ads. And the lower price would skyrocket subscriptions.
What if they dropped the average price of YouTube Premium to $2? And charged you $20 but people in Africa $1. Then it’d be more comparable to ad revenue. Would you be happier then?
...And you'll find that when you do so, magically, you seem to get logged out more frequently, and because of their UI, you likely won't notice until the sneaking suspicion that the quality of your recommendations has dropped catches up with you
Which is a great idea and a great site, but why is it even necessary? The sheer dumbness that means there are 12312 Netflix-class streaming services is beyond ridiculous. I used to love one-stop shopping; now it's so fragmented I just went back to piracy. I don't have time to monkey with 10 sub services.
My point? As soon as such a service existed, there'd actually be 50 of them, and the stuff you wanted would be on 8 separate services.
it was a single monthly subscription to a bundle, and the clever part is we would measure time spent on each site and divide up the money proportionally, so the site you spent the most time reading would get paid the most.
Our founder had the idea that this would incentivize higher-quality content. We never got enough paying subscribers to really pull it off
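The mechanics of the split described above can be sketched in a few lines (the function name and figures here are invented for illustration, not the startup's actual code):

```python
# Divide one subscriber's monthly fee across the bundle's sites in
# proportion to time spent reading each, as the comment describes.

def split_revenue(monthly_fee, time_spent):
    """time_spent: dict of site -> minutes read this month."""
    total = sum(time_spent.values())
    if total == 0:
        return {site: 0.0 for site in time_spent}
    return {site: monthly_fee * minutes / total
            for site, minutes in time_spent.items()}

payouts = split_revenue(10.0, {"site-a": 300, "site-b": 100, "site-c": 100})
assert payouts["site-a"] == 6.0   # 300/500 of the $10 fee
assert payouts["site-b"] == payouts["site-c"] == 2.0
```

The interesting design question is the denominator: splitting by time read (rather than clicks or pageviews) is what creates the incentive for content worth lingering on.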
Interesting approach! I feel like this is a hard space to break into, because of the friction -- both having to convince content hosts to opt in, and consumers to subscribe.
Not OP, but I’ll throw out that many large commercial websites don’t directly integrate ads themselves. Instead, they use a tag manager.
Often, that tag manager isn’t managed by the technology department, and well-meaning marketing people continue to sign contracts and jam JavaScript into the front end. If there’s also not a good content security policy in place, ad networks quickly become unregulated, all sorts of strange ads come in, and it’s very difficult to control them.
There are a lot of “MarTech” consultants out there that help clients essentially burn their tag manager to the ground, then build it from the ground up to work properly.
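For illustration, the bluntest version of that rebuild is a Content-Security-Policy header that whitelists script origins, so nothing a marketer pastes into the tag manager console can execute from an unapproved host (the hostname below is a placeholder, not a recommendation):

```
Content-Security-Policy: script-src 'self' https://www.googletagmanager.com
```

Anything the tag manager tries to inject from a host not on the list simply won't run, which is exactly the regulation the comment says is usually missing.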
one of the other head-exploding experiences from that startup was when a major cell-phone company sat down with us and said, we have an idea: the ad-free cell phone. What if, every time a website would normally show an ad, we just paid them not to, at about the same rate the ads are paying. How much would that cost?
and the answer is: not much money at all. we ran the numbers and a typical user’s browsing was worth something like $20/month total across every site and every app combined
but no one can figure out the logistics, so we’re stuck with ads
That is a really interesting idea! I immediately see some problems, and you probably already thought through these while working on it, but I'm curious to hear if there were good solutions, or if they were non-issues for some reason:
- If it's a niche product, you can just "buy out" the ad space on the website. But if the phone becomes popular enough that the majority of a website's ad revenue comes through this route, there starts to be a bit of an extortion-like opportunity for the website owners. The website has an incentive to show _even more annoying ads_, with the knowledge that most users actually won't see the ads, but they'll still get paid as if they did. They can say "oh, we're adding 5 more banners, so you'll need to pay us 5x the amount you used to"
- I also see problems from the other direction (from the companies purchasing the ad space). By paying a website _not_ to show ads, you're essentially buying ad space. But the other purchasers-of-ad-space will still exist, and will now be competing for a more limited amount of space. So prices should rise, as demand rises. And as prices rise, you'll have to pay the websites more to keep them ad free. This should converge to a new equilibrium eventually, but I wonder if you accounted for that? If you get significant market share, the new equilibrium would be really expensive, because you're essentially trying to out-purchase everyone.
there’s a million small-scale AI apps that just aren’t worth building because there’s no way to do the billing that makes sense. If Anthropic wanted to own that market, they could introduce a bring-your-own-Claude model, where you log in with Claude and token costs get billed to your personal account (after some reasonable monthly freebies from your subscription).
But the big guys don’t seem interested in this, maybe some lesser known model will carve out this space
I shudder to think what the industry will look like if software development and delivery becomes like YouTubing, where the whole stack and monetization are funneled through a single company (or a couple) that gets to decide who gets how much money.
I am a bit worried that this is the situation I am in with my (unpublished) commercial app right now: one of the major pain points I have is that while I have no doubt the app provides value in itself, I am worried about how many potential users will actually accept paying inference per token...
As an independent dev I also unfortunately don't have investors backing me to subsidize inference for my subscription plan.
I recommend Kimi. People can haggle their way to a cheap first month and use it to try out your project, and the best part is that Kimi intentionally supports API usage on any of their subscription plans. They also recently changed their billing to be token-usage-based like everyone else, instead of their previous tool-calling limits.
It's seriously one of the best models, very comparable to Sonnet/Opus, although Kimi isn't the best at coding. I think it's a really great, solid model overall and might just be worth it in your use case?
Is the use case extremely coding-intensive (where even a minor improvement can matter at 10-100x the cost), or more general? If it's not, then I can recommend Kimi.
I was wondering when I’d see someone try this! I started work on a very similar idea last year but kept getting distracted by weirder and weirder ideas along the way, and never shipped anything. So, bravo!
there’s some debate about whether this is in the spirit of the _original_ Ralph, because it keeps too much context history around. But in practice Claude Code compactions are so low-quality that it’s basically the same as clearing the history every few turns
I’ve had good luck giving it goals like “keep working until the integration test passes on GitHub CI” - that was my longest run, actually, it ran unattended for 24 hours before solving the bug
The creator of Claude Code said you can just get Ralph to run /clear. I think it's hilarious nobody (myself included!) thought of that or tried it and just assumed it couldn't run slash commands like that.
I’m not sure it still makes sense to do OS research so close to the metal. Most computing is done up on the application level, and our abstractions there suck, and I haven’t seen any evidence that “everything is a file” helps much in a world of web APIs and SQL databases
Some of us are still interested in the world underneath all that web stuff!
Multiple experimental operating systems at multiple abstraction levels sounds like a good idea, though. What sort of system software would you like to build?
I’m actually building an “OS” that’s up a level. it’s more like git, it has a concept of files but they’re documents in a distributed store. I can experiment with interaction patterns without caring about device drivers
Operating systems are where device drivers live. It sounds awfully impractical to develop alternatives at this stage. I think OP is right.
I think OSes should just freeze all their features right now. Does anyone remember all the weird churn in the world of Linux, where (i) KDE changed from version 3 to 4, which broke everyone's KDE completely unnecessarily (ii) GNOME changed from version 2 to 3, which did the same (iii) Ubuntu Linux decided to change their desktop environment away from GNOME for no reason - but then unchanged it a few years later? When all was said and done, nothing substantive really got done.
So stop changing things at the OS level. Only make conservative changes which don't break the APIs and UIs. Time to feature-freeze, and work on the layers above. If the upper layers take over the work of the lower layers, then over time the lower layers can get silently replaced.
I have never had so much negative feedback and ad-hom attacks on HN as for that story, I think. :-D
Short version, the chronology goes like this:
2004: Ubuntu does the first more-or-less consumer-quality desktop Linux that is 100% free of charge. No paid version. It uses the current best of breed FOSS components and they choose GNOME 2, Mozilla, and OpenOffice.
By 2006 Ubuntu 6.06 "Dapper Drake" comes out, the first LTS. It is catching on a bit.
Fedora Core 6 and RHEL 4 are also getting established, and both use GNOME 2. Every major distro offers GNOME 2, even KDE-centric ones like SUSE. Paid distros like Mandriva and SUSE are starting to get in some trouble -- why pay when Ubuntu does the job?
Even Solaris uses GNOME 2.
2006-2007, MS is getting worried and starts talking about suing. It doesn't know whom to sue yet, so it just makes intentionally vague threats, like claiming the Linux desktop infringes "about 235 patents".
This is visibly true if you are 35-40 years old: if you remember desktop GUI OSes before 1995, they were all over the place. Most had desktop drive icons. Most had a global menu bar at the top. This is because most copied MacOS. Windows was an ugly mess and only lunatics copied that. (Enter the Open Group with Motif.)
But then came Win95. Huge hit.
After 1995, every GUI gets a task bar, it gets buttons for apps, even window managers like Fvwm95 and soon after IceWM. QNX Neutrino looks like it. OS/2 Warp 4 looks like it. Everyone copies it.
Around the time NT 4 is out and Win98 is taking shape, both KDE and GNOME get going and copy the Win9x look and feel. Xfce dumps its CDE look and feel, goes FOSS, and becomes a Win95 copy.
MS had a case. Everyone had copied them. MS is not stupid and it's been sued lots of times. You betcha it patented everything and kept the receipts. The only problem it has is: who does it sue?
RH says no. GNOME 3 says "oh noes our industry-leading GUI is, er, yeah, stale, it's stagnant, it's not changing, so what we're gonna do is rip it up and start again! With no taskbar and no hierarchical start menu and no menu bars in windows and no OK and CANCEL buttons at the bottom" and all the other things that they can identify that are from Win9x.
GNOME is mainly sponsored by Red Hat.
Canonical tries to get involved; RH says fsck off. It can't use KDE, that's visibly a ripoff. Ditto Xfce, Enlightenment, etc. LXDE doesn't exist yet.
So it does its own thing based on the Netbook Launcher. If it daren't imitate Windows then what's the leading other candidate? This Mac OS X thing is taking off. It has borrowed some stuff from Windows like Cmd+Tab and Fast User Switching and stuff and got away with it. Let's do that, then.
SUSE just wearily says "OK, how much? Where do we sign?"
RISC OS had a recognizable task bar around 1987, so 2006-2007 is just long enough for any patent on that concept to definitely expire. This story doesn't make any sense. As for dialog boxes with buttons at the bottom and plenty of buttons inside apps, the Amiga had them in 1984.
Yes, the Icon Bar is prior art, but there are a few problems with that.
1. It directly inspired the NeXTstep Dock.
This is unprovable after so long, but the strong suspicion is that the Dock inspired Windows 4 "Chicago" (later Windows 95) -- MS definitely knew of NeXT, but probably never heard of Acorn.
So it's 2nd hand inspiration.
2. The Dock isn't a taskbar either.
3. What the prior art may be doesn't matter unless Acorn asserted it, which AFAIK it didn't, as it no longer existed by the time of the legal threats. Nobody else did either.
4. The product development of Win95 is well documented and you can see WIP versions, get them from the Internet Archive and run them, or just peruse screenshot galleries.
The odd thing is that the early development versions look less like the Dock or Icon Bar than later ones. It's not a direct copy: it's convergent evolution. If they'd copied, they would have got there a lot sooner, and it would be more similar than it is.
> so 2006-2007 is just long enough for any patent on that concept to definitely expire.
RISC OS as Arthur: 1987
NeXTstep 0.8 demo: 1988
Windows "Chicago" test builds: 1993, 5Y later, well inside a 20Y patent lifespan
Win95 release: 8Y later
KDE first release: 1998
GNOME first release: 1999
The chronology doesn't add up, IMHO.
> This story doesn't make any sense. As for dialog boxes with buttons at the bottom and plenty of buttons inside apps, the Amiga had them in 1984.
You're missing a different point here.
Buttons at the bottom date back to at least the Lisa.
The point is that GNOME 3 visibly and demonstrably was trying to avoid potential litigation by moving them to the CSD bar at the top. Just as in 1983 or so GEM made its menu bar drop-down instead of pull-down (menus open on mouseover, not on click) and in 1985 or so AmigaOS made them appear and open only on a right-click -- in attempts to avoid getting sued by Apple.
> The point is that GNOME 3 visibly and demonstrably was trying to avoid potential litigation by moving them to the CSD bar at the top.
Well, the buttons in the titlebar at the top are reminiscent of old Windows CE dialog boxes, so I guess they're not really original either! What both Unity and GNOME 3 look like to me is an honest attempt to take an early lead in "convergence" with mobile touch-based solutions. They first came up in the netbook era, when making Linux run out-of-the-box on a market-leading small-screen, perhaps touch-based device was quite easy - a kind of ease we're only now getting back to, in fact.
That's why it's a research OS: a lot of people (or at least some) think that the current range of mainstream OSes is not very well designed, and that we can do better.
I'm not saying Plan 9 is the alternative, but it is kind of amazing how un-networked modern Operating Systems are, and we just rely on disparate apps and protocols to make it feel like the OS is integrated into networks, but they only semi-are.
I didn’t really see the appeal until I learned how to use FUSE.
There’s something elegant about filesystems. Even more than pipes, filesystems can be used to glue programs together. Want to control your webcam with Vim? Expose a writable file. Want to share a device across the network? Expose it as a file system, mount that filesystem on your computer.
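The webcam idea can be sketched with a plain FIFO (named pipe) rather than a real FUSE filesystem - a toy showing the shape of the interface, where any program that can open the path can drive the "device". All the names here are made up for illustration:

```python
# Toy illustration of "control a program through a file": one side
# reads commands from a FIFO while any other program just writes to
# the path. A real setup would use FUSE or a device file; the FIFO
# only sketches the interaction pattern.
import os
import tempfile
import threading

fifo_path = os.path.join(tempfile.mkdtemp(), "ctl")
os.mkfifo(fifo_path)

received = []

def consumer():
    # Blocks until a writer opens the FIFO, then reads commands line by line.
    with open(fifo_path) as f:
        for line in f:
            received.append(line.strip())

t = threading.Thread(target=consumer)
t.start()

# Any process with access to the path can send commands.
with open(fifo_path, "w") as f:
    f.write("start\n")
    f.write("stop\n")

t.join()
print(received)  # ['start', 'stop']
```

The point is that the "API" is just open/write/close, so a shell one-liner, Vim, or anything else can use it without linking against a library.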
Idk, I still find low-level OS stuff super interesting because it hasn't had a rework in so long, despite everything we've learnt since the dawn of modern computing: drives larger than a few MBs, super fast memory and fast cryptography, to name a few.
It's interesting to imagine a new OS that incorporates these changes from its infancy.
I appreciate all of the effort that Linux, BSD, Android, QNX and closed-source OSs have put into building upon existing ideas and innovating gradually on them. But man, I really want to see something better than "everything is a file". I really enjoyed the stuff BeOS was pitching.
The most "research" thing I'm aware of 9front doing (since you're speaking in the present tense) is that GEFS needed to work out a lot of things for itself that weren't in the Bε-tree proof-of-concept FS that came before.
I dunno how "close to the metal" you'd consider that.
("GEFS" being a disk filesystem that's been discussed on HN.)
The "everything is a file" approach is nice in many cases, though I'm not sure it works everywhere. Maybe if done right. Subversion (SVN) shows branches as separate file trees.. and ClearCase too (though I'm on thin ice with ClearCase, having used it very little). And I just can't stand the file-oriented way SVN works; I could never get used to it.
But there are a lot of other cases where "it's a file" does work, I've experimented with creating Fuse filesystem interfaces to some stuff now and then.
You're going to have to explain to me how a parametrized request/response system like calling a Web API or making a SQL query can be mapped to reading files. I've seen some stuff that people do with FUSE and it looks like ridiculous circus hoop jumping to make the Brainfuck-is-Turing-complete version of a query system. We have syntax for a reason.
Typically, if you were writing your hypothetical sql client in rc shell, you'd implement an interface that looks something like:
  <>/mnt/sql/clone{
    echo 'SELECT * from ...' >[1=0]
    cat /mnt/sql/^`{read}^/data # or awk, or whatever
  }
This is also roughly how webfs works. Making network connections from the shell follows the same pattern. So, for that matter, does making network connections from C, just the file descriptor management is in C.
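For readers who don't speak rc, the clone/data conversation above can be mocked up in ordinary Python - a plain in-memory toy, not a real 9P file server, and every name here is invented:

```python
# Toy model of the Plan 9 clone/ctl/data pattern: opening "clone"
# allocates a fresh session, writing to the session sends the query,
# reading "data" returns the result. No real filesystem is involved.
import itertools

class ToySqlFs:
    def __init__(self):
        self._ids = itertools.count()
        self.sessions = {}

    def clone(self):
        # In Plan 9 you'd open /mnt/sql/clone and read the session id back.
        sid = str(next(self._ids))
        self.sessions[sid] = {"query": None}
        return sid

    def write_ctl(self, sid, query):
        self.sessions[sid]["query"] = query

    def read_data(self, sid):
        # A real server would run the query; here we just echo it back.
        return f"ran: {self.sessions[sid]['query']}"

fs = ToySqlFs()
sid = fs.clone()                      # like: <>/mnt/sql/clone
fs.write_ctl(sid, "SELECT * FROM t")  # like: echo '...' >[1=0]
print(fs.read_data(sid))              # like: cat /mnt/sql/$sid/data
```

The takeaway is that the whole protocol is open/read/write on a few well-known names, which is why the same pattern serves webfs, network connections, and so on.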
This is... I don't know. I don't get why I would care to sling SQL over a file system versus a network socket.
I mean, Postgres could offer an SSH interface as a dumb pipe to psql to just have you push text SQL queries in your application. But it doesn't, it offers a binary protocol over a network socket. All the database engines have had the same decision point and have basically gone down the same path of implementing a wire protocol over a persistent socket connection.
So yeah, I don't get what doing things this way would give me as either a service provider or a service consumer. It looks like video game achievements for OS development nerds, "unlocked 'everything is a file'." But it doesn't look like it actually enables anything meaningful.
But if it requires understanding of a data protocol, it doesn't really matter if it's over the file system or a socket or flock of coked-up carrier pigeons. You still need to write custom user space code somewhere. Exposing it over the file system doesn't magically make composable applications, it just shuffles the code around a bit.
In other words, the transport protocol is just not the hard part of anything.
It's not hard, but it's sure a huge portion of the repeated boilerplate glue. Additionally, the data protocols are also fairly standardized in Plan 9; the typical format is tabular plain text with '%q'-verb quoting.
There's a reason that the 9front implementation of things usually ends up at about 10% the size of the upstream.
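As a rough sketch of what that quoting convention looks like (this is a hand-written Python approximation, not the real libc routine): fields containing whitespace or quotes get wrapped in single quotes, with embedded quotes doubled, so tabular plain text stays unambiguous:

```python
def quote(field):
    # Approximation of Plan 9's '%q' quoting: leave plain fields alone,
    # otherwise wrap in single quotes and double any embedded quote.
    if field and not any(c in field for c in " \t\n'"):
        return field
    return "'" + field.replace("'", "''") + "'"

print(quote("hello"))      # hello
print(quote("it's here"))  # 'it''s here'
```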
The benefit is that you can allocate arbitrary computers to compute arbitrary things. As it is now, you have to use kubernetes and it's a comedy. Though perhaps the same in effect, there are dozens of layers of abstraction that will forever sting you.
You're thinking from the perspective of the terminal user—ie, a drooling, barely-conscious human trying to grasp syntax and legal oddities of long-dead humans. Instead you need to think from the perspective of a star trek captain. Presumably they aren't manually slinging sql queries. Such tasks are best automated. We are all the drooling terminal user in the end, but plan9 enabled you to at least pretend to be competent.
Plan9 allows for implementing file servers in user space and exporting a whole file tree as a virtual "folder", so it's really more of "everything as a file server". No different than FUSE, really.
From what I've seen, Plan 9 fans turn their noses up at FUSE. They say FUSE is not "it", but don't really seem to explain what "it" is to differentiate it from FUSE.
And as Feynman said, you don't truly understand a thing until you can teach it. So that leaves us in a weird predicament where the biggest proponents of Plan 9 apparently don't understand Plan 9 well enough to teach it to the rest of us.
It depends what you mean by "it". FUSE clearly doesn't give you every feature in plan9, and in fact you can't have that without giving up the current Linux syscall API completely and replacing it with something vastly simpler that leaves a lot more to be done in user space. That's not something that Linux is going to do by default, seeing as they have a backward compatibility guarantee for existing software. Which is totally OK as far as it goes; the two systems just have different underlying goals.
Plan 9 supports file server processes natively, and that's the part that's most FUSE-like. The full OS also has many other worthwhile features that are not really addressed by FUSE on its own, or even by Linux taken as a whole.
One key difference is that the equivalent to kernel syscalls on *nix generally involves userland-provided services, and this applies to a lot more than just ordinary file access. The local equivalents to arbitrary "containerization/namespacing" and "sandboxing" are just natively available and inherent to how the system works. You can't do this out of the box on *nix where every syscall directly involves kernel facilities, so the kernel must have special provisions to containerize, sandbox, delegate specific things to userland services etc.
"Plan 9 praisers who don't actually use Plan 9" have a tendentious way of speaking, that's actually a lot like "AI slop", that Plan 9 users can instantly recognise. Telltale signs include speaking about Plan 9 in the past tense, and a belief that with Plan 9 you can somehow just strap all your computers together to get more performance or something. Because "you have 9P".
In addition to the sibling comment, you might also consider simply not using the APIs or SQL queries to begin with. Many people have entire careers without touching either.
I think you're failing to get that using a filesystem API to work with things that aren't naturally anything like filesystems might get perverse. And standard filesystems are a pretty unnatural way to lay out information anyway, given that they force everything into a tree structure.
This is what I was trying to get at. A lot of the data I deal with is directed, cyclic graphs. Actually, I personally think most data sets we care about are actually directed graphs of some kind, but we've gotten so used to thinking of them as trees that we force the metaphor too far. I mean, file systems are an excellent example of a thing we actually want to be a graph but we've forced into being a tree. Because otherwise why would we have ever invented symlinks?
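The symlink point can be shown in a couple of lines - two distinct paths resolving to the same node, so the "tree" is already a graph (paths here are throwaway temp-dir names):

```python
# Symlinks as the escape hatch that turns the filesystem tree into a
# graph: a second edge pointing at an existing node.
import os
import tempfile

root = tempfile.mkdtemp()
shared = os.path.join(root, "projects", "shared")
os.makedirs(shared)

# Add a second path into the same directory node - no longer a tree.
os.makedirs(os.path.join(root, "home"))
link = os.path.join(root, "home", "shared")
os.symlink(shared, link)

print(os.path.samefile(shared, link))  # True
```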
There's a bunch of literature about accessing graphs through tree lenses. I'm not sure exactly what you're looking for.
SQL certainly forces you to look at graphs as trees. Do you have a specific interface you're trying to access? If you're trying to use a graph database, why mention APIs and SQL?
I just assumed they wanted to interface with existing json over http apis rather than write their own code. The sibling of my previous comment addresses that concern.
Can Plan 9 do transactions? If not, it is unsuitable for being a database. It can run databases, because those implement transactions themselves. But native writes can't get them. Can it do transactional reads? How would you represent isolation levels?
How do you do a join on Plan 9? I get the impression that these are coded in each client. But complicated queries need indexes and an optimizer. A SQL database has the advantage that you can feed it a query and it figures out the plan.
Plan 9 is just a brand smeared across a codebase, just like every other operating system.
> If not, it is unsuitable for being a database. It can run databases, because those implement transactions themselves. But native writes can't get them. Can it do transactional reads? How would you represent isolation levels?
Indeed, no, we shouldn't assume that everything-is-a-file makes sense as the basis for OS research. I don't think it's necessarily what needs to be considered close to the metal. But it is OS research.
I think you're right about where computing is today. It's mostly at the app level.
I think you once again hit a super hard conventionality chord & speak to where we are by saying we don't have much evidence of "everything is a file" helping, anywhere. Broadly.
But analyzing where we are & assessing that everything-is-a-file isn't a sure thing doesn't dissuade me. Apps have wanted control, and there's been few drivers to try to unite & tie together computing. App makers would actively resist, if not drag their feet against, giving up total dominion of the user experience. OS makers don't have the capital to take the power away from apps. The strain of unweaving these corporate power interests is immense.
There have been some attempts. BeOS tried to do interesting things with enriching files, making them more of a database. Microsoft's cancelled WinFS is rumored to have similarly made a sort of OS filesystem/database hybrid that would be useful to users without the apps. But these are some of the few examples we have of trying anything.
We're in this era where agents are happening, and it's clear that there are very few clear good paths available to us now for agents to actuate & articulate the changes they could and should be doing. Which is just a reflection of app design where the system state is all bundled up deeply inside these bespoke awkward UIs. App design doesn't afford good access, and part of the proof is that other machines can't control apps short of enormous visual processing, which leaves much ambiguity. If agents can't, it also strongly implies humans had little chance to master and advance their experience too.
I strongly think we should have some frontiers for active OS research that are user impactful. We ought be figuring out how to allow better for users, in ways that will work broadly & cross cuttingly. Everything is a file seems like one very strong candidate here, for liberating some of the power out of the narrow & super specific rigid & closed application layer.
I think Dan was also super on point writing A Social Filesystem. Which is that social networks & many online systems are everything-as-a-file under the hood. And that with a generic networked multi-party social networking platform available, we'd have a super OS already here that does files super interestingly. And Dan points out how it unlocks things, how not having one specific app but having our online data allow multiple consumers, multiple tools, is a super interesting opening.
So, everything is a file is very webful. A URL logically ought be one. A multi-media personal data server for every file you can imagine creates an interestingly powerful OS, and a networked OS.
And users have been warped into fitting the small box their apps demand of them so far. They've had no option about it. All incentive has been to trap users more and more to have no off roads to keep your tool being the one tool for the job.
Distribute the power. Decentralize off the app. Allow other tools. Empower broader OS or platform to let users work across media types and to combine multiple tools and views in their workflow. Allow them to script and control the world around them, to #m2m orchestrate & drive tool use.
I don't disagree with anything you said, I just think it's a 30 year old basis you stand from, one that hasn't helped anything get better and which has ongoingly shrunk what is possible & limited the ability to even start trying for more or better. I don't think we are served by what it feels like you are trying to highlight. And I think "everything is a file" could be an incredible way to start opening up better, possibly, maybe!! but I'm very down to hear other reasonable or out there ideas!! I'm just not interested in staying in the disgraceful anti-user app-controlled unyielding quagmire we have been trapped in for decades.
I guess I feel like if we’re rewriting device drivers then we’re in a Turing tarpit. I think there’s room for innovation at what is traditionally considered the application level - we run git, postgres, document stores etc as applications. I think the way to solve the next generation of coordination is by doing more interesting stuff on this layer.