Receive-only DVB dishes are very common (although technically illegal and occasionally subject to house raids and removal). The authorities also try to jam the signal from time to time by inducing noise on the broadcast frequency (locally referred to as "parasite"). They have been cracking down on Starlinks, though, most likely via the supply chain, which is very likely a compromised regime asset. In any case, as I wrote in another comment, it does not solve a real problem and no one uses this. You can simply watch news/video content live (or DVR'd) on your TV.
I installed this so you don't have to. It felt a bit quirky and not super polished: it fails to download the image model, and the audio/TTS model fails to load.
In 15 minutes of serving Gemma, I got precisely zero actual inference requests, and a bunch of health checks and two attestations.
At the moment they don't have enough sustained demand to justify the earning estimates.
I kind of see your point, but I also kind of don't.
Sure, it would be great if you immediately got hammered with hundreds of requests and started making money quickly. It would also be great if it were a bit more transparent and you could see more stats (what counts as "idle"? Is my machine currently eligible to serve models?). But it's still very new; I'd say give it some time and let's see how it goes.
If you have it running and you get zero requests, it uses close to zero power above what your computer uses anyway. It doesn't cost you anything to have it running, and if you get requests, you make money. Seems like an easy decision to me.
Bootstrapping will be near-impossible (or incredibly costly) unless they offer inference consumers models with established demand, routed through some least-cost router service where they can undercut the competition (if they actually can), and then dogfood the opportunistic provider side on their own Macs, with a preference for putting third parties first in the queue. Everything else is just wishful thinking.
Weird to learn that they don't themselves generate inference requests to their network, at least to motivate early adopters to host their inference software.
If they paid the promised >$1k/month for FLUX 2B on a Mac, they would go broke in less than a month. A single 5090 running that model would provide inference throughput so high they'd have to pay close to $50k/month for the results.
The numbers are absolute fraud. You shouldn't be installing their software, because the fraud might not stop at the numbers.
Can you rephrase that? I don't think I've read it correctly. It sounds like you're saying it would normally cost $50k on a 5090 and they can do equivalent work paying $1k. That sounds like a $49k profit margin, but you say they will go broke.
Given their estimates of a Mac being able to generate $1k (per month?) a 5090 with a lot more power would be able to generate $50k. For a $3k piece of hardware. Which is obviously not realistic. (As in, nobody is paying that much for the images, which seems to match well with no actual requests on the system.)
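A minimal back-of-envelope sketch of that reasoning (the 50x throughput ratio is my assumption for a 5090 vs. a Mac on a small diffusion model, not a measured benchmark):

    # Hypothetical numbers, not measurements.
    mac_payout_per_month = 1_000   # their claimed estimate, USD
    throughput_ratio = 50          # assumed 5090 : Mac images/sec for FLUX 2B
    gpu_cost = 3_000               # rough street price of a 5090, USD

    gpu_payout_per_month = mac_payout_per_month * throughput_ratio
    print(gpu_payout_per_month)                  # 50000
    print(gpu_cost / gpu_payout_per_month * 30)  # ~1.8 days to pay off the card

If payouts really scaled with throughput, the card would pay for itself in under two days, which is why the estimate can't be real.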
And I don't think they ever will, unless they're highly competitive (hopefully the price they have now stays, at least for users).
I was thinking of building this exact thing a year ago, but my main blocker was the economics: it would never make sense for someone to use the API, and nobody can make money off of zero demand.
I guess we just have to look at how Uber and Airbnb bootstrapped themselves. Another issue with my original idea was that it targeted compute in general, when the main, best use case is long(er)-running software like AI training (but I guess inference is long-running enough).
But there already exists software out there that lets you rent out your GPU, so...
People underestimate how efficient the cost per token is for beefy GPUs if you are able to batch. It's unlikely that a one-off consumer unit can compete long term.
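A rough sketch of why (all numbers are made up for illustration): each decode step pays a fixed cost to stream the model weights through the GPU, and batching amortizes that cost across every request in the batch, while a consumer box serving one request at a time pays it in full on every step.

    # Toy cost model with assumed unit costs, not measurements.
    fixed_cost_per_step = 1.0      # streaming weights, kernel launches, etc.
    marginal_cost_per_seq = 0.05   # extra compute per sequence in the batch

    def cost_per_token(batch_size: int) -> float:
        # One decode step yields one token per sequence in the batch.
        return (fixed_cost_per_step + marginal_cost_per_seq * batch_size) / batch_size

    for b in (1, 8, 64):
        print(b, round(cost_per_token(b), 3))
    # 1  1.05   -> single-user consumer box
    # 8  0.175
    # 64 0.066  -> batched datacenter serving, ~16x cheaper per token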
None of the people I know inside Iran actually use this Toosheh[1] thing. And I mean zero, nada, none. Not one. Most are unaware of its existence. This sounds like something that seemed cool in the pre-Starlink era, received funds and favors from western governments and NGOs, and did not result in anything useful (not surprising that it gets international press too, despite being a total failure).

Realistically, a download-only solution does not solve a real problem. The Persian video content people watch is delivered via DVB satellite TV channels. With the Internet, what people want is to communicate, and for that they need realtime access and data-upload capability to contact others and use web services, not a new offline copy of Wikipedia every day! In practice, Iranians inside Iran mostly end up using VPNs and tunnels of various sorts, often some variant of shadowsocks with SNI spoofing, which stop working in a full blackout.

What will be left during a full blackout is people who have government-sanctioned SIM cards with full Internet access (known as "white SIMs") to propagandize on social networks in favor of the regime while everyone else is disconnected, and a tiny set of people who have acquired Starlink terminals.
The same set of people behind that project were supposedly given additional resources to smuggle Starlinks inside, and in the Persian community on Twitter there's an ongoing meme mocking the question of where those Starlinks actually went and who they were given to, never getting an answer...
> in the Persian community on Twitter there's an ongoing meme mocking the question of where those Starlinks actually went and who they were given to, never getting an answer
Of course not. From the article:
Because the technology is banned by the government, access remains limited and carries risk; Iranian authorities have recently arrested Starlink users and sellers.
I also wonder: if roles were reversed and (say, for example) the US government were blocking parts of the internet to stop rampant CCP propaganda during a war with China, would "NetFreedom Pioneers" be advocating for ways for Americans to get around the block so they could continue to consume CCP propaganda, because "muh freedom"?
I expect not, to be honest.
Not saying the cause is necessarily wrong, but let's call a spade a spade: this is aimed at helping one side and one side only. "Freedom" has nothing to do with it.
It appears they do receive funds from the US government, so while I cannot directly answer your hypothetical question, I imagine they would lose at least some of their funding. The author of the IEEE article is receiving compensation as Executive Director.
Yes, I should be able to consume content from anywhere to build a more accurate picture of reality regardless of military conflicts. Because of “muh freedom.”
Truth is the first casualty of war, yet truth is required to operate a democracy effectively. Responsible citizens will do their best to see through the fog of war by aggregating multiple perspectives.
I acknowledge governments do not see things the same way.
This is the framing used by the IR regime* in Iran, and a moment's reflection makes it clear it is complete nonsense. Mass-scale propaganda in Iran is delivered via satellite channels. Iran International, BBC Persian, ..., are all accessible in Iran.
The blackout by the IR regime is preventing Iranians from letting us, the rest of the world, learn what actually happened during the massacres earlier this year: the actual facts about the number killed, the manner of the regime's crackdown, the nationalities of the (invited) paramilitaries from neighboring lands involved in mowing down unarmed civilians, testimonials from families regarding the treatment given to their loved ones, what they are put through to get the bodies of their dead, what condition those bodies are in when they receive them, and what pressures and evils they endure to bury their loved ones.
p.s. Right now, almost two months into the blackout, the only information sources we have are either regime flunkies and "professors" or obvious contra-regime propaganda outlets like Iran International. And reasonable non-Iranians would likely have reservations about information delivered by an outfit like Iran International. Whereas if we could hear from civil society in IR-occupied Iran, from prominent individuals, organizations, trade groups, universities, ..., then it would be completely impossible for the IR in Iran to mislead regarding the horrors they inflicted on Iranians.
* Preemptive note: They, the Shia of Khomeini, themselves refer to their system as "nezaam" which means precisely and exactly "regime".
>What will be left during a full blackout is people who have government-sanctioned SIM cards with full Internet access (known as "white SIMs") to propagandize on social networks in favor of the regime while everyone else is disconnected, and a tiny set of people who have acquired Starlink terminals.
One would think this is exactly the sort of circumstance under which store-and-forward/delay-tolerant routing would be useful. Years before Jack Dorsey thought of bitchat[0], I had the same idea, but I never pursued it because I live in a western country but not in a "tech city"; in other words, nobody around here is interested in being an early adopter of an app primarily useful to preppers or people living under repressive authoritarian regimes.
Anyway, it's a great idea in theory, as the techno-anarchist preppers who LARP with off-the-shelf lilygo LoRa transceivers will be happy to tell you. But in practice, nobody who could actually benefit seems to adopt these things. Or at least I never hear about it if they do. Perhaps today's internet blackouts are too transient for a 2026 version of samizdat to develop?
Do the people you know inside Iran plan to just wait it out, or do they have some other solution ready for a total blackout?
To be clear, while the annoying firewall has been a forever thing, and even grandmas know how to use VPNs day-to-day to access Instagram, a full long-term blackout is a relatively new thing, so I don't think there's been enough prep for that. Bitchat was certainly something that was talked about after the January protests and before the war broke out. There was even a thief who cloned it and renamed it something Persian, without attribution and with shady security, and the Bitchat guy got upset about it just a few weeks ago.
There are some government-sanctioned messengers that apparently keep working, but some people will not use them, as they are completely insecure and watched by Big Brother, of course. The biggest issue is getting data out of the country, not internal comms (video evidence of a massacre, for example, so that some people, like in this very thread, don't get the ammo to whitewash the regime, intentionally or accidentally).
>The biggest issue is getting data out of the country not internal comms
No doubt. Unless there's somebody friendly just across the border in Azerbaijan or Basrah or somewhere, I don't see how they'd do it. Maybe point a dish and establish a point-to-point link, but you'd need to pre-arrange that.
I think what you are suggesting is more practical today than before, since there are at least a few people who have some sort of access. The real catch is the prep, or lack thereof. The anecdote around me is that they are hoping (perhaps wishfully) for a total regime collapse and internet freedom relatively soon.
Couldn't be more simplistic. Of course a three-trillion-dollar Google would behave differently than a 2008 Google, with or without DoubleClick.
Even today, I would argue an average sample of Googlers will likely think slightly differently about these things than an average sample of Facebook employees; but of course both will have to respond to influence from the external world, i.e. customers, society, government.
The GNU-adjacent thing would be FSF, and I'd say many EFF supporters are antagonistic towards the FSF (and/or RMS) because of their "extremist" stances. I'd characterize EFF as "corporate Open Source" vs. FSF/GNU "Free Software."
Even if it were true, that is not the logic they cite, though. They make up a story about impressions being reduced relative to the platform's old days, not in absolute terms; they don't address that the cost of tweeting is minimal at all. Almost certainly a year of tweeting would cost less than writing a rant blog post against X. Many brands just autopost everything everywhere for syndication purposes.
So we know why they did it. They wanted to take a stance against X. They just didn't have the balls to say it out loud or the dignity to leave quietly.
This might be a dumb question: is the author looking to run a 4K display with a HiDPI 8K framebuffer and then downscale? What's the advantage of doing so versus direct 4K low-DPI? Some sort of "free" antialiasing?
From what I understand, the main goal is to fix the problem that non-native (1:1 pixel mapping) resolutions and scaling look worse than native. This is a problem when you ship high-dpi displays that need UI scaling in order for things to be readable. Apple's solution was to render everything at a higher, non-native resolution so that images were always downscaled to fit the display.
So to oversimplify, Windows can have a problem where if you are running 1.5X scaling so text is big enough, you can't fit 4K of native pixels on a 4K display so videos are blurry. If instead you were rendering a scaled image to a 6K framebuffer and then downscaling to 4K, there would be minimal loss of resolution.
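A minimal sketch of the arithmetic in that example (the 1.5x factor and ~6K framebuffer come from the example above; actual platforms pick their factors differently):

    # Fractional UI scaling on a 4K panel, illustrative only.
    display_w, display_h = 3840, 2160
    ui_scale = 1.5

    # Render the whole desktop at native resolution * scale,
    # i.e. a ~6K framebuffer...
    fb_w, fb_h = int(display_w * ui_scale), int(display_h * ui_scale)
    print(fb_w, fb_h)   # 5760 3240

    # ...then downscale that framebuffer to the panel in a single pass.
    # A 4K video drawn 1:1 into the 6K framebuffer survives one uniform
    # resample far better than per-element fractional scaling does.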
I do not know who the moron was that first used scaling in conjunction with higher-resolution displays, but this is a non-solution that should never have been used anywhere.
The correct solution was already in use more than 35 years ago. For text and for graphics, sizes must be specified only in length units, e.g. in typographic points or millimeters or inches, for instance by configuring a 12-point font for a document or for a UI element. Then the rasterizer for fonts and graphics renders everything correctly at a visual size that is independent of the display resolution, so it is completely irrelevant whether a display is HiDPI or not.
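The conversion the rasterizer performs is trivial; a minimal sketch (1 point = 1/72 inch by the standard typographic definition):

    # Convert a physical size to device pixels for a given display DPI.
    def points_to_pixels(points: float, dpi: float) -> float:
        return points / 72.0 * dpi   # 1 pt = 1/72 inch

    # A 12 pt font keeps the same physical size on any display:
    print(points_to_pixels(12, 96))    # 16.0 px on a classic 96 DPI monitor
    print(points_to_pixels(12, 192))   # 32.0 px on a HiDPI monitor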
To combat the effect of rounding to an integer number of pixels, besides anti-aliasing methods, TTF/OTF fonts have always included hinting, which can produce pixel-perfect characters at low screen resolutions if that is desired (and if the font designer does the tedious work required to implement it). Thus there is never any reason to use scaling with fonts.
For things like icons, the right manner has unfortunately been less standardized, but it should have been equally easy to always have a vector variant of the icons that can be used at arbitrary display resolutions, supplemented by a set of pre-rendered bitmap versions of the icons, suitable for low screen resolutions.
I am always astonished by the frequent discussions about problems caused by "scaling" on HiDPI displays in other operating systems. I have been using only HiDPI displays for more than a dozen years and have had no problems with them, while using typefaces that are beautifully rendered at high resolution, because I use X11 with XFCE, where there is no scaling; I just set the true DPI value of the monitors and everything works fine.
> Then the rasterizer for fonts and for graphics renders correctly everything at a visual size that is independent of the display resolution, so it is completely irrelevant whether a display is HiDPI or not.
Well that sounds great in theory, but then you'll get only one button per screen on your laptop and maybe two on your desktop. More likely one and a half.
> From what I understand, the main goal is to fix the problem that non-native (1:1 pixel mapping) resolutions and scaling look worse than native.
That would be my instinct as well, but the author seems to be deliberately doing the exact opposite: trying to force 2x HiDPI and then downscaling to the native display resolution, whereas they could have just done 1:1 LoDPI rendering. What you get in the end is the equivalent of a hack: brute-force smoothing/antialiasing of the rendered image, applied during the downsample.
The author said that the problem is that Apple has introduced a size limit for the display (3360x1890) that is lower than the size of the actual display, which is a standard 4k display (3840x2160).
So 1:1 rendering can cover only a part of the screen, while the remainder remains unused.
If the maximum size limit is used but applied to the entire screen, it does not match the native resolution, so interpolation is used to convert between images of different resolutions, blurring the on-screen image.
All the attempts were done with the hope that there is some way to convince the system to somehow use the greater native image size instead of the smaller size forced by the limits.
Nope, you completely misread the post. All Macs, including M4s and M5s, can run at 1:1 4K resolution all day long, filling the screen completely. That's not what the OP wanted, though; they wanted to render at 8K (roughly 7680 by 4320 px), then downsample that by 2x in each direction to map to the 4K display. Supposedly to make things "look better" than rendering at the native resolution, but it sounds insane to me.
That does not seem to be the case for my M4 Mac mini in native "low-DPI" mode with a 4K display, so I think the problem only appears in HiDPI mode (a 7680x4320 framebuffer downscaled back to 3840x2160). The author seems to confirm the max intermediate framebuffer is 6720 pixels wide.
Even on a native 2K monitor, a virtual 5K framebuffer downscaled to 2K yields perfectly enjoyable results compared to macOS' native 2K rendering, which causes eye-bleed :)
Assuming by 2K you mean 2560x1440, I also prefer non-integer HiDPI 2560x1440 mode over both native and HiDPI 1080p modes on my large (55”) 4K display, and the non-integer scaling is only rarely a problem.
Impressive, of course; but not quite that impressive.
Only true if all you're running is matmul (the supercomputer has general-purpose CPUs, so it's more flexible than the M1 GPU). Also, those FLOPS are probably FP64 in the supercomputer's ratings and FP32 for the M1.
As a smart man I knew used to say, supercomputers are about I/O not raw compute. Those have terabytes of RAM not 8GB.
Your question hits directly at the latency vs. throughput distinction. It depends on which one you mean by "fast."
Throughput-wise, the supercomputer is competitive because it has a lot of local RAM connected to lots of independent nodes, which, in aggregate, is comparable to a modern laptop's RAM throughput (still much more than disk), with the caveat that you can only leverage the supercomputer's bandwidth if your workload is embarrassingly parallel and runs on all nodes[1]. Latency-wise, old RAM still beats NVMe by two or three orders of magnitude; rough numbers are sketched below.
[1]: There's another advantage the supercomputer has: a lot more local SRAM cache. If the workload is parallel and can benefit from cache locality, it blows away the modern microprocessor.
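Rough numbers, purely illustrative (the node count and per-node figures are assumptions for an older machine, not specs of any particular system):

    # Back-of-envelope with assumed figures, not measurements.
    nodes = 64                   # assumed node count for an older machine
    node_ram_bw_gbs = 2.0        # assumed per-node RAM bandwidth, older DDR
    laptop_ram_bw_gbs = 100.0    # ballpark unified-memory bandwidth, M1-class
    nvme_latency_us = 100.0      # typical NVMe random-read latency
    old_ram_latency_us = 0.1     # ~100 ns for old DRAM

    print(nodes * node_ram_bw_gbs)                # 128 GB/s aggregate, comparable to the laptop
    print(nvme_latency_us / old_ram_latency_us)   # ~1000x: old RAM crushes NVMe on latency

The aggregate-bandwidth comparison only holds if the workload actually spreads across all nodes; the latency advantage holds for any single access.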