Another interesting part of the Raspberry Pi VideoCore blob is that it implements DRM for Raspberry Pi products: the Pi Camera V2 has an Atmel ATSHA204A CryptoAuthentication chip on it and uses an HMAC+nonce challenge/response system to authenticate with the VideoCore blob when it goes to bring up the CSI interface. Marcan42 dumped the keys from the VideoCore blob and documented the system a few years ago.
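For the curious, the host side of such a scheme has a pretty standard shape. A minimal sketch in C (using OpenSSL as a stand-in; send_challenge/read_response are hypothetical transport helpers, and the real ATSHA204A message format and the dumped keys live in marcan's write-up, not here):

    #include <stddef.h>
    #include <openssl/crypto.h>
    #include <openssl/evp.h>
    #include <openssl/hmac.h>
    #include <openssl/rand.h>

    /* Hypothetical transport helpers to the accessory's auth chip. */
    extern void send_challenge(const unsigned char *nonce, size_t len);
    extern void read_response(unsigned char *mac, size_t len);

    /* Host side of a generic HMAC challenge/response: send a fresh nonce,
       let the chip MAC it with the shared secret, then compare against a
       locally computed HMAC-SHA256. */
    int authenticate_accessory(const unsigned char *key, size_t key_len)
    {
        unsigned char nonce[32], theirs[32], ours[32];
        unsigned int mac_len = sizeof ours;

        if (RAND_bytes(nonce, sizeof nonce) != 1)
            return 0;                       /* fresh challenge prevents replay */
        send_challenge(nonce, sizeof nonce);
        read_response(theirs, sizeof theirs);

        HMAC(EVP_sha256(), key, (int)key_len,
             nonce, sizeof nonce, ours, &mac_len);

        /* Constant-time compare so timing doesn't leak the MAC. */
        return CRYPTO_memcmp(theirs, ours, sizeof ours) == 0;
    }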
According to the Pi Foundation, this is because simple peripherals are too easy to clone and they need to recoup their investment in accessory design.
I was reminded of this while researching Twitter speculation yesterday that something similar is done for the DSI interface for displays. I wasn't able to substantiate this - the FKMS (FakeKMS/FirmwareKMS) and proprietary Raspberry Pi video drivers, where link negotiation and backlight control are done in the blob, do indeed only support specific displays. However, it's unclear to me whether this is due to limited driver support or intentional lock-in. The open-source KMS driver (not yet usable on Raspberry Pi 4), where link negotiation and backlight control are done in the kernel, of course supports anything with a driver.
>According to the Pi Foundation, this is because simple peripherals are too easy to clone and they need to recoup their investment in accessory design.
I find this completely fair, but then maybe don't call yourself a "charity" and an "open platform" and just be upfront that you need to lock down your hardware to recoup the investment.
This is the part where HNers learn that simply being registered as a charity doesn't mean jack shit.
Hospital systems that make billions of dollars a year of profit are charities! Companies that own billions of dollars of real estate holdings for no purpose except speculation are charities!
Non-profit simply means that profits aren't paid out to shareholders. It doesn't mean you can't make enormous profits and accrue vast amounts of wealth.
There is no "jack shit" about the Raspberry Pi Foundation's charitable status.
It's really not in question.
They've even relatively recently split the business in two (Foundation and Trading Company) to further protect the charitable aims of the Foundation and avoid the ugly Ikea situation.
There is also (in the UK and in the USA as I understand it) a distinction between not-for-profit and charitable status.
In the USA AFAIK most self-declared non-profit organisations follow or are advised to model themselves on the Georgia Nonprofit Corporation code. But not all non-profits are charities (all charities are non-profits, but as you say, it does not mean they don't _make_ profits from time to time; they just don't return them to shareholders).
In the UK we have slightly different non-profit codes like the CIC (Community Interest Company). They are very distinct from charities.
Notoriously in England, public schools are charities. My school made god knows how much by offering a few bursaries and thereby counting itself as a charity. My best friend's school, old Slough Comp, is possibly one of the greatest forces of anti-charity and anti-egalitarianism in the country and yet is still - IIRC - a charity. It means absolutely zero whatsoever. Perhaps other countries are different.
Yeah, sure. When they started[0], the aristocracy were educated by private tutors, and these schools actually were for the poor (ok, fine: for 'poor', read 'slightly less than royalty'). That context has obviously changed a lot since, and it now feels a bit silly.
Also, while that person's comment is correct, it's worth noting that not all private schools are referred to as public schools. That name is only for the oldest, mostly-boarding schools that were big players when the system was (finally) formalised in the 19th century. The vast majority of private schools would just be called private schools. And then state schools are the genuinely-free ones.
[0] ETA: Mostly around the Tudor period I believe; i.e. by Shakespeare's time most if not all of them were well established.
Yes -- but also all "Public" schools were not "Church" schools.
Basically there is a time when education was almost exclusively monastic; those who were not taught privately were taught by religious institutions.
The public schools were free of that influence to a greater extent.
There is one more tier of school you don't mention which sits somewhere below "public": the "commercial school". Some of these were owned by the livery companies, and they were a tier of schools created along the lines of the public schools but before the school system was fully established. Most of them were not fee-paying but were funded by donations or livery company charitable funds. They taught largely vocational skills (but professional ones rather than technical ones); parents sent their kids to commercial schools to bring back the knowledge to professionalise the family business or to set them up in a trade.
(I went to a school that was originally founded this way, but was a part-state-owned grammar school by the time I got there a century later)
Oh wow, thanks for adding that detail. I didn’t know about practically any of that. That’s a fascinating side of things: I thought there must have been a slight lacuna in my understanding, that not all the educated classes could have employed private tutors, and that definitely fills in a missing link for me.
Though it adds another small question: aren’t/weren’t most public schools severely Anglican? I know my school was quite radical at the time for admitting Jewish boys, so that always painted a picture of a not-exactly-super-secular institution, but maybe I’ve got the wrong impression in some way...
Also, those commercial schools sound a bit - subtracting for a moment the fee-paying aspect - like the German technical education system which I’ve always liked the sound of. I wish we had something more like that today, though obviously now it wouldn’t - or shouldn’t - be fee-paying.
> Though it adds another small question: aren’t/weren’t most public schools severely Anglican?
Yes -- implicitly.
(sidebar: I am not sure that, when public schools really first sprang up, it was even possible to educate people of Jewish descent; the situation for Jews in England in particular was deeply complicated by their unique relationship to the state as established by the Magna Carta. Either way, they were not landowners by law and therefore probably not that interesting to the church.)
But at any rate as Wikipedia says, the first public schools appear to have been generalised and detached versions of grammar schools, which were the schools run for wealthy families that were attached to churches and monasteries.
Those schools started off teaching young people the skills needed to function in church life, but eventually they seem to have become so generalised for various other trades that they separated themselves in an administrative sense.
They'd have had lots of clergy doing the teaching nonetheless, I imagine, simply because really only clergy had access to education at that point.
I am not sure how "technical" the school I went to ever was in its earliest form (we did have technical schools in the UK for a while as a precursor to the comprehensive system).
I get the impression it was commercial in the sense that it taught reading and writing necessary for conducting a business, maths necessary for bookkeeping and engineering, and some science.
(The livery company that founded it still owns half of it -- the outside half, literally)
No, it’s really as simple as in my own reply: the American meaning is exactly what the term conveyed when ‘public schools’ began, centuries before today’s ‘state schools’ existed anywhere.
It was a school that was open, in principle, to anyone. Think ‘free as in speech’ vs ‘free as in beer’, but with the added sense - like ‘public transport’ - of being democratic and round-about-accessible to all.
As for the US: when it began, fee-paying schools were the dominant mode, and there wasn’t really an aristocracy with private tutors to distinguish it from. So it never needed the ‘public’ - and when government schooling became a thing, it pretty naturally took on the ‘private’ qualifier instead.
Actually, I think it was that a subset of private fee-paying schools were set up to prepare students for positions in ‘Public Life’ ie politics, military, clergy, civil service - basically running the country aka “The Ruling Classes”.
Nope. Like I said a moment ago[0], it really did just mean ‘free as in speech’, like not-quite-free public transport suggests. Your explanation is certainly very neat and plausible - all the makings of a folk etymology - but it happens not to be correct.
“An endowed secondary boarding school in Great Britain offering a classical curriculum and preparation for the universities or public service.”
The purpose of public schools isn’t academic excellence (though of course they support it for their most able pupils) but networking and the ‘habits of person’ necessary to compete socially in class-sensitive institutions.
Actually, it's worse than that! Fee-charging schools can be known as both public and private schools. Those terms are never used in the UK for schools funded by the state. A state-funded school will be labelled Primary, Secondary etc. Some are known as State schools. Some are Academies (a bit more complicated, but largely publicly funded) etc. Basically, in England anyway, Public and Private schools are fee-charging schools. The name does not refer to how they are funded.
I went to a private school aged nine to 13 and a public school 13 to 18. Then I went to a polytechnic, which changed its name after a year and then two (three?) years later it was a university! Whilst in sixth form (17/18)
So the message here is that the public/private distinction for school nomenclature here in England and perhaps some or most if not all the UK doesn't mean what it does elsewhere, unless it does except where it doesn't ... except on a weekend when all bets are off. Clear? Jolly good. As you were, carry on!
Thanks for adding that detail! Also for being the one person who replied ITT with a non ’apocryphal’ etymology. I was getting ready to dig into another “public schools are called public because you can see them from the road!” pseudohistory...
And, more importantly, thanks for adding detail on the state school side of things, which I suppose I left out of my answer because it’s not something I know about. It was definitely sorely needed.
In brief, read 'for the public' rather than 'by the public'.
Another example - public houses (aka pubs) are generally for-profit private (or large chains may be public in the sense of being listed) companies that take your money in exchange for real ale and good food; not social housing!
The more confusing thing is that we now (see history in sibling comments) have 'private schools' too. What you call 'public' are 'state' schools here, or something more specific where it's implied ('grammar', 'comprehensive', 'academy').
I know you're being dramatic, but at least charities in the UK do have vastly different treatment by HMRC, and much more granular financial reporting requirements.
It doesn't mean 'absolutely zero' - the key point is that it must have charitable aims and objectives which it strives to achieve or encourage.
For example, a university student union can sell you beer in the SU bar and run events to raise cash in order to further its aims in education and student experience etc.
A UK registered charity has to have a "public benefit requirement" and has some fierce governance and reporting requirements, it's more than just a non-profit company.
It's the same in the US, but in both cases, "public benefit" is nebulous and unenforceable. If you just say "we're advancing public health!" that's a charitable purpose, even if the majority of what you're doing is profit-seeking and completely unrelated to that.
Much like how CEOs get wide discretion as to what "advancing shareholder interests" means - maybe it's in the long-term interest of public health to build a huge amount of real-estate holdings that you could (hypothetically) use to generate revenue and advance public health (uh huh) some time in the future. That's perfectly fine for a non-profit to do - they really are just a corporation that doesn't pay out profits to shareholders, they keep it all internally.
Examples: the Susan Komen foundation. College endowments. Hospitals. Etc.
In my time at a non-profit, we had what we called our "contribution margin" which was equivalent to profit in a for-profit company, and that was tens of millions of dollars a year. Like I said, we had big real-estate holdings etc which is where all the profit went year-over-year. And we actually did do important public health work, but we were also essentially a contractor for various state and federal agencies and definitely did turn a profit.
The only requirement in the US is that at least 5% of the activity must be charitable in nature - that's not a typo. So spend 5% on some studies and reports and the rest becomes your personal slush fund. It's a fantastic little arrangement.
> a UK charity, which has to spend all of its money on charitable purposes.
No.
> The most popular charities in the UK spend anything between 26.2% and 87.3% of their yearly income on charitable causes, according to the best available data.
Also that doesn't include accumulation of wealth in general - it's perfectly fine to sock away a billion dollars (or pounds) because in principle that money is going to go to charitable activities in the future. Sometime. But there's no legal requirement that "sometime" ever come, so it's just a slush fund.
Again, please don't think of charities as being charities in the traditional sense of feeding nuns and orphans. It may be better to think of them as "non-shareholder corporations". They are corporations, which make money, and accumulate wealth, which is controlled by the board. The difference is that the purpose of the accumulation of wealth isn't for the benefit of shareholders, but in principle it's for the public. In practice it is a slush fund for the board.
You've got tons of UK universities that build up huge endowments, right? Do you think they're the only ones who do that? And not everybody is using that money for scholarships, as it were...
I know how to think about UK charities, thanks mate, I am trying to offer perspective, experience and knowledge that's different to yours.
I'm a trustee of a small UK charity, I do their books, I'm in touch with lots of other trustees and in no way can these companies be run as a "slush fund for the board". The regulatory regime demands too much transparency for that to happen at any scale.
> The most popular charities in the UK spend anything between 26.2% and 87.3% of their yearly income on charitable causes, according to the best available data.
...
> Also that doesn't include accumulation of wealth in general - it's perfectly fine to sock away a billion dollars (or pounds) because in principle that money is going to go to charitable activities in the future. Sometime. But there's no legal requirement that "sometime" ever come, so it's just a slush fund.
Yes, UK charities are allowed to spend on fundraising, investment and may build up reserves. Some of those reserves might be restricted, for specific purposes even within the definition of their charitable purposes, and that needs particular accounting. But that money is absolutely locked up for their registered purposes, it can't go to personal benefits, and their boards of unpaid trustees are on the hook for mismanagement.
If they spent every pound they received on their purposes, lots of charities would cease to exist (or exist 100% on grants from other organisations). That would certainly suit a lot of simple-minded people's perspective on "what a charity should be" but it would shrink the sector to almost nothing.
(I once did data entry for Oxfam, entering direct debit donations posted to the organisation - a few angry people liked to use those appeal envelopes to protest about the fact that Oxfam advertised at all).
Part of the problem with the cynical view of charities that you're responding to is that if it goes unchallenged, it actually becomes practically impossible to help charities improve their charitable efficiency.
If people think all charities are BS, they stop donating, and it becomes meaningless to say charity X is doing a better job on a structural level than charity Y, which is for sure important information for donors.
I've worked on some stuff for a social organisation that is now a registered charity, and it is amazing how deep the tendrils of the regulations actually go -- the extent to which things have to be structured to avoid conveying benefits that aren't the objectives of the charity.
There are two parts of the Raspberry Pi organisation - the Foundation (a charity) and the Trading Company (not a charity).
The hardware is developed and sold by the non-charitable part, which recently announced that it prioritises orders from industrial customers over private ones. You cannot expect them to act like a charity - they aren't one.
It would be way less of a problem if they would commit to removing the DRM by a certain date. At the very least, they should commit to doing that when they stop supporting the platform and making the proprietary addons to it.
The Raspberry Pi Foundation is a charity. It just owns Raspberry Pi Limited which is a tech company. Don't confuse the two. RPL does the hardware and software stuff. RPF is focused purely on education and outreach work, and can do that because RPL provide the money.
No, this is not fair. If making accessories is not profitable, then don't make them. Instead release the documentation and let others make the accessories.
I understand this as it being profitable to make and sell accessories, but not to engineer the accessories. The engineering is done in the hope of being able to sell the accessory. If they can't sell the accessory, the engineering investment is purely loss.
Anyway, engineering things to manufacture them and sell for profit is not a charity. It's indistinguishable from normal business.
Also Raspberry Pi SBCs and accessories are made by Raspberry Pi Trading Ltd, not by the foundation. If you look up FCC testing reports, it's all submitted by Rpi Trading. That's not charity even legally.
Charity, right? You wouldn't know it from their own website, because they don't seem to publicize their business structure very much. My guess is that it's similar to the Mozilla foundation/corporation split: the foundation does nothing about the products, it's just a public face, and the corporation does all the product work and is very much for profit.
Anyone can make a camera, there are no restrictions. What is difficult are the algorithms for processing the data that comes off the sensor. The RPI foundation developed a complete solution (camera+software) and they don't want to see copycat cameras make use of their investment in image processing software.
You can think of the VideoCore ISP firmware task as a proprietary application and the Pi Camera as a hardware security dongle for that proprietary application.
You can choose not to run that application and access raw data from CSI if you'd like, but if you want to run the special ISP firmware application, you need the hardware dongle.
I think I feel the same way about the DRMed blob as I do about the Pi in general: I understand why the Pi folks did things the way they did, I don't think it's unethical by any stretch, but the situation is disappointing and I would prefer the alternative.
It does. Most people get enraged about RPI cameras unnecessarily. If you don't want to buy cameras from the RPI foundation and want to use/design/connect your own, nobody is stopping you. If you want raw sensor data off the RPI cameras, nobody is stopping you either. It's only when you want to use their image processing with a clone camera that you run into problems.
I just can't get worked up about this, I think this is completely fair.
Yeah, this is a dick move for sure. I had no idea the camera was locked down like this. Just plug in a better quality USB camera and you've bypassed the whole protection. Pointless.
According to Google, the Pi Foundation explicitly does not call itself or the Pi an "open platform". The only hits are from the forum, where people are pointing out that it isn't.
Not really. The firmware is basically protecting itself - the closed source firmware contains proprietary image processing code (the ISP) for the camera which Pi Trading paid for, so it's only supposed to be used with the Pi Camera.
A complete open source re-implementation would either not support ISP, or would include a non-proprietary version of the same or similar code, at which point they shouldn't care.
There's also the matter of protecting access to the hardware video codecs, in order to account for licensing fees. They might care about that if the MPEG-LA starts being a dick about it.
Pretty offtopic, but I gave up on waiting for the Pi to be open for anything more than business. Their shady history of pushing Microsoft repos (and crypto keys) into their Raspbian OS without my consent was the last straw.
For those who aren't amenable to such a 'charitable' definition of open, Pine64 has existed for quite some time. The Rock platform easily handles my Docker workloads.
I'd hardly argue that the RK3399 SoC is more "open" than any of the Broadcom SoCs. You can't even _see_ the MMIO tables for the RK3399 without an NDA. This isn't different from Broadcom SoCs, but at least the Pi has a large community behind reverse engineering everything. The RK3399 is barely supported by non-Linux operating systems (last I checked FreeBSD didn't even boot the A72 core cluster, just the A53s, which leaves a ton of perf on the table) because any work to support it requires either a) blackbox reverse engineering or b) mucking around with the patches vendors (who _do_ have NDA access) have submitted to Linux.
> I'd hardly argue that the RK3399 SoC is more "open" than any of the broadcom SoCs.
What? The gru-kevin chromebook (RK3399) can be booted without using a single binary blob, absolutely everything (even the arm trusted firmware) built from source. I use mine that way.
I've never seen a laptop with a "broadcom SoC" that could make that claim.
I think you've been deluded by a certain fruit pastry vendor's marketing budget.
I think we're coming at this from two different definitions of "open". There is a very large difference between "blobless Linux compatible" and "open hardware". Have you tried to use any of these SoCs on any OS other than Linux? Good luck lol, because as I said Rockchip has only really upstreamed support for this chip in Linux and has left everyone high and dry.
This is why I specifically called out the issue of MMIO mappings. Seriously, try to scrounge up the addresses for the DMA engines on the RK3399. Or, hell, literally any information whatsoever about the UARTs. The primary place to find any of this is not public vendor documentation but rather the device trees Rockchip has submitted to Linux and the various drivers different vendors have submitted. Device trees and digging through a pile of barely documented C spaghetti are not a suitable replacement for actual documentation.
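To make that concrete: even saying "hello" over a UART needs a base address and register layout, and for the RK3399 the only public place those live is the upstream device tree. A bare-metal sketch, with the base taken from mainline's rk3399.dtsi ("serial@ff1a0000", the DW 8250-style debug UART, 4-byte register stride) - treat the specifics as assumptions to verify, which is exactly the problem:

    #include <stdint.h>

    /* Base address and register stride come from grepping the upstream
       rk3399.dtsi (reg-shift = <2>), not from any datasheet. */
    #define UART2_BASE 0xff1a0000u
    #define UART_THR (*(volatile uint32_t *)(UART2_BASE + (0x0 << 2))) /* TX holding  */
    #define UART_LSR (*(volatile uint32_t *)(UART2_BASE + (0x5 << 2))) /* line status */
    #define LSR_THRE (1u << 5)  /* TX holding register empty */

    static void uart_putc(char c)
    {
        while (!(UART_LSR & LSR_THRE)) { }  /* wait for space in the FIFO */
        UART_THR = (uint32_t)c;
    }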
I am coming at this primarily from the perspective of an OS developer. I am not particularly concerned with blobs and non-free code. I am, however, incredibly frustrated by the fact that I cannot do even very basic things with my hardware without spending hours going spelunking in the Linux kernel because vendors refuse to actually provide public documentation for the products they ship.
There is a subtly annoying trend in the FOSS community to assume that Linux and Linux derivatives are the only operating systems worth caring about. An undocumented chip which ""runs Linux without blobs"", while perfectly great for Linux users, is not some wonderful thing for everyone who doesn't care about Linux. Frankly, I'd much prefer a blob-ful SoC if it meant I could actually do hardware bring up because a sprinkling of non-free components is much better than not supporting an entire platform.
I don't know how you turned a classic ARM vendor problem into an anti-Linux rant. If it weren't for the GPL you would just get a binary blob for a proprietary Linux fork, and then you would have to reverse engineer the undocumented features from raw machine code.
This isn't a criticism of Linux itself or the GPL, it's a criticism of the fact that both people replying to me and vendors treat Linux as if it's the only operating system worth caring about. Support for Linux is neat, but it is not (and cannot be) the definition of open hardware, because Linux is not the only thing.
>If it weren't for the GPL you would just get a binary blob for a proprietary Linux fork and then you would have to reverse engineer the undocumented features from raw machine code.
Just because they could be doing significantly worse does not mean that they're actually good. Yes, they could do this. It would not be an open hardware platform. They could also upstream software support in such a way that it is a pain in the ass to figure out how the hardware works, which I also contend is not an open hardware platform.
And, anyway, it's rather bold of you to assume that the drivers actually work as intended lol. I'm sure many in the Linux kernel community would deeply appreciate being able to just write the damn drivers themselves instead of being left with broken and incomplete drivers with little recourse outside of reverse engineering the hardware or begging random vendors to fix it. This is not indicative of open hardware; this is literally just the same problem except now you've blessed the platform as open.
I see your point. I think everyone is getting caught up around the definition of "open," and we have different wants and needs.
I see how it's frustrating to you that people get GPL source and declare "victory" and give up, leaving BSD in the dust. I'm definitely someone who does this, and you made me think about how much nicer it would be if people continued to advocate for documentation instead.
What I want from my "open" hardware is the ability to access the full source code for what is being executed on my device, to ensure there are not unexpected background tasks and to be able to debug issues end to end without having to do as much reverse engineering. So for me, the giant supervisor blob on the Pi is obnoxious - it does random things I don't know about, and I can't look into it to see what's going on.
What you want from "open" hardware is the ability to write driver code under a very specific software license (BSD).
So for my needs, I'll take GPL drivers against hardware all day versus documentation against a black-box HAL. For your needs, maybe the black-box HAL is OK since it lets you run BSD.
Also, "the drivers work as intended" could just as easily be moved to "the blackbox HAL works as intended" and you have the same problem in your "Broadcom is fine because the HAL interface is documented" scenario. If you make a HAL call and it doesn't do what you expect, you're back to RE again anyway.
Anyway, I think we can all agree that full documentation is the ideal state, but sans that ideal state, I'll take GPL source over documentation and a HAL blob, and you'll take the HAL blob and documentation over GPL source. I think everyone wins if we all keep pushing for the ideal state of unencumbered documentation where we can, instead of sitting back when we have one thing or the other!
Obviously documentation would be ideal, but I still think an SoC with freely licensed source code available that talks to peripheral devices directly qualifies as much more "open" than one where even "open source" drivers are actually just talking to mailboxes to a HAL running in an RTOS.
I think the challenge with the Broadcom stuff is fundamentally architectural - even with more reverse engineering eyes on the scene, you're working with an SoC that's a big proprietary blob running on an obscure architecture (VPU) which is hosting an ARM Linux box, vs. the Rockchip stuff which is a much more traditional model with memory and register mapped peripherals.
I am also a big Broadcom VideoCore hater (lol), but:
>an SoC with freely licensed source code
Free as in "GPL". The GPL is incompatible with a slew of other licenses, and so being dependent on reading the Linux source to learn how to write drivers for your own OS is an incredibly perilous legal position should you not license your code under the GPL. Real documentation (for talking to blobs or real hardware) allows you to license your code almost however you please, and so it is objectively more free.
This, mind you, is not a hypothetical: I am quite literally in this position and have abandoned Rockchip SoCs (despite being all around better tech) because supporting them would possibly violate the project license and the rights of the other contributors. I can, however, support Broadcom SoCs because their documentation is freely available without licensing stipulations despite their non-free blobs. The Rockchip SoCs are free and open only to the extent you either like Linux or can adhere to the GPL, which is frankly a bizarre position to be in when trying to use an SoC.
One method would be to have a unique key burned into the image sensor by the manufacturer. That key would in turn be used to cryptographically sign the raw signal output from the sensor, to verify that the image was indeed generated by that specific sensor.
Now if the image is compressed, this is obviously moot. But for important documentation and the like, it's feasible to store the signed raw signal to confirm that the image was taken by that specific camera. Of course, this depends on the security of the keystore, the trustworthiness of the manufacturer, etc.
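A minimal sketch of the verification half, using libsodium's Ed25519 purely as a stand-in for whatever a vendor would actually fuse into silicon:

    #include <sodium.h>

    /* Illustrative only. The per-sensor secret key would stay in the
       sensor and sign each raw readout; the public key would go into the
       manufacturer's registry. Caller must run sodium_init() once. */
    int verify_frame(const unsigned char *raw_frame, unsigned long long len,
                     const unsigned char sig[crypto_sign_BYTES],
                     const unsigned char sensor_pk[crypto_sign_PUBLICKEYBYTES])
    {
        /* Returns 1 iff this exact raw readout was signed by that sensor. */
        return crypto_sign_verify_detached(sig, raw_frame, len, sensor_pk) == 0;
    }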
> One method would be to have a unique key burned into the image sensor by the manufacturer. That key will be in turn used to cryptographically sign the raw signal output from the sensor to verify that the image was indeed generated by that specific sensor.
This would be horrible for privacy, although somewhat mitigated if the camera program/app discarded the signature by default.
Yeah it would, and ideally it should be possible for the user to choose to include the signature or not in their images. Though I wouldn't be surprised to see this type of tech being the norm in the future, perhaps in a sneaky way like what they did with printers and digital watermarking (https://en.wikipedia.org/wiki/Machine_Identification_Code). We may even see this in other integrated sensors like a MEMS mic with a built-in AD on the silicon.
This isn't doable. Nothing prevents you from gluing or projecting a screen directly onto the sensor, after tone mapping the image properly. There is no winning. It wouldn't even be expensive!
Yes, that has been repeatedly pointed out. And yet the industry still did it and your digital cables carrying video aren't going to work properly without the HDCP DRM.
Yeah, but I can get an HDCP-compliant HDMI capture card for $8 on AliExpress, so I really don't care and it really doesn't work. You can also buy splitters that happen to disable HDCP.
Or just take-a-picture-of-a-picture. It's possible to do such things much more convincingly than when Trump tweeted out that classified satellite pic in 2019 with a flash visible in the middle of it.
Sure, okay. I was just following what I had thought to be the widely accepted narrative on this, eg:
"CNBC reported that Trump was shown the photo during the briefing. A flash visible in the center of the image suggests Trump or someone else took a photo of the original image — which Hanham says might have been the intelligence briefing slide."
In any case, the point is that with proper staging, you could absolutely take a picture-of-a-picture in a way that would result in the image being marked as genuine and untampered, even accounting for the signing info including a GPS-based time- and position-stamp and including camera details like focal length.
You don't need a centralized authority. Every manufacturer can issue their own keys.
I take a digitally signed photo and tell you "I took the photo with this tamper proof Canon camera, and I can prove it by taking more photos of any subject you ask for and signing them with the same key".
If you worry that I made an authentic-looking counterfeit Canon camera (but you're satisfied I couldn't have extracted the private key from a real one), Canon can confirm that they sold a camera with that key.
But what prevents me from saying I'm a manufacturer of tamper proof gspr cameras, that just happen to generate deepfakes?
Surely there will be enough cheap devices out there that not everyone can be expected to remember the names of venerable manufacturers? I personally have no idea who makes the camera in my phone.
Anyway, the point is moot. The analog hole is still there, you'll just feed the pixels straight from the deep fake generator into the Really Real Tamper Proof Canon's CCD.
Shout out again to the interesting perspective on the topic from the IPFire Forum. Some excerpts:
>Now, everybody is looking for a cheap ARM board with performance and loads of features. The Raspberry Foundation is a charity that pays probably no tax at all, but somehow is selling lots and lots of boards at an absolutely “amazing” price.
>Amazing because nobody else in Europe can compete with them. Paying no taxes helps. The second step is that they have almost completely outsourced their software development. They call it Open Source-ed, but that is not the same.
>Over many years, there has never been a release of that piece of hardware that was supported by a mainline kernel. Neither Linux nor any other of the *BSDs. They simply do not care what software runs on it.
Just in case anyone is confused by the 'paying no taxes' bit.
Indeed Raspberry Pi Ltd (formerly Raspberry Pi Trading Ltd) paid no tax on its profits in 2020 (the most recent filing year), but not because it's a charity - which it isn't - rather because they got tax deductions for R&D. I strongly suspect that these deductions would be available to any other firm that spent the same amount on R&D.
And of course they did pay a significant amount of VAT (sales taxes) on these boards.
In short the tax insinuation bit of this is very likely completely unjustified.
Edit: I've read the rest of the post this comment quotes from (on the IPFire Forum) - it goes on to accuse RPi of tax evasion (i.e illegality) - seemingly because they are annoyed that they don't make it easy to run their software on it. This is not an 'interesting perspective'.
The interesting perspective I took away was how something like the Raspberry Pi shapes the market as a whole, whether intended or not. See my comment in the original discussion if you are interested.
In short, even if not through tax advantages, it is very hard to compete with a charity (or even with the "free" open source developers it attracts as a result). As such it is difficult to imagine how competing open source hardware would emerge. I found it interesting since there is no clear-cut solution to this, nor even consensus that this state of affairs is somehow bad - after all, having a Raspberry Pi is absolutely amazing. It just has consequences.
I noted in a previous post that the Beaglebone was what the Pi should have been. It is open hardware, it targets the educational market, and it has some interesting real-time coprocessors. But it is too pricey, so neither I nor many others bought one. It's a shame really.
My own view why the Pi succeeded: they understood the market. They were prepared to innovate. Everyone else seemed to be a "me-too". Sure, others came out with products that were more powerful, but then they cost more, and some of the hardware wasn't properly supported. Competitors never really offered a compelling reason why we should buy their offerings.
Take the Pi 0. Before that came out, the field was open for a competitor to see a gap in the market and capitalise on it. But none did. So then the Pi 0 came out and took a slice of the pie whilst everyone was asleep.
Roll forward a few years: the tech had progressed, and the competition had the opportunity to produce something like a Pi 0 but cheaper or better. What did they do? Absolutely nothing. This allowed the Foundation to once again create another product: the Pi 0 2. The power of a Pi 3, at the price of a Pi 0W (near enough).
The competition is clueless, which has allowed the Foundation to knock the ball out of the park time after time after time. Upton is Britain's answer to Jobs.
My guess is that RPi got the trade-offs broadly right (price / capabilities / software support / availability) and were serious about their education mission which gave them a strong focus. Others seem to have dabbled but not much more.
Ignoring the split between the Raspberry Pi Charity and the Raspberry Pi For-Profit company, charities have more regulation and restrictions than for-profit companies, no? Wouldn't it be easier to compete if you didn't have to at least pretend to operate a public-good charity?
Of course they pay the VAT to HMRC. They collect the VAT, since it's paid to the company when you buy something off of them - customers don't make a separate payment to the government every time they make a purchase! But the essential point is that every penny of the VAT from their sales comes direct from a customer.
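To put illustrative numbers on it: at the UK's 20% standard rate, a board retailing at £42 including VAT is £35 net plus £7 VAT - the £7 arrives as part of the customer's payment, and the company passes it on to HMRC.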
Unlike employees with income tax, customers are not legally liable for VAT on the goods they buy; rather, companies are liable for VAT on what they sell. So it's not just a question of accounting and collection. VAT is a tax paid by the company.
Of course ultimately the costs fall on the customer as do all costs. Would you say that RPi doesn't pay for components because the customer ultimately pays for these too?
The original post said that RPi paid no tax which is without doubt factually incorrect.
> Over many years, there has never been a release of that piece of hardware that was supported by a mainline kernel. Neither Linux nor any other of the *BSDs.
I think the stuff does get mainline support over time, though? That's no different from what goes on in x86 land, where installing Linux on cutting edge hardware is always painful in some ways and some stuff can even take years to get properly supported. (I'm especially thinking of Intel's mobile platforms from quite a few years back.)
The tax complaints don’t really hold water to me. A competitor could spin up a nonprofit if they wanted, or pivot to nonprofit status. But they don’t, because the opportunity costs and limitations of being a nonprofit are nontrivial. There are successful competitors, both for-profit and non-profit - the BBC micro:bit, Orange Pi, BeagleBone, etc.
And re: OSS, I don’t remember anybody complaining that Lenovo (for example) “outsources” their Linux thinkpad OS. The raspberry pi foundation is using open source (and some vendor closed source) software in full compliance with its licenses. The post you linked complains that the code quality of the raspberry pi modifications is “bad” and can’t be integrated into mainline Linux, but that doesn’t make it not open source. “Open source” has no obligation to be high code quality.
Yes, I took away something similar; see the original discussion. I used big companies being able to sell at a loss as an example, but it boils down to the same thing.
I found it interesting because there is no clear solution to this, no "bad" party. Having something like the Raspberry Pi is obviously absolutely amazing. But being able to produce without a profit margin makes it hard to compete. And this of course shapes the market as a whole. Differently put, how are we ever going to get an open source hardware platform when any such project won't be able to compete with the Raspberry Pi?
"In common with every other ARM-based SoC, using the VideoCore IV 3d graphics core on the Pi requires a block of closed-source binary driver code (a “blob”) which talks to the hardware. In our case, this blob runs on the VPU vector processor of the BCM2835 (the SOC or System On a Chip at the heart of the Raspberry Pi); our existing open-source graphics drivers are a thin shim running on the ARM11, which talks to that blob via a communication driver in the Linux kernel. The lack of true open-source graphics drivers and documentation is widely acknowledged to be a significant problem for Linux on ARM, as it prevents users from fixing driver bugs, adding features and generally understanding what their hardware is doing.
Earlier today, Broadcom announced the release of full documentation for the VideoCore IV graphics core, and a complete source release of the graphics stack under a 3-clause BSD license."
Which is also confusing. What IP would Broadcom be "losing" by not releasing the driver code but still making the specs and implementation documents public? Is it just an out-of-spite decision, something like "I'm not gonna help a potential competitor that much"?
For many companies releasing their code is not something they would usually consider, regardless of whether it offers a competitive advantage or not. It's just not in the culture. They don't see what they have to gain by releasing the code, but they worry that it may create issues if they do.
So basically if you want to convince these companies to open source some of their components "what do you have to lose?" is not good enough, you have to give them an actual incentive. I suspect that outside of places like HN very few people really care about Broadcom's binary blob in the rpi.
The problem with this is that there are unknowns. Maybe there's nothing, but if there is something none of us have even thought of, that can be a big loss. It is really hard to get past this fear.
Unfortunately that's not usually a good enough motivation for most companies. To be clear, I'm not arguing that it wouldn't be a good thing for them to release the code (I most certainly would welcome a fully open source rpi); I've just been confronted with this mindset a lot at work. Closed source is the default, and releasing anything publicly means going through many hoops and levels of hierarchy. If there's no obvious benefit for the company and you don't have insiders strongly pushing for it, it won't happen.
My bet — and IANAL — is that a corporate lawyer looks at the idea and sees no benefit, but an unknown, probably small, increase in potential liability. In that case why would they approve it? They're not evil, particularly, but they are analyzing the situation in terms of risk and benefit. Societal benefit doesn't make their list.
Also, the lawyer can skim-read the technical documentation, and even if they don't really understand it, they can reassure themselves that if there were any legal issues in it they would have noticed them. By contrast, few lawyers can read code, so they can't give themselves the same reassurance with respect to it.
From what I know about the graphics industry, it's not necessarily about losing your IP, it's about exposing your IP for litigation. There are some big patent trolls out there as well as some very well known names who will exercise their legal department, so the natural thing companies tend to do is to keep things closed. Things will become more interesting as more open source SoCs appear...
> What IP would Broadcom be "losing" by not releasing the driver code
what if there's some IP they licensed from another vendor in there somewhere and it's so entangled (or foundational - eg graphics IP, etc) that they can't release it at all?
what if there's some IP that they don't realize is licensed from another vendor and they get in trouble?
what if there isn't, but someone else says there is, function X is too close to our implementation, and it starts a big legal battle? Or what if you run into some patent troll who makes a business out of digging through code to find anything they can sue over?
what if there's some copyleft code some dumbshit engineer copy/pasted and it ends up forcing the whole codebase open?
etc etc
This is a classic situation of "Broadcom gains nothing in the next quarter or even the next 5 years from releasing the source, only potential (if unlikely) downsides, and the only people who will be outraged are a handful of nerds who are ultimately irrelevant to Broadcom's (not RPi Foundation's) business".
It is a testament to the success of copyleft that people have now embraced that as the default and view proprietary stuff with outright suspicion just as a default, but a proprietary strategy is both legitimate operationally (nobody opens everything) and as a risk-mitigation strategy.
>Earlier today, Broadcom announced the release of full documentation for the VideoCore IV graphics core, and a complete source release of the graphics stack under a 3-clause BSD license."
Does this mean the Raspberry Pi might get suspend to ram support? That would make building a PDA out of eg the Raspberry Pi zero which gets decent battery life feasible.
I honestly feel like building something like a PDA would be easier using a bespoke layout. You can entirely remove stuff you don't need (like, say, the VPU (lol)) and save battery life. You can also optimize for what you're actually going to be using, eg once you've selected the type of display you can just support the type of interface it will use (SPI perhaps), etc.
(* Yes, "easy" is relative. You'll need a few interesting tools and it'll be a tad deer-in-headlights. But you don't need a rocket science degree to even fathom the idea.)
Remember, that's Videocore IV they released, not VI. I'm still in the reacquaintance phase of getting familiar with HW again, and the devil always seems to be in the details.
Unless you meant to make your PDA out of a <4 RPi.
Also worth noting is the VPU is "the boss" on a raspi device and is responsible for bringing up the arm CPU. Without functioning VPU firmware, the arm CPU that linux runs on doesn't even start. Even if you don't care about graphics at all.
I think "need" implies too much, because for 100% headless applications the open+free version is already viable. (But in practice everyone wants to debug using the HDMI from time to time, mostly because almost no one has NTSC gear lying around.)
Also, without looking at the docs of the VPU, I'd guess that most of the blob functionality is needed for advanced vector stuff and whatnot (so it's only needed if you want to implement OpenGL/EGL/WebGL).
No, you can't boot into ARM code without it.
The ARM part is disabled at power-on; the VPU is a general-purpose processor, really.
Basically, on start-up it loads its start.elf (i.e. a boot application compiled for its instruction set) from the boot media, initializes some hardware, then loads the ARM boot image into common memory, then starts the ARM core.
It also exposes a low-level hardware interface to the ARM side via a syscall-style "mailbox" interface.
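For anyone who hasn't seen it, a single property call over that mailbox looks roughly like this - a bare-metal sketch assuming a Pi 3 (peripheral base 0x3F000000), ignoring cache maintenance, with register offsets and the tag value taken from the public raspberrypi/firmware mailbox docs:

    #include <stdint.h>

    #define MBOX_READ   (*(volatile uint32_t *)0x3F00B880u)
    #define MBOX_STATUS (*(volatile uint32_t *)0x3F00B898u)
    #define MBOX_WRITE  (*(volatile uint32_t *)0x3F00B8A0u)
    #define MBOX_FULL   0x80000000u
    #define MBOX_EMPTY  0x40000000u

    /* Property buffer with one tag: 0x00010002 = "get board revision". */
    static volatile uint32_t __attribute__((aligned(16))) buf[8] = {
        8 * 4,        /* total buffer size in bytes       */
        0,            /* 0 = this is a request            */
        0x00010002,   /* tag: get board revision          */
        4, 0,         /* value buffer size, request code  */
        0,            /* value slot, filled in by the VPU */
        0, 0,         /* end tag + padding                */
    };

    uint32_t get_board_revision(void)
    {
        /* Low 4 bits of the message select the channel; 8 = property. */
        uint32_t msg = ((uint32_t)(uintptr_t)buf & ~0xFu) | 8;
        while (MBOX_STATUS & MBOX_FULL) { }
        MBOX_WRITE = msg;
        for (;;) {
            while (MBOX_STATUS & MBOX_EMPTY) { }
            if (MBOX_READ == msg)
                return buf[5];   /* the VPU wrote the revision here */
        }
    }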
It looks like, per https://github.com/librerpi/lk-overlay#what-features-work , yes you can boot to Linux on a Pi 2 with composite video and ... it doesn't use the word headless anywhere, but I'd be very surprised if you can't just omit video outputs completely.
EDIT: Actually, reading more carefully it looks like there might be more than one blob and it's not 100% clear to me which this replaces, so now I'm less sure that you can boot without any proprietary blobs. I'm not sure that you can't, but I can't tell.
Note that the above is specific to the Pi 3 - the Pi 4, for instance, doesn't have the issue of a ridiculously undersized power connector (a connector whose standard guarantees only 2.5 W, for up to 13 W of demand!)
So many comments about Raspberry Pi Foundation being a charity here, and therefore... Let's just be clear, it is a charity and owns Raspberry Pi Limited. The profits from RPL help fund the charitable work done by RPF.
It's like complaining that the Bill and Melinda Gates Foundation was funded from profits from Microsoft, and therefore Windows should be FOSS software.
> It's like complaining that the Bill and Melinda Gates Foundation was funded from profits from Microsoft, and therefore Windows should be FOSS software.
Except that no one confuses Microsoft with the Bill and Melinda Gates Foundation. I'm sure that a lot of people are either unaware of the existence of Raspberry Pi Limited, or don't know the distinction between it and the Raspberry Pi Foundation.
I'm not saying Raspberry Pi Limited should not exist, or should not sell hardware. I'm just saying that it is understandable that some would be unaware of (and perhaps surprised by) its existence and the distinction between it and the Foundation.
One thing I'd like to see come out of RISC-V is a kind of microcontroller/microprocessor hybrid. Hobbyists are doing amazing things with the RP2040 PIO system (I haven't been able to wrap my head around it personally), including HDMI/VGA output and Ethernet control. All out of a tiny microcontroller, and by (above-) average Joes in the street. What this proves is that you don't need expensive secret IP to get the job done. You may not do it spectacularly, but you can do it.
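To give a concrete taste of PIO: the RP2040 has two PIO blocks, each with four state machines sharing a 32-slot instruction memory, all running independently of the cores. A sketch using the pico-sdk, assembling a two-instruction square-wave program at runtime with the SDK's instruction encoders (pin number and clock divider are arbitrary choices):

    #include "hardware/pio.h"
    #include "hardware/pio_instructions.h"
    #include "pico/stdlib.h"

    /* Drive the pin high for 8 cycles, low for 8 cycles, wrap forever. */
    static const uint16_t square_wave[] = {
        (uint16_t)(pio_encode_set(pio_pins, 1) | pio_encode_delay(7)),
        (uint16_t)(pio_encode_set(pio_pins, 0) | pio_encode_delay(7)),
    };
    static const struct pio_program square_wave_program = {
        .instructions = square_wave,
        .length = 2,
        .origin = -1,   /* load anywhere in instruction memory */
    };

    int main(void)
    {
        const uint pin = 15;   /* arbitrary GPIO */
        uint offset = pio_add_program(pio0, &square_wave_program);

        pio_gpio_init(pio0, pin);
        pio_sm_set_consecutive_pindirs(pio0, 0, pin, 1, true);

        pio_sm_config c = pio_get_default_sm_config();
        sm_config_set_set_pins(&c, pin, 1);
        sm_config_set_wrap(&c, offset, offset + 1);
        sm_config_set_clkdiv(&c, 125.0f);  /* 125 MHz sys clock -> 1 MHz ticks */
        pio_sm_init(pio0, 0, offset, &c);
        pio_sm_set_enabled(pio0, 0, true); /* ~62.5 kHz square wave, zero CPU */

        while (true) tight_loop_contents(); /* the cores are now free */
    }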
Now imagine a gussied-up system like this: a system that approaches being a microprocessor, probably not as powerful, that has something like PIO, where some of the state machines are dedicated to specific hardware. You could maybe store the drivers in a ROM, if you like.
So you'd have some kind of BIOS system. Want to produce a tone? Just push a value into a register. Write directly to the display buffer. No OS required, although a DOS equivalent would be nice. Not necessarily Linux, though.
Voila, you'd have some kind of real-time system that needs no OS unless you wanted one. The system would be very easy to understand, and a great educational tool. It would also be a Maker's dream. An 80's system brought into the 21st century. No binary blobs or any of that nonsense.
You'll know you've done it right when Doom has been ported to it (no joke intended).
I don't care if it doesn't run Firefox or if the graphics system seems a bit dated. Don't try to out-compete Intel, see what you can do to carve out a niche.
> EDIT: Actually, reading more carefully it looks like there might be more than one blob and it's not 100% clear to me which this replaces, so now I'm less sure.
Using the vc4-stage1, vc4-stage2, and rpi2-test projects together, you can do the entire boot chain (DRAM init and loading Linux) using open source code.
The other projects act as demos or tests of how to run custom code at various stages, but don't actually boot Linux on the ARM core.
Many of those demos work on the entire Pi model range.
Pi 3 support is only broken due to ARM-side problems, which could be fixed by just using a different bootloader.
The question I have is: given how maker/OSS focused the raspi is, how (and by whom) did they get locked in and forced to use this proprietary piece?
Eben Upton worked for Broadcom during the creation of the Pi. Broadcom gave them a sweetheart deal on the processors; they took it.
The Pi was initially created as a cheap way to get kids into computer science, they didn’t foresee the closed off parts of the processor being an issue towards that goal. They just wanted a cheap computer for kids to learn on that wouldn’t be the end of the world if they broke it. I mean who is gonna want to run an Open Source VideoCore?
THEN, us “grown up” geeks came along and were like “OHHHH, a cheap Linux SBC… Yes Please…” and bought out the initial run on day one.
So ever since then they were kinda stuck with Broadcom unless they wanted to redo a ton with another manufacturer.
To Eben's and Raspberry Pi's credit, they have managed to get Broadcom to open up more than they were; initially we didn't even have a data sheet for the processor.
Edit: Pre-coffee brain even more prone to typos than when I’m caffeinated…
> They just wanted a cheap computer for kids to learn on that wouldn't be the end of the world if they broke it.
I've always said that an old PC (but not too old, because retrocomputing has driven up prices on those) is probably the best for that. It can be bought for next to nothing or even free, has extensive compatibility with lots of software, and also decades of detailed documentation.
Eben's reasoning at the time was that he wanted a standardized computer you could build a curriculum around, that was cheap enough that schools could issue it to kids without fear of them breaking it, and which was small enough that kids could take it home in their backpacks. (This was obviously well before Chromebooks and iPads took over the education market.)
Well, there is a benefit to using new hardware: no issues with aging caps, no issues with sourcing peripherals (unless a pandemic comes along messing up everyone's supply chains), no issues with compatibility.
By only having a single platform to support out of the box, you get rid of having to support multiple hardware configurations, which could cause headaches for newcomers on day one. Remember, it was basically an attempt to remove the roadblocks to getting people into CS. One of those roadblocks is getting people to "hello world".
IMO it's a similar reason to why Arduino worked so well: sure, we could push people towards any other microcontroller, but having a single known board (at least to start with) puts everyone in the same boat, makes it easier (and cheaper) to offer support, and lowers the barrier to entry IMO. Basically it solves the fragmentation issue.
Is it the best way to learn? That depends on how you look at things. IMO it makes a great stepping stone into the field, which can then lead on to other things/interests. You would probably learn more early on by skipping the "spoon feeding" stage, but that (IMO) comes with a steeper learning curve, which could drive people away from the subject.
I know I delayed my own learning of the nRF platform simply because at the time the toolchain was a PITA to get started with (esp. on an unclean machine that had other compilers installed), so a number of times I got fed up trying to get to "hello world", put it down, and came back to it later. That process of less handholding did, however, teach me more about the toolchain.
I generally agree on the simple, common approach being a great draw for Arduino and its related education. I was going through school around the time when Arduino took off. IMO, older vendor toolchains were just painful by comparison: licensed compilers ($$ license), janky IDEs that were death by 1000 cuts, having to learn different port masks (etc.) for initializing different microcontrollers, IO libraries for each microcontroller, proprietary programmers (devices to load compiled software onto the microcontroller). IMO, this is where FPGAs are largely still stuck nowadays.
Though it probably wasn't all that bad. My experiences with the bad side of things largely stem from the PIC lineup. I still have trusted configurations of MPLAB + C compiler that work vs others I just could never get working. Still have the PIC programmer. Some earlier ARM tooling (ARMv6 era) was quite like this, too. Luckily, it has all opened up quite a bit: either Arduino-level ease of use or even drag and drop. The latter did exist in the ARMv6 era, since I have a Freescale Kinetis that operates like that, minus the simple IDE & compiler of the Arduinos.
Simple IDE also means simple install, operation, and licensing to me. There may be a great paid IDE for the Kinetis, but the moment I have to start juggling more logins, node/floating license files, web-only environments, etc, I just remember it as time wasted on superfluous nonsense.
For me these days I value "time to hello world" over many other things, which is why I would rather use PlatformIO when I'm using a platform & framework it supports, even if it's lagging slightly behind the latest framework version from the vendor directly.
But looking back, back in the day when we had to walk uphill both ways in the snow to compile and write (get off mah lawn! :-P) I'm grateful I did learn "how the glue was made" instead of just using something ready made. But older grumpier me just wants to get shit done so I'm happy those days are pretty much behind me, but I'm ready to dust them off again if it was really needed.
That isn't something schools can buy several hundred of (with a stable platform) and not worry about electrical testing liability etc. Which is the main intended use case of the Pi.
The Raspberry Pi was basically only possible because of the Broadcom SoC. It is a capable chip, with approximately the right set of functionality for an SBC and at a good price (because it was already manufactured in volume). It was only available for something like the Raspberry Pi because Eben Upton worked for Broadcom and so could get to buy the chips and buy them at volume pricing - normally you'd have to have an established buyer relationship and be able to guarantee to buy far more than the Pi team was able to or expecting.
It's also worth remembering that originally the Pi wasn't maker/OSS focused - the goal was to have a computer cheap enough to be used for computing education in schools. In effect a modern day successor to the BBC Micro.
In the context of the goals and constraints the "minor" binary blob required to make it run was irrelevant. Even more so as basically every other similar SoC has exactly the same issue. The Broadcom parts presumably continue to remain competitive for their capability level and so they keep getting used but now thanks to the success of the Pi there is the will and capability of going full OSS.
That's because the idea for the RPi came from Eben Upton, who worked at Broadcom at the time - and no one else had better chips or (and here it gets crucial) would make them available at the low quantities that were initially expected without serious up-front money to get access to technical documentation and experience.
Creating a computing platform is - no matter the CPU vendor - one hell of an effort, often involving bunches of binary blobs of questionable quality, NDAs, buggy, outdated or plain lacking documentation and lots of money. The more effort you can save yourself (such as by using a product you already have experience with), the better.
Not true at all. The Pinephone runs on a SoC with no binary blobs the user has to install.
And the SoC can do all the stuff my RPi 2B does, and more: it has a proper audio codec, for one, real gigabit Ethernet, working suspend to RAM, accelerated video decoding, and much more open documentation... The RPi 2B's SoC is pretty terrible usability-wise compared to the A64.
It is actually the other way around since a lot of other boards use no proprietary blobs, either out of the box or can be easily adapted to do that. The RPi Foundation are without doubt good at making hardware, but they're even better at letting users believe they're the only player in that field.
If running without blobs is your priority, various Allwinner or Rockchip SoCs would be much better choice. (they run with upstream kernel, and aren't GPU with ARM core added as an afterthought)
The GPU is not the issue. There are open source OpenGL ES drivers available in upstream Mesa right now, that work with upstream kernel DRM driver, and for RPi4 users there's even a Vulkan driver.
It's been pointed out elsewhere but briefly there is a VLIW processor (the "VPU") that is initially in charge of the entire boot sequence before handing off control to the main ARM cores; the bootcode.bin firmware for RPi devices is exactly this code. This includes things like bringing up PLLs and the on-board UART before handing off control to the ARM core where "userspace" code runs.
There are many free RISC-V implementations, and several free GPU drivers for various hardware families, but there is no combination of the two in any meaningful sense right now. If I had to guess I'd say ImgTec is probably one of the ones you could expect to pop up in an SoC somewhere, since I doubt ARM or Qualcomm are going to license their GPUs outside their families... ImgTec recently started contributing some code to Mesa but otherwise have historically been pretty hostile. So the immediate speculation doesn't look great at the moment but who knows what could happen.
Interesting, thanks for both the corrections. Curious, but do you know if there's anyone still using Mali outside the ARM-licensed family at this point? I guess there haven't been many new entries to the mobile market for so long that the GPU pairings seem natural now...
Because frankly the parts available with fully free internals are...not very good. How many people care about the raspi having decent performance vs how many people care about the binary blob?
As I understand it, it was a SoC that was already in production by Broadcom for set top boxes so using an "off the shelf" SoC would reduce the time/cost required to bring the original Pi to market. I imagine there would be less risk to the manufacturer in this case since, if the Pi proved a failure and didn't move the expected units, the SoC could be repurposed for STBs.