
The killer features of BBC Basic for me were:

- instant-on - you turned on the power switch at the back of the BBC Micro, got the double beep, and in less than a second were dropped into a REPL / shell with the language

- integrated assembler - you could inline assembly language really easily

- great documentation - before the web, documentation meant books - of which there were many - but crucially, in the BBC Micro's case, also many television shows from the BBC.

- direct access to hardware - I realise this isn't BBC Basic itself really, but being able to PEEK and POKE (well, use ? and ! operators) to memory-mapped hardware addresses was great fun, and a great way to learn about how things worked.

The nostalgia for me around the language is strong, but without the hardware platform I'm not sure I'd want to go back to it.


The integrated assembler was very good. I worked for Acorn in the early 80s (was co-author of Acorn ISO Pascal), and we used our own H/W and S/W for all software development. ISO Pascal came in two 16K ROMS, one holding the compiler (in VM code), and one everything else (virtual machine, screen editor, Pascal libraries etc) which was all written in assembler using BBC Basic.

The combination of BASIC with the basic ability to have inline assembly was very convenient - just use a BASIC for loop for two-pass assembly, use CHAIN to split source into multiple files, etc.
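The reason the FOR loop gives you a two-pass assembler is that forward references to labels can only be filled in once a first pass has recorded every label's address. A minimal modern sketch of that idea in Python (the mnemonics, one-byte "instructions", and label syntax here are invented for illustration, not real 6502 or BBC BASIC behaviour):

```python
# Toy two-pass assembler: pass 0 records label addresses, pass 1 emits
# code with all forward references resolved.
def assemble(lines):
    labels = {}
    for p in range(2):              # pass 0: collect labels; pass 1: emit
        addr, out = 0, []
        for line in lines:
            if line.endswith(":"):  # label definition, consumes no space
                labels[line[:-1]] = addr
            else:                   # every "instruction" is 1 byte here
                op, _, arg = line.partition(" ")
                if arg in labels:
                    out.append((op, labels[arg]))
                elif arg:
                    if p == 1:      # errors only matter on the final pass
                        raise ValueError(f"undefined label {arg!r}")
                    out.append((op, None))   # unknown on pass 0 is fine
                else:
                    out.append((op, None))
                addr += 1
    return out

prog = ["JMP end", "NOP", "end:", "RTS"]
print(assemble(prog))   # [('JMP', 2), ('NOP', None), ('RTS', None)]
```

The structure mirrors the BBC BASIC idiom: the outer loop is the FOR loop, and suppressing errors on the first pass corresponds to OPT 0.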


There's no contact in your profile so I'll say it here: Thank you! That work had a big impact on me when I was getting started and I still think of the B incredibly fondly.


Thanks - it's great to hear that. Acorn was an amazing place to work at that time - an absolute dream job for me straight out of college.


The day my dad brought home my Acorn Electron was a great day indeed. Closely followed by the day I got a tape recorder and DIN to 3.5mm/2.5mm cable.


> The nostalgia for me around the language is strong

Same here. I cut my programming teeth on BBC Basic and later 6502 assembly, initially on an Electron, then the Model Bs at school, and we later had a Master 128 at home.

The integrated multi-pass assembler was a godsend for someone who got to the point of wanting to play around at a lower level, but before getting to that stage the language had other things that set it far apart from other micros of the era:

• Better structured programming constructs: proper procedures and functions, where some other 8-bit BASIC implementations had nothing beyond GOTO/GOSUB. With a little hoop-jumping you could do away with line numbers entirely.

• Long variable names, where some implementations only allowed two, or even just one, character. This allowed code to be a little more self-documenting. IIRC it only considered the first 40 characters⁰, despite not erroring when there were more, so if you used anything longer one variable could silently clobber another.

----

[0] but who was using such long names in the limited memory¹ of an 8-bit home micro?!

[1] I did actually write something a bit akin to modern JS minimisers, to make things fit in the smaller model A² machines: it removed REM statements and did a fairly naive scan-then-search-and-replace to replace long names with shorter ones

[2] these had only 16KB rather than 32, which, after screen memory and other standard allocations were taken out, didn't leave a lot of room for your code to live in
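The minimiser described in [1] translates nicely to a modern sketch. This is a hypothetical Python version of the naive scan-then-search-and-replace approach - drop REM lines, then replace each long variable name with a single letter. It ignores real BBC BASIC tokenisation and doesn't guard against a shortened name colliding with an existing one:

```python
import re

def minimise(source):
    """Strip REM comment lines and shorten long variable names -
    a naive sketch of the scan-then-search-and-replace approach."""
    lines = [l for l in source.splitlines()
             if not re.match(r"\s*\d*\s*REM\b", l)]       # drop REM lines
    text = "\n".join(lines)
    # Collect candidate names: start lowercase (keywords are uppercase
    # in BBC BASIC), longer than 2 chars, longest first.
    longnames = sorted(set(re.findall(r"[a-z][a-zA-Z]{2,}", text)),
                       key=len, reverse=True)
    short = iter("abcdefghijklmnopqrstuvwxyz")            # replacement pool
    for name in longnames:
        text = re.sub(rf"\b{name}\b", next(short), text)
    return text

src = "10 REM count sheep\n20 sheepCount=sheepCount+1\n30 PRINT sheepCount"
print(minimise(src))   # 20 a=a+1
                       # 30 PRINT a
```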


The name lookup routine is "Find name in catalogue" here: https://archive.org/details/BBCMicroCompendium/page/314/mode...

There's no obvious length check. I guess the actual limit will be 255 or 254 characters, maybe minus a bit if the info block has any extra data.

EDIT: previous discussion: https://news.ycombinator.com/item?id=19246063


Hmm. I wonder where I get the 40-character-limit memory so strongly from…

Line length was limited by a byte-length counter and IIRC included the line number and maybe the EOL, so would be something like 253 or 252. So the maximum usable variable name length will be a couple of bytes less than that, as you'll need a couple of characters to actually do something with it (LongLongLong...Long=1 and so forth).

EDIT: oh, interesting. The only references I can find to a variable name limit of 40 characters are referring to the PC BASIC implementations by MS: GWBasic, QuickBasic, and QBasic. I did do work in those too.


I donated my BBC Model B+ to a computer museum recently, along with a stack of Acorn User magazines (available on the Internet Archive, BTW) and software on cassette. Felt strong pangs of regret driving away. I can still feel the excitement of figuring it all out, a world opening up to me.

Those BBC TV shows had the unusual feature of broadcasting software over the end credits. Just had to tape the screeching and play it back into the computer.


One of the shows also did an experiment of downloading software from the screen itself - you sent off for a little box which I think (it's been a while!) plugged into the Beeb's serial port, fired up a bit of software and just before the end of the show they'd put a little square graphic overlay over the broadcast in the bottom left hand corner.

That was your cue to literally, physically stick the box over that square on the screen. Then a few minutes later, during the end credits, that square would turn into what looked to the human eye like just plain old static, but the sensor in the box stuck over it read it as a datastream that the software would interpret and save.

To be honest, it wasn't terribly reliable. I think we got it to work maybe once or twice in the few times they did it, but it was an interesting experiment by the BBC back in the 80s!


The Internet Archive should have the cassette software as well, although some of it might be hidden from view due to copyright concerns.


There's this: https://archive.org/details/BASICODE2Manual

I actually tried downloading programs from the Dutch radio station back in those days - and it worked.


>Those BBC TV shows had the unusual feature of broadcasting software over the end credits. Just had to tape the screeching and play it back into the computer.

Can you explain this? Do you mean that BASIC programs were encoded as sound in some way, and then could be uploaded into the computer and run?


Never used a BBC but 8bit computers of this era often used cassettes to load and save data.

The tape would contain bleeps and blurps which would be decoded into bytes by the computer. EG this is the sound produced by an Amstrad cpc464 loading a game: https://www.youtube.com/watch?v=OvChkOHgDIo

This meant that to copy software you didn't even need a computer, just a double cassette deck.

And by recording the credits of this BBC show to tape and playing that back into the computer you'd load some program. That's actually a brilliant idea; I wonder what kind of software they broadcast.
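The screeching is simple frequency-shift keying: in the BBC Micro's 1200-baud cassette format, a 0 bit is one cycle of a 1200 Hz tone and a 1 bit is two cycles of 2400 Hz. A rough Python sketch of generating such audio - the sample rate is arbitrary and the framing is simplified (real tapes add start/stop bits, block headers and checksums):

```python
import math

RATE = 48000            # samples per second (chosen for the sketch)
BAUD = 1200             # one bit lasts 1/1200 s

def bit_tone(bit):
    """One bit period of FSK audio: 0 -> one cycle of 1200 Hz,
    1 -> two cycles of 2400 Hz (the BBC cassette convention)."""
    freq = 2400 if bit else 1200
    n = RATE // BAUD    # samples per bit period
    return [math.sin(2 * math.pi * freq * t / RATE) for t in range(n)]

def encode(byte):
    """LSB-first data bits for one byte (framing bits omitted)."""
    samples = []
    for i in range(8):
        samples += bit_tone((byte >> i) & 1)
    return samples

wave = encode(ord("A"))
print(len(wave))        # 8 bits * 40 samples/bit = 320
```

Because it's just audio in this narrow band, anything that can carry it - a cassette, a vinyl record, or a TV broadcast's soundtrack - can carry the program.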


Yes, brilliant.

Sounds (pun not intended but noticed) like steganography.

https://en.m.wikipedia.org/wiki/Steganography


Got it, thanks.


Not just BASIC programs - anything digital can be encoded in this way.

Software has even been distributed on vinyl records and flexi singles!

You could connect a domestic cassette recorder up to a BBC Micro and use it to save and load software on normal cassettes!

There were favoured devices that would give better results and better cassettes for data storage and so on.

It's all ancient history and folklore now :-)


Thanks.


Yes. That's exactly it. Just like an acoustic modem. And also how software and data was stored on compact audio cassette when disk drives (the floppy kind, not the hard kind) were too expensive or out of reach of the average person.


Makes sense. Thanks.


Before getting my hands on a BBC Micro I'd done all my teenage programming on an Apple II - so the killer feature of BBC Basic for me was that it had a renumber command. No more having to re-type code because I'd used-up all the line numbers between line 110 and 120. A little thing but it felt like magic.
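Renumbering is a little more work than it sounds, since every GOTO/GOSUB target has to be patched to match the new numbers. A toy Python sketch of the idea (line format simplified, and only GOTO/GOSUB handled, not ON...GOTO and friends):

```python
import re

def renumber(lines, start=10, step=10):
    """Renumber BASIC lines and patch GOTO/GOSUB targets to match."""
    mapping = {}                       # old line number -> new line number
    for i, line in enumerate(lines):
        old = int(line.split()[0])
        mapping[old] = start + i * step

    def fix(m):                        # rewrite a jump target via the map
        return m.group(1) + str(mapping[int(m.group(2))])

    out = []
    for line in lines:
        old, rest = line.split(None, 1)
        rest = re.sub(r"(GOTO |GOSUB )(\d+)", fix, rest)
        out.append(f"{mapping[int(old)]} {rest}")
    return out

prog = ['5 PRINT "HI"', '7 GOTO 5']
print(renumber(prog))   # ['10 PRINT "HI"', '20 GOTO 10']
```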


I absolutely hated line numbers. In my current job I'm paid to develop Visual Basic applications - not a single line number in sight. Basic has certainly come a long way since the 80s.


The nice thing about line numbers is that you didn't have to learn a different code editor for different computer brands. I could walk up to any computer that had BASIC in a department store, and program up my favorite childish prank on every brand:

         10 PRINT "This computer is overheated."
         20 PRINT "WARNING: Computer about to EXPLODE!..."
         30 GO TO 10
         RUN
Newbies walking up saw the active screen, got wide-eyed and walked away quickly. One even called security, as I watched from a distance. Good times!


Structured Basic was already a thing in the 80's, see VMS Basic, or Turbo Basic.


Yes, VMS Basic will be ported to the OpenVMS platform on amd64 fairly soon and will be available next year. I'm one of the lucky few testing OpenVMS on amd64.


Interesting, thanks for sharing.


> No more having to re-type code because I'd used-up all the line numbers between line 110 and 120.

Line numbers are arbitrary - you can just use GOTO to jump to some out-of-line code, then GOTO back at the end. It gets a bit spaghetti-ish if you do it a lot, though.


Even back in ~1985 I'd have felt bad about such a practice. And I had only the very vaguest notions about "structured programming".

The school I went to only had a couple of computers, so I wrote code longhand on A4 lined paper. When I needed to insert lines, I wrote them on a slip of paper that I placed at the appropriate place on the page and stapled on the right-hand edge.

We've certainly come a long way.


Dijkstra ruined programming. >:(


Desperate times


Possibly this would be more up your alley in that case: http://www.mkw.me.uk/beebem/.

I must admit, I feel somewhat similarly to you. I want to prod at the hardware and write some assembly code. Whereas if I wanted to work with SDL there are better ways for me to do that.

With that being said, BBC Basic was a great entry point into programming for a lot of people and it's perhaps the case that it could still be so, so I do appreciate the fact this project exists.


https://virtual.bbcmic.ro/ is the full experience (clack clack clack)


While it's far from the same, I see a lot of similarities with modern web browsers (and part of why I love to play with them):

- Instant-on - You hit F12 and in less than a second you've got an IDE with a REPL

- Integrated assembler - While I don't think you can inline it, WASM is really easily used: https://developer.mozilla.org/en-US/docs/WebAssembly/Loading...

- Great documentation: https://developer.mozilla.org/en-US/

- Way too much access to hardware: I wish browsers had less access to hardware due to privacy and security, and I don't know how low level the APIs get, but it's something you can play around with as a random person with a web browser, so that's neat.


One of the books was an entire annotated disassembly of the BASIC interpreter, if memory serves. I vaguely remember there being some sort of kerfuffle about that.

Another thing that one got: a printed circuit diagram of the machine.

As for today: one can get an entire annotated disassembly of Elite, including the version that used the Second Processor: https://www.bbcelite.com


Those books had tremendous impact. BBC Basic was the first programming language I ever saw, in a children's book in a library, years before I ever got to touch a PC. It made computers seem so straightforward that it felt natural to reach for one as a tool or a toy. I've only ever seen a BBC Micro in a museum.


Another feature that stands out on the BBC is that the underlying routines that BASIC uses for maths, IO, etc, are available via Assembly, so you could easily integrate them into your Assembly programs.


>- integrated assembler - you could inline assembly language really easily

Yes, as easily as this:

some BASIC statements here

[ some assembly statements here ]

some BASIC statements here

IOW, you just had to enclose your assembly language statements in square brackets. That's it.

Of course, you would need to know what memory addresses to operate on, in a real-life program, as opposed to a demo, so that you could share data in some way between the BASIC code and the assembly code, otherwise the program might not be able to do anything useful.

I don’t know about the multi-pass assembler feature that others have mentioned in this thread.


On your last point: on the first pass the assembler wouldn't know about labels that came later in the assembly, but by the second pass it would have seen them. IIRC the normal way to run the assembler was a FOR loop from 0 to 3 with step size 3, as 0 indicated suppressing all assembler errors.


Ah, got it, thanks. But why not 0 to 1 with step size 1? Wouldn't that also give two passes, which should be sufficient, and which the said normal way also does?


There were four OPT modes for the assembler, numbered 0 to 3. 0 suppressed all errors and screen output; 3 did the opposite.

Using 1 would suppress errors, which would mean you wouldn't know if your code was bad.

You could use 0 and 3 or, if you don't want a listing, 0 and 2.

Search this page for 'first pass', for a more complete explanation: https://central.kaserver5.org/Kasoft/Typeset/BBC/Ch43.html
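The OPT value is effectively a two-bit field: bit 0 controls the assembly listing, bit 1 controls error reporting. A tiny Python sketch of that decoding, matching the 0/1/2/3 behaviour described above:

```python
def opt_flags(opt):
    """Decode a BBC BASIC assembler OPT value (0-3):
    bit 0 -> produce a listing, bit 1 -> report errors."""
    return {"listing": bool(opt & 1), "errors": bool(opt & 2)}

for o in range(4):
    print(o, opt_flags(o))
# OPT 0: no listing, no errors (first pass)
# OPT 3: listing and errors   (second pass)
# OPT 2: errors but no listing - the "0 and 2" combination mentioned above
```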


I think it's clear now - mode 0 is used to suppress the error messages about yet-unseen labels in the first pass, and 3 to give the output with any errors. Meanwhile, by the end of the first pass, all labels would have been seen, so in the second pass, the assembler could insert the correct addresses for them, at the places where those labels were used in jump statements, even if some of those statements were before where the labels were defined.

But I'll check that page out anyway.

Thanks again.


100% correct.


The other big difference with BBC Basic was that it had functions. The other version of Basic I used only had Subroutines accessed via Gosub.


Not just functions, but procedures too, and local variables. DEF PROC, DEF FN, and LOCAL.


Yeah, I never got that close access to one, unfortunately. I was a Dragon 32 kid, and only because my Dad bought me one cheap when they went bust.


And that is also why Dragon Data owes me about 4,000 UKP (in 1980's money).


> instant-on

Sometimes replaced with a smoke screen at this age: https://www.youtube.com/watch?v=TU55-7dWMi0


Mine did that some time ago. I've put it into storage for when I can afford to have it recapped and reconditioned.


Agreed and - perhaps apart from the integrated assembler - these were common features of 8-bit machines.

I had a ponder on the attractions of the 8-bit era a few days ago ...

https://thechipletter.substack.com/p/the-virtues-of-the-8-bi...

Closest I've found to a modern version is the Colour Maximite

https://geoffg.net/maximite.html


Are there any instant on/boot to REPL systems available today?


The Maximite - you can load it on a Raspberry Pi Pico or buy the hardware... https://en.m.wikipedia.org/wiki/Maximite


MicroPython on a raspberry pi, pyboard, esp, etc.


Teletext mode (MODE 7 IIRC) was fun too!


Having functions and procedures also made it stand out at the time.


Additionally - the BBC would put "how to code" programmes on the TV. That's how my neighbour got started when I was knee high.


The yahoo link you reference in [1] notes “All numbers in thousands”

I think you might be off by 3 orders of magnitude and google could “afford” to pay every homeless person a thousand times more than you’re suggesting.

I further think that undermines your argument about the problem being fundamentally intractable due to the scale.


That is true, it does say all numbers are in thousands. I also noticed I was accidentally looking at years rather than quarters. So 11k a month is the new number for a good year [1], and 5k a month for a bad year [2]. So yes, it now seems more tractable than before. My argument was only meant to show one reason why it might be intractable, though; there could be many others, so even if it fell flat on that one point, it may still be intractable for other reasons.

So the question remains: is it tractable? Given my argument above, the answer is still up in the air, because honestly there are many underlying assumptions baked in, so it doesn't really say much either way; it was only trying to show intractability, which it fell short of doing. Take the per-month figure, for example: if Google dumps all or even some of its profit into this for even one month, is it somehow going to reach the same profit margins the next month? Proving that is up to whoever claims it's tractable. There are too many variables like this, and showing that something like that is tractable carries a much bigger burden of proof.

[1]: https://www.wolframalpha.com/input?i=%2870+billion+%2F+12%29... [2]: https://www.wolframalpha.com/input?i=%2830+billion+%2F+12%29...


How much cheaper are RISC-V chips in reality?

I get there might be a saving for not having to pay ARM a royalty of some kind, but are RISC-V chips cheaper in practice as a result?

I was under the impression (rightly or wrongly) that the arm royalties per chip were really small.


I think one of the biggest fears from ARM would be the popularity of the Raspberry Pi and their community.

There are better boards than the Raspberry Pi (strictly speaking about specifications here). I took the path of playing with a lot of alternative boards; my biggest issue was lack of support - some boards had kernels that were never updated, etc. (YMMV).

If Raspberry Pi released a RISC-V board I have no reason to believe the community would not be just as strong. Sure in the beginning they would support both, but eventually the ARM support would wither.


> some boards had kernels never updated

As much as I love tinkering, this is why I stick with RPi boards and not others. RPis do everything I need, and have amazing support in software.


Is it truly amazing? I was under the impression that the Raspberry Pi requires some blobs to run properly. Are there detailed specifications for the Broadcom chip they're using? I was under the impression that they were under NDA and not obtainable by ordinary mortals. So maybe it's good because of the sheer number of people tinkering with it and smoothing rough edges, but it could be better. Please correct me if I'm wrong.


I would better phrase it as every other board has amazingly bad support.


Sometimes it is a matter of choosing the one that sucks the least.


> Please correct me if I'm wrong.

My memory told me it was the GPU that needed the blobs. So I asked at DDG

https://duckduckgo.com/?t=ftsa&q=binary+blobs+and+the+Raspbe...

Turned up this: https://wiki.debian.org/RaspberryPi and it says...

> All Raspberry Pi models before the 4 (1A, 1B, 1A+, 1B+, Zero, Zero W, 2, 3, Zero 2 W) boot from their GPU (not from the CPU!), so they require a non-free binary blob to boot

So the 4 (and I suppose the 5, if it ever actually comes...)

Goes on to say:

> Since then, Broadcom publicly released some code, licensed as 3-Clause BSD, to aid the making of an open source GPU driver. The "rpi-open-firmware" effort to replace the VPU firmware blob started in 2016. See more at https://news.ycombinator.com/item?id=11703842 . Unfortunately development of rpi-open-firmware is currently (2021-06) stalled.

So there you are. You're not wrong, but not strictly correct either, depending on the definition of "...to run properly".

https://github.com/librerpi/rpi-open-firmware has updates 3-months ago


You're right. Pretty much all the low level stuff below the kernel in a pi is closed source.

Want your own custom boot rom so you can start up in half a second rather than the default 3 seconds before linux gets loaded? - sorry, we can't share the code for that with you, nor the specs for you to write it yourself!


There are a few popular bare metal projects - Pi 1541 springs to mind. So it is possible, though perhaps only with earlier generations.


That's not what they meant. Yes, anybody can write a bare metal alternative to Linux (at the very least by looking at the Linux codebase). But Pi 1541 still depends on the bootcode.bin, fixup.dat and start.elf binary blobs, which the OP was complaining about.


It wasn't clear to me what they meant. I'm not familiar with the details of boot.bin, but my read of what the GP was implying was that you had to use the Raspbian kernel and drivers. Thanks for the information.


Same. I would prefer to use some of the alternatives, but at the end the support level is important.

That said, I'll tolerate some regression in support to switch to a RISC-V based competitor.


Beaglebone?


And also lots of the alternative boards target RPi users ("like Raspberry Pi but better, cheaper, etc"). If RPi switched, it would probably make sense for many of the other boards to switch too, in order to stay comparable to RPi.


If they want to compete they have to agree on something like ACPI so the OS vendors can target their boards without a separate distribution for each board.


Most ARM boards already use Device Tree with Linux, there isn't any new agreement needed. What is missing is getting the drivers for each board into the same source tree.

NetBSD provides a filesystem image containing one kernel that will boot on all supported ARMv8 boards, you may need to write a board-specific build of u-boot to the start of that image. A Linux distribution could do the same as this.


Is Raspberry Pi really that big for ARM?


In the past, one of often cited reasons for the lack of success of ARM server offerings was that every developer machine is x64, no software is ever tested on ARM and no dev has a machine to do so. As the Raspberry Pi got more performant it brought a lot of people running software on ARM, leading to lots of software publishing ARM builds, toolchains getting exercised, bugs ironed out, etc.


With Apple having moved to Arm there are definitely ecosystem benefits brewing too. I can run a Debian VM out of the box with UTM, compile an Arm64 package using standard Linux toolchain and have it run directly on a RPi. It's like having a supercharged RPi development environment with me all the time.


I wouldn't be surprised if the existence of many ARM docker images for hobbyist projects were indirectly due to the popularity of the Raspberry Pi (in addition to Apple switching to ARM, more recently).


for hobbyists yes. Usually people stick to the technology they first started experimenting with, and RPi is that platform for many future experts.

If you are going to learn computer architecture, you will learn something cheap you have on hand


The technology is Linux, GPIOs, etc., not which instruction set the CPU has. That is completely irrelevant, and Raspberry Pi switching to USB-C was a much bigger change from a user standpoint than switching to RISC-V would be.

Assuming performance and software support is comparable. Which obviously won't be the case for a long long time.

But there are few things as irrelevant as the CPU instruction set. (Part from specific extensions, like AES support enabling quick crypto etc.)


> The technology is ...

The technology is what people know. Using a different SoC board, camera, etc. adds more time to gain the same level of knowledge.

Doctors will latch onto a single product solution so they don't have to spend the time learning how to operate an alternative. Hospitals need to stock consumables based not on the best products but on what doctors know.

Airlines retain the same aircraft to reduce time spent learning to operate an alternative. Boeing marketed this as a sales feature of the 737 MAX: no extra flight training required!

Software developers will often stick with the same language, even though others better fit the domain problem. Few seem willing to take the time to try and play with new concepts, languages, and operating systems.

Trying new and different things is what drives innovation - but it's not how most of the world operates.


Exactly my point. Changing the instruction set on the CPU won't change the API for the camera etc.

Some things will surely change because the hardware backed features work in a different way, but mostly it will be similar enough.

Going from a Raspberry Pi to an Orange Pi could be a much bigger leap than switching to a Raspberry Pi with RISC-V that has mature software support.


Very much disagree. Of course linux and GPIO is important, but the widest use case for them among myself and people I know is as a build box and/or something to test ARM software on. One person uses it to learn ASM. From my small and surely non-representative sample, the architecture is maybe the most important thing. So I don't think we can confidently say that CPU arch isn't relevant.


But instruction set, in practice and in this case, is tightly coupled with form factor and MIPS per watt. I work in mobile robotics: my low end choice is RPi, high end an NVIDIA board. While I can see Risc-V challenging ARM here, they don’t yet. (Excepting an ultra low power/low compute edge-case.) I just don’t see any CISC architecture that’s available today competing.


> But there are few things as irrelevant as the CPU instruction set.

I’m going to have to disagree with that statement…

… but only because I’m in the middle of goofing around with ARM assembly on an RPi as we speak.


I ... do not know.

The Amiga 500 didn't make people stick to the m68k. They went to the IBM PC, as did everyone else.


And they are still bitter about it !


There aren't many other options if you want or need a system with physical access near your desk that is also designed to run GNU/Linux. I think the only other options are less-known SBC-based systems, or systems where GNU/Linux is an afterthought at best. With the Raspberry Pi 4, it's quite likely that one of your colleagues already got the locally preferred distribution to boot on it. Lots of people use them to reproduce and fix generic AArch64 issues, even if they have remote root access (including the ability to install another OS) to much faster lab machines.


I think a lot of the motivation here is to avoid RPi leading to further interest and development on the RISC-V side. SiFive may be willing to make something perfect for the RPi, but the real risk is that if RPi goes RISC-V, then software gets ported and tested on RISC-V. People start liking it and hacking on it. And then RISC-V suddenly gets a lot more interest from ARM licensees and their competitors. Which, at the very least, will lead to higher licensee leverage in negotiations with ARM on renewal.


Bingo. This is what they mean by "strategic investment." It's not because they are just lovers of the education and maker market. They'd be negligent to their shareholders to not make this investment.


>but the real risk is if RPI goes RISC-V, then software gets ported and tested on RISC-V.

It's already happening. RISC-V doesn't need the Raspberry Pi.

Yet it would indeed be an accelerator, but I'm not sure how big. ARM are certainly very afraid.


Cheaper is not even the primary reason to use RISC-V over Arm.

The ability to modify the design for your application and be able to apply a plethora of cores where they are needed at only the cost of silicon area.

Apple has been moving their management cores on their M-series parts from Arm to RV and they have an architectural license.

Everyone has an architectural license for RISC-V, you can add your own instructions, change the mix of available instructions. A whole parametric RV32 or RV64 will be available on every node at every fab.

This move by Arm is absolutely to block RPI from moving to RISC-V. They have mindshare and distribution.


Other than geopolitical and totalitarianism concerns, is that a good and bad thing that let you freely change your instruction … implementing no charge I understand, but everyone has their own architecture …

Art is not just about no limit, but what you do with the limitation including illuminating there is a limit. Not just about beyond it but of course you could.


Not sure, but even really small royalties on $4 computers can still accumulate. They may also prevent $4 computers turning into $1 computers, even if huge volumes would otherwise call for this.

Moreover, those royalties put fences on what might otherwise be more open and collaborative. It would be sad, if the tinkering goals set out originally by Raspberry Pi were curtailed by ISA license restrictions. RISC-V inherently has an edge over ARM in that dimension.


SiFive would practically give away their latest P870 cores (close to A78 performance levels) to get them shipping by the millions in the Pi because of the free advertising, free exposure, and massive boost it would give to the development of the RISC-V ecosystem.

Once such a switch was made, Pi could even consider using free and open source cores for their low-end devices where margins are slim (an area where RISC-V has already been gaining ground rapidly).


Where can I buy one of these p870 cores and how much do they typically cost? (Or boards containing them?)

These are the newest OoO cores with full 1.0 vector extensions, right?


As a normal consumer you can't. But the predecessor P670 should be available in the SG2380 board in 2024Q3 [0]. It will have 16 P670 and 8 X280 cores, and will cost $120 to $200 (without included RAM).

[0] https://forum.sophgo.com/t/about-the-sg2380-oasis-category/3...


>P870

These cores have just been announced, and are available for licensing. You can contact SiFive if you're interested.

If what you want is hardware somebody's already made, that's going to take 2-3 years as per tradition.

>These are the newest OoO cores with full 1.0 vector extensions, right?

Yes, but so are multiple generations of predecessors. It seems that hardware based on P670[0] and X280[1] (both match that description) will be available for purchase in less than 10 months from now[2].

0. https://sifive.cdn.prismic.io/sifive/7be0420e-dac1-4558-85bc...

1. https://sifive.cdn.prismic.io/sifive/9405d3d0-35a1-4680-a259...

2. https://forum.sophgo.com/t/about-the-sg2380-oasis-category/3...


A tougher question as so many factors that I hope Asianometry on YouTube cover one day.

Whilst the RISC-V instruction set is free from licence/patent fees, the design of those chips will be made by a company that will need to recover costs, so there will be a cost. Compare that with ARM, who already offer some core designs for free, like the Cortex-M0 and others. I know the Raspberry Pi Pico uses a Cortex-M0+.

Though, many do seem to blur the lines between the instruction set and the core design you get in the chip you buy.

Over time that will change, but only when open-source RISC-V chips can compete with the designs other companies make and sell. As a mindset, that may cause issues for the evolution of RISC-V, as people could get burned by expectations exceeding reality and find that "you get what you pay for" still holds more strongly than envisioned. Yes, eventually open-source CPU cores will get there, but production costs and scale will still be a factor, among the many aspects that all add up to the final cost. That applies even at the level of a tried and fully robust software stack, which is still behind ARM.

ARM may well kill off x86 as a mainstream before RISC-V fully bites it.

Look at how long ARM took to get where it is, and it started with microcontrollers, used in many things. That is where RISC-V is starting to get traction, but scaling beyond that, whilst it could be faster than ARM's history, is not as clear-cut as many foresee.


ARM did not start with microcontrollers - it started with desktop CPUs. In fact, it was a very long time before it got to microcontrollers!


For "coming down the pipeline" they're essentially free.

Today, the C910 is an Apache 2-licensed, hardware-proven, out-of-order core, available on GitHub at https://github.com/T-head-Semi/openc910. It's a little slower than an RPi 3's core.


What’s the price per unit?


There are no license fees if that's what you're asking.

Additionally, it's very competitive in PPA metrics for its gate count, so it's cheaper than similar cores in terms of wafer area as well.


I think the point is, very few people are looking to buy some RTL, they want the silicon. So the question is what's the price of the silicon? My understanding is anything comparable to the SoC in the RPi boards is still quite a lot pricier.



Every other place seems to claim that the RRP is $119. So this must be a mistake/scam? Or did the price go down this much?


Good question. I looked it up and apparently the royalties are around 2% of the price of the chip, so it probably won't make a huge difference. They do also charge an up front "membership fee" which seems to vary from $200k to $10m depending on the chip.

RISC-V is very very popular already, but most RISC-V cores are not user accessible. They're controllers in hard drives, embedded management cores in SoCs, etc. Definitely nice to not have to pay for licensing that, and verification doesn't matter so much since you are more likely to be able to work around bugs in software.

I suspect it won't make a huge difference to visible CPUs. Probably the biggest impact will be flexibility.


Regardless of cost, working with almost anyone other than Broadcom would probably be seen as a huge plus as well.


A sibling comment makes the more relevant point about powerline, but it’s worth considering that whatever the power company has to pay, you the consumer will be paying for in the end.

You could make the case (a good one I think) that a homeowner providing connectivity should get a discount to their bill, or that the power company could directly charge for a separate communication channel.

That’s kinda saying the same thing in two different ways, and depending on your point of view neither might feel fair. But the consumer is paying either way.

Edit: “separate communication channel” could include a human meter reader too!


> Elon will allow people he disagrees with on the platform

I’m not comfortable relying on a single “benevolent” dictator deciding what is and isn’t allowed on the platform.

Particularly because the benevolence or lack thereof is, at best, hotly debated in this context.


I like to think some awards are less bad than others. See Autocar’s blog[1] about how they voted in their annual Car of the Year award. It seems reasonably transparent.

[1] https://www.autocar.co.uk/opinion/new-cars/how-autocar-voted...


Be very careful with 'X of the Year' verbiage, especially at the end of the year. Many publications use that phrasing for awards, and stumbling upon them via Google or other means almost always results in biased, paid-for opinions.


"Best movie of the year" quotes made in February are my favorite


To be fair, a glance at her LinkedIn profile (top google search result for her name) suggests that 1969 could well be her birth year.


Totally unprofessional to be born in that year, IMO. She should have considered the long-term ramifications when deciding to emerge from the womb.


It's like the apocryphal 13th floor in hotels, people should be born in either 1968 or 1970. Otherwise I might make a fool of myself by mocking their username.

Same as anyone born on April 20th.


Don't be too hard on yourself. I know someone who was born in 1988 who quickly realized it's a bad idea to have a username that ends in '88'.

If they can avoid it, then these GenXers born in 1969 can :)


In fairness, if I was born in 1969 I would be deeply torn between using it in every single username I had for the sake of comedy, or avoiding it precisely because people tend to make assumptions and actually being born then is never going to be anyone's first thought.


Ah, well not the first time I've made a fool of myself jumping to the wrong conclusion. I'll change the comment.


I mean, nobody forced them to use their birth year in their username


Out of pure boredom, I can confirm the records show this.


I'm looking forward to the Y2k69 hysteria.


Where every username containing 69 has already been registered before anyone born in 2069 can use one?


A friend of mine was born on 6/9/69 and she made sure to tell everyone.


I’d go further for the specific example of the book. For me the price of reading a good book isn’t a price at all. It’s a gain. I’ve paid the up-front price to be able to read the book and gain enjoyment from it.

I get that there are different reasons for reading a book, but even then, it also seems overly simplistic and absolutist to me.


A single game of Factorio can take a long time. Mine usually take around 100 hours.

The craving to (tweak|move|refactor|grow) the base, for certain personality types that are richly represented on HN, can mean you spend hundreds of hours more on it too.


Try installing the Space Exploration mod, and you can easily add an extra zero to that figure.


Such a summary can be very helpful to disambiguate the subject matter and save me (and I’m guessing many other folks too) the time of reading every article to find out whether it’s interesting/relevant to me.

