- instant-on - you turned on the power switch at the back of the BBC Micro, got the double beep, and in less than a second were dropped into a REPL / shell with the language
- integrated assembler - you could inline assembly language really easily
- great documentation - before the web, documentation meant books - of which there were many - but also, crucially in the BBC Micro's case, many television shows from the BBC.
- direct access to hardware - I realise this isn't BBC Basic itself really, but being able to PEEK and POKE (well, use ? and ! operators) to memory-mapped hardware addresses was great fun, and a great way to learn about how things worked.
The nostalgia for me around the language is strong, but without the hardware platform I'm not sure I'd want to go back to it.
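For anyone who never met them: ? is the byte indirection operator and ! its 32-bit cousin. A rough sketch from memory (the address here is just a spare user location, purely illustrative, not a real hardware register):

10 M% = &70 : REM a free location, purely illustrative
20 ?M% = 42 : REM poke a single byte
30 PRINT ?M% : REM peek it straight back
40 !M% = &12345678 : REM poke a 4-byte little-endian word
50 PRINT ~!M% : REM ~ formats the value in hex

Point ? and ! at the memory-mapped I/O of whatever hardware you were curious about and you could drive it directly from BASIC.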
The integrated assembler was very good. I worked for Acorn in the early 80s (I was co-author of Acorn ISO Pascal), and we used our own H/W and S/W for all software development. ISO Pascal came in two 16K ROMs, one holding the compiler (in VM code), and one holding everything else (virtual machine, screen editor, Pascal libraries etc), which was all written in assembler using BBC Basic.
The combination of BASIC with the built-in ability to have inline assembly was very convenient - just use a BASIC FOR loop for two-pass assembly, use CHAIN to split source into multiple files, etc.
There's no contact in your profile so I'll say it here: Thank you! That work had a big impact on me when I was getting started and I still think of the B incredibly fondly.
> The nostalgia for me around the language is strong
Same here. I cut my programming teeth on BBC Basic and later 6502 assembly, initially on an Electron, then the Model Bs at school, and we later had a Master 128 at home.
The integrated multi-pass assembler was a godsend for someone who got to the point of wanting to play around at a lower level, but before getting to that stage the language had other things that set it far apart from other micros of the era:
• Better structured programming constructs: proper procedures and functions, where some other 8-bit BASIC implementations had nothing beyond GOTO/GOSUB. With a little hoop-jumping you could completely do away with line numbers.
• Long variable names, where some implementations only allowed two, or even just one, character. This allowed code to be a little more self-documenting. IIRC it only considered the first 40 characters⁰ despite not erroring when there were more though, so if you used anything longer one variable could silently clobber another.
----
[0] but who was using such long names in the limited memory¹ of an 8-bit home micro?!
[1] I did actually write something a bit akin to modern JS minimisers, to make things fit in the smaller model A² machines: it removed REM statements and did a fairly naive scan-then-search-and-replace to replace long names with shorter ones
[2] these had only 16KB rather than 32, which, after screen memory and other standard allocations were taken out, didn't leave a lot of room for your code to live in
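For those who never saw it, the structured constructs mentioned above looked something like this (a from-memory sketch, names invented):

10 FOR I% = 1 TO 3
20 PROCgreet("world")
30 NEXT
40 END
50 DEF PROCgreet(name$)
60 PRINT "Hello, "; name$
70 ENDPROC

Proper named procedures with parameters, where contemporaries offered only GOSUB to a line number.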
Hmm. I wonder where I get the 40-character-limit memory so strongly from…
Line length was limited by a byte-length counter and IIRC included the line number and maybe the EOL, so it would be something like 253 or 252. So the maximum usable variable name length would be a couple of bytes less than that, as you'd need a couple of characters to actually do something with it (LongLongLong...Long=1 and so forth).
EDIT: oh, interesting. The only references I can find to a variable name limit of 40 characters are referring to the PC BASIC implementations by MS: GWBasic, QuickBasic, and QBasic. I did do work in those too.
I donated my BBC Model B+ to a computer museum recently, along with a stack of Acorn User magazines (available on the Internet Archive, BTW) and software on cassette. Felt strong pangs of regret driving away. I can still feel the excitement of figuring it all out, a world opening up to me.
Those BBC TV shows had the unusual feature of broadcasting software over the end credits. Just had to tape the screeching and play it back into the computer.
One of the shows also did an experiment of downloading software from the screen itself - you sent off for a little box which I think (it's been a while!) plugged into the Beeb's serial port, fired up a bit of software and just before the end of the show they'd put a little square graphic overlay over the broadcast in the bottom left hand corner.
That was your cue to literally, physically stick the box over that square on the screen. A few minutes later, during the end credits, that square would turn into what looked to the human eye like plain old static, but the sensor in the box stuck over it read it as a datastream that the software would interpret and save.
To be honest it wasn't terribly reliable; I think we got it to work maybe once or twice in the few times they did it, but it was an interesting experiment by the BBC back in the 80s!
>Those BBC TV shows had the unusual feature of broadcasting software over the end credits. Just had to tape the screeching and play it back into the computer.
Can you explain this? Do you mean that BASIC programs were encoded as sound in some way, and then could be uploaded into the computer and run?
Never used a BBC but 8bit computers of this era often used cassettes to load and save data.
The tape would contain bleeps and blurps which would be decoded into bytes by the computer. E.g. this is the sound produced by an Amstrad CPC 464 loading a game: https://www.youtube.com/watch?v=OvChkOHgDIo
This meant that to copy software you didn't even need a computer, just a double cassette deck.
And that by recording the credits of this BBC show to tape and playing that back into the computer you'd load some program. That's actually a brilliant idea; I wonder what kind of software they broadcast.
Yes. That's exactly it. Just like an acoustic modem. And also how software and data was stored on compact audio cassette when disk drives (the floppy kind, not the hard kind) were too expensive or out of reach of the average person.
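From the BBC Micro side, the whole tape workflow was just a handful of commands typed at the prompt (filenames here invented):

SAVE "MYPROG"   (write the current BASIC program out as those bleeps)
LOAD "MYPROG"   (read it back in)
CHAIN "MYPROG"   (load and run in one go)
*SAVE DATA 3000 3100   (dump raw memory from &3000 to &3100)

So a cassette recording of a broadcast was, to the machine, indistinguishable from one you had made yourself.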
Before getting my hands on a BBC Micro I'd done all my teenage programming on an Apple II - so the killer feature of BBC Basic for me was that it had a renumber command. No more having to re-type code because I'd used up all the line numbers between line 110 and 120. A little thing but it felt like magic.
I absolutely hated line numbers. In my current paid job I'm paid to develop Visual Basic applications, not a single line number in sight. Basic has certainly come a long way since the 80s.
The nice thing about line numbers is that you didn't have to learn a different code editor for different computer brands. I could walk up to any computer that had BASIC in a department store, and program up my favorite childish prank on every brand:
10 PRINT "This computer is overheated."
20 PRINT "WARNING: Computer about to EXPLODE!..."
30 GO TO 10
RUN
Newbies walking up saw the active screen, got wide-eyed and walked away quickly. One even called security, as I watched from a distance. Good times!
Yes, VMS Basic will be ported to the OpenVMS platform on amd64 fairly soon and will be available next year. I'm one of the lucky few testing OpenVMS on amd64.
> No more having to re-type code because I'd used-up all the line numbers between line 110 and 120.
Line numbers are arbitrary; you can just use GOTO to jump to some out-of-line code, then GOTO back at the end. It gets a bit spaghetti-ish if you do it a lot, though.
Even back in ~1985 I'd have felt bad about such a practice. And I had only the very vaguest notions about "structured programming".
The school I went to only had a couple of computers, so I wrote code longhand on A4 lined paper. When I needed to insert lines, I wrote them on a slip of paper that I placed at the appropriate place on the page and stapled on the right-hand edge.
I must admit, I feel somewhat similarly to you. I want to prod at the hardware and write some assembly code. Whereas if I wanted to work with SDL there are better ways for me to do that.
With that being said, BBC Basic was a great entry point into programming for a lot of people and it's perhaps the case that it could still be so, so I do appreciate the fact this project exists.
- Way too much access to hardware: I wish browsers had less access to hardware, for privacy and security reasons, and I don't know how low-level the APIs get, but it's something you can play around with as a random person with a web browser, so that's neat.
One of the books was an entire annotated disassembly of the BASIC interpreter, if memory serves. I vaguely remember there being some sort of kerfuffle about that.
Another thing that one got: a printed circuit diagram of the machine.
As for today: one can get an entire annotated disassembly for Elite, including the version that used the Second Processor: https://www.bbcelite.com
Those books had tremendous impact. BBC Basic was the first programming language I ever saw, in a children's book in a library, years before I ever got to touch a PC. It made computers seem so straightforward that it felt natural to reach for one as a tool or a toy. I've only ever seen a BBC Micro in a museum.
Another feature that stands out on the BBC is that the underlying routines BASIC uses for maths, IO, etc. are also callable from assembly, so you could easily integrate them into your assembly programs.
>- integrated assembler - you could inline assembly language really easily
Yes, as easily as this:
some BASIC statements here
[ some assembly statements here ]
some BASIC statements here
IOW, you just had to enclose your assembly language statements in square brackets. That's it.
Of course, you would need to know what memory addresses to operate on, in a real-life program, as opposed to a demo, so that you could share data in some way between the BASIC code and the assembly code, otherwise the program might not be able to do anything useful.
I don’t know about the multi-pass assembler feature that others have mentioned in this thread.
On your last point: on the first pass the assembler wouldn't know about labels that came later in the assembly, but on the second pass it would have seen them. IIRC the normal way to run the assembler was to do a FOR loop from 0 to 3 with step size 3, as 0 indicated suppressing all assembler errors.
Ah, got it, thanks. But why not 0 to 1 with step size 1?
Wouldn't that also give two passes, which should be sufficient, and which the said normal way also does?
I think it's clear now - mode 0 is used to suppress the error messages about yet-unseen labels in the first pass, and 3 to give the output with any errors. Meanwhile, by the end of the first pass, all labels would have been seen, so in the second pass, the assembler could insert the correct addresses for them, at the places where those labels were used in jump statements, even if some of those statements were before where the labels were defined.
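Putting this subthread together, a minimal session looked roughly like this (a sketch from memory; &FFEE is OSWRCH, the OS write-character routine - the rest is illustrative):

10 DIM code 100
20 FOR pass = 0 TO 3 STEP 3
30 P% = code
40 [OPT pass
50 .start
60 LDA #ASC"*"
70 JSR &FFEE   \ OSWRCH: print the character in A
80 JMP done    \ forward reference - unknown on the first pass
90 BRK
100 .done
110 RTS
120 ]
130 NEXT
140 CALL code

OPT 0 on the first pass swallows the "unknown label" errors for .done; OPT 3 on the second pass both reports real errors and produces the listing, by which point every label's address is known.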
That is true, it says all numbers are in thousands. I also noticed I was accidentally looking at years and not quarters, so 11k a month, for example, is the new number for a good year [1] and 5k a month is the example for a bad year [2]. So yes, it now seems more tractable than it was before, but my argument was only meant to show one reason why it might be intractable; there could be many others. Though my argument fell flat for that one reason, there are still many other reasons it could be intractable.
So the question remains: is it tractable? Given my argument above, the answer is still up in the air, because the honest truth is there are many underlying assumptions in my argument, so again it doesn't really say much about it being tractable. It was only trying to say it was intractable, which again it fell short of doing. Take the per-month figure, for example: if Google dumps all, or even some, of its profit into that for even one month, proving that it would somehow still reach the same profit margins the next month is up to whoever is trying to show it's tractable. There are too many other variables like this, and showing that something like that is tractable really carries a much bigger burden.
I think one of the biggest fears from ARM would be the popularity of the Raspberry Pi and their community.
There are better boards than the Raspberry Pi (strictly speaking specifications here). I took that path of playing with a lot of alternative boards, my biggest issue is lack of support, some boards had kernels never updated, etc...(YMMV).
If Raspberry Pi released a RISC-V board I have no reason to believe the community would not be just as strong. Sure in the beginning they would support both, but eventually the ARM support would wither.
Is it truly amazing? I was under the impression that the Raspberry Pi requires some blobs to run properly. Are there detailed specifications for the Broadcom chip they're using? I was under the impression that they were under NDA and not obtainable by ordinary mortals. So maybe it's good because of the sheer number of people tinkering with it and smoothing rough edges, but it could be better. Please correct me if I'm wrong.
> All Raspberry Pi models before the 4 (1A, 1B, 1A+, 1B+, Zero, Zero W, 2, 3, Zero 2 W) boot from their GPU (not from the CPU!), so they require a non-free binary blob to boot
So the 4 (and I suppose the 5, if it ever actually comes...)
Goes on to say:
> Since then, Broadcom publicly released some code, licensed as 3-Clause BSD, to aid the making of an open source GPU driver. The "rpi-open-firmware" effort to replace the VPU firmware blob started in 2016. See more at https://news.ycombinator.com/item?id=11703842 . Unfortunately development of rpi-open-firmware is currently (2021-06) stalled.
So there you are. Not wrong, but not strictly correct, depending on your definition of "...to run properly".
You're right. Pretty much all the low level stuff below the kernel in a pi is closed source.
Want your own custom boot ROM so you can start up in half a second rather than the default 3 seconds before Linux gets loaded? Sorry, we can't share the code for that with you, nor the specs for you to write it yourself!
That's not what they meant. Yes, anybody can write bare metal alternative to Linux ( at the very least by looking at the Linux codebase). But still that Pi 1541 depends on the bootcode.bin, fixup.dat and start.elf binary blobs, which the OP was complaining about.
It wasn’t clear to me what they meant. I’m not familiar with the details of bootcode.bin, but my read of what GP was implying was that you had to use the Raspbian kernel and drivers. Thanks for the information.
And also lots of the alternative boards target RPI users ("like Raspberry Pi but better, cheaper etc"). If RPI switched then it would probably make sense for many of the other boards to switch too, in order to stay comparable to RPI.
If they want to compete they have to agree on something like ACPI so the OS vendors can target their boards without a separate distribution for each board.
Most ARM boards already use Device Tree with Linux, there isn't any new agreement needed. What is missing is getting the drivers for each board into the same source tree.
NetBSD provides a filesystem image containing one kernel that will boot on all supported ARMv8 boards, you may need to write a board-specific build of u-boot to the start of that image. A Linux distribution could do the same as this.
In the past, one of often cited reasons for the lack of success of ARM server offerings was that every developer machine is x64, no software is ever tested on ARM and no dev has a machine to do so. As the Raspberry Pi got more performant it brought a lot of people running software on ARM, leading to lots of software publishing ARM builds, toolchains getting exercised, bugs ironed out, etc.
With Apple having moved to Arm there are definitely ecosystem benefits brewing too. I can run a Debian VM out of the box with UTM, compile an Arm64 package using standard Linux toolchain and have it run directly on a RPi. It's like having a supercharged RPi development environment with me all the time.
I wouldn’t be surprised if the existence of many ARM Docker images for hobbyist projects were indirectly due to the popularity of the Raspberry Pi (in addition to Apple switching to ARM, more recently).
The technology is Linux, GPIOs etc., not which instruction set the CPU has. That is completely irrelevant, and Raspberry Pi switching to USB-C was a much bigger change from a user standpoint than switching to RISC-V would be.
Assuming performance and software support is comparable. Which obviously won't be the case for a long long time.
But there are few things as irrelevant as the CPU instruction set. (Apart from specific extensions, like AES support enabling quick crypto etc.)
The technology is what people know. Using a different SoC, board, camera, ... requires more time to gain the same level of knowledge.
Doctors will latch onto a single product solution so they don't have to spend the time learning how to operate an alternative. Hospitals need to stock consumables based not on the best products but on what doctors know.
Airlines retain the same aircraft types to reduce time spent learning to operate an alternative. Boeing marketed this as a sales feature of the 737 MAX: no extra flight training required!
Software developers will often stick with the same language, even though others better fit the domain problem. Few seem willing to take the time to try and play with new concepts, languages, and operating systems.
Yet it's trying new and different things that drives innovation.
Very much disagree. Of course linux and GPIO is important, but the widest use case for them among myself and people I know is as a build box and/or something to test ARM software on. One person uses it to learn ASM. From my small and surely non-representative sample, the architecture is maybe the most important thing. So I don't think we can confidently say that CPU arch isn't relevant.
But instruction set, in practice and in this case, is tightly coupled with form factor and MIPS per watt. I work in mobile robotics: my low end choice is RPi, high end an NVIDIA board. While I can see Risc-V challenging ARM here, they don’t yet. (Excepting an ultra low power/low compute edge-case.) I just don’t see any CISC architecture that’s available today competing.
There aren't many other options if you want or need a system with physical access near your desk that is also designed to run GNU/Linux. I think the only other options are less-known SBC-based systems, or systems where GNU/Linux is an afterthought at best. With the Raspberry Pi 4, it's quite likely that one of your colleagues already got the locally preferred distribution to boot on it. Lots of people use them to reproduce and fix generic AArch64 issues, even if they have remote root access (including the ability to install another OS) to much faster lab machines.
I think a lot of the motivation here is to avoid RPI leading to further interest and development on the RISC-V side. SIFIVE may be willing to make something perfect for the RPI, but the real risk is if RPI goes RISC-V, then software gets ported and tested on RISC-V. People start liking it and hacking it. And then RISC-V suddenly gets a lot more interest from ARM licensees and their competitors. Which, at the very least, will lead to higher licensee leverage in negotiations with ARM on renewal.
Bingo. This is what they mean by "strategic investment." It's not because they are just lovers of the education and maker market. They'd be negligent to their shareholders to not make this investment.
Cheaper is not even the primary reason to use RISC-V over Arm.
It's the ability to modify the design for your application and to apply a plethora of cores wherever they are needed, at only the cost of silicon area.
Apple has been moving their management cores on their M-series parts from Arm to RV and they have an architectural license.
Everyone has an architectural license for RISC-V, you can add your own instructions, change the mix of available instructions. A whole parametric RV32 or RV64 will be available on every node at every fab.
This move by Arm is absolutely to block RPI from moving to RISC-V. They have mindshare and distribution.
Other than geopolitical and totalitarianism concerns, is it a good or a bad thing that you can freely change your instructions … implementing at no charge I understand, but everyone having their own architecture …
Art is not just about having no limits, but about what you do with the limitation, including illuminating that there is a limit. Not just going beyond it, though of course you could.
Not sure, but even really small royalties on $4 computers can still accumulate. They may also prevent $4 computers from turning into $1 computers, even if huge volumes would otherwise call for this.
Moreover, those royalties put fences on what might otherwise be more open and collaborative. It would be sad, if the tinkering goals set out originally by Raspberry Pi were curtailed by ISA license restrictions. RISC-V inherently has an edge over ARM in that dimension.
SiFive would practically give away their latest P870 cores (close to A78 performance levels) to get them shipping by the millions in the Pi because of the free advertising, free exposure, and massive boost it would give to the development of the RISC-V ecosystem.
Once such a switch was made, Pi could even consider using free and open source cores for their low-end devices where margins are slim (an area where RISC-V has already been gaining ground rapidly).
As a normal consumer you can't.
But the predecessor P670 should be available in the SG2380 board in 2024Q3 [0].
It will have 16 P670 and 8 X280 cores, and will cost $120 to $200 (without included RAM).
These cores have just been announced, and are available for licensing. You can contact SiFive if you're interested.
If what you want is hardware somebody's already made, that's going to take 2-3 years as per tradition.
>These are the newest OoO cores with full 1.0 vector extensions, right?
Yes, but so are multiple generations of predecessors. It seems that hardware based on P670[0] and X280[1] (both match that description) will be available for purchase in less than 10 months from now[2].
A tougher question as so many factors that I hope Asianometry on YouTube cover one day.
Whilst the RISC-V instruction set is free from licence/patent fees, the design of those chips will be made by a company that needs to recover its costs, so there will be a cost. Compare that to ARM, who already offer some core designs, like the Cortex-M0, for free. I know the Raspberry Pi Pico uses a Cortex-M0+.
Though, many do seem to blur the lines between the instruction set and the core design you get in the chip you buy.
Over time, perhaps, but that will be when open-source RISC-V chips compete with the designs other companies make and sell. As a mindset that may cause issues for the evolution of RISC-V, as people could get burned when expectations exceed reality and find that "you get what you pay for" holds more strongly than envisioned. Yes, eventually open-source CPU cores will get there, but production costs and scale will still be a factor, among the many aspects that all add up to the final cost. That is true even at the level of the software stack, which is still behind ARM in maturity and robustness.
ARM may well kill off x86 as a mainstream architecture before RISC-V fully bites.
Look at how long ARM took to get where it is, having started with microcontrollers used in many things. That is where RISC-V is starting to get traction, but scaling beyond that, whilst it could happen faster than it did in ARM's history, is not as clear-cut as many foresee.
For "coming down the pipeline" they're essentially free.
Today, the C910 is an Apache-2.0-licensed, hardware-proven, out-of-order core, on GitHub here: https://github.com/T-head-Semi/openc910 - a little slower than an RPi 3's core.
I think the point is, very few people are looking to buy some RTL, they want the silicon. So the question is what's the price of the silicon? My understanding is anything comparable to the SoC in the RPi boards is still quite a lot pricier.
Good question. I looked it up and apparently the royalties are around 2% of the price of the chip, so it probably won't make a huge difference. They do also charge an up front "membership fee" which seems to vary from $200k to $10m depending on the chip.
RISC-V is very very popular already, but most RISC-V cores are not user accessible. They're controllers in hard drives, embedded management cores in SoCs, etc. Definitely nice to not have to pay for licensing that, and verification doesn't matter so much since you are more likely to be able to work around bugs in software.
I suspect it won't make a huge difference to visible CPUs. Probably the biggest impact will be flexibility.
A sibling comment makes the more relevant point about powerline, but it’s worth considering that whatever the power company has to pay, you the consumer will be paying for in the end.
You could make the case (a good one I think) that a homeowner providing connectivity should get a discount to their bill, or that the power company could directly charge for a separate communication channel.
That’s kinda saying the same thing in different ways and depending on your point of view they might not feel fair perhaps. But the consumer is paying either way.
Edit: “separate communication channel” could include a human meter reader too!
I like to think some awards are less bad than others.
See Autocar’s blog[1] about how they voted for the Car Of The Year award which they do every year. It seems to be reasonably transparent.
Be very careful with 'X of the Year' verbiage, especially at the end of the year. There are many publications that use this verbiage for awards, and stumbling upon them via Google or other means almost always results in biased and paid-for opinions.
It's like the apocryphal 13th floor in hotels: people should be born in either 1968 or 1970. Otherwise I might make a fool of myself by mocking their username.
In fairness, if I was born in 1969 I would be deeply torn between using it in every single username I had for the sake of comedy, or avoiding it precisely because people tend to make assumptions and actually being born then is never going to be anyone's first thought.
I’d go further for the specific example of the book. For me the price of reading a good book isn’t a price at all. It’s a gain. I’ve paid the first up front price to be able to read the book and gain enjoyment from it.
I get that there are different reasons for reading a book, but even then, it also seems overly simplistic and absolutist to me.
A single game of Factorio can take a long time. Mine usually take around 100 hours.
The craving to (tweak|move|refactor|grow) the base, for certain personality types that are richly represented on HN, can mean you spend hundreds of hours more on it too.
Such a summary can be very helpful to disambiguate the subject matter and save me (and I’m guessing many other folks too) the time of reading every article to find out whether it’s interesting/relevant to me.