I just tried out your app for the first time, and it's my first time trying to learn Spanish. I feel exactly like the user you describe, but it's because I have to click "Don't remember" for 70-80% of the words.
I've always had difficulty remembering vocabulary. I remember cramming German in school 30 years back. We had 20 words to learn per week, and I could sit a whole night repeating and repeating them because they just wouldn't stick. And then in the morning they were all gone anyway. So I gather I am a bad language learner.
In your algorithm, do you assume everyone's recall is the same, or do you optimize for a recall rate which makes everyone fail a certain percentage of the words? If so, knowing that I am supposed to not remember 70% would be a good reminder in the app to not feel bad.
How about in-app purchases and subscriptions? The code is already there. Is it abusive?
Is it abusive because it is tied to hardware?
No, I see it as the opposite. I see it as Volkswagen simplifying production by limiting variability and giving you the option to get a less capable product at a cheaper price.
A 6-core and an 8-core processor are probably the same die, produced at the same cost. Maybe 2 cores were turned off because they were faulty, or maybe they were turned off because some people don't have the need or money for 8 cores. Does it matter? Now they can still buy a computer. Is that a bad thing?
> How about in-app purchases and subscriptions? The code is already there. Is it abusive?
Sometimes yes and sometimes no. Pure software is a bit different than hardware as copies are effectively zero cost. Same would go for e-books, music, etc. Not that they get a full free pass, these media can also engage in abusive practices.
> Is it abusive because it is tied to hardware?
Yes. Another example of the absurdity: if you want to buy half an apple and the store charges you enough to cover the cost of a full apple, then pulls out a full apple and destroys half of it before handing it to you. Does that seem ok? Putting in extra effort to make something worse is bad.
> A 6-core and an 8-core processor are probably the same die, produced at the same cost. Maybe 2 cores were turned off because they were faulty, or maybe they were turned off because some people don't have the need or money for 8 cores.
Big difference between these two cases. If the two extra cores were faulty, then charging a lower price makes sense. Like paying less for used tires that have some wear on them. But taking a perfectly good chip and purposefully disabling two cores is like taking a belt sander to a new tire and then charging less.
I did some benchmarking of BlobFuse2 vs NFS vs azcopy on Azure for a CT image reconstruction workload a year or so back. As I remember it, it was not clear whether Fuse (copy on demand) or azcopy (copy all necessary data before starting the workload) was the winner. The use case and the specific application's access pattern really mattered A LOT:
* Reading full files favored azcopy (even if reading parts just when they were needed).
* If the application closed and opened each file multiple times it favored azcopy.
* If only a small part of many files was read, it favored Fuse.
Also, the 3rd party library we were calling to do the reconstruction had a limit on the number of threads reading in parallel when preloading projection image data (it was optimized for what was reasonable on local storage), so that favored azcopy.
I don't remember NFS ever coming out ahead.
So, benchmark, benchmark, benchmark and see what possibilities you have in adapting the preloading behavior before choosing.
With Fuse you can make it transparent for the application: it just exposes the mount with all the files, and when your application reads them, the data is pulled from object storage on demand. azcopy, by contrast, is a utility that copies the data to your local disk up front.
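If you want to benchmark this for your own workload, here is a minimal sketch of the three access patterns discussed above. It times dummy local data; the idea is to point the same functions at files on your BlobFuse2 mount versus an azcopy-staged local copy (those paths are up to you, nothing here is tied to Azure):

```python
# Micro-benchmark sketch for the three access patterns:
# full sequential read, repeated open/close, and small partial reads.
# Uses a temporary local file as a stand-in; replace `path` with a file
# on your Fuse mount or local copy to compare real numbers.
import os
import tempfile
import time


def time_it(fn):
    """Return (seconds, result) for a single call."""
    start = time.perf_counter()
    result = fn()
    return time.perf_counter() - start, result


def read_full(path, chunk=1 << 20):
    """Pattern 1: stream the whole file sequentially."""
    total = 0
    with open(path, "rb") as f:
        while data := f.read(chunk):
            total += len(data)
    return total


def reopen_many(path, times=50):
    """Pattern 2: open and close the same file repeatedly."""
    total = 0
    for _ in range(times):
        with open(path, "rb") as f:
            total += len(f.read(4096))
    return total


def read_slice(path, offset, length):
    """Pattern 3: read only a small part of the file."""
    with open(path, "rb") as f:
        f.seek(offset)
        return len(f.read(length))


if __name__ == "__main__":
    # 8 MiB of dummy data as a stand-in for one projection image.
    with tempfile.NamedTemporaryFile(delete=False) as f:
        f.write(os.urandom(8 << 20))
        path = f.name

    for name, fn in [
        ("full read", lambda: read_full(path)),
        ("reopen x50", lambda: reopen_many(path)),
        ("small slice", lambda: read_slice(path, 4 << 20, 64 << 10)),
    ]:
        elapsed, nbytes = time_it(fn)
        print(f"{name:12s} {nbytes:>10d} B in {elapsed * 1000:.1f} ms")

    os.unlink(path)
```

On a cold Fuse mount, pattern 2 is where the per-open latency bites, which matches the list above.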
This!
When you are doing something simple (as in there are known best practices), you do want people to have the same formal education. They'll talk the same language and everything will be smooth. Nobody wants a self-taught surgeon or pilot on the team. There is a best practice for washing your hands, and you want your surgeon to know it.
But when you are in the complex domain (as in there are no known good practices), what you want is many different viewpoints on the team. So getting people with different backgrounds (different academic background, tinkerers, different cultures, different work experience etc) together is the way to go.
Same with the discussion about remote work. People do not seem to get that there is no single best way; it depends on the type of work. If it's simple or complicated, let people stay at home to concentrate. If it's complex, give them the opportunity, and the knowledge that it's good, to meet up by a whiteboard. And what's best may of course differ from day to day.
I worked in the telecom business 15 years ago on 4G (LTE) and there it was considered a big savior compared to how it was done before.
Basically, before that, error handling code was a significant part of the code base (don't remember exactly, but let's say 50%), and this error handling code had the worst quality because it is very hard to inject faults everywhere. So the error handling code itself had a lot of bugs in it, which made the system fail to recover properly.
But DbC was a godsend in the way that now you didn’t try to handle errors inside the program any longer. Now the only thing that mattered was that a service should be able to handle clients and other services failing. And failure in a few well defined interfaces is much easier to handle. So the quality became much better.
What about the crashes then? Well, by actually crashing and getting really good failure point detection, it was much easier to find bugs and remove them, so the failures grew fewer and fewer. Also, at that time I believe there were 70 ms between voice packets, so as long as the service could recover within that timeframe, no cell phone users would suffer.
Plus of course much less error prone error handling code to write.
And as someone else said, DbC should never be turned off in production. Of course, in embedded systems, speed is not so important as long as it is fast enough to not miss any deadlines. And since you need to code it so it doesn't miss deadlines during integration and verification with DbC enabled, there is no reason to turn the checks off in production.
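For anyone unfamiliar with the style, a tiny sketch of what this looks like (in Python, with made-up names; the telecom code base was obviously nothing like this). The point is that the contract is stated once at a well-defined interface and violations crash loudly there, instead of defensive error handling spread through the code:

```python
# Design-by-Contract sketch: an interface states what it requires
# (precondition) and what it guarantees (postcondition). A violation
# crashes at a well-defined failure point rather than limping on.
# All names are illustrative. Note: keep checks enabled in production,
# i.e. don't run Python with -O, which strips asserts.

def allocate_channels(free_channels, requested):
    """Allocate `requested` channels from the free pool.

    Precondition:  0 < requested <= len(free_channels)
    Postcondition: exactly `requested` channels returned, and none of
                   them remain in the leftover pool.
    """
    # Precondition: the caller broke the contract -> fail here, loudly.
    assert 0 < requested <= len(free_channels), (
        f"contract violated: requested {requested} "
        f"of {len(free_channels)} free channels"
    )

    allocated = free_channels[:requested]
    remaining = free_channels[requested:]

    # Postcondition: we broke our own promise -> also fail loudly.
    assert len(allocated) == requested
    assert not set(allocated) & set(remaining)
    return allocated, remaining


if __name__ == "__main__":
    got, left = allocate_channels([1, 2, 3, 4, 5], 2)
    print(got, left)  # [1, 2] [3, 4, 5]
```

A supervising service then only has to handle "that component crashed, restart it," which is the few-well-defined-interfaces point above.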
Bertrand Meyer in his usual painstakingly detailed manner explains how to integrate DbC with Exception/Error handling in his paper Applying Design by Contract linked to here - https://news.ycombinator.com/item?id=42133876
I loved the books as a child and as a father I love reading them for my children. And they love them too.
Some things have not aged well in them though, thinking specifically of the gender roles; they don't match the Sweden of today. Basically all the men are working and having a good time while the women take care of the children and their husbands. But I sometimes make a lesson of it: I tell them that it used to be more like that, and ask them which chores my wife and I each do and who takes care of them. Then we can laugh about it a bit together instead of me grinding my teeth. "Mom's work is never done".
I think complexity frameworks (like Cynefin) describe it pretty well. When the complexity is low, there are best practices (using a specific wire gauge in a house's electrical installation, or surgeons cleaning according to a specific process before surgery), but as the complexity goes up, best practices are replaced with different good practices, and good practices with the exploration of different ideas. Certificates are very good when there are best practices, but the value diminishes as the complexity increases.
So, how complex is software production? I'd say there are seldom best practices but often good practices (for example DDD, clean code and the testing pyramid) on the technical side. And then a lot of exploration on the business side (iterative development).
So is a certificate of value? Maybe if you do Wordpress templates but not when you push the boundary of LLMs. And there’s a gray zone in between.
The job of an embedded engineer can vary wildly, and it gets hard to define what embedded software even is. I've worked on microcontrollers in elevators and battery management systems for battery packs on the low end, and on application processors, many-core processors, DSPs and soft cores in FPGAs in telecom on the high end. Sometimes you don't even notice the hardware. It all depends on the job and the size of the company (do they have a platform team abstracting all the hardware away?).
As others say, many companies in the embedded space have had a very hard time realizing they are software companies and their practices are very old school and frustrating.
Talking salaries (Sweden), yeah it’s a bit higher in the cloud but not wildly so.
My recommendation is to start working at a company that isn't tiny, and on an existing product. Then it's more about adding logic than knowing everything about RTOSes and bootloaders. You will pick those things up as you go.
Sweden here also. Quite common with 25 days and paid overtime or 30 days with unpaid overtime.
And then you have parental days which are 480 in total per child which can be used both before they start preschool and for longer vacations when they are older. In Sweden it’s also quite common that both parents split it 50-50.
So 4-5 weeks of time off in the summer is not uncommon at all for parents and totally accepted by companies.
Last time I was looking (pre-Covid) it was kinda hard to find readily available hardware that talks on 5.9GHz, where V2X (802.11p) operates. That is probably easier now that "Wi-Fi 6" uses that same chunk of spectrum as 802.11p. I'm guessing there is now (or soon will be) a Wi-Fi 6 USB (or perhaps PCI or M.2/NVMe) adaptor with enough flex in its firmware that it can be convinced to talk 802.11p. Then you'd be talking a RasPi and a WiFi adaptor to start hacking on this.
There's an SDR openWiFi project that lets you build 5.8GHz WiFi using one of a list of SDR boards - it'd probably be fairly easy to tweak that into doing 802.11p, but now you're talking many hundreds of dollars worth of hardware - a bit outside FlipperZero/RasPi+dongle pricing.