Inside the stacked RAM modules used in the Apple III (righto.com)
74 points by picture on Oct 31, 2020 | 27 comments


> Unfortunately, the Apple III was a business failure due to reliability issues and competition from the IBM PC introduced a year later.

Curious about these reliability issues, I read more about the Apple III on Wikipedia, and it's quite interesting:

> There was way too short a time frame in manufacturing and development. When the decision was made to announce, there were only three Apple IIIs in existence, and they were all wire-wrapped boards.

> The case of the Apple III had long since been set in concrete, so they had a certain size logic board to fit the circuits on ... They went to three different outside houses and nobody could get a layout that would fit on the board.

> They used the smallest line circuit boards that could be used. They ran about 1,000 of these boards as preproduction units to give to the dealers as demonstration units. They really didn't work ... Apple swapped out the boards. The problem was, at this point there were other problems, things like chips that didn't fit. There were a million problems that you would normally take care of when you do your preproduction and pilot run. Basically, customers were shipped the pilot run.

https://en.wikipedia.org/wiki/Apple_III


The Apple III is actually a pretty interesting machine as far as older 8-bit systems go, and it includes some unique hardware features to work around the limitations of the 6502 CPU:

- The machine supports a maximum of 512 kB of RAM.

- The 6502 zero-page can be relocated on the fly to anywhere in the 512 kB address space.

- Certain 6502 addressing modes can access the entire 512 kB address space without bank switching, where the CPU would otherwise only be able to access 64 kB (a rough model after this list sketches this and the relocatable zero page).

- The area of RAM reserved for the operating system and device drivers can be protected from regular apps.

- It also includes built-in RGB video at 280x192x16 colors, and a 6-bit DAC for sound.
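
To make those addressing tricks concrete, here's a rough C model of how a relocatable zero page and an extended (bank byte + 16-bit pointer) access could resolve into a 512 kB space. To be clear, the register names, bank size, and address math below are my own illustrative assumptions, not the Apple III's actual memory map:

```c
#include <stdint.h>
#include <stdio.h>

/* Toy model of 6502-style addressing into 512 kB of RAM. All names,
 * register layouts, and the bank math are illustrative assumptions,
 * NOT the Apple III's actual memory map. */

#define PHYS_SIZE (512u * 1024u)        /* 512 kB physical RAM           */
#define BANK_SIZE (32u * 1024u)         /* assume 32 kB switchable banks */

static uint8_t ram[PHYS_SIZE];

static uint8_t zp_bank;                 /* hypothetical zero-page bank register */
static uint8_t zp_page;                 /* hypothetical zero-page page register */

/* The CPU's page-zero accesses land wherever the OS last pointed them. */
static uint32_t zp_phys(uint8_t offset) {
    return ((uint32_t)zp_bank * BANK_SIZE + (uint32_t)zp_page * 256u + offset)
           % PHYS_SIZE;
}

/* Extended indirect access: a 16-bit pointer plus a bank byte reaches
 * anywhere in the 512 kB space without a global bank switch. */
static uint8_t load_extended(uint8_t bank, uint16_t ptr, uint8_t y) {
    return ram[((uint32_t)bank * BANK_SIZE + ptr + y) % PHYS_SIZE];
}

int main(void) {
    zp_bank = 3; zp_page = 0x18;        /* relocate zero page on the fly */
    ram[zp_phys(0x00)] = 0x42;
    printf("zp $00 -> phys $%05X = $%02X\n",
           (unsigned)zp_phys(0x00), ram[zp_phys(0x00)]);
    printf("ext (bank 15, $1234),Y=2 -> $%02X\n",
           load_extended(15, 0x1234, 2));
    return 0;
}
```

The point is that a zero-page access goes through the relocation registers, while an extended access carries its own bank byte, so no global bank switch is needed.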

But even without the initial reliability problems the machine probably didn't stand a chance in the market.

It fell into the trap of being simultaneously too much computer (too expensive for what it did) and not enough computer (not powerful enough for its rather complicated operating system). And its 8-bit architecture would quickly be eclipsed by 8086- and 68000-based machines, so it was both state of the art and outdated at the same time.

Apple was right in that they needed a machine that was significantly more powerful than the Apple II, but their timing was off and the architecture was wrong. The IBM PC did a much better job of hitting the sweet spot of price vs. power vs. complexity.


Another problem with the Apple III was that Steve Jobs didn't like cooling fans or vents so the Apple III (like the Apple II and Mac) didn't have a fan. Reportedly, the Apple III got hot enough to warp or melt floppy disks. Another problem with heating/cooling cycles was that integrated circuits would migrate out of their sockets. Apple's recommended "fix" was for customers to lift the computer a few inches and drop it, hopefully reseating the ICs.

https://www.techjunkie.com/apple-iii-drop/


It's curious to see how far back form over function goes at Apple. Truly part of the company spirit.


And then 20 years later Steve Jobs repeated the same mistake with the Power Mac G4 Cube (no fan).

Though ultimately the tech got good enough. Another 20 years past the G4 Cube, we now have the 5nm A14 in a smartphone or iPad Air, powerful enough for 80-90% of consumer use cases, all without a fan. We also have a roadmap to 3nm and 2nm around 2025.

Unfortunately Steve did not live long enough to see it.


I seem to recall hearing that Steve Jobs loved the beautiful carved wood prototype Apple /// case design* that he had commissioned and wasn't willing to change it to fit the logic board (or to move the floppy drive outside the case, etc.), instead directing the designers to "saw it in half", i.e. split it into two smaller boards, connected by a ribbon cable. Unfortunately the ribbon cable put tension on the boards, causing them to flex, which could cause the socketed RAM chips to unseat, as cited in the Wikipedia article, hence the "pick up your Apple /// and drop it" approach to re-seating the chips.

Jobs remained adamant that computers with fans were junk, so the original Macintosh also lacked a cooling fan, in spite of having an internal power supply, CRT, and supporting analog board. This presumably contributed to the large number of analog board failures in classic compact Mac models.

*which might be part of Stanford's Apple museum collection or at the Computer History Museum?


Compatibility with the Apple II was horrible too. How they didn't get that right is just beyond belief.


At the time compatibility was not considered as important as you might think today.

# Radio Shack

Model I

Model II (not compatible)

Model III (weakly compatible with Model I)

Color Computer (not compatible)

# Commodore

PET

VIC (not compatible)

64 (not compatible)

Amiga (not compatible)

# Atari

400

800 (mostly compatible)

ST (??)

# IBM

PC

PC jr (weakly compatible with the PC)

PS/2 (mostly compatible)

Doing this from memory. Sure I got some of this wrong, but it illustrates the point that compatibility had not yet been shown to be so important.


Also, Texas Instruments released its Professional Computer, a TI PC going up against the IBM PC. It ran MS-DOS and used an 8088 CPU, but it was not a clone and wasn't totally compatible. (Different graphics, different expansion bus, slightly different MS-DOS, slightly different MS-BASIC, etc.)

Byte Magazine's review (https://archive.org/details/byte-magazine-1983-12/page/n287/...) described it as "daring to be somewhat different".


You'll notice it and the DEC Rainbow didn't last long. Compatibility was important. The PC AT showed that compatibility plus more speed was what mattered; after all, that was the whole point of the turbo switch.


It's interesting to ask why people didn't understand the importance of compatibility at first.

One reason could be the tendency to disregard the cost of software, which is a mistake people still make today.

But there was also another factor back then: computer hardware was very expensive. It is understandable to think that, if you're going to spend a gigantic pile of money on computer equipment, then it needs to be the absolute best computer equipment you can get. I think wanting to squeeze the most value out of your investment could explain why buyers and manufacturers were willing to sacrifice compatibility.


Well, it was really important for Apple because the II had all the software, and what came with the III was junk. They did have an emulator, but it didn't do 80 columns at first.

The Atari 400/800 had a sort-of-compatible successor in the 1200XL, which wasn't compatible enough, so it didn't sell well. It was succeeded by the 600XL and 800XL, which were actually compatible. Those were followed by the 65XE/130XE/XEGS. The ST was a parallel, different line.

The IBM PC jr was just IBM being idiots with their product differentiation. Notice that it didn't do well because it sucked at compatibility.

The Amiga was bought in, not developed as a successor.

Compatibility actually was important when the predecessor system was popular; some companies just didn't do it well.


Woz didn't like that there were extra chips in the design to disable the more advanced features when using Apple II compatibility mode.


An interesting related fact is that the 48k Sinclair Spectrum also used eight 32 kBit DRAMs (in addition to eight 16 kBit chips).

However, these were not the special modules used in the Apple III, but 64 kBit DRAMs with a fault in one half of the matrix, such as the TI TMS4532-20 or OKI M3732. The defective half of the chip was simply disabled.


If I've understood correctly, one half was disabled on the Spectrum (i.e., the Spectrum would never try to access the defective part, and Sinclair had to identify which half wasn't working), so they really were defective chips, in contrast to chips where the manufacturer deals with the defect.


Some people got lucky (not defective) chips, or replaced them with good 64 kBit DRAMs, and added a switch to change between the lower and upper bank. (My father did this.)
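
For what it's worth, the half-selection trick is easy to model in C. Which address bit splits the halves and which half is good vary per chip, so both are assumptions here:

```c
#include <stdint.h>
#include <stdio.h>

/* Toy model of using a half-good 64 kBit DRAM as a 32 kBit part.
 * Which address bit splits the two halves, and which half is good,
 * varies per chip; both are assumptions here. Each 1-bit cell is
 * modeled as a byte for simplicity. */

static uint8_t cells[65536];            /* 64K x 1-bit chip              */
static unsigned good_half = 1;          /* 0 = lower, 1 = upper half;    */
                                        /* the added switch flips this   */

static uint8_t read_bit(uint16_t addr) {
    /* Force the top address bit so only the working half is accessed. */
    uint32_t a = (addr & 0x7FFFu) | ((uint32_t)good_half << 15);
    return cells[a];
}

int main(void) {
    cells[0x8123] = 1;                  /* a bit stored in the upper half */
    printf("%u\n", read_bit(0x0123));   /* 32K address, reads good half  */
    return 0;
}
```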


Author here for your vintage DRAM questions :-)


Don't have any vintage DRAM questions, alas, but I thank you for one of my favourite blogs!


How did we get from two-chip modules to multi-chip modules? Was the upper address decoding logic its own chip, part of one of the chips, or replicated in each chip?


It depends on the module, but for example a 1 gigabyte DRAM module can be built from sixteen 64 meg x 8 chips. Each chip is 8 bits wide, so 8 chips give you 64 bits. This is duplicated for two banks. The row and column addressing is repeated in each chip, while the circuitry to select the right module and bank is external. I'm sure someone else can provide more details on modern modules.

One reference is http://www.softnology.biz/pdf/lecture05-dram.pdf
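
As a very rough sketch of that split (assuming a simple, non-interleaved bit layout with 8K rows and 8K columns per chip):

```c
#include <stdint.h>
#include <stdio.h>

/* Toy address split for the module above: sixteen 64M x 8 chips, eight
 * chips per rank, two ranks, 64-bit bus. The field order and the
 * 8K x 8K row/column split are assumptions; real controllers
 * interleave these bits differently. */

typedef struct {
    unsigned rank;  /* which group of 8 chips (selected externally)     */
    unsigned row;   /* row address, repeated in every chip              */
    unsigned col;   /* column address, repeated in every chip           */
    unsigned lane;  /* which chip supplies this byte of the 64-bit word */
} dram_addr;

static dram_addr split(uint32_t byte_addr) {
    dram_addr a;
    a.lane = byte_addr & 0x7u;           /* 8 byte lanes = 8 chips       */
    uint32_t word = byte_addr >> 3;      /* 64-bit word index            */
    a.col  = word & 0x1FFFu;             /* assume 8K columns            */
    a.row  = (word >> 13) & 0x1FFFu;     /* assume 8K rows               */
    a.rank = (word >> 26) & 0x1u;        /* top bit picks the rank       */
    return a;
}

int main(void) {
    dram_addr a = split(0x2345678u);
    printf("rank=%u row=%u col=%u lane=%u\n", a.rank, a.row, a.col, a.lane);
    return 0;
}
```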


Yes, what Ken said. There was another way to do it if the memory chips were already wide enough. Some static RAMs had multiple enables of different polarities. For example, the 6264 had an E1* (active low) and an E2 (active high)... wire an address bit to E1* on one chip and E2 on the other and you'll get the decoding done without adding a third chip.

http://users.ece.utexas.edu/~valvano/Datasheets/MCM6264.pdf
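
Here's a toy C model of that decode (the exact pin wiring is an illustrative assumption, not a schematic):

```c
#include <stdint.h>
#include <stdio.h>

/* Toy model of the 6264 decode trick above: E1* is active low, E2 is
 * active high. Wire A13 to E1* on chip 0 (other enable strapped active)
 * and to E2 on chip 1, and a 16K space splits across two 8K x 8 SRAMs
 * with no extra decoder chip. */

static uint8_t chip0[8192], chip1[8192];

static uint8_t read16k(uint16_t addr) {
    unsigned a13 = (addr >> 13) & 1u;   /* the shared select bit         */
    /* chip 0 enabled when E1* (= A13) is low;
     * chip 1 enabled when E2  (= A13) is high */
    return a13 ? chip1[addr & 0x1FFFu] : chip0[addr & 0x1FFFu];
}

int main(void) {
    chip0[0x0123] = 0xAA;
    chip1[0x0123] = 0x55;
    printf("$%02X $%02X\n", read16k(0x0123), read16k(0x2123)); /* $AA $55 */
    return 0;
}
```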


Another great post! I had a couple of questions:

>"The second layer of polysilicon (poly 2) is arranged in diagonal regions to implement the selection transistors."

Since the diagonal polysilicon covers two different capacitors, how is each one selected independently? Does each one have a separate metal wire connected to it?

Is the sense line really a bunch of different sense lines? Does each bit have its own sense line? Is the sense line part of the metal wiring, then?


The bits are in a matrix, with sense lines in silicon vertically and row select lines in metal horizontally. When a row select line is activated, 128 transistors are turned on and each sense line is connected to the single capacitor in that row. Thus, 128 bits are read in parallel.

Thus, each horizontal row select line is connected to 128 different transistor gates, in other words connected to 64 of the diagonal poly 2 regions.

The confusing part is that the rows zig-zag a bit to improve the density. In other words, the physical rows in the photo don't quite match the electrical/logical rows. Look at the photo and imagine pulling it apart like a zipper. Then shift the middle part up half a step so the capacitors line up. Then the logical rows are all in a straight line. Hopefully that explanation isn't entirely confusing. In brief, the two capacitors with the same poly 2 diagonal are in the same electrical row, so they are selected together (along with 126 other capacitors).
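
If it helps, here's a toy model of the logical organization, ignoring the zig-zag:

```c
#include <stdint.h>
#include <stdio.h>

/* Toy model of one read from a 16 kBit array organized as 128 logical
 * rows x 128 columns, ignoring the zig-zag physical layout described
 * above. One cell per capacitor; sizes match this chip. */

static uint8_t cell[128][128];

static uint8_t read_bit(unsigned row, unsigned col) {
    uint8_t sense[128];
    /* Row select: 128 transistors turn on and all 128 capacitors in the
     * row dump onto the sense lines in parallel. */
    for (unsigned c = 0; c < 128; c++)
        sense[c] = cell[row][c];
    /* (A real DRAM would also write the row back, since reads are
     * destructive.) The column address then picks one bit. */
    return sense[col];
}

int main(void) {
    cell[5][77] = 1;
    printf("bit(5,77)=%u bit(5,78)=%u\n", read_bit(5, 77), read_bit(5, 78));
    return 0;
}
```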

I'll probably write more about this, so let me know if there is a clearer way to explain this :-)


Thanks, this explanation was really helpful. I think your zipper analogy would actually be a nice addition to your post. It also helped me appreciate just how ingenious this design is. Cheers.


Tangential: in the heyday of PocketPC PDAs, you could send away an iPaq for a storage upgrade that used stacked DRAM chips. They came with a special driver for WinCE that permitted such addressing.


Dumb question from a clueless noob... since the stacking required a select line, why not skip the process and just add more RAM slots? Seems like that'd be cheaper than custom hardware.


Using these modules gives you twice the density that you'd get otherwise. If you have unlimited space, you can add more RAM slots. These make sense if you have a fixed area for memory chips.



