> “a 64MB POWER Indigo 2 with XZ Graphics and a 2GB SCSI drive would run you around £58,000”
And then you had to buy the software. A license for a 3D modeling package like Softimage or Alias cost at least $10-15k, and you probably also needed a separate raytracing package for high-quality output.
Someone is selling a copy of Alias for SGI for $2500 on eBay today: https://www.ebay.com/itm/335622694059
But if, in 1994, you did have an SGI and Alias and enough artistic skill and technical competence (and patience…) to produce liquid logos and dancing soda bottles and face morphs, you would certainly recoup that $80k investment quickly. It was a very rare skill that needed very rare hardware. You could get highly paid freelance work by simply calling up ad agencies.
That scarcity is a bit hard to imagine today, when anyone can download Blender onto their standard desktop computer and learn it by watching online videos. It’s cool that 3D art has been thoroughly democratized.
porcoda 157 days ago
One thing I miss from that era of machines was just the way they looked: at the time, most machines were grey or black boxes, but the SGIs had some degree of personality to them. The O2s were fun little curvy boxes. Among my favorites were the large rack systems - one of my jobs had us working with the Origin 2000 and PowerChallenge machines. Compared to some of the generic clusters of rack-mounted Alpha systems that we had around the same time, the SGIs just had a cool look to them.
cyphax 157 days ago
They had their own startup sounds, too (at least the machines I have). And it wasn't just the machines; the peripherals (mouse, keyboard, monitor) had this nice-looking texture on them as well. They were definitely cool back in the day!
hulitu 157 days ago
Design has died on PC cases.
junto 157 days ago
This brings back memories. Around the same time, in the mid-'90s, I remember visiting a post-production company that had multiple SGI Onyx supercomputers in its stylish glass-walled server room, dedicated to processing special effects for film and advertising.
They were so expensive they only made sense to run 24/7/365 in order to get their money’s worth. They had a service engineer on call permanently who wasn’t allowed to be further away than 25 miles from the servers at any time.
http://www.sgidepot.co.uk/onyxgs.html
Lovely system; the R8000 is indeed a rare bird. Given the hype, it was not, by any measure, a particularly remarkable CPU versus its contemporaries at launch (the Alpha 21064A, IBM POWER2, and HP PA-RISC all traded heavy blows in that era, while SPARC seemed perpetually behind), but it would be a nice one to score. It was an interesting time, as Alpha was really pushing clock speed while POWER2, PA-RISC, and the R8000 posted impressive numbers at much lower clocks.
mattbillenstein 157 days ago
I interviewed at Intel with the old Alpha group after all the acquisitions back in the day - around 2000 - and I remember talking with the guys there about how they couldn't get their EDA tools ported to the Alpha architecture; so they were designing this amazing CPU, but had to do it on "gasping and wheezing" Sparc systems. Good times.
We had the same issues where I ended up working; it was a year or two before 32-bit intel systems started to show up and they absolutely screamed compared to Sparc, but couldn't handle really big jobs. When the amd64 stuff started to come around, that's when you could just see the writing on the wall - Intel / AMD were gonna absolutely kill Sun...
kev009 157 days ago
Once upon a time you would buy an entire system from the EDA vendor, like Mentor Graphics, Zuken, or whoever; early on those could be quite bespoke, and eventually they used COTS hardware like the HP 9000/300 and Apollo DN10000. Then came generic RISC systems purchased direct from the OEM, and eventually the amd64 dominance you describe.
I have some IBM POWER2SC (SuperChip) systems with intel asset tags that presumably were used for something special (very pricey machine), maybe MCAD :)
PA-RISC was really pushing the performance horizon in the late '90s (like 2-3 years ahead of intel) and had great EDA tool support, which was an odd situation because the ISA was effectively frozen in 1997 (thanks, Itanium) and just got process and implementation updates that scaled pretty well.
mattbillenstein 157 days ago
Nice, that was before my time - we were basically a Sun shop and actually did a lot of chips for them (and SGI) too. I never saw any of the HP, IBM, or other systems.
lproven 155 days ago
> it was a year or two before 32-bit intel systems started to show up
Do you mean 64-bit Intel systems?
mattbillenstein 154 days ago
Actually no, just 32-bit intel with Linux - we were a Sun shop, and Linux with EDA tools hadn't matured enough, although Linux itself had been around for a while.
But yeah, it was only a couple more years until amd64 came around and really made a difference.
lproven 154 days ago
Wow. OK then.
Around 2000 was when I first switched to Linux as my primary desktop OS -- I disliked Windows XP so badly that I put up with the slightly ropy experience of Caldera OpenLinux.
So it was very definitely A Thing by the turn of the century, but maybe not all the tools that everyone needed were available yet. (And the only usable desktop distros still cost money back then.)
mattbillenstein 157 days ago
And now, flipping through the pictures of the internals - I ended up working for LSI Logic in the early 2000s, so it's fun to see all these LSI chips, even though they were before my time. Certainly some of my older colleagues worked on these.
You'll notice the numbers on these, L1Axxxx - that's an internal LSI number. After they ran out of numbers for the xxxx part, they bumped it to L2Axxxx, and I worked on a few chips with those designations.
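A tiny C sketch of a checker for that scheme, assuming the shape is exactly 'L', a series digit, 'A', then four digits - the real numbering rules were LSI-internal, and the example part number below is made up:

    #include <ctype.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Parse an internal number of the described L<series>Axxxx shape.
     * Illustrative only; the real rules were LSI-internal. */
    static int parse_lsi(const char *s, int *series, int *part)
    {
        if (strlen(s) != 7 || s[0] != 'L' || s[2] != 'A')
            return -1;
        if (!isdigit((unsigned char)s[1]))
            return -1;
        for (int i = 3; i < 7; i++)
            if (!isdigit((unsigned char)s[i]))
                return -1;
        *series = s[1] - '0';
        *part = atoi(s + 3);
        return 0;
    }

    int main(void)
    {
        int series, part;
        if (parse_lsi("L1A4567", &series, &part) == 0) /* made-up number */
            printf("series %d, part %04d\n", series, part);
        return 0;
    }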
kev009 157 days ago
These are probably all semi-custom standard cell, which LSI was known for, no?
mattbillenstein 157 days ago
Yeah, LSI had their own standard cell libraries down to around 180nm; then they didn't want to invest in fabs and started working with TSMC - the beginning of the end for them, really. I signed off a couple of chips on both sides of that transition.
I worked in a group that did physical design, timing closure, test insertion, etc. I did a lot of layout automation in Avanti's tools; sometime after I left, it all went to Cadence I believe.
msla 157 days ago
> The R8000 is not a CPU in the traditional sense. It is a processor, but that processor is comprised of many individual chips
And retrocomputing geeks (and any sufficiently old geek) got rueful grins on their faces.
This is traditional, in the sense of being old-fashioned. CPUs were built out of discrete components back when that meant individual vacuum tubes, then discrete solid-state components, and finally discrete chips - thousands of individual SSI chips in computers like the Apollo Guidance Computer. Even after the first single-chip CPU was developed, larger computers still used multi-chip CPUs, like the PDP-11 architecture being implemented on four chips in the LSI-11 chipset:
https://gunkies.org/wiki/LSI-11_chip_set
That's before some people were born, I guess, so we have this.
kev009 157 days ago
IBM's POWER and POWER2 were also many chips (8-10, depending on the model) in CMOS, which was an exotic approach, as most RISCs had converged on single-die microprocessors. But it made it possible to beat a lot of vendors on superscalar punch and to defer the complexity of SMP for a while (for both vendor and user). The POWER2SC converged it all onto one die, and eventually IBM became well known for exotic RISC MCMs.
In the mainframes, the TCM is somewhat famous: really beautiful bipolar designs with relatively low gate counts but a ton of high-performing dice (and high heat, hence the TCM) on an exotic MCM.
alberth 157 days ago
SGI dominated the workstation graphics market back in the day.
Depends on exactly which segment of the graphics market; it seems to me that some segments had quite a big Sun presence, and early HDTV had an unlikely straggler-survivor in Symbolics.
Kimitri 157 days ago
I had a Teal Indigo2 for a few years about 15 years ago. I loved it! It had the cool feet that let you prop it up sideways so you could have it in tower mode. The feet had these little scoops embedded in them so the machine could more effectively hoover up all the dust from the floor. Fantastic!
Tsiklon 157 days ago
I’d love an old UNIX machine like this, or one of the later Solaris SPARC desktop towers. Beautiful machines running now-rare software.
How does the documentation for the software development environment for this machine stack up today?
mst 157 days ago
Oh, cool!
I used a purple Indigo 2 as my desktop for a few years.
When there were some issues with the local hot and cold running power for a few weeks, sometimes I'd get home to my study after being out and about and see 'brownout detected' on my console xterm.
That was my cue to add "coax the x86 kit in the rack back to life" to my task list once I'd had a coffee and settled in.
(later it got rehomed to DrHyde's place in London where it served honourably as a CPAN testers machine until finally passing away of old age)
madduci 157 days ago
TIL Windows 3.1 was running on MIPS
p_l 157 days ago
Windows NT 3.x and 4.x ran pretty much on anything that had at least 32 bits, a little-endian byte order, and a vendor willing to support the porting effort.
Though it's especially hilarious that MIPS ARC died pretty fast (outside of the SGI ARC variant, which was big-endian), so on most other platforms Windows dragged around a wrapper over the platform firmware that provided enough ARC interfaces for ntoskrnl.exe to boot.
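To make that "wrapper around platform firmware" idea concrete, here's a loose C sketch of the shape such a shim takes: a block at a well-known location holding a vector of OS-visible services that just forward to native firmware calls. Every structure, field, and function name below is invented for illustration; the real ARC specification defines its own parameter block, signature, and entry points (though the multi(0)... device path style is genuine ARC flavor, as anyone who edited boot.ini will remember).

    #include <stdint.h>
    #include <stdio.h>

    /* Stand-in for whatever the native platform firmware really offers. */
    static int native_fw_open(const char *path)
    {
        printf("native firmware open: %s\n", path);
        return 3; /* pretend handle */
    }

    /* The shim translates the ARC-style call into the native one. */
    static int shim_open(const char *path)
    {
        return native_fw_open(path);
    }

    /* OS-visible services, exposed as a table of function pointers that
     * the loader locates via a block at a well-known address. */
    typedef struct {
        uint32_t signature;            /* magic the loader checks */
        int (*open)(const char *path); /* ...the real ARC has many more */
    } toy_arc_vector;

    static const toy_arc_vector vector = { 0x41524321u, shim_open };

    int main(void)
    {
        /* The loader would find `vector` in memory, verify the magic,
         * then drive the boot entirely through these pointers. */
        if (vector.signature == 0x41524321u)
            vector.open("multi(0)disk(0)rdisk(0)partition(1)\\ntoskrnl.exe");
        return 0;
    }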
It wasn't and didn't.
Note the article mentions a bundled copy of SoftWindows.
This was a full-system x86-32 PC emulator in software.
https://wiki.preterhuman.net/Insignia_SoftWindows
It ran on lots of OSes as well as classic MacOS.
It was the successor of an earlier product called SoftPC:
https://en.wikipedia.org/wiki/SoftPC
The later version was slightly revamped to run Windows better, for instance by having some kind of native-code-accelerated GPU emulation.
It was in turn replaced by SoftWindows95, which could run 32-bit Windows 9x, although by then PC performance was starting to catch up with workstation performance. A fast 80486 with local-bus graphics and disk controllers was quite responsive, a Pentium even more so, and by the second generation of 3.3V Pentium chips (75/90/100MHz, and for the richer, 120/133MHz) with PCI graphics and disk, PCs were getting up to near low-end Unix workstation performance.
A software-emulated version of Windows wasn't so desirable any more: you had to buy most of the apps to run on it, and while it was OK for simple productivity stuff, around that time Windows NT 4 came out, and now a high-end PC with SCSI disks had a pretty good GUI on a stable OS. Some of those workstation apps started to get ported to x86 NT, and you could buy a fast x86 box for a lot less than a UNIX workstation.
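At its core, a full-system software emulator of that sort is an interpreter loop over guest memory. Here's a toy C sketch for an invented three-instruction machine, just to show the shape of the loop; real x86 PC emulation, with its devices, BIOS, and memory map, is vastly more involved, and this has nothing to do with Insignia's actual implementation.

    #include <stdint.h>
    #include <stdio.h>

    /* Toy CPU: one accumulator, three opcodes. This only illustrates the
     * fetch-decode-execute loop that any software emulator is built
     * around; it is nothing like real x86. */
    enum { OP_LOAD = 1, OP_ADD = 2, OP_HALT = 3 };

    int main(void)
    {
        uint8_t mem[16] = { OP_LOAD, 5, OP_ADD, 7, OP_HALT }; /* guest "RAM" */
        uint32_t pc = 0, acc = 0;

        for (;;) {
            uint8_t op = mem[pc++];          /* fetch */
            switch (op) {                    /* decode + execute */
            case OP_LOAD: acc  = mem[pc++]; break;
            case OP_ADD:  acc += mem[pc++]; break;
            case OP_HALT: printf("acc=%u\n", acc); return 0;
            default:      return 1;          /* a real emulator would trap here */
            }
        }
    }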
guenthert 157 days ago
> I bought a sled, so now IRIX is installed on a real 4GB SCSI Quantum Fireball HDD ... whilst it lasts, anyway.
Yeah, I think the disks are the crux of the matter. Afaik, SCSI disks (those with a parallel interface) haven't been made in decades (those with an FC interface are still made, I think). IDE drives, OTOH, can trivially be replaced (upgraded) with CF cards. Is there a SCSI to IDE (oh the horror) adapter?
And there I was, disposing of a Sun Ultra 60 because it (was ancient and) came with the inferior IDE interface ...
Yeah, it's wild to think how much human effort is in each of these subsystems. And how fast-paced everything was at the time, how quickly it became obsolete.
Steve Jobs wanted NeXT to essentially be SGI.
I have a soft spot for the Octane.
https://en.wikipedia.org/wiki/SGI_Octane
The G4 Cube was Apple's version of it, after Jobs returned to Apple.
https://en.wikipedia.org/wiki/Power_Mac_G4_Cube
Just like SW today. History repeats itself.