Just need to do a dnf update on them all…
Wow, that’s kind of a lot more Linux than I was expecting, but it also makes sense. Pretty cool tbh.
So basically, everybody switched from expensive UNIX™ to cheap “unix”-in-all-but-trademark-certification once it became feasible, and otherwise nothing has changed in 30 years.
Except this time the Unix-like took 100% of the market
Was too clear this thing is just better
So you’re telling me that there was a Mac super computer in '05?
https://en.wikipedia.org/wiki/System_X_(supercomputer)
G5
Oof, in only a couple years it was worthless.
If I recall correctly they linked a bunch of powermacs together with FireWire.
It was apparently later transitioned to Xserves
Ah hahahaha!!!
Windows! Some dumbass put Windows on a supercomputer!
Prob Microsoft themselves
Ironically, even Microsoft uses Linux in its Azure datacenters, iirc
Good point.
But still, the 30% efficient supercomputer.
And Mac! Whatever that means 🤣
Probably need one, just for the benchmark comparisons.
Wait what Mac?
The Big Mac. 3rd fastest when it was built and also the cheapest, costing only $5.2 million.
3rd fastest
And 1st tastiest
Interesting. It’s like those data centers that ran on thousands of Xboxes
Wha?
(searches interwebs)
Wow, that completely passed me by…
I think it was PS3 that shipped with “Other OS” functionality, and were sold a little cheaper than production costs would indicate, to make it up on games.
Only thing is, a bunch of institutions discovered you could order a pallet of PS3’s, set up Linux, and have a pretty skookum cluster for cheap.
I’m pretty sure Sony dropped “Other OS” not because of vague concerns of piracy, but because they were effectively subsidizing supercomputers.
Don’t know if any of those PS3 clusters made it onto Top500.
It was 33rd in 2010:
In November 2010, the Air Force Research Laboratory created a powerful supercomputer, nicknamed the “Condor Cluster”, by connecting together 1,760 consoles with 168 GPUs and 84 coordinating servers in a parallel array capable of 500 trillion floating-point operations per second (500 TFLOPS). As built, the Condor Cluster was the 33rd largest supercomputer in the world and was used to analyze high definition satellite imagery at a cost of only one tenth that of a traditional supercomputer.
https://en.wikipedia.org/wiki/PlayStation_3_cluster
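Taking the quoted numbers at face value, here's a quick back-of-envelope check. Note the per-console figure is only a rough upper bound, since attributing all 500 TFLOPS to the PS3s ignores whatever the 168 GPUs contributed:

```python
# Back-of-envelope check of the Condor Cluster numbers quoted above.
# Assumption: crediting all 500 TFLOPS to the consoles alone overstates
# the per-PS3 figure, since the 168 GPUs did part of the work.

total_flops = 500e12   # 500 TFLOPS, as reported
consoles = 1760        # number of PS3s in the cluster

per_console_gflops = total_flops / consoles / 1e9
print(f"{per_console_gflops:.0f} GFLOPS per console (rough upper bound)")
# prints: 284 GFLOPS per console (rough upper bound)
```

In the same ballpark as the Cell processor's advertised peak, so the headline number is at least plausible.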
Makes me think how PS2 had export restrictions because “its graphics chip is sufficiently powerful to control missiles equipped with terrain reading navigation systems”
That’s so friggin cool to think about!
Oh Xserve, we hardly knew ye 😢
Mac is a flavor of Unix, not that surprising really.
Mac is also derived from BSD, since it is built on Darwin
Apple built its current desktop environment for its proprietary ecosystem on BSD with their own twist, while supercomputers are typically multiuser parallel computing beasts, so I’d say it is really fucking surprising. Pretty and responsive desktop environments and breathtaking number crunchers are polar opposite products. Fuck me, you’ll find UNIX roots in Windows NT, but my flabbers would be ghasted if Deep Blue had dropped a Blue Screen.
As someone who worked on designing racks in the supercomputer space about 10 q5vyrs ago, I had no clue Windows and Mac even tried to enter the space
about 10 q5vyrs ago
Have you been distracted and typed a password/PSK in the wrong field 8)
Lol typing on phone plus bevy. Can’t defend it beyond that
There was a time when a bunch of organisations made their own supercomputers by just clustering a lot of regular computers:
https://en.wikipedia.org/wiki/System_X_(supercomputer)
For Windows I couldn’t find anything.
If you google “Windows supercomputer”, you just get lots of results about Microsoft supercomputers, which of course all run on Linux.
No, there was an HPC SKU of Windows Server 2003 and 2008: https://en.m.wikipedia.org/wiki/Windows_Server_2003#Windows_Compute_Cluster_Server
Microsoft earnestly tried to enter the space with a deployment system, a job scheduler and an MPI implementation. Licenses were quite cheap and they were pushing hard with free consulting and support, but it did not stick.
but it did not stick.
Yeah. It was bad. The job of a Supercomputer is to be really fast and really parallel. Windows for Supercomputing was… not.
I honestly thought it might make it, considering the engineering talent that Microsoft had.
But I think time proves that Unix and Linux just had an insurmountable head start. Windows, to the best of my knowledge, never came close to closing the gap.
At this point I think it’s most telling that even Azure runs on Linux. Microsoft’s twin flagship products somehow still only work well when Linux does the heavy lifting and works as the glue between them.
But, surely Windows is the wrong OS?
Windows is a per-user GUI… supercomputing is all about crunching numbers, isn’t it?
I can understand M$ trying to get into this market, and I know Windows Server can be used to run stuff, but again, you don’t need a GUI on each node of a supercomputer; they’d be better off with DOS…?
I could see the NT kernel being okay in isolation, but the rest of Windows coming along for the ride puts the kibosh on that idea.
But, surely Windows is the wrong OS?
Oh yes! To be clear - trying to put any version of Windows on a super-computer is every bit as insane as you might imagine. From what I heard in the rumor mill, it went every bit as badly as anyone might have guessed.
But I like to root for an underdog, and it was neat to hear about Microsoft engineers trying to take the Windows kernel somewhere it had no rational excuse to run, perhaps by sheer force of will and hard work.
Yeah, it was System X I worked on; our default was Red Hat. I forget the other options, but Windows and Mac sure as shit weren’t on the list
Would the one made out of playstations be in this statistic?
I think you can actually see it in the graph.
The Condor Cluster with its 500 Teraflops would have been in the Top 500 supercomputers from 2009 till ~2014.
The PS3 operating system is a BSD, and you can see a thin yellow line in that exact time frame.
Yes, in the Linux stat. The OtherOS option on the early PS3 allowed you to boot Linux, which is what most, if not all, of the clusters used.
What would the other be
TempleOS
When you really have to look deep into god’s mind you just have to put templeOS on a supercomputer.
If you install TempleOS on the fastest supercomputer Frontier, you get Event Horizon.
WARNING: Gory, disturbing picture
Do NOT network-enable TempleOS.
God will get angry if you do.
What movie/tv show is this image from?
Event Horizon
Praise be upon him
a glowie’s worst nightmare
How can there be N/A though? How can any functional computer not have an operating system? Or does just reading off the really big MHz number on the CPU count as being a supercomputer?
Thanks for the links!
We’re gonna take the test, and we’re gonna keep taking it until we get one hundred percent in the bitch!
Any idea how it’d look if broken down into distros? I’m assuming enterprise support would be favoured so Red Hat or Ubuntu would dominate?
I can’t imagine Supercomputers to use a mainstream operating system such as Ubuntu. But clearly people even put Windows on it, so I shouldn’t be surprised…
The previously fastest ran on Red Hat Enterprise Linux, the current fastest runs on SUSE Enterprise Linux.
The current third fastest (owned by Microsoft) runs Ubuntu. That’s as far as I care to research.
current fastest runs on SUSE Enterprise Linux
No wayyy! Why SUSE tho?
Because all the Arch consultants were busy posting on the internet.
This looks impressive for Linux, and I’m glad FLOSS has such an impact! However, I wonder if the numbers are still this good if you consider more supercomputers. Maybe not. Or maybe yes! We’d have to see the evidence.
There’s no reason to believe smaller supercomputers would have significantly different OSes.
At some point you enter the realm of mainframes and servers.
Mainframes almost all run Linux now; the last Unixes are close to EOL.
Servers have about a 75% Linux market share, with the rest mostly running Windows and some BSD.
I wonder if the numbers are still this good if you consider more supercomputers.
Great question. My guess is not terribly different.
“Top 500 Supercomputers” is arguably a self-referential term. I’ve seen “super-computer” defined as any machine that was among the 500 fastest computers in the world on the day it went live.
As new super-computers come online, workloads from older ones tend to migrate to the new ones.
So there usually aren’t a huge number of currently operating supercomputers outside of the top 500.
When a super-computer falls toward the bottom of the top 500, there’s a good chance it is getting turned off soon.
That said, I’m referring here only to the super-computers that spend a lot of time advertising their existence.
I suspect there’s a decent number out there today that prefer not to be listed. But I have no reason to think those don’t also run Linux.