A university near me must be going through a hardware refresh, because they’ve recently been auctioning off a bunch of ~5-year-old desktops at extremely low prices. The only problem is that you can’t buy just one or two. All the auction lots are batches of 10-30 units.

It got me wondering if I could buy a bunch of machines and set them up as a distributed computing cluster, sort of a poor man’s version of the way modern supercomputers are built. A little research revealed that this is far from a new idea. The first really successful distributed computing cluster (called Beowulf) was built by a team at NASA in 1994 using off-the-shelf PCs instead of the expensive custom hardware being used by other supercomputing projects at the time. It was also a watershed moment for Linux, then only a few years old, which Beowulf ran on.

Unfortunately, a cluster like this seems less practical for a homelab than I had hoped. I initially imagined that there would be some kind of abstraction layer allowing any application to run across all computers on the cluster in the same way that it might scale to consume as many threads and cores as are available on a CPU. After some more research I’ve concluded that this is not the case. The only programs that can really take advantage of distributed computing seem to be ones specifically designed for it. Most of these fall broadly into two categories: expensive enterprise software licensed to large companies, and bespoke programs written by academics for their own research.
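
From what I’ve read, the classic way programs are “specifically designed for it” is to be written against a message-passing library like MPI: the same binary is launched on every node, and each copy asks the runtime for its rank to decide which slice of the work it owns. Here’s a minimal sketch in C (assumes an MPI implementation such as Open MPI is installed; the hostfile contents are placeholders):

```c
/* hello_mpi.c -- minimal MPI sketch, not a real workload.
 * Build:  mpicc hello_mpi.c -o hello_mpi
 * Run:    mpirun -np 8 --hostfile hosts ./hello_mpi   (Open MPI syntax)
 */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);                 /* start the MPI runtime */

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this copy's ID within the job */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total copies across all nodes */

    char host[MPI_MAX_PROCESSOR_NAME];
    int len;
    MPI_Get_processor_name(host, &len);     /* which machine this copy landed on */

    printf("rank %d of %d running on %s\n", rank, size, host);

    MPI_Finalize();                         /* shut the runtime down */
    return 0;
}
```

Everything interesting (who computes what, who sends results to whom) has to be spelled out explicitly with calls like MPI_Send/MPI_Recv, which is exactly why ordinary desktop software can’t just scale out onto a cluster for free.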

So I’m curious what everyone else thinks about this. Have any of you built or admined a Beowulf cluster? Are there any useful applications that would make it worth building for the average user?

  • Ramin Honary@lemmy.ml · 9 months ago
    Someone with more expertise can correct me if I am wrong, but the last I heard, cluster computing of this kind has largely been made obsolete by modern IaaS and cloud computing technology.

    For example, the Xen project provides unikernels as part of its cloud platform. A unikernel is (as I understand it) basically a tiny guest operating system that is statically linked to a programming language runtime or virtual machine. So the Xen guest boots into a single executable composed of the language runtime (like the Java virtual machine) statically linked to the unikernel, and then runs whatever high-level language that runtime supports: Java, C#, Python, Erlang, what have you.

    The appeal is that if you skip running Linux altogether, even a tiny Linux build like Alpine, and boot directly into the virtual machine process, you tend to get much better memory efficiency, so you can fit more processes into the memory of a single physical compute node. Microsoft Azure does something similar (I think).

    To use it, you basically write a service in a language that runs on such a VM and build it against a Xen unikernel. When you run the service, Xen allocates the computing resources for it and launches the executable directly, without an operating system underneath, so the VM is, in effect, the operating system.
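
    If it helps to picture it, deploying one of these looks less like installing a guest OS and more like pointing the hypervisor at a single binary. A hypothetical `xl` guest config might be as small as this (the name and image path are made up):

    ```
    # hypothetical Xen xl config -- boots the unikernel image directly,
    # with no Linux guest underneath
    name   = "hello-service"
    kernel = "/srv/unikernels/hello-service.xen"  # statically linked unikernel binary
    memory = 64    # MiB -- far less than a full guest OS would need
    vcpus  = 1
    ```

    Then `xl create hello-service.cfg` boots it like any other guest.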