Chebucto Regional Softball Club

When I see server farms they often feature network cables, so many network cables.

myrmepropagandist:

When I see server farms they often feature network cables, so many network cables. But... if you were building a massive computing center would you need all of those cables if you had the most high-tech equipment?

That is ... are the cable bundles something we'll outgrow?

I guess I need to look at some tours of the most massive data centers?

Leon P Smith (#4):

@futurebird Yes, most large datacenters have, I'm sure, many metric shittons of ethernet cables in them. Usually a lot of work goes into cable management, otherwise things would be totally out of control.

There are companies that have even designed large compute clusters in clever ways to minimize cable length, which minimizes latency and saves money and effort in wiring.

There's often a fair bit of fiber optic cable too, but there are tradeoffs in terms of cost. Also, it takes an incredibly beefy server to make much use of the very high bandwidths that fiber optics provide, so sometimes application servers run copper to switches, which aggregate multiple servers into one very high-bandwidth fiber optic backbone link.
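
To make that concrete, here's a rough back-of-the-envelope sketch. Every number in it (48 servers per rack, 25 Gbps copper, 4×100 Gbps fiber uplinks) is an illustrative assumption, not a figure from any particular datacenter:

```python
# Back-of-the-envelope: copper downlinks aggregated into a fiber backbone.
# Every figure below is an illustrative assumption, not a real measurement.

servers_per_rack = 48         # assumed: one copper run per server to the rack switch
copper_gbps_per_server = 25   # assumed per-server copper link speed
fiber_uplinks = 4             # assumed fiber uplinks from the rack switch to the backbone
fiber_gbps_per_uplink = 100   # assumed per-uplink fiber bandwidth

downlink_gbps = servers_per_rack * copper_gbps_per_server  # total copper into the switch
uplink_gbps = fiber_uplinks * fiber_gbps_per_uplink        # total fiber out of the switch
oversubscription = downlink_gbps / uplink_gbps             # 3:1 with these numbers

print(f"Copper into the rack switch: {downlink_gbps} Gbps")
print(f"Fiber out to the backbone:   {uplink_gbps} Gbps")
print(f"Oversubscription ratio:      {oversubscription:.1f}:1")

# Cable length shows up as latency too: signals propagate at very roughly
# 5 ns per meter, so every 10 m of cabling you avoid saves ~50 ns each way.
meters_avoided = 10
print(f"~{meters_avoided * 5} ns saved per {meters_avoided} m of cable avoided (one way)")
```

The point of the sketch is just the shape of the tradeoff: lots of cheap copper runs fan into a switch, and a handful of expensive high-bandwidth fiber links carry the aggregate.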

lopta:

@futurebird Have you looked at @oxidecomputer at all?

Space Hobo (#5):

@lopta @futurebird @oxidecomputer "Until Oxide, cloud architecture remained exclusively in public cloud datacenters, inaccessible to 85% of global IT workloads that run on-premises."

Huh, so I guess they're pretending OpenStack never existed?

lopta (#6):

@spacehobo @futurebird @oxidecomputer I think they're talking more about a tightly-integrated platform underneath that software stack.

Space Hobo (#7):

@lopta @futurebird @oxidecomputer I mean, there's always k8s for that kind of thing, but maybe I'm one of those "cloud computing just means a VM spawning API and crypto is short for cryptography" Olds now.

myrmepropagandist (#8):

This video pretty much shows what I expected, although finding out that there are millions of machines on those racks was kind of vertigo inducing. This is what "a data center" looks like today. And people keep all those cables tidy, and replace the servers when they break.

lopta (#9):

@spacehobo @futurebird @oxidecomputer What I've seen of their work is encouraging but I haven't got to test it myself. Looks like they put a lot of work into power, networking and firmware in an attempt to weed out a lot of the cruft that you get with racks of traditional servers.

Dan Ports (#10):

@futurebird We recently brought in these same cable-tidiers to manage my (much, much smaller) research lab, and they are much, much better at running cables neatly than I am. It is very obvious which racks they wired and which ones I did.

(This is probably why they look nervous whenever I walk into the server room.)

James Widman (#11):

@lopta @spacehobo @futurebird @oxidecomputer To me the most impressive part is where they completely replaced the BIOS with their own firmware, written from scratch in Rust, that is co-designed with the kernel.

But also, close to OP's point, they designed the rack so that the owner never does any cable management within the rack. E.g. when you need to add another motherboard, you just slide it into an empty slot, and the back of it auto-connects to DC power & network cables.

Guest (#12):

@lopta @spacehobo @futurebird @oxidecomputer My mind went to Oxide as well, and while I can't find pictures, they do things like integrating power and network into the rack chassis, so replacing a compute/storage sled is just a case of sliding a new one in. No running cables. I don't know of anybody else *selling* that kind of thing.

Guest (#13):

@lopta @spacehobo @futurebird @oxidecomputer In principle you could extend that, especially in sci-fi. In the same way these take common components like power and sacrifice flexibility of compute nodes (you need to buy Oxide devices) for easy maintenance (you don't need to recable), you could do that with the whole rack. Replace the devices with blocks of, I don't know, memristor arrays that combine memory and compute in the same components. Fill an asteroid with precisely machined tunnels; line them with superconducting rails that can be transport, power supply, or network as needed.

Route around failed nodes with virtualization until you have enough to shuffle them. Or just take the hit like an SSD and overprovision so you have spare nodes to fill in. There's enough weird future tech in the concept of a 20k year data centre that it's hard to know what failure modes need to be considered; but if you have reliable enough hardware, overprovisioning and protecting it may be a better approach than trying to build repair systems, which add additional (and different) complexity. I would imagine that at that scale storage would have to be kept live; I'd expect high performance memory to decay from Weird Quantum Effects.
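
To put even a toy number on the "overprovision like an SSD" idea: the sketch below assumes independent failures at a constant, completely made-up rate, and made-up node counts, purely for illustration.

```python
# Toy overprovisioning estimate for a sealed, no-repair data centre.
# The failure rate and node counts are made-up assumptions for illustration.

annual_failure_rate = 1e-4      # assumed: 0.01% of nodes fail per year
years = 20_000                  # the 20k-year horizon from the thread
nodes_needed_live = 1_000_000   # assumed number of nodes the workload needs

# Probability a single node survives the whole horizon,
# assuming independent failures at a constant rate.
survival = (1 - annual_failure_rate) ** years   # ~13.5% with these numbers

# Nodes to install up front so the *expected* number of survivors still
# meets the requirement (a real design would add margin for variance).
nodes_to_install = nodes_needed_live / survival
overprovision_factor = nodes_to_install / nodes_needed_live

print(f"Per-node survival over {years} years: {survival:.1%}")
print(f"Install ~{nodes_to_install:,.0f} nodes "
      f"(~{overprovision_factor:.1f}x overprovisioning)")
```

Even at that optimistic failure rate you end up installing several times more hardware than you ever need live, which is why the "route around failures and reshuffle" part of the idea matters.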

myrmepropagandist:

To write SF you gotta just be full of hubris. Yeah yeah, I can totally learn enough about networking to describe a data center of the future.

But, it turns out I only have a hazy notion why contemporary ones are filled with all those cable bundles.

It's clear to me those need to go if you want a self-repairing data center that can last for 20k years or more.

Even if you seal the place up ... the sagging leads to problems over time.

It needs to be one solid state machine.

Barry Goldman (#14):

@futurebird Not one solid machine to survive that long. Delocalized: the cables are alive and constantly growing, repairing, re-attaching, especially as new data storage units grow...

Guest (#15):

@futurebird One solid state machine never touched by radiation, connected to the outside by connections that are oxidation proof, thermal shock proof, physical shock proof, ...

And if you put up a lead shield to block the low energy radiation, the high energy radiation becomes low(er) energy ...

If I were designing this data center, I would hope I did not intend to be alive 200 years from now ...
