• Karna@lemmy.ml (OP) · 1 hour ago

      That solves the media distribution related storage issue, but not the CI/CD pipeline infra issue.

  • merthyr1831@lemmy.ml · 11 hours ago

    Yet another reason to back Flatpaks and distro-agnostic software packaging. We can't afford to run dozens of build systems to maintain dozens of functionally identical application repositories.

    • Mwa@lemm.ee · 2 hours ago (edited)

      Let the community package it as deb, rpm, etc. while the devs focus on Flatpak/AppImage.

    • ubergeek@lemmy.today · 5 hours ago

      Pretty sure Flatpak uses Alpine as a bootstrap… Flatpak, after all, brings along an entire distro to run an app.

    • balsoft@lemmy.ml · 5 hours ago

      I don’t think it’s a solution for this, it would just mean maintaining many distro-agnostic repos. Forks and alternatives always thrive in the FOSS world.

  • ryannathans@aussie.zone · 22 hours ago

    How are they so small and underfunded? My hobby home servers and internet connection satisfy their simple requirements.

      • DaPorkchop_@lemmy.ml · 1 hour ago

        That’s ~2.4Gbit/s. There are multiple residential ISPs in my area offering 10Gbit/s up for around $40/month, so even if we assume the bandwidth is significantly oversubscribed, a single cheap residential internet plan should be able to handle that no problem (let alone for a datacenter setup, which probably has 100Gbit/s links or faster).

      • chaoticnumber@lemmy.dbzer0.com · 20 hours ago (edited)

        That averages out to around 300 megabytes per second. No way anyone has that at home commercially.

        One of the best commercial fiber connections I ever saw will provide 50 megabytes per second upload, best effort that is.

        No way in hell you can satisfy that bandwidth requirement at home. Let's not mention that they need 3 nodes with such bandwidth.

        • DaPorkchop_@lemmy.ml · 1 hour ago (edited)

          50MB/s is like 0.4Gbit/s. Idk where you are, but in Switzerland you can get a symmetric 10Gbit/s fiber link for like 40 bucks a month as a residential customer. Considering 100Gbit/s and even 400Gbit/s links are already widely deployed in datacenter environments, 300MB/s (or 2.4Gbit/s) could easily be handled even by a single machine (especially since the workload basically consists of serving static files).
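
          The conversions being argued over above are easy to sanity-check (a minimal sketch, assuming decimal units, i.e. 1 GB = 1000 MB and 1 byte = 8 bits):

          ```python
          def mb_per_s_to_gbit(mb_per_s: float) -> float:
              """Convert megabytes/second to gigabits/second (decimal units)."""
              return mb_per_s * 8 / 1000

          # The figures quoted in this thread:
          print(mb_per_s_to_gbit(300))  # 300 MB/s -> 2.4 Gbit/s
          print(mb_per_s_to_gbit(50))   # 50 MB/s  -> 0.4 Gbit/s

          # A 130 TB/month cap (mentioned below) spread over ~30 days:
          mb_per_s = 130 * 1e6 / (30 * 86400)  # TB -> MB, month -> seconds
          print(round(mb_per_s))  # ~50 MB/s sustained, i.e. roughly 0.4 Gbit/s
          ```

          So the disputed "300 MB/s" figure is indeed about a quarter of one 10 Gbit/s residential link.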

      • ryannathans@aussie.zone · 21 hours ago

        On my current internet plan I can move about 130TB/month and that’s sufficient for me, but I could upgrade my plan to satisfy the requirement.

        • Karna@lemmy.ml (OP) · 1 hour ago

          Your home server might have the required bandwidth, but not the requisite infra to support server load (hundreds of parallel connections/downloads).

          Bandwidth is only one aspect of the problem.