To achieve the same end of distributing the NixOS binary caches (treating this as a self-contained task, decoupled from deduplicating the content of NAR files to reduce transport bandwidth), I'm suggesting, for now, that we instead use a good ol' battle-tested infrastructure we all already know: BitTorrent with the Mainline DHT, or a secured server that lists the torrent files.
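To make the idea concrete, here is a minimal sketch (not an endorsed design) of what publishing a single NAR from a binary cache as a trackerless torrent could look like, so that peers find each other over the Mainline DHT and a signed index server only needs to list the resulting magnet URIs. It assumes the libtorrent Python bindings are available; the cache paths and file names are hypothetical.

import libtorrent as lt

# Hypothetical paths: one NAR file served by a binary cache.
NAR_PATH = "/var/cache/nix-serve/nar/example.nar.xz"
TORRENT_OUT = "example.nar.xz.torrent"

# Build the torrent metadata for the single NAR file.
fs = lt.file_storage()
lt.add_files(fs, NAR_PATH)
t = lt.create_torrent(fs)

# Deliberately add no tracker: peers are discovered via the Mainline DHT.
lt.set_piece_hashes(t, "/var/cache/nix-serve/nar")  # parent directory of the file

# Write the .torrent file; this is what a secured index server would list.
with open(TORRENT_OUT, "wb") as f:
    f.write(lt.bencode(t.generate()))

# The magnet URI (info-hash) is the compact handle clients would fetch by.
info = lt.torrent_info(TORRENT_OUT)
print(lt.make_magnet_uri(info))

A seeder (e.g. the existing cache machine) would then add the torrent to a libtorrent session with DHT enabled, and clients would resolve the magnet URI from the index and download the NAR from whoever is seeding.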
I don't think I have to justify this choice by making a list of content/datasets already being distributed over torrent, but for completeness' sake:
300 TB of data still on an HTTP/FTP server:
Until the performance shortcomings of IPFS are fixed (tracked independently in https://github.com/rht/sfpi-benchmark, regardless of whether it is endorsed by any specific party; and who knows how or when IPLD, filestore, and cluster will be integrated), I think it would be more prudent to use torrent to engineer production-grade (as opposed to proof-of-concept) CDN infrastructure (quantifiable as a reduction in operational cost, or an increase in scalability on the nixops side), while still testing several emerging distributed p2p content-distribution / computational systems for research.
I hope I am not (unintentionally) criticising a specific closed, for-profit group once more. I have checked the rules of our host, i.e. GitHub, at http://todogroup.org/opencodeofconduct/ and found them sufficiently liberal and understanding (whether the web should be decentralized is beside the point), in particular the line
Our open source community prioritizes marginalized people’s safety over privileged people’s comfort
(here, "marginalized people" might refer to scholars/researchers without a strong foothold)