Offload AI workloads from edge devices to powerful cloud servers—without changing your code.
This repository demonstrates how to seamlessly distribute AI training across heterogeneous devices using active storage. Run heavy PyTorch models on resource-constrained edge devices by intelligently offloading computation to the cloud.
Note: This work is part of the ICOS EU Project 🇪🇺 and builds upon our previous research in edge computing optimization, MetaOS architecture, and adaptive ML for constrained environments. See Related Publications for more details.
Reduce memory usage by 80%+ on edge devices while maintaining performance. Perfect for deploying AI on resource-constrained hardware like Raspberry Pi, Orange Pi, or embedded systems.
- Minimal code changes - Keep your existing PyTorch/scikit-learn code
- Transparent offloading - dataClay handles distribution automatically
- Resource flexibility - Mix and match edge devices with cloud resources
- Real datasets - Includes CPU utilization data on Hugging Face
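To give a flavour of what transparent offloading looks like in code, the sketch below follows the general dataClay pattern of a class whose `@activemethod` methods execute wherever the object is stored. It is illustrative only: the class name `CPUUtilizationModel`, its attribute, and the toy training loop are assumptions rather than the repository's actual model (see `train_dataclay.py` for that), and it assumes the dataClay 3.x Python API.

```python
# Illustrative sketch only -- not the repository's actual model class.
# Assumes the dataClay 3.x Python API (DataClayObject, activemethod).
import torch
from dataclay import DataClayObject, activemethod


class CPUUtilizationModel(DataClayObject):
    """Hypothetical wrapper: @activemethod bodies run where the object is stored."""

    last_loss: float  # persistent attribute kept on the dataClay backend

    @activemethod
    def train(self, epochs: int = 10) -> float:
        # Executes on the dataClay backend (the cloud server), so the edge
        # device never holds the model, optimizer state, or training data.
        model = torch.nn.Linear(8, 1)
        optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
        x, y = torch.randn(256, 8), torch.randn(256, 1)  # placeholder data
        for _ in range(epochs):
            optimizer.zero_grad()
            loss = torch.nn.functional.mse_loss(model(x), y)
            loss.backward()
            optimizer.step()
        self.last_loss = float(loss)
        return self.last_loss


# On the edge device (with a dataClay client session active):
#   model = CPUUtilizationModel()
#   model.make_persistent(alias="my_experiment")
#   loss = model.train(epochs=10)  # runs on the server; only the loss travels back
```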
To train a model on the server using dataClay:

- Open `train_dataclay.py` and set the correct `server_ip` (see the configuration sketch after these lists).
- Ensure Docker Compose is up and running:

  ```bash
  docker-compose up -d
  ```

- Define two experiment names:
  - `client_experiment` for storing client results.
  - `experiment_name` for storing server results.
- Run the training notebook: `./AIoffload/0_server_experiment.ipynb`

For standalone (non-dataClay) experiments:

- Open the notebook: `./Baseline/1_client_experiment.ipynb`
- Set `experiment_name` to track and store metrics and results.
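As referenced above, here is a minimal sketch of the client-side configuration for these steps. It assumes the dataClay 3.x `Client` API; the IP address, experiment names, and any credentials are placeholders, not values from the repository.

```python
# Illustrative configuration sketch (placeholder values, not the repository's code).
from dataclay import Client

server_ip = "192.168.1.50"            # IP of the machine running docker-compose
client_experiment = "client_run_01"   # alias under which client results are stored
experiment_name = "server_run_01"     # alias under which server results are stored

client = Client(host=server_ip)
client.start()

# Persistent objects can then be registered under the experiment aliases, e.g.:
#   results = CPUUtilizationModel()               # class from the earlier sketch
#   results.make_persistent(alias=experiment_name)
#   loss = results.train(epochs=10)               # executes on the server

client.stop()
```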
You'll need to configure both the server and client environments.
- Install Miniconda:
  - For Linux:

    ```bash
    wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh
    bash Miniconda3-latest-Linux-x86_64.sh
    ```

  - For macOS:

    ```bash
    curl -O https://repo.anaconda.com/miniconda/Miniconda3-latest-MacOSX-x86_64.sh
    bash Miniconda3-latest-MacOSX-x86_64.sh
    ```

- Initialize Conda:

  ```bash
  source ~/miniconda3/bin/activate
  conda init
  ```

  Then restart your terminal.
- Create and activate the server environment:

  ```bash
  conda create -n server_env python=3.10.16 -y
  conda activate server_env
  ```

- Clone the repository and install dependencies:

  ```bash
  git clone <repository_url>
  cd <repository_directory>
  pip install -r requirements.txt
  ```

  Make sure `torch` is included (add a specific CUDA version if needed).
- Start Docker + dataClay:

  ```bash
  docker-compose down
  docker-compose up -d
  ```
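Before moving on to the client, you can optionally check that the dataClay services started by Docker Compose are reachable. This is a minimal sketch assuming the dataClay 3.x `Client` API and a local deployment; adjust the host and any credentials to your setup.

```python
# Optional connectivity check (illustrative; adjust host/credentials as needed).
from dataclay import Client

client = Client(host="127.0.0.1")  # use the server's IP if checking from another machine
client.start()                     # raises if the dataClay services are not up
print("dataClay services are reachable")
client.stop()
```

Then set up the client environment: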
- Install Miniconda (same steps as the server).
- Create and activate the client environment:

  ```bash
  conda create -n client_env python=3.10.16 -y
  conda activate client_env
  ```

- Clone the repository and install dependencies:

  ```bash
  git clone <repository_url>
  cd <repository_directory>
  pip install -r requirements-client.txt
  ```

  Omit `torch` from the client requirements if it is unnecessary.
- Install Jupyter Notebook:

  ```bash
  conda install -c conda-forge notebook
  ```

- Launch Jupyter:
  ```bash
  jupyter-notebook
  ```

This project is licensed under CC BY-NC-SA 4.0. You are free to share and adapt the material with proper attribution, for non-commercial purposes only, and under the same license.
If you use this work in your research, please cite:
```bibtex
@article{barcel2025offloading,
  title={Offloading Artificial Intelligence Workloads across the Computing Continuum by means of Active Storage Systems},
  author={Barcel{\'o}, Alex and Ord{\'o}{\~n}ez, Sebasti{\'a}n A Cajas and Samanta, Jaydeep and Su{\'a}rez-Cetrulo, Andr{\'e}s L and Ghosh, Romila and Carbajo, Ricardo Sim{\'o}n and Queralt, Anna},
  journal={Future Generation Computer Systems},
  pages={108271},
  year={2025},
  publisher={Elsevier},
  doi={10.1016/j.future.2025.108271},
  url={https://www.sciencedirect.com/science/article/pii/S0167739X25005655}
}
```

- 📖 Official Blog Post: CeADAR Publication
- 📄 arXiv Preprint: arXiv:2512.02646
- 📋 CITATION.cff: See CITATION.cff for structured citation metadata
This work builds upon and extends our previous research in the ICOS project:
- *ICOS: An Intelligent MetaOS for the Continuum*, Garcia et al., MECC '25
- *Adaptive Machine Learning for Resource-Constrained Environments*, Cajas Ordóñez et al., Lecture Notes in Computer Science, 2025
- *Drift-MoE: A Mixture of Experts Approach to Handle Concept Drifts*, arXiv:2507.18464
- *Intelligent Edge Computing and Machine Learning: A Survey of Optimization and Applications*, Future Internet 2025, 17(9), 417
🇪🇺 This work has received funding from the European Union's HORIZON research and innovation programme under grant agreement No. 101070177 (ICOS Project).
🌟 Star us on GitHub if this helps your research! 🌟


