🍴 EmbeddedLLM Fork

This is a downstream fork of skypilot-org/skypilot maintained by EmbeddedLLM. It tracks upstream releases and stacks a small set of custom patches on top.

| | |
| --- | --- |
| Upstream version | 0.12.0 |
| Branch | ellm-0.12.0 |
| Image | ghcr.io/embeddedllm/skypilot:v0.12.0 |
| Upstream repo | skypilot-org/skypilot |

🐳 Image Tags

| Tag | Meaning |
| --- | --- |
| v0.12.0 | Stable, production-ready build based on upstream v0.12.0 |
| v0.12.0-dev | Development build off the ellm-0.12.0 branch, not yet stable |
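
Both tags can be pulled directly from GHCR (standard docker usage, shown here for convenience):

```bash
# Stable build
docker pull ghcr.io/embeddedllm/skypilot:v0.12.0
# Development build
docker pull ghcr.io/embeddedllm/skypilot:v0.12.0-dev
```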

🔧 Custom Patches

**Enable dual GPU in a single API server** (commit 493fb1f)
Files changed: sky/clouds/kubernetes.py, sky/provision/kubernetes/utils.py

Makes get_node_accelerator_count check both the nvidia.com/gpu and amd.com/gpu resource keys, so nodes with either vendor's GPU report a non-zero accelerator count.
Note: the original kubernetes.py resource-key selection from this patch keyed off skypilot.co/gpu node labels. The formatter-driven resource-key selection in 9cd2668 plus the mixed-cluster fix in f65b71f superseded that logic; this patch's surviving contribution is the get_node_accelerator_count change.
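
The two resource keys the patched function inspects can also be checked by hand with plain kubectl; this is a diagnostic sketch, not the patched Python:

```bash
# A node with either vendor's device plugin reports a non-zero count in
# exactly one of these columns ('<none>' means the key is absent).
kubectl get nodes -o custom-columns='NAME:.metadata.name,NVIDIA:.status.allocatable.nvidia\.com/gpu,AMD:.status.allocatable.amd\.com/gpu'
```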

**Automatic AMD GPU detection via device plugin labels** (commit 9cd2668, simplified by b35db4a)
Files changed: sky/provision/kubernetes/utils.py, sky/catalog/kubernetes_catalog.py, sky/clouds/kubernetes.py, sky/utils/gpu_names.py, sky/client/cli/command.py

AMD GPU nodes are detected automatically when the AMD device plugin is installed; no sky gpus label is required. Adds AMDGPULabelFormatter, which reads the direct label amd.com/gpu.product-name = <NAME>. Each node must expose exactly one GPU type (the homogeneous-node assumption); nodes with multiple distinct AMD GPUs (e.g. iGPU + dGPU) are not supported, and neither is the suffix label format the AMD device plugin emits on such nodes. iGPU-only nodes are treated like any other GPU node. All GPU detection code paths (sky gpus list, per-node status, pod scheduling, CPU-only node selection) iterate all formatters per node, enabling mixed NVIDIA + AMD clusters with no extra configuration. Adds 33 AMD canonical GPU names (MI Instinct CDNA1–4, Radeon Pro W-series, Radeon RX RDNA2/3) to the shared GPU name registry.
Follow-up b35db4a dropped suffix-format support (amd.com/gpu.product-name.<NAME> = "1", emitted only on multi-GPU-type nodes, which we don't support) and removed iGPU/APU filtering, since iGPU-only nodes are valid GPU nodes under the homogeneous-node assumption. Net effect: AMDGPULabelFormatter is now structurally identical to the other single-key formatters.
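
To see what the formatter would pick up on a given cluster, list the device-plugin label it reads (plain kubectl, for inspection only):

```bash
# Shows the direct product-name label per node; nodes without the AMD
# device plugin show '<none>' in the extra column.
kubectl get nodes -L amd.com/gpu.product-name
```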

**Fix pod scheduling for mixed NVIDIA + AMD clusters** (commit e9581a9)
Files changed: sky/provision/kubernetes/instance.py

Fixes pod scheduling and SkyServe replica placement on AMD GPU nodes in a mixed cluster. instance.py previously called get_gpu_resource_key(context) (the cluster-wide default) in three places, each of which returned the wrong vendor key for the non-default GPU type:
  • needs_gpus: AMD pods were seen as not needing GPUs (the NVIDIA key is absent from AMD pod limits) → the nvidia RuntimeClass was never skipped and the toleration never added
  • gpu_toleration: the wrong vendor key was used in the pod toleration → the pod was rejected by the AMD node taint
  • error messages: the wrong resource key was shown when scheduling failed
The fix reads the GPU resource key directly from the pod's resource limits and checks all SUPPORTED_GPU_RESOURCE_KEYS values instead of the cluster default; see the sketch below for inspecting those limits by hand.
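
The key the fixed code reads is visible in the pod spec itself; a quick way to confirm which vendor key a pod actually requested (pod name hypothetical):

```bash
# Prints the container's resource limits; the vendor GPU key present
# here (nvidia.com/gpu or amd.com/gpu) is what the fix keys off.
kubectl get pod my-sky-pod -o jsonpath='{.spec.containers[0].resources.limits}'
```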

**Fix wrong GPU resource key for NVIDIA pods in mixed clusters** (commit f65b71f)
Files changed: sky/clouds/kubernetes.py

In a mixed AMD + NVIDIA cluster, requesting an NVIDIA GPU (e.g. an A4000) previously put amd.com/gpu in the pod's resource limits instead of nvidia.com/gpu, making the pod unschedulable: NVIDIA nodes have no amd.com/gpu capacity, and AMD nodes lack the right node-affinity label. Root cause: when the matched label key was non-AMD (GFD, SkyPilot, etc.), the code called get_gpu_resource_key(context), which scans the cluster and returns the first vendor key in dict-iteration order (amd.com/gpu before nvidia.com/gpu); in a mixed cluster this picks AMD for an NVIDIA-targeted request. Fix: derive the resource key directly from the label-formatter category (amd.com/* → amd.com/gpu; any other recognized GPU label → nvidia.com/gpu), falling back to get_gpu_resource_key only when no formatter matched.

**Fix GPU count display for NVIDIA replicas in mixed clusters** (commit 57c8ba2)
Files changed: sky/provision/kubernetes/utils.py

process_skypilot_pods read each pod's gpu_count from the cluster-default resource key (get_gpu_resource_key(context)). In a mixed cluster that default is amd.com/gpu, so an NVIDIA pod's nvidia.com/gpu request was missed and gpu_count came back as 0. Effect: sky status and cost-report displayed NVIDIA replicas as having no accelerators in mixed clusters. Replica scheduling itself was correct (handled by f65b71f); only the status display lied. Fix: iterate SUPPORTED_GPU_RESOURCE_KEYS.values() and read whichever vendor key the pod actually requested.

**Fix node-affinity values rendered as int for AMD GPU labels** (commit 37a0b54)
Files changed: sky/provision/kubernetes/utils.py, sky/templates/kubernetes-ray.yml.j2

For AMD device-plugin suffix labels (e.g. amd.com/gpu.product-name.AMD_Radeon_RX_7900_XTX="1"), the Kubernetes Python client deserializes the value "1" as Python int 1. The int leaked into k8s_acc_label_values and was rendered into the node-affinity matchExpressions.values list as a JSON number, causing pod creation to fail with: cannot unmarshal number into Go struct field NodeSelectorRequirement…values of type string. Two fixes: coerce the label value to str in get_accelerator_label_key_values (the source of truth), and explicitly quote {{label_value}} in the j2 template as defense against future int leakage. NVIDIA via GFD is unaffected because GFD label values are non-numeric strings.

**[Kubernetes] Fix podip endpoint in HA mode** (commit f3b4561)
Files changed: sky/provision/kubernetes/network.py

Fixes the http://None endpoint produced when using high_availability + podip port mode for sky-serve-controller. In HA mode the controller runs as a Deployment, so Kubernetes assigns random pod-name suffixes and the expected {cluster_name}-head pod never exists. The fix uses label selectors instead of a pod-name lookup, which works for both HA and non-HA modes.
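
A label-selector lookup has roughly this shape; the label names below are illustrative, not necessarily the exact selectors the patch uses:

```bash
# Resolve the controller head pod's IP by labels rather than by the
# (non-existent in HA mode) {cluster_name}-head pod name.
kubectl get pods -n "$NAMESPACE" \
  -l "skypilot-cluster=$CLUSTER,ray-node-type=head" \
  -o jsonpath='{.items[0].status.podIP}'
```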

**[Kubernetes] Replace rsync with tar-stream for in-pod file transfer** (commit 4f1c887, plus d5731f4 and e3237a7)
Files changed: sky/utils/command_runner.py, sky/utils/kubernetes/rsync_helper.sh

rsync 3.4.x over kubectl exec deadlocks at session teardown on Ubuntu 26 / kernel 6.8+: the data transfer completes (100% visible in the logs) but neither end ever exits, leaving the SkyServe controller stuck at "Preparing SkyPilot runtime (1/3 - initializing)". The hang reproduces with both rsync ends at 3.4.1 and with --protocol=31, --old-args, --whole-file, --inplace, and --timeout=N, while a one-way tar -c | kubectl exec -i -- tar -x works fine. The patch overrides KubernetesCommandRunner.rsync to use a one-way tar pipeline, sidestepping rsync's bidirectional teardown handshake entirely. It replicates the rsync features in use: .skyignore/.gitignore via --exclude-ignore, .git/info/exclude via --exclude-from, file-target rename via tar --transform='s/^src$/dst/', and --no-same-owner in lieu of --no-owner --no-group. It also forces SPDY transport (KUBECTL_REMOTE_COMMAND_WEBSOCKETS=false) for every kubectl subprocess, since the WebSocket transport on newer kernels deadlocks at roughly 2-3 MB of bidirectional traffic on the same HTTP/2 stream.
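
Stripped of the exclude and transform options listed above, the replacement transfer reduces to a one-way pipeline of this shape:

```bash
# One-way stream: no bidirectional teardown handshake left to deadlock.
export KUBECTL_REMOTE_COMMAND_WEBSOCKETS=false  # force SPDY transport
tar -C ./src -cf - . \
  | kubectl exec -i "$POD" -- tar -xf - -C /dst --no-same-owner
```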

**[Kubernetes] Fix kubectl exec hang after setup script completes** (commit 502df1c, plus 32e4e61)
Files changed: sky/templates/kubernetes-ray.yml.j2

Setup at phase 2/3 hung indefinitely on Ubuntu 26+ even though the remote bash exited cleanly with rc=0. Forensic traces showed the wait stanza's tail -f /tmp/runtime-setup.log & … kill $TAIL_PID failing with "kill: (PID) - Permission denied" under bash --login -c -i; the orphaned tail kept the kubectl exec stdout fd open, so the session never received EOF. Fix: use tail -f --pid=$$ so tail self-terminates when the parent script exits regardless of whether kill succeeds, and add jobs -p | xargs kill plus pkill -P $$ as belt-and-suspenders cleanup before exec 1>&-; exec 2>&-. Also adds forensic set -x tracing tee'd to /tmp/setup_commands_trace.log, EXIT/ERR traps recording line, rc, and timestamp, and an end-of-body marker ===SKY_SETUP_BODY_COMPLETE=== for diagnosing future hangs.
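
The essential shape of the fixed wait stanza (a sketch; the template additionally carries the tracing and traps described above):

```bash
# tail exits on its own once the parent shell ($$) dies, so the kubectl
# exec stdout fd is released even when kill is denied.
tail -f --pid=$$ /tmp/runtime-setup.log &
TAIL_PID=$!
# ... wait for setup to complete ...
kill "$TAIL_PID" 2>/dev/null || true  # best effort; --pid is the guarantee
jobs -p | xargs -r kill 2>/dev/null   # belt-and-suspenders cleanup
```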

**[Serve] Raise per-controller service capacity for k8s workloads** (commit c6f8f23)
Files changed: sky/utils/controller_utils.py

Upstream's _get_number_of_services reserves LAUNCHES_PER_SERVICE × LONG_WORKER_MEM_GB (4 × 0.4 ≈ 1.6 GB) per service for an embedded API server's worker pool inside the controller pod, sized for slow cloud-VM launches that load heavyweight cloud SDKs. With 8 GB of controller memory this caps services at 2. For k8s-mostly deployments, where replica launches are pod-create operations, this is excessive. The patch lowers LAUNCHES_PER_SERVICE from 4 to 2 and introduces SERVE_LOCAL_API_LONG_WORKER_MEM_GB = 0.25 (a separate knob from the global LONG_WORKER_MEM_GB, so it doesn't affect the central API server's worker pool). Per-service cost drops from ~2.1 GB to ~1.0 GB. Capacity: 8 GB → 6 services (was 2), 16 GB → 14, 32 GB → 30. Tradeoff: a service firing more than 2 simultaneous replica launches will queue.
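
The capacity figures are consistent with roughly 2 GB of fixed controller overhead plus ~1.0 GB per service; the 2 GB overhead is inferred from the numbers above, not stated in the patch:

```bash
# capacity = (controller_mem_gb - fixed_overhead_gb) / per_service_gb
for mem in 8 16 32; do
  echo "${mem} GB -> $(( mem - 2 )) services"  # per-service cost ~1.0 GB
done
# Prints 8 GB -> 6, 16 GB -> 14, 32 GB -> 30, matching the table above.
```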

**Pin uv pip to the runtime venv's Python via --python flag** (commit 78fe751)
Files changed: sky/skylet/constants.py, sky/templates/kubernetes-ray.yml.j2, sky/adaptors/oci.py

uv's environment auto-discovery is unreliable on user-provided Docker images that ship a Python interpreter at a non-standard prefix (e.g. ROCm images with /opt/python python-build-standalone layouts in PATH). VIRTUAL_ENV is silently ignored and uv resolves against the image's Python: on Python 3.12 base images the ray 2.9.3 install hard-fails (no wheels with a matching Python ABI tag (cp312)); on 3.10/3.11 base images uv silently mutates the wrong Python's site-packages without erroring. The patch adds --python <venv>/bin/python to every uv pip install/uninstall/list and uv run invocation that targets the SkyPilot runtime venv, centralised via the new SKY_UV_PIP_INSTALL_CMD / UNINSTALL / LIST constants. A strict improvement on working images (same end state, but the system Python is no longer mutated); makes previously broken images work.

πŸ› οΈ Development Workflow

Never commit directly to ellm-{version}. Create a feature branch off it, then open a PR back into it.

```bash
# 1. Branch off the active version branch
git checkout ellm-0.12.0
git checkout -b feat/my-feature

# 2. Make changes, commit
git add <files>
git commit -m "[Area] Description"
git push origin feat/my-feature

# 3. Open a PR → target ellm-0.12.0 (not master)
gh pr create --base ellm-0.12.0 --title "..." --body "..."
```

Build and push a dev image to test before merging:

```bash
docker buildx build --push --platform linux/amd64 \
  -t ghcr.io/embeddedllm/skypilot:v0.12.0-dev \
  -f Dockerfile .
```

Once the PR is merged and validated, promote to stable:

```bash
docker tag ghcr.io/embeddedllm/skypilot:v0.12.0-dev ghcr.io/embeddedllm/skypilot:v0.12.0
docker push ghcr.io/embeddedllm/skypilot:v0.12.0
```

Deploying with Helm

The Helm chart is pinned to the same upstream version. Always specify --version explicitly:

```bash
helm upgrade --install $RELEASE_NAME skypilot/skypilot \
  --version 0.12.0 \
  --namespace $NAMESPACE \
  --create-namespace \
  --set apiService.image=ghcr.io/embeddedllm/skypilot:v0.12.0 \
  --set ingress.authCredentials=$AUTH_STRING
```

When moving to a new upstream version (e.g. v0.13.0), update both --version and --set apiService.image together. The Helm chart version must always match the image version.
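
To check which chart versions are available before pinning --version (this assumes the skypilot Helm repo is already added, as the command above implies):

```bash
helm repo update
helm search repo skypilot/skypilot --versions | head
```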

⬆️ Updating to a New Upstream Version

Each upstream version gets its own branch (ellm-0.12.0, ellm-0.13.0, ...). Old branches are kept as rollback points.

```bash
# 1. Sync master with upstream
git checkout master
git fetch upstream
git merge upstream/master
git push origin master

# 2. Create the new branch from updated master
git checkout -b ellm-{new_version}

# 3. Cherry-pick the custom patches (commit hashes from the table above)
git cherry-pick 493fb1f  # Enable dual GPU in a single API server
git cherry-pick f3b4561  # Fix podip endpoint in HA mode
git cherry-pick 9cd2668  # Automatic AMD GPU detection via device plugin labels
git cherry-pick b35db4a  # Drop suffix-format + iGPU filtering (simplifies 9cd2668)
git cherry-pick f65b71f  # Fix wrong GPU resource key for NVIDIA pods in mixed clusters
git cherry-pick 57c8ba2  # Fix GPU count display for NVIDIA replicas in mixed clusters
git cherry-pick 37a0b54  # Fix node-affinity values rendered as int for AMD GPU labels
git cherry-pick e9581a9  # Fix pod scheduling for mixed NVIDIA + AMD clusters
# Replace rsync with tar-stream for in-pod transfer (3 commits, in order):
git cherry-pick d5731f4 e3237a7 4f1c887
# Fix kubectl exec hang at end of setup script (2 commits, in order):
git cherry-pick 32e4e61 502df1c
git cherry-pick c6f8f23  # Raise per-controller service capacity for k8s
git cherry-pick 78fe751  # Pin uv pip to runtime venv via --python
# Resolve any conflicts if upstream changed the same files

# 4. Push the new branch
git push origin ellm-{new_version}
```
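
Before pushing, it can help to confirm the new branch carries the stacked patches (plain git; the list should show the custom patches from the table above, plus any merge commits from step 1):

```bash
git log --oneline upstream/master..HEAD
```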

After creating the new branch, update this README: bump Upstream version, Branch, Image, and the commit hashes in the patch table. Build and push the new image as ghcr.io/embeddedllm/skypilot:v{new_version}.


SkyPilot


Run AI on Any Infrastructure

SkyPilot is a system to run, manage, and scale AI workloads on any AI infrastructure.

SkyPilot gives AI teams a simple interface to run jobs on any infra. Infra teams get a unified control plane to manage any AI compute β€” with advanced scheduling, scaling, and orchestration.


🔥 News 🔥

  • [Dec 2025] SkyPilot v0.11 released: Multi-Cloud Pools, Fast Managed Jobs, Enterprise-Readiness at Large Scale, Programmability. Release notes
  • [Dec 2025] SkyPilot Pools released: Run batch inference and other jobs on a managed pool of warm workers (across clouds or clusters). blog, docs
  • [Dec 2025] Train an agent to use Google Search as a tool with RL on your Kubernetes or clouds: blog, example
  • [Nov 2025] Serve Kimi K2 Thinking with reasoning capabilities on your Kubernetes or clouds: example
  • [Oct 2025] Run RL training for LLMs with SkyRL on your Kubernetes or clouds: example
  • [Oct 2025] Train and serve Andrej Karpathy's nanochat - the best ChatGPT that $100 can buy: example
  • [Oct 2025] Run large-scale LLM training with TorchTitan on any AI infra: example
  • [Sep 2025] Scaling AI infrastructure at Abridge - 10x faster development with SkyPilot: blog
  • [Sep 2025] Network and Storage Benchmarks for LLM training on the cloud: blog
  • [Aug 2025] Serve and finetune OpenAI GPT-OSS models (gpt-oss-120b, gpt-oss-20b) with one command on any infra: serve + LoRA and full finetuning
  • [Jul 2025] Run distributed RL training for LLMs with Verl (PPO, GRPO) on any cloud: example

Overview

SkyPilot is easy to use for AI teams:

  • Quickly spin up compute on your own infra
  • Environment and job as code β€” simple and portable
  • Easy job management: queue, run, and auto-recover many jobs

SkyPilot makes Kubernetes easy for AI & Infra teams:

  • Slurm-like ease of use, cloud-native robustness
  • Local dev experience on K8s: SSH into pods, sync code, or connect IDE
  • Turbocharge your clusters: gang scheduling, multi-cluster, and scaling

SkyPilot unifies multiple clusters, clouds, and hardware:

  • One interface to use reserved GPUs, Kubernetes clusters, Slurm clusters, or 20+ clouds
  • Flexible provisioning of GPUs, TPUs, CPUs, with auto-retry
  • Team deployment and resource sharing

SkyPilot cuts your cloud costs & maximizes GPU availability:

  • Autostop: automatic cleanup of idle resources
  • Spot instance support: 3-6x cost savings, with preemption auto-recovery
  • Intelligent scheduling: automatically run on the cheapest & most available infra

SkyPilot supports your existing GPU, TPU, and CPU workloads, with no code changes.

Install with pip:

```bash
# Choose your clouds:
pip install -U "skypilot[kubernetes,aws,gcp,azure,oci,nebius,lambda,runpod,fluidstack,paperspace,cudo,ibm,scp,seeweb,shadeform,verda]"
```

To get the latest features and fixes, use the nightly build or install from source:

```bash
# Choose your clouds:
pip install "skypilot-nightly[kubernetes,aws,gcp,azure,oci,nebius,lambda,runpod,fluidstack,paperspace,cudo,ibm,scp,seeweb,shadeform,verda]"
```

To use SkyPilot directly with your agent (Claude Code, Codex, etc.), install the SkyPilot Skill. Tell your agent:

Fetch and follow https://github.com/skypilot-org/skypilot/blob/HEAD/agent/INSTALL.md to install the skypilot skill


Current supported infra: Kubernetes, Slurm, AWS, GCP, Azure, OCI, CoreWeave, Nebius, Lambda Cloud, RunPod, Fluidstack, Cudo, Digital Ocean, Paperspace, Cloudflare, Samsung, IBM, Vast.ai, VMware vSphere, Seeweb, Prime Intellect, Shadeform, Verda Cloud, VastData, Crusoe.


Getting started

You can find our documentation here.

SkyPilot in 1 minute

A SkyPilot task specifies: resource requirements, data to be synced, setup commands, and the task commands.

Once written in this unified interface (YAML or Python API), the task can be launched on any available infra (Kubernetes, Slurm, cloud, etc.). This avoids vendor lock-in and makes it easy to move jobs to a different provider.

Paste the following into a file my_task.yaml:

```yaml
resources:
  accelerators: A100:8  # 8x NVIDIA A100 GPU

num_nodes: 1  # Number of VMs to launch

# Working directory (optional) containing the project codebase.
# Its contents are synced to ~/sky_workdir/ on the cluster.
workdir: ~/torch_examples

# Commands to be run before executing the job.
# Typical use: pip install -r requirements.txt, git clone, etc.
setup: |
  cd mnist
  pip install -r requirements.txt

# Commands to run as a job.
# Typical use: launch the main program.
run: |
  cd mnist
  python main.py --epochs 1
```

Prepare the workdir by cloning:

```bash
git clone https://github.com/pytorch/examples.git ~/torch_examples
```

Launch with sky launch (note: access to GPU instances is needed for this example):

```bash
sky launch my_task.yaml
```

SkyPilot then performs the heavy lifting for you, including:

  1. Find the cheapest & available infra across your clusters or clouds
  2. Provision the GPUs (pods or VMs), with auto-failover if the infra returns capacity errors
  3. Sync your local workdir to the provisioned cluster
  4. Auto-install dependencies by running the task's setup commands
  5. Run the task's run commands, and stream logs

See Quickstart to get started with SkyPilot.

Runnable examples

See SkyPilot examples that cover: development, training, serving, LLM models, AI apps, and common frameworks.

Latest featured examples:

| Task | Examples |
| --- | --- |
| Training | Verl, Finetune Llama 4, TorchTitan, PyTorch, DeepSpeed, NeMo, Ray, Unsloth, Jax/TPU, OpenRLHF |
| Serving | vLLM, SGLang, Ollama |
| Models | DeepSeek-R1, Llama 4, Llama 3, CodeLlama, Qwen, Kimi-K2, Kimi-K2-Thinking, Mixtral |
| AI apps | RAG, vector databases (ChromaDB, CLIP) |
| Common frameworks | Airflow, Jupyter, marimo |

Source files can be found in llm/ and examples/.

More information

To learn more, see SkyPilot Overview, SkyPilot docs, and SkyPilot blog.

SkyPilot adopters: Testimonials and Case Studies

Partners and integrations: Community Spotlights


SkyPilot was initially started at the Sky Computing Lab at UC Berkeley and has since gained many industry contributors. To read about the project's origin and vision, see Concept: Sky Computing.

Questions and feedback

We are excited to hear your feedback. For general discussions, join us on the SkyPilot Slack.

Contributing

We welcome all contributions to the project! See CONTRIBUTING for how to get involved.
