---
id: "comfyui"
title: "ComfyUI"
slug: "/guides/solutions/comfyui"
sidebar_position: 2
---

import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';

This guide provides step-by-step instructions for preparing a **ComfyUI** workflow with custom nodes to run on Super Protocol. For security reasons, you cannot upload custom nodes directly to a deployed ComfyUI.

You can prepare your model, workflow, and custom node files manually or using Docker.

1. Clone the [Super-Protocol/solutions](https://github.com/Super-Protocol/solutions/) GitHub repository to the location of your choosing:

```shell
git clone https://github.com/Super-Protocol/solutions.git --depth 1
```


Access the running container with the following command:

```shell
docker exec -it comfyui bash
```

Go to the `models` directory inside the container and download the model files to the corresponding subdirectories using the `wget` command. For example:

```shell
wget https://huggingface.co/prompthero/openjourney/resolve/main/mdjrny-v4.safetensors
```

**Copy from your computer**

If you have the model on your computer, copy its files to the container using the following command:

```shell
docker cp <LOCAL_FILE> comfyui:<CONTAINER_FILE>
```


For example:

```shell
docker cp ~/Downloads/openjourney/mdjrny-v4.safetensors comfyui:/opt/ComfyUI/models/checkpoints/mdjrny-v4.safetensors
```


8. Unpack the archive using the following command:

```shell
tar -xvzf snapshot.tar.gz -C <MODEL_DIRECTORY>
```

</TabItem>
</Tabs>

## Support

If you have any issues or questions, contact Super Protocol on [Discord](https://discord.gg/superprotocol) or via the [contact form](https://superprotocol.zendesk.com/hc/en-us/requests/new).
docs/cli/Guides/Solutions/tgwui.md
---
id: "tgwui"
title: "Text Generation WebUI"
slug: "/guides/solutions/tgwui"
sidebar_position: 1
---
docs/cli/Guides/Solutions/unsloth.md
---
id: "unsloth"
title: "Unsloth"
slug: "/guides/solutions/unsloth"
sidebar_position: 3
---

This guide provides step-by-step instructions for fine-tuning an AI model using the Super Protocol packaging of [Unsloth](https://unsloth.ai/), an open-source framework for LLM fine-tuning and reinforcement learning.

The <a id="solution"><span className="dashed-underline">solution</span></a> allows you to run fine-tuning within Super Protocol's Trusted Execution Environment (TEE). This provides enhanced security and privacy and enables a range of [confidential collaboration](https://docs.develop.superprotocol.com/cli/guides/fine-tune) scenarios.

## Prerequisites

- [SPCTL](https://docs.develop.superprotocol.com/cli/)
- Git
- BNB and SPPI tokens (opBNB) to pay for transactions and orders

## Repository

Clone the repository with Super Protocol solutions:

```shell
git clone https://github.com/Super-Protocol/solutions.git
```

The Unsloth solution includes a Dockerfile and a helper script `run-unsloth.sh` that facilitates workflow creation. Note that `run-unsloth.sh` does not build an image and instead uses a pre-existing solution offer.

## run-unsloth.sh

Copy SPCTL’s binary and its `config.json` to the `unsloth/scripts` directory inside the cloned Super-Protocol/solutions repository.

### 1. Prepare training scripts

When preparing your training scripts, keep in mind the special file structure within the TEE:

| **Location** | **Purpose** | **Access** |
| :- | :- | :- |
| `/sp/inputs/input-0001`<br/>`/sp/inputs/input-0002`<br/>etc. | Possible data locations<br/> (AI model, dataset, training scripts, etc.) | Read-only |
| `/sp/output` | Output directory for results | Read and write |
| `/sp/certs` | Contains the order certificate, private key, and `workloadInfo` | Read-only |

Your scripts must find the data in `/sp/inputs` and write the results to `/sp/output`.

### 2. Place an order

2.1. Initiate a dialog to construct and place an order:

```shell
./run-unsloth.sh
```

2.2. `Enter TEE offer id (number)`: Enter a compute offer ID. This determines the available compute resources and cost of your order. You can find the full list of available compute offers on the [Marketplace](https://marketplace.superprotocol.com/).

2.3. `Choose run mode`: `1) file`.

2.4. `Select the model option`:

- `1) Medgemma 27b (offer 15900)`: Select this option if you need an untuned MedGemma 27B.
- `2) your model`: Select this option to use another model. Then, when prompted for `Model input`, enter one of the following:
  - a path to the model's resource JSON file, if it was already uploaded with SPCTL
  - the model's offer ID, if the model exists on the Marketplace
  - a path to a local directory with the model to upload it using SPCTL
- `3) no model`: No model will be used.

2.5. `Enter path to a .py/.ipynb file OR a directory`: Enter the path to your training script (file or directory). For a directory, select the file to run (entrypoint) when prompted. Note that you cannot reuse resource files in this step; scripts must be uploaded anew each time.
2.6. `Provide your dataset as a resource JSON path, numeric offer id, or folder path`: As with the model, enter one of the following:

- a path to the dataset's resource JSON file, if it was already uploaded with SPCTL
- the dataset's offer ID, if the dataset exists on the Marketplace
- a path to a local directory with the dataset to upload it using SPCTL

2.7. `Upload SPCTL config file as a resource?`: Answer `N` unless you need to use SPCTL from within the TEE during the order execution. In this case, your script should run a `curl` command to download SPCTL and find the uploaded `config.json` in the `/sp/inputs/` subdirectories.

2.8. Wait for the order to be created and find the order ID in the output, for example:

```shell
Unsloth order id: 259126
Done.
```

### 3. Check the order result

3.1. The order will take some time to complete. Check the order status:

```shell
./spctl orders get <ORDER_ID>
```

Replace `<ORDER_ID>` with your order ID.

If you lost the order ID, check all your orders to find it:

```shell
./spctl orders list --my-account --type tee
```

3.2. When the order status is `Done` or `Error`, download the result:

```shell
./spctl orders download-result <ORDER_ID>
```

The downloaded TAR.GZ archive contains the results in the `output` directory and execution logs.

## Dry run

```shell
./run-unsloth.sh --suggest-only
```

The `--suggest-only` option performs a dry run without uploading files or creating orders.

Complete the dialog as usual, but use only absolute paths.

In the output, you will see a prepared command for running the script non-interactively, allowing you to easily modify the variables and avoid re-entering the dialog. For example:

```shell
RUN_MODE=file \
RUN_DIR=/home/user/Downloads/yma-run \
RUN_FILE=sft_example.py \
DATA_RESOURCE=/home/user/unsloth/scripts/yma_data_example-data.json \
MODEL_RESOURCE=/home/user/unsloth/scripts/medgemma-27b-ft-merged.resource.json \
/home/user/unsloth/scripts/run-unsloth.sh \
--tee 8 \
--config ./config.json
```

## Jupyter Notebook

You can launch and use Jupyter Notebook instead of uploading training scripts directly.

Initiate a dialog:

```shell
./run-unsloth.sh
```

When prompted:

1. `Enter TEE offer id`: Enter a compute offer ID. This determines the available compute resources and cost of your order. You can find the full list of available compute offers on the [Marketplace](https://marketplace.superprotocol.com/).

2. `Choose run mode`: `2) jupyter-server`.

3. `Select the model option`:

   - `1) Medgemma 27b (offer 15900)`: Select this option if you need an untuned MedGemma 27B.
   - `2) your model`: Select this option to use another model. Then, when prompted for `Model input`, enter one of the following:
     - a path to the model's resource JSON file, if it was already uploaded with SPCTL
     - the model's offer ID, if the model exists on the Marketplace
     - a path to a local directory with the model to upload it using SPCTL
   - `3) no model`: No model will be used.

4. `Enter Jupyter password` or press Enter to proceed without a password.

5. `Select domain option`:

   - `1) Temporary Domain (*.superprotocol.io)` is suitable for testing and quick deployments.
   - `2) Own domain` will require you to provide a domain name, TLS certificate, private key, and a tunnel server auth token.

Wait for the Tunnels Launcher order to be created.

6. `Provide your dataset as a resource JSON path, numeric offer id, or folder path`: As with the model, enter one of the following:
   - a path to the dataset's resource JSON file, if it was already uploaded with SPCTL
   - the dataset's offer ID, if the dataset exists on the Marketplace
   - a path to a local directory with the dataset to upload it using SPCTL

7. `Upload SPCTL config file as a resource?`: Answer `N` unless you need to use SPCTL from within the TEE during the order execution. In this case, your script should run a `curl` command to download SPCTL and find the uploaded `config.json` in the `/sp/inputs/` subdirectories.

8. Wait for the Jupyter order to be ready and find a link in the output; for example:

```shell
===================================================
Jupyter instance is available at: https://beja-bine-envy.superprotocol.io
===================================================
```

9. Open the link in your browser to access Jupyter’s UI.

**Note**:

The data in `/sp/output` will not be published as the order result when running the Jupyter server. To save your fine-tuning results, upload them in one of the following ways:
- via Python code
- using the integrated terminal in the Jupyter server
- using SPCTL with the config uploaded at Step 7

## Support

If you have any issues or questions, contact Super Protocol on [Discord](https://discord.gg/superprotocol) or via the [contact form](https://superprotocol.zendesk.com/hc/en-us/requests/new).
docs/cli/Guides/Solutions/vllm.md
---
id: "vllm"
title: "vLLM"
slug: "/guides/solutions/vllm"
sidebar_position: 4
---

This guide provides step-by-step instructions for running an AI model inference using the Super Protocol packaging of [vLLM](https://www.vllm.ai/), an inference and serving engine for LLMs.

The <a id="solution"><span className="dashed-underline">solution</span></a> allows you to run LLM inference within Super Protocol's Trusted Execution Environment (TEE).

## Prerequisites

- [SPCTL](https://docs.develop.superprotocol.com/cli/)
- Git
- BNB and SPPI tokens (opBNB) to pay for transactions and orders

## Repository

Clone the repository with Super Protocol solutions:

```shell
git clone https://github.com/Super-Protocol/solutions.git
```

The vLLM solution includes a Dockerfile and a helper script `run-vllm.sh` that facilitates workflow creation. Note that `run-vllm.sh` does not build an image and instead uses a pre-existing solution offer.

## run-vllm.sh

Copy SPCTL’s binary and its `config.json` to the `vllm/scripts` directory inside the cloned Super-Protocol/solutions repository.

### Place an order

1. Initiate a dialog to construct and place an order:

```shell
./run-vllm.sh
```

2. `Select domain option`:

   - `1) Temporary Domain (*.superprotocol.io)` is suitable for testing and quick deployments.
   - `2) Own domain` will require you to provide a domain name, TLS certificate, private key, and a tunnel server auth token.

3. `Enter TEE offer id`: Enter a compute offer ID. This determines the available compute resources and cost of your order. You can find the full list of available compute offers on the [Marketplace](https://marketplace.superprotocol.com/).

4. `Provide model as resource JSON path, numeric offer id, or folder path`: Enter one of the following:

   - a path to the model's resource JSON file, if it was already uploaded with SPCTL
   - the model's offer ID, if the model exists on the Marketplace
   - a path to a local directory with the model to upload it using SPCTL

5. `Enter API key` or press `Enter` to generate one automatically.

Wait for the deployment to be ready and find the information about it in the output, for example:

```shell
===================================================
VLLM server is available at: https://whau-trug-nail.superprotocol.io
API key: d75c577d-e538-4d09-8f59-a0f00ae961a3
Order IDs: Launcher=269042, VLLM=269044
===================================================
```

### API

Once deployed on Super Protocol, your model runs inside a TEE and exposes an OpenAI-compatible API. You can interact with it as you would with a local vLLM instance.

Depending on the type of request you want to make, use the following API endpoints:

- Chat Completions (`/v1/chat/completions`)
- Text Completions (`/v1/completions`)
- Embeddings (`/v1/embeddings`)
- Audio Transcriptions & Translations (`/v1/audio/transcriptions`, `/v1/audio/translations`)

See the [full list of API endpoints](https://docs.vllm.ai/en/latest/serving/openai_compatible_server/).
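For example, a Chat Completions request can be built with nothing but the Python standard library. The URL and key below are the sample values from the deployment output above, and the model name is a placeholder for whichever model you deployed:

```python
import json
import urllib.request

BASE_URL = "https://whau-trug-nail.superprotocol.io"  # from the deployment output
API_KEY = "d75c577d-e538-4d09-8f59-a0f00ae961a3"      # from the deployment output

def chat_request(prompt: str) -> urllib.request.Request:
    """Build an OpenAI-style Chat Completions request for the deployed model."""
    body = {
        "model": "deployed-model",  # placeholder: use your model's name
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        f"{BASE_URL}/v1/chat/completions",
        data=json.dumps(body).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
    )

# Against a live deployment:
# response = json.load(urllib.request.urlopen(chat_request("Hello!")))
```

Any OpenAI-compatible client works the same way, for example the official `openai` Python package pointed at the deployment URL with the API key.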

## Dry run

```shell
./run-vllm.sh --suggest-only
```

The `--suggest-only` option performs a dry run without uploading files or creating orders.

Complete the dialog as usual, but use only absolute paths.

In the output, you will see a prepared command for running the script non-interactively, allowing you to easily modify the variables and avoid re-entering the dialog. For example:

```shell
RUN_MODE=temporary \
MODEL_RESOURCE=55 \
VLLM_API_KEY=9c6dbf44-cef7-43a4-b362-43295b244446 \
/home/user/vllm/scripts/run-vllm.sh \
--config ./config.json \
--tee 8
```

## Support

If you have any issues or questions, contact Super Protocol on [Discord](https://discord.gg/superprotocol) or via the [contact form](https://superprotocol.zendesk.com/hc/en-us/requests/new).