Commit 3a4d2c4

docs: Add dependencies example to load-balanced endpoint pattern (#575)
Co-authored-by: promptless[bot] <179508745+promptless[bot]@users.noreply.github.com>
Co-authored-by: Mo King <muhsinking@gmail.com>
1 parent 6232da0 commit 3a4d2c4

1 file changed

Lines changed: 3 additions & 2 deletions

File tree

flash/create-endpoints.mdx
@@ -65,12 +65,13 @@ from runpod_flash import Endpoint, GpuType
 api = Endpoint(
     name="inference-api",
     gpu=GpuType.NVIDIA_GEFORCE_RTX_4090,
-    workers=(1, 5)
+    workers=(1, 5),
+    dependencies=["torch"]
 )
 
 @api.post("/predict")
 async def predict(data: dict) -> dict:
-    import torch
+    import torch  # Import inside the function body
     # Run inference
     return {"prediction": "result"}
 
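The added comment notes that `torch` is imported inside the handler rather than at module top level, so the dependency loads only when the endpoint actually runs. A minimal stand-alone sketch of that lazy-import pattern (using the stdlib `json` module in place of `torch`, which may not be installed locally, and without the `runpod_flash` decorator):

```python
def predict(data: dict) -> dict:
    # Lazy import: the module loads the first time the handler runs,
    # not when this file is imported. In the docs example the deferred
    # import is torch; json stands in here so the sketch is runnable.
    import json
    return {"prediction": json.dumps(data)}


print(predict({"x": 1}))  # {'prediction': '{"x": 1}'}
```

Deferring heavy imports this way keeps module import fast and avoids requiring the dependency at deploy/build time on machines that never execute the handler.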
