# A Haven VLM Connector

A StashApp plugin for Vision-Language Model (VLM) based content tagging and analysis. The plugin is designed with a **local-first philosophy**, empowering users to run analysis on their own hardware (CPU or GPU) and their local network. It also supports cloud-based VLM endpoints for additional flexibility. The Haven VLM Engine provides advanced automatic content detection and tagging, delivering higher accuracy than traditional image-classification methods.

## Features

- **Local Network Empowerment**: Distribute processing across home/office computers without cloud dependencies
- **Context-Aware Detection**: Leverages Vision-Language Models' understanding of visual relationships
- **Advanced Dependency Management**: Uses PythonDepManager for automatic dependency installation
- **Enjoying Funscript Haven?** Check out more tools and projects at https://github.com/Haven-hvn
| 11 | + |
| 12 | +## Requirements |
| 13 | + |
| 14 | +- Python 3.8+ |
| 15 | +- StashApp |
| 16 | +- PythonDepManager plugin (automatically handles dependencies) |
| 17 | +- OpenAI-compatible VLM endpoints (local or cloud-based) |
| 18 | + |
| 19 | +## Installation |
| 20 | + |
| 21 | +1. Clone or download this plugin to your StashApp plugins directory |
| 22 | +2. Ensure PythonDepManager is installed in your StashApp plugins |
| 23 | +3. Configure your VLM endpoints in `haven_vlm_config.py` (local network endpoints recommended) |
| 24 | +4. Restart StashApp |
| 25 | + |
| 26 | +The plugin automatically manages all dependencies. |
| 27 | + |
## Why Local-First?

- **Complete Control**: Process sensitive content on your own hardware
- **Cost Effective**: Avoid cloud processing fees by using existing resources
- **Flexible Scaling**: Add more computers to your local network for increased capacity
- **Privacy Focused**: Keep your media completely private
- **Hybrid Options**: Combine local and cloud endpoints for optimal flexibility

```mermaid
graph LR
A[User's Computer] --> B[Local GPU Machine]
A --> C[Local CPU Machine 1]
A --> D[Local CPU Machine 2]
A --> E[Cloud Endpoint]
```
## Configuration

### Easy Setup with LM Studio

[LM Studio](https://lmstudio.ai/) provides the easiest way to configure local endpoints:

1. Download and install [LM Studio](https://lmstudio.ai/)
2. [Search for or download](https://huggingface.co/models) a vision-capable model; tested with (in order of highest to lowest accuracy): zai-org/glm-4.6v-flash, huihui-mistral-small-3.2-24b-instruct-2506-abliterated-v2, qwen/qwen3-vl-8b, lfm2.5-vl
3. Load your desired model
4. On the Developer tab, start the local server using the start toggle
5. Optionally, click the Settings gear and toggle *Serve on local network*
6. Optionally, configure `haven_vlm_config.py`:
By default, localhost is included in the config. **Remove the cloud endpoint if you don't want automatic failover.**
```python
{
    "base_url": "http://localhost:1234/v1",  # LM Studio default
    "api_key": "",  # API key not required
    "name": "lm-studio-local",
    "weight": 5,
    "is_fallback": False
}
```
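For multi-machine setups, the same fields can describe several endpoints. The sketch below is hypothetical: the endpoint names, IP addresses, and the `vlm_endpoints` list name are illustrative, and the exact schema expected by `haven_vlm_config.py` may differ — only the fields shown in the single-endpoint example above are taken from this document.

```python
# Hypothetical multi-endpoint configuration: two local machines plus a
# cloud fallback. IPs, names, and the list variable are illustrative.
vlm_endpoints = [
    {
        "base_url": "http://192.168.1.10:1234/v1",  # local GPU machine
        "api_key": "",
        "name": "gpu-box",
        "weight": 10,          # higher weight -> receives more requests
        "is_fallback": False,
    },
    {
        "base_url": "http://192.168.1.11:1234/v1",  # local CPU machine
        "api_key": "",
        "name": "cpu-box",
        "weight": 3,
        "is_fallback": False,
    },
    {
        "base_url": "https://api.example.com/v1",   # placeholder cloud endpoint
        "api_key": "YOUR_API_KEY",
        "name": "cloud-fallback",
        "weight": 1,
        "is_fallback": True,   # only consulted when local endpoints fail
    },
]
```

Deleting the last entry disables cloud failover entirely, keeping all traffic on the local network.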
### Tag Configuration

```python
"tag_list": [
    "Basketball point", "Foul", "Break-away", "Turnover"
]
```
### Processing Settings

```python
VIDEO_FRAME_INTERVAL = 2.0  # Sample one frame every 2 seconds
CONCURRENT_TASK_LIMIT = 8   # Adjust based on local hardware
```
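The frame interval directly determines how many VLM requests a video generates, which is useful when sizing `CONCURRENT_TASK_LIMIT`. A quick back-of-the-envelope helper (illustrative only, not part of the plugin API):

```python
# Estimate how many frames (and thus VLM requests) a video produces when
# sampling one frame every VIDEO_FRAME_INTERVAL seconds. Illustrative
# helper; the plugin's own sampling logic may count frames differently.
import math

VIDEO_FRAME_INTERVAL = 2.0  # seconds between sampled frames

def estimated_requests(duration_seconds, interval=VIDEO_FRAME_INTERVAL):
    """Number of frames sampled from a video of the given duration."""
    return math.floor(duration_seconds / interval) + 1  # include frame at t=0

# A 30-minute video sampled every 2 seconds yields 901 frames to analyze.
print(estimated_requests(30 * 60))  # 901
```

Longer intervals cut request volume proportionally, trading temporal resolution for throughput.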
## Usage

### Tag Videos
1. Tag scenes with `VLM_TagMe`
2. Run the "Tag Videos" task
3. The plugin processes content using local/network resources

### Performance Tips
- Start with 2-3 local machines for load balancing
- Assign higher weights to GPU-enabled machines
- Adjust `CONCURRENT_TASK_LIMIT` based on total system resources
- Use SSD storage for better I/O performance
## File Structure

```
AHavenVLMConnector/
├── ahavenvlmconnector.yml
├── haven_vlm_connector.py
├── haven_vlm_config.py
├── haven_vlm_engine.py
├── haven_media_handler.py
├── haven_vlm_utility.py
├── requirements.txt
└── README.md
```
## Troubleshooting

### Local Network Setup
- Ensure firewalls allow communication between machines
- Verify all local endpoints are running VLM services
- Use static IPs for local machines
- Check that `http://local-machine-ip:port/v1` responds correctly
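The reachability check above can be scripted with only the standard library. This sketch assumes an OpenAI-compatible server that exposes `GET /v1/models` (as LM Studio's local server does); the function names are illustrative:

```python
# Quick reachability check for a local VLM endpoint. Assumes an
# OpenAI-compatible server with a GET /v1/models route; helper names
# are illustrative and not part of the plugin.
import json
import urllib.request

def models_url(base_url):
    """Build the model-listing URL from a configured base_url."""
    return base_url.rstrip("/") + "/models"

def check_endpoint(base_url, timeout=5.0):
    """Return True if the endpoint answers the model-listing request."""
    try:
        with urllib.request.urlopen(models_url(base_url), timeout=timeout) as resp:
            json.load(resp)  # response should be a JSON model list
            return resp.status == 200
    except (OSError, ValueError):
        return False

# Example: check_endpoint("http://192.168.1.10:1234/v1")
```

Running this from the StashApp host against each configured machine quickly isolates firewall or service problems.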
### Performance Optimization
- **Distribute Load**: Use multiple mid-range machines instead of one high-end machine
- **GPU Prioritization**: Assign the highest weight to GPU machines
- **Network Speed**: Use wired Ethernet connections for faster transfers
- **Resource Monitoring**: Watch system resources during processing
## Development

### Adding Local Endpoints
1. Install a VLM service on network machines
2. Add endpoint configurations with local IPs
3. Set appropriate weights based on hardware capability
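One common way weight-based selection works is random choice in proportion to each endpoint's `weight`, skipping fallback entries. This is a sketch of that idea, not the plugin's actual load-balancing code:

```python
# Illustrative weight-proportional endpoint selection: higher-weight
# (e.g. GPU) machines are picked more often; fallback entries are skipped.
import random

def pick_endpoint(endpoints, rng=None):
    """Choose a non-fallback endpoint, weighted by its 'weight' field."""
    rng = rng or random
    primaries = [e for e in endpoints if not e.get("is_fallback")]
    weights = [e["weight"] for e in primaries]
    return rng.choices(primaries, weights=weights, k=1)[0]

endpoints = [
    {"name": "gpu-box", "weight": 10, "is_fallback": False},
    {"name": "cpu-box", "weight": 2, "is_fallback": False},
    {"name": "cloud-fallback", "weight": 1, "is_fallback": True},
]
# With these weights, "gpu-box" is chosen roughly 5x as often as "cpu-box".
```

Tuning weights to match relative hardware throughput keeps all machines busy without overloading the slowest one.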
### Custom Models
Use any OpenAI-compatible model that supports:
- POST requests to `/v1/chat/completions`
- Vision capabilities with image input
- Local deployment options
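For reference, the OpenAI-compatible vision format passes the image as a base64 data URL inside the message content. The helper below is a sketch of such a request body; the prompt wording, tag handling, and function name are illustrative, not the plugin's actual implementation:

```python
# Sketch of the request body a vision-capable, OpenAI-compatible endpoint
# accepts at POST /v1/chat/completions. Prompt text and helper name are
# illustrative; only the payload shape follows the OpenAI vision format.
import base64
import json

def build_vision_request(image_bytes, tags, model="local-model"):
    """Build a chat-completions payload asking the VLM to pick matching tags."""
    data_url = "data:image/jpeg;base64," + base64.b64encode(image_bytes).decode("ascii")
    return {
        "model": model,
        "messages": [{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "Which of these tags apply to the image? " + ", ".join(tags)},
                {"type": "image_url", "image_url": {"url": data_url}},
            ],
        }],
        "max_tokens": 128,
    }

payload = build_vision_request(b"\xff\xd8\xff", ["Foul", "Turnover"])
# json.dumps(payload) is the body POSTed to <base_url>/chat/completions
```

Any model that accepts this payload shape on a local server should work with the connector.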
### Log Messages

Check the StashApp logs for detailed processing information and error messages.

## License

This project is part of the StashApp Community Scripts collection.