
Commit 2a07190

HavenCTO and DogmaDragon authored
initial commit of AHavenVLMConnector (#657)
Co-authored-by: DogmaDragon <103123951+DogmaDragon@users.noreply.github.com>
1 parent 69e44b2 commit 2a07190

17 files changed: 4,596 additions & 0 deletions
CHANGELOG.md: 8 additions & 0 deletions
@@ -0,0 +1,8 @@
# Changelog

All notable changes to the A Haven VLM Connector project will be documented in this file.

## [1.0.0] - 2025-06-29

### Added
- **Initial release**
README.md: 143 additions & 0 deletions
@@ -0,0 +1,143 @@
# A Haven VLM Connector

A StashApp plugin for Vision-Language Model (VLM) based content tagging and analysis. This plugin is designed with a **local-first philosophy**, empowering users to run analysis on their own hardware (CPU or GPU) and their local network. It also supports cloud-based VLM endpoints for additional flexibility. The Haven VLM Engine provides advanced automatic content detection and tagging, delivering superior accuracy compared to traditional image classification methods.

## Features

- **Local Network Empowerment**: Distribute processing across home/office computers without cloud dependencies
- **Context-Aware Detection**: Leverages Vision-Language Models' understanding of visual relationships
- **Advanced Dependency Management**: Uses PythonDepManager for automatic dependency installation
- **Enjoying Funscript Haven?** Check out more tools and projects at https://github.com/Haven-hvn

## Requirements

- Python 3.8+
- StashApp
- PythonDepManager plugin (automatically handles dependencies)
- OpenAI-compatible VLM endpoints (local or cloud-based)

## Installation

1. Clone or download this plugin into your StashApp plugins directory
2. Ensure PythonDepManager is installed in your StashApp plugins
3. Configure your VLM endpoints in `haven_vlm_config.py` (local network endpoints recommended)
4. Restart StashApp

The plugin automatically manages all of its dependencies.

## Why Local-First?

- **Complete Control**: Process sensitive content on your own hardware
- **Cost Effective**: Avoid cloud processing fees by using existing resources
- **Flexible Scaling**: Add more computers to your local network for increased capacity
- **Privacy Focused**: Keep your media completely private
- **Hybrid Options**: Combine local and cloud endpoints for optimal flexibility

```mermaid
graph LR
    A[User's Computer] --> B[Local GPU Machine]
    A --> C[Local CPU Machine 1]
    A --> D[Local CPU Machine 2]
    A --> E[Cloud Endpoint]
```

## Configuration

### Easy Setup with LM Studio

[LM Studio](https://lmstudio.ai/) provides the easiest way to configure local endpoints:

1. Download and install [LM Studio](https://lmstudio.ai/)
2. [Search for or download](https://huggingface.co/models) a vision-capable model; tested with (from highest to lowest accuracy): zai-org/glm-4.6v-flash, huihui-mistral-small-3.2-24b-instruct-2506-abliterated-v2, qwen/qwen3-vl-8b, lfm2.5-vl
3. Load your desired model
4. On the Developer tab, start the local server using the start toggle
5. Optionally, click the Settings gear and toggle *Serve on local network*
6. Optionally, configure `haven_vlm_config.py`:

By default, localhost is included in the config; **remove the cloud endpoint if you don't want automatic failover**:

```python
{
    "base_url": "http://localhost:1234/v1",  # LM Studio default
    "api_key": "",  # API key not required
    "name": "lm-studio-local",
    "weight": 5,
    "is_fallback": False
}
```
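A hedged sketch of what a multi-endpoint configuration with automatic failover might look like; the `VLM_ENDPOINTS` variable name and the cloud URL are illustrative assumptions, so check `haven_vlm_config.py` for the actual structure:

```python
# Hypothetical multi-endpoint list; the entry fields follow the example above,
# but the enclosing variable name is an assumption, not a confirmed plugin API.
VLM_ENDPOINTS = [
    {
        "base_url": "http://localhost:1234/v1",  # local LM Studio instance
        "api_key": "",
        "name": "lm-studio-local",
        "weight": 5,  # preferred for routine work
        "is_fallback": False,
    },
    {
        "base_url": "https://api.example.com/v1",  # placeholder cloud endpoint
        "api_key": "YOUR_API_KEY",
        "name": "cloud-fallback",
        "weight": 1,
        "is_fallback": True,  # only used when local endpoints fail
    },
]
```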
### Tag Configuration

```python
"tag_list": [
    "Basketball point", "Foul", "Break-away", "Turnover"
]
```

### Processing Settings

```python
VIDEO_FRAME_INTERVAL = 2.0  # Process a frame every 2 seconds
CONCURRENT_TASK_LIMIT = 8   # Adjust based on local hardware
```

## Usage

### Tag Videos
1. Tag scenes with `VLM_TagMe`
2. Run the "Tag Videos" task
3. The plugin processes the content using local/network resources

### Performance Tips
- Start with 2-3 local machines for load balancing
- Assign higher weights to GPU-enabled machines
- Adjust `CONCURRENT_TASK_LIMIT` based on total system resources
- Use SSD storage for better I/O performance

## File Structure

```
AHavenVLMConnector/
├── ahavenvlmconnector.yml
├── haven_vlm_connector.py
├── haven_vlm_config.py
├── haven_vlm_engine.py
├── haven_media_handler.py
├── haven_vlm_utility.py
├── requirements.txt
└── README.md
```

## Troubleshooting

### Local Network Setup
- Ensure firewalls allow communication between machines
- Verify all local endpoints are running VLM services
- Use static IPs for local machines
- Check that `http://local-machine-ip:port/v1` responds correctly (see the snippet below)
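A quick way to test this is to query the standard OpenAI-compatible model-listing route from another machine; a minimal sketch, assuming the endpoint serves `GET /v1/models` (LM Studio does) and using a placeholder IP and port:

```python
import json
import urllib.request

# Placeholder address; substitute your machine's static IP and port
ENDPOINT = "http://192.168.1.50:1234/v1"

# OpenAI-compatible servers expose GET /v1/models for model discovery
with urllib.request.urlopen(f"{ENDPOINT}/models", timeout=5) as resp:
    models = json.load(resp)

print("Endpoint is up; available models:")
for model in models.get("data", []):
    print(" -", model.get("id"))
```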
### Performance Optimization
- **Distribute Load**: Use multiple mid-range machines instead of one high-end machine
- **GPU Prioritization**: Assign the highest weight to GPU machines
- **Network Speed**: Use wired Ethernet connections for faster transfers
- **Resource Monitoring**: Watch system resources during processing

## Development

### Adding Local Endpoints
1. Install a VLM service on your network machines
2. Add endpoint configurations with their local IPs
3. Set appropriate weights based on hardware capability (see the sketch below)
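Following the entry format from the Configuration section, a hedged sketch of two local machines weighted by hardware capability; the IPs and names are placeholders:

```python
# Placeholder local endpoints; weighting the GPU box higher sends it
# proportionally more work than the CPU-only machine.
{
    "base_url": "http://192.168.1.50:1234/v1",  # GPU machine
    "api_key": "",
    "name": "gpu-workstation",
    "weight": 8,
    "is_fallback": False
},
{
    "base_url": "http://192.168.1.51:1234/v1",  # CPU-only machine
    "api_key": "",
    "name": "cpu-desktop",
    "weight": 2,
    "is_fallback": False
}
```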
### Custom Models
Use any OpenAI-compatible model that supports the following (a request of this shape is sketched after the list):
- POST requests to `/v1/chat/completions`
- Vision capabilities with image input
- Local deployment options
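A minimal sketch of such a request using the `openai` Python package against a local server; the endpoint, model name, and image file are placeholders, and this illustrates the wire format rather than the plugin's internal client:

```python
import base64
from openai import OpenAI

# Placeholder endpoint and model name; point these at your own server
client = OpenAI(base_url="http://localhost:1234/v1", api_key="not-needed")

# Inline a frame as a base64 data URL, the standard way to send images
with open("frame.jpg", "rb") as f:
    image_url = "data:image/jpeg;base64," + base64.b64encode(f.read()).decode()

# POST /v1/chat/completions with mixed text + image content
response = client.chat.completions.create(
    model="your-vision-model",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Which of these tags apply: Foul, Turnover?"},
            {"type": "image_url", "image_url": {"url": image_url}},
        ],
    }],
)
print(response.choices[0].message.content)
```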
### Log Messages

Check StashApp logs for detailed processing information and error messages.

## License

This project is part of the StashApp Community Scripts collection.
ahavenvlmconnector.yml: 22 additions & 0 deletions
@@ -0,0 +1,22 @@
name: Haven VLM Connector
# requires: PythonDepManager
description: Tag videos with Vision-Language Models using any OpenAI-compatible VLM endpoint
version: 1.0.0
url: https://github.com/stashapp/CommunityScripts/tree/main/plugins/AHavenVLMConnector
exec:
  - python
  - "{pluginDir}/haven_vlm_connector.py"
interface: raw
tasks:
  - name: Tag Videos
    description: Run VLM analysis on videos with the VLM_TagMe tag
    defaultArgs:
      mode: tag_videos
  - name: Collect Incorrect Markers and Images
    description: Collects data from markers and images that were VLM-tagged but manually marked with VLM_Incorrect because the VLM made a mistake, and outputs it as a file that can be used to improve the VLM models.
    defaultArgs:
      mode: collect_incorrect_markers
  - name: Find Marker Settings
    description: Finds optimal marker settings based on a video that has manually tuned markers and was previously processed by the VLM. Only one video should have VLM_TagMe before running.
    defaultArgs:
      mode: find_marker_settings
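With `interface: raw`, StashApp passes the task input to the script as JSON on stdin; a minimal sketch of how `haven_vlm_connector.py` might dispatch on the `mode` default arg (the handler bodies are placeholders, not the plugin's actual logic):

```python
import json
import sys

# Raw-interface plugins receive a JSON payload on stdin;
# the selected task's defaultArgs arrive under "args".
payload = json.load(sys.stdin)
mode = payload.get("args", {}).get("mode", "tag_videos")

if mode == "tag_videos":
    print("would run VLM tagging")  # placeholder handler
elif mode == "collect_incorrect_markers":
    print("would collect incorrect marker data")  # placeholder handler
elif mode == "find_marker_settings":
    print("would search for optimal marker settings")  # placeholder handler
```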
exit_tracker.py: 98 additions & 0 deletions
@@ -0,0 +1,98 @@
"""
Comprehensive sys.exit tracking module
Instruments all sys.exit() calls with full call stack and context
"""

import sys
import traceback

# Store the original sys.exit
original_exit = sys.exit

# Track whether we've already patched
_exit_tracker_patched = False


def install_exit_tracker(logger=None) -> None:
    """
    Install the exit tracker by monkey-patching sys.exit

    Args:
        logger: Optional logger instance (falls back to print if None)
    """
    global _exit_tracker_patched, original_exit

    if _exit_tracker_patched:
        return

    # Store the original if not already stored
    if hasattr(sys, 'exit') and sys.exit is not original_exit:
        original_exit = sys.exit

    def tracked_exit(code: int = 0) -> None:
        """Track sys.exit() calls with the full call stack"""
        # Get the current call stack (not an exception traceback)
        stack = traceback.extract_stack()

        # Format the stack trace, excluding this tracker and internal Python frames
        stack_lines = []
        for frame in stack:
            if (frame.name != 'tracked_exit' and
                    '/usr/lib' not in frame.filename and
                    '/System/Library' not in frame.filename and
                    'exit_tracker.py' not in frame.filename):
                stack_lines.append(
                    f"  File \"{frame.filename}\", line {frame.lineno}, in {frame.name}\n    {frame.line}"
                )

        # Keep the last 15 frames to show the full call chain
        stack_str = '\n'.join(stack_lines[-15:])

        # Include the active exception, if any
        exc_info = sys.exc_info()
        exc_str = ""
        if exc_info[0] is not None:
            exc_str = f"\n  Active Exception: {exc_info[0].__name__}: {exc_info[1]}"

        # Build the error message
        error_msg = f"""[DEBUG_EXIT_CODE] ==========================================
[DEBUG_EXIT_CODE] sys.exit() called with code: {code}
[DEBUG_EXIT_CODE] Call stack (last 15 frames):
{stack_str}
{exc_str}
[DEBUG_EXIT_CODE] =========================================="""

        # Log using the provided logger, or fall back to print
        if logger:
            try:
                logger.error(error_msg)
            except Exception as log_error:
                print(f"[EXIT_TRACKER_LOGGER_ERROR] Failed to log: {log_error}")
                print(error_msg)
        else:
            print(error_msg)

        # Call the original exit
        original_exit(code)

    # Install the tracker
    sys.exit = tracked_exit
    _exit_tracker_patched = True

    if logger:
        logger.debug("[DEBUG_EXIT_CODE] Exit tracker installed successfully")
    else:
        print("[DEBUG_EXIT_CODE] Exit tracker installed successfully")


def uninstall_exit_tracker() -> None:
    """Uninstall the exit tracker and restore the original sys.exit"""
    global _exit_tracker_patched, original_exit

    if _exit_tracker_patched:
        sys.exit = original_exit
        _exit_tracker_patched = False


# Auto-install on import (can be disabled by calling uninstall_exit_tracker())
if not _exit_tracker_patched:
    install_exit_tracker()
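The module self-installs on import. A minimal usage sketch, assuming the file is importable as `exit_tracker` (the logger name below is illustrative):

```python
import logging

import exit_tracker  # importing auto-installs the tracker (prints to stdout)

# Re-install with a logger so exit diagnostics go to a log instead of stdout;
# install_exit_tracker() is a no-op while patched, so uninstall first.
exit_tracker.uninstall_exit_tracker()
exit_tracker.install_exit_tracker(logger=logging.getLogger("haven_vlm_connector"))

# From here on, any sys.exit() call logs its call stack before exiting.
```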
