Roughly 90% of the time, custom models (including GGUF files) fail to load, either because `.from_single_file` is not available on every class or because of config mismatch issues.
Please provide a class for this, even if it only lives in the community examples: just a default class that loads all components. Let's say we use `AutoModel` (or some new class).
For safetensors:

pipe = AutoModel.from_single_file(model_path, use_safetensors=True, cache_dir=cache_dir, custom_pipeline="", torch_dtype=TORCH_DTYPE, local_files_only=True)
Or for GGUF (note the compute dtype and the pipeline dtype should match):

pipe = AutoModel.from_single_file(model_path, quantization_config=GGUFQuantizationConfig(compute_dtype=torch.bfloat16), torch_dtype=torch.bfloat16, low_cpu_mem_usage=True)
Then dump the components:

components = pipe.components
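As a rough sketch of what such a loader could do internally, it could dispatch on the file extension and pick the matching `from_single_file` keyword arguments. Everything here is an assumption about the proposed class, not an existing diffusers API:

```python
# Hypothetical sketch of the requested universal single-file loader.
# `pick_loader_kwargs` is an assumed helper, not part of diffusers.
import os

def pick_loader_kwargs(model_path):
    """Choose from_single_file kwargs based on the checkpoint format."""
    ext = os.path.splitext(model_path)[1].lower()
    if ext == ".safetensors":
        # plain (possibly all-in-one) safetensors: load weights directly
        return {"use_safetensors": True, "local_files_only": True}
    if ext == ".gguf":
        # GGUF: the caller would also pass a GGUFQuantizationConfig here
        return {"low_cpu_mem_usage": True}
    raise ValueError(f"unsupported single-file format: {ext!r}")
```

The actual weight loading would still go through the existing single-file machinery; this only shows the dispatch the request is asking for.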
That would at least give us decent loading for all-in-one (AIO) safetensors and GGUF checkpoints, along with the usual optimizations.
Loading an SDXL single AIO safetensors or GGUF file:

pipe = StableDiffusionXLPipeline(**components)
pipe = StableDiffusionXLImg2ImgPipeline(**components)
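The components dict unpacks straight into another pipeline's constructor, so both pipelines share the same weights instead of copying them. A minimal stand-in (with dummy classes, since real pipelines need actual weights to instantiate) shows the pattern:

```python
# Minimal stand-in for sharing components between two pipelines.
# DummyText2Img / DummyImg2Img are placeholders for the real
# StableDiffusionXLPipeline / StableDiffusionXLImg2ImgPipeline.

class DummyText2Img:
    def __init__(self, unet, vae, text_encoder):
        # store references, not copies
        self.components = {"unet": unet, "vae": vae, "text_encoder": text_encoder}

class DummyImg2Img:
    def __init__(self, unet, vae, text_encoder):
        self.components = {"unet": unet, "vae": vae, "text_encoder": text_encoder}

t2i = DummyText2Img(unet=object(), vae=object(), text_encoder=object())
# the pattern from the request: unpack one pipeline's components into another
i2i = DummyImg2Img(**t2i.components)
assert i2i.components["unet"] is t2i.components["unet"]  # shared, not duplicated
```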
Loading a Wan single AIO safetensors or GGUF file:

pipe = WanPipeline(**components)
The same pattern could then be used for other models as well: Flux, ACE-Step, and all the rest.
If this feature already exists, please give a hint; it could solve 90% of model-loading problems.
Thanks in advance.