@konflux-internal-p02 konflux-internal-p02 bot commented Oct 27, 2025

This PR contains the following updates:

Package           Change
----------------- --------------------
huggingface-hub   ==0.29.0 -> ==1.2.3

Release Notes

huggingface/huggingface_hub (huggingface-hub)

v1.2.3: [v1.2.3] Fix private default value in CLI

Compare Source

Patch release for #​3618 by @​Wauplin.

When creating a new repo, we should default to private=None instead of private=False. This is already the case when using the API, but not when using the CLI. This bug was likely introduced when switching to Typer. When defaulting to None, the repo visibility falls back to public (private=False) unless the organization has configured repos to be "private by default" (the check happens server-side, so it shouldn't be hardcoded client-side).

Full Changelog: huggingface/huggingface_hub@v1.2.2...v1.2.3

v1.2.2: [v1.2.2] Fix unbound local error in local folder metadata + fix hf auth list logs

Compare Source

Full Changelog: huggingface/huggingface_hub@v1.2.1...v1.2.2

v1.2.1

Compare Source

v1.2.0: [v1.2.0] Smarter Rate Limit Handling, Daily Papers API and more QoL improvements!

Compare Source

🚦 Smarter Rate Limit Handling

We've improved how the huggingface_hub library handles rate limits from the Hub. When you hit a rate limit, you'll now see clear, actionable error messages telling you exactly how long to wait and how many requests you have left.

HfHubHTTPError: 429 Too Many Requests for url: https://huggingface.co/api/models/username/reponame.
Retry after 55 seconds (0/2500 requests remaining in current 300s window).

When a 429 error occurs, the SDK automatically parses the RateLimit header to extract the exact number of seconds until the rate limit resets, then waits precisely that duration before retrying. This applies to file downloads (i.e. Resolvers), uploads, and paginated Hub API calls (list_models, list_datasets, list_spaces, etc.).
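
For illustration, here is a minimal sketch of that retry pattern built on the standard Retry-After header (the SDK itself parses the richer RateLimit header; this is not its actual implementation):

import time
import httpx

def get_with_backoff(url: str, max_retries: int = 3) -> httpx.Response:
    ### Retry a GET on HTTP 429, sleeping for the server-advertised delay
    response = httpx.get(url)
    for _ in range(max_retries):
        if response.status_code != 429:
            break
        time.sleep(float(response.headers.get("Retry-After", "1")))  # fall back to 1s if absent
        response = httpx.get(url)
    return response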

More info about Hub rate limits in the docs 👉 here.

✨ HF API

Daily Papers endpoint: You can now programmatically access Hugging Face's daily papers feed. You can filter by week, month, or submitter, and sort by publication date or trending.

from huggingface_hub import list_daily_papers

for paper in list_daily_papers(date="2025-12-03"):
    print(paper.title)

### DeepSeek-V3.2: Pushing the Frontier of Open Large Language Models
### ToolOrchestra: Elevating Intelligence via Efficient Model and Tool Orchestration
### MultiShotMaster: A Controllable Multi-Shot Video Generation Framework
### Deep Research: A Systematic Survey
### MG-Nav: Dual-Scale Visual Navigation via Sparse Spatial Memory
...

Add daily papers endpoint by @​BastienGimbert in #​3502
Add more parameters to daily papers by @​Samoed in #​3585

Offline mode helper: we recommend using huggingface_hub.is_offline_mode() to check whether offline mode is enabled instead of checking HF_HUB_OFFLINE directly.
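
For example:

from huggingface_hub import is_offline_mode

if is_offline_mode():
    print("Offline mode is enabled; skipping calls to the Hub.")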

Add offline_mode helper by @​Wauplin in #​3593
Rename utility to is_offline_mode by @​Wauplin #​3598

Inference Endpoints: You can now configure scaling metrics and thresholds when deploying endpoints.

feat(endpoints): scaling metric and threshold by @​oOraph in #​3525

Exposed utilities: RepoFile and RepoFolder are now available at the root level for easier imports.
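
For example, entries returned by list_repo_tree can now be type-checked with root-level imports (a small sketch; gpt2 is just an example repo):

from huggingface_hub import RepoFile, RepoFolder, list_repo_tree

for entry in list_repo_tree("gpt2"):
    kind = "file" if isinstance(entry, RepoFile) else "folder"
    print(kind, entry.path)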

Expose RepoFile and RepoFolder at root level by @​Wauplin in #​3564

⚡️ Inference Providers

OVHcloud AI Endpoints was added as an official Inference Provider in v1.1.5. OVHcloud provides European-hosted, GDPR-compliant model serving for your AI applications.

import os
from huggingface_hub import InferenceClient

client = InferenceClient(
    api_key=os.environ["HF_TOKEN"],
)

completion = client.chat.completions.create(
    model="openai/gpt-oss-20b:ovhcloud",
    messages=[
        {
            "role": "user",
            "content": "What is the capital of France?"
        }
    ],
)

print(completion.choices[0].message)

Add OVHcloud AI Endpoints as an Inference Provider by @​eliasto in #​3541

We also added support for automatic speech recognition (ASR) with Replicate, so you can now transcribe audio files easily.

import os
from huggingface_hub import InferenceClient

client = InferenceClient(
    provider="replicate",
    api_key=os.environ["HF_TOKEN"],
)

output = client.automatic_speech_recognition("sample1.flac", model="openai/whisper-large-v3")
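### The returned object exposes the transcription as `.text` (per the task's output type)
print(output.text)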

[Inference Providers] Add support for ASR with Replicate by @​hanouticelina in #​3538

The truncation_direction parameter in InferenceClient.feature_extraction (and its async counterpart) now uses lowercase values ("left"/"right" instead of "Left"/"Right") for consistency with other specs.
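
For example (a small sketch; the model id is illustrative):

from huggingface_hub import InferenceClient

client = InferenceClient()
embedding = client.feature_extraction(
    "Hello world!",
    model="sentence-transformers/all-MiniLM-L6-v2",
    truncation_direction="right",  # previously "Right"
)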

[Inference] Use lowercase left/right truncation direction parameter by @​Wauplin in #​3548

📁 HfFileSystem

HfFileSystem: A new top-level hffs alias makes working with the filesystem interface more convenient.

>>> from huggingface_hub import hffs
>>> with hffs.open("datasets/fka/awesome-chatgpt-prompts/prompts.csv", "r") as f:
...     print(f.readline())
"act","prompt"
"An Ethereum Developer","Imagine you are an experienced Ethereum developer tasked..."

[HfFileSystem] Add top level hffs by @​lhoestq in #​3556
[HfFileSystem] Add expand_info arg by @​lhoestq in #​3575

💔 Breaking Change

Paginated results when listing user access requests: list_pending_access_requests, list_accepted_access_requests, and list_rejected_access_requests now return an iterator instead of a list. This allows lazy loading of results for repositories with a large number of access requests. If you need a list, wrap the call with list(...).
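
A short sketch of the new behavior (the repo id is a placeholder):

from huggingface_hub import list_pending_access_requests

### Results are now yielded lazily...
for request in list_pending_access_requests("username/reponame"):
    print(request.username)

### ...or wrap the call in list(...) to materialize them as before
pending = list(list_pending_access_requests("username/reponame"))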

Paginated results in list_user_access by @​Wauplin in #​3535

🔧 Other QoL Improvements

📖 Documentation

🛠️ Small fixes and maintenance

🐛 Bug and typo fixes
🏗️ Internal

Significant community contributions

The following contributors have made significant changes to the library over the last release:

v1.1.7: [v1.1.7] Make hffs accessible at root-level

Compare Source

[HfFileSystem] Add top level hffs by @​lhoestq #​3556.

Example:

>>> from huggingface_hub import hffs
>>> with hffs.open("datasets/fka/awesome-chatgpt-prompts/prompts.csv", "r") as f:
...     print(f.readline())
...     print(f.readline())
"act","prompt"
"An Ethereum Developer","Imagine you are an experienced Ethereum developer tasked..."

Full Changelog: huggingface/huggingface_hub@v1.1.6...v1.1.7

v1.1.6: [v1.1.6] Fix incomplete file listing in snapshot_download + other bugfixes

Compare Source

This release includes multiple bug fixes.


Full Changelog: huggingface/huggingface_hub@v1.1.5...v1.1.6

v1.1.5: [v1.1.5] Welcoming OVHcloud AI Endpoints as a new Inference Provider & More

Compare Source

⚡️ New Inference Provider: OVHcloud AI Endpoints

OVHcloud AI Endpoints is now an official Inference Provider on Hugging Face! 🎉
OVHcloud delivers fast, production-ready inference on secure, sovereign, fully 🇪🇺 European infrastructure, combining advanced features with competitive pricing.

import os
from huggingface_hub import InferenceClient

client = InferenceClient(
    api_key=os.environ["HF_TOKEN"],
)

completion = client.chat.completions.create(
    model="openai/gpt-oss-20b:ovhcloud",
    messages=[
        {
            "role": "user",
            "content": "What is the capital of France?"
        }
    ],
)

print(completion.choices[0].message)

More snippet examples in the provider documentation 👉 here.

QoL Improvements

Installing the CLI is now much faster thanks to uv support for package installation, added by @​Boulaouaney.

Bug Fixes

This release also includes several bug fixes.

v1.1.4: [v1.1.4] Paginated results in list_user_access

Compare Source

  • Paginated results in list_user_access by @​Wauplin in #​3535
    ⚠️ This patch release is a breaking change, but necessary to reflect an API update made server-side.

Full Changelog: huggingface/huggingface_hub@v1.1.3...v1.1.4

v1.1.3: [v1.1.3] Avoid HTTP 429 on downloads + fix missing arguments in download API

Compare Source

  • Make 'name' optional in catalog deploy by @​Wauplin in #​3529
  • Pass through additional arguments from HfApi download utils by @​schmrlng in #​3531
  • Avoid redundant call to the Xet connection info URL by @​Wauplin in #​3534
    • => this PR fixes HTTP 429 rate-limit issues that occurred when downloading very large datasets of small files

Full Changelog: huggingface/huggingface_hub@v1.1.0...v1.1.3

v1.1.2

Compare Source

v1.1.1

Compare Source

v1.1.0: Faster Downloads, new CLI features and more!

Compare Source

🚀 Optimized Download Experience

⚡ This release significantly improves the file download experience by making it faster and cleaning up the terminal output.

snapshot_download is now always multi-threaded, leading to significant performance gains. We removed a previous limitation, as Xet's internal resource management ensures we can parallelize downloads safely without resource contention. A sample benchmark showed this made the download much faster!
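
For example (gpt2 is just an example repo):

from huggingface_hub import snapshot_download

local_dir = snapshot_download("gpt2")  # multi-threaded by default
print(local_dir)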

Additionally, the output for snapshot_download and hf download CLI is now much less verbose. Per file logs are hidden by default, and all individual progress bars are combined into a single progress bar, resulting in a much cleaner output.


Inference Providers

🆕 WaveSpeedAI is now an official Inference Provider on Hugging Face! 🎉 WaveSpeedAI provides fast, scalable, and cost-effective model serving for creative AI applications, supporting text-to-image, image-to-image, text-to-video, and image-to-video tasks. 🎨

import os
from huggingface_hub import InferenceClient

client = InferenceClient(
    provider="wavespeed",
    api_key=os.environ["HF_TOKEN"],
)

video = client.text_to_video(
    "A cat riding a bike",
    model="Wan-AI/Wan2.2-TI2V-5B",
)
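
text_to_video returns the raw video bytes, so you can write them straight to disk (a usage sketch; the filename is arbitrary):

with open("cat_riding_bike.mp4", "wb") as f:
    f.write(video)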

More snippet examples in the provider documentation 👉 here.

We also added support for the image-segmentation task with fal, enabling state-of-the-art background removal with RMBG v2.0.

import os
from huggingface_hub import InferenceClient

client = InferenceClient(
    provider="fal-ai",
    api_key=os.environ["HF_TOKEN"],
)

output = client.image_segmentation("cats.jpg", model="briaai/RMBG-2.0")


🦾 CLI continues to get even better!

Following the complete revamp of the Hugging Face CLI in v1.0, this release builds on that foundation by adding powerful new features and improving accessibility.

New hf PyPI Package

To make the CLI even easier to access, we've published a new, minimal PyPI package: hf. This package installs the hf CLI tool and is perfect for quick, isolated execution with modern tools like uvx.

### Run the CLI without installing it
> uvx hf auth whoami

⚠️ Note: This package is for the CLI only. Attempting to import hf in a Python script will correctly raise an ImportError.

A big thank you to @​thorwhalen for generously transferring the hf package name to us on PyPI. This will make the CLI much more accessible for all Hugging Face users. 🤗

Manage Inference Endpoints

A new command group, hf endpoints, has been added to deploy and manage your Inference Endpoints directly from the terminal.

This provides "one-liners" for deploying, deleting, updating, and monitoring endpoints. The CLI offers two clear paths for deployment: hf endpoints deploy for standard Hub models and hf endpoints catalog deploy for optimized Model Catalog configurations.

> hf endpoints --help
Usage: hf endpoints [OPTIONS] COMMAND [ARGS]...

  Manage Hugging Face Inference Endpoints.

Options:
  --help  Show this message and exit.

Commands:
  catalog        Interact with the Inference Endpoints catalog.
  delete         Delete an Inference Endpoint permanently.
  deploy         Deploy an Inference Endpoint from a Hub repository.
  describe       Get information about an existing endpoint.
  ls             Lists all Inference Endpoints for the given namespace.
  pause          Pause an Inference Endpoint.
  resume         Resume an Inference Endpoint.
  scale-to-zero  Scale an Inference Endpoint to zero.
  update         Update an existing endpoint.

Verify Cache Integrity

A new command, hf cache verify, has been added to check your cached files against their checksums on the Hub. This is a great tool to ensure your local cache is not corrupted and is in sync with the remote repository.

> hf cache verify --help
Usage: hf cache verify [OPTIONS] REPO_ID

  Verify checksums for a single repo revision from cache or a local directory.

  Examples:
  - Verify main revision in cache: `hf cache verify gpt2`
  - Verify specific revision: `hf cache verify gpt2 --revision refs/pr/1`
  - Verify dataset: `hf cache verify karpathy/fineweb-edu-100b-shuffle --repo-type dataset`
  - Verify local dir: `hf cache verify deepseek-ai/DeepSeek-OCR --local-dir /path/to/repo`

Arguments:
  REPO_ID  The ID of the repo (e.g. `username/repo-name`).  [required]

Options:
  --repo-type [model|dataset|space]
                                  The type of repository (model, dataset, or
                                  space).  [default: model]
  --revision TEXT                 Git revision id which can be a branch name,
                                  a tag, or a commit hash.
  --cache-dir TEXT                Cache directory to use when verifying files
                                  from cache (defaults to Hugging Face cache).
  --local-dir TEXT                If set, verify files under this directory
                                  instead of the cache.
  --fail-on-missing-files         Fail if some files exist on the remote but
                                  are missing locally.
  --fail-on-extra-files           Fail if some files exist locally but are not
                                  present on the remote revision.
  --token TEXT                    A User Access Token generated from
                                  https://huggingface.co/settings/tokens.
  --help                          Show this message and exit.

Cache Sorting and Limiting

Managing your local cache is now easier. The hf cache ls command has been enhanced with two new options:

  • --sort: Sort your cache by accessed, modified, name, or size. You can also specify order (e.g., modified:asc to find the oldest files).
  • --limit: Get just the top N results after sorting (e.g., --limit 10).

### List top 10 most recently accessed repos
> hf cache ls --sort accessed --limit 10

### Find the 5 largest repos you haven't used in over a year
> hf cache ls --filter "accessed>1y" --sort size --limit 5

Finally, we've patched the CLI installer script to fix a bug for zsh users. The installer now works correctly across all common shells.

🔧 Other

We've fixed a bug in HfFileSystem where the instance cache would break when using multiprocessing with the "fork" start method.

  • [HfFileSystem] improve cache for multiprocessing fork and multithreading by @​lhoestq in #​3500

🌍 Documentation

Thanks to @​BastienGimbert for translating the README to French 🇫🇷 🤗

And thanks to @​didier-durand for fixing multiple language typos in the library! 🤗

🛠️ Small fixes and maintenance

🐛 Bug and typo fixes
🏗️ Internal

Significant community contributions

The following contributors have made significant changes to the library over the last release:

v1.0.1: [v1.0.1] Remove aiohttp from extra dependencies

Compare Source

In the huggingface_hub v1.0 release, we removed our dependency on aiohttp and replaced it with httpx, but we forgot to remove it from the huggingface_hub[inference] extra dependencies in setup.py. This patch release removes it, and the inference extra has been removed as well.

The internal method _import_aiohttp was unused, so it has been removed as well.

Full Changelog: huggingface/huggingface_hub@v1.0.0...v1.0.1

v1.0.0: v1.0: Building for the Next Decade

Compare Source


Check out our blog post announcement!

🚀 HTTPx migration

The huggingface_hub library now uses httpx instead of requests for HTTP requests. This change improves performance and supports synchronous and asynchronous requests in the same way. We have therefore dropped both the requests and aiohttp dependencies.

get_session and hf_raise_for_status still exist: the former now returns an httpx.Client and the latter processes an httpx.Response object. An additional get_async_client utility has been added for async logic.
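
For example, a minimal sketch of the updated helpers (the endpoint URL is illustrative):

from huggingface_hub.utils import get_session, hf_raise_for_status

session = get_session()  # now an httpx.Client
response = session.get("https://huggingface.co/api/models/gpt2")
hf_raise_for_status(response)  # now processes an httpx.Response
print(response.json()["id"])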

The exhaustive list of breaking changes can be found here.

🪄 CLI revamp

huggingface_hub 1.0 marks a complete transformation of our command-line experience. We've reimagined the CLI from the ground up, creating a tool that feels native to modern ML workflows while maintaining the simplicity the community loves.

One CLI to Rule Them All: Goodbye huggingface-cli

This release marks the end of an era with the complete removal of the huggingface-cli command. The new hf command (introduced in v0.34.0) takes its place with a cleaner, more intuitive design that follows a logical "resource-action" pattern. This breaking change simplifies the user experience and aligns with modern CLI conventions - no more typing those extra 11 characters!


The new CLI introduces a comprehensive set of commands for repository and file management that expose powerful HfApi functionality directly from the terminal:

> hf repo --help
Usage: hf repo [OPTIONS] COMMAND [ARGS]...

  Manage repos on the Hub.

Options:
  --help  Show this message and exit.

Commands:
  branch    Manage branches for a repo on the Hub.
  create    Create a new repo on the Hub.
  delete    Delete a repo from the Hub.
  move      Move a repository from a namespace to another namespace.
  settings  Update the settings of a repository.
  tag       Manage tags for a repo on the Hub.

A dry run mode has been added to hf download, which lets you preview exactly what will be downloaded before committing to the transfer—showing file sizes, what's already cached, and total bandwidth requirements in a clean table format:

> hf download gpt2 --dry-run   
[dry-run] Fetching 26 files: 100%|██████████████████████████████████████████████████████████| 26/26 [00:00<00:00, 50.66it/s]
[dry-run] Will download 26 files (out of 26) totalling 5.6G.
File                              Bytes to download 
--------------------------------- ----------------- 
.gitattributes                    445.0             
64-8bits.tflite                   125.2M            
64-fp16.tflite                    248.3M            
64.tflite                         495.8M            
README.md                         8.1K              
config.json                       665.0             
flax_model.msgpack                497.8M            
generation_config.json            124.0             
merges.txt                        456.3K            
model.safetensors                 548.1M            
onnx/config.json                  879.0             
onnx/decoder_model.onnx           653.7M            
onnx/decoder_model_merged.onnx    655.2M 
...

The CLI now provides intelligent shell auto-completion that suggests available commands, subcommands, options, and arguments as you type - making command discovery effortless and reducing the need to constantly check --help.


The CLI now also checks for updates in the background, ensuring you never miss important improvements or security fixes. Once every 24 hours, the CLI silently checks PyPI for newer versions and notifies you when an update is available - with personalized upgrade instructions based on your installation method.

The cache management CLI has been completely revamped with the removal of hf scan cache and hf scan delete in favor of docker-inspired commands that are more intuitive. The new hf cache ls provides rich filtering capabilities, hf cache rm enables targeted deletion, and hf cache prune cleans up detached revisions.

### List cached repos
>>> hf cache ls
ID                          SIZE     LAST_ACCESSED LAST_MODIFIED REFS        
--------------------------- -------- ------------- ------------- ----------- 
dataset/nyu-mll/glue          157.4M 2 days ago    2 days ago    main script 
model/LiquidAI/LFM2-VL-1.6B     3.2G 4 days ago    4 days ago    main        
model/microsoft/UserLM-8b      32.1G 4 days ago    4 days ago    main  

Found 3 repo(s) for a total of 5 revision(s) and 35.5G on disk.

### List cached repos with filters
>>> hf cache ls --filter "type=model" --filter "size>3G" --filter "accessed>7d"

### Output in different format
>>> hf cache ls --format json
>>> hf cache ls --revisions  # Replaces the old --verbose flag

### Cache removal
>>> hf cache rm model/meta-llama/Llama-2-70b-hf
>>> hf cache rm $(hf cache ls --filter "accessed>1y" -q)  # Remove old items

### Clean up detached revisions
>>> hf cache prune  # Removes all unreferenced revisions

Under the hood, this transformation is powered by Typer, significantly reducing boilerplate and making the CLI easier to maintain and extend with new features.

CLI Installation: Zero-Friction Setup

The new cross-platform installers simplify CLI installation by creating isolated sandboxed environments without interfering with your existing Python setup or project dependencies. The installers work seamlessly across macOS, Linux, and Windows, automatically handling dependencies and PATH configuration.

### On macOS and Linux
>>> curl -LsSf https://hf.co/cli/install.sh | sh

### On Windows
>>> powershell -ExecutionPolicy ByPass -c "irm https://hf.co/cli/install.ps1 | iex"

Finally, the [cli] extra has been removed - The CLI now ships with the core huggingface_hub package.

💔 Breaking changes

The v1.0 release is a major milestone for the huggingface_hub library. It marks our commitment to API stability and the maturity of the library. We have made several improvements and breaking changes to make the library more robust and easier to use. A migration guide has been written to reduce friction as much as possible: https://huggingface.co/docs/huggingface_hub/concepts/migration.

We'll list all breaking changes below:

  • Minimum Python version is now 3.9 (instead of 3.8).

  • HTTP backend migrated from requests to httpx. Expect some breaking changes in advanced features and errors. The exhaustive list can be found here.

  • The deprecated huggingface-cli has been removed; hf (introduced in v0.34) replaces it with a clearer resource-action CLI.

  • The [cli] extra has been removed - The CLI now ships with the core huggingface_hub package.

  • Long deprecated classes like HfFolder, InferenceAPI, and Repository have been removed.

  • constants.hf_cache_home has been removed. Use constants.HF_HOME instead.

  • use_auth_token is not supported anymore. Use token instead. Previously, using use_auth_token automatically redirected to token with a warning.

  • removed get_token_permission, which became useless when fine-grained tokens arrived.

  • removed update_repo_visibility. Use update_repo_settings instead (see the sketch after this list).

  • removed is_write_action in all build_hf_headers methods. Not relevant since fine-grained tokens arrived.

  • removed write_permission arg from login method. Not relevant anymore.

  • renamed login(new_session) to login(skip_if_logged_in) in login methods. Not announced, but hopefully very little friction; only some notebooks on the Hub need updating (will do it once released).

  • removed resume_download / force_filename / local_dir_use_symlinks parameters from hf_hub_download/snapshot_download (and mixins)

  • removed library / language / tags / task from list_models args

  • upload_file/upload_folder now return a url to the commit created on the Hub, like any other method creating a commit (create_commit, delete_file, etc.)

  • require keyword arguments on login methods

  • removed all Keras 2.x and TensorFlow-related code

  • Removed hf_transfer support. hf_xet is now the default upload/download manager
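
As referenced in the list above, a small migration sketch for the visibility change (the repo id is a placeholder):

from huggingface_hub import update_repo_settings

### v0.x: update_repo_visibility("username/reponame", private=True)
### v1.x:
update_repo_settings("username/reponame", private=True)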

🔧 Other

Inference Providers

Routing for the Chat Completion API in Inference Providers is now done server-side. This saves one HTTP call and allows us to centralize the logic that routes requests to the correct provider. In the future, it enables use cases like choosing the fastest or cheapest provider directly.

  • [InferenceClient] Server-side auto-routing for conversational task  by @​Wauplin in #​3448

There have also been some updates in the docs.

@​strict Typed Dict

We've added support for TypedDict to our @strict framework, our data validation tool for dataclasses. Typed dicts are converted to dataclasses on the fly for validation, without mutating the input data. This logic is currently used by transformers to validate config files, but it is library-agnostic and can therefore be used by anyone. More details in this guide.

from typing import Annotated, TypedDict
from huggingface_hub.dataclasses import validate_typed_dict

def positive_int(value: int):
    if not value >= 0:
        raise ValueError(f"Value must be positive, got {value}")

class User(TypedDict):
    name: str
    age: Annotated[int, positive_int]

### Valid data
validate_typed_dict(User, {"name": "John", "age": 30})
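### For contrast (illustrative, kept commented out): data failing a validator raises an error
### validate_typed_dict(User, {"name": "John", "age": -5})  # `positive_int` rejects -5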

List organization followers

Added an HfApi.list_organization_followers endpoint to list the followers of an organization, similar to the existing endpoint for a user's followers.
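
For example (a minimal sketch; the organization name is illustrative):

from huggingface_hub import HfApi

api = HfApi()
for follower in api.list_organization_followers("huggingface"):
    print(follower.username)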

🛠️ Small fixes and maintenance

🐛 Bug and typo fixes

Configuration

📅 Schedule: Branch creation - At any time (no schedule defined), Automerge - At any time (no schedule defined).

🚦 Automerge: Disabled by config. Please merge this manually once you are satisfied.

Rebasing: Whenever the PR becomes conflicted, or you tick the rebase/retry checkbox.

🔕 Ignore: Close this PR and you won't be reminded about this update again.


  • If you want to rebase/retry this PR, check this box

To execute skipped test pipelines, write the comment /ok-to-test.


Documentation

Find out how to configure dependency updates in MintMaker documentation or see all available configuration options in Renovate documentation.

Signed-off-by: konflux-internal-p02 <170854209+konflux-internal-p02[bot]@users.noreply.github.com>