forked from llamastack/llama-stack
chore(deps): update dependency huggingface-hub to v1 #95
Open: konflux-internal-p02 wants to merge 1 commit into rhoai-2.24 from konflux/mintmaker/rhoai-2.24/huggingface-hub-1.x (+1 −1)
Signed-off-by: konflux-internal-p02 <170854209+konflux-internal-p02[bot]@users.noreply.github.com>
This PR contains the following updates:
huggingface-hub: `==0.29.0` -> `==1.2.3`
Release Notes
huggingface/huggingface_hub (huggingface-hub)
v1.2.3: [v1.2.3] Fix `private` default value in CLI (Compare Source)
Patch release for #3618 by @Wauplin.
Full Changelog: huggingface/huggingface_hub@v1.2.2...v1.2.3
v1.2.2: [v1.2.2] Fix unbound local error in local folder metadata + fix `hf auth list` logs (Compare Source)
Full Changelog: huggingface/huggingface_hub@v1.2.1...v1.2.2
v1.2.1 (Compare Source)
v1.2.0: Smarter Rate Limit Handling, Daily Papers API and more QoL improvements! (Compare Source)
🚦 Smarter Rate Limit Handling
We've improved how the `huggingface_hub` library handles rate limits from the Hub. When you hit a rate limit, you'll now see clear, actionable error messages telling you exactly how long to wait and how many requests you have left.
When a 429 error occurs, the SDK automatically parses the `RateLimit` header to extract the exact number of seconds until the rate limit resets, then waits precisely that duration before retrying. This applies to file downloads (i.e. Resolvers), uploads, and paginated Hub API calls (`list_models`, `list_datasets`, `list_spaces`, etc.).
More info about Hub rate limits in the docs 👉 here.
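The wait-and-retry described above is automatic and needs no user code; the following is only a minimal sketch of how a 429 can still be inspected manually in code you wrap yourself. `HfHubHTTPError` and its `response` attribute are existing utilities; the header lookup simply mirrors the `RateLimit` header mentioned above.

```python
# Hedged sketch: manual inspection of a rate-limit error, on top of the
# automatic retry behaviour described in the release notes.
from huggingface_hub import HfApi
from huggingface_hub.errors import HfHubHTTPError

api = HfApi()
try:
    models = list(api.list_models(limit=5))
except HfHubHTTPError as err:
    if err.response is not None and err.response.status_code == 429:
        # The Hub describes the limit in the RateLimit header (see note above).
        print("Rate limited:", err.response.headers.get("RateLimit"))
    else:
        raise
```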
✨ HF API
- Daily Papers endpoint: You can now programmatically access Hugging Face's daily papers feed. You can filter by week, month, or submitter, and sort by publication date or trending.
- Offline mode helper: we recommend using `huggingface_hub.is_offline_mode()` to check whether offline mode is enabled instead of checking `HF_HUB_OFFLINE` directly (see the sketch after this list).
- Inference Endpoints: You can now configure scaling metrics and thresholds when deploying endpoints.
- Exposed utilities: `RepoFile` and `RepoFolder` are now available at the root level for easier imports.
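A minimal sketch of the offline-mode helper and the root-level re-exports named above; the repo id used here is only an illustration.

```python
# Hedged sketch: is_offline_mode(), RepoFile and RepoFolder are taken from the
# release notes above; the repo id is illustrative.
import huggingface_hub
from huggingface_hub import HfApi, RepoFile, RepoFolder

if huggingface_hub.is_offline_mode():
    print("Offline mode (HF_HUB_OFFLINE) is enabled - skipping Hub calls.")
else:
    api = HfApi()
    # list_repo_tree yields RepoFile / RepoFolder entries
    for entry in api.list_repo_tree("gpt2"):
        kind = "dir" if isinstance(entry, RepoFolder) else "file"
        print(kind, entry.path)
```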
⚡️ Inference Providers
OVHcloud AI Endpoints was added as an official Inference Provider in v1.1.5. OVHcloud provides European-hosted, GDPR-compliant model serving for your AI applications.
We also added support for automatic speech recognition (ASR) with Replicate, so you can now transcribe audio files easily.
The `truncation_direction` parameter in `InferenceClient.feature_extraction()` (and its async counterpart) now uses lowercase values ("left"/"right" instead of "Left"/"Right") for consistency with other specs.
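A minimal sketch of the two client-side changes above; the audio path and default models are illustrative, and a token with inference access is assumed.

```python
# Hedged sketch: ASR through the Replicate provider and the lowercase
# truncation_direction value come from the notes above; inputs are illustrative.
from huggingface_hub import InferenceClient

asr_client = InferenceClient(provider="replicate")
transcript = asr_client.automatic_speech_recognition("sample.flac")
print(transcript.text)

embeddings = InferenceClient().feature_extraction(
    "Hello world",
    truncation_direction="left",  # previously "Left"
)
print(embeddings.shape)
```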
📁 HfFileSystem
A new top-level `hffs` alias makes working with the filesystem interface more convenient.
💔 Breaking Change
Paginated results when listing user access requests: `list_pending_access_requests`, `list_accepted_access_requests`, and `list_rejected_access_requests` now return an iterator instead of a list. This allows lazy loading of results for repositories with a large number of access requests. If you need a list, wrap the call with `list(...)`.
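A minimal sketch of the new pagination behavior; the gated repo id is illustrative and a token with admin rights on that repo is assumed.

```python
# Hedged sketch: results are now yielded lazily; wrap with list(...) if needed.
from huggingface_hub import HfApi

api = HfApi()
# Iterate lazily over pending requests for a gated repo (illustrative id)
for request in api.list_pending_access_requests("username/gated-model"):
    print(request.username, request.timestamp)

# ...or materialize them when an actual list is needed
pending = list(api.list_pending_access_requests("username/gated-model"))
print(f"{len(pending)} pending access requests")
```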
🔧 Other QoL Improvements
- `num_workers` by @Qubitium in #3532
- `HfApi` download utils by @schmrlng in #3531
- `whoami` by @Wauplin in #3568
- `repo_type_and_id_from_hf_id` by @pulltheflower in #3507
- `list_repo_tree` in `snapshot_download` by @hanouticelina in #3565
📖 Documentation
- `hf login` example to `hf auth login` by @alisheryeginbay in #3590
🛠️ Small fixes and maintenance
🐛 Bug and typo fixes
- `FileNotFoundError` in CLI update check by @hanouticelina in #3574
- `HfHubHTTPError` reduce error by adding factory function by @owenowenisme in #3579
- `constants.HF_HUB_ETAG_TIMEOUT` as timeout for `get_hf_file_metadata` by @krrome in #3595
🏗️ Internal
- `huggingface_hub` as dependency for `hf` by @Wauplin in #3527
Significant community contributions
The following contributors have made significant changes to the library over the last release:
- `HfApi` download utils (#3531)
v1.1.7: [v1.1.7] Make `hffs` accessible at root-level (Compare Source)
[HfFileSystem] Add top level hffs by @lhoestq #3556.
Example:
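A minimal sketch, assuming `hffs` is a pre-instantiated `HfFileSystem` exposed at the package root (as the #3556 title suggests); the dataset path is illustrative.

```python
# Hedged sketch: assumes `hffs` is a ready-to-use HfFileSystem instance
# re-exported at the root of huggingface_hub; the dataset path is illustrative.
from huggingface_hub import hffs

# List files of a repo through the fsspec-style interface
print(hffs.ls("datasets/squad", detail=False))

# Read a file directly from the Hub
with hffs.open("datasets/squad/README.md", "r") as f:
    print(f.read()[:200])
```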
Full Changelog: huggingface/huggingface_hub@v1.1.6...v1.1.7
v1.1.6: [v1.1.6] Fix incomplete file listing in `snapshot_download` + other bugfixes (Compare Source)
This release includes multiple bug fixes:
- `list_repo_tree` in `snapshot_download` #3565 by @hanouticelina
- `HfHubHTTPError` reduce error by adding factory function #3579 by @owenowenisme
- `FileNotFoundError` in CLI update check #3574 by @hanouticelina
- `tiny-agents` CLI #3573 by @Wauplin
Full Changelog: huggingface/huggingface_hub@v1.1.5...v1.1.6
v1.1.5: [v1.1.5] Welcoming OVHcloud AI Endpoints as a new Inference Provider & More (Compare Source)
⚡️ New Inference Provider: OVHcloud AI Endpoints
OVHcloud AI Endpoints is now an official Inference Provider on Hugging Face! 🎉
OVHcloud delivers fast, production-ready inference on secure, sovereign, fully 🇪🇺 European infrastructure - combining advanced features with competitive pricing.
More snippet examples in the provider documentation 👉 here.
QoL Improvements
Installing the CLI is now much faster, thanks to @Boulaouaney for adding support for `uv`, bringing faster package installation.
Bug Fixes
This release also includes the following bug fixes:
- `HF_DEBUG` environment variable in #3562 by @hanouticelina
v1.1.4: [v1.1.4] Paginated results in list_user_access (Compare Source)
Full Changelog: huggingface/huggingface_hub@v1.1.3...v1.1.4
v1.1.3: [v1.1.3] Avoid HTTP 429 on downloads + fix missing arguments in download API (Compare Source)
Full Changelog: huggingface/huggingface_hub@v1.1.0...v1.1.3
v1.1.2 (Compare Source)
v1.1.1 (Compare Source)
v1.1.0: Faster Downloads, new CLI features and more! (Compare Source)
🚀 Optimized Download Experience
⚡ This release significantly improves the file download experience by making it faster and cleaning up the terminal output.
`snapshot_download` is now always multi-threaded, leading to significant performance gains. We removed a previous limitation, as Xet's internal resource management ensures we can parallelize downloads safely without resource contention. A sample benchmark showed this made the download much faster!
Additionally, the output of `snapshot_download` and the `hf download` CLI is now much less verbose. Per-file logs are hidden by default, and all individual progress bars are combined into a single progress bar, resulting in much cleaner output.
- `snapshot_download` and `hf download` by @Wauplin in #3523
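A minimal sketch of the quieter, always multi-threaded download path described above; the model id is illustrative and no new parameters are assumed.

```python
# Hedged sketch: plain snapshot_download usage; multi-threading and the single
# combined progress bar described above are handled internally.
from huggingface_hub import snapshot_download

local_dir = snapshot_download("gpt2")
print("Snapshot downloaded to:", local_dir)
```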
Inference Providers
🆕 WaveSpeedAI is now an official Inference Provider on Hugging Face! 🎉 WaveSpeedAI provides fast, scalable, and cost-effective model serving for creative AI applications, supporting `text-to-image`, `image-to-image`, `text-to-video`, and `image-to-video` tasks. 🎨
More snippet examples in the provider documentation 👉 here.
We also added support for the `image-segmentation` task for fal, enabling state-of-the-art background removal with RMBG v2.0.
- `image-segmentation` for fal by @hanouticelina in #3521
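A minimal sketch of the fal `image-segmentation` support named above; the provider id `fal-ai`, the model id, and the image path are assumptions used for illustration.

```python
# Hedged sketch: background removal through the fal provider; model id and
# image path are illustrative.
from huggingface_hub import InferenceClient

client = InferenceClient(provider="fal-ai")
segments = client.image_segmentation("cat.png", model="briaai/RMBG-2.0")
for segment in segments:
    print(segment.label, segment.score)
```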
🦾 CLI continues to get even better!
Following the complete revamp of the Hugging Face CLI in v1.0, this release builds on that foundation by adding powerful new features and improving accessibility.
New `hf` PyPI Package
To make the CLI even easier to access, we've published a new, minimal PyPI package: `hf`. This package installs the `hf` CLI tool and is perfect for quick, isolated execution with modern tools like uvx. `import hf` in a Python script will correctly raise an `ImportError`.
A big thank you to @thorwhalen for generously transferring the `hf` package name to us on PyPI. This will make the CLI much more accessible for all Hugging Face users. 🤗
- `hf` CLI to PyPI by @Wauplin in #3511
Manage Inference Endpoints
A new command group, `hf endpoints`, has been added to deploy and manage your Inference Endpoints directly from the terminal.
This provides "one-liners" for deploying, deleting, updating, and monitoring endpoints. The CLI offers two clear paths for deployment: `hf endpoints deploy` for standard Hub models and `hf endpoints catalog deploy` for optimized Model Catalog configurations.
Verify Cache Integrity
A new command, `hf cache verify`, has been added to check your cached files against their checksums on the Hub. This is a great tool to ensure your local cache is not corrupted and is in sync with the remote repository.
- `hf cache verify` by @hanouticelina in #3461
Cache Sorting and Limiting
Managing your local cache is now easier. The `hf cache ls` command has been enhanced with two new options:
- `--sort`: Sort your cache by `accessed`, `modified`, `name`, or `size`. You can also specify order (e.g., `modified:asc` to find the oldest files).
- `--limit`: Get just the top N results after sorting (e.g., `--limit 10`).
Finally, we've patched the CLI installer script to fix a bug for `zsh` users. The installer now works correctly across all common shells.
🔧 Other
We've fixed a bug in `HfFileSystem` where the instance cache would break when using multiprocessing with the "fork" start method.
🌍 Documentation
Thanks to @BastienGimbert for translating the README to French 🇫🇷 🤗
and thanks to @didier-durand for fixing multiple language typos in the library! 🤗
🛠️ Small fixes and maintenance
🐛 Bug and typo fixes
🏗️ Internal
- `update-inference-types` workflow by @hanouticelina in #3516
Significant community contributions
The following contributors have made significant changes to the library over the last release:
v1.0.1: [v1.0.1] Remove `aiohttp` from extra dependencies (Compare Source)
In the `huggingface_hub` v1.0 release, we removed our dependency on `aiohttp` and replaced it with `httpx`, but we forgot to remove it from the `huggingface_hub[inference]` extra dependencies in `setup.py`. This patch release removes it, which also removes the `inference` extra.
The internal method `_import_aiohttp`, being unused, has been removed as well.
Full Changelog: huggingface/huggingface_hub@v1.0.0...v1.0.1
v1.0.0: v1.0: Building for the Next Decade (Compare Source)
Check out our blog post announcement!
🚀 HTTPx migration
The `huggingface_hub` library now uses `httpx` instead of `requests` for HTTP requests. This change was made to improve performance and to support both synchronous and asynchronous requests the same way. We therefore dropped both the `requests` and `aiohttp` dependencies.
The `get_session` and `hf_raise_for_status` helpers still exist: the former now returns an `httpx.Client` and the latter processes an `httpx.Response` object. An additional `get_async_client` utility has been added for async logic.
The exhaustive list of breaking changes can be found here.
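A minimal sketch of the httpx-based helpers mentioned above; the endpoint URL is illustrative.

```python
# Hedged sketch: get_session() now returns an httpx.Client and
# hf_raise_for_status() processes an httpx.Response.
from huggingface_hub.utils import get_session, hf_raise_for_status

response = get_session().get("https://huggingface.co/api/models/gpt2")
hf_raise_for_status(response)  # raises HfHubHTTPError on 4xx/5xx
print(response.json()["id"])
```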
- `hf_raise_for_status` on async stream + tests by @Wauplin in #3442
- `git_vs_http` guide by @Wauplin in #3357
🪄 CLI revamp
`huggingface_hub` 1.0 marks a complete transformation of our command-line experience. We've reimagined the CLI from the ground up, creating a tool that feels native to modern ML workflows while maintaining the simplicity the community loves.
One CLI to Rule: Goodbye `huggingface-cli`
This release marks the end of an era with the complete removal of the `huggingface-cli` command. The new `hf` command (introduced in v0.34.0) takes its place with a cleaner, more intuitive design that follows a logical "resource-action" pattern. This breaking change simplifies the user experience and aligns with modern CLI conventions - no more typing those extra 11 characters!
- `huggingface-cli` entirely in favor of `hf` by @Wauplin in #3404
`hf` CLI Revamp
The new CLI introduces a comprehensive set of commands for repository and file management that expose powerful `HfApi` functionality directly from the terminal.
A dry run mode has been added to `hf download`, which lets you preview exactly what will be downloaded before committing to the transfer, showing file sizes, what's already cached, and total bandwidth requirements in a clean table format.
The CLI now provides intelligent shell auto-completion that suggests available commands, subcommands, options, and arguments as you type - making command discovery effortless and reducing the need to constantly check `--help`.
The CLI now also checks for updates in the background, ensuring you never miss important improvements or security fixes. Once every 24 hours, the CLI silently checks PyPI for newer versions and notifies you when an update is available - with personalized upgrade instructions based on your installation method.
The cache management CLI has been completely revamped with the removal of `hf scan cache` and `hf scan delete` in favor of docker-inspired commands that are more intuitive. The new `hf cache ls` provides rich filtering capabilities, `hf cache rm` enables targeted deletion, and `hf cache prune` cleans up detached revisions.
Under the hood, this transformation is powered by Typer, significantly reducing boilerplate and making the CLI easier to maintain and extend with new features.
- `hf cache` by @hanouticelina in #3439
CLI Installation: Zero-Friction Setup
The new cross-platform installers simplify CLI installation by creating isolated, sandboxed environments without interfering with your existing Python setup or project dependencies. The installers work seamlessly across macOS, Linux, and Windows, automatically handling dependencies and `PATH` configuration.
Finally, the `[cli]` extra has been removed - the CLI now ships with the core `huggingface_hub` package.
- `[cli]` extra by @hanouticelina in #3451
💔 Breaking changes
The v1.0 release is a major milestone for the `huggingface_hub` library. It marks our commitment to API stability and the maturity of the library. We have made several improvements and breaking changes to make the library more robust and easier to use. A migration guide has been written to reduce friction as much as possible: https://huggingface.co/docs/huggingface_hub/concepts/migration.
We'll list all breaking changes below:
- Minimum Python version is now 3.9 (instead of 3.8).
- HTTP backend migrated from `requests` to `httpx`. Expect some breaking changes on advanced features and errors. The exhaustive list can be found here.
- The deprecated `huggingface-cli` has been removed; `hf` (introduced in v0.34) replaces it with a clearer resource-action CLI.
  - `huggingface-cli` entirely in favor of `hf` by @Wauplin in #3404
- The `[cli]` extra has been removed - the CLI now ships with the core `huggingface_hub` package.
  - `[cli]` extra by @hanouticelina in #3451
- Long deprecated classes like `HfFolder`, `InferenceAPI`, and `Repository` have been removed.
  - `HfFolder` and `InferenceAPI` classes by @Wauplin in #3344
  - `Repository` class by @Wauplin in #3346
- `constant.hf_cache_home` has been removed. Use `constants.HF_HOME` instead.
- `use_auth_token` is not supported anymore. Use `token` instead. Previously, using `use_auth_token` automatically redirected to `token` with a warning.
- removed `get_token_permission`. Became useless when fine-grained tokens arrived.
- removed `update_repo_visibility`. Use `update_repo_settings` instead.
- removed `is_write_action` in all `build_hf_headers` methods. Not relevant since fine-grained tokens arrived.
- removed `write_permission` arg from login method. Not relevant anymore.
- renamed `login(new_session)` to `login(skip_if_logged_in)` in login methods. Not announced but hopefully very little friction. Only some notebooks to update on the Hub (will do it once released).
- removed `resume_download` / `force_filename` / `local_dir_use_symlinks` parameters from `hf_hub_download` / `snapshot_download` (and mixins)
- removed `library` / `language` / `tags` / `task` from `list_models` args
- `upload_file` / `upload_folder` now returns a url to the commit created on the Hub, as any other method creating a commit (`create_commit`, `delete_file`, etc.)
- require keyword arguments on login methods
- Remove any Keras 2.x and `tensorflow`-related code
- `hf_transfer` support. `hf_xet` is now the default upload/download manager
🔧 Other
Inference Providers
Routing for the Chat Completion API in Inference Providers is now done server-side. This saves 1 HTTP call and allows us to centralize the logic to route requests to the correct provider. In the future, it enables use cases like choosing the `fastest` or `cheapest` provider directly.
Also some updates in the docs.
@strict Typed Dict
We've added support for `TypedDict` to our `@strict` framework, which is our data validation tool for dataclasses. Typed dicts are now converted to dataclasses on-the-fly for validation, without mutating the input data. This logic is currently used by `transformers` to validate config files but is library-agnostic and can therefore be used by anyone. More details in this guide.
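A minimal sketch of TypedDict validation under `@strict`; the import path `huggingface_hub.dataclasses` and the example classes are assumptions made for illustration.

```python
# Hedged sketch: a TypedDict-typed field validated on the fly by the @strict
# framework; class names and values are illustrative.
from dataclasses import dataclass
from typing import TypedDict

from huggingface_hub.dataclasses import strict


class OptimizerKwargs(TypedDict):
    lr: float
    weight_decay: float


@strict
@dataclass
class TrainingConfig:
    batch_size: int
    optimizer: OptimizerKwargs  # validated without mutating the input dict


config = TrainingConfig(batch_size=32, optimizer={"lr": 3e-4, "weight_decay": 0.01})
print(config.optimizer)
```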
List organization followers
Added a `HfApi.list_organization_followers` endpoint to list followers of an organization, similar to the existing one for a user's followers.
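A minimal sketch of the new endpoint named above; the organization name is illustrative.

```python
# Hedged sketch: iterate over an organization's followers (illustrative org).
from huggingface_hub import HfApi

api = HfApi()
for follower in api.list_organization_followers("huggingface"):
    print(follower.username)
```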
🛠️ Small fixes and maintenance
🐛 Bug and typo fixes
- `sentence_similarity` docstring by @tolgaakar in #3374
- `image-to-image` by @hanouticelina in #3399
Configuration
📅 Schedule: Branch creation - At any time (no schedule defined), Automerge - At any time (no schedule defined).
🚦 Automerge: Disabled by config. Please merge this manually once you are satisfied.
♻ Rebasing: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox.
🔕 Ignore: Close this PR and you won't be reminded about this update again.
To execute skipped test pipelines, write the comment `/ok-to-test`.
Documentation
Find out how to configure dependency updates in the MintMaker documentation, or see all available configuration options in the Renovate documentation.