feat(tools): HuggingFace config parser for transformer spec #193

pradhyum6144 wants to merge 1 commit into modelpack:main
Conversation
…modelpack#164)

Adds `hf_parser.py` that converts HuggingFace `config.json` into ModelPack transformer spec format (PR modelpack#111 vocabulary). Supports Mistral, Mixtral, Qwen2, GPT-2, DeepSeek-V2 (MLA + mixed layers), and unknown models with `NEEDS_REVIEW` fallback. Includes 26 unit tests.

Improvements over PR modelpack#185's field mapping research:

- MLA attention fields (`kv_lora_rank`, `q_lora_rank`, `qk_nope`/`rope_head_dim`)
- DeepSeek MoE routing params (`routed_scaling_factor`, `topk_method`)
- Mixed layers support (`first_k_dense_replace`, `moe_layer_freq`)
- Correct learned position embedding for GPT-2/GPT-Neo

Signed-off-by: pradhyum6144 <pradhyum314@gmail.com>
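To give reviewers a feel for the shape of the conversion, here is a heavily simplified sketch of the idea: map a handful of HuggingFace `config.json` fields into a spec dict, falling back to a `NEEDS_REVIEW` marker for anything absent. The spec-side field names here are illustrative assumptions, not the actual PR modelpack#111 vocabulary.

```python
import json

NEEDS_REVIEW = "NEEDS_REVIEW"

def parse_config(config: dict) -> dict:
    """Map a few HuggingFace config fields to (hypothetical) spec fields,
    marking anything missing for human review instead of crashing."""
    return {
        "model_type": config.get("model_type", NEEDS_REVIEW),
        "hidden_size": config.get("hidden_size", NEEDS_REVIEW),
        "num_layers": config.get("num_hidden_layers", NEEDS_REVIEW),
        "num_attention_heads": config.get("num_attention_heads", NEEDS_REVIEW),
    }

cfg = json.loads('{"model_type": "mistral", "hidden_size": 4096, "num_hidden_layers": 32}')
spec = parse_config(cfg)
print(spec)
```

The actual parser adds per-architecture handling (MLA fields, MoE routing, mixed layers) on top of this basic mapping.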
Summary of Changes (Gemini Code Assist)

This pull request delivers a utility for standardizing the representation of HuggingFace transformer model configurations. By translating diverse `config.json` structures into a unified ModelPack specification, it streamlines integrating and analyzing large language models, ensuring consistency and reducing manual effort in model definition.
Code Review
This pull request introduces a valuable tool for converting HuggingFace model configurations into the ModelPack transformer specification format. The implementation is well-structured and covers a good range of model architectures and their specific parameters. The inclusion of a comprehensive test suite is also a great practice. I've identified a bug in the YAML output generation that could lead to invalid output, a minor documentation inconsistency, and an opportunity for a small refactoring to improve maintainability. Overall, this is a solid contribution.
```python
if value == NEEDS_REVIEW:
    lines.append(f"{prefix}{key}: {value} # requires human review")
else:
    lines.append(f'{prefix}{key}: "{value}"')
```
The current string formatting for YAML output does not escape special characters such as double quotes within the string value, which can produce invalid YAML. For example, a value of `a "b" c` would be rendered as `key: "a "b" c"`, which is not valid. You can fix this with `json.dumps()`, which correctly handles string escaping for JSON, and the escaped output is also valid YAML.
```diff
-lines.append(f'{prefix}{key}: "{value}"')
+lines.append(f'{prefix}{key}: {json.dumps(value)}')
```
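The reviewer's point can be demonstrated directly: naive f-string quoting breaks on embedded double quotes, while `json.dumps()` emits a correctly escaped string that a YAML parser can read back.

```python
import json

value = 'a "b" c'

naive = f'key: "{value}"'              # key: "a "b" c"  -- invalid YAML
escaped = f'key: {json.dumps(value)}'  # key: "a \"b\" c" -- valid YAML

print(naive)
print(escaped)
```

This works because YAML's double-quoted scalar style uses the same backslash escapes as JSON for quotes, so any `json.dumps()` string output is a valid YAML scalar.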
```
Usage:
    python tools/hf_parser.py meta-llama/Meta-Llama-3-8B
    python tools/hf_parser.py mistralai/Mistral-7B-v0.3
    python tools/hf_parser.py --file path/to/config.json
```
The usage example in the docstring mentions a --file flag, but the script's argument parser is implemented to accept a positional argument for both model IDs and file paths. This example should be updated to reflect the actual implementation for clarity.
```diff
-    python tools/hf_parser.py --file path/to/config.json
+    python tools/hf_parser.py path/to/config.json
```
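For context, a minimal sketch of the positional-argument interface the review describes: a single `source` argument that accepts either a HuggingFace model ID or a local path to `config.json`. The names and the path heuristic here are illustrative assumptions, not the PR's actual code.

```python
import argparse
from pathlib import Path

parser = argparse.ArgumentParser(prog="hf_parser.py")
parser.add_argument("source", help="HuggingFace model ID or path to config.json")

args = parser.parse_args(["path/to/config.json"])
# Simple heuristic: treat anything ending in .json as a local file path.
is_local_file = Path(args.source).suffix == ".json"
print(args.source, is_local_file)
```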
```python
use_gated = model_type in (
    "llama", "mistral", "mixtral", "qwen2", "qwen2_moe", "phi3",
    "gemma", "gemma2", "deepseek_v2", "deepseek_v3",
)
```
For checking membership, using a set is more idiomatic and performant than a tuple, especially as the list of models grows. I suggest converting this tuple to a set. For even better maintainability, you could define this as a module-level constant.
```diff
-use_gated = model_type in (
-    "llama", "mistral", "mixtral", "qwen2", "qwen2_moe", "phi3",
-    "gemma", "gemma2", "deepseek_v2", "deepseek_v3",
-)
+use_gated = model_type in {
+    "llama", "mistral", "mixtral", "qwen2", "qwen2_moe", "phi3",
+    "gemma", "gemma2", "deepseek_v2", "deepseek_v3",
+}
```
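The reviewer's follow-up suggestion (a module-level constant) could look like the sketch below; the constant and function names are hypothetical, not taken from the PR.

```python
# Hoist the membership set to a module-level constant so new gated-MLP
# model types are registered in one place.
GATED_MLP_MODEL_TYPES = frozenset({
    "llama", "mistral", "mixtral", "qwen2", "qwen2_moe", "phi3",
    "gemma", "gemma2", "deepseek_v2", "deepseek_v3",
})

def uses_gated_mlp(model_type: str) -> bool:
    """Return True if this architecture's MLP block is gated."""
    return model_type in GATED_MLP_MODEL_TYPES
```

With this, `uses_gated_mlp("mixtral")` returns `True` and any unlisted type such as `"gpt2"` returns `False`; a `frozenset` also prevents accidental mutation of the registry.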
Summary
- `tools/hf_parser.py` that converts HuggingFace `config.json` into ModelPack transformer spec format using PR [WIP] feat: add model architecture configuration #111's vocabulary (issue Transformer specification and auto-generation method for the existing models #164)
- `NEEDS_REVIEW` fallback for unknown models
- `tools/hf_parser_test.py` with 26 unit tests covering all supported architectures
- MLA attention fields (`kv_lora_rank`, `q_lora_rank`, `qk_nope_head_dim`), DeepSeek MoE routing params, and mixed layer detection

Features
- MLA attention: `kv_lora_rank`, `q_lora_rank`, `qk_nope_head_dim`, `qk_rope_head_dim`, `v_head_dim`
- DeepSeek MoE routing: `routed_scaling_factor`, `topk_method`, `norm_topk_prob`
- Mixed layers: `first_k_dense_replace`, `moe_layer_freq`
- Unrecognized values marked `NEEDS_REVIEW` for human verification

Usage
Test plan

- 26 unit tests pass (`pytest tools/hf_parser_test.py -v`)
- Unknown models produce `NEEDS_REVIEW` markers instead of crashing

Relates to #164