Pull Request: Adding HiRA integration into PEFT library #2668
Conversation
BenjaminBossan left a comment:
Thanks for this PR to add HiRA to PEFT. The method looks promising and the provided code is already quite mature.
When I started reading the paper, I was at first reminded of FedPara, aka LoHa, which is already integrated into PEFT, as that method also relies on the Hadamard product. However, IIUC, the two methods are still distinct: HiRA basically corresponds to LoRA, but instead of adding dW, we multiply it. In that way, it is much closer to LoRA than to LoHa. Still, I wanted to flag this, as I'm not sure you are aware (your paper doesn't seem to reference FedPara).
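For reference, the distinction can be sketched roughly as follows (LoRA-style notation, not the papers' exact formulations):

$$\text{LoRA: } W' = W_0 + BA \qquad \text{LoHa: } W' = W_0 + (B_1 A_1) \odot (B_2 A_2) \qquad \text{HiRA: } W' = W_0 + W_0 \odot (BA)$$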
At the moment, I haven't done a full in-depth review, but I think that makes more sense once we have completed the next step.
I noticed that you have formatted some unrelated files in method_comparison, could you please undo those changes? Usually, when you run make style, that directory should not be included.
I think a good next step is to add HiRA to the testing matrix we have in PEFT. For now, let's add some entries similar to the ones you can find here:
peft/tests/test_custom_models.py
Lines 70 to 72 in 92d65ca
| ("Vanilla MLP 1 LoRA", "MLP", LoraConfig, {"target_modules": "lin0"}), | |
| ("Vanilla MLP 2 LoRA", "MLP", LoraConfig, {"target_modules": ["lin0"]}), | |
| ("Vanilla MLP 3 LoRA", "MLP", LoraConfig, {"target_modules": ["lin1"]}), |
Since you also support embedding and conv layers, please make sure to include examples with those layers as well (basically, copy the relevant examples from LoRA and adjust them).
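For illustration, the new rows could look roughly like this (a sketch only; HiraConfig and the case names are assumptions to be adapted to that file's conventions):

```python
# Illustrative sketch only: mirrors the LoRA rows quoted above. HiraConfig is
# assumed to be exported from peft once this PR is merged; adjust entries to
# the models actually defined in tests/test_custom_models.py.
from peft import HiraConfig

TEST_CASES = [
    ("Vanilla MLP 1 HiRA", "MLP", HiraConfig, {"target_modules": "lin0"}),
    ("Vanilla MLP 2 HiRA", "MLP", HiraConfig, {"target_modules": ["lin0"]}),
    ("Vanilla MLP 3 HiRA", "MLP", HiraConfig, {"target_modules": ["lin1"]}),
]
```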
Then, please run pytest tests/test_custom_models.py -k "hira and not shira" -v and see if those tests pass. Once we get there, we can discuss the best next steps.
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
@hqsiswiliam Do you still plan on working on this PR?
Hi, BenjaminBossan. Thanks for checking in! I’ll continue working on this PR over the next few days.
- rename HiRAConfig -> HiraConfig
- rename HiRAModel -> HiraModel
- remove _enable_peft_forward_hooks
- rename HiRALayer -> HiraLayer
…ed-on-lora/model.py Simplify HiRA model to match latest LoRA structure
…h-latest-lora/model.py Align HiRA model with LoRA model
@hqsiswiliam Once you're done with your changes, please ping me so that I know the PR is ready for review.
- remove _custom_modules
- fix NaN error when unmerging
- fix unmerge behaviour: only replace 0 with tiny numbers for numerical stability
Hi @BenjaminBossan, thanks for the heads-up. I’ve completed the updates, and the PR is ready for review whenever you have time.
BenjaminBossan left a comment:
Thanks a lot for all the updates. I have checked them and did another review of the PR. There is still a bit of work to do, but the rest should be relatively simple.
There is now a merge conflict, which is fortunately easy to resolve. Could you please merge with/rebase on the latest main and fix it?
Also, as we're nearing completion, let's add an example to the examples/ folder. You can add a new one, perhaps something from the paper, or copy an existing one and modify it to use HiRA.
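For illustration, the core of such an example could be as small as the following sketch (the base model and the HiraConfig arguments are placeholders, not the final API):

```python
# Rough sketch for a possible examples/ script. HiraConfig comes from this PR
# and its argument names are assumed; the base model is just a small placeholder.
from transformers import AutoModelForCausalLM
from peft import HiraConfig, get_peft_model

base_model = AutoModelForCausalLM.from_pretrained("facebook/opt-125m")
config = HiraConfig(r=16, target_modules=["q_proj", "v_proj"])
model = get_peft_model(base_model, config)
model.print_trainable_parameters()
# ... training then proceeds exactly as in the existing LoRA examples.
```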
Regarding testing, the existing tests are quite comprehensive but there are still uncovered cases. I added some comments about this, but there is more: HiRA tests should be added for decoder models, encoder-decoder models, etc. This is quite straightforward. Check for instance how this was added for another PEFT method and take the same approach.
src/peft/tuners/hira/__init__.py (outdated)
return Linear4bit
Let's remove this EETQ code.
@@ -0,0 +1,90 @@
# HiRA
Let's also add an entry to the _toctree.yml or else this won't appear in the docs.
src/peft/tuners/hira/config.py (outdated)
layers_pattern (`Optional[Union[List[str], str]]`):
    The layer pattern name, used only if `layers_to_transform` is different from `None`. This should target the
    `nn.ModuleList` of the model, which is often called `'layers'` or `'h'`.
r_pattern (`dict`):
For consistency with other PEFT methods, let's call this rank_pattern.
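For reference, the intended semantics would presumably mirror LoraConfig.rank_pattern, roughly like this (a sketch; the exact field names depend on this PR's final API):

```python
# Sketch: rank_pattern maps layer-name patterns to ranks that override the
# default `r`, mirroring LoraConfig.rank_pattern. HiraConfig and its arguments
# are assumptions based on this PR.
from peft import HiraConfig

config = HiraConfig(
    r=8,                                  # default rank for all targeted layers
    target_modules=["q_proj", "v_proj"],
    rank_pattern={"q_proj": 16},          # per-layer override for matching layers
)
```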
src/peft/tuners/hira/model.py (outdated)
def _check_merge_allowed(self):
    """Verify that the configuration supports merging.
    Currently gptq quantization and replicated layers do not support merging.
There are no GPTQ layers for HiRA, so let's completely remove this method.
return new_module

@contextmanager
def _enable_peft_forward_hooks(self, *args, **kwargs):
Just so we're on the same page: by providing this method, we want HiRA to allow mixed adapter batches. If this is the idea, please also add tests for this. That means adding HiRA to the test matrix here:
peft/tests/test_custom_models.py
Line 5675 in 4731379
MIXED_ADAPTER_TEST_CASES = [
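(For context, mixed adapter batches route each sample in a batch through the adapter named at its position. Below is a minimal sketch using LoRA as a stand-in, since LoRA already supports this; a HiRA version would swap in HiraConfig.)

```python
# Minimal sketch of mixed adapter batches, using LoRA as a stand-in; a HiRA
# version would swap in HiraConfig. Model and adapter names are placeholders.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

model_id = "hf-internal-testing/tiny-random-OPTForCausalLM"
tokenizer = AutoTokenizer.from_pretrained(model_id)
base_model = AutoModelForCausalLM.from_pretrained(model_id)

peft_model = get_peft_model(base_model, LoraConfig(target_modules=["q_proj"]))
peft_model.add_adapter("other", LoraConfig(target_modules=["q_proj"]))
peft_model.eval()

batch = tokenizer(["first", "second", "third"], return_tensors="pt", padding=True)
with torch.no_grad():
    # each sample is routed through the adapter named at its position
    out = peft_model(**batch, adapter_names=["default", "other", "default"])
```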
Let's also add HiRA to the test matrix for testing multiple adapters:
peft/tests/test_custom_models.py
Line 945 in 4731379
MULTIPLE_ACTIVE_ADAPTERS_TEST_CASES = [
tests/test_custom_models.py (outdated)
lr = 0.5
- if config_kwargs.get("use_dora"):
+ if config_kwargs.get("use_dora") or config.__class__ == HiraConfig:
Suggested change:
- if config_kwargs.get("use_dora") or config.__class__ == HiraConfig:
+ if config_kwargs.get("use_dora") or (config_cls == HiraConfig):
tests/test_hira.py (outdated)
from peft.tuners.hira import Linear

def test_manual_hira_linear_equivalence():
I think this test (and thus the whole file) can be removed. We're really just re-implementing the forward method and checking that the results are the same. I don't think it adds any real value, we don't have this for any other PEFT method, and I don't believe HiRA is special in this regard.
- modify config.__class__ to config_cls
- remove _check_merge_allowed in hira model
- adding HiRA cases to `MULTIPLE_ACTIVE_ADAPTERS_TEST_CASES`.
Feature request
This request proposes integrating HiRA (Hadamard High-Rank Adaptation), as described in the ICLR 2025 oral paper (https://openreview.net/pdf?id=TwJrTz9cRS, https://iclr.cc/virtual/2025/oral/31839) and implemented in the hqsiswiliam/hira repository, into the core PEFT library. This will enable users to apply HiRA through the familiar get_peft_model API and benefit from its high-rank updates without adding any inference overhead.
Motivation
General Motivation
PEFT methods like LoRA achieve parameter-efficient fine-tuning by injecting low-rank updates into pre-trained weights. While effective, purely low-rank adaptation can struggle to capture complex patterns in large language models.
1. Expressiveness grows with the rank
Empirically, increasing the LoRA rank in LLM training yields better downstream performance:
(Figure: higher LoRA rank correlates with improved task accuracy.)
2. HiRA: Hadamard high-rank updates without extra parameters
HiRA sidesteps the expressiveness constraint by computing a Hadamard-enhanced update:
(Figure: HiRA uses the Hadamard product to inject high-rank structure into the frozen weight matrix.)
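A rough numerical sketch of the idea (generic low-rank factors; not the paper's exact parameterization or scaling):

```python
# Toy illustration: the Hadamard product with the frozen weight W0 lifts the
# rank of the update far beyond r, while the trainable parameters (A, B) stay
# the same size as in LoRA. Shapes are arbitrary.
import torch

d, k, r = 64, 32, 4
W0 = torch.randn(d, k)                       # frozen pre-trained weight
A, B = torch.randn(d, r), torch.randn(r, k)  # trainable low-rank factors

delta_lora = A @ B                           # LoRA update: rank <= r
delta_hira = W0 * (A @ B)                    # HiRA update: Hadamard product with W0
print(torch.linalg.matrix_rank(delta_lora).item())  # 4
print(torch.linalg.matrix_rank(delta_hira).item())  # typically 32 (full rank)
```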
3. Singular-value patterns
After training, HiRA exhibits a rich singular-value pattern, akin to full-rank fine-tuning (FFT), indicating its ability to model complex transformations without FFT's computational cost:
(Figure: HiRA’s singular-value distribution closely mirrors that of FFT.)
4. Performance gains
Across commonsense reasoning benchmarks, HiRA outperforms LoRA and other PEFT baselines:
(Figure: HiRA delivers notable accuracy improvements over baseline adapters.)
5. No extra parameter or compute cost
Despite its high-rank behaviour, HiRA introduces no additional trainable parameters compared to LoRA:
(Figure: HiRA matches LoRA’s GPU memory usage and training hours.)
6. Complementary with LoRA (HiLoRA)
Combining HiRA and LoRA into a hybrid “HiLoRA” setup yields even stronger results than either method alone:
(Figure: HiLoRA leverages both low-rank and Hadamard high-rank updates for better expressiveness.)
By integrating HiRA into PEFT, users gain richer adaptation capability without sacrificing the parameter efficiency and usability that PEFT provides.
Your contribution
We would be pleased to submit a pull request integrating the HiRA implementation into the PEFT framework. We welcome any suggestions for alternative integration approaches and appreciate any guidance on best practices.
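For illustration, the intended user-facing workflow would look roughly like the sketch below (HiraConfig and its arguments are assumptions pending the final API in the pull request):

```python
# Sketch of the intended get_peft_model workflow; HiraConfig and its argument
# names are assumptions based on this proposal and may differ once merged.
from transformers import AutoModelForSequenceClassification
from peft import HiraConfig, get_peft_model

base = AutoModelForSequenceClassification.from_pretrained("roberta-base", num_labels=2)
config = HiraConfig(r=8, target_modules=["query", "value"])
model = get_peft_model(base, config)
model.print_trainable_parameters()

# train as usual, then save only the adapter weights
model.save_pretrained("hira-roberta-adapter")
```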