Add support for QAT full fine-tuning #3238
Merged
+54 −40
Conversation
Contributor
@andrewor14 it's reversed, I think.
Contributor (Author)
Thanks, fixed.
8063d27 to 11326df
danielhanchen requested changes on Sep 8, 2025
Contributor
danielhanchen left a comment
Nice work! Some small changes
11326df to b7ee3e9
Contributor (Author)
Thanks, just fixed.
**Summary:** Following unslothai#2976, which adds support for QAT + LoRA, this PR adds support for QAT during full fine-tuning. See the [torchao QAT README](https://github.com/pytorch/ao/blob/main/torchao/quantization/qat/README.md) for more details.

Current QAT schemes supported are:

```
fp8-int4, targeting the torch.ops.fbgemm.f8i4bf16_shuffled kernel
fp8-fp8,  targeting the torch.ops.fbgemm.f8f8bf16_rowwise kernel
```

**Test Plan:** https://gist.github.com/andrewor14/048b5c1bd01b7fa23c53913856a8ef9f

Full fine-tuning Llama3.1-8B with and without QAT on `yahma/alpaca-cleaned` for 1 epoch:
- Batch size = 16 (no grad accum)
- Learning rate = 4e-5
- Quantization scheme = fp8-int4

Wikitext perplexity:
- QAT improved perplexity by 19.2% compared to regular fine-tuning
- QAT's int4 quantized model even outperformed the bf16 baseline
- Regular int4 quantized model (without QAT) was significantly worse than the bf16 baseline

```
==> unsloth_model_full_baseline_output/eval_float.log <==
| | |none | 0|word_perplexity|↓ | 9.8446|± | N/A|

==> unsloth_model_full_baseline_output/eval_quantized.log <==
| | |none | 0|word_perplexity|↓ |11.4595|± | N/A|

==> unsloth_model_full_qat_fp8-int4_output/eval_quantized.log <==
| | |none | 0|word_perplexity|↓ | 9.2336|± | N/A|
```

Fibonacci test:
- Both bf16 baseline and int4 quantized models correctly identified 13 as the next number
- QAT quantized model was more succinct in its response
- No substantial differences here

```
### Instruction:
Continue the fibonnaci sequence.

### Input:
1, 1, 2, 3, 5, 8

==> unsloth_model_full_baseline_output/eval_float.log <==
### Response:
The next number in the Fibonacci sequence is 13.<|end_of_text|>

==> unsloth_model_full_baseline_output/eval_quantized.log <==
### Response:
The next number in the Fibonacci sequence is 13.<|end_of_text|>

==> unsloth_model_full_qat_fp8-int4_output/eval_quantized.log <==
### Response:
13<|end_of_text|>
```
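For context, a minimal sketch of what launching one of the QAT full fine-tuning runs from the test plan might look like. The `qat_scheme` argument name mirrors the QAT + LoRA PR (#2976) and is an assumption here, as are the exact trainer settings beyond those listed above; see the merged diff and the linked gist for the actual script.

```python
# Hedged sketch of a QAT full fine-tuning run matching the test plan above.
# Assumptions: `full_finetuning=True` selects full fine-tuning in
# FastLanguageModel.from_pretrained, and `qat_scheme="fp8-int4"` picks the
# QAT scheme described in this PR; check the merged diff for the real API.
from unsloth import FastLanguageModel  # import unsloth first so its patches apply
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Meta-Llama-3.1-8B",
    max_seq_length=2048,
    full_finetuning=True,   # full fine-tuning rather than LoRA
    qat_scheme="fp8-int4",  # assumption: scheme string from the PR description
)

# Alpaca-style dataset used in the test plan; prompt formatting is omitted
# here for brevity (the gist linked above contains the full script).
dataset = load_dataset("yahma/alpaca-cleaned", split="train")

trainer = SFTTrainer(
    model=model,
    processing_class=tokenizer,
    train_dataset=dataset,
    args=SFTConfig(
        per_device_train_batch_size=16,  # batch size 16, no grad accum
        gradient_accumulation_steps=1,
        learning_rate=4e-5,
        num_train_epochs=1,
        output_dir="unsloth_model_full_qat_fp8-int4_output",
    ),
)
trainer.train()
```

The `eval_quantized.log` results above come from quantizing the trained checkpoint and evaluating wikitext perplexity afterwards; that convert-and-evaluate step is part of the linked gist rather than this sketch.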
b7ee3e9 to 52a507f