1 parent 90e21b5 commit dd1d51a
labml_nn/lora/__init__.py
@@ -14,7 +14,7 @@

Low-Rank Adaptation (LoRA) freezes pre-trained model weights and injects
trainable rank decomposition matrices into each layer of the transformer.
- This makes it possible to efficiently fine-tune large langauge models by
+ This makes it possible to efficiently fine-tune large language models by
reducing trainable parameters by a large factor.

Here's [the training code](experiment.html) for training a GPT2 model with LoRA
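
For context on the idea the docstring describes, here is a minimal, illustrative sketch of a LoRA linear layer in PyTorch. It is not the labml_nn implementation; the class name, the rank `r`, the `alpha` scaling, and the initialization are assumptions chosen only to show the frozen-weight-plus-trainable-low-rank-update pattern.

```python
import torch
import torch.nn as nn


class LoRALinear(nn.Module):
    """Sketch of a LoRA layer: frozen pre-trained weight W plus a trainable
    low-rank update B @ A (hypothetical example, not the repo's class)."""

    def __init__(self, in_features: int, out_features: int, r: int = 4, alpha: int = 8):
        super().__init__()
        # Stand-in for the pre-trained weight; frozen, so it is not trained.
        self.weight = nn.Parameter(torch.randn(out_features, in_features), requires_grad=False)
        # Trainable rank-decomposition matrices: A is (r, in), B is (out, r).
        self.lora_a = nn.Parameter(torch.randn(r, in_features) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(out_features, r))
        self.scaling = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen path W x plus the low-rank update (B A) x, scaled by alpha / r.
        return x @ self.weight.T + (x @ self.lora_a.T @ self.lora_b.T) * self.scaling


# Only the low-rank matrices are trainable, which is where the parameter saving comes from.
layer = LoRALinear(768, 768, r=4)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
total = sum(p.numel() for p in layer.parameters())
print(f"trainable: {trainable} / total: {total}")
```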