Fine-tuning
Fine-tuning is the process of taking a pre-trained machine learning model and training it further on a specific dataset to improve its performance on a particular task.
Fine-tuning applies transfer learning: a model initially trained on a broad dataset (e.g., a large language model trained on a massive text corpus) is adapted to a more specialized task or dataset. This involves adjusting the model's weights using the new data, typically with a lower learning rate than during the initial training. Fine-tuning achieves high performance with far less data and compute than training a model from scratch.
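A minimal, self-contained sketch of the core idea: start from pretrained weights and continue gradient descent on a small task dataset with a lower learning rate. The one-parameter linear model and both datasets below are invented purely for illustration.

```python
# Toy model y = w * x trained with plain gradient descent on mean squared error.
# "Pre-training" fits w on a broad dataset; "fine-tuning" then nudges the same
# weight on a small task dataset using a smaller learning rate.

def train(w, data, lr, steps=200):
    """One-parameter linear regression via gradient descent (illustrative)."""
    for _ in range(steps):
        # MSE gradient: d/dw (w*x - y)^2 = 2 * (w*x - y) * x, averaged over data
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad
    return w

# Pre-training on broad (made-up) data where y ≈ 2x.
broad_data = [(float(x), 2.0 * x) for x in range(1, 11)]
w_pretrained = train(0.0, broad_data, lr=0.01)

# Fine-tuning on a small task dataset where y ≈ 2.5x, with a 10x lower
# learning rate so the pretrained weight shifts gently toward the new task.
task_data = [(1.0, 2.5), (2.0, 5.0), (3.0, 7.5)]
w_finetuned = train(w_pretrained, task_data, lr=0.001, steps=500)

print(round(w_pretrained, 2))  # close to 2.0
print(round(w_finetuned, 2))   # moved toward 2.5
```

The same weight is reused rather than re-initialized, which is what distinguishes fine-tuning from training from scratch; the lower learning rate keeps the update from overshooting what pre-training already learned.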
graph LR
Center["Fine-tuning"]:::main
Pre_machine_learning["machine-learning"]:::pre --> Center
click Pre_machine_learning "/terms/machine-learning"
Pre_large_language_model["large-language-model"]:::pre --> Center
click Pre_large_language_model "/terms/large-language-model"
Center --> Child_lora["lora"]:::child
click Child_lora "/terms/lora"
Center --> Child_rlhf["rlhf"]:::child
click Child_rlhf "/terms/rlhf"
Rel_front_running["front-running"]:::related -.-> Center
click Rel_front_running "/terms/front-running"
Rel_inference["inference"]:::related -.-> Center
click Rel_inference "/terms/inference"
classDef main fill:#7c3aed,stroke:#8b5cf6,stroke-width:2px,color:white,font-weight:bold,rx:5,ry:5;
classDef pre fill:#0f172a,stroke:#3b82f6,color:#94a3b8,rx:5,ry:5;
classDef child fill:#0f172a,stroke:#10b981,color:#94a3b8,rx:5,ry:5;
classDef related fill:#0f172a,stroke:#8b5cf6,stroke-dasharray: 5 5,color:#94a3b8,rx:5,ry:5;
linkStyle default stroke:#4b5563,stroke-width:2px;
🧠 Knowledge Check
🧒 Explain Like I'm 5
It's like taking a chef who knows how to cook many things (pre-trained model) and teaching them your specific family recipes (new dataset) so they become great at cooking just your favorite dishes.
🤓 Expert Deep Dive
Fine-tuning operates on the principle that representations learned on large-scale, diverse datasets capture fundamental patterns applicable to related tasks. In deep learning, this typically involves adjusting the weights of a pre-trained network (e.g., ResNet, BERT) using backpropagation on a target dataset. The learning rate is often set significantly lower than during pre-training to avoid drastic weight updates that could disrupt the learned features.

Layer freezing is a common strategy: earlier layers capturing low-level features (e.g., edges and textures in images; word embeddings in text) are often frozen, while later layers capturing more task-specific features are fine-tuned. Alternatively, adapter modules can be inserted between layers, allowing task-specific parameters to be learned while keeping the original model weights fixed.

The effectiveness of fine-tuning relies heavily on the similarity between the pre-training and fine-tuning data distributions and tasks; domain shift can necessitate more extensive fine-tuning or different adaptation strategies. Overfitting remains a primary concern, especially with very small target datasets, and is often mitigated by regularization techniques or early stopping.