
From Evaluation to Deployment — The Complete Fine-tuning Guide

Evaluate with Perplexity and KoBEST benchmarks, merge LoRA weights, and deploy with vLLM/Ollama/HuggingFace Spaces.

Series: Part 1: LoRA Theory | Part 2: QLoRA + Korean | Part 3 (this post)

In Part 1 we covered LoRA fundamentals and ran our first fine-tuning. In Part 2 we tackled QLoRA and Korean dataset construction. Training is done. Now two questions remain:

  1. Did the model actually improve? (Evaluation)
  2. How do we serve it to users? (Deployment)
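On the evaluation side, perplexity (the first metric named above) is simply the exponentiated mean negative log-likelihood per token: lower perplexity means the model assigns higher probability to held-out text. A minimal sketch of the computation, assuming you already have per-token log-probabilities from your model (the helper name `perplexity` is ours, not from the post):

```python
import math

def perplexity(token_logprobs):
    """Perplexity = exp(mean negative log-likelihood).

    `token_logprobs` is a sequence of natural-log probabilities,
    one per token, as produced by a language model on held-out text.
    """
    n = len(token_logprobs)
    nll = -sum(token_logprobs) / n  # average negative log-likelihood
    return math.exp(nll)

# A model that assigns every token probability 0.25 has perplexity 4,
# i.e. it is as uncertain as a uniform choice over 4 tokens:
print(perplexity([math.log(0.25)] * 10))
```

Comparing this number for the base model and the fine-tuned model on the same Korean held-out set is the quickest sanity check that training actually helped.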