Rami Krispin’s Data Science Channel
May 17, 2025 at 10:05 PM
Fine-Tuning Local Models with LoRA in Python
LoRA (Low-Rank Adaptation) is a technique for fine-tuning large language models by injecting trainable low-rank matrices into the model's layers, allowing adaptation with significantly fewer trainable parameters. This makes training faster, more memory-efficient, and cheaper, while leaving the original model weights frozen. This one-hour tutorial by NeuralNine covers both the theory and practice of fine-tuning LLMs with LoRA in Python, including:
✅ Theory & Mathematics 
✅ Fine-Tuning on Math Problems 
✅ Evaluation of Math Problems 
✅ Fine-Tuning on Custom Data 
✅ Evaluation of Custom Data
https://www.youtube.com/watch?v=XDOSVh9jJiA
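The core idea can be sketched in a few lines of NumPy (this is an illustrative sketch, not code from the tutorial; the dimensions, rank, and alpha values are assumptions chosen for the example): instead of learning a full weight update dW of shape (d_out, d_in), LoRA learns two small matrices B (d_out, r) and A (r, d_in) with r much smaller than the layer dimensions, and adds their scaled product on top of the frozen weight.

```python
import numpy as np

# Hypothetical dimensions for one linear layer (e.g. a 768-wide attention
# projection); r and alpha are typical LoRA hyperparameters, not values
# from the tutorial.
d_out, d_in, r = 768, 768, 8
alpha = 16  # LoRA scaling factor

rng = np.random.default_rng(0)
W = rng.standard_normal((d_out, d_in))      # frozen pretrained weight
A = rng.standard_normal((r, d_in)) * 0.01   # trainable low-rank factor
B = np.zeros((d_out, r))                    # trainable, zero-initialized so
                                            # the adapter starts as a no-op

def adapted_forward(x):
    # Forward pass with the LoRA adapter: W stays untouched, only the
    # low-rank correction (alpha / r) * B @ A is learned.
    return W @ x + (alpha / r) * (B @ (A @ x))

full_params = d_out * d_in        # parameters in a full weight update
lora_params = r * (d_in + d_out)  # parameters in the low-rank update
print(f"full update params: {full_params:,}")  # 589,824
print(f"LoRA params:        {lora_params:,}")  # 12,288
```

With rank 8 the trainable-parameter count drops by roughly 50x for this layer, which is where the memory and cost savings come from. In practice you would use a library such as Hugging Face PEFT rather than hand-rolling the matrices, as the tutorial demonstrates.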