Python & AI Jobs
June 11, 2025 at 04:09 PM
*Stanford packed 1.5 hours with everything you need to know about LLMs.*

Here are 5 lessons that stood out from the lecture:

1/ Architecture ≠ Everything
→ Transformers aren’t the bottleneck anymore.
→ In practice, data quality, evaluation design, and system efficiency drive real gains.

2/ Tokenizers Are Underrated
→ A single tokenization choice can break performance on math, code, or logic.
→ Many models fail to generalize numerically because “327” might be one token while “328” is split into several (see the tokenizer sketch after this list).

3/ Scaling Laws Guide Everything
→ More data + bigger models = lower loss, and the gain is predictable.
→ You can estimate how much performance you’ll get before you ever train (see the scaling-law sketch below).

4/ Post-training = The Real Upgrade
→ SFT teaches the model how to behave like an assistant.
→ RLHF and DPO tune what it says and how it says it (a minimal DPO loss is sketched below).

5/ Training is 90% Logistics
→ The web is dirty: deduplication, PII filtering, and domain weighting are massive jobs.
→ Good data isn’t scraped; it’s curated, reweighted, and post-processed for weeks (a toy dedup pass is sketched below).

Source: https://youtu.be/9vM4p9NN0Ts?si=RWJ_ap8sTaw4XmR-
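On point 2/, here’s a quick way to see how a BPE vocabulary splits neighboring numbers. It assumes the tiktoken package is installed; which numbers land on one token vs. several depends entirely on the vocabulary, so treat the specific 327/328 example as illustrative.

```python
# Inspect how a BPE vocabulary splits nearby numbers.
# Assumes the `tiktoken` package (pip install tiktoken); any
# tokenizer with an encode() method illustrates the same point.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

for text in ["327", "328", "1000", "1001"]:
    ids = enc.encode(text)
    pieces = [enc.decode([i]) for i in ids]
    print(f"{text!r} -> {len(ids)} token(s): {pieces}")
```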
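On point 3/, a Chinchilla-style scaling law is what makes the gain predictable: loss is modeled as a simple function of parameter count and token count. The sketch below uses the constants fitted by Hoffmann et al. (2022); your own setup would need its own fit, so the numbers are illustrative only.

```python
# Predict pre-training loss from model size N (parameters) and
# data size D (tokens) with a Chinchilla-style scaling law:
#     L(N, D) = E + A / N**alpha + B / D**beta
# Constants are the fits reported by Hoffmann et al. (2022);
# treat them as illustrative -- your setup will have its own fit.

def predicted_loss(n_params: float, n_tokens: float,
                   E: float = 1.69, A: float = 406.4, B: float = 410.7,
                   alpha: float = 0.34, beta: float = 0.28) -> float:
    return E + A / n_params**alpha + B / n_tokens**beta

# Example: a 7B-parameter model on 1.4T vs. 2.8T training tokens.
print(predicted_loss(7e9, 1.4e12))  # baseline
print(predicted_loss(7e9, 2.8e12))  # doubling data: the loss drop is predictable
```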
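On point 4/, here is a minimal sketch of the DPO objective from Rafailov et al. (2023). It assumes you already have the summed log-probabilities of the preferred and rejected completions under the policy being trained and under a frozen reference model; the function and argument names are our own, not from the lecture.

```python
# Minimal DPO loss over a batch of preference pairs.
# Inputs are summed per-sequence log-probs under the trained policy
# and under a frozen reference model (names are illustrative).
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logp: torch.Tensor,
             policy_rejected_logp: torch.Tensor,
             ref_chosen_logp: torch.Tensor,
             ref_rejected_logp: torch.Tensor,
             beta: float = 0.1) -> torch.Tensor:
    # How much more the policy prefers chosen over rejected,
    # relative to the reference model.
    chosen_ratio = policy_chosen_logp - ref_chosen_logp
    rejected_ratio = policy_rejected_logp - ref_rejected_logp
    logits = beta * (chosen_ratio - rejected_ratio)
    # -log sigmoid(logits), averaged over the batch.
    return -F.logsigmoid(logits).mean()
```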
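On point 5/, a toy exact-deduplication pass shows the shape of the job. Real web-scale pipelines layer fuzzy matching (e.g. MinHash) on top of this; the sketch only demonstrates the basic idea.

```python
# Toy exact-dedup pass over a stream of documents.
# Lightly normalizes text so trivial whitespace/case changes collapse.
import hashlib

def dedup(docs):
    seen = set()
    for doc in docs:
        key = hashlib.sha256(" ".join(doc.lower().split()).encode()).hexdigest()
        if key not in seen:
            seen.add(key)
            yield doc

corpus = ["Hello  world", "hello world", "Something else"]
print(list(dedup(corpus)))  # the near-identical first two collapse to one
```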