What is Zero-Shot Learning?
Zero-shot learning is an LLM's ability to perform a task with only instructions and no examples — relying entirely on the model's pre-trained knowledge and instruction-following capabilities.
WHY IT MATTERS
Zero-shot is the most convenient prompting approach: just describe what you want. Modern LLMs are remarkably good at following zero-shot instructions for classification, summarization, translation, code generation, and many other tasks.
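A zero-shot prompt is nothing more than the task instruction plus the input. A minimal sketch (the instruction wording, helper name, and sample input below are illustrative, not from any particular API):

```python
# Sketch of a zero-shot prompt: instructions only, no examples.
# Any chat-style LLM API could consume the resulting string.

def build_zero_shot_prompt(instruction: str, user_input: str) -> str:
    """Combine a task description and an input into a single prompt."""
    return f"{instruction}\n\nInput: {user_input}\nAnswer:"

prompt = build_zero_shot_prompt(
    "Classify the sentiment of the input as Positive, Negative, or Neutral. "
    "Reply with one word.",
    "The battery died after two hours.",
)
print(prompt)
```

The key property is what is absent: no labeled demonstrations, just a clear description of the task and the expected output format.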
Zero-shot performance depends heavily on three factors: model quality (larger models tend to be better at zero-shot), instruction clarity, and task familiarity (tasks resembling the training data work better).
Zero-shot is the default starting point for any LLM application. If zero-shot isn't good enough, add examples (few-shot). If few-shot isn't enough, consider fine-tuning.