Chapter 2

Selecting and customizing foundation models

Zero-shot learning allows non-ML experts to interact with Foundation Models (FMs) through web playgrounds or chat interfaces like ChatGPT. By providing natural language commands (prompts), users can perform tasks such as listing action items from meeting transcripts or translating documents.
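As a concrete illustration, a zero-shot prompt simply describes the task in natural language, with no worked examples. The sketch below builds such a prompt for the action-item task; the transcript and wording are invented for illustration, and the resulting string would be sent to any FM chat or completion endpoint.

```python
# An invented meeting transcript used only to illustrate the prompt shape.
transcript = (
    "Alice: Let's ship the beta on Friday.\n"
    "Bob: I'll finish the billing tests by Thursday.\n"
    "Alice: Carol, can you draft the release notes?"
)

# Zero-shot: the instruction alone defines the task; no examples are given.
prompt = (
    "List the action items from the following meeting transcript, "
    "one per line, with the owner of each item.\n\n"
    f"Transcript:\n{transcript}"
)

print(prompt)
```

The key point is that the model has never been shown what an "action item list" for this user looks like; the instruction alone carries the task definition.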

In-context learning enables developers to improve model outputs by including examples within input prompts. For instance, a prompt to create a social media ad for a product can be enhanced with examples of past ads for similar products.
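The idea can be sketched as a prompt-building function: past ads are prepended as worked examples so the model imitates their style for a new product. The product names and ad copy below are invented for illustration.

```python
# Invented (product, ad) pairs serving as in-context examples.
examples = [
    ("TrailLite hiking boots", "Light on your feet, tough on the trail. TrailLite."),
    ("AquaPure water bottle", "Pure hydration, anywhere. Meet AquaPure."),
]

def few_shot_prompt(product: str) -> str:
    """Build a few-shot prompt: instruction, worked examples, then the new product."""
    lines = ["Write a short social media ad for the product.", ""]
    for name, ad in examples:
        lines.append(f"Product: {name}")
        lines.append(f"Ad: {ad}")
        lines.append("")
    # The final, unanswered "Ad:" cues the model to complete in the same pattern.
    lines.append(f"Product: {product}")
    lines.append("Ad:")
    return "\n".join(lines)

print(few_shot_prompt("SolarGo camping lantern"))
```

No model weights change here; the examples live entirely inside the input, which is what distinguishes in-context learning from fine-tuning.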

Fine-tuning customizes an FM for a specific task by further training it on a small number of labeled examples. This approach is cost-effective, requiring far less labeled data than building a task-specific model from scratch. For example, a recruiting firm can fine-tune an FM on just a handful of resume-summary pairs and then process resumes and generate summaries at scale.
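For the recruiting example, the labeled data might be prepared as prompt/completion pairs in JSON Lines format, which many fine-tuning services accept in some variant; the exact schema varies by provider, and the resume snippets and file name below are invented for illustration.

```python
import json

# Invented resume/summary pairs standing in for a firm's labeled data.
labeled_examples = [
    {
        "prompt": "Summarize this resume:\n10 years as a backend engineer; "
                  "led a team of 5; Python and Go.",
        "completion": "Senior backend engineer with 10 years' experience, "
                      "team-lead background, strong in Python and Go.",
    },
    {
        "prompt": "Summarize this resume:\nRecent graduate; internships in "
                  "data analysis; SQL and Tableau.",
        "completion": "Entry-level analyst with internship experience, "
                      "skilled in SQL and Tableau.",
    },
]

# Write one JSON object per line -- the JSONL layout many
# fine-tuning APIs expect for training data.
with open("resume_summaries.jsonl", "w") as f:
    for example in labeled_examples:
        f.write(json.dumps(example) + "\n")
```

The resulting file would then be uploaded to the provider's fine-tuning endpoint; unlike in-context learning, the examples here update the model itself rather than travel with every request.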