Channel: IBM Technology
Date Published: 2025-08-18
Summary
The video distinguishes prompt engineering from context engineering for large language models (LLMs) and agentic AI. Prompt engineering is the craft of wording the input text (prompts) to steer the LLM's behavior, while context engineering is a broader discipline that covers programmatically assembling everything the LLM sees during inference: prompts, retrieved documents, memory, and tools. The distinction is illustrated through an agentic AI travel booking agent named Graeme, whose initial errors are attributed to insufficient context rather than poor prompt wording.
The video explores various prompt engineering techniques such as role assignment (defining the LLM’s persona), few-shot examples (demonstrating desired input-output pairs), chain-of-thought prompting (forcing the model to show its reasoning), and constraint setting (explicitly defining boundaries for the response).
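The four techniques above can be combined in a single prompt template. The sketch below is illustrative only; the function name, persona wording, and example fares are assumptions, not taken from the video:

```python
def build_prompt(user_query: str) -> str:
    """Assemble a prompt combining role assignment, few-shot examples,
    chain-of-thought prompting, and constraint setting (all wording illustrative)."""
    # Role assignment: define the LLM's persona
    role = "You are a meticulous travel booking assistant."
    # Few-shot example: demonstrate a desired input-output pair
    few_shot = (
        "Example:\n"
        "User: Find a flight from Boston to Denver under $300.\n"
        "Assistant: I found a nonstop economy fare for $275 departing at 9 AM."
    )
    # Chain-of-thought prompting: force the model to show its reasoning
    reasoning = "Think step by step and show your reasoning before the final answer."
    # Constraint setting: explicitly define boundaries for the response
    constraints = "Keep the answer under three sentences. Never invent prices."
    return "\n\n".join([role, few_shot, reasoning, constraints, f"User: {user_query}"])


prompt = build_prompt("Book a hotel in Lisbon for two nights.")
print(prompt)
```

The assembled string would then be sent as the input to whatever LLM API the system uses.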
Context engineering, on the other hand, focuses on building dynamic, agentic systems by managing memory (short-term and long-term), maintaining state (tracking progress in multi-step processes), implementing retrieval-augmented generation (RAG) to connect agents to dynamic knowledge sources, and providing access to tools (enabling agents to interact with databases, APIs, and other systems). The video emphasizes that prompt engineering is a component of context engineering: current context derived from state, memory, and RAG retrievals is dynamically injected into base prompts. Ultimately, the video argues that combining prompt engineering and context engineering leads to better AI systems.
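That dynamic injection can be sketched as follows. This is a minimal sketch under stated assumptions: the class and function names are hypothetical, and `retrieve_documents` stands in for a real RAG retriever such as a vector-database lookup.

```python
from dataclasses import dataclass, field


@dataclass
class AgentContext:
    """Hypothetical container for an agent's context-engineering inputs."""
    short_term_memory: list = field(default_factory=list)  # recent conversation turns
    long_term_memory: list = field(default_factory=list)   # persisted user preferences
    state: dict = field(default_factory=dict)              # progress in a multi-step task


def retrieve_documents(query: str) -> list:
    # Stand-in for a real RAG retrieval step (assumption, not a real API)
    return [f"[doc] Current availability matching: {query}"]


def assemble_context(base_prompt: str, ctx: AgentContext, user_query: str) -> str:
    """Dynamically inject state, memory, and RAG retrievals into the base prompt."""
    sections = [
        base_prompt,
        "State: " + ", ".join(f"{k}={v}" for k, v in ctx.state.items()),
        "Memory: " + " | ".join(ctx.long_term_memory + ctx.short_term_memory[-3:]),
        "Retrieved: " + " | ".join(retrieve_documents(user_query)),
        f"User: {user_query}",
    ]
    return "\n\n".join(sections)


ctx = AgentContext(
    long_term_memory=["User prefers aisle seats."],
    state={"step": "select_hotel"},
)
print(assemble_context("You are a travel booking agent.", ctx, "Two nights in Lisbon"))
```

On each turn the agent rebuilds this assembled string, so the base prompt stays fixed while the injected state, memory, and retrievals change as the task progresses.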
Recommendations
- No recommendations