Prompts 101

The consistency with which you can achieve quality output from an LLM improves dramatically once you have a basic understanding of how LLMs work and their core limitations.
Outputs generated by LLMs are conditioned on the prompt you provide. The prompt is therefore a vehicle for steering generation: it shifts which outputs are most probable. In production systems, you will typically convert prompts into templates: a static core into which dynamic content, such as user_details, is injected to produce tailored results.
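As a minimal sketch of this pattern, the template below combines a static core with per-request dynamic content. The template text and the field names (user_details, question) are illustrative placeholders, not taken from any particular system.

```python
# A minimal prompt-template sketch: a static core with dynamic
# content injected at request time. Field names are illustrative.
PROMPT_TEMPLATE = """You are a helpful assistant.

User profile:
{user_details}

Question:
{question}

Answer concisely, tailoring your response to the user profile."""


def build_prompt(user_details: str, question: str) -> str:
    """Render the static template with per-request dynamic content."""
    return PROMPT_TEMPLATE.format(user_details=user_details, question=question)


if __name__ == "__main__":
    prompt = build_prompt(
        user_details="Name: Ada; Role: data analyst; Prefers: short answers",
        question="How do I deduplicate rows in a CSV?",
    )
    print(prompt)
```

Keeping the core static and injecting only the dynamic fields makes prompts easier to test and version, since the unchanging part can be reviewed once while the injected content varies per request.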
LLMs are constrained by their context windows: the maximum amount of data the model can attend to during generation. This is a hard constraint. If your data exceeds the limit, you must deploy a mitigating strategy, such as compaction (summarising older content) or a sliding window (keeping only the most recent content).
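The sketch below illustrates a sliding window under a simplifying assumption: token counts are approximated by whitespace-split word counts, whereas a real system would use the model's own tokenizer.

```python
# A sliding-window sketch for keeping conversation history inside a
# fixed context budget. Token counting is crudely approximated by
# splitting on whitespace; a real system would use the model's tokenizer.
from typing import List


def count_tokens(text: str) -> int:
    """Crude stand-in for a real tokenizer."""
    return len(text.split())


def sliding_window(messages: List[str], max_tokens: int) -> List[str]:
    """Keep the most recent messages whose combined size fits the budget."""
    window: List[str] = []
    total = 0
    for message in reversed(messages):  # walk from newest to oldest
        size = count_tokens(message)
        if total + size > max_tokens:
            break  # adding this message would exceed the budget
        window.append(message)
        total += size
    window.reverse()  # restore chronological order
    return window


if __name__ == "__main__":
    history = [
        "user: What is a context window?",
        "assistant: The maximum amount of text a model can attend to.",
        "user: What happens if I exceed it?",
        "assistant: You must truncate, summarise, or window the input.",
    ]
    print(sliding_window(history, max_tokens=25))
```

Dropping the oldest messages first preserves the most recent, and usually most relevant, context; compaction trades this simplicity for retaining a summary of what was dropped.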
The slides and video walkthrough below provide an introductory description of the role of prompts in conditioning output and the implications of context windows.