Parameter Playground
Slide the dials. See what changes. Explore how parameters shape AI output.
Controls
Temperature
Controls randomness. Lower = more focused, higher = more creative.
Temperature scales the logits (raw prediction scores) before the softmax function. At 0, the model always picks the most probable token. At 2.0, the probability distribution flattens dramatically, making unlikely tokens almost as probable as likely ones. Most production applications use 0.3-0.8.
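To make this concrete, here is a minimal NumPy sketch of temperature scaling; the function name and example logits are illustrative, not taken from any particular library:

```python
import numpy as np

def softmax_with_temperature(logits, temperature):
    # Dividing logits by the temperature before softmax sharpens
    # (T < 1) or flattens (T > 1) the resulting distribution.
    scaled = np.asarray(logits) / max(temperature, 1e-8)  # guard against T = 0
    exp = np.exp(scaled - scaled.max())                   # subtract max for numerical stability
    return exp / exp.sum()

logits = [2.0, 1.0, 0.5]
print(softmax_with_temperature(logits, 0.2))  # ~[0.99, 0.007, 0.0005] -- near-greedy
print(softmax_with_temperature(logits, 2.0))  # ~[0.48, 0.29, 0.23]   -- much flatter
```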
Top P
Limits the token pool to the most probable subset.
Top P (nucleus sampling) considers only the smallest set of tokens whose cumulative probability exceeds P. At 0.1, only the highest-probability tokens, together covering the top 10% of probability mass, are considered. At 1.0, all tokens are eligible. It works alongside temperature to control output diversity.
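Here is a sketch of the filtering step, again in NumPy with an illustrative distribution, showing how the nucleus is selected and renormalized before sampling:

```python
import numpy as np

def nucleus_filter(probs, top_p):
    # Keep the smallest set of tokens whose cumulative probability
    # reaches top_p, zero out the rest, and renormalize.
    order = np.argsort(probs)[::-1]                  # most probable first
    cumulative = np.cumsum(probs[order])
    cutoff = np.searchsorted(cumulative, top_p) + 1  # include the token that crosses top_p
    filtered = np.zeros_like(probs)
    kept = order[:cutoff]
    filtered[kept] = probs[kept]
    return filtered / filtered.sum()

probs = np.array([0.5, 0.3, 0.15, 0.05])
print(nucleus_filter(probs, 0.9))  # keeps the top three tokens, renormalized
print(nucleus_filter(probs, 1.0))  # keeps everything
```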
Max Tokens
Maximum length of the response in tokens (~0.75 words each).
Max tokens sets an upper limit on response length. One token is roughly 3/4 of a word in English. Setting this too low truncates useful responses; too high wastes compute and can lead to rambling. Most conversational responses fit within 200-800 tokens.
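The ~3/4 ratio gives a quick back-of-envelope estimate; actual counts depend on the tokenizer, so treat this heuristic (the function below is illustrative) as approximate:

```python
def estimate_tokens(text: str) -> int:
    # Rough English heuristic: ~0.75 words per token (~4/3 tokens per word).
    # Real tokenizers vary; use the provider's tokenizer when it matters.
    return round(len(text.split()) / 0.75)

print(estimate_tokens("Explain what a neural network is."))  # 8
```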
System Prompt
Sets the model's persona and behavior constraints.
The system prompt is a special instruction that shapes the model's entire personality, tone, and constraints. It's processed before the user's message and fundamentally changes how the model approaches every response. It's the most powerful "parameter" you can set.
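In practice, a chat request is usually sent as a list of messages with the system prompt first; this sketch assumes the common OpenAI-style message format, and other APIs differ only in detail:

```python
# The system message is processed before any user input, so it shapes
# tone, persona, and constraints for every response that follows.
messages = [
    {"role": "system",
     "content": "You are a patient tutor. Use analogies and keep answers under 150 words."},
    {"role": "user",
     "content": "Explain what a neural network is."},
]
```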
Model
Different models have distinct reasoning styles and capabilities.
Each model reflects different training data, architecture, and fine-tuning. Larger models tend to produce more nuanced and detailed output, while smaller models are faster and cheaper. The same parameters can produce noticeably different results across models.
Output
Explain what a neural network is.