That's what would happen if it were a logical system. It's not. This is where it gets interesting.
Instead, it's a statistical model, and including that prompt acts more like a narrative weight than a logical demand. By including those words in that order, you make the model more likely to explore the narratives that statistically tend to follow them, with that likelihood determined by the content the model was trained on, plus whatever redistribution of weights happened during subsequent training.
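To make "narrative weight" concrete, here's a deliberately tiny sketch, a bigram counter over a toy corpus, nothing like a real LLM, that shows the core idea: the words you put in front don't impose logic on what follows, they just shift the probability distribution over continuations.

```python
from collections import Counter, defaultdict

# Toy corpus; in a real model this would be the training data.
corpus = (
    "the model predicts the next word "
    "the model follows the vibe "
    "the prompt shifts the weights"
).split()

# Count how often each word follows each other word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def continuation_probs(word):
    """Probability distribution over what comes after `word`."""
    counts = following[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

# Different preceding words yield different distributions: the
# "prompt" doesn't demand anything, it just reweights what's likely.
print(continuation_probs("the"))
print(continuation_probs("model"))
```

A real LLM replaces the bigram counts with billions of learned parameters, but the relationship between prompt and output is the same kind of thing: conditioning, not instruction.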
We don't really need to worry about technically misstating our objectives to an LLM, because it doesn't evaluate them logically. Instead, we need to worry about misrepresenting the overall vibe, which is a much more mysterious task.