Rephrase the Prompt

We can improve the performance of our LLM by getting the model to rewrite the prompt1 so that it is less ambiguous.

This could look something like the following:

Rephrase and Respond Example

User: Take the last letters of the words in 'Edgar Bob' and concatenate them.

Rephrased Question: Could you please form a new string or series of characters by joining together the final letters from each word in the phrase "Edgar Bob"?

Assistant: The last letters in the words "Edgar" and "Bob" are "r" and "b", hence when concatenated, it forms "rb".

We can implement this using instructor as seen below.

from pydantic import BaseModel, Field
import instructor
from openai import OpenAI

client = instructor.from_openai(OpenAI())


class ImprovedQuestion(BaseModel):
    rewritten_question: str = Field(
        ...,
        description="""An improved, more specific
        version of the original question""",
    )


class FinalResponse(BaseModel):
    answer: str


def rewrite_question(question: str):
    # Step 1: ask the model to rewrite the question so it is
    # clearer and more specific before we attempt to answer it.
    return client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {
                "role": "system",
                "content": """You excel at making questions clearer
                and more specific.""",
            },
            {"role": "user", "content": f"The question is {question}"},
        ],
        response_model=ImprovedQuestion,
    )


def answer_question(question: str):
    # Step 2: answer the rewritten question directly.
    return client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": question}],
        max_tokens=1000,
        response_model=FinalResponse,
    )


if __name__ == "__main__":
    rewritten_query = rewrite_question(
        "Take the last letters of the words in 'Elon Musk' and concatenate them"
    )
    print(rewritten_query.model_dump_json(indent=2))
    """
    {
      "rewritten_question": "What are the last letters of each word in 'Elon Musk',
      and how would they look when concatenated together?"
    }
    """

    response = answer_question(rewritten_query.rewritten_question)
    print(response.model_dump_json(indent=2))
    """
    {
      "answer": "The last letters of the words 'Elon Musk' are 'n' and 'k'. When
      concatenated together, they look like 'nk'."
    }
    """

We can also achieve the same benefits by using a stronger model to rewrite the question before prompting a weaker model to answer it - this is known as two-step RaR.
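Two-step RaR can be sketched as a small helper that routes the rewrite to one model and the answer to another. This is a minimal, hypothetical sketch: the `ask` callable stands in for any LLM call that takes a model name and a prompt and returns a string (with instructor this would wrap `client.chat.completions.create`), and the model names are illustrative defaults.

```python
from typing import Callable

def two_step_rar(
    question: str,
    ask: Callable[[str, str], str],
    strong_model: str = "gpt-4o",
    weak_model: str = "gpt-4o-mini",
) -> str:
    # Step 1: a stronger model rephrases and expands the question,
    # keeping all of the original information.
    rewritten = ask(
        strong_model,
        "Rephrase and expand this question to make it clearer, "
        f"keeping all of its information: {question}",
    )
    # Step 2: a weaker (cheaper) model answers the rewritten question.
    return ask(weak_model, rewritten)
```

Injecting the LLM call as a parameter keeps the routing logic independent of any particular client, so the same helper works whether the calls go through instructor, the raw OpenAI SDK, or a test stub.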

Useful Tips

Here are some phrases you can add to your prompt to refine the question before generating a response:

  • Reword and elaborate on the inquiry, then provide an answer.
  • Reframe the question with additional context and detail, then provide an answer.
  • Modify the original question for clarity and detail, then offer an answer.
  • Restate and elaborate on the inquiry before proceeding with a response.
  • Given the above question, rephrase and expand it to help you do better answering. Maintain all information in the original question.
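In one-step RaR, one of these trigger phrases is simply appended to the original question so the model rephrases and answers in a single call. A minimal sketch (the helper name and default phrase choice are illustrative):

```python
# Default trigger phrase, taken from the list above.
RAR_SUFFIX = (
    "Given the above question, rephrase and expand it to help you do "
    "better answering. Maintain all information in the original question."
)

def with_rar(question: str, suffix: str = RAR_SUFFIX) -> str:
    """Append a Rephrase-and-Respond trigger phrase to a question."""
    return f"{question}\n\n{suffix}"
```

The resulting string can then be passed as the user message to `answer_question` in place of the raw question.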

References

1: Rephrase and Respond: Let Large Language Models Ask Better Questions for Themselves