Small Steps Towards Effective Prompt Engineering

08.22.2023 | AI LLM Prompt Engineering | Peter Swartwout

This article will introduce the concept of prompt engineering (more specifically, programmatic prompt engineering) and how it relates to building applications using Large Language Models (LLMs).

The topic is vast and this article will only scratch the surface. Our team is just beginning to explore prompt engineering, and my hope is to show that small changes to the LLM prompt can have a big impact on the accuracy and value of the results.

What we are building

For the purpose of expanding our knowledge of AI while building something useful, the Focused Labs special projects team set out to create a “knowledge hub” for visitors to the Focused Labs website. Early iterations of the knowledge hub will be similar to a chatbot, but with a richer natural language interface than is typically seen. We like to think of it as an “intelligent search bar”.

Prompt Engineering

So what do we mean by prompt engineering? In its simplest form, it means anything the end user can do to steer the LLM to the desired results. Here’s a simple example.

| What are the Focused Labs core values?

The Focused Labs core values are: Love your Craft, Listen First, and Empathy & Collaboration

 

Not bad, but as much as we personally value “Empathy & Collaboration”, it is not a core value. Next, let’s try

| What are the three Focused Labs core values?

The three Focused Labs core values are:

1. Love your Craft
2. Listen First
3. Learn Why
 

Spot on. Obviously, this is a contrived example, since you need to know there are three core values before you ask the question. But it demonstrates how sensitive LLMs can be to the precise text of their prompts.

(Aside: adding “three” caused the model to format the response as a numbered list. Formatting is another common use case for prompt engineering.)

For the knowledge hub, we don’t want the user to need to know anything about Focused Labs before using the tool. Instead, we want programmatic prompt engineering, or what I like to call “prompt expansion”.

What Makes a Good Prompt?

  1. Specificity. Tell the LLM as precisely as possible what you want it to do. Since we can’t rely on the user to do this in our use case, we need to be clever with our programmatic prompt engineering. So far, we have experimented with persona matching, aka “instructor-based” prompting, by asking the user to self-identify from a list of possible roles (e.g. software engineer vs. executive vs. potential client). The user’s choice is then converted into a prompt enhancement as described in the next section.
  2. Structure. Describe what you want the response to look like. This is known as “example-based prompting”. This can be simple formatting (e.g. list vs paragraph) but can also be quite complex, suggesting or mandating structure and even snippets of verbatim text you want the LLM to respond with. 
  3. Context. The LLM will produce better results if it remembers what you offered previously about yourself and the domain you are querying. You’ve probably seen this on e-commerce sites or customer service bots that engage you in a short conversation. All of your responses can be fed back to the LLM on subsequent queries. Later iterations of our knowledge hub will pass prior context to the LLM as part of its prompt, as sketched after this list. Stay tuned.
  4. Many Others. There are some good examples here targeted to problem-solving and reasoning questions, such as “list the steps needed to remodel a kitchen”, which do not apply directly to our use case but are interesting to think about.
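
As a rough illustration of the third point, prior turns can simply be folded back into the next prompt. Below is a minimal sketch; the ask_llm function is a hypothetical stand-in for whatever client library actually makes the LLM call:

history = []  # (question, answer) pairs from earlier turns in the conversation

def build_contextual_prompt(question: str) -> str:
    """Fold prior turns back into the prompt so the LLM can use them as context."""
    context_lines = []
    for prior_question, prior_answer in history:
        context_lines.append(f"Visitor asked: {prior_question}")
        context_lines.append(f"You answered: {prior_answer}")
    context = "\n".join(context_lines)
    return f"Conversation so far:\n{context}\n\nNew question: {question}"

def ask(question: str) -> str:
    # ask_llm is a hypothetical stand-in for the actual LLM client call
    answer = ask_llm(build_contextual_prompt(question))
    history.append((question, answer))
    return answer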

Querying with a Persona

We discovered early on that adding a phrase like “You are an expert software engineer” to queries can improve the results. This is known as instructor-based prompting. It steers the LLM towards a desired perspective when responding to your question.

You can do this yourself when interacting with ChatGPT as shown in this excellent introduction.

In our case, we asked the user in the UI to select one of five personas and programmatically added the corresponding phrase to each query.

Persona           | Phrase
------------------|-------------------------------------------------------
Any Role          | None
Software Engineer | You are an expert software engineer
UX / UI Designer  | You are an expert UX / UI designer
Executive         | You are an executive at a successful company
Potential Client  | You are evaluating Focused Labs as a potential partner
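
Under the hood, this expansion is just string concatenation. Here is a minimal sketch in Python; the persona names and phrases come from the table above, while the PERSONA_PHRASES dictionary and expand_query helper are illustrative rather than our production code:

# Map each UI persona to the phrase prepended to the visitor's question.
PERSONA_PHRASES = {
    "Any Role": "",
    "Software Engineer": "You are an expert software engineer.",
    "UX / UI Designer": "You are an expert UX / UI designer.",
    "Executive": "You are an executive at a successful company.",
    "Potential Client": "You are evaluating Focused Labs as a potential partner.",
}

def expand_query(persona: str, query: str) -> str:
    """Prepend the persona phrase (if any) to the user's question."""
    phrase = PERSONA_PHRASES.get(persona, "")
    return f"{phrase} {query}".strip()

# expand_query("Software Engineer", "What services does Focused Labs offer?")
# -> "You are an expert software engineer. What services does Focused Labs offer?"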

For example, when no persona is selected, here is the response: 

| What services does Focused Labs offer? 

Focused Labs offers a range of services including agile workflow development, DevOps culture implementation, user-centered design, lean product development, software engineering, extreme programming, automation, performance monitoring, guidance on transitioning away from a top-down workflow, building trust with teams, building highly productive software teams in a traditional IT environment, … etc …
 

Pretty good content, but it’s one terribly dense run-on sentence. Now let’s ask again, but as a software engineer: 

Persona: Software Engineer

| What services does Focused Labs offer?

Expanded Prompt:

| You are an expert software engineer. What services does Focused Labs offer?

Focused Labs offers the following services:
1. Technology consulting services with a focus on problem-solving and delivering solutions in any stack or platform.
2. Specialization in agile workflow, DevOps, and delivering software at scale.
3. Assistance in modeling a culture that can continuously deliver software and achieve digital transformation.
4. Services related to building highly productive software teams in traditional IT environments.
5. Building marketplace platforms to enable new business models.
6. Building strong remote cultures with transparent leadership.
7. Helping organizations overcome long-standing problems.
8. Designing and documenting repeatable publication flows.
9. Providing critical tools to deliver high-quality software over time.

Better content, targeted to a developer, and so much more readable. Apparently, software engineers like numbered lists. 

Here is another example of persona-based prompting:

Persona: Software Engineer

| What skills are needed to get a job at Focused Labs?

Expanded Prompt:

| You are an expert software engineer. What skills are needed to get a job at Focused Labs?

To get a job at Focused Labs as an expert software engineer, the following skills are needed:
1. Experience with agile workflow and DevOps.
2. Experience in scaling technology.
3. Passion for writing high-quality code and pursuing mastery as a software engineer.
4. Excellent communication and collaboration skills.
5. Ability to work well with others.
6. Experience in building highly productive software teams in a traditional IT environment.
7. Experience in building a marketplace platform to enable a new business model.
8. Ability to build a strong remote culture with transparency leadership needs.
9. Ability to help organizations overcome long-standing problems.
10. Experience in designing and documenting a repeatable publication flow.
11. Strong communication and collaboration skills.
12. Ability to work independently and as part of a team.
13. Passion for learning and staying up-to-date with the latest technologies.

So you can see that some of the response (#7) is a bit head-scratchy, but overall it’s pretty good.

Persona: UX / UI Designer

| What skills are needed to get a job at Focused Labs?

Expanded Prompt:

| You are an expert UX / UI designer. What skills are needed to get a job at Focused Labs?

To get a job at Focused Labs as an expert UX/UI designer, the following skills are needed:

1. Knowledge of user experience design principles and best practices
2. Ability to create user-friendly interfaces and intuitive user flows
3. Proficiency in design software such as Adobe Creative Suite, Sketch, Figma, and InVision
4. Understanding of web technologies such as HTML, CSS, and JavaScript
5. Knowledge of accessibility standards and how to create accessible designs
6. Ability to collaborate with developers, product managers, and other stakeholders
7. Excellent communication and problem-solving skills
8. Ability to work independently and manage multiple projects simultaneously
9. Understanding of user research methods and how to use data to inform design decisions

Bingo. This example shows how that added phrase completely changes the response.

Hard-coding personas such as “UX / UI Designer” and “Software Engineer” into the prompt helps the quality of the results but limits flexibility.


Querying with Structure

In our most ambitious experiment with prompt engineering, we built some structure around the use of the LLM as a “helpful assistant”. This directs the language completion function in the LLM towards a specific goal. Below is an example prompt, coded in Python, into which variables assigned by the program and the user are injected.

# Assumes LangChain's PromptTemplate; the import and wrapping function are
# added here so the snippet runs on its own.
from langchain.prompts import PromptTemplate

def build_prompt_template() -> PromptTemplate:
    return PromptTemplate(
        template="""

You are a helpful virtual assistant to represent Focused Labs. Focused Labs is a boutique software consulting firm that specializes in enterprise application development and digital transformation.

Visitors will ask you questions about the inner workings of the company. Questions could cover areas such as process, procedure, policy, and culture. Use only context Focused Labs Domain Data to provide answers.

Think this through step by step.

If you don't know the answer, just say "Hmm, I'm not sure, please contact customer support at {company_email} for further assistance." Don't try to make up an answer.

Please provide as detailed an answer as possible.

When considering your answer, answer from the perspective of a {personality}.

Evaluate this question and see if it relates to Focused Labs. If so, answer this question with regards to Focused Labs: {query}

If it does not relate to Focused Labs, then say "Hmm, I'm not sure."

""",
        input_variables=["query", "company_email", "personality"],
    )

Here “you” refers to the LLM, or language completion engine. This was successful in improving accuracy and reducing hallucinations, and it will continue to be tweaked through a process of trial and error.
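
For illustration, here is how a visitor's question might be run through that template. This is a sketch assuming LangChain's PromptTemplate; the example values, including the placeholder support address, are illustrative:

prompt = build_prompt_template()
expanded = prompt.format(
    query="What services does Focused Labs offer?",
    company_email="support@example.com",   # placeholder address
    personality="expert software engineer",
)
# `expanded` is the full instruction text sent to the LLM in place of the raw question.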

There are several nuances in the sample prompt above.

  1. Giving the LLM a persona as a “helpful virtual assistant” helps shape tone, as discussed earlier.
  2. “Think this through step by step” discourages hallucination by encouraging the model to consider several different sources before arriving at the final answer.
  3. “If you don’t know the answer” also discourages hallucination.
  4. “Please provide as detailed an answer as possible” discourages summarization, which can distort the original source information. It can be dangerous to include this statement in a chatbot prompt because the answers may be too long.
  5. “Answer from the perspective of” steers the language model to use language familiar to the particular personality, e.g. software engineer vs executive, as described earlier.
  6. Finally, “if it relates to Focused Labs” is an attempt to drive the answer to the information sources instead of the broader knowledge in the LLM. For example, whimsical queries such as “what is your favorite color?” will result in an “I’m not sure” instead of speculation about Focused Labs’ favorite color.

Conclusions

Even an introductory article like this one demonstrates how important it is to craft the LLM prompt as specifically as possible, and with as much context as possible. I encourage the reader to dive deeper with the articles cited below.

What’s Next

If you have an AI development project and would like some expert help from our Focused Labs consultants, complete our Contact Us form and we will have a human chat.

References

