Interacting with Large Language Models

5 tips to maximise value

Large Language Models (LLMs) have become an essential tool for businesses and individuals alike. However, to truly harness their potential, we need to interact with them effectively. In this blog, we'll explore five key strategies to improve the quality of responses you get from LLMs, ensuring you extract the maximum value from your interactions.

1. Leverage few-shot prompting

Few-shot prompting is a machine learning technique where you provide an AI model with a few examples within the prompt before asking it to perform a task. This approach helps the model better understand the desired output, leading to more accurate responses. It serves as a middle ground between:

  • Zero-shot learning (no examples provided, leading to variable results)
  • Fully supervised fine-tuning (large datasets used to train the model extensively)

While many users rely on zero-shot prompting, adding a few relevant examples can significantly improve response quality.

As an example, take the user prompt: “I want to classify this product review as either positive or negative. Review: The packaging was damaged, and the product stopped working after a week.”

With this zero-shot approach, we run the risk of the LLM going off on a tangent about why the review is positive or negative, when all we want is the single word Positive or Negative.
If we instead follow a few-shot prompting approach, we can expect a better result by providing the LLM with some examples in our prompt:

I want to classify product reviews as either Positive or Negative. Here are some examples:

Review: "The packaging was damaged, and the product stopped working after a week."
Sentiment: Negative

Review: "Fast shipping and excellent performance—couldn’t be happier!"
Sentiment: Positive

Review: "The customer service rep was rude, and I never got a refund."
Sentiment: Negative

Now, classify the sentiment of the following review:

Review: "I absolutely love this smartphone! The battery lasts all day, and the camera quality is incredible."
Sentiment:

The LLM will most likely respond with the single word ‘Positive’.
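To make this concrete, here is a minimal sketch of the same few-shot prompt sent programmatically. It assumes the OpenAI Python SDK with an API key in your environment; the model name is illustrative, and any chat model will do:

```python
from openai import OpenAI

client = OpenAI()  # assumes the OPENAI_API_KEY environment variable is set

# The few-shot examples are embedded directly in the prompt, followed by the
# review we actually want classified. Ending on "Sentiment:" nudges the model
# to complete the pattern with a single word.
few_shot_prompt = """I want to classify product reviews as either Positive or Negative. Here are some examples:

Review: "The packaging was damaged, and the product stopped working after a week."
Sentiment: Negative

Review: "Fast shipping and excellent performance, couldn't be happier!"
Sentiment: Positive

Now, classify the sentiment of the following review:

Review: "I absolutely love this smartphone! The battery lasts all day, and the camera quality is incredible."
Sentiment:"""

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name; swap in the one you use
    messages=[{"role": "user", "content": few_shot_prompt}],
)
print(response.choices[0].message.content)  # expected output: Positive
```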

2. Be mindful of the context window

The context window determines how much information an AI model can process at a time. Each model has a token limit, where a token can be a word, part of a word, or punctuation. When a query exceeds this limit, older information gets truncated, affecting the accuracy of responses.

A larger context window means the model can take in more information at once, allowing it to handle long prompts and complex tasks such as summarising lengthy documents.

For instance, GPT-3.5 Turbo accepts roughly 16,000 tokens. But here too, developments are moving fast: newer models such as GPT-4 Turbo support context windows of 128,000 tokens, so in practice the context window is rarely a hard limit anymore.
Still, it's important to optimise your prompts by keeping them concise and ensuring key details remain within the token limit. This also helps reduce cost, since you pay for the tokens you consume.
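If you want to check how many tokens a prompt actually uses before sending it, you can count them locally. Here is a minimal sketch using the tiktoken library, a tokeniser published by OpenAI (install it with pip install tiktoken):

```python
import tiktoken

# Look up the encoding used by the model you are targeting.
encoding = tiktoken.encoding_for_model("gpt-3.5-turbo")

prompt = "I want to classify product reviews as either Positive or Negative."
tokens = encoding.encode(prompt)

# Counting tokens up front lets you trim a prompt before it hits the limit,
# and gives you a handle on what the request will cost.
print(f"This prompt uses {len(tokens)} tokens.")
```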

3. Utilise system prompts for better control

The effectiveness of an AI model largely depends on how well you communicate with it. This is where prompts come in – the essential language we use to instruct and guide our AI platforms.

In prompt engineering, you have a user prompt and a system prompt, the latter also being called a meta prompt:

  • User prompts: Task-specific instructions that change based on user input.
  • System prompts: Define general behavior, tone, and ethical guidelines, essentially acting as the AI’s job description.

System prompts can help with:

  • Grounding responses: Limiting the AI’s answers to specific datasets or documents.
  • Tone control: Ensuring responses are positive, polite, and engaging.
  • Safety measures: Preventing harmful or inappropriate content.

Many AI platforms already include jailbreak protection, but adding explicit safeguards within system prompts further enhances reliability.
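In chat-style APIs, the system prompt is simply the first message in the conversation, sent with the role "system". A minimal sketch, again assuming the OpenAI Python SDK and an illustrative model name:

```python
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[
        {
            # The system prompt: the AI's "job description", covering
            # grounding, tone, and safety in one place.
            "role": "system",
            "content": (
                "You are a friendly customer-support assistant for an online "
                "outdoor-gear shop. Only answer questions about our products, "
                "keep a positive and polite tone, and refuse requests for "
                "harmful or inappropriate content."
            ),
        },
        # The user prompt: the task-specific instruction that changes per request.
        {"role": "user", "content": "Can you recommend a waterproof jacket?"},
    ],
)
print(response.choices[0].message.content)
```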

4. Extend LLM capabilities with function calling

Function calling enables LLMs to go beyond generating text by executing actions, retrieving live data, or interacting with external systems. Instead of just giving you an answer, the model can actually do something for you — like:

  • Searching for real-time information
  • Booking appointments
  • Running calculations

For example, if you need to find a hotel with a private beach, the AI can trigger a function to fetch relevant results rather than just providing generic information.
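As a sketch of how this looks with the OpenAI tools API: you describe your function in JSON Schema, and the model responds with the name and arguments of the call it wants made. The search_hotels function here is a hypothetical helper in your own code, not part of any library:

```python
from openai import OpenAI

client = OpenAI()

# Describe the function the model is allowed to request. search_hotels is a
# hypothetical helper that would query a real booking API in your own code.
tools = [
    {
        "type": "function",
        "function": {
            "name": "search_hotels",
            "description": "Search for hotels matching the given amenities.",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {"type": "string"},
                    "amenity": {"type": "string", "description": "e.g. 'private beach'"},
                },
                "required": ["location", "amenity"],
            },
        },
    }
]

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[{"role": "user", "content": "Find me a hotel in Crete with a private beach."}],
    tools=tools,
)

# The model does not execute anything itself: it returns the function name and
# arguments, and your code runs the call and feeds the result back.
message = response.choices[0].message
if message.tool_calls:
    call = message.tool_calls[0]
    print(call.function.name, call.function.arguments)
    # Next step: run search_hotels with these arguments, append the result as a
    # "tool" message, and call the model again for the final answer.
```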

5. Bridge knowledge gaps with Retrieval-Augmented Generation (RAG)

LLMs rely on pre-existing extensive training datasets, which can become outdated over time. For instance:

  • GPT-3.5 was trained with data up to September 2021.
  • GPT-4 Turbo extends its training data to April 2023.

So if you ask an older model whether Queen Elizabeth II is still alive, it will probably answer that she is. Newer models, trained on more recent data, will answer that she died on 8 September 2022.

To address these knowledge gaps, Retrieval-Augmented Generation (RAG) integrates external knowledge bases into AI queries. This process involves:

  1. Receiving a user’s query.
  2. Retrieving relevant information from an external source (e.g., a database or API).
  3. Combining the retrieved data with the original query before passing it to the LLM.

For example, if an AI agent is helping customers at Contoso Trek, a retailer specialising in outdoor gear, it can:

  • Retrieve product information from a company database.
  • Use that data to generate tailored recommendations.
  • Provide a response like:

“Sure! 😊 Based on our catalogue, I recommend two great options for waterproof hiking shoes:
1. TrailWalker Hiking Shoes and 2. TrekHiker Walking Boots.”
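Stripped to its essentials, RAG is just the three steps above: retrieve, combine, generate. Here is a minimal sketch of the Contoso Trek flow, with a hypothetical search_catalogue helper standing in for a real product database or vector store:

```python
from openai import OpenAI

client = OpenAI()

def search_catalogue(query: str) -> str:
    """Hypothetical retrieval step: in production this would query a product
    database or vector store. Hard-coded here purely for illustration."""
    return (
        "TrailWalker Hiking Shoes: waterproof, ankle support, $110.\n"
        "TrekHiker Walking Boots: waterproof, insulated, $145."
    )

user_query = "Which waterproof hiking shoes do you sell?"

# 1. Retrieve relevant information for the user's query.
context = search_catalogue(user_query)

# 2. Combine the retrieved data with the original question.
augmented_prompt = (
    f"Answer using only the catalogue excerpt below.\n\n"
    f"Catalogue:\n{context}\n\n"
    f"Question: {user_query}"
)

# 3. Pass the augmented prompt to the LLM.
response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[{"role": "user", "content": augmented_prompt}],
)
print(response.choices[0].message.content)
```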

Conclusion

By using these five strategies - few-shot prompting, optimising context windows, leveraging system prompts, enabling function calling, and integrating external knowledge - you can significantly improve the value you get from LLMs. Whether you’re using AI for business insights, automation, or creative tasks, understanding these principles will help you maximise results and efficiency.
