The method begins with scaffolding the autonomous agents using Autogen, a tool that simplifies the creation and orchestration of these digital personas. We will install the autogen PyPI package using pip:
pip install pyautogen
Format the output (optional) — this ensures word wrap for readability depending on your IDE, similar to when using Google Colab to run your notebook for this exercise.
from IPython.display import HTML, display

def set_css():
    # CSS to enable word wrap in notebook output
    display(HTML('''
    <style>
        pre {
            white-space: pre-wrap;
        }
    </style>
    '''))

get_ipython().events.register('pre_run_cell', set_css)
Now we go ahead and get the environment set up by importing the packages and organising the Autogen configuration — together with our LLM (Large Language Model) and API keys. You can also use other local LLMs via services that are backwards-compatible with the OpenAI REST service; LocalAI is one such service that can act as a gateway to your locally running open-source LLMs.

I have tested this on both GPT-3.5 (gpt-3.5-turbo) and GPT-4 (gpt-4-turbo-preview) from OpenAI. You can expect deeper responses from GPT-4, but longer query times.
import json
import os
import autogen
from autogen import GroupChat, Agent
from typing import Optional

# Setup LLM model and API keys
os.environ["OAI_CONFIG_LIST"] = json.dumps([
    {
        'model': 'gpt-3.5-turbo',
        'api_key': '<>',
    }
])

# Setting configurations for autogen
config_list = autogen.config_list_from_json(
    "OAI_CONFIG_LIST",
    filter_dict={
        "model": {
            "gpt-3.5-turbo",
        }
    },
)
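If you want to point this at a locally hosted model through an OpenAI-compatible gateway such as LocalAI, the config entry can carry an endpoint field. This is a sketch only — the model name, port, and key below are placeholder assumptions, and depending on your Autogen version the key may be `api_base` rather than `base_url`:

```python
import json
import os

# Hypothetical local endpoint served by LocalAI; adjust to your own setup
os.environ["OAI_CONFIG_LIST"] = json.dumps([
    {
        'model': 'mistral-7b-instruct',   # placeholder local model name
        'api_key': 'not-needed',          # local gateways often ignore the key
        'base_url': 'http://localhost:8080/v1',
    }
])

config = json.loads(os.environ["OAI_CONFIG_LIST"])[0]
print(config['base_url'])
```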
We then need to configure our LLM instance — which we will tie to each of the agents. This allows us, if required, to generate unique LLM configurations per agent, i.e. if we wanted to use different models for different agents.
# Define the LLM configuration settings
llm_config = {
    # Seed for consistent output, used for testing. Remove in production.
    # "seed": 42,
    "cache_seed": None,  # Setting cache_seed to None ensures caching is disabled
    "temperature": 0.5,
    "config_list": config_list,
}
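Because `llm_config` is a plain dict, the per-agent variation mentioned above can be done by copying and overriding fields. The higher temperature for persona agents here is purely illustrative, not part of the original setup:

```python
import copy

llm_config = {
    "cache_seed": None,
    "temperature": 0.5,
    "config_list": [{"model": "gpt-3.5-turbo", "api_key": "<>"}],
}

# Hypothetical: give persona agents a higher temperature for more varied answers
persona_llm_config = copy.deepcopy(llm_config)
persona_llm_config["temperature"] = 0.9

print(llm_config["temperature"], persona_llm_config["temperature"])
```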
Defining our researcher — this is the persona that will facilitate the session in this simulated user research scenario. The system prompt used for that persona includes a few key things:
- Purpose: Your role is to ask questions about products and gather insights from individual customers like Emily.
- Grounding the simulation: Before you start the task, break down the list of panelists and the order you want them to speak; avoid the panelists speaking with one another and creating confirmation bias.
- Ending the simulation: Once the conversation has ended and the research is completed, please end your message with `TERMINATE` to finish the research session. This is generated from the `generate_notice` function, which is used to align system prompts for the different agents. You will also notice the researcher agent has `is_termination_msg` set to honor the termination.

We also add the `llm_config`, which ties this back to the language model configuration with the model version, keys and hyper-parameters to use. We will use the same config with all our agents.
# Avoid agents thanking each other and ending up in a loop
# Helper function for the system prompts
def generate_notice(role="researcher"):
    # Base notice for everyone, add your own additional prompts here
    base_notice = (
        '\n\n'
    )

    # Notice for non-personas (manager or researcher)
    non_persona_notice = (
        'Don\'t show appreciation in your responses, say only what\'s necessary. '
        'if "Thank you" or "You\'re welcome" are said in the conversation, then say TERMINATE '
        'to indicate the conversation is finished and this is your last message.'
    )

    # Custom notice for personas
    persona_notice = (
        ' Act as {role} when responding to queries, providing feedback, asked for your personal opinion '
        'or participating in discussions.'
    )

    # Check if the role is "manager" or "researcher"
    if role.lower() in ["manager", "researcher"]:
        # Return the full termination notice for non-personas
        return base_notice + non_persona_notice
    else:
        # Return the modified notice for personas
        return base_notice + persona_notice.format(role=role)
# Researcher agent definition
name = "Researcher"
researcher = autogen.AssistantAgent(
    name=name,
    llm_config=llm_config,
    system_message="""Researcher. You are a top product researcher with a PhD in behavioural psychology and have worked in the research and insights industry for the last 20 years with top creative, media and business consultancies. Your role is to ask questions about products and gather insights from individual customers like Emily. Frame inquiries to uncover customer preferences, challenges, and feedback. Before you start the task, break down the list of panelists and the order you want them to speak, avoid the panelists speaking with each other and creating confirmation bias. If the session is terminating at the end, please provide a summary of the outcomes of the research study in clear concise notes at the end, not at the start.""" + generate_notice(),
    is_termination_msg=lambda x: True if "TERMINATE" in x.get("content") else False,
)
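Note that the `is_termination_msg` lambda assumes the message's content is always a string; if a message ever arrives with `content` set to `None`, the `in` check raises a `TypeError`. A slightly more defensive variant (a sketch, not part of the original code) guards against that:

```python
def is_termination_msg(message: dict) -> bool:
    """Treat any message containing TERMINATE as the end of the session."""
    content = message.get("content") or ""  # tolerate None or missing content
    return "TERMINATE" in content

print(is_termination_msg({"content": "Summary complete. TERMINATE"}))  # True
print(is_termination_msg({"content": None}))                           # False
```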
Define our individuals to put into the research — borrowing from the previous process, we can use the personas generated. I have manually adjusted the prompts for this article to remove references to the major supermarket brand that was used for this simulation.

I have also included an "Act as Emily when responding to queries, providing feedback, or participating in discussions." style prompt at the end of each system prompt to ensure the synthetic personas stay on task; this is generated from the `generate_notice` function.
# Emily - Customer Persona
name = "Emily"
emily = autogen.AssistantAgent(
    name=name,
    llm_config=llm_config,
    system_message="""Emily. You are a 35-year-old elementary school teacher living in Sydney, Australia. You are married with two kids aged 8 and 5, and you have an annual income of AUD 75,000. You are introverted, high in conscientiousness, low in neuroticism, and enjoy routine. When shopping at the supermarket, you prefer organic and locally sourced produce. You value convenience and use an online shopping platform. Due to your limited time from work and family commitments, you seek quick and nutritious meal planning solutions. Your goals are to buy high-quality produce within your budget and to find new recipe inspiration. You are a frequent shopper and use loyalty programs. Your preferred methods of communication are email and mobile app notifications. You have been shopping at a supermarket for over 10 years but also price-compare with others.""" + generate_notice(name),
)

# John - Customer Persona
name = "John"
john = autogen.AssistantAgent(
    name=name,
    llm_config=llm_config,
    system_message="""John. You are a 28-year-old software developer based in Sydney, Australia. You are single and have an annual income of AUD 100,000. You are extroverted, tech-savvy, and have a high level of openness. When shopping at the supermarket, you primarily buy snacks and ready-made meals, and you use the mobile app for quick pickups. Your main goals are quick and convenient shopping experiences. You occasionally shop at the supermarket and are not part of any loyalty program. You also shop at Aldi for discounts. Your preferred method of communication is in-app notifications.""" + generate_notice(name),
)

# Sarah - Customer Persona
name = "Sarah"
sarah = autogen.AssistantAgent(
    name=name,
    llm_config=llm_config,
    system_message="""Sarah. You are a 45-year-old freelance journalist living in Sydney, Australia. You are divorced with no kids and earn AUD 60,000 per year. You are introverted, high in neuroticism, and very health-conscious. When shopping at the supermarket, you look for organic produce, non-GMO, and gluten-free items. You have a limited budget and specific dietary restrictions. You are a frequent shopper and use loyalty programs. Your preferred method of communication is email newsletters. You exclusively shop for groceries.""" + generate_notice(name),
)

# Tim - Customer Persona
name = "Tim"
tim = autogen.AssistantAgent(
    name=name,
    llm_config=llm_config,
    system_message="""Tim. You are a 62-year-old retired police officer residing in Sydney, Australia. You are married and a grandparent of three. Your annual income comes from a pension and is AUD 40,000. You are highly conscientious, low in openness, and prefer routine. You buy staples like bread, milk, and canned goods in bulk. Due to mobility issues, you need assistance with heavy items. You are a frequent shopper and are part of the senior citizen discount program. Your preferred method of communication is direct mail flyers. You have been shopping here for over 20 years.""" + generate_notice(name),
)

# Lisa - Customer Persona
name = "Lisa"
lisa = autogen.AssistantAgent(
    name=name,
    llm_config=llm_config,
    system_message="""Lisa. You are a 21-year-old university student living in Sydney, Australia. You are single and work part-time, earning AUD 20,000 per year. You are highly extroverted, low in conscientiousness, and value social interactions. You shop here for popular brands, snacks, and alcoholic beverages, mostly for social events. You have a limited budget and are always on the lookout for sales and discounts. You are not a frequent shopper but are interested in joining a loyalty program. Your preferred methods of communication are social media and SMS. You shop wherever there are sales or promotions.""" + generate_notice(name),
)
Define the simulated environment and rules for who can speak — we are allowing all of the agents we have defined to sit within the same simulated environment (group chat). We could create more complex scenarios where we set how and when next speakers are chosen, so for now we have a simple function defined for speaker selection tied to the group chat. It makes the researcher the lead and ensures we go around the room to ask everyone a few times for their thoughts.
# def custom_speaker_selection(last_speaker, group_chat):
#     """
#     Custom function to select which agent speaks next in the group chat.
#     """
#     # List of agents excluding the last speaker
#     next_candidates = [agent for agent in group_chat.agents if agent.name != last_speaker.name]
#
#     # Select the next agent based on your custom logic
#     # For simplicity, we're just rotating through the candidates here
#     next_speaker = next_candidates[0] if next_candidates else None
#     return next_speaker
def custom_speaker_selection(last_speaker: Optional[Agent], group_chat: GroupChat) -> Optional[Agent]:
    """
    Custom function to ensure the Researcher interacts with each participant 2-3 times.
    Alternates between the Researcher and participants, tracking interactions.
    """
    # Define participants and initialize or update their interaction counters
    if not hasattr(group_chat, 'interaction_counters'):
        group_chat.interaction_counters = {agent.name: 0 for agent in group_chat.agents if agent.name != "Researcher"}

    # Define a maximum number of interactions per participant
    max_interactions = 6

    # If the last speaker was the Researcher, find the next participant who has spoken the least
    if last_speaker and last_speaker.name == "Researcher":
        next_participant = min(group_chat.interaction_counters, key=group_chat.interaction_counters.get)
        if group_chat.interaction_counters[next_participant] < max_interactions:
            group_chat.interaction_counters[next_participant] += 1
            return next((agent for agent in group_chat.agents if agent.name == next_participant), None)
        else:
            return None  # End the conversation if all participants have reached the maximum interactions
    else:
        # If the last speaker was a participant, return the Researcher for the next turn
        return next((agent for agent in group_chat.agents if agent.name == "Researcher"), None)
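The selection rule reduces to: after the Researcher speaks, pick the participant who has been asked the least; after a participant speaks, hand back to the Researcher. A standalone sketch of that rotation using plain strings instead of Autogen agents (the two names and the cap of 2 are illustrative):

```python
from typing import Optional

def pick_next(last_speaker: Optional[str], counters: dict, max_interactions: int = 6) -> Optional[str]:
    """Mirror of the custom_speaker_selection logic, using plain strings."""
    if last_speaker == "Researcher":
        # Choose the participant who has spoken the least so far
        next_participant = min(counters, key=counters.get)
        if counters[next_participant] < max_interactions:
            counters[next_participant] += 1
            return next_participant
        return None  # everyone has hit the cap, end the session
    return "Researcher"  # participants always hand back to the Researcher

counters = {"Emily": 0, "John": 0}
order = []
speaker = "Researcher"
for _ in range(8):
    speaker = pick_next(speaker, counters, max_interactions=2)
    if speaker is None:
        break
    order.append(speaker)
print(order)
```

Running this shows the round-robin: Emily, Researcher, John, Researcher, and so on, until each participant reaches the cap.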
# Adding the Researcher and Customer Persona agents to the group chat
groupchat = autogen.GroupChat(
    agents=[researcher, emily, john, sarah, tim, lisa],
    speaker_selection_method=custom_speaker_selection,
    messages=[],
    max_round=30,
)
Define the manager to pass instructions into and manage our simulation — when we start things off we will speak only to the manager, who will speak to the researcher and panelists. This uses something called GroupChatManager in Autogen.
# Initialise the manager
manager = autogen.GroupChatManager(
    groupchat=groupchat,
    llm_config=llm_config,
    system_message="You are a research manager agent that can manage a group chat of multiple agents made up of a researcher agent and many people making up a panel. You will limit the discussion between the panelists and help the researcher in asking the questions. Please ask the researcher first on how they want to conduct the panel." + generate_notice(),
    is_termination_msg=lambda x: True if "TERMINATE" in x.get("content") else False,
)
We set up the human interaction — allowing us to pass instructions to the various agents we have started. We give it the initial prompt and we can start things off.
# create a UserProxyAgent instance named "user_proxy"
user_proxy = autogen.UserProxyAgent(
    name="user_proxy",
    code_execution_config={"last_n_messages": 2, "work_dir": "groupchat"},
    system_message="A human admin.",
    human_input_mode="TERMINATE"
)

# start the research simulation by giving instruction to the manager
# manager <-> researcher <-> panelists
user_proxy.initiate_chat(
    manager,
    message="""
    Gather customer insights on a supermarket grocery delivery service. Identify pain points, preferences, and suggestions for improvement from different customer personas. Could you all please give your own personal opinions before sharing more with the group and discussing. As a researcher your job is to make sure that you gather unbiased information from the participants and provide a summary of the outcomes of this study back to the supermarket brand.
    """,
)
Once we run the above, the output is available live within your Python environment; you will see the messages being passed around between the various agents.
Now that our simulated research study has concluded, we need to extract some more actionable insights. We can create a summary agent to support us with this task, and also use it in a Q&A scenario. Just be careful: very large transcripts will need a language model that supports a larger input (context window).

We need to grab all the conversations from our simulated panel discussion earlier to use as the user prompt (input) to our summary agent.
# Get response from the groupchat for user prompt
messages = [msg["content"] for msg in groupchat.messages]
user_prompt = "Here is the transcript of the study ```{customer_insights}```".format(customer_insights="\n>>>\n".join(messages))
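If the joined transcript risks exceeding the model's context window (the caveat above), one crude guard is to keep only the most recent messages that fit a character budget. `trim_transcript` is a hypothetical helper, and the character budget is a rough stand-in for real token counting:

```python
def trim_transcript(messages, max_chars=12000):
    """Keep the most recent messages whose combined length fits the budget."""
    kept = []
    total = 0
    for msg in reversed(messages):
        if total + len(msg) > max_chars:
            break
        kept.append(msg)
        total += len(msg)
    return list(reversed(kept))

sample = ["a" * 5000, "b" * 5000, "c" * 5000]
trimmed = trim_transcript(sample, max_chars=12000)
print([len(m) for m in trimmed])  # the oldest message is dropped
```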
Let's craft the system prompt (instructions) for our summary agent — this agent will focus on creating a tailored report card from the previous transcripts and giving us clear suggestions and actions.
# Generate system prompt for the summary agent
summary_prompt = """
You are an expert researcher in behaviour science and are tasked with summarising a research panel. Please provide a structured summary of the key findings, including pain points, preferences, and suggestions for improvement.
This should be based on the following format:

```
Research Study: <>

Subjects:
<>

Summary:
<>

Pain Points:
- <>

Suggestions/Actions:
- <>
```
"""
Define the summary agent and its environment — let's create a mini environment for the summary agent to run in. It will need its own proxy (environment) and an initiate command, which will pull the transcripts (user_prompt) in as the input.
summary_agent = autogen.AssistantAgent(
    name="SummaryAgent",
    llm_config=llm_config,
    system_message=summary_prompt + generate_notice(),
)

summary_proxy = autogen.UserProxyAgent(
    name="summary_proxy",
    code_execution_config={"last_n_messages": 2, "work_dir": "groupchat"},
    system_message="A human admin.",
    human_input_mode="TERMINATE"
)

summary_proxy.initiate_chat(
    summary_agent,
    message=user_prompt,
)
This gives us an output in the form of a report card in Markdown, along with the ability to ask further questions in a Q&A-mode chatbot on top of the findings.