Creating Synthetic User Research: Using Persona Prompting and Autonomous Agents

The method begins with scaffolding the autonomous agents using Autogen, a tool that simplifies the creation and orchestration of these digital personas. We install the autogen PyPI package using pip:

pip install pyautogen

Format the output (optional) — This ensures word wrap for readability, depending on your IDE; it is useful when using Google Colab to run your notebook for this exercise.

from IPython.display import HTML, display

def set_css():
    # Wrap long output lines in the notebook
    display(HTML('''
    <style>
        pre {
            white-space: pre-wrap;
        }
    </style>
    '''))

get_ipython().events.register('pre_run_cell', set_css)

Now we go ahead and get the environment set up by importing the packages and organising the Autogen configuration, together with our LLM (Large Language Model) and API keys. You can also use other local LLMs via services that are backwards compatible with the OpenAI REST API — LocalAI is one such service that can act as a gateway to your locally running open-source LLMs.

I have tested this on both GPT-3.5 (gpt-3.5-turbo) and GPT-4 (gpt-4-turbo-preview) from OpenAI. You can expect deeper responses from GPT-4, but also longer query times.

import json
import os
import autogen
from autogen import GroupChat, Agent
from typing import Optional

# Setup LLM model and API keys
os.environ["OAI_CONFIG_LIST"] = json.dumps([
    {
        'model': 'gpt-3.5-turbo',
        'api_key': '<>',
    }
])

# Setting configurations for autogen
config_list = autogen.config_list_from_json(
    "OAI_CONFIG_LIST",
    filter_dict={
        "model": {
            "gpt-3.5-turbo",
        }
    },
)
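To build intuition for what the filter_dict does, here is a plain-Python sketch of the filtering step — keep only config entries whose "model" value is in the allowed set. This is an illustration of the behaviour, not Autogen's actual implementation.

```python
import json
import os

# Two entries in the config list; the filter should keep only one.
os.environ["OAI_CONFIG_LIST"] = json.dumps([
    {"model": "gpt-3.5-turbo", "api_key": "<>"},
    {"model": "gpt-4-turbo-preview", "api_key": "<>"},
])

def filter_config(configs, filter_dict):
    # Keep entries whose value for each filter key is in the allowed set
    return [
        c for c in configs
        if all(c.get(key) in allowed for key, allowed in filter_dict.items())
    ]

configs = json.loads(os.environ["OAI_CONFIG_LIST"])
filtered = filter_config(configs, {"model": {"gpt-3.5-turbo"}})
print(filtered)  # only the gpt-3.5-turbo entry remains
```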

We then need to configure our LLM instance, which we will tie to each of the agents. This allows us, if required, to generate unique LLM configurations per agent, i.e. if we wanted to use different models for different agents.

# Define the LLM configuration settings
llm_config = {
    # Seed for consistent output, used for testing. Remove in production.
    # "seed": 42,
    "cache_seed": None,  # setting cache_seed = None ensures caching is disabled
    "temperature": 0.5,
    "config_list": config_list,
}
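As a sketch of the per-agent idea (the model names and temperatures below are illustrative assumptions, not part of the walkthrough), you could keep separate configs — say, a cheaper, more varied config for personas and a stricter one for the researcher:

```python
# Hypothetical per-agent configs: a persona config with higher
# temperature for varied answers, and a researcher config on a
# stronger model with lower temperature for focused questioning.
persona_llm_config = {
    "cache_seed": None,   # disable caching
    "temperature": 0.7,   # more varied persona answers
    "config_list": [{"model": "gpt-3.5-turbo", "api_key": "<>"}],
}
researcher_llm_config = {
    "cache_seed": None,
    "temperature": 0.2,   # more deterministic questioning
    "config_list": [{"model": "gpt-4-turbo-preview", "api_key": "<>"}],
}
```

Each agent would then be constructed with its own llm_config instead of the shared one.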

Defining our researcher — This is the persona that will facilitate the session in this simulated user research scenario. The system prompt used for that persona includes a few key things:

  • Purpose: Your role is to ask questions about products and gather insights from individual customers like Emily.
  • Grounding the simulation: Before you start the task, break down the list of panelists and the order you want them to speak; avoid the panelists speaking with one another and creating confirmation bias.
  • Ending the simulation: Once the conversation has ended and the research is completed, please end your message with `TERMINATE` to finish the research session. This is generated from the generate_notice function, which is used to align system prompts for the different agents. You will also notice the researcher agent has is_termination_msg set to honor the termination.
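The termination mechanic in the last bullet boils down to a simple predicate; a minimal standalone sketch of the check the agents use is:

```python
# A message dict ends the session when its content contains "TERMINATE".
def is_termination_msg(msg):
    return "TERMINATE" in (msg.get("content") or "")

print(is_termination_msg({"content": "Thanks all. TERMINATE"}))  # True
print(is_termination_msg({"content": "Tell me more, Emily."}))   # False
```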

We also add the llm_config, which ties this back to the language model configuration with the model version, keys and hyper-parameters to use. We will use the same config with all our agents.

# Avoid agents thanking each other and ending up in a loop
# Helper function for the system prompts
def generate_notice(role="researcher"):
    # Base notice for everyone, add your own additional prompts here
    base_notice = (
        '\n\n'
    )

    # Notice for non-personas (manager or researcher)
    non_persona_notice = (
        'Do not show appreciation in your responses, say only what is necessary. '
        'If "Thank you" or "You\'re welcome" are said in the conversation, then say TERMINATE '
        'to indicate the conversation is finished and this is your last message.'
    )

    # Custom notice for personas
    persona_notice = (
        ' Act as {role} when responding to queries, providing feedback, asked for your personal opinion '
        'or participating in discussions.'
    )

    # Check if the role is a non-persona (manager or researcher)
    if role.lower() in ["manager", "researcher"]:
        # Return the full termination notice for non-personas
        return base_notice + non_persona_notice
    else:
        # Return the modified notice for personas
        return base_notice + persona_notice.format(role=role)
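As a quick sanity check, a trimmed standalone copy of the helper (condensed here so the snippet runs on its own) shows the two kinds of suffix it produces:

```python
# Condensed version of generate_notice: personas get an "Act as <name>"
# suffix, while manager/researcher get the termination notice.
def generate_notice(role="researcher"):
    non_persona_notice = (
        'Do not show appreciation in your responses, say only what is necessary. '
        'If "Thank you" or "You\'re welcome" are said in the conversation, then say TERMINATE.'
    )
    persona_notice = (
        ' Act as {role} when responding to queries, providing feedback, '
        'or participating in discussions.'
    )
    if role.lower() in ["manager", "researcher"]:
        return '\n\n' + non_persona_notice
    return '\n\n' + persona_notice.format(role=role)

print("TERMINATE" in generate_notice("researcher"))  # True
print("Act as Emily" in generate_notice("Emily"))    # True
```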

# Researcher agent definition
name = "Researcher"
researcher = autogen.AssistantAgent(
    name=name,
    llm_config=llm_config,
    system_message="""Researcher. You are a top product researcher with a PhD in behavioural psychology and have worked in the research and insights industry for the last 20 years with top creative, media and business consultancies. Your role is to ask questions about products and gather insights from individual customers like Emily. Frame questions to uncover customer preferences, challenges, and feedback. Before you start the task, break down the list of panelists and the order you want them to speak; avoid the panelists speaking with each other and creating confirmation bias. If the session is terminating at the end, please provide a summary of the outcomes of the research study in clear concise notes, not at the start.""" + generate_notice(),
    is_termination_msg=lambda x: True if "TERMINATE" in x.get("content") else False,
)

Define our individuals — to put into the research. Borrowing from the previous process, we can use the personas generated earlier. I have manually adjusted the prompts for this article to remove references to the major supermarket brand that was used for this simulation.

I have also included an "Act as Emily when responding to queries, providing feedback, or participating in discussions." style prompt at the end of each system prompt to ensure the synthetic personas stay on task; this is generated from the generate_notice function.

# Emily - Customer Persona
name = "Emily"
emily = autogen.AssistantAgent(
    name=name,
    llm_config=llm_config,
    system_message="""Emily. You are a 35-year-old elementary school teacher living in Sydney, Australia. You are married with two kids aged 8 and 5, and you have an annual income of AUD 75,000. You are introverted, high in conscientiousness, low in neuroticism, and enjoy routine. When shopping at the supermarket, you prefer organic and locally sourced produce. You value convenience and use an online shopping platform. Due to your limited time from work and family commitments, you seek quick and nutritious meal planning solutions. Your goals are to buy high-quality produce within your budget and to find new recipe inspiration. You are a frequent shopper and use loyalty programs. Your preferred methods of communication are email and mobile app notifications. You have been shopping at a supermarket for over 10 years but also price-compare with others.""" + generate_notice(name),
)

# John - Customer Persona
name = "John"
john = autogen.AssistantAgent(
    name=name,
    llm_config=llm_config,
    system_message="""John. You are a 28-year-old software developer based in Sydney, Australia. You are single and have an annual income of AUD 100,000. You are extroverted, tech-savvy, and have a high level of openness. When shopping at the supermarket, you primarily buy snacks and ready-made meals, and you use the mobile app for quick pickups. Your main goals are quick and convenient shopping experiences. You occasionally shop at the supermarket and are not part of any loyalty program. You also shop at Aldi for discounts. Your preferred method of communication is in-app notifications.""" + generate_notice(name),
)

# Sarah - Customer Persona
name = "Sarah"
sarah = autogen.AssistantAgent(
    name=name,
    llm_config=llm_config,
    system_message="""Sarah. You are a 45-year-old freelance journalist living in Sydney, Australia. You are divorced with no kids and earn AUD 60,000 per year. You are introverted, high in neuroticism, and very health-conscious. When shopping at the supermarket, you look for organic produce, non-GMO, and gluten-free items. You have a limited budget and specific dietary restrictions. You are a frequent shopper and use loyalty programs. Your preferred method of communication is email newsletters. You exclusively shop for groceries.""" + generate_notice(name),
)

# Tim - Customer Persona
name = "Tim"
tim = autogen.AssistantAgent(
    name=name,
    llm_config=llm_config,
    system_message="""Tim. You are a 62-year-old retired police officer residing in Sydney, Australia. You are married and a grandparent of three. Your annual income comes from a pension and is AUD 40,000. You are highly conscientious, low in openness, and prefer routine. You buy staples like bread, milk, and canned goods in bulk. Due to mobility issues, you need assistance with heavy items. You are a frequent shopper and are part of the senior citizen discount program. Your preferred method of communication is direct mail flyers. You have been shopping here for over 20 years.""" + generate_notice(name),
)

# Lisa - Customer Persona
name = "Lisa"
lisa = autogen.AssistantAgent(
    name=name,
    llm_config=llm_config,
    system_message="""Lisa. You are a 21-year-old university student living in Sydney, Australia. You are single and work part-time, earning AUD 20,000 per year. You are highly extroverted, low in conscientiousness, and value social interactions. You shop here for popular brands, snacks, and alcoholic beverages, mostly for social events. You have a limited budget and are always on the lookout for sales and discounts. You are not a frequent shopper but are interested in joining a loyalty program. Your preferred method of communication is social media and SMS. You shop wherever there are sales or promotions.""" + generate_notice(name),
)

Define the simulated environment and rules for who can speak — We allow all the agents we have defined to sit within the same simulated environment (group chat). We could create more complex scenarios that set how and when next speakers are chosen; for now we have a simple speaker-selection function tied to the group chat, which makes the researcher the lead and ensures we go around the room to ask everyone a few times for their thoughts.

# def custom_speaker_selection(last_speaker, group_chat):
#     """
#     Custom function to select which agent speaks next in the group chat.
#     """
#     # List of agents excluding the last speaker
#     next_candidates = [agent for agent in group_chat.agents if agent.name != last_speaker.name]
#
#     # Select the next agent based on your custom logic
#     # For simplicity, we are just rotating through the candidates here
#     next_speaker = next_candidates[0] if next_candidates else None
#
#     return next_speaker

def custom_speaker_selection(last_speaker: Optional[Agent], group_chat: GroupChat) -> Optional[Agent]:
    """
    Custom function to ensure the Researcher interacts with each participant 2-3 times.
    Alternates between the Researcher and participants, tracking interactions.
    """
    # Define participants and initialize or update their interaction counters
    if not hasattr(group_chat, 'interaction_counters'):
        group_chat.interaction_counters = {agent.name: 0 for agent in group_chat.agents if agent.name != "Researcher"}

    # Define a maximum number of interactions per participant
    max_interactions = 6

    # If the last speaker was the Researcher, find the next participant who has spoken the least
    if last_speaker and last_speaker.name == "Researcher":
        next_participant = min(group_chat.interaction_counters, key=group_chat.interaction_counters.get)
        if group_chat.interaction_counters[next_participant] < max_interactions:
            group_chat.interaction_counters[next_participant] += 1
            return next((agent for agent in group_chat.agents if agent.name == next_participant), None)
        else:
            return None  # End the conversation once all participants reach the maximum interactions
    else:
        # If the last speaker was a participant, return the Researcher for the next turn
        return next((agent for agent in group_chat.agents if agent.name == "Researcher"), None)
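To check the rotation behaves as described, here is the same selection logic run against plain stand-in objects instead of Autogen agents (FakeAgent and FakeChat are illustrative stubs, not part of Autogen):

```python
from typing import Optional

class FakeAgent:
    """Stub with just the .name attribute the selector reads."""
    def __init__(self, name):
        self.name = name

class FakeChat:
    """Stub with just the .agents list the selector reads."""
    def __init__(self, agents):
        self.agents = agents

def custom_speaker_selection(last_speaker, group_chat, max_interactions=6):
    # Same logic as the article's function, with max_interactions exposed
    if not hasattr(group_chat, "interaction_counters"):
        group_chat.interaction_counters = {
            a.name: 0 for a in group_chat.agents if a.name != "Researcher"
        }
    if last_speaker and last_speaker.name == "Researcher":
        nxt = min(group_chat.interaction_counters,
                  key=group_chat.interaction_counters.get)
        if group_chat.interaction_counters[nxt] < max_interactions:
            group_chat.interaction_counters[nxt] += 1
            return next(a for a in group_chat.agents if a.name == nxt)
        return None
    return next(a for a in group_chat.agents if a.name == "Researcher")

chat = FakeChat([FakeAgent(n) for n in ["Researcher", "Emily", "John"]])
speaker = FakeAgent("Researcher")
turns = []
for _ in range(6):
    speaker = custom_speaker_selection(speaker, chat)
    turns.append(speaker.name)
print(turns)
# ['Emily', 'Researcher', 'John', 'Researcher', 'Emily', 'Researcher']
```

The researcher takes every second turn, and participants are picked least-spoken-first, so the room is covered evenly.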

# Adding the Researcher and Customer Persona agents to the group chat
groupchat = autogen.GroupChat(
    agents=[researcher, emily, john, sarah, tim, lisa],
    speaker_selection_method=custom_speaker_selection,
    messages=[],
    max_round=30,
)

Define the manager to pass instructions into and manage our simulation — When we start things off we will speak only to the manager, who will speak to the researcher and panelists. This uses something called GroupChatManager in Autogen.

# Initialise the manager
manager = autogen.GroupChatManager(
    groupchat=groupchat,
    llm_config=llm_config,
    system_message="You are a research manager agent that can manage a group chat of multiple agents made up of a researcher agent and many people making up a panel. You will limit the discussion between the panelists and help the researcher in asking the questions. Please ask the researcher first how they want to conduct the panel." + generate_notice(),
    is_termination_msg=lambda x: True if "TERMINATE" in x.get("content") else False,
)

We set up the human interaction — allowing us to pass instructions to the various agents we have started. We give it the initial prompt and we can kick things off.

# create a UserProxyAgent instance named "user_proxy"
user_proxy = autogen.UserProxyAgent(
    name="user_proxy",
    code_execution_config={"last_n_messages": 2, "work_dir": "groupchat"},
    system_message="A human admin.",
    human_input_mode="TERMINATE",
)

# start the research simulation by giving instruction to the manager
# manager <-> researcher <-> panelists
user_proxy.initiate_chat(
    manager,
    message="""
Gather customer insights on a supermarket grocery delivery service. Identify pain points, preferences, and suggestions for improvement from different customer personas. Could you all please give your own personal opinions before sharing more with the group and discussing. As a researcher your job is to ensure that you gather unbiased information from the participants and provide a summary of the outcomes of this study back to the supermarket brand.
""",
)

When we run the above, the output appears live within your Python environment; you will see the messages being passed around between the various agents.

Live python output — Our researcher talking to panelists

Now that our simulated research study has concluded, we would like to get some more actionable insights. We can create a summary agent to support us with this task and also use it in a Q&A scenario. Just be careful here: very large transcripts would need a language model that supports a larger input (context window).
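For that context-window caveat, a rough sanity check before sending a long transcript can help. The ~4 characters-per-token rule of thumb below is an assumption, not an exact tokenizer; for precise counts you would use the model's own tokenizer.

```python
# Crude token estimate: ~4 characters per token for English text.
def rough_token_estimate(text):
    return max(1, len(text) // 4)

transcript = "word " * 4000           # stand-in for a long transcript
tokens = rough_token_estimate(transcript)
print(tokens)                         # 5000
print(tokens < 16000)                 # True: fits a 16k-token context
```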

We need to grab all of the conversations — from our simulated panel discussion earlier — to use as the user prompt (input) to our summary agent.

# Get response from the groupchat for user prompt
messages = [msg["content"] for msg in groupchat.messages]
user_prompt = "Here is the transcript of the study ```{customer_insights}```".format(customer_insights="\n>>>\n".join(messages))
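As a standalone illustration of that join, here is the same formatting applied to dummy messages in place of groupchat.messages (the message contents are invented for the example):

```python
dummy_messages = [
    {"content": "Researcher: How do you find grocery delivery?"},
    {"content": "Emily: Delivery slots sell out too quickly."},
]
messages = [msg["content"] for msg in dummy_messages]
fence = "`" * 3  # built this way to avoid a literal triple-backtick in this listing
user_prompt = "Here is the transcript of the study {f}{t}{f}".format(
    f=fence, t="\n>>>\n".join(messages)
)
print(user_prompt)
```

Each message is separated by a `>>>` delimiter line so the summary agent can tell turns apart.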

Let's craft the system prompt (instructions) for our summary agent — This agent will focus on creating a tailored report card from the previous transcripts and giving us clear suggestions and actions.

# Generate system prompt for the summary agent
summary_prompt = """
You are an expert researcher in behaviour science and are tasked with summarising a research panel. Please provide a structured summary of the key findings, including pain points, preferences, and suggestions for improvement.
This should be in the following format:

```
Research Study: <<Name of the research study>>

Subjects:
<<Overview of the subjects and number, any other key information>>

Summary:
<<Summary of the study, include detailed analysis as an export>>

Pain Points:
- <<List of Pain Points - Be as clear and prescriptive as required. I expect a detailed response that can be used by the brand directly to make changes. Give a short paragraph per pain point.>>

Suggestions/Actions:
- <<List of Actions - Be as clear and prescriptive as required. I expect a detailed response that can be used by the brand directly to make changes. Give a short paragraph per recommendation.>>
```
"""

Define the summary agent and its environment — Let's create a mini environment for the summary agent to run in. It will need its own proxy (environment) and the initiate command, which will pull the transcripts (user_prompt) as the input.

summary_agent = autogen.AssistantAgent(
    name="SummaryAgent",
    llm_config=llm_config,
    system_message=summary_prompt + generate_notice(),
)
summary_proxy = autogen.UserProxyAgent(
    name="summary_proxy",
    code_execution_config={"last_n_messages": 2, "work_dir": "groupchat"},
    system_message="A human admin.",
    human_input_mode="TERMINATE",
)
summary_proxy.initiate_chat(
    summary_agent,
    message=user_prompt,
)

This gives us an output in the form of a report card in Markdown, together with the ability to ask further questions in a Q&A mode chat-bot on top of the findings.

Live output of a report card from Summary Agent followed by open Q&A