Understanding the Generative AI User


I’ve been in some interesting conversations recently about designing LLM-based tools for end users, and one of the important product design questions this brings up is “what do people know about AI?” This matters because, as any product designer will tell you, you need to understand the user in order to successfully build something for them to use. Imagine if you were building a website and you assumed all of the visitors would be fluent in Mandarin, so you wrote the site in that language, but then it turned out your users all spoke Spanish. It’s like that, because while your site might be amazing, you’ve built it with a fatally flawed assumption and made it significantly less likely to succeed as a result.

So, when we build LLM-based tools for users, we have to step back and look at how those users conceive of LLMs. For example:

  • They may not really know anything about how LLMs work
  • They may not realize that there are LLMs underpinning tools they already use
  • They may have unrealistic expectations of an LLM’s capabilities, because of their experiences with very robustly featured agents
  • They may have a sense of mistrust of or hostility toward LLM technology
  • They may have varying levels of trust or confidence in what an LLM says, based on particular past experiences
  • They may expect deterministic results, even though LLMs don’t provide that

User research is a spectacularly important part of product design, and I think it’s a real mistake to skip that step when we are building LLM-based tools. We can’t assume we know how our particular audience has experienced LLMs in the past, and we especially can’t assume that our own experiences are representative of theirs.

User Profiles

Fortunately, there is some good research on this topic to help guide us. Some archetypes of user perspectives can be found in the 4-Persona Framework developed by Cassandra Jones-VanMieghem, Amanda Papandreou, and Levi Dolan at Indiana University School of Medicine.

They propose (in the context of medicine, but I think it generalizes) these four categories:

Unconscious User (Don’t know/Don’t care)

  • A user who doesn’t really think about AI and doesn’t see it as relevant to their life would fall into this category. They’d naturally have a limited understanding of the underlying technology and wouldn’t have much curiosity to find out more.

Avoidant User (AI is Dangerous)

  • This user has an overall negative perspective on AI and would come to the solution with high skepticism and mistrust. For this user, any AI product offering can have a very detrimental effect on the brand relationship.

AI Enthusiast (AI is Always Helpful)

  • This user has high expectations for AI: they’re enthusiastic about it, but their expectations may be unrealistic. Users who expect AI to take over all drudgery, or to be able to answer any question with perfect accuracy, would fit here.

Informed AI User (Empowered)

  • This user has a realistic perspective, and likely has a generally high level of data literacy. They may use a “trust but verify” strategy, where citations and evidence for an LLM’s assertions are important to them. As the authors point out, this user only calls on AI when it’s useful for a specific task.

Building on this framework, I’d argue that excessively optimistic and excessively pessimistic viewpoints are both often rooted in some deficiency of knowledge about the technology, but they don’t represent the same kind of user at all. The combination of knowledge level and sentiment (both its strength and its qualitative nature) together creates the user profile. My interpretation differs a bit from the authors’, who suggest that the Enthusiasts are well informed; I’d actually argue that unrealistic expectations of AI’s capabilities are often grounded in a lack of knowledge or unbalanced information consumption.

This gives us a lot to think about when designing new LLM solutions. At times, product developers can fall into the trap of assuming that knowledge level is the only axis, forgetting that social sentiment about this technology varies widely and can have just as much influence on how a user receives and experiences these products.
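To make this two-axis idea concrete, here’s a minimal sketch of how user-research survey responses might be bucketed into the four personas. The 1–5 scores and the thresholds are hypothetical illustrations of the knowledge-plus-sentiment combination, not part of the original framework.

```python
# Hypothetical mapping of survey scores onto the 4-Persona Framework:
# two axes, knowledge and sentiment, each rated 1-5. The thresholds
# below are illustrative assumptions only.
from collections import Counter

def assign_persona(knowledge: int, sentiment: int) -> str:
    """Map 1-5 survey scores to a persona label."""
    if knowledge >= 3:
        # Informed users with negative sentiment still behave like Avoidants.
        return "Informed AI User" if sentiment >= 3 else "Avoidant User"
    if sentiment >= 4:
        return "AI Enthusiast"       # low knowledge, high optimism
    if sentiment <= 2:
        return "Avoidant User"       # low knowledge, high pessimism
    return "Unconscious User"        # low knowledge, neutral sentiment

# Tally the distribution across a (made-up) survey sample.
responses = [(2, 5), (4, 4), (1, 1), (2, 3), (5, 2)]  # (knowledge, sentiment)
print(Counter(assign_persona(k, s) for k, s in responses))
```

A real study would of course use validated survey instruments; the point is only that both axes feed the bucketing.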

Why This Happens

It’s worth thinking a bit about the reasons for this broad spectrum of user profiles, and of sentiment specifically. Many other technologies we use regularly don’t encourage as much polarization. LLMs and other generative AI are relatively new to us, so that is certainly part of the issue, but there are qualitative aspects of generative AI that are particularly distinctive and may affect how people respond.

Pinski and Benlian have some interesting work on this subject, noting that key characteristics of generative AI can disrupt the ways in which human-computer interaction researchers have come to expect these relationships to work; I highly recommend reading their article.

Nondeterminism

As computation has become part of our daily lives over the past few decades, we have been able to rely on a certain amount of reproducibility. If you click a key or push a button, the response from the computer will be the same every time, more or less. This imparts a sense of trustworthiness: we know that if we learn the right patterns to achieve our goals, we can rely on those patterns to be consistent. Generative AI breaks this contract, because of the nondeterministic nature of its outputs. The average layperson using technology has little experience with the concept of the same keystroke or request returning unexpected and always-different results, and this understandably breaks the trust they might otherwise have. The nondeterminism exists for good reason, of course, and once you understand the technology it is just another characteristic to work with, but at a less informed stage it can be problematic.
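For readers who want to see the mechanism, here’s a self-contained toy sketch of temperature sampling, the usual source of this nondeterminism. The vocabulary and logits are invented for the demo.

```python
# Toy illustration of temperature sampling: an LLM picks the next token
# by sampling from a probability distribution, and temperature controls
# how much randomness that sampling introduces.
import numpy as np

vocab = ["cat", "dog", "bird"]
logits = np.array([2.0, 1.5, 0.5])  # made-up model scores for the next token

def sample_next_token(temperature: float, rng: np.random.Generator) -> str:
    if temperature == 0:
        # Greedy decoding: always take the highest-scoring token.
        return vocab[int(np.argmax(logits))]
    # Softmax over temperature-scaled logits, then sample.
    scaled = logits / temperature
    probs = np.exp(scaled - scaled.max())
    probs /= probs.sum()
    return vocab[rng.choice(len(vocab), p=probs)]

rng = np.random.default_rng()
print([sample_next_token(0.0, rng) for _ in range(5)])  # identical every time
print([sample_next_token(1.0, rng) for _ in range(5)])  # varies run to run
```

Even at temperature zero, production systems can still vary slightly (batching and floating-point effects play a role), but the sampling step is the piece most users have never encountered anywhere else in their computing lives.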

Inscrutability

This is just another word for “black box,” really. The nature of the neural networks that underlie much of generative AI is that even those of us who work directly with the technology can’t fully explain why a model “does what it does.” We can’t consolidate and explain every neuron’s weighting in every layer of the network, because it’s simply too complex and has too many variables. There are, of course, many useful explainable AI techniques that can help us understand the levers influencing a single prediction, but a broader explanation of the workings of these technologies isn’t realistic. This means we have to live with some level of unknowability, which, for scientists and curious laypeople alike, can be very difficult to accept.
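To illustrate what that single-prediction explainability looks like in practice, here’s a minimal sketch assuming the shap and scikit-learn libraries; the model and dataset are generic stand-ins, not anything specific to LLMs.

```python
# Local explainability: attribute one prediction to per-feature
# contributions with SHAP. The model and dataset are illustrative only.
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X.iloc[[0]])  # explain a single row

# This tells us which features pushed this one prediction up or down,
# but it says nothing mechanistic about the model as a whole.
```

Notice the asymmetry: tools like this illuminate individual predictions, but they don’t give us the global account of the model that users of traditional software implicitly expect.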

Autonomy

The growing push to make generative AI part of semi-autonomous agents seems to be driving us to have these tools operate with less and less oversight, and less control by human users. In some cases, this can be quite useful, but it can also create anxiety. Given what we already know about these tools being nondeterministic and not explainable on a broad scale, autonomy can feel dangerous. If we don’t always know what the model will do, and we don’t fully grasp why it does what it does, some users could be forgiven for saying that this doesn’t feel like a safe technology to allow to operate without supervision. We’re continually working on developing evaluation and testing strategies to try to prevent unwanted behavior, but a certain amount of risk is unavoidable, as is true with any probabilistic technology. On the other side, some of the autonomy of generative AI can create situations where users don’t recognize AI’s involvement in a given task at all. It can silently work behind the scenes, and a user may have no awareness of its presence. This is part of the much larger area of concern where AI output becomes indistinguishable from material created organically by humans.
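One common way to dial the autonomy back is a human-in-the-loop gate, where the agent must get explicit approval before executing any consequential action. Below is a minimal sketch; the tools and the proposed call are hypothetical placeholders for whatever an LLM planning step might produce.

```python
# A minimal human-in-the-loop gate for an agent loop. The tools below
# are hypothetical placeholders; a real agent's planning step (an LLM
# call) would propose the tool name and arguments.
def send_email(to: str, body: str) -> str:
    return f"(pretend) emailed {to}"

def delete_file(path: str) -> str:
    return f"(pretend) deleted {path}"

TOOLS = {"send_email": send_email, "delete_file": delete_file}

def run_with_approval(proposed_tool: str, kwargs: dict) -> str:
    """Execute a proposed tool call only after explicit human sign-off."""
    answer = input(f"Agent wants to run {proposed_tool}({kwargs}). Allow? [y/N] ")
    if answer.strip().lower() != "y":
        return "Action declined by user."
    return TOOLS[proposed_tool](**kwargs)

# e.g., the planning step proposed this call:
print(run_with_approval("send_email", {"to": "a@b.com", "body": "hi"}))
```

The design choice here is simply that the default is inaction: nothing consequential happens unless a human says yes.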

What this means for product

This doesn’t mean that building products and tools that involve generative AI is a nonstarter, of course. It means, as I often say, that we should take a careful look at whether generative AI is fit for the problem or task in front of us, and make sure we’ve considered the risks as well as the possible rewards. This is always the first step: make sure that AI is the right choice and that you’re willing to accept the risks that come with using it.

After that, here’s what I recommend for product designers:

  • Conduct rigorous user research. Find out what the distribution of the user profiles described above looks like in your user base, and plan how the product you’re building will accommodate them. If you have a significant portion of Avoidant users, plan an informational strategy to smooth the way for adoption, and consider rolling things out slowly to avoid a shock to the user base. On the other hand, if you have a lot of Enthusiast users, make sure you’re clear about the boundaries of the functionality your tool will provide, so that you don’t get a “your AI sucks” kind of response. If people expect magical results from generative AI and you can’t provide that, because there are important safety, security, and functional limitations you must abide by, then this will be a problem for your user experience.
  • Build for your users: This may sound obvious, but essentially I’m saying that your user research should deeply influence not only the look and feel of your generative AI product but also its actual construction and functionality. You should come to the engineering tasks with an evidence-based view of what this product needs to be capable of and the different ways your users may approach it.
  • Prioritize education. As I’ve already mentioned, educating your users about whatever solution you’re providing is going to be important, regardless of whether they come in positive or negative. Sometimes we assume that people will “just get it” and we can skip this step, but that’s a mistake. You need to set expectations realistically and preemptively answer the questions a skeptical audience might raise to ensure a positive user experience.
  • Don’t force it. Lately we’re finding that software products we have used happily in the past are adding generative AI functionality and making it mandatory. I’ve written before about how market forces and AI industry patterns are making this happen, but that doesn’t make it less damaging. You should be prepared for some group of users, however small, to want to refuse to use a generative AI tool. This might be because of critical sentiment, or security regulation, or just lack of interest, but respecting that is the right choice to preserve and protect your organization’s good name and relationship with that user. If your solution is helpful, worthwhile, well-tested, and well-communicated, you may be able to increase adoption of the tool over time, but forcing it on people won’t help. (See the sketch after this list for what an opt-out can look like in code.)
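To make that last point concrete, here’s a hypothetical sketch of gating an AI feature behind a per-user preference that defaults to off; the preference name and functions are invented for this example.

```python
# Hypothetical per-user opt-in gate for a generative AI feature.
# The preference store and function names are invented for this sketch.
from dataclasses import dataclass

@dataclass
class UserPrefs:
    ai_summaries_enabled: bool = False  # default OFF: the user opts in

def call_llm_summary(document: str) -> str:
    return "(pretend) LLM-generated summary"  # stand-in for a real LLM call

def summarize(document: str, prefs: UserPrefs) -> str:
    if not prefs.ai_summaries_enabled:
        # Fall back to the non-AI experience instead of forcing the feature.
        return document[:200] + "..."
    return call_llm_summary(document)

print(summarize("A very long report. " * 20, UserPrefs()))
```

The important part is that the non-AI path remains a first-class experience, not a punishment for declining.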

Conclusion

When it comes down to it, a lot of these lessons are good advice for all kinds of technical product design work. However, I want to emphasize how much generative AI changes about how users interact with technology, and the significant shift it represents for our expectations. As a result, it’s more important than ever that we take a really close look at the user and their starting point before launching products like this out into the world. As many organizations and companies are learning the hard way, a new product is a chance to make an impression, but that impression could be terrible just as easily as it could be good. Your opportunities to impress are significant, but so are your opportunities to damage your relationship with users, crush their trust in you, and set yourself up with serious damage control work to do. So, be careful and conscientious from the start! Good luck!


Read more of my work at www.stephaniekirmer.com.


Further Reading

https://scholarworks.indianapolis.iu.edu/items/4a9b51db-c34f-49e1-901e-76be1ca5eb2d

https://www.sciencedirect.com/science/article/pii/S2949882124000227

https://www.nature.com/articles/s41746-022-00737-z

https://www.researchgate.net/profile/Muhammad-Ashraf-Faheem/publication/386330933_Building_Trust_with_Generative_AI_Chatbots_Exploring_Explainability_Privacy_and_User_Acceptance/links/674d7838a7fbc259f1a5c5b9/Building-Trust-with-Generative-AI-Chatbots-Exploring-Explainability-Privacy-and-User-Acceptance.pdf

https://www.tandfonline.com/doi/full/10.1080/10447318.2024.2401249#d1e231

https://www.stephaniekirmer.com/writing/canwesavetheaieconomy
