Reckoning with generative AI’s uncanny valley


Mental models and antipatterns

Mental models are an important concept in UX and product design, but they need to be more readily embraced by the AI community. At one level, mental models often don't appear because they're routine patterns of our assumptions about an AI system. This is something we discussed at length in the process of putting together the latest volume of the Thoughtworks Technology Radar, a biannual report based on our experiences working with clients all around the world.

For example, we called out complacency with AI-generated code and replacing pair programming with generative AI as two practices we believe practitioners must avoid as the popularity of AI coding assistants continues to grow. Both emerge from poor mental models that fail to acknowledge how this technology actually works and its limitations. The consequence is that the more convincing and "human" these tools become, the harder it is for us to acknowledge how the technology actually works and the limitations of the "solutions" it provides us.

In fact, for those deploying generative AI into the world, the risks are similar, maybe even more pronounced. While the intent behind such tools is usually to create something convincing and usable, if those tools mislead, trick, or even merely unsettle users, their usefulness and value evaporate. It's no surprise that laws, such as the EU AI Act, which requires deepfake creators to label content as "AI generated," are being passed to address these problems.

It's worth noting that this isn't just a problem for AI and robotics. Back in 2011, our colleague Martin Fowler wrote about how certain approaches to building cross-platform mobile applications can create an uncanny valley, "where things work mostly like… native controls but there are just enough tiny differences to throw users off."

Specifically, Fowler wrote something we think is instructive: "different platforms have different ways they expect you to use them that alter the entire experience design." The point here, applied to generative AI, is that different contexts and different use cases all come with different sets of assumptions and mental models that change where users might fall into the uncanny valley. These subtle differences change one's experience or perception of a large language model's (LLM) output.

For instance, for a drug researcher who wants vast amounts of synthetic data, accuracy at a micro level may be unimportant; for a lawyer trying to understand legal documentation, accuracy matters a great deal. In fact, falling into the uncanny valley may just be the signal to step back and reassess your expectations.

Shifting our perspective

The uncanny valley of generative AI might be troubling, even something we want to minimize, but it should also remind us of generative AI's limitations and encourage us to rethink our perspective.

There have been some interesting attempts to do this across the industry. One that stands out is Ethan Mollick, a professor at the University of Pennsylvania, who argues that AI shouldn't be understood as good software but instead as "pretty good people."
