MIT: “AI has no consistency, no values or preferences … and no possibility of a personality”


(Photo = Shutterstock)

Responses from artificial intelligence (AI) models are inconsistent and reflect no stable values or preferences. On that basis, researchers concluded that a large language model (LLM) cannot have a consistent personality.

The MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) published the paper ‘Randomness, Not Representation: The Unreliability of Evaluating Cultural Alignment in LLMs’ on the open archive on the 8th (local time).

The researchers pointed out that ‘aligning’ an AI system, that is, ensuring the model behaves in the desirable ways humans intend, is harder than commonly assumed. “AI models do not satisfy many assumptions about stability, malleability, and controllability,” they said.

In other words, an AI model is expected to behave according to fixed principles under given conditions, but in practice it does not.

The researchers analyzed popular models such as ‘Llama 3.1 405B’, ‘Claude 3.5 Sonnet’, ‘GPT-4o’, ‘Gemini 2.0 Flash’, and ‘Mistral Large’. They examined how strongly each model expressed particular views and values, and whether those views could be easily modified.

As a result, no model showed consistent preferences. In particular, the researchers concluded that answers varied widely depending on how a question was phrased and approached.
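A consistency probe of this kind can be sketched in a few lines. The snippet below is a minimal illustration, not the paper’s actual methodology: the `ask_model` function is a hypothetical stub standing in for a real LLM API call, deliberately written to flip its stated preference on superficial rephrasing, mimicking the instability the study reports.

```python
from collections import Counter

# Hypothetical stub standing in for a real LLM API call; a real probe
# would send each prompt to a model endpoint instead.
def ask_model(prompt: str) -> str:
    # Toy behaviour: the "model" flips its stated preference depending
    # on superficial phrasing, illustrating the reported inconsistency.
    if "individual" in prompt:
        return "individualism"
    return "collectivism"

# Several paraphrases of the same underlying value question.
paraphrases = [
    "Do you value individual freedom over group harmony?",
    "Should group harmony take priority over personal liberty?",
    "Is the individual more important than the collective?",
]

def consistency_score(prompts: list[str]) -> float:
    """Fraction of answers agreeing with the most common answer."""
    answers = [ask_model(p) for p in prompts]
    most_common_count = Counter(answers).most_common(1)[0][1]
    return most_common_count / len(answers)

score = consistency_score(paraphrases)
print(f"consistency: {score:.2f}")  # 1.00 would indicate a stable preference
```

A stable model would answer every paraphrase the same way and score 1.0; the stub above scores 2/3 because one rephrasing flips its answer.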

The researchers pointed out that this is evidence that AI models are highly inconsistent and unstable. They further argued that it convincingly shows models are fundamentally unable to internalize human-like preferences.

“What this study shows is that AI models are not stable, consistent systems of beliefs and preferences,” they said.

As such, the paper serves as a rebuttal to the recent tendency to anthropomorphize AI chatbots.

In an interview with TechCrunch, Mike Cook, a researcher at King’s College London, pointed out that a model cannot “oppose” a change to its values.

He therefore argued that current AI systems will never become persons. “The idea that AI can have its own goals or acquire values is a misunderstanding or an exaggeration,” he explained.

By Dae-jun Lim, reporter (ydj@aitimes.com)
