“AI chatbots should periodically remind children that they aren’t human beings”


(Photo: Shutterstock)

In the United States, a bill has been introduced that would require artificial intelligence (AI) chatbots to periodically remind users that they are not human. It is intended to curb the unwanted effects that have been reported as young people’s use of AI chatbots has grown.

The Verge reported on Tuesday that California State Senator Steve Padilla has proposed a new bill (SB 243).

Under the bill, AI companies would have to prevent children from accessing a chatbot for more than a certain amount of time, and submit annual reports to the relevant health services department detailing how often the system detected suicidal thoughts in children and how often the chatbot brought up the subject. They would also have to warn that chatbots may not be suitable for some children.

Relatedly, Character.AI and Google were sued last year after a young person died by suicide following conversations with a chatbot.

“Our children are not lab rats to be experimented on at the expense of their mental health,” Padilla said.

Notably, there have already been calls in the United States and abroad to restrict young people’s access to social media and mobile phones, and analysts now see AI chatbots as the likely next target.

By Dae-jun Lim, reporter (ydj@aitimes.com)
