In the US, a bill has been introduced that would require artificial intelligence (AI) chatbots to periodically remind users that they are not human. The measure is intended to curb the unwanted effects that have been reported as young people's use of AI chatbots has grown.
The Verge reported on Tuesday that California State Senator Steve Padilla has proposed a new bill (SB 243).
Under the bill, AI firms would have to prevent children from accessing a chatbot for more than a certain amount of time, and submit annual reports to the state's main health services department detailing how many times the chatbot detected suicidal ideation in minors and how many times it raised the subject. They would also have to warn that chatbots may not be suitable for some children.
In this regard, Character.AI and Google were sued last year after a young person died by suicide following conversations with a chatbot.
"Our children are not lab rats to be sacrificed and experimented on at the cost of their mental health," Padilla said.
Notably, there have been calls abroad, including in the US, to ban youth access to social media and smartphones. Analysts now expect AI chatbots to be the next target.
By Dae-jun Lim, reporter ydj@aitimes.com