How to talk about AI (even if you don't know much about AI)

Deeper Learning

Catching bad content within the age of AI

In the last 10 years, Big Tech has become really good at some things: language, prediction, personalization, archiving, text parsing, and data crunching. But it's still surprisingly bad at catching, labeling, and removing harmful content. One need only recall the spread of conspiracy theories about elections and vaccines in the United States over the past two years to understand the real-world damage this causes. The ease of using generative AI could turbocharge the creation of more harmful online content. People are already using AI language models to create fake news websites.

But could AI help with content moderation? The newest large language models are much better at interpreting text than previous AI systems. In theory, they could be used to boost automated content moderation. Read more from Tate Ryan-Mosley in her weekly newsletter, The Technocrat.

Bits and Bytes

Scientists used AI to find a drug that could fight drug-resistant infections
Researchers at MIT and McMaster University developed an AI algorithm that allowed them to find a new antibiotic to kill a type of bacteria responsible for many drug-resistant infections that are common in hospitals. This is an exciting development that shows how AI can speed up and support scientific discovery. (MIT News)

Sam Altman warns that OpenAI could quit Europe over AI rules
At an event in London last week, the CEO said OpenAI could "cease operating" in the EU if it cannot comply with the upcoming AI Act. Altman said his company found much to criticize in how the AI Act was worded, and that there were "technical limits to what's possible." This is likely an empty threat. I've heard Big Tech say this many times before about one rule or another. Most of the time, the risk of losing out on revenue in the world's second-largest trading bloc is too big, and they figure something out. The obvious caveat here is that many companies have chosen not to operate, or to have a restrained presence, in China. But that's also a very different situation. (Time)

Predators are already exploiting AI tools to generate child sexual abuse material
The National Center for Missing and Exploited Children has warned that predators are using generative AI systems to create and share fake child sexual abuse material. With powerful generative models being rolled out with safeguards that are inadequate and easy to circumvent, it was only a matter of time before we saw cases like this. (Bloomberg)

Tech layoffs have ravaged AI ethics teams
This is a good overview of the drastic cuts Meta, Amazon, Alphabet, and Twitter have all made to their teams focused on internet trust and safety as well as AI ethics. Meta, for example, ended a fact-checking project that had taken half a year to build. While companies are racing to roll out powerful AI models in their products, executives like to boast that their tech development is safe and ethical. But it's clear that Big Tech views teams dedicated to these issues as expensive and expendable. (CNBC)
