
Can we trust AI with our bodies?


But as AI enters ever more sensitive areas, we should keep our wits about us and remember the limitations of the technology. Generative AI systems are excellent at predicting the next likely word in a sentence, but they don't grasp the broader context or meaning of what they're generating. Neural networks are competent pattern seekers, and can help us make new connections between things, but they are also easy to trick and break, and prone to bias. 

The biases of AI systems in settings such as health care are well documented. But as AI enters new arenas, I'm on the lookout for the inevitable weird failures that will crop up. Will the food AI systems recommend skew American? How healthy will the recipes be? And will the workout plans take into account physiological differences between female and male bodies, or will they default to male-oriented plans? 

And most important, it's crucial to remember that these systems have no knowledge of what exercise feels like, what food tastes like, or what we mean by "high quality." AI workout programs might come up with dull, robotic exercises. AI recipe makers tend to suggest combinations that taste horrible, or are even poisonous. Mushroom foraging books are likely riddled with misinformation about which varieties are toxic and which are not, which could have catastrophic consequences. 

Humans also have a tendency to place too much trust in computers. It's only a matter of time before "death by GPS" is replaced by "death by AI-generated mushroom foraging book." Labeling AI-generated content is a good place to start. In this new age of AI-powered products, it will be more important than ever for the wider public to understand how these powerful systems do and don't work. And to take what they say with a pinch of salt. 

Deeper Learning

How generative AI is boosting the spread of disinformation and propaganda

Governments and political actors around the world are using AI to create propaganda and censor online content. In a new report released by Freedom House, a human rights advocacy group, researchers documented the use of generative AI in 16 countries "to sow doubt, smear opponents, or influence public debate."

Downward spiral: The annual report, Freedom on the Net, scores and ranks countries according to their relative degree of internet freedom, as measured by a host of factors such as internet shutdowns, laws limiting online expression, and retaliation for online speech. The 2023 edition, released on October 4, found that global internet freedom declined for the 13th consecutive year, driven in part by the proliferation of artificial intelligence. Read more from Tate Ryan-Mosley in her weekly newsletter on tech policy, The Technocrat.

Bits and Bytes

Predictive policing software is terrible at predicting crimes
A New Jersey police department used an algorithm called Geolitica that was right less than 1% of the time, according to a new investigation. We've known for years how deeply flawed and racist these systems are. It's incredibly frustrating that public money is still being wasted on them. (The Markup and Wired)
