Home Artificial Intelligence What to anticipate from the approaching yr in AI

What to anticipate from the approaching yr in AI

0
What to anticipate from the approaching yr in AI

I also had loads of time to reflect on the past yr. There are such a lot of more of you reading The Algorithm than after we first began this text, and for that I’m eternally grateful. Thanks for joining me on this wild AI ride. Here’s a cheerleading pug as slightly present! 

So what can we expect in 2024? All signs point to there being immense pressure on AI corporations to point out that generative AI can earn a living and that Silicon Valley can produce the “killer app” for AI. Big Tech, generative AI’s biggest cheerleaders, is betting big on customized chatbots, which is able to allow anyone to grow to be a generative-AI app engineer, with no coding skills needed. Things are already moving fast: OpenAI is reportedly set to launch its GPT app store as early as this week. We’ll also see cool latest developments in AI-generated video, an entire lot more AI-powered election misinformation, and robots that multitask. My colleague Will Douglas Heaven and I shared our 4 predictions for AI in 2024 last week—read the complete story here. 

This yr can even be one other huge yr for AI regulation world wide. In 2023 the primary sweeping AI law was agreed upon within the European Union, Senate hearings and executive orders unfolded within the US, and China introduced specific rules for things like recommender algorithms. If last yr lawmakers agreed on a vision, 2024 will likely be the yr policies begin to morph into concrete motion. Along with my colleagues Tate Ryan-Mosley and Zeyi Yang, I’ve written a bit that walks you thru what to anticipate in AI regulation in the approaching yr. Read it here. 

But whilst the generative-AI revolution unfolds at a breakneck pace, there are still some big unresolved questions that urgently need answering, writes Will. He highlights problems around bias, copyright, and the high cost of constructing AI, amongst other issues. Read more here. 

My addition to the list could be generative models’ huge security vulnerabilities. Large language models, the AI tech that powers applications equivalent to ChatGPT, are very easy to hack. For instance, AI assistants or chatbots that may browse the web are very prone to an attack called indirect prompt injection, which allows outsiders to regulate the bot by sneaking in invisible prompts that make the bots behave in the best way the attacker wants them to. This might make them powerful tools for phishing and scamming, as I wrote back in April. Researchers have also successfully managed to poison AI data sets with corrupt data, which may break AI models for good. (In fact, it’s not at all times a malicious actor attempting to do that. Using a latest tool called Nightshade, artists can add invisible changes to the pixels of their art before they upload it online in order that if it’s scraped into an AI training set, it may cause the resulting model to interrupt in chaotic and unpredictable ways.) 

Despite these vulnerabilities, tech corporations are in a race to roll out AI-powered products, equivalent to assistants or chatbots that may browse the net. It’s fairly easy for hackers to govern AI systems by poisoning them with dodgy data, so it’s only a matter of time until we see an AI system being hacked in this manner. That’s why I used to be pleased to see NIST, the US technology standards agency, raise awareness about these problems and offer mitigation techniques in a latest guidance published at the tip of last week. Unfortunately, there’s currently no reliable fix for these security problems, and way more research is required to know them higher.

AI’s role in our societies and lives will only grow larger as tech corporations integrate it into the software all of us rely on each day, despite these flaws. As regulation catches up, keeping an open, critical mind in the case of AI is more essential than ever.

Deeper Learning

How machine learning might unlock earthquake prediction

LEAVE A REPLY

Please enter your comment!
Please enter your name here