It’s high time for more AI transparency
But what really stands out to me is the extent to which Meta is throwing its doors open. It will allow the broader AI community to download the model and tweak it. That could help make it safer and more efficient. And crucially, it could demonstrate the benefits of transparency over secrecy when it comes to the inner workings of AI models. This could not be more timely, or more important.

Tech companies are rushing to release their AI models into the wild, and we’re seeing generative AI embedded in more and more products. But the most powerful models on the market, such as OpenAI’s GPT-4, are tightly guarded by their creators. Developers and researchers pay to get limited access to such models through a website and don’t know the details of their inner workings.

This opacity could lead to problems down the line, as highlighted in a recent, non-peer-reviewed paper that caused some buzz last week. Researchers at Stanford University and UC Berkeley found that GPT-3.5 and GPT-4 performed worse at solving math problems, answering sensitive questions, generating code, and doing visual reasoning than they had a couple of months earlier.

These models’ lack of transparency makes it hard to say exactly why that might be, but regardless, the results should be taken with a pinch of salt, Princeton computer science professor Arvind Narayanan writes in his assessment. They are more likely caused by “quirks of the authors’ evaluation” than evidence that OpenAI made the models worse. He thinks the researchers failed to take into account that OpenAI has fine-tuned the models to perform better, and that this has unintentionally caused some prompting techniques to stop working as they did in the past.

This has some serious implications. Companies that have built and optimized their products to work with a certain iteration of OpenAI’s models could “100%” see them suddenly glitch and break, says Sasha Luccioni, an AI researcher at the startup Hugging Face. When OpenAI fine-tunes its models this way, products that have been built using very specific prompts, for example, might stop working the way they did before. Closed models lack accountability, she adds. “If you have a product and you change something in the product, you’re supposed to tell your customers.”
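To make that failure mode concrete, here is a minimal, hypothetical sketch of the kind of brittle integration Luccioni describes. The call_model function and the fixed “Answer:” output format are invented for illustration; the point is only that a silent fine-tune upstream can change a model’s output style and break a product’s parser built around one specific prompt.

```python
# Hypothetical sketch: a product built around one specific prompt format.
# call_model stands in for a hosted, closed-model API; it is not a real
# library call.

def call_model(prompt: str) -> str:
    # Before a silent fine-tune, the model obeyed the format instruction
    # and returned "Answer: yes" or "Answer: no". After the update it
    # answers in prose instead, which this stub simulates:
    return "Yes, that transaction looks suspicious to me."

def is_fraud(transaction: str) -> bool:
    reply = call_model(
        "Reply in the exact format 'Answer: yes' or 'Answer: no'.\n"
        f"Is this transaction fraudulent? {transaction}"
    )
    # Brittle assumption baked into the product: the reply always starts
    # with "Answer: ". A silent model update upstream breaks this parser.
    if not reply.startswith("Answer: "):
        raise ValueError(f"unexpected model output: {reply!r}")
    return reply.removeprefix("Answer: ").strip() == "yes"

print(is_fraud("$9,999 wired to a brand-new account at 3 a.m."))
# Raises ValueError once the provider's fine-tune changes the output style.
```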

An open model like LLaMA 2 will at least make it clear how the company designed the model and what training techniques it used. Unlike OpenAI, Meta has shared the entire recipe for LLaMA 2, including details on how it was trained, which hardware was used, how the data was annotated, and which techniques were used to mitigate harm. People doing research and building products on top of the model know exactly what they are working with, says Luccioni.

“When you have access to the model, you can do all sorts of experiments to make sure that you get better performance or you get less bias, or whatever it is you’re looking for,” she says.
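The experiments Luccioni mentions are possible because anyone can load the weights locally. Here is a minimal sketch using the Hugging Face transformers library, assuming you have accepted Meta’s LLaMA 2 license and been granted access to the gated meta-llama/Llama-2-7b-hf checkpoint:

```python
# A minimal sketch of experimenting with open weights via the Hugging Face
# transformers library. Assumes approved access to the gated LLaMA 2 repo;
# a locally downloaded copy of the weights works the same way.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-2-7b-hf"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# With the weights in hand you can inspect activations, probe for bias,
# or fine-tune on your own data; a closed API allows none of that.
inputs = tokenizer("The capital of France is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=10)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```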

Ultimately, the open vs. closed debate around AI boils down to who calls the shots. With open models, users have more power and control. With closed models, you’re at the mercy of their creator.

Having a big company like Meta release such an open, transparent AI model feels like a potential turning point in the generative AI gold rush.
