AI is at an inflection point, Fei-Fei Li says


Two things have happened, Li explains. Generative AI has caused the general public to wake up to AI technology, she says, because it's behind concrete tools, such as ChatGPT, that people can try out for themselves. And as a result, businesses have realized that AI technology such as text generation can make them money, and they have begun rolling these technologies out in more products for the real world. "Because of that, it impacts our world in a more profound way," Li says. 

Li is one of the tech leaders we interviewed for the latest issue of MIT Technology Review, dedicated to the biggest questions and hardest problems facing the world. We asked big thinkers in their fields to weigh in on the underserved issues at the intersection of technology and society. Read what other tech luminaries and AI heavyweights, such as Bill Gates, Yoshua Bengio, Andrew Ng, Joelle Pineau, Emily Bender, and Meredith Broussard, had to say here. 

In her newly published memoir, Li recounts how she went from an immigrant living in poverty to the AI heavyweight she is today. It's a touching look into the sacrifices immigrants must make to achieve their dreams, and an insider's telling of how artificial-intelligence research rose to prominence. 

When we spoke, Li told me she has her eyes set firmly on the future of AI and the hard problems that lie ahead for the field. 

Here are some highlights from our conversation. 

Why she disagrees with some of the AI "godfathers" about catastrophic AI risks: Other AI heavyweights, such as Geoffrey Hinton, Yann LeCun, and Yoshua Bengio, have been jousting in public about the risks of AI systems and how to govern the technology safely. Hinton, in particular, has been vocal about his concerns that AI could pose an existential risk to humanity. Li is less convinced. "I absolutely respect that. I think, intellectually, we should discuss all this. But if you ask me as an AI leader… I feel there are other risks that are what I'd call catastrophic risks to society that are more pressing and urgent," she says. Li highlights practical, "rubber meets the road" problems such as misinformation, workforce disruption, bias, and privacy infringements. 

Hard problems: Another major AI risk Li is concerned about is the increasingly concentrated power and dominance of the tech industry at the expense of investment in science and technology research in the public sector. "AI is so expensive: hundreds of millions of dollars for one large model, making it impossible for academia. Where does that leave science for the public good? Or diverse voices beyond the consumer? America needs a moon-shot moment in AI and to significantly invest in public-sector research and compute capabilities, including a National AI Research Resource and labs similar to CERN. I firmly believe AI will help the human condition, but not without a coordinated effort to ensure America's leadership in AI," she told us.

The problems with ImageNet: ImageNet, which Li created, has been criticized for being biased and containing unsafe or harmful images, which in turn can lead to biases and harmful outcomes in AI systems. Li admits the database is not perfect. "It takes people to call out the imperfections of ImageNet and to call out fairness issues. That's why we need diverse voices," she says. "It takes a village to make technology better." 
