ChatGPT is about to revolutionize the economy. We need to decide what that looks like.


Power struggle

When Anton Korinek, an economist at the University of Virginia and a fellow at the Brookings Institution, got access to the new generation of large language models such as ChatGPT, he did what many of us did: he began playing around with them to see how they might help his work. He carefully documented their performance in a paper in February, noting how well they handled 25 "use cases," from brainstorming and editing text (very useful) to coding (pretty good with some help) to doing math (not great).

ChatGPT did explain one of the most fundamental principles in economics incorrectly, says Korinek: "It screwed up really badly." But the mistake, easily spotted, was quickly forgiven in light of the benefits. "I can tell you that it makes me, as a cognitive worker, more productive," he says. "Hands down, no question for me that I'm more productive when I use a language model."

When GPT-4 came out, he tested its performance on the same 25 questions that he documented in February, and it performed far better. There were fewer instances of making stuff up; it also did much better on the math assignments, says Korinek.

Because ChatGPT and other AI bots automate cognitive work, as opposed to physical tasks that require investments in equipment and infrastructure, a boost to economic productivity could happen far more quickly than in past technological revolutions, says Korinek. "I think we may see a greater boost to productivity by the end of the year, and certainly by 2024," he says.

Who will control the future of this amazing technology?

What's more, he says, in the long run, the way AI models can make researchers like himself more productive has the potential to drive technological progress.

That potential of large language models is already turning up in research in the physical sciences. Berend Smit, who runs a chemical engineering lab at EPFL in Lausanne, Switzerland, is an expert on using machine learning to discover new materials. Last year, after one of his graduate students, Kevin Maik Jablonka, showed some interesting results using GPT-3, Smit asked him to demonstrate that GPT-3 is, in fact, useless for the kinds of sophisticated machine-learning studies his group does to predict the properties of compounds.

“He failed completely,” jokes Smit.

It turns out that after being fine-tuned for a few minutes with a few relevant examples, the model performs as well as advanced machine-learning tools specially developed for chemistry in answering basic questions about things like the solubility of a compound or its reactivity. Simply give it the name of a compound, and it can predict various properties based on its structure.
