OpenAI’s ChatGPT Tackles University Accounting Exams

OpenAI recently launched its groundbreaking AI chatbot, GPT-4, which has been making waves in various fields. Scoring in the 90th percentile on the bar exam, passing 13 out of 15 AP exams, and performing near-perfectly on the GRE Verbal test, GPT-4 has been nothing short of extraordinary.

Researchers at Brigham Young University (BYU) and 186 other universities were curious about how OpenAI's technology would perform on accounting exams. They tested the original version, ChatGPT, and found that while there is still room for improvement in the accounting domain, the technology is a game changer that can positively impact the way education is delivered and received.

Since its debut in November 2022, ChatGPT has become the fastest-growing technology platform ever, reaching 100 million users in under two months. Amid the ongoing debate about the role of AI models like ChatGPT in education, lead study author David Wood, a BYU professor of accounting, decided to recruit as many professors as possible to evaluate the AI's performance against actual university accounting students.

ChatGPT vs. Students on Accounting Exams

The research involved 327 co-authors from 186 educational institutions across 14 countries, who contributed 25,181 classroom accounting exam questions. BYU undergraduates also provided 2,268 textbook test bank questions. The questions covered various accounting subfields, such as accounting information systems (AIS), auditing, financial accounting, managerial accounting, and tax. They also varied in difficulty and type.

Although ChatGPT's performance was impressive, students outperformed the AI, with an average score of 76.7% compared with ChatGPT's 47.4%. On 11.3% of questions, ChatGPT scored higher than the student average, particularly excelling in AIS and auditing. However, it struggled with tax, financial, and managerial assessments, possibly due to its difficulty with mathematical processes.

ChatGPT performed better on true/false questions (68.7% correct) and multiple-choice questions (59.5%) but had difficulty with short-answer questions (28.7% to 39.1%). It generally struggled with higher-order questions, sometimes providing authoritative written descriptions for incorrect answers or answering the same question in different ways.

The Future of ChatGPT in Education

Despite its limitations, researchers anticipate that GPT-4 will improve on accounting questions and address the problems they found. The most promising aspect is the chatbot's potential to enhance teaching and learning, such as helping design and test assignments or draft portions of a project.

"This is a disruption, and we need to assess where we go from here," said study coauthor and fellow BYU accounting professor Melissa Larson. "Of course, I'm still going to have TAs, but this is going to force us to use them in different ways."

As AI continues to advance, educators must adapt and find new ways to incorporate these technologies into their teaching methods.
