
Artificial Intelligence and the Tetris Conundrum


In a pioneering study led by Cornell University, researchers explored algorithmic fairness in a two-player version of the classic game Tetris. The experiment was founded on a simple yet profound premise: players who received fewer turns throughout the game perceived their opponent as less likable, regardless of whether a human or an algorithm was responsible for allocating the turns.

This approach marked a significant shift away from the standard focus of algorithmic fairness research, which predominantly examines the algorithm or the decision itself. Instead, the Cornell University study set out to illuminate the relationships among the people affected by algorithmic decisions. This choice of focus was driven by the real-world implications of AI decision-making.

“We are starting to see a lot of situations in which AI makes decisions on how resources should be distributed among people,” observed Malte Jung, associate professor of information science at Cornell University, who spearheaded the study. As AI becomes increasingly integrated into various facets of life, Jung highlighted the need to understand how these machine-made decisions shape interpersonal interactions and perceptions. “We see more and more evidence that machines mess with the way we interact with each other,” he commented.

The Experiment: A Twist on Tetris

To conduct the study, Houston Claure, a postdoctoral researcher at Yale University, used open-source software to develop a modified version of Tetris. This version, dubbed Co-Tetris, allowed two players to take turns working together. The players’ shared goal was to control falling geometric blocks, stacking them neatly without leaving gaps and preventing the blocks from piling to the top of the screen.

In a twist on the standard game, an “allocator”—either a human or an AI—determined which player would take each turn. Turns were allocated so that a given player received either 90%, 10%, or 50% of them.
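The paper does not publish its allocator implementation, but the turn-splitting mechanic is easy to picture. The sketch below is a minimal illustration, not the study's actual code: the function name `allocate_turns` and its parameters are hypothetical. It pre-builds a turn sequence matching a target share for player A (90%, 50%, or 10%) and shuffles it so the two players' turns are interleaved.

```python
import random

def allocate_turns(share_a, num_turns, seed=0):
    """Build a turn order giving player 'A' roughly `share_a` of the
    turns and player 'B' the rest, interleaved in random order.
    (Hypothetical sketch; not the study's actual allocator.)"""
    a_turns = round(share_a * num_turns)           # turns owed to player A
    turns = ["A"] * a_turns + ["B"] * (num_turns - a_turns)
    random.Random(seed).shuffle(turns)             # interleave the turns
    return turns

# Example: a 90/10 split over 100 turns
order = allocate_turns(0.9, 100)
print(order.count("A"), order.count("B"))  # 90 10
```

A deterministic alternating schedule would work just as well for the 50/50 condition; shuffling simply keeps the skewed conditions from degenerating into one long uninterrupted run for the favored player.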

The Concept of Machine Allocation Behavior

The researchers hypothesized that players receiving fewer turns would recognize the imbalance. What they did not anticipate, however, was that players’ feelings toward their co-player would remain largely the same regardless of whether a human or an AI was the allocator. This unexpected result led the researchers to coin the term “machine allocation behavior.”

This concept refers to the observable behavior people exhibit in response to allocation decisions made by machines. It parallels the established phenomenon of “resource allocation behavior,” which describes how people react to decisions about resource distribution. The emergence of machine allocation behavior demonstrates how algorithmic decisions can shape social dynamics and interpersonal interactions.

Fairness and Performance: A Surprising Paradox

The study did not stop at exploring perceptions of fairness, however. It also delved into the relationship between allocation and gameplay performance. Here, the findings were somewhat paradoxical: fairness in turn allocation did not necessarily lead to better performance. In fact, equal allocation of turns often resulted in worse game scores compared with situations where the allocation was unequal.

Explaining this, Claure said, “If a strong player receives most of the blocks, the team is going to do better. And if one person gets 90%, eventually they’ll get better at it than if two average players split the blocks.”

In our evolving world, where AI is increasingly integrated into decision-making processes across various fields, this study offers valuable insights. It provides an intriguing exploration of how algorithmic decision-making can influence perceptions, relationships, and even game performance. By highlighting the complexities that arise when AI intersects with human behaviors and interactions, the study prompts us to ponder crucial questions about how we can better understand and navigate this dynamic, tech-driven landscape.
