In an experiment built around a modified version of the classic game Tetris, researchers have uncovered insights into how unfair allocation decisions, whether made by a human or an AI, can shape social dynamics.

The study, conducted by Cornell University's Ann S. Bowers College of Computing and Information Science, sheds light on the perceptions and behaviors of individuals affected by allocation decisions made by either humans or artificial intelligence.


Allocation Decisions

Associate Professor Malte Jung led the study to explore the relationships between people impacted by allocation decisions, an area often overlooked in algorithmic fairness research. 

While most studies focus on the algorithm or decision itself, Jung and his team delved into the social consequences of these decisions.

"We are starting to see a lot of situations in which AI makes decisions on how resources should be distributed among people," explained Jung.

"We want to understand how that influences the way people perceive one another and behave towards each other. We see more and more evidence that machines mess with the way we interact with each other."

Houston B. Claure, the first author of the research paper titled "The Social Consequences of Machine Allocation Behavior: Fairness, Interpersonal Perceptions, and Performance," explained that their previous study involving a robot's allocation decisions led them to further explore how individuals react to machines' preferences.

To streamline their investigations, the team turned to Tetris, the iconic block-stacking video game that has long served as a research tool for understanding human behavior and cognition.

By developing a two-player version called Co-Tetris, Claure created a platform where participants played the game collaboratively.

Claure employed open-source software to create an "allocator" system that determined how turns were distributed between the players. The allocator was presented as either a human or an AI, and participants were told which one was making the decisions.

The researchers varied the allocation conditions: either one player received 90% of the turns while the other received just 10% (the "less" condition for that player), or the turns were split evenly at 50% each (the "equal" condition).
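The turn-splitting scheme described above can be sketched as a simple deterministic allocator. This is a minimal illustration, not the study's actual software; the function name and interface are hypothetical.

```python
import random

def allocate_turns(num_turns, share_a, seed=None):
    """Assign each turn to player 'A' or 'B'.

    share_a is the fraction of turns given to player A:
    0.9 models the unequal (90/10) condition, 0.5 the equal one.
    The exact counts are fixed; only the ordering is shuffled.
    """
    n_a = round(num_turns * share_a)
    turns = ['A'] * n_a + ['B'] * (num_turns - n_a)
    random.Random(seed).shuffle(turns)  # interleave the two players' turns
    return turns

# Unequal condition: player A gets 90 of 100 turns, player B gets 10.
unequal = allocate_turns(100, share_a=0.9, seed=1)
# Equal condition: a 50/50 split.
equal = allocate_turns(100, share_a=0.5, seed=1)
```

Whether the sequence of turns comes from such a fixed schedule or from a live decision-maker, the participant on the short end of a 90/10 split experiences the same scarcity, which is what the study's "less" condition captures.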

The results revealed that individuals who received fewer turns were keenly aware of their partner's advantage, irrespective of whether the allocator was a human or an AI. 
 
Surprisingly, the researchers discovered that perceptions of dominance differed based on the allocator. When an AI performed the allocation, the player receiving more turns saw their partner as less dominant, whereas dominance perceptions were unaffected when a human did the allocation.


Machine Allocation Behavior

The researchers coined the term "machine allocation behavior" to describe the unique behavior resulting from a machine making allocation decisions.

They also found that equal allocation of turns did not necessarily translate into improved gameplay and performance. On the contrary, unequal allocation often led to better scores, as a strong player receiving most of the blocks contributed to overall team success.

Rene Kizilcec, a co-author of the study, emphasized the importance of considering the long-term effects of AI decisions on individuals, especially in scenarios where continuous decisions are made.

As AI tools become increasingly integrated into everyday life, such as AI tutors in education, understanding how people perceive the competence and fairness of these systems over time is crucial.
 
This study opens up new avenues for research into the intricate relationship between AI decisions, human perception, and social dynamics.

By investigating the impact of unfair allocation in a familiar and engaging context like Tetris, the researchers have shed light on the complexities of AI-human interactions and the need for further exploration in this field.

The study's findings were published in Computers in Human Behavior. 



ⓒ 2024 TECHTIMES.com All rights reserved. Do not reproduce without permission.