Newspaper group Gannett has stopped using artificial intelligence (AI) to create high school sports reports after many LedeAI-written articles drew criticism on social media.

The Columbus Dispatch, one of Gannett's publications, published these AI-written sports articles earlier this month, setting off an unanticipated flood of jeering on social media. Critics said the pieces' unusual language, lack of sports expertise, repetitive content, and shortage of crucial details made them read like automated writing.

Other Gannett newspapers, including the Louisville Courier-Journal, AZ Central, Florida Today, and the Milwaukee Journal Sentinel, published similar LedeAI stories recently, raising concerns about the reliability of AI-generated content across the media company.

AI Writes Boring Articles

CNN reported that many of the AI-generated sports reports used similar wording, discussing "high school football action," describing instances in which one team "took victory away from" another, and referring to "cruise-control" victories. The articles also repeated game dates, adding to the monotony.

The LedeAI trial was reportedly put on hold in all of Gannett's local markets where the service was being used due to the social media backlash. Amid the controversy, Gannett reiterated its commitment to exploring automation and AI to improve its journalism and reader experience while ensuring that all material complies with strict journalistic standards.

As of this reporting, LedeAI has yet to comment on the issue.


Although Gannett's AI project is meant to speed up content production, the company's recent experience has highlighted the risks of relying solely on AI-generated material without human oversight. The controversy follows Gannett's December decision to cut 6% of its news division's staff.

This incident reflects a broader trend among news organizations adopting AI tools. CNET also briefly paused its AI writing project after its AI-produced articles required numerous corrections. Additionally, according to a report published by Mercury News, several media firms have blocked access to programs like OpenAI's ChatGPT to prevent their content from being improperly used to train AI models.

Experts Discourage the Use of AI in Journalism

Although AI may increase productivity, concerns about accuracy and disinformation highlight the need for cautious adoption. Balancing automation with journalistic ethics remains a challenge for the news business as AI advances rapidly.

Given that generative AI has the potential to increase productivity and cut down on manual effort, Gannett earlier noted that it aims to integrate the technology into its content creation system. However, the company stresses that human oversight is essential to the quality and accuracy of AI-generated content.

Experts point out that generative AI's drawbacks, such as its capacity to produce false or inaccurate information, present significant difficulties in an industry that places a premium on accuracy and factual reliability.

Northwestern University associate professor Nicholas Diakopoulos cautioned against using AI models for automated publishing. "Where I am right now is I wouldn't recommend these models for any journalistic use case where you're publishing automatically to a public channel," he said, as quoted by Reuters.



ⓒ 2024 TECHTIMES.com All rights reserved. Do not reproduce without permission.