Bloomberg’s AI Experiment: A Journey Through the Ups and Downs
Bloomberg, a titan of the financial news sector, is diving headlong into artificial intelligence (AI) to enhance its journalism. But as with any pioneering venture, the path hasn’t been entirely smooth.
The Learning Curve of AI Summaries
Since the feature launched in January, Bloomberg has had to correct more than three dozen AI-generated article summaries. One notable incident occurred recently when the financial powerhouse reported on President Trump’s auto tariffs. While the main article accurately stated that Trump would announce the tariffs that day, the AI’s bullet-point summary got the timeline wrong regarding when the broader tariff action would kick in.
This wasn’t an isolated case. Other news organizations are also experimenting with AI: Gannett attaches AI-generated summaries to its articles, and The Washington Post offers a tool called “Ask the Post” that answers readers’ questions by drawing on its published stories.
A Broader Perspective on AI in Journalism
Bloomberg is hardly alone in its missteps. The Los Angeles Times recently pulled an AI-generated feature from an opinion piece after it mischaracterized the Ku Klux Klan, failing to identify it as a racist organization—an alarming oversight for a major outlet.
Despite these challenges, Bloomberg stands by its AI initiatives, with a spokesperson noting that around 99% of AI-generated summaries currently meet its editorial standards. The company says it is committed to transparency, clearly flagging any updates or corrections to stories. Journalists retain full oversight and can modify or remove any summary that falls short of their expectations. The overarching aim is for AI to complement, rather than replace, the human touch in journalism.
The Balancing Act: AI vs. Human Insight
Bloomberg rolled out its AI-generated summaries on January 15, placing three bullet points atop articles to capture each story’s essence. Editor-in-Chief John Micklethwait explained the rationale in a recently published essay: readers appreciate being able to grasp a story’s gist quickly, while journalists worry that their full narratives may be overlooked in favor of brief summaries.
Micklethwait accurately pointed out that “an AI summary is only as good as the story it is based on.” And that’s where human journalists retain their irreplaceable value, curating and providing the richness of information that AI alone cannot generate.
Acknowledging Shortcomings
The AI errors have included muddled timelines, misattributions, and flawed statistics. For example, an AI summary on a March 6 article about tariffs misstated when Trump had imposed tariffs on Canadian goods, placing them in the previous year rather than the current one. Another March correction addressed a summary that conflated different types of sustainable funds, producing inaccurate figures.
In the recent tariff announcement case, Bloomberg promptly issued a correction clarifying the timeline error in the AI summary, underscoring its commitment to accuracy.
Toward a More Refined Experience
Despite the hiccups, reader feedback on Bloomberg’s AI summaries has generally been positive, and the organization continues to refine the experience. As the technology matures, the blending of AI and journalism promises to reshape how we consume news.
Conclusion
As we venture further into the digital age, the integration of AI in journalism is paving the way for a new narrative. The landscape is still evolving, and with that evolution come both opportunities and challenges. The AI Buzz Hub team is excited to see where these breakthroughs take us. Want to stay in the loop on all things AI? Subscribe to our newsletter or share this article with your fellow enthusiasts.