One of the biggest concerns over the use of generative AI tools like ChatGPT is their environmental impact. But what is that impact — and what strategies are there for reducing it? Here is what we know so far — and some suggestions for good practice.
What exactly is the environmental impact of using generative AI? It’s not an easy question to answer, as the MIT Technology Review’s James O’Donnell and Casey Crownhart found when they set out to find some answers.
“The common understanding of AI’s energy consumption,” they write, “is full of holes.”
As universities adapt to a post-ChatGPT era, many journalism assessments have tried to address the widespread use of AI by asking students to declare and reflect on their use of the technology in some form of critical reflection, evaluation or report accompanying their work. But having been there and done that, I didn’t think it worked.
So this year — my third time round teaching generative AI to journalism students — I made a big change: instead of asking students to reflect on their use of AI in a critical evaluation alongside a portfolio of journalism work, I ditched the evaluation entirely.
A new AI function is being added to Google Sheets that could make most other functions redundant. But is it any good? And what can it be used for? Here’s what I’ve learned in the first week…
The AI function avoids the Clippy-like annoyances of Gemini in Sheets
AI has been built into Google Sheets for some time now in the Clippy-like form of Gemini in Sheets. But Google Sheets' AI function is different.
Available to a limited number of users for now, it allows you to incorporate AI prompts directly into a formula rather than having to rely on Gemini to suggest a formula using existing functions.
At the most basic level, that means the AI function can be used instead of functions like SUM, AVERAGE or COUNT by simply including a prompt like "Add the numbers in these cells" (or "calculate an average for" or "count"). But more interesting applications come in areas such as classification, translation, analysis and extraction, especially where a task requires a little more 'intelligence' than a more literal-minded function can offer.
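To illustrate, here is a sketch of what those classification, translation and extraction uses might look like in practice. This assumes the documented `=AI(prompt, [range])` syntax; the feature is in limited release, so exact arguments and behaviour may vary.

```
=AI("Classify this headline as 'politics', 'sport' or 'other'", A2)
=AI("Translate this text into English", B2)
=AI("Extract the name of any organisation mentioned in this sentence", C2)
```

The optional second argument points the prompt at a cell (or range), and the formula can be dragged down a column to apply the same prompt to each row, just like any other function.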
I put the AI function through its paces with a series of classification challenges to see how it performed. Here’s what happened — and some ways in which the risks of generative AI need to be identified and addressed.
Tools like ChatGPT might seem to speak your language, but they actually speak a language of probability and educated guesswork. You can make yourself better understood — and get more professional results — with a few simple prompting techniques. Here are the key ones to add to your toolkit. (also in Portuguese)
Strong factual storytelling relies on good idea development. In this video, part of a series of video posts made for students on the MA in Data Journalism at Birmingham City University, I explain how to generate good ideas by avoiding common mistakes, applying professional techniques and considering your audience.
Machine learning and Natural Language Processing (NLP) are two forms of artificial intelligence that have been used for years within journalism. In this video, part of a series of video posts made for students on the MA in Data Journalism at Birmingham City University, I explain how both technologies have been used in journalism, the challenges that journalists face in using them, and the various concepts and jargon you will come across in the field.
SRF Data example (note: this uses a Random Forest algorithm, which employs a collection of ‘decision trees’, and is not a decision tree as stated in the video)
Having outlined the range of ways in which artificial intelligence has been applied to journalistic investigations in a previous post, some clear challenges emerge. In this second part of a forthcoming book chapter, I look at those challenges and other themes: from accuracy and bias to resources and explainability.
Investigative journalists have been among the earliest adopters of artificial intelligence in the newsroom, and pioneered some of its most compelling — and award-winning — applications. In this first part of a draft book chapter, I look at the different branches of AI and how they’ve been used in a range of investigations.
Automation was key to the work of data journalism pioneers such as Adrian Holovaty — and it’s becoming increasingly central once again. This video, made for students on the MA in Data Journalism at Birmingham City University, explores the variety of roles that automation plays in data journalism; new concepts such as robot journalism, natural language generation (NLG) and structured journalism; and how data journalists’ editorial role becomes “delegated to the future” through the creation of algorithms.
I’m speaking at the Broadcast Journalism Teaching Council‘s summer conference this week about artificial intelligence — specifically generative AI. It’s a deceptively huge area that presents journalism educators with a lot to adapt to in their teaching, so I decided to put those challenges in order of priority.
Each of these priorities could form the basis for part of a class, or a whole module – and you may have a different ranking. But at least you know which one to do first…
Priority 1: Understand how generative AI works
The first challenge in teaching about generative AI is that most people misunderstand what it actually is — so the first priority is to tackle those misunderstandings.