In September I took part in a panel at the African Journalism Education Network conference. The most interesting moment came when members of the audience were asked if they didn’t use AI — and why.
A desire for friction
One person worried about the loss of friction that AI represented: using AI tools increased the risk that they would complete tasks too easily, without the productive struggle that would help them learn.
Low friction is a recurring challenge in teaching, learning and journalism: as access to information — and each other — has become easier through search engines, mobile phones and email/social media, expectations and behaviours have changed.
This lack of friction means less time for strategic choices about what information to use and how to source it, or who to approach and how.
It can lead to sloppiness and impatience, and reduce opportunities for developing problem-solving skills and persistence.
Done right, however, using AI can actually reintroduce friction into a process: this is what happened when I introduced an AI diary into my teaching last year, leading to an increase in reflection and problem-solving.
Hybrid destination-prompting can also be used to design friction into the process.
Concern over boundaries
Another attendee was concerned about crossing the boundaries that we set for students. “How,” he argued, “can we ask them not to use AI if we use it ourselves?”
One challenge here is that it is difficult to enforce a boundary if you don’t know when it’s been crossed, and that is especially true if you haven’t used AI yourself and developed a feel for its output.
Another challenge is that those boundaries differ between news organisations and universities — and are constantly shifting.
Part of the role of journalism study is to critically interrogate those boundaries and the arguments around them in order to know where and when they should be enforced.
For example, through experimenting with AI we might identify that the most important boundaries are determined by risk, truthfulness (e.g. being up to date) or bias. We might also identify good and bad practice, or where boundaries are not clearly defined.
A desire to work hard
One person enjoyed working hard on projects, and did not want AI to do that work for her.
There are two concepts tied up in this sentiment: job satisfaction, and effort.
AI can increase the risk that less effort is expended on our work. But this is just one choice. Another is to invest the effort elsewhere.
The inventions of other labour-saving devices provide useful analogies here. Once the washing machine was invented, for example, a choice opened up: we could continue to work hard to wash clothes by hand, exercising certain muscles and maintaining that skill — or we could use the time to exercise other muscles or develop other skills.
So using AI doesn’t mean you don’t work hard — you will probably just work hard on something else, such as sourcing and conducting interviews, verification and factchecking, writing and editing — or even learning AI-based skills such as prompt design.
And this is where job satisfaction plays a role.
A reflection by one of my students in the conclusion to their AI diary last year was a particularly good example of this: “After I’d become comfortable with the AI,” she wrote, “it was no longer enjoyable to use because I wasn’t mentally stimulated by my work.”
As a result of this, alongside other concerns, she made a conscious, critical decision to use it less.
Equally, someone may find the stimulation provided by their work fades to the point where they prefer to use AI to free up time to focus on more challenging parts of reporting.
A desire for expertise
“I want to be an expert,” said another person. The implication here was that you can only become an expert in something if you don’t take shortcuts, i.e. use AI.
This is something I’ve written about in my posts on journey and destination prompts.
Yes, AI can offer a tempting shortcut, and research suggests that when it is used in this way we retain less information and show lower cognitive activity (the same applies, to a lesser degree, to using search engines).
But that isn’t the only way that AI can be used.
Journey prompting — asking for advice on a process, such as learning — can be used to help design a more effective strategy for becoming an expert, for example.
Role play prompts are a particular example of this: asking the AI to act as a mentor can help support the process of building expertise in an area.
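To sketch what that might look like — and this is an illustrative prompt of my own, not a fixed formula — a mentor-style journey prompt might read: “Act as a mentor in data journalism. Rather than giving me answers, set me one small task at a time, ask me questions about my process, and point out where I need more practice before moving on.” The point is to structure the interaction around the learner’s process rather than the finished product.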
And AI’s translation abilities — not just between languages, but between specialist vocabularies and levels of literacy — are particularly suited to getting to grips with a field where jargon and opaque documentation can act as barriers to learning.
It is important to remember that AI is actually just an interface to information (its training material). And other interfaces — reading a book or being taught by an expert — are also ‘shortcuts’ to developing expertise (compared to, for example, practice, trial and error).
In fact, almost all learning is a trade-off between experience and efficiency.
And as part of that trade-off we need to recognise the limitations of each approach that we are using: books are helpful but not as effective as experiencing a process ourselves; experts are more interactive but we may find them hard to understand, or their experience may be limited; AI is extremely flexible, but the quality and reliability of its output are variable and need further checking and exploration.
Privacy
“What is it learning about me?” was the final objection and a reminder that surveillance capitalism — where you become the product — is an important consideration when using these tools.
There are ways to reduce the degree to which a large language model ‘learns’ from your activity, such as turning off memory or paying for pro versions of AI tools, but this is certainly an area to keep an eye on in the coming years, alongside the data we provide through our search behaviour, browsers and phones.
