
Almost all conversations about AI come down to these hopes and fears: that at its best, AI can help us reflect on our humanity; at its worst, it can lead us to forget it, or to subjugate it.
When AI is dismissed as flawed, it is often out of a concern that it will make us less human, or redundant.
The problem with this approach is that it can overlook the very real problems, and risks, in being human.
When people talk about the opportunities in using AI, it is often because they hope it will address the very human failings of ignorance, bias, and error, or simply a lack of time.
The problem with this approach is that it overlooks the very real problems, and risks, in removing tasks from a human workflow, including deskilling and the loss of job satisfaction.
So every debate on the technology should come back to this question: are we applying it (or dismissing it) in a way that leads us to ignore our humanity — or in a way that forces us to address our very human strengths and weaknesses?
