Artificial intelligence is reshaping industries, economies and even our daily lives. But how we perceive AI – and what we choose to do with it – depends heavily on our perspective. Here’s a breakdown of the main ways people view AI, showing just how nuanced the conversation is. People tend to fall into one of three broad camps, and within each there are several distinct types. If you work in change management, understanding this range of views is key to crafting a successful strategy.
The enthusiasts
Some people see the potential of AI and feel excited about its possibilities. This general position comes in a few varieties.
Idealists often believe that AI can create a more equitable and sustainable future. In their view, collaboration between humans and AI can ensure ethical, inclusive and long-term benefits. These optimists see AI as a transformative force for good, capable of solving humanity’s biggest challenges: revolutionizing industries, improving our quality of life, fostering innovation, closing opportunity gaps and empowering marginalized communities. For example, they’re excited about the ways AI can accelerate medical breakthroughs and make education more accessible worldwide.
Technophiles, the classic early adopters, are also starry-eyed about AI, but they place less emphasis on the greater good. They see AI as a cutting-edge marvel worth pursuing for its innovation alone. They’re most excited about pushing the boundaries of the technology, and they don’t tend to spend much time on its societal impacts. To them, AI is the ultimate frontier, and advancing it is how we’ll unlock humanity’s next great era.
Opportunists also want AI to move forward fast, but mainly because of the opportunities for personal gain. For them, AI is a chance to capitalize on emerging trends and create value quickly. They want to seize AI-driven opportunities for profit, and they sometimes overlook ethical concerns along the way. In their view, AI is the next gold rush, and getting in early is all that matters if you want to make a fortune.
The critics
Pessimists feel that AI poses significant risks: job displacement, ethical dilemmas and even existential threats. They highlight its downsides, such as bias, loss of privacy and misuse, and they fear its exploitative potential in surveillance, misinformation and widening inequality. Their central concern is that, left unchecked, AI will exacerbate societal problems, consolidate power and profit at the expense of privacy and democracy, and erode public trust in institutions of all kinds. They view AI development as driven more by profit than by the public good. The opportunists described above are their worst nightmare.
Skeptics, in turn, are less worried about exploitation and more convinced that AI’s capabilities are simply overhyped. They question whether AI can deliver on its promises, and they tend to be the people pointing out its technical flaws. In their view, AI might automate some tasks, but it’s nowhere close to matching human reasoning.
The realists
Somewhere between these two schools of thought, we find the realists. Realists believe that AI is neither good nor bad: it’s a tool that reflects the intentions of its creators and users. They want to understand AI’s capabilities and limitations and implement it responsibly. They know AI can enhance productivity, but they also know it won’t replace human creativity entirely.
Some of these realists are chiefly pragmatists. They take the position that AI can be a practical solution to specific problems, not a silver bullet. They’re mostly interested in tangible applications and measurable return on investment; to them, AI is like any other tool, a means to an end. They know AI can automate repetitive tasks, but they understand that human oversight is critical for success.
Others are especially focused on AI’s effects on human beings. These humanists feel that AI development should proceed cautiously to avoid unintended harm. They favour slow, deliberate implementation that prioritizes sustainability and societal well-being, and they believe AI should remain subordinate to human values and creativity. Their emphasis is on the ways AI can enhance, rather than replace, human agency and decision-making: it can augment our work, but it should never harm people or the environment, or take away our ability to make meaningful choices.
Each of these perspectives highlights a different dimension of AI. I’m not arguing that any one of them is entirely right or entirely wrong. The important thing, in my view, is that people working in change management consider what each perspective brings to the table so that they can strike the right balance between seizing opportunities and addressing concerns.