People are alarmed and impassioned by the advancements that we’re witnessing in AI. And much of the discussion we’re hearing is dominated by polarized views:
Either…
1. “AI is going to be the best thing ever! It will automate everything and move us light-years ahead!”
Or…
2. “AI is going to be the end of the world!”
Elon Musk falls into this camp, as demonstrated during his interview with British Prime Minister Rishi Sunak last quarter. Musk offered a rather apocalyptic interpretation of the impact of AI: “[t]here is a safety concern, especially with humanoid robots — at least a car can’t chase you into a building or up a tree.”
The rise of AI is surrounded by an unhelpful hint of hysteria, which is obfuscating the matter and hindering needed action. As usual in the case of two extreme, opposing views, a considered middle ground is more realistic. The reality of AI’s impact will likely fall somewhere in between a miracle and an apocalypse.
You’ll have heard about the UK’s first AI Safety Summit, held at Bletchley Park on 1–2 November 2023.
The summit marked an endeavor by Sunak to get everyone into the same room, to agree there should be an international approach to solving AI. But that inherently assumes that AI is a problem and must be solved accordingly…
The fact of the matter is that AI generates value. It’s also a fact that advancements in AI give rise to certain ethical concerns, which need to be addressed.
The somewhat abstract, alarmist discussions we’re hearing at the moment are of limited practical value. We need a plan. And we need to act on it.
The BBC’s commentary on the Sunak/Musk interview was astute: “[a]mid all the philosophizing, there was little in the way of new announcements about how the technology will be employed and regulated in the UK — aside from the prime minister's promise that AI could be used to improve the government’s own website.”
Which was frankly rather comical. The state of the government’s website is hardly the country’s foremost concern around AI…
One correct conclusion derived from the Sunak/Musk interview was the idea that we need a “referee” to monitor the ramifications of advancements in AI.
This referee should be a specialist organization dedicated to addressing AI. And fortunately for us, we don’t have to reinvent the wheel.
In the UK we already have an organization responsible for General Data Protection Regulation (GDPR). I’m referring to the Information Commissioner’s Office (ICO). As per its mission statement, “The ICO exists to empower you through information.”
The ICO is arguably among the UK’s most effective regulators. Generating fines, not just headlines, the ICO imposes real penalties upon organizations and individuals who transgress data privacy laws. It’s proven itself to be a powerful force in actively preventing and punishing illegal and unethical activity around data.
There’s an existing awareness of the ICO and the purpose it serves, so it already lays claim to a level of authority. And by virtue of its current function, it already understands many of the issues for which AI will be responsible.
However, the ICO’s remit does not currently extend beyond the realm of personal data.
As a practical solution, the remit of the ICO needs to be broadened, and we need to get legislation in place that will enable it to act upon its findings.
AI has set new rules of engagement, which make it more critical than ever to protect organizations' data, not just individuals'. Generative AI (e.g. ChatGPT) raises evident concerns about protecting IP and preventing plagiarism.
We see a lot of headlines about safeguarding personal data. As we should. But that’s not enough. We need to ensure the ethical usage of all data. And the ICO is arguably the right organization for the job.
In addition to the ICO, we should be bringing the Alan Turing Institute into the conversation. They have the expertise to contribute constructively. And their thoughts on the value of AI would be highly useful.
It’s not just about taking a purely restrictive approach to the use of AI and its possibilities. It’s about more positive action — helping businesses to understand how we can leverage the significant value represented by AI. This process will likely necessitate government grants to encourage businesses to use AI more effectively.
The British economy grows on the back of tech; historically, employment has risen in tandem with technological advancements. However, many people are worried about the implications of AI, the latest technological wave, for employment.
During his interview with Musk, Sunak recognized the widespread “anxiety” about jobs being rendered defunct by AI.
Musk took this a step further, declaring “[t]here will come a point where no job is needed — you can have a job if you want one for personal satisfaction but AI will do everything.”
The future is uncertain, and the claim that AI will ever entirely supersede human contributions is debatable.
But in any case, rather than worrying about a possible vision of an apocalyptic future, we need to take practical action, to benefit people right now. And the way to do this is to equip people with the skills relevant to a workplace re-envisioned by AI.
With the rise of apprenticeships, we’ve seen concentrated efforts to close the gap between unemployment and employment. But to perpetuate these efforts in the new post-AI workscape, we need to make sure that apprenticeships, and indeed all educational courses, acknowledge AI. To be relevant and future-proofed, all educational and training programs, irrespective of their specific subject area, must enlighten students about ethics, bias, and the responsible use of AI. All courses need to impart, on some level, an understanding of how AI can help us.
Because it most certainly can help us. AI is presenting us with the opportunity to ignite economic growth, increase employment, and enhance value generation.
Currently, a certain scaremongering seems to reign supreme. Which is counterproductive. It’s fostering a repetitive conversation and a certain stagnation.
AI doesn’t have to be some incomprehensible, uncontrollable wave bearing down on us. I argue there is an answer.
When you get down to the details, there’s much that can be understood and much that can be done.
AI represents neither the end of the world nor the answer to everything. And once we acknowledge that, we can get to work.
About the Author:
Simon Asplen-Taylor is the Founder and CEO of the Data, Analytics, and AI advisory firm DataTick. He is also a bestselling author, having written the Amazon #1 seller Data and Analytics Strategy for Business. A number of firms are using the book to guide their data capabilities.
Asplen-Taylor has been named ‘Most Influential London CEO in Data’ and ‘Europe’s Most Influential Data Leader’. He has led some of the largest data-driven transformations in Europe and served as CDO for several FTSE firms.
For over 30 years he has led the data capabilities at organizations such as Bupa, IBM, UBS, and Bank of America Merrill Lynch. Most recently, he led the data transformation of the Lloyd’s of London insurance market.
Asplen-Taylor has a respected record of transforming businesses using Data, Analytics, and AI while delivering significant upside. He also specializes in acting as an advisor and coach to Executives, Boards, and Data Leaders.