Building an AI superpower: Does the UK stand a chance?



In the race for geopolitical, economic and digital supremacy in artificial intelligence (AI), China and the US undoubtedly hold the lead, with the UK in tow alongside Japan, South Korea, Germany and India.

The global leaders each boast greater wealth for investment, wider talent pools, and broader access to public and private sector innovation – advantages the UK, perhaps, cannot match, especially post-Brexit. While the UK punches above its weight on technology, its status has been slipping and it cannot afford to fall further behind.

When the government launched its ten-year National AI Strategy in September, the digital secretary Nadine Dorries hailed the roadmap’s “vision”. These plans, added the business secretary Kwasi Kwarteng, would “translate the tremendous potential of AI into better growth, prosperity and social benefits to the UK”, and “lead the charge in applying AI to the greatest challenges of the 21st century”.

Indeed, serving as the birthplace of Alan Turing, Ada Lovelace and DeepMind, the UK has – at least on paper – a heritage rich enough to realise its full potential. The document, however, also serves as a reality-check, acknowledging the UK ranks third in the world on several fronts including AI publication citations per capita and the total number of AI companies. The National AI Strategy, the government believes, is the crucial next step in turning the tide. Experts, however, are less enthusiastic and dispute whether it goes far enough in key areas such as investment and regulation.

The three pillars for AI success

AI is a “general-purpose technology”, according to the strategy, akin to James Watt’s 1776 steam engine. Spanning the next ten years, the government’s strategy focuses on the three broad pillars of investment, economic transition, and governance. These unite under the common aim of making the UK the “best place to live and work with AI”, with “clear rules” and “applied ethical principles”.

  • Investment: The government’s proposals purport to guarantee scientific breakthroughs, growth in the UK supplier base and a boost to workforce diversity.
  • Economic transition: Alongside these sit plans to render the economy AI-ready, which involve increasing AI exports, ensuring greater public value-for-money, and broadening AI adoption across businesses and regions.
  • Governance: Finally, the plan provides for boosting public trust in AI and raising the level of responsible innovation to maintain the UK’s position. It also identifies many opportunities, as well as moral, social and economic challenges – not least around ethics.

Proposed future actions include launching a new National AI Research and Innovation Programme to align funding programmes across the UKRI Research Councils and Innovate UK. This sits alongside finding ways to ensure AI policy supports the government’s ambition to secure a strategic advantage through science and technology. Finally, the document proposes publishing, in early 2022, a white paper setting out a pro-innovation national position on governing and regulating AI.

This white paper can put the UK on a path to being an AI superpower “fit for the coming decade”, Tabitha Goldstaub, chair of the UK’s AI Council, which wrote the independent AI Roadmap used to inform the strategy, tells IT Pro. “Further actions will also be required,” she acknowledges, however. “Continued dialogue with the AI community, the right level of funding and mechanisms to monitor and assess progress will be crucial.”

It’s a numbers game

The National AI Strategy notes the government has invested more than £2.3 billion in AI initiatives since 2014. Experts who’ve studied it, however, say its promises on future funding don’t go far enough. The future investment streams cited are rather opaque, given the strategy was published before October’s Budget. In that Budget, chancellor Rishi Sunak did commit £550 million for a “skills revolution” to quadruple the places available on AI and security boot camps.

Roughly three times the levels of investment discussed are actually needed, however, according to David Shrier, AI futurist and professor of practice (AI & Innovation) at Imperial College. “As an example,” he explains, “the new budget published in October allocated £23 million for AI and data science training – £100 million would be much better. Remember that the UK is competing on a global stage.”

The government’s £2.3 billion of investment since 2014 is dwarfed, Shrier says, by China’s contribution last year, alone. This stood at a staggering figure of £52 billion, with private sector companies ploughing in billions more.

“The proof of the pudding is in the eating,” says Peter van der Putten, director of decisioning and AI solutions at Pegasystems and assistant professor of AI and creative research at Leiden University. “So, it will be interesting to revisit the action plan in six to 12 months and see how many of the actions have been ticked off. One should put one’s money where one’s mouth is, so these plans will need to be accompanied by proper levels of investment, and hard commitments on these investments are still lacking.”

Missing a trick on ethics and standards

Although the UK’s research budget can’t compete pound-for-pound on a per capita basis with the United States or China, nor “probably” the EU, Labour MP Darren Jones points out that we “already punch above our weight” in the global AI arena.

“Our AI strategy was, correctly, very innovation-driven,” Jones, who is also the chair of Parliament’s Business, Energy and Industrial Strategy select committee, tells IT Pro. “It looked very much at R&D, research collaborations, innovation and type development, which, of course, is crucial because it’s a new general-purpose technology.”


He also feels the UK must find its own distinct AI role in the world, however, rather than duplicating or competing directly with researchers in the top two nations. He, and others, highlight AI ethics and international standards as a route towards adding value, although Jones acknowledges the “elephant in the room”, adding: “An ethical values-based framework in authoritarian regimes compared to democratic countries is fundamentally different.”

Despite these difficulties, he affirms the UK mustn’t “drop this idea we have something to say, and offer to the world, around ethics, regulation and standards”. He does, though, question whether the strategy’s focus on these elements is as strong as previous expressions of intent.

“In the past, the UK has said to the world: ‘We are distinctive, not just because we have great universities and innovation capacity, but because in a robust democratic society with the rule of law, we can drive the agenda around what ethical or values-based AI means’,” he explains. “There’s been a lot of debate about that over the previous years, but that wasn’t actually featured, I don’t think, in a very substantive way in the AI strategy.”

Jones also wonders whether the UK may be following a more deregulatory approach to AI, which he suggests may undermine what the UK had previously said was one of its distinctive features. “So,” he adds, “I’m a bit concerned about the incoherence there about what we’ve said in the past and what we’ve said we’re going to do in the future.”

Emma Wright, legal counsel for the Institute of AI, is also anxious the UK may have missed an opportunity to play a greater global convening role, despite the potential for disagreements between countries on common themes and standards. “The benefit of this to the UK would be we could step up a level in contending with the US and China in leading the way in AI,” she says, “likely attracting more of the international academics and researchers, which in itself would turbocharge our industrial strategy.”

Regulations, protections and fostering talent

Making progress on implementing this strategy is now critical, experts argue. Shrier, for instance, says the government must invest in universities as “hotbeds of AI innovation”, while Goldstaub wants to ensure the strategy delivers on promises of data and AI literacy for all young people. This would “foster a nation of engaged, informed and empowered consumers of the technology”, she says, and aid the battle against companies with biased or unethical AI technologies. It would also ensure the UK vastly expands its talent pool.

AI, meanwhile, must be treated as a strategic resource, according to Natalie Cramp, CEO of data science firm Profusion. That would mean stronger M&A rules and protections to ensure companies founded in the UK avoid foreign takeovers, such as Google’s contentious acquisition of AI startup DeepMind in 2014. Saar Yoskovitz, co-founder of US-based AI predictive maintenance unicorn Augury, on the other hand, wants greater emphasis placed on government funding to support VCs, which would allow the UK to establish a base of AI unicorns.

He suggests that establishing a “unified AI task force for the UK” would offer access to resources and tools, raise wider awareness, demonstrate use cases for industry, and show the need for AI standards. The UK government has rightly identified the latter in its strategy, Yoskovitz concedes, but he warns that clearer commitments on funding and accessibility for businesses are needed if the UK is to develop into a “true AI superpower”.

Darktrace counts itself among the UK’s AI unicorn success stories. Dave Palmer, its chief product officer, is positive about recent Budget commitments to increase core funding for UK universities and research institutions, as well as overall spending on R&D. He isn’t as bullish on the AI commercialisation process, however. “While the Strategy sets out a programme of measures to stimulate the commercialisation process in broad terms,” he adds, “the sector will be watching for more detail on how these will be executed, what the concrete objectives of the programme are, and how these will be measured.”

Jonathan Weinberg is a freelance journalist and writer who specialises in technology and business, with a particular interest in the social and economic impact on the future of work and wider society. His passion is for telling stories that show how technology and digital improves our lives for the better, while keeping one eye on the emerging security and privacy dangers. A former national newspaper technology, gadgets and gaming editor for a decade, Jonathan has been bylined in national, consumer and trade publications across print and online, in the UK and the US.