With renewed attempts to introduce AI regulation in the UK, what does this mean for enterprises, and should business leaders welcome it?

Throughout history, society has created and evolved regulatory policies in line with societal needs, market opportunities and business trends. But never has there been a faster-developing and higher-stakes technology than AI. All market regulation, whether financial, transport or tech focused, has been somewhat shaped by the strength and direction of political winds. Given the global nature of AI and its potential to affect every aspect of our personal lives, society and even the climate, the case for thoughtfully constructed regulation carries even greater consequence.

As with many aspects of regulation, the speed of technological evolution often outpaces the ability of regulatory bodies to identify what guardrails need to be established to maximise the benefits whilst minimising the risks. AI is proving no different.

What AI regulation already exists?

Whilst there may not be global agreement on the objectives of AI regulation and how far rules should extend, some major regulatory frameworks are already in play. The EU AI Act provides a unified set of risk-based safety and transparency rules for AI across the EU. China has also established legally binding regulations, including transparency requirements for algorithms and, from September 2025, mandatory labelling of AI-generated content.

Other countries are in the process of establishing a federal regulatory framework, such as Canada's proposed Artificial Intelligence and Data Act (AIDA). Then there is the US, which is evolving its regulatory position, with the White House recently ordering federal agencies to expand their use of AI and reverse some Biden-era safeguards.

What AI regulation does the UK have?

The UK has gone from an initial hawkish stance on AI risk to favouring a more principles-based approach that leverages existing laws and regulations.

At the 2023 AI Safety Summit at Bletchley Park, the UK government spearheaded conversations around existential threats posed by AI. But it has since moved away from adopting a centralised AI regulator. Instead, it is empowering existing regulatory bodies, such as the Bank of England, the Financial Conduct Authority, the MHRA and the Competition and Markets Authority, to look at the impact of AI on the markets they regulate. The Information Commissioner's Office has also recently issued draft guidance on using AI.

This shift in positioning was further exemplified by the renaming of the AI Safety Institute as the AI Security Institute. It was a small but symbolically significant move, indicating that concerns have shifted from algorithmic fairness and bias to defending against malicious use of AI and ensuring geopolitical resilience.

Do we need global AI policy alignment?

At a global level, there is a hodgepodge of existing and emerging AI regulations. Many governments around the world are still not sure how to frame, manage and mitigate the risks, or even which risks to manage. They are caught between wanting to leverage the power of AI and the potential economic gold rush (especially amid anaemic economic growth in many countries), and worrying about the unintended societal impact of this technology.

But is misalignment a problem? For enterprises, it creates a complex operating environment, with the associated increase in the cost of doing business across jurisdictions. At a societal level, many argue that AI regulatory alignment could either exacerbate or close global inequalities.

What does this mean for business?

All these moving parts present multinational businesses with a complex environment to navigate, making internal policies, frameworks and governance important for several reasons.

Firstly, traditional approaches, such as applying the high watermark, are not enough. This practice only works when regulations in different jurisdictions are broadly trying to achieve the same objective. But with AI, current regulations often have different objectives. For example, in China, AI regulation focuses on ensuring political and social stability, whereas in the EU, it focuses on protecting human rights and minimising bias.

Corporations therefore need to be more contextually aware when deciding how to build, deploy and use AI, staying abreast of evolving rules in each jurisdiction. They can no longer simply set their systems to meet the global high watermark and be assured of meeting their regulatory obligations.
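To make that concrete, here is a minimal sketch, in Python, of what jurisdiction-aware compliance might look like. The jurisdictions and obligation labels are illustrative placeholders, not an authoritative reading of any statute; the point is that obligations differ in kind, not just in strictness, so a single merged "high watermark" configuration cannot satisfy them all.

```python
# A minimal sketch of jurisdiction-aware compliance. The obligation labels
# below are illustrative placeholders, not a reading of any actual statute.

# Obligations differ in kind, not just strictness, so a single global
# "high watermark" superset applied everywhere misses the differences
# in what each regulator is actually asking for.
OBLIGATIONS = {
    "EU": {"risk_classification", "bias_audit", "human_oversight"},
    "CN": {"algorithm_registration", "ai_content_labelling"},
    "UK": {"sector_regulator_guidance", "data_protection_review"},
}

def applicable_obligations(deployments: set[str]) -> dict[str, set[str]]:
    """Return the obligations to satisfy per jurisdiction of deployment,
    rather than one merged ruleset applied globally."""
    return {j: OBLIGATIONS[j] for j in deployments if j in OBLIGATIONS}

if __name__ == "__main__":
    for jurisdiction, duties in applicable_obligations({"EU", "CN"}).items():
        print(jurisdiction, "->", sorted(duties))
```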

Secondly, a strong AI governance model can be a competitive differentiator. Given heightened consumer awareness of the potential risks of AI, enterprises that adopt robust AI governance will win trust from their customers, employees, partners and ecosystem. By going beyond minimum regulatory requirements to thoughtful, considered policies, a company provides reassurance and clarity.

Apple is a case in point. It built consumer trust by championing user privacy and positioning itself as a protector of data. This differentiation has contributed to Apple's strong brand value and customer loyalty.

Finally, forging a strong AI governance policy frees teams to be creative. Criticism of policies or regulations is frequently grounded in the belief that they slow innovation. But the opposite is true. It may seem counterintuitive, but greater ambiguity around the use of a technology invariably slows adoption and value realisation, because employees don't know the boundaries of what they can and can't do. They end up in analysis paralysis, impacting operating costs and speed to value.

Creating an AI governance framework

Just as AI's capabilities are evolving, so too must the enterprise's governance model. Here are some suggestions on how companies can lay strong foundations today:

  1. Start building trust by sharing your AI vision, how you intend to use it and the ethics by which you will govern its use.
  2. Next, embed your AI ethics deeply into company culture through training programmes, internal communications and reinforcement of messaging.
  3. Build a dynamic, risk-weighted governance framework that adapts to the fast-moving pace of the technology, the increasing diversity of use cases and global regulatory variation. Also include an AI incident response framework.
  4. Consider differing jurisdictional regulations when architecting your AI platforms, for example by taking a jurisdictional zoning and modular compliance approach (see the sketch after this list).
  5. Finally, invest in your people, tools and regulatory intelligence, so that the right people, with the right capabilities, stay on top of the dynamic AI regulatory landscape.
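On point 4, the sketch below shows one way a jurisdictional zoning and modular compliance approach might be structured: each zone registers its own compliance modules, so a new or changed rule means updating a module rather than rearchitecting the platform. The zone names and checks are hypothetical simplifications for illustration, not legal requirements.

```python
# An illustrative sketch of jurisdictional zoning with modular compliance.
# Zone names and checks are hypothetical placeholders, not legal advice.
from typing import Callable

ComplianceCheck = Callable[[dict], list[str]]  # a check returns a list of issues

def eu_checks(request: dict) -> list[str]:
    """Placeholder EU-zone module: flags high-risk use without human oversight."""
    issues = []
    if request.get("risk_tier") == "high" and not request.get("human_oversight"):
        issues.append("EU: high-risk use requires human oversight")
    return issues

def cn_checks(request: dict) -> list[str]:
    """Placeholder China-zone module: flags unlabelled AI-generated content."""
    issues = []
    if request.get("generates_content") and not request.get("content_labelled"):
        issues.append("CN: AI-generated content must be labelled")
    return issues

# Zone registry: compliance modules are plugged in per jurisdiction.
ZONES: dict[str, list[ComplianceCheck]] = {"EU": [eu_checks], "CN": [cn_checks]}

def review(request: dict, zone: str) -> list[str]:
    """Run every compliance module registered for the deployment zone."""
    return [issue for check in ZONES.get(zone, []) for issue in check(request)]

if __name__ == "__main__":
    req = {"risk_tier": "high", "human_oversight": False,
           "generates_content": True, "content_labelled": True}
    print(review(req, "EU"))  # ['EU: high-risk use requires human oversight']
```

Because checks are registered per zone, adding a new jurisdiction or rule is an isolated change, which keeps the platform adaptable as the regulatory landscape moves.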

For further information visit Tarralugo