Comment: How to balance risk with innovation in the age of AI regulation

For anyone dragging their heels on the topic, the message is clear – AI is already here, says Nick Sewell, director of Digital Services, UK at Expleo.


AI is finding its way into our everyday lives at a startling rate. Not since the World Wide Web went live in 1991 have we seen such a technology gold rush.

AI’s 2023 breakthrough moment piqued the interest not only of the business world but of legislators too. The EU was first over the line, adopting a landmark law in March this year and offering some reassurance for those concerned about safeguards and consumer protections.

And let’s get real: with its mind-blowing reach and dazzling capabilities, AI was never going to continue unregulated. So what does the rapidly evolving regulatory landscape mean for businesses looking to balance AI-led innovation plans with multiple legislative requirements? And how can effective plans be put in place that keep organisations one step ahead of regulators and competitors alike?

Legislating for risk

What started in the EU is likely to be the tip of a legislative iceberg. To date, the UK has moved to put AI guidelines in place, but legislation is sure to follow. And with the government recently confirming close collaboration with the US on safety evaluation, research and guidance for AI safety, we can expect further layers of complexity for global operators to navigate.

Over in the EU, things have moved a lot faster. A landmark bill has already passed into law, bringing immediate impacts for businesses: fines for non-compliance reach up to €35m or 7% of global annual turnover for the most serious breaches. That underscores the stakes at play, and demonstrates both the opportunities and the threats AI presents.

Understanding the penalties

The EU AI Act classifies AI systems into four categories: unacceptable risk, high risk, limited risk and minimal risk.

Things start to get interesting in the high-risk category, where examples include certain safety-critical aerospace and automotive systems, medical devices and even children’s talking toys.

AI systems in this category are deemed to have the potential to affect safety and fundamental rights, and therefore require thorough risk management, robust data governance, human oversight and transparency for safe and responsible development.

The unacceptable risk category takes this one step further: it covers AI systems that are prohibited outright because of inherent threats to human rights and safety – think social scoring by governments or manipulative AI-driven advertising.

Coming clean on transparency

Companies operating in the EU can no longer use generative AI models on the down-low to inform work outputs. The Act requires companies to clearly label content and materials as ‘generated by AI’ – covering everything from marketing videos and product prototypes to training materials and financial reporting. In addition, whatever is produced remains subject to copyright law.
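
The Act does not prescribe what that label should look like, so the format is up to each organisation. As a minimal sketch – with every name here hypothetical – a provenance record can travel with each AI-generated asset so the disclosure is never separated from the content:

    from dataclasses import dataclass, field
    from datetime import datetime, timezone

    @dataclass
    class GeneratedAsset:
        """An output carrying an explicit 'generated by AI' label."""
        content: str
        model_name: str  # which generative model produced the output (hypothetical name)
        created_at: str = field(
            default_factory=lambda: datetime.now(timezone.utc).isoformat()
        )

        def disclosure_notice(self) -> str:
            # Human-readable label to publish alongside the material
            return f"Generated by AI ({self.model_name}) on {self.created_at}"

    # Example: labelling a marketing script before it is circulated
    asset = GeneratedAsset(content="Q3 product demo script", model_name="example-llm")
    print(asset.disclosure_notice())

Even something this lightweight means the ‘who made this, and how’ question can be answered whenever a label is challenged.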

How to stay one step ahead

Staying on top of the changes is about acting now – putting a plan in place that can be quickly embedded into existing processes and quality management practices throughout the organisation. Any plan should include taking steps to (see the sketch after this list):

  •  Define the risk level of each system or application
  •  Ensure transparency with AI-generated content, data, code and deliverables
  •  Build in traceability to show how information, decisions or guidance is found or formed

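To make those steps concrete, here is a minimal sketch in Python – the structure and names are assumptions, not anything the Act mandates – covering a risk tier per system, a disclosure flag for AI-generated outputs and a traceability trail:

    from enum import Enum

    class RiskTier(Enum):
        # The four tiers defined by the EU AI Act
        UNACCEPTABLE = "unacceptable"
        HIGH = "high"
        LIMITED = "limited"
        MINIMAL = "minimal"

    def assess_system(name: str, tier: RiskTier, ai_generated_outputs: bool) -> dict:
        """Record a first-pass assessment of one system or application.

        The schema is illustrative: the Act requires that risk is managed,
        disclosures are made and decisions are traceable, not that any
        particular data structure is used.
        """
        if tier is RiskTier.UNACCEPTABLE:
            raise ValueError(f"{name}: prohibited use case, do not deploy")
        return {
            "system": name,
            "risk_tier": tier.value,
            # Step 2: flag outputs that must carry an 'AI-generated' label
            "requires_disclosure": ai_generated_outputs,
            # Step 3: an audit trail showing how decisions or guidance were formed
            "evidence": [],
        }

    # Recruitment screening is an example of a high-risk use under the Act
    inventory = [
        assess_system("marketing-copy-generator", RiskTier.LIMITED, True),
        assess_system("cv-screening-tool", RiskTier.HIGH, False),
    ]
    inventory[1]["evidence"].append("assessment recorded: model, data sources and prompts archived")

Even a lightweight inventory like this gives auditors a starting point and makes gaps visible before a regulator does.
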
Assessing operations in this way may seem arduous, but it is essential to ensuring conformity with the EU legislation. And by building AI compliance into applications, systems and internal policies, companies will be able to stay ahead of fast-moving requirements.

Integrating AI into your strategy

The legislation directs us towards a more ethical and transparent approach, but it’s not yet clear how this will be policed – something we can expect to see play out in the coming years as the first precedents for non-compliance are set.

While we may not yet know how the law will be applied, it’s important to remind ourselves of the exciting possibilities AI presents. It is equally important to find ways to embed innovation into AI processes alongside risk management and compliance – and I believe these two seemingly competing priorities can co-exist. By assessing the opportunities AI provides, and by adopting a business transformation approach when setting up new systems, we can harness the benefits AI offers for code production, code reviews and automated testing.

All signs currently point towards AI being integral to business operations for years to come. Tempered with the right legislation, we can manage the risks, keep humans at the heart of decision-making and realise the opportunities it presents – all in one neat package.

Nick Sewell, director of Digital Services, UK at Expleo.
