Regulation

Simmons global AI lead at GITEX: ‘Regulation can give us the trust we need in AI’

Minesh Tanna moderated a session on tech, crime, and law featuring the EU Parliament's digital policy advisor and Europol's AI chief.
Monday's panel on the AI Stage at GITEX 2024. From left to right: Didier Jacobs, Kai Zenner, and Minesh Tanna. Photo credit: Aishah Hussain.

Regulation can stifle innovation, but it can also provide the trust we need in artificial intelligence (AI), Simmons & Simmons’ global AI lead and partner Minesh Tanna said during a session focused on the intersection of tech, crime, and law at GITEX Global 2024.

“I think the reality today is that in order to use all of the technology we have around us safely and responsibly, there is an acknowledgement globally that we do need some regulation of AI,” said Tanna, who moderated the discussion on day one of the week-long event, which featured the EU Parliament’s office head and digital policy advisor, Kai Zenner, and Europol’s ICT head and AI chief, Didier Jacobs. “What that regulation looks like is not yet the subject of consensus—different jurisdictions are taking different approaches.”

Offering the EU perspective, Zenner, who played a prominent role in the development of the EU AI Act, the world’s first comprehensive AI regulation, said the EU has been “very active” with regard to digital legislation over the last five years. “We are currently at around 160 digital laws in all different areas, sometimes overlapping and contradicting.”

“Not all of them are implemented or entered into force,” he said. “I think right now we have around 88 laws that are implemented, and the others are still being discussed or not yet published in the EU Official Journal.”

“It’s a lot,” he said, continuing: “The biggest problem we have in Europe right now is that we were so active in terms of new laws that we were not really good at coordinating or streamlining, so we have a lot of different frameworks where we don’t really know how they work together.”

On whether the EU overregulates technology to the extent of stifling innovation, Zenner said: “The EU at the very beginning had this idea to create rubrics to establish a room of trustworthy technologies, also where legal certainty is very high and companies know how to build technology, and with that we wanted to trigger another ‘Brussels Effect’, similar to the GDPR. We wanted to create laws that are so good they are copied and pasted around the world. This was the intention.”

“Right now there is so much legislation out there, so poorly coordinated and streamlined, that most companies that really want to follow the rules do not really have the chance—they struggle—and this is indeed why not only non-European companies think ‘well, better to avoid the EU’, but now our own companies have this mindset.”

“We hear a lot of talk about going to the US, the UK, or Switzerland,” he said. “Even though I have now worked on digital policy making for quite a long time, I think those companies probably have a point, at least for the next two or three years. If we use those two or three years very well and streamline all our laws, then I think we have a chance to achieve our original goal.”

GITEX is underway this week, from October 14-18, at the Dubai World Trade Centre. This year’s theme is ‘global collaboration to forge a future AI economy’.

On whether jurisdictions can achieve international cooperation and harmonisation on the regulation of technology, Jacobs stressed that increased collaboration and cooperation are needed to overcome challenges and solve international crime. “Crime knows no borders, and the criminals more or less use the same tools for ransomware, hacking, extortion, etc.,” he said.

He pointed to “friction” in the exchange of data between jurisdictions, namely between the EU and the US. “You need to have multidisciplinary teams to assemble taskforces to solve international crime,” said Jacobs, adding: “Collaboration is key to overcoming these friction points.”

Reinforcing a positive message on the regulation of technology, Tanna, who is based in Simmons & Simmons’ London office, said: “I arrived in Dubai on an aeroplane. I have no idea how aeroplanes fly and stay in the air. Some of you may know, but I expect many of you don’t know how aeroplanes stay in the air. But we trust aeroplanes, don’t we? We get on them all the time, and we trust that they will fly safely. Why is that? AI is similar to aeroplanes. We can’t always explain how AI arrives at its decisions; it operates in a black box that is difficult to understand. We trust aeroplanes for two reasons: there is a track record of safety, and there is regulation. You know when you get on that plane that the pilot has had to undergo extensive qualifications, and you know that the aeroplane has had to pass certain safety standards. A similar analogy can be applied to AI.”

“We don’t yet have the track record of safety, because AI is so new,” he said. “Regulation can give us the trust that we need in AI.”

Continuing, Tanna, who chairs multiple AI groups, said: “So don’t necessarily think of regulation as a bad thing. Think of it as a way to show that AI, some of the products that you will be involved with, are trustworthy. That they are safe to use. You can use regulation for commercial advantages, and whilst there may be valid concerns about the EU overregulating and stifling innovation, I think, on the plus side, if you are complying with the EU regulation, for example, you can go out to the rest of the world and say, ‘we have safe, trustworthy AI, and we know that because we met a very onerous regulation’.”

Aishah Hussain

Aishah Hussain is the Editor of Law Middle East, based in Dubai. Got a story or tip? Email: aishah.hussain@itp.com