‘Safety-first’ approach to AI could stifle innovation, ministers warned

Control over artificial intelligence is at risk of being concentrated in the hands of a small group of big technology companies, which would leave the UK lagging in the race to dominate the growing market, peers have warned.

The House of Lords communications and digital committee has found that regulators were not “striking the right balance between innovation and risk”, instead prioritising safety over opportunity.

Baroness Stowell of Beeston, the committee’s chairwoman, said: “We can’t afford our USP [unique selling point] to be all about safety. That would get exhausted quite quickly if we aren’t also at the vanguard of developing the technology.”

The report is a blow to the UK government, which hosted the inaugural AI Safety Summit at Bletchley Park in November, bringing together global leaders in the field, and has positioned itself as a world leader in mitigating the risks of the new technology.

The report also recommended that Britain seriously consider taking on the tech giants by creating its own “sovereign” large language model to benefit the public sector, researchers and industry.

Large language models — including OpenAI’s ChatGPT, Google’s PaLM and Meta’s Llama — are AIs that have been trained on vast amounts of information scraped from the internet. They are behind technologies such as chatbots, which can generate human-like answers in response to prompts.

“There is a real risk that we get into the same position with this technology as we have done in the past, such as where you’ve got only two or three businesses operating the cloud and one dominating search,” Stowell said. “This technology is going to be more powerful than any that’s gone before, so we’ve got to ensure that we work to avoid that.”

The peers said that the UK’s limited computational capacity was also a concern. Although the government has put more than £1 billion into AI infrastructure, that sum is a fraction of the investment coming from the commercial sector.

The committee, which includes Lord Hall of Birkenhead, former director-general of the BBC, and Baroness Wheatcroft, former editor of The Sunday Telegraph, took evidence from contributors including OpenAI, Ofcom, Meta and Microsoft.

The peers also warned ministers of the need to support copyright holders in the face of tech companies using their work to train AI models.

Their report found that some tech firms were “using copyrighted material without permission, reaping vast financial rewards” and that the government had a duty to act. “The point of copyright is to reward creators for their efforts, prevent others from using works without permission and incentivise innovation. The current legal framework is failing to ensure these outcomes occur,” it said.

Discussions between technology companies and the creative industries, convened by the UK’s Intellectual Property Office, recently broke down, and it is understood that the issue is now being examined by the Department for Science, Innovation and Technology (DSIT). At the same time, litigation is building: The New York Times sued Microsoft and OpenAI in December, while Getty Images is embroiled in a case against Stability AI.

The government plans to delegate regulation of the technology to different watchdogs, but the report found that many, such as the Equality and Human Rights Commission, were not prepared for the responsibility. Ministers are expected to publish their response to a consultation on this approach imminently.

A spokeswoman from DSIT said that it “did not accept” the report’s main findings. “The UK is a clear leader in AI research and development and as a government we are already backing AI’s boundless potential to improve lives, pouring millions of pounds into rolling out solutions that will transform healthcare, education and business growth,” the spokeswoman said.

“The future of AI is safe AI. It is only by addressing the risks of today and tomorrow that we can harness its incredible opportunities and attract even more of the jobs and investment that will come from this new wave of technology.

“That’s why we have spent more than any other government on safety research through the AI Safety Institute and are promoting a pro-innovation approach to AI regulation.”
