The leading AI companies have agreed to a series of safeguards as the technology increases in sophistication
Meta, OpenAI, Google, Amazon, Microsoft and others have pledged to work within a framework designed in collaboration with the US government. This is a voluntary effort, though - there are no penalties if they break the pact.
Broadly, the rules are designed to make it easier for people to spot AI-generated content - which is certainly important as the US heads into the presidential election season next year.
The companies agreed to:
In the UK, the future of encryption is being tested
The new Online Safety Bill would allow Ofcom - the UK's communications regulator - to require tech companies to scan encrypted user data for child exploitation and counter-terrorism threats. It's interesting that the bill gives this power to a regulator rather than the courts, as is common for things like search warrants and detailed data collection about an individual.
Those supporting the bill say it's needed to tackle "record levels" of child abuse hidden from view. Privacy advocates say it's a step too far, and the tech companies agree - Meta says it would pull WhatsApp from the UK, and Apple says it would pull FaceTime and iMessage. They don't want to create a backdoor to their global platforms for a single country, and broadly oppose breaking encryption.