Apple is walking a privacy tightrope with its big AI move

Inevitably, after around a year of generative AI features being stuffed into essentially every new product and service from Google, Microsoft and Samsung, Apple has followed suit with the announcement of its own AI suite and a tie-up with OpenAI’s chatbot ChatGPT.

At the keynote speech for Apple’s Worldwide Developers Conference, the company was characteristically determined to avoid the term AI as much as possible. And while the first half of the presentation was filled with AI, it was the garden-variety AI that the company has been building into its products for years. From tinting app icons, summarising web pages and categorising emails on iPhone to recognising and editing a user’s unique handwriting on iPad, these features show how often machine learning can provide a convenient and secure on-device function without necessarily identifying itself as such.

OpenAI chief executive Sam Altman, with Apple services boss Eddy Cue, at WWDC. Credit: Bloomberg

But the second half was about the overt, unmistakable, large language model AI that has taken the tech industry by storm, and which was humorously twisted here to stand for “Apple Intelligence” rather than artificial intelligence, thus giving the company licence to speak its name.

Later this year, newer Apple devices (that is, iPhone 15 Pro and this year’s iPhones, or iPads or Macs from around 2021) will have the ability to generate text and images baked into their operating systems, allowing users to rewrite or expand their messages, create custom emojis and stickers, or edit things out of their photos, in seconds. Many of these functions will happen entirely on the device, but those requiring more complex processing will be sent to Apple’s cloud servers.

Wedbush analyst Dan Ives said Apple had taken the right path in implementing AI, making its devices more useful while encouraging upgrades.

“We believe Apple’s AI strategy will leverage its golden installed base around personalisation and large language models on the phone, that should change the growth trajectory of Cupertino and spur an AI-driven iPhone upgrade cycle starting with iPhone 16,” he said.

“This was a historical day for Apple, and Cook and Co did not disappoint in our view.”

But putting aside whether the move will please analysts and investors, Apple’s clear challenge here is to take the kind of technology it has traditionally railed against (that is, the kind that requires collecting user data and sending it to the cloud) and show how it can be a force for good on its platforms. So it’s no surprise the company immediately wanted to discuss security.

“Private Cloud Compute allows Apple Intelligence to process complex user requests with groundbreaking privacy,” said Apple software engineering boss Craig Federighi at the event.

“We’ve extended iPhone’s industry-leading security to the cloud, with what we believe is the most advanced security architecture ever deployed for cloud AI at scale.”

Apple has announced a raft of changes to its iOS operating system, which runs on all iPhones.

I think it’s fair to say that sending any personal data from your device over the internet to a remote server is inherently less secure than having the data never leave the device. So we can’t simply take Apple’s word that its solution is more secure than others (after all, other web giants like Google and Amazon spend billions on secure cloud storage, too), and Federighi’s implication that Private Cloud Compute is as secure as local iPhone data storage deserves scrutiny.

That said, Apple’s setup has a number of points in its favour that separate it from other companies involved in cloud AI processing.

For starters, it designs and controls both the device the data originates from and the server itself. For example, when an iPhone 15 needs to contact a secure server to crunch a particular AI request, both systems will run on Apple silicon with features including Secure Enclave (which protects encryption keys so data can’t be intercepted) and Secure Boot (which makes sure a verified operating system is running). The devices will also be able to verify each other as secure before data is exchanged.
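To picture that handshake, here’s a minimal Swift sketch of what mutual verification could look like. Every type and name in it is hypothetical, a stand-in for Apple’s actual attestation machinery rather than a description of it.

```swift
import Foundation

// Hypothetical types sketching the mutual-verification flow described
// above; these are not Apple's actual Private Cloud Compute APIs.
struct ServerAttestation {
    let osImageHash: String  // hash of the server's verified software image
    let signature: Data      // in a real system, signed by hardware-rooted keys
}

enum AttestationError: Error {
    case unverifiedServer
}

// The device releases request data only after the server proves it is
// running software the device recognises (the Secure Boot guarantee).
// A real implementation would also verify the attestation's signature.
func send(_ payload: Data,
          to attestation: ServerAttestation,
          trustedImageHashes: Set<String>) throws {
    guard trustedImageHashes.contains(attestation.osImageHash) else {
        throw AttestationError.unverifiedServer  // refuse to send anything
    }
    // In the described design, the payload would be encrypted to keys held
    // in the server's Secure Enclave, so only that node could read it.
    print("Sending \(payload.count) bytes to a verified server")
}
```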

But even if data is difficult to intercept, what about malicious actors hacking in and stealing stored data? No server is truly safe from that. But as Apple tells it, there won’t really be any data of value to steal. The user’s device sends only the data strictly needed to answer the request; it’s only accessible to that specific server, and it’s never stored, so it disappears once the task is complete.
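That lifecycle amounts to purely in-memory, stateless processing. Here’s a toy Swift sketch of the idea, using hypothetical names; it illustrates the principle, not Apple’s server code.

```swift
import Foundation

// A sketch of the stateless handling described above: the request lives
// only in memory for this one invocation and is wiped when the task ends.
func processEphemerally(_ request: Data) -> Data {
    var working = request                                // per-request copy
    defer { working.resetBytes(in: 0..<working.count) }  // zero it on exit
    // ... run the model over `working`; nothing is written to disk ...
    return Data("answer".utf8)                           // only the result leaves
}
```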

Apple devices will ask before sharing personal data with OpenAI. Credit: Supplied

Apple has also said independent experts are welcome to examine the code it uses to run its secure servers, to verify that they’re doing what Apple claims they’re doing. You could view the Private Cloud Compute system as a moving of the goalposts for Apple, from “we won’t take your data from your device” to “we will, but we won’t look at it”. But then some of the AI tasks we’re talking about just aren’t possible on a phone, so if you’re going to send your data to someone, it might as well be the company promising not to even peek at it.

Where the most questions arise about Apple Intelligence and privacy is in its third-party integrations. Apple devices will be able to decide whether a different AI service would be best placed to accomplish a request, and at WWDC the first of these services was revealed to be OpenAI’s ChatGPT. This means iPhone, iPad and Mac users will get free access to some of the most powerful large language models around to help them rewrite text or summon custom images, but it also means sending personal data outside the Apple ecosystem.

To be clear, this is no more a privacy risk than downloading the ChatGPT app from the App Store and using it. But since the technology is going to be a core and integrated part of the operating system on Apple products, it’s worth noting that OpenAI does not have the same track record of privacy and security commitments that Apple has.

The company has been criticised for its data collection practices, both in the way it obtained information to train its models and in the way it handles data collected from users. Apple’s challenge is to provide its users with the utility of ChatGPT without selling them out to OpenAI. At the WWDC keynote, Apple obviously did not say that using ChatGPT on iPhone was less secure than sticking to its own AI capabilities, but there was a tacit admission along those lines: the iPhone will ask your permission every time before it shares information with ChatGPT.
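Mechanically, that prompt is a consent gate sitting in front of the third-party call. Here’s a rough Swift sketch of the idea, with hypothetical helper names; it mirrors the behaviour described above, not Apple’s actual implementation.

```swift
import Foundation

// A simplified consent gate: the system interposes before any data
// leaves the Apple ecosystem. All names here are hypothetical.
func handleThirdPartyRequest(_ prompt: String,
                             userApproves: (String) -> Bool) -> String? {
    guard userApproves("Send this request to ChatGPT?") else {
        return nil                   // declined: nothing reaches OpenAI
    }
    return forwardToChatGPT(prompt)  // hypothetical third-party call
}

func forwardToChatGPT(_ prompt: String) -> String {
    // Placeholder; in the described design, Apple obscures the user's
    // IP address before the request reaches OpenAI.
    return "response for: \(prompt)"
}
```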

In a news release, Apple said that it obscures your IP address before sending the information and that OpenAI “won’t store requests”. However, it remains to be seen whether OpenAI keeps any data derived from these Apple Intelligence requests. Even without IP addresses or the ability to identify individual users, the anonymised and aggregated data could be very useful for OpenAI in training its models.

Apple also mentioned that users will have the option to log in with their ChatGPT account, at which point they’ll no longer be anonymised by Apple and will be subject to OpenAI’s privacy policy. It’s not clear at this point what benefit there is to doing that, but it’s entirely possible OpenAI will announce some iPhone-specific features that only paying customers get, to encourage more sign-ups.

Ultimately, I expect Apple will give users the ability to turn off ChatGPT if they like. In the future, it will likely offer multiple AI providers the same way it offers multiple search engines now. As for cloud processing in general, I’d like to believe you’ll be able to turn it off, but the company has hit the Private Cloud Compute talking point so hard that it’s not a guarantee.


Tim Biggs is a writer covering consumer technology, gadgets and video games.
