Apple has officially entered the AI wars, announcing a series of upcoming artificial intelligence features, including an expected partnership with OpenAI that could further amplify growing privacy concerns within the tech industry.
At its Worldwide Developers Conference on Monday, the company unveiled Apple Intelligence, a suite of homegrown AI services that will ship on its devices this fall. While Apple-branded AI was the focus of Monday's keynote, the company also announced a long-awaited partnership with OpenAI, albeit in a somewhat muted fashion.
Starting later this year, Apple users will be able to access OpenAI's ChatGPT models for free, without having to create an account, and ChatGPT will be integrated with Apple's new and improved Siri feature, allowing the AI model to search the internet to quickly answer users' questions.
Apple said the ChatGPT integration will be optional: customers will be able to opt out of OpenAI's presence on their devices entirely, which could go a long way toward assuaging the privacy concerns of AI-wary users.
“I think it's a great idea,” says Maribel Lopez, an AI analyst and founder of research and strategy consulting firm Lopez Research, “because I think we're going to get to a point where people just aren't going to accept that trade-off.”
Apple's assurances come amid growing concerns about OpenAI's safety efforts. Earlier this month, The New York Times reported that a group of current and former employees went public with concerns about the company's financial motivations and commitment to developing responsible AI.
OpenAI trains ChatGPT on user interactions and information, and the generative AI model can accurately infer sensitive details about individuals from what they type online, Business Insider previously reported.
“Some people are OK with that, some aren't,” Lopez said, “but if you give them a platform and say there's no way to opt out, that can be hard.”
Lopez added that the opt-out feature on Apple devices gives customers some control in preparation for the arrival of AI.
Apple is taking a safety-first approach to its AI deployments. The company said Monday that it does not use any private or personal customer data to train the models underlying Apple Intelligence; instead, the models were trained on licensed data and publicly available information. The system also relies on Private Cloud Compute, an infrastructure designed to process AI requests at scale while keeping them private.
Meanwhile, much of Apple Intelligence's marketing is already focused on safety and privacy, with ads boasting about “an entirely new standard of privacy in AI.”
Lopez said Apple's emphasis on privacy, and its need to protect its strong brand, could explain why the company has lagged behind in AI.
“Everyone says it's delayed, but I think it took them ages to sort these things out,” she said. “Sam Altman might not get sued, but Apple definitely can.”