Apple Intelligence is the banner under which Apple brings together all of its generative AI features, and the company is playing on both fronts: on the one hand, local processing on the device for the simplest queries; on the other, a pass through secure servers, with an emphasis on confidentiality, for the most complex requests.
For the occasion, the company is enlisting OpenAI's services: Siri will make room for ChatGPT when the assistant is caught off guard. The bot will also lend its writing skills to applications and generate images when needed. Apple will rely on GPT-4o, and requests handled by the latest version of the large language model will benefit from extensive privacy protections: IP addresses will be masked, and OpenAI will not store the requests.
OpenAI “has done things in terms of privacy that I appreciated,” Cook said in an interview with The Washington Post. “I think they’re pioneers in this area and now they have the best model.” He adds, however, that Apple is “not stuck with one person forever,” a way of saying that negotiations are continuing with other suppliers, including Google with Gemini.
While Apple is trying to offer reassurance on privacy, Tim Cook cannot give guarantees when it comes to the other great pitfall of generative AI: hallucinations, which have led Gemini, for example, to recommend putting glue on pizza or eating a rock a day.