Apple is developing its own large language model (LLM) that runs on-device to prioritize speed and privacy, Bloomberg's Mark Gurman reports.
Writing in his "Power On" newsletter, Gurman said that Apple's LLM underpins upcoming generative AI features. "All indications" apparently suggest that it will run entirely on-device, rather than via the cloud like most existing AI services.
Since its AI tools will run on-device, Apple may offer less capability in certain instances than its cloud-based rivals, but Gurman suggested that the company could "fill in the gaps" by licensing technology from Google and other AI service providers. Last month, Gurman reported that Apple was in discussions with Google to integrate its Gemini AI engine into the iPhone as part of iOS 18. The main advantages of on-device processing are quicker response times and stronger privacy than cloud-based solutions can offer.
Apple's marketing strategy for its AI technology will apparently center on how it can be useful in users' daily lives, rather than on raw power. Apple's broader AI strategy is expected to be revealed alongside previews of its major software updates at WWDC in June.
Source: Macrumors