Apple Hired Dozens of AI Experts From Google for a Secretive Zurich Research Lab

Apple has poached dozens of artificial intelligence experts from Google and created a “secretive European laboratory” in Zurich to house a new team of staff tasked with building new AI models and products, according to a paywalled Financial Times report.

Based on an analysis of LinkedIn profiles conducted by the FT, Apple has recruited at least 36 specialists from Google since 2018, when it poached John Giannandrea to be its top AI executive.

Apple’s main AI team works out of California and Seattle, but the company has recently expanded its offices dedicated to AI work in Zurich, Switzerland. Apple’s acquisitions of local AI startups FaceShift (virtual reality) and Fashwell (image recognition) are believed to have influenced its decision to build a secretive research lab, known as the “Vision Lab,” in the city.

According to the report, employees based in the lab have been involved in Apple’s research into the underlying technology that powers OpenAI’s ChatGPT chatbot and similar products based on large language models (LLMs). The focus has been on designing more advanced AI models that incorporate text and visual inputs to produce responses to queries.

The report suggests that Apple’s recent work on LLMs is a natural outgrowth of the company’s work on Siri over the last decade:

The company has long been aware of the potential of “neural networks” — a form of AI inspired by the way neurons interact in the human brain and a technology that underpins breakthrough products such as ChatGPT.

Chuck Wooters, an expert in conversational AI and LLMs who joined Apple in December 2013 and worked on Siri for almost two years, said: “During the time that I was there, one of the pushes that was happening in the Siri group was to move to a neural architecture for speech recognition. Even back then, before large language models took off, they were huge advocates of neural networks.”

Currently, Apple’s leading AI group includes notable ex-Google personnel such as Giannandrea, former head of Google Brain, which is now part of DeepMind. Samy Bengio, now senior director of AI and ML research at Apple, was also previously a leading AI scientist at Google. The same goes for Ruoming Pang, who directs Apple’s “Foundation Models” team focusing on large language models. Pang previously headed AI speech recognition research at Google.

In 2016, Apple acquired Perceptual Machines, a company that worked on generative AI-powered image detection and was founded by Ruslan Salakhutdinov of Carnegie Mellon University. Salakhutdinov is said to be a key figure in the history of neural networks; he studied at the University of Toronto under the “godfather” of the technology, Geoffrey Hinton, who left Google last year citing concerns about the dangers of generative AI.

Salakhutdinov told FT that one reason for Apple’s slow AI rollout was the tendency of language models to provide incorrect or problematic answers: “I think they are just being a little bit more cautious because they can’t release something they can’t fully control,” he said.

iOS 18 is rumored to include new generative AI features for Siri, Spotlight, Shortcuts, Apple Music, Messages, Health, Keynote, Numbers, Pages, and other apps. These features are expected to be powered by Apple’s on-device LLM, although Apple is also said to have discussed partnerships with Google, OpenAI, and Baidu.

A first look at the AI features that Apple has planned should come in just over a month, with iOS 18 set to debut at the Worldwide Developers Conference, which kicks off on June 10.