Apple Reveals More of Its Self-Driving Technology
Apple has released results from several of the company's artificial intelligence (AI) projects.
Ruslan Salakhutdinov, Apple's director of artificial intelligence research, addressed roughly 200 AI experts who had signed up for a free lunch and a peek at how Apple uses machine learning, a technique for analyzing large stockpiles of data.
Each of the projects involved giving software the capabilities needed for self-driving cars. The Apple exec discussed projects that use data from cameras and other sensors to spot cars and pedestrians on urban streets, navigate unfamiliar spaces, and build detailed 3-D maps of cities.
These concepts and project aims offer insight into Apple's secretive efforts around autonomous-vehicle technology. The company recently received a permit from the California DMV to test self-driving vehicles.
The scale and scope of Apple's autonomous program remain unclear. Salakhutdinov showed data from one project, previously disclosed in a research paper posted online last month, that trained software to identify pedestrians and cyclists using lidar, the 3-D laser scanners used on most autonomous vehicles.
Other projects discussed by the Apple executive do not appear to have been previously disclosed. One piece of software being developed can identify cars, pedestrians, and the drivable parts of the road in images from one or more cameras mounted on a vehicle.
Salakhutdinov showed images demonstrating how the system performed when raindrops spattered the windscreen, and how it could infer the position of pedestrians on the sidewalk when they were partially screened by parked cars.
He cited recent improvements in machine learning for such tasks. "If you asked me five years ago, I would be very skeptical of saying 'Yes you could do that,'" he said.
Another project involves software that builds a map of its surroundings while keeping track of its own position as it moves through the world, a technique called SLAM, for simultaneous localization and mapping. SLAM can be used on robots and autonomous vehicles, and also has applications in map building and augmented reality.
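The idea behind SLAM can be illustrated with a deliberately tiny sketch (this is a generic toy example, not Apple's system): a vehicle moving along a line logs noisy odometry and range readings to fixed landmarks, then fuses repeated sightings of each landmark into a single map estimate.

```python
def simple_slam_1d(odometry, observations):
    """Toy 1-D mapping sketch.

    odometry: list of step lengths, one per move.
    observations: list of dicts {landmark_id: distance_ahead},
    one dict per pose (index 0 = starting pose, before any move).
    Returns (poses, landmarks): the dead-reckoned pose track and
    landmark positions averaged over all sightings.
    """
    # Dead-reckon the pose track from odometry.
    poses = [0.0]
    for step in odometry:
        poses.append(poses[-1] + step)

    # Each sighting gives an absolute landmark estimate:
    # current pose + measured distance ahead.
    sightings = {}
    for pose, obs in zip(poses, observations):
        for lid, dist in obs.items():
            sightings.setdefault(lid, []).append(pose + dist)

    # Fuse repeated sightings by averaging.
    landmarks = {lid: sum(v) / len(v) for lid, v in sightings.items()}
    return poses, landmarks


poses, landmarks = simple_slam_1d(
    odometry=[1.0, 1.0],
    observations=[{"tree": 5.0}, {"tree": 4.0}, {"tree": 3.0}],
)
print(poses)       # [0.0, 1.0, 2.0]
print(landmarks)   # {'tree': 5.0}
```

Real SLAM systems close the loop the other way too, correcting the pose track against the map (typically with filters or graph optimization); this sketch shows only the mapping half.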
Another Apple project uses data collected by sensor-laden cars to generate rich 3-D maps with features like traffic lights and road markings. Most prototype autonomous vehicles need such detailed digital maps to operate.
The Apple event was part of a machine-learning conference called NIPS, which nearly 8,000 people attended. Recruiters also showed up in force, including Elon Musk, hoping to lure machine-learning engineers, who are highly prized and in short supply.
The AI talent shortage was one of the main reasons for the Apple event, which attracted people from top universities such as MIT and Stanford and from companies including Alphabet and Facebook. The event included talks from engineers about how machine learning is used inside Apple products such as the Siri personal assistant.
Carlos Guestrin, Apple's director of machine learning and a professor at the University of Washington, spoke about the powerful computer systems and large datasets available to machine-learning engineers who join the company. He won applause by announcing that Apple is open-sourcing software that helps app developers use machine learning, first developed at his startup Turi, which Apple acquired last summer.
Apple is being forced to relax its famous secrecy as it competes for talent with rivals such as Google. A company spokesman pointed to five academic machine-learning papers released since Salakhutdinov joined the company, but said that Apple doesn't maintain a count of such publications.
The company has also started sharing some of its work on a technical blog branded as the Apple Machine Learning Journal. By contrast, Alphabet's AI research groups have contributed to more than 60 accepted papers at NIPS this week alone. To keep pace with, or get ahead of, its competitors in AI, Apple may need to share more.