Apple's Siri – Keeping it Simple
- June 12, 2012
- 0 Comments
[Update: WWDC has started, and so far no mention of third-party Siri integration. Looks like Apple went with the more conservative option of only expanding to specific third parties, e.g. Yelp. Maybe something new will be announced before the end of WWDC, or maybe it will be in another update in another year or so. But what follows is still what I would guess Apple will do when they enable third parties on Siri… ]
I thought this post on a potential Siri API was interesting. In fact, when I first saw Siri, it looked like a great first step toward inter-application communication on an iOS device (à la the Services framework on OS X).
The second half of Siri integration, Semantics, is the tricky part: something that most iOS developers have never dealt with. Semantics will attempt to capture the various ways a user can ask for something, and, more importantly, the ways Siri, in turn, can ask for more information should that be required. This means that developers will need to imagine and provide “hints” about the numerous ways a user can ask for something. Sure, machine learning can cover some of that, but at this early stage Siri will need human supervision to work seamlessly.
For example, if I ask “When does the Duke game start,” Siri will somehow have to know that this is the same question as “When does Duke play tonight,” and so on. It’s not magic. Someone has to tell Siri that those two things mean the same thing, and that someone, if it can be helped, should not be the user. It should be the developers of the ESPN app. This of course won’t be enough. Entire response trees will need to be implemented by app developers. The ESPN app will need to be smart enough to reply, “Which sport, Duke Men’s Basketball or Duke Women’s Soccer?” And so on.
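To make the idea concrete, here is a toy sketch (in Python, purely for illustration; nothing here is a real Siri API, and all the names and data are hypothetical) of the kind of "hints" an app developer might supply: several phrasings map to one canonical intent, and an ambiguous match triggers a clarifying question.

```python
# Hypothetical phrase hints an ESPN-style app might register with Siri.
# Multiple phrasings map to one canonical intent string.
PHRASE_HINTS = {
    "when does the duke game start": "game_time:duke",
    "when does duke play tonight": "game_time:duke",
}

# Teams matching the ambiguous entity "duke" in this toy data set.
TEAMS = {
    "duke": ["Duke Men's Basketball", "Duke Women's Soccer"],
}

def handle(utterance):
    """Map an utterance to an intent; ask a clarifying question if ambiguous."""
    intent = PHRASE_HINTS.get(utterance.lower().rstrip("?"))
    if intent is None:
        return "Sorry, I didn't understand that."
    action, entity = intent.split(":")
    matches = TEAMS.get(entity, [])
    if len(matches) > 1:
        # The start of a developer-supplied response tree.
        return "Which sport, " + " or ".join(matches) + "?"
    return "Looking up the " + matches[0] + " game time..."
```

The point of the sketch is the burden it reveals: someone has to write down those equivalent phrasings and the follow-up questions, and that table grows fast.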
I don’t think this speculation is on target. For one thing, Apple will want to own the semantics – it is the key differentiator between iPhone and other devices.
But from a culture and philosophy point of view, I don’t think Apple will take this approach. Consider animation. Apple doesn’t force all of the App developers to be experts in animation. Instead, it provides APIs that make animation accessible to all App developers. In the same way, I don’t expect they’ll force developers to pick up semantics and natural language processing.
I expect Apple will keep it simple:
- They’ll provide a way for apps to register with Siri for information domains or actions – and likely these registrations will require approval from Apple.
- They’ll expect developers’ Apps to receive the results of natural language processing and semantics, and to respond to requests. And they’ll probably make it pretty easy.
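The simple model above can be sketched in a few lines (again purely speculative; Apple never published such an API, and every name here is made up): apps register for a domain, and Siri hands them the already-parsed request.

```python
# A minimal, hypothetical sketch of register-and-dispatch: Siri owns the
# natural language processing, and apps only answer structured requests.

class SiriRegistry:
    def __init__(self):
        self._handlers = {}  # domain name -> handler callable

    def register(self, domain, handler):
        # In the speculated model, Apple would approve each registration.
        self._handlers[domain] = handler

    def dispatch(self, domain, parsed_request):
        # Siri has already done the NLP and semantics; the app just responds.
        handler = self._handlers.get(domain)
        if handler is None:
            return "No app handles that."
        return handler(parsed_request)

registry = SiriRegistry()
registry.register(
    "sports_schedule",
    lambda req: "The " + req["team"] + " game starts at " + req["time"] + ".",
)
```

Under this split, the hard semantics work stays on Apple's side, and the developer's job looks about as easy as registering a URL scheme.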
Of course, they might do nothing of the kind at WWDC. They might just bring a few additional apps into the Siri ecosystem. But you have to believe they’re working toward something interesting.
(And wouldn’t a little Siri-enabled BPM be interesting?)