Siri co-founders Dag Kittlaus and Adam Cheyer have offered their first public demonstration of Viv, the much-anticipated voice assistant that promises an advanced level of human-computer interaction. The demo took place yesterday at TechCrunch's Disrupt NY event, where Viv's creators wasted no time showing what the new AI bot is capable of.
Kittlaus began by asking Viv what the weather was like today, but then continued the conversation with increasingly complicated queries, like "Was it raining in Seattle three Thursdays ago?" and even "Will it be warmer than 70 degrees near the Golden Gate Bridge after 5pm the day after tomorrow?" Viv had no problems answering the stacked requests, showing a clear awareness of context.
Viv's enhanced contextual awareness comes from what Kittlaus called "dynamic program generation," a "new science breakthrough" that enables Viv to understand the user's intent and code its own responses on the fly.
The feature is central to Kittlaus and Cheyer's hopes for a thriving third-party ecosystem, since developers can integrate Viv directly into their apps. The idea is that developers will take advantage of the platform's open-ended nature to build new and increasingly complex experiences quickly, instead of having to hard-code every response specific to their apps' interactive features.
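Neither Kittlaus nor Cheyer has published Viv's internals, but the general concept is easy to sketch: rather than hard-coding a handler for every possible query, the platform composes a small program out of capabilities that developers register. The Python below is a purely illustrative toy under that assumption; every function name, the intent structure, and the stubbed data are hypothetical, not Viv's actual API.

```python
# Hypothetical sketch of "dynamic program generation": the assistant
# chains developer-registered capabilities into a plan at query time,
# instead of matching the query against a canned, hard-coded response.
# All names and data here are illustrative; Viv's internals are not public.

from typing import Callable, Dict, Tuple

# Registry of capabilities that third-party developers expose.
CAPABILITIES: Dict[str, Callable] = {}

def capability(name: str):
    """Register a function so the planner can chain it into a program."""
    def register(fn: Callable) -> Callable:
        CAPABILITIES[name] = fn
        return fn
    return register

@capability("geocode")
def geocode(place: str) -> Tuple[float, float]:
    # A real integration would call a mapping service here.
    return {"golden gate bridge": (37.8199, -122.4783)}.get(place.lower(), (0.0, 0.0))

@capability("forecast")
def forecast(coords: Tuple[float, float], day: str, hour: int) -> int:
    # Stubbed weather lookup; a real integration would query a weather API.
    return 72

@capability("compare")
def compare(value: int, threshold: int) -> bool:
    return value > threshold

def answer(intent: dict) -> bool:
    """Compose capabilities on the fly to satisfy a parsed intent."""
    coords = CAPABILITIES["geocode"](intent["place"])
    temp = CAPABILITIES["forecast"](coords, intent["day"], intent["hour"])
    return CAPABILITIES["compare"](temp, intent["threshold"])

# "Will it be warmer than 70 degrees near the Golden Gate Bridge
# after 5pm the day after tomorrow?" parsed into a structured intent:
intent = {"place": "Golden Gate Bridge", "day": "day after tomorrow",
          "hour": 17, "threshold": 70}
print(answer(intent))  # True, given the stubbed forecast of 72
```

The point of the sketch is that the "compare" step was never written specifically for weather queries; a developer who registers a new capability gets it chained into future plans without anticipating every phrasing.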
Kittlaus and Cheyer say Viv is closer to their original vision for Siri, the virtual assistant they created in 2007 that is now built into all of Apple's iOS devices. Google and Facebook have already made offers to purchase the AI bot, but it is not clear whether Kittlaus and Cheyer plan to sell the technology; early integrations are expected "later this year."
Top Rated Comments
But the fact that Apple bought Siri and actually had these guys working for them, yet isn't the one making this announcement, is incredible.
This presentation should have been a part of WWDC, built into iOS 10, OS X, watchOS 3, and the newest tvOS. I guess we'll have to wait until June 13th, but man, this will be hard to beat.
Just set the movie recording source to your device while your device is connected, then you are all set.
:D
Even if the speech-to-text wasn't great, and you had to be really clear when you spoke to it, why isn't the Text-to-Action good? Aren't there people at Apple looking through Siri queries, and able to spend all day every day adding new functionality?
It debuted in 2011; surely, even if someone had to write out by hand every command a user might say, they could have. "Siri can't do this, but it would be cool if it could. How many ways are there to say it?"
Isn't someone ticking off a list, ensuring that every function of the device can also be accomplished using Siri? Why not add feedback to the Siri UI: "Did Siri do what you wanted?" with a Yes / No. Then they could look through all the "No"s and work out what happened and how to improve the service.