Talk in English (US) at jsDay 2018
Track Name: Backendy track
View Slides: https://speakerdeck.com/xiehan/jsday-finding-your-voice-building-screenless-interfaces-with-node-dot-js
Short URL: https://joind.in/talk/dce8e
"OK Google, ask Alexa to check if Siri can recommend Cortana a movie to watch with Bixby." Voice assistants are one of the biggest emerging technologies in 2018, and every company wants in. At NPR, the largest public radio producer in the United States, our interest in voice-based interfaces is obvious; they're a natural fit for our content, which has always taken an audio-first approach. But given that it's still such a new field, the development process is anything but straightforward: how do you even prototype a screenless interface? How does the Alexa platform differ from, say, Google Assistant, and can you develop one app for both? What's a Lambda, and do you have to use it? In this talk, we'll run through these confusing, high-level questions, and then go over some real-world code samples for a Node.js API that powers a voice-based UI. For demo purposes, we'll use Amazon Alexa, but we'll also discuss strategies we've used to develop an infrastructure that can support other voice assistants once they are further along. Finally, we'll discuss the mistakes we made, the things we wish we'd done differently, and the things we wished we'd known up front as we set out on our journey to build a next-generation voice UI framework in-house at NPR.
Comments
Love it!
A repository with a small example would have made a great takeaway :)
(To actually see that the code is not complicated.)
Thanks for this, good overview of the current state of tech for voice UIs. Would have been nice to see a simple example of a request and reply, and of the voice triggers that tell Alexa et al. which skill to use.