An effective dialog is the key component of a successful interaction between a voice-only application and a user. A voice-only application interacts with the user entirely without visual cues, so the dialog flow must be intuitive and natural enough to simulate a conversation between two people. It must also give the user enough context and supporting information to understand the next step at any point in the application.
Because users of multimodal applications interact with a graphical user interface (GUI), developers do not design dialogs for them. Hands-free applications are the exception to this rule.
**Note**  Hands-free applications contain both GUI and dialog-flow components, and provide users with both verbal and visual confirmations. A dashboard navigation system in a car is an example of a hands-free application: the user speaks to the application, the application speaks back, and a visual cue appears on a map based on the user's input. Because hands-free applications are essentially voice-only applications that developers extend with multimodal functionality, the process of creating them is not explicitly covered in this documentation.
Use the following list as a suggested task order when designing a dialog for a voice-only application.
- Determine the information items that the application requires from the user.
- Determine the commands a user can speak to exit the current dialog, ask for help, or perform other tasks that are not within the scope of the current dialog.
- Model a dialog flow (which consists of one or more question and answer cycles) using Microsoft Visio, or a similar tool, to represent the branches and sequence that the application's questions and answers require.
- Determine which prompts the application will speak to the user, including those in response to commands, silence, and recognition failure.
- Determine which grammars the application will use to recognize user speech.
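The tasks above can be sketched as a single question-and-answer cycle. The following is an illustrative example only, not Speech Control or SDK code; the function names, command lists, and retry limit are assumptions made for the sketch. It shows one cycle that collects an information item, responds to global commands (help, exit), and handles silence and recognition failure with re-prompts.

```python
# Illustrative sketch (not SDK code): one question-and-answer cycle
# that handles global commands, silence, and recognition failure.

HELP_COMMANDS = {"help"}            # assumed global help command
EXIT_COMMANDS = {"exit", "cancel"}  # assumed global exit commands
MAX_RETRIES = 2                     # assumed re-prompt limit

def ask(prompt, grammar, get_input):
    """Run one question-and-answer cycle.

    prompt    -- the text the application speaks to the user
    grammar   -- the set of answers the application can recognize
    get_input -- callable returning the user's utterance ('' = silence)
    Returns the recognized answer, or None if the user exits or
    the retry limit is exceeded.
    """
    retries = 0
    while retries <= MAX_RETRIES:
        print(f"App: {prompt}")
        utterance = get_input().strip().lower()
        if utterance in EXIT_COMMANDS:          # command: leave the dialog
            return None
        if utterance in HELP_COMMANDS:          # command: ask for help
            print(f"App: You can say: {', '.join(sorted(grammar))}.")
            continue
        if not utterance:                       # silence prompt
            print("App: Sorry, I didn't hear you.")
            retries += 1
            continue
        if utterance in grammar:                # successful recognition
            return utterance
        print("App: Sorry, I didn't understand.")  # recognition failure
        retries += 1
    return None

# Example: one information item with a small grammar.
answers = iter(["help", "", "pepperoni"])
result = ask("What topping would you like?",
             {"pepperoni", "mushroom", "cheese"},
             lambda: next(answers))
print(result)  # -> pepperoni
```

In a real application, the branches in this sketch correspond to the prompts and grammars determined in the steps above, and the command handling would be shared across all dialogs rather than written per question.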
| To | See |
|---|---|
| Get more information on designing dialogs. | Modeling Question and Answer Cycles |
| Get more information on creating dialogs. | Creating and Configuring Question and Answer Dialogs |
| Get more information on enabling speech recognition. | Providing Access to Speech Recognition |
| Get more information on testing speech and telephony input. | Specifying and Testing User Input |
| Get more information on prompts and prompt functions. | Speaking to Users |
| Get more information on using event handlers. | Handling Client-side Events |
| Get more information on enabling user commands. | Providing Options for Users |
| Get more information on specifying the activation order of Speech Controls. | Specifying the Evaluated Activation Order of Controls |
| Get more information on setting default property values. | Specifying Default Speech Control Properties |