FAQ for voice optimization (preview)

[This article is prerelease documentation and is subject to change.]

This article contains frequently asked questions related to the configuration and setup of voice agents in Copilot Studio.

Important

  • This is a preview feature.
  • Preview features aren’t meant for production use and might have restricted functionality. These features are subject to supplemental terms of use, and are available before an official release so that customers can get early access and provide feedback.

Can the agent answer grounded knowledge only, or must it also take action in systems of record?

No action is required. You can configure agents to operate purely on grounded knowledge, without taking any action in backend systems. Copilot Studio controls this behavior through knowledge and web search settings.

When "knowledge-only" agents make sense

Use this mode when the agent’s role is primarily informational:

  • Answering FAQs

  • Explaining policies

  • Providing guidance or instructions

  • Deflecting calls or chat

In these scenarios, the model retrieves information from configured sources and generates a response without calling any APIs.

How does the agent retrieve current business data, policies, and customer context in real time?

Grounded knowledge (static or semi-static): This approach works best for policies, documentation, and structured content.

The model uses Generative Answers, where it:

  • Searches across configured knowledge sources.

  • Synthesizes a response.

  • Optionally cites sources.
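As an illustration only, the retrieve-and-synthesize flow can be sketched as follows. The knowledge sources, keyword matching, and helper names here are hypothetical stand-ins; Copilot Studio performs the actual search and synthesis internally through Generative Answers.

```python
# Hypothetical sketch of a knowledge-only answer flow.
# Copilot Studio does this internally; nothing here is its real API.

KNOWLEDGE_SOURCES = {
    "sharepoint": ["Refunds are accepted within 30 days of purchase."],
    "website": ["Store hours: Mon-Fri 9am-6pm."],
}

def search_sources(query: str) -> list[str]:
    """Search across configured sources (naive keyword match for the sketch)."""
    hits = []
    for passages in KNOWLEDGE_SOURCES.values():
        hits += [p for p in passages
                 if any(w in p.lower() for w in query.lower().split())]
    return hits

def answer(query: str) -> str:
    """Synthesize a response from retrieved passages, without calling any APIs."""
    passages = search_sources(query)
    if not passages:
        return "I couldn't find that in the configured knowledge sources."
    # A real agent would synthesize with a language model and cite sources;
    # the sketch just returns the top passage.
    return passages[0]
```

Note that nothing in this flow writes to a backend system; the agent only reads from its configured sources.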

Supported sources include:

  • SharePoint

  • Websites

  • Uploaded documents

  • Dataverse (indirect through flows only)

Note

Dataverse isn't supported as a direct knowledge source for C2‑facing agents due to authentication requirements. You can surface Dataverse data through flows or OData calls and return it to the agent as structured results.
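A hedged sketch of the OData approach: the organization URL, table name, and token acquisition below are placeholders, and a real flow would authenticate with Microsoft Entra ID before returning structured results to the agent.

```python
# Sketch: surfacing Dataverse data through an OData Web API call.
# "contoso" org URL and the bearer token are placeholders, not real values.
import urllib.parse
import urllib.request

def build_odata_url(org_url: str, table: str,
                    select: list[str], filter_expr: str) -> str:
    """Compose a Dataverse Web API query URL with $select and $filter."""
    select_part = "$select=" + ",".join(select)
    filter_part = "$filter=" + urllib.parse.quote(filter_expr)
    return f"{org_url}/api/data/v9.2/{table}?{select_part}&{filter_part}"

def fetch_records(url: str, token: str) -> bytes:
    """Call the Web API with a bearer token (shown for shape only)."""
    req = urllib.request.Request(url, headers={
        "Authorization": f"Bearer {token}",
        "Accept": "application/json",
    })
    with urllib.request.urlopen(req) as resp:
        return resp.read()

url = build_odata_url(
    "https://contoso.crm.dynamics.com",  # placeholder org URL
    "accounts",
    ["name", "telephone1"],
    "statecode eq 0",
)
```

The JSON returned by the call can then be passed back to the agent as structured results, so the model never queries Dataverse directly.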

Best use cases for knowledge

  • Refund and return policies

  • Store hours and locations

  • Eligibility rules

  • Product FAQs

  • Internal procedures

Example

"What’s your refund policy for online orders?"

The model retrieves policy content from SharePoint and generates a clear answer.

Which tasks require exact validation before running: refunds, cancellations, updates, or account changes?

Certain actions require strict validation and must never be left to free-form AI decisions.

High-risk categories

| Category | Examples | Why it matters |
| --- | --- | --- |
| Financial | Refunds, payments, credits | Financial risk |
| Account state | Cancellations, plan changes | Irreversible actions |
| Identity | Address, phone, SSN updates | Fraud and compliance |
| Legal | Consent, opt-outs | Regulatory exposure |

The safe run pattern

AI decides > System validates > AI communicates

This separation keeps generative orchestration safe: the model proposes and explains, while the system of record makes the final determination.

Example: Refund request

  1. Model identifies intent
    "User wants a refund"

  2. Model gathers required details
    Order ID, reason, timeframe

  3. API or system of record validates

    • Checks eligibility

    • Applies refund policy

    • Confirms approval or rejection

  4. Model communicates the outcome

    • Explains the result clearly

    • Doesn't invent or assume outcomes
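The four steps above can be sketched as follows. The order store, the 30-day window, and the helper names are hypothetical backend policy for illustration, not a Copilot Studio API; the point is that eligibility is decided by backend logic, and the model only communicates the validated outcome.

```python
# Sketch of the "AI decides > System validates > AI communicates" pattern.
# ORDERS and the 30-day rule are hypothetical stand-ins for a system of record.
from datetime import date

ORDERS = {
    "A100": {"purchased": date(2025, 1, 5), "refunded": False},
    "A200": {"purchased": date(2024, 6, 1), "refunded": False},
}

def validate_refund(order_id: str, today: date) -> tuple[bool, str]:
    """Backend validation: the system of record, not the model, decides."""
    order = ORDERS.get(order_id)
    if order is None:
        return False, "order not found"
    if order["refunded"]:
        return False, "already refunded"
    if (today - order["purchased"]).days > 30:
        return False, "outside the 30-day refund window"
    return True, "refund approved"

def communicate(order_id: str, today: date) -> str:
    """The model explains the validated outcome; it never invents one."""
    approved, reason = validate_refund(order_id, today)
    if approved:
        return "I've processed your refund."
    return f"I can't process that refund: {reason}."
```

The model gathers the order ID and hands it off; whatever `validate_refund` returns is the only outcome the model is allowed to report.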

Clarifying a common misconception

Using a single model doesn't mean uncontrolled automation.

There's a clear separation of responsibilities.

| Capability | Who decides | Who enforces |
| --- | --- | --- |
| Intent recognition | Model | |
| Knowledge answers | Model | Knowledge source scope |
| API selection | Model | Tool availability |
| Validation | System of record | Backend logic |
| Final response | Model | Based on real outcomes |
