- HTTP request
- Path parameters
- Request body
- Response body
- UserInput
- InputType
- DeviceProperties
- Surface
- Location
- LatLng
- Output
- Canvas
- Prompt
- Simple
- Content
- Card
- Image
- ImageFill
- Link
- OpenUrl
- UrlHint
- Table
- TableColumn
- HorizontalAlignment
- TableRow
- TableCell
- Media
- MediaType
- OptionalMediaControls
- MediaObject
- MediaImage
- Collection
- CollectionItem
- List
- ListItem
- Suggestion
- Diagnostics
- ExecutionEvent
- ExecutionState
- Slots
- SlotFillingStatus
- Slot
- SlotMode
- SlotStatus
- Status
- UserConversationInput
- IntentMatch
- ConditionsEvaluated
- Condition
- OnSceneEnter
- WebhookRequest
- WebhookResponse
- WebhookInitiatedTransition
- SlotMatch
- SlotRequested
- SlotValidated
- FormFilled
- WaitingForUserInput
- EndConversation
Plays one round of the conversation.
HTTP request
POST https://actions.googleapis.com/v2/{project=projects/*}:sendInteraction
The URL uses gRPC Transcoding syntax.
Path parameters
Parameter | Description
---|---
`project` | Required. The project being tested, indicated by the Project ID. Format: `projects/{project}`
Request body
The request body contains data with the following structure:
JSON representation:

    {
      "input": { object (UserInput) },
      "deviceProperties": { object (DeviceProperties) },
      "conversationToken": string
    }

Field | Description
---|---
`input` | Required. Input provided by the user.
`deviceProperties` | Required. Properties of the device used for interacting with the Action.
`conversationToken` | Opaque token that must be passed as received from SendInteractionResponse on the previous interaction. This can be left unset in order to start a new conversation, either as the first interaction of a testing session or to abandon a previous conversation and start a new one.
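Putting the request structure together, a round can be driven with a plain HTTP POST. The sketch below is illustrative, not an official client: `project_id` and `access_token` are placeholders you must supply (the API requires OAuth credentials, whose acquisition is out of scope here).

```python
import json
import urllib.request

def build_request(query, surface="PHONE", locale="en-US",
                  time_zone="America/New_York", conversation_token=None):
    """Assemble a SendInteractionRequest body as described above."""
    body = {
        "input": {"query": query, "type": "KEYBOARD"},
        "deviceProperties": {
            "surface": surface,
            "locale": locale,
            "timeZone": time_zone,
        },
    }
    # Omitting conversationToken starts a new conversation.
    if conversation_token is not None:
        body["conversationToken"] = conversation_token
    return body

def send_interaction(project_id, access_token, body):
    """POST one conversation round to the sendInteraction endpoint."""
    url = f"https://actions.googleapis.com/v2/projects/{project_id}:sendInteraction"
    req = urllib.request.Request(
        url,
        data=json.dumps(body).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {access_token}",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```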
Response body
If successful, the response body contains data with the following structure:
Response to a round of the conversation.
JSON representation:

    {
      "output": { object (Output) },
      "diagnostics": { object (Diagnostics) },
      "conversationToken": string
    }

Field | Description
---|---
`output` | Output provided to the user.
`diagnostics` | Diagnostics information that explains how the request was handled.
`conversationToken` | Opaque token to be set on SendInteractionRequest on the next RPC call in order to continue the same conversation.
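The `conversationToken` round-trip can be expressed as a small driver loop that feeds each response's token into the next request. `transport` is a stand-in for the actual RPC call, injected so the loop can be exercised without network access:

```python
def run_rounds(queries, transport):
    """Play several rounds, threading conversationToken between them.

    `transport(body)` performs the sendInteraction call and returns the
    decoded SendInteractionResponse.
    """
    token = None
    outputs = []
    for query in queries:
        body = {
            "input": {"query": query, "type": "KEYBOARD"},
            "deviceProperties": {"surface": "PHONE", "locale": "en-US",
                                 "timeZone": "America/New_York"},
        }
        # First round: no token set, so a new conversation starts.
        if token is not None:
            body["conversationToken"] = token
        response = transport(body)
        token = response.get("conversationToken")
        outputs.append(response.get("output", {}))
    return outputs
```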
UserInput
User input provided on a conversation round.
JSON representation:

    {
      "query": string,
      "type": enum (InputType)
    }

Field | Description
---|---
`query` | Content of the input sent by the user.
`type` | Type of the input.
InputType
Indicates the input source, typed query or voice query.
Enum | Description
---|---
`INPUT_TYPE_UNSPECIFIED` | Unspecified input source.
`TOUCH` | Query from a GUI interaction.
`VOICE` | Voice query.
`KEYBOARD` | Typed query.
`URL` | The action was triggered by a URL link.
DeviceProperties
Properties of device relevant to a conversation round.
JSON representation:

    {
      "surface": enum (Surface),
      "location": { object (Location) },
      "locale": string,
      "timeZone": string
    }

Field | Description
---|---
`surface` | Surface used for interacting with the Action.
`location` | Device location such as latitude, longitude, and formatted address.
`locale` | Locale as set on the device. The format should follow BCP 47: https://tools.ietf.org/html/bcp47 Examples: en, en-US, es-419 (more examples at https://tools.ietf.org/html/bcp47#appendix-A).
`timeZone` | Time zone as set on the device. The format should follow the IANA Time Zone Database, e.g. "America/New_York": https://www.iana.org/time-zones
Surface
Possible surfaces used to interact with the Action. Additional values may be included in the future.
Enum | Description
---|---
`SURFACE_UNSPECIFIED` | Default value. This value is unused.
`SPEAKER` | Speaker (e.g. Google Home).
`PHONE` | Phone.
`ALLO` | Allo Chat.
`SMART_DISPLAY` | Smart Display Device.
`KAI_OS` | KaiOS.
Location
Container that represents a location.
JSON representation:

    {
      "coordinates": { object (LatLng) },
      "formattedAddress": string,
      "zipCode": string,
      "city": string
    }

Field | Description
---|---
`coordinates` | Geo coordinates. Requires the `DEVICE_PRECISE_LOCATION` permission.
`formattedAddress` | Display address, e.g., "1600 Amphitheatre Pkwy, Mountain View, CA 94043". Requires the `DEVICE_PRECISE_LOCATION` permission.
`zipCode` | Zip code. Requires the `DEVICE_PRECISE_LOCATION` or `DEVICE_COARSE_LOCATION` permission.
`city` | City. Requires the `DEVICE_PRECISE_LOCATION` or `DEVICE_COARSE_LOCATION` permission.
LatLng
An object that represents a latitude/longitude pair. This is expressed as a pair of doubles to represent degrees latitude and degrees longitude. Unless specified otherwise, this object must conform to the WGS84 standard. Values must be within normalized ranges.
JSON representation:

    {
      "latitude": number,
      "longitude": number
    }

Field | Description
---|---
`latitude` | The latitude in degrees. It must be in the range [-90.0, +90.0].
`longitude` | The longitude in degrees. It must be in the range [-180.0, +180.0].
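The range constraints above are easy to enforce before a payload is sent. A minimal helper sketch (the function name is ours, not part of the API):

```python
def make_lat_lng(latitude, longitude):
    """Build a LatLng dict, enforcing the normalized WGS84 ranges above."""
    if not -90.0 <= latitude <= 90.0:
        raise ValueError(f"latitude {latitude} outside [-90.0, +90.0]")
    if not -180.0 <= longitude <= 180.0:
        raise ValueError(f"longitude {longitude} outside [-180.0, +180.0]")
    return {"latitude": latitude, "longitude": longitude}
```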
Output
User-visible output to the conversation round.
JSON representation:

    {
      "text": string,
      "speech": [ string ],
      "canvas": { object (Canvas) },
      "actionsBuilderPrompt": { object (Prompt) }
    }

Field | Description
---|---
`text` | Spoken response sent to the user as a plain string.
`speech[]` | Speech content produced by the Action. This may include markup elements such as SSML.
`canvas` | Interactive Canvas content.
`actionsBuilderPrompt` | State of the prompt at the end of the conversation round. More information about the prompt: https://developers.google.com/assistant/conversational/prompts
Canvas
Represents an Interactive Canvas response to be sent to the user. This can be used in conjunction with the "firstSimple" field in the containing prompt to speak to the user in addition to displaying an interactive canvas response. The maximum size of the response is 50k bytes.

JSON representation:

    {
      "url": string,
      "data": [ value ],
      "suppressMic": boolean,
      "enableFullScreen": boolean
    }

Field | Description
---|---
`url` | URL of the interactive canvas web app to load. If not set, the url from the current active canvas will be reused.
`data[]` | Optional. JSON data to be passed through to the immersive experience web page as an event. If the "override" field in the containing prompt is "false", data values defined in this Canvas prompt will be added after data values defined in previous Canvas prompts.
`suppressMic` | Optional. Default value: false.
`enableFullScreen` | If `true`, the canvas application occupies the full screen and won't have a header at the top. Default value: false.
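The 50k-byte ceiling stated above is worth checking before sending. A sketch of a client-side guard (the helper and the validation itself are ours; the API enforces the limit server-side):

```python
import json

CANVAS_MAX_BYTES = 50_000  # documented maximum size of a Canvas response

def make_canvas(url=None, data=None, suppress_mic=False, enable_full_screen=False):
    """Build a Canvas dict and check the documented size limit."""
    canvas = {"suppressMic": suppress_mic, "enableFullScreen": enable_full_screen}
    if url is not None:
        canvas["url"] = url
    if data is not None:
        canvas["data"] = data
    encoded = json.dumps(canvas).encode("utf-8")
    if len(encoded) > CANVAS_MAX_BYTES:
        raise ValueError(
            f"canvas payload is {len(encoded)} bytes, limit {CANVAS_MAX_BYTES}")
    return canvas
```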
Prompt
Represents a response to a user.

JSON representation:

    {
      "append": boolean,
      "override": boolean,
      "firstSimple": { object (Simple) },
      "content": { object (Content) },
      "lastSimple": { object (Simple) },
      "suggestions": [ { object (Suggestion) } ],
      "link": { object (Link) },
      "canvas": { object (Canvas) }
    }

Field | Description
---|---
`append` | Optional. Mode for how these messages should be merged with previously defined messages. "false" will clear all previously defined messages (first and last simple, content, suggestions, link and canvas) and add messages defined in this prompt. "true" will add messages defined in this prompt to messages defined in previous responses. Setting this field to "true" will also enable appending to some fields inside Simple prompts, the Suggestion prompt and the Canvas prompt (part of the Content prompt). The Content and Link messages will always be overwritten if defined in the prompt. Default value is "false".
`override` | Optional. Mode for how these messages should be merged with previously defined messages. "true" clears all previously defined messages (first and last simple, content, suggestions, link and canvas) and adds messages defined in this prompt. "false" adds messages defined in this prompt to messages defined in previous responses. Leaving this field set to "false" also enables appending to some fields inside Simple prompts, the Suggestions prompt, and the Canvas prompt (part of the Content prompt). The Content and Link messages are always overwritten if defined in the prompt. Default value is "false".
`firstSimple` | Optional. The first voice and text-only response.
`content` | Optional. Content such as a card, list or media to display to the user.
`lastSimple` | Optional. The last voice and text-only response.
`suggestions[]` | Optional. Suggestions to be displayed to the user which will always appear at the end of the response. If the "override" field in the containing prompt is "false", the titles defined in this field will be added to titles defined in any previously defined suggestions prompts and duplicate values will be removed.
`link` | Optional. An additional suggestion chip that can link out to the associated app or site. The chip will be rendered with the title "Open `<name>`".
`canvas` | Optional. Represents an Interactive Canvas response to be sent to the user.
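The override/append merge rules above can be illustrated with a small sketch. Only `firstSimple` and `suggestions` are modeled here, and `firstSimple` is simply replaced; the real runtime applies richer per-field rules, so treat this as a simplification of the documented behavior, not its implementation:

```python
def merge_prompts(previous, current):
    """Sketch of prompt merging for override=true vs override=false."""
    if current.get("override", False):
        # override=true: previous messages are cleared entirely.
        return {k: v for k, v in current.items() if k != "override"}
    merged = dict(previous)
    if "firstSimple" in current:
        merged["firstSimple"] = current["firstSimple"]
    # override=false: suggestion titles are appended, duplicates removed.
    titles = [s["title"] for s in previous.get("suggestions", [])]
    for s in current.get("suggestions", []):
        if s["title"] not in titles:
            titles.append(s["title"])
    if titles:
        merged["suggestions"] = [{"title": t} for t in titles]
    return merged
```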
Simple
Represents a simple prompt to be sent to a user.

JSON representation:

    {
      "speech": string,
      "text": string
    }

Field | Description
---|---
`speech` | Optional. Represents the speech to be spoken to the user. Can be SSML or text to speech. If the "override" field in the containing prompt is "true", the speech defined in this field replaces the previous Simple prompt's speech.
`text` | Optional text to display in the chat bubble. If not given, a display rendering of the speech field above will be used. Limited to 640 chars. If the "override" field in the containing prompt is "true", the text defined in this field replaces the previous Simple prompt's text.
Content
Content to be shown.
JSON representation:

    {
      // Union field content can be only one of the following:
      "card": { object (Card) },
      "image": { object (Image) },
      "table": { object (Table) },
      "media": { object (Media) },
      "canvas": { object (Canvas) },
      "collection": { object (Collection) },
      "list": { object (List) }
      // End of list of possible types for union field content.
    }

Union field `content` can be only one of the following:

Field | Description
---|---
`card` | A basic card.
`image` | An image.
`table` | Table card.
`media` | Response indicating a set of media to be played.
`canvas` | A response to be used for interactive canvas experience.
`collection` | A card presenting a collection of options to select from.
`list` | A card presenting a list of options to select from.
Card
A basic card for displaying some information, e.g. an image and/or text.
JSON representation:

    {
      "title": string,
      "subtitle": string,
      "text": string,
      "image": { object (Image) },
      "imageFill": enum (ImageFill),
      "button": { object (Link) }
    }

Field | Description
---|---
`title` | Overall title of the card. Optional.
`subtitle` | Optional.
`text` | Body text of the card. Supports a limited set of markdown syntax for formatting. Required, unless image is present.
`image` | A hero image for the card. The height is fixed to 192dp. Optional.
`imageFill` | How the image background will be filled. Optional.
`button` | Button. Optional.
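The one hard constraint in the Card fields (body text required unless an image is present) can be sketched as a builder. The helper is illustrative only:

```python
def make_card(title=None, subtitle=None, text=None, image=None):
    """Build a basic Card dict, enforcing "text required unless image is present"."""
    if text is None and image is None:
        raise ValueError("Card requires body text unless an image is present")
    card = {}
    if title is not None:
        card["title"] = title
    if subtitle is not None:
        card["subtitle"] = subtitle
    if text is not None:
        card["text"] = text
    if image is not None:
        card["image"] = image
    return card
```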
Image
An image displayed in the card.
JSON representation:

    {
      "url": string,
      "alt": string,
      "height": integer,
      "width": integer
    }

Field | Description
---|---
`url` | The source url of the image. Images can be JPG, PNG and GIF (animated and non-animated). Required.
`alt` | A text description of the image to be used for accessibility, e.g. screen readers. Required.
`height` | The height of the image in pixels. Optional.
`width` | The width of the image in pixels. Optional.
ImageFill
Possible image display options for affecting the presentation of the image. This should be used for when the image's aspect ratio does not match the image container's aspect ratio.
Enum | Description
---|---
`UNSPECIFIED` | Unspecified image fill.
`GRAY` | Fill the gaps between the image and the image container with gray bars.
`WHITE` | Fill the gaps between the image and the image container with white bars.
`CROPPED` | Image is scaled such that the image width and height match or exceed the container dimensions. This may crop the top and bottom of the image if the scaled image height is greater than the container height, or crop the left and right of the image if the scaled image width is greater than the container width. This is similar to "Zoom Mode" on a widescreen TV when playing a 4:3 video.
Link
Link content.
JSON representation:

    {
      "name": string,
      "open": { object (OpenUrl) }
    }

Field | Description
---|---
`name` | Name of the link.
`open` | What happens when a user opens the link.
OpenUrl
Action taken when a user opens a link.
JSON representation:

    {
      "url": string,
      "hint": enum (UrlHint)
    }

Field | Description
---|---
`url` | The url field, which could be any of: http/https urls for opening an App-linked App or a webpage.
`hint` | Indicates a hint for the url type.
UrlHint
Different types of url hints.
Enum | Description
---|---
`LINK_UNSPECIFIED` | Unspecified.
`AMP` | URL that points directly to AMP content, or to a canonical URL which refers to AMP content via `<link rel="amphtml">`.
Table
A table card for displaying a table of text.
JSON representation:

    {
      "title": string,
      "subtitle": string,
      "image": { object (Image) },
      "columns": [ { object (TableColumn) } ],
      "rows": [ { object (TableRow) } ],
      "button": { object (Link) }
    }

Field | Description
---|---
`title` | Overall title of the table. Optional, but must be set if subtitle is set.
`subtitle` | Subtitle for the table. Optional.
`image` | Image associated with the table. Optional.
`columns[]` | Headers and alignment of columns.
`rows[]` | Row data of the table. The first 3 rows are guaranteed to be shown but others might be cut on certain surfaces. Please test with the simulator to see which rows will be shown for a given surface. On surfaces that support the WEB_BROWSER capability, you can point the user to a web page with more data.
`button` | Button.
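A Table payload can be assembled from plain header strings and row data. The builder below is a sketch (names are ours); it enforces the "title must be set if subtitle is set" rule, while the 3-guaranteed-rows behavior is a rendering property and is not validated here:

```python
def make_table(columns, rows, title=None, subtitle=None):
    """Build a Table card dict from header strings and lists of cell texts."""
    if subtitle is not None and title is None:
        raise ValueError("subtitle requires title to be set")
    table = {
        "columns": [{"header": h, "align": "LEADING"} for h in columns],
        "rows": [{"cells": [{"text": c} for c in row], "divider": True}
                 for row in rows],
    }
    if title is not None:
        table["title"] = title
    if subtitle is not None:
        table["subtitle"] = subtitle
    return table
```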
TableColumn
Describes a column in a table.
JSON representation:

    {
      "header": string,
      "align": enum (HorizontalAlignment)
    }

Field | Description
---|---
`header` | Header text for the column.
`align` | Horizontal alignment of content with respect to the column. If unspecified, content will be aligned to the leading edge.
HorizontalAlignment
The alignment of the content within the cell.
Enum | Description
---|---
`UNSPECIFIED` | Unspecified horizontal alignment.
`LEADING` | Leading edge of the cell. This is the default.
`CENTER` | Content is aligned to the center of the column.
`TRAILING` | Content is aligned to the trailing edge of the column.
TableRow
Describes a row in the table.
JSON representation:

    {
      "cells": [ { object (TableCell) } ],
      "divider": boolean
    }

Field | Description
---|---
`cells[]` | Cells in this row. The first 3 cells are guaranteed to be shown but others might be cut on certain surfaces. Please test with the simulator to see which cells will be shown for a given surface.
`divider` | Indicates whether there should be a divider after each row.
TableCell
Describes a cell in a row.
JSON representation:

    {
      "text": string
    }

Field | Description
---|---
`text` | Text content of the cell.
Media
Represents one media object. Contains information about the media, such as name, description, url, etc.
JSON representation:

    {
      "mediaType": enum (MediaType),
      "startOffset": string,
      "optionalMediaControls": [ enum (OptionalMediaControls) ],
      "mediaObjects": [ { object (MediaObject) } ]
    }

Field | Description
---|---
`mediaType` | Media type.
`startOffset` | Start offset of the first media object. A duration in seconds with up to nine fractional digits, terminated by '`s`'. Example: `"3.5s"`.
`optionalMediaControls[]` | Optional media control types this media response session can support. If set, a request will be made to the 3p when a certain media event happens. If not set, the 3p must still handle the two default control types, FINISHED and FAILED.
`mediaObjects[]` | List of media objects.
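The `startOffset` duration format (seconds with up to nine fractional digits, ending in `s`) can be produced with a small formatter. The helper name is ours:

```python
def start_offset(seconds):
    """Format a startOffset duration string, e.g. 3.5 -> "3.5s"."""
    if seconds < 0:
        raise ValueError("offset must be non-negative")
    # Render with nine fractional digits, then trim trailing zeros and dot.
    text = f"{seconds:.9f}".rstrip("0").rstrip(".")
    return text + "s"
```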
MediaType
Media type of this response.
Enum | Description
---|---
`MEDIA_TYPE_UNSPECIFIED` | Unspecified media type.
`AUDIO` | Audio file.
`MEDIA_STATUS_ACK` | Response to acknowledge a media status report.
OptionalMediaControls
Optional media control types the media response can support.

Enum | Description
---|---
`OPTIONAL_MEDIA_CONTROLS_UNSPECIFIED` | Unspecified value.
`PAUSED` | Paused event. Triggered when the user pauses the media.
`STOPPED` | Stopped event. Triggered when the user exits out of the 3p session during media play.
MediaObject
Represents a single media object.

JSON representation:

    {
      "name": string,
      "description": string,
      "url": string,
      "image": { object (MediaImage) }
    }

Field | Description
---|---
`name` | Name of this media object.
`description` | Description of this media object.
`url` | The url pointing to the media content.
`image` | Image to show with the media card.
MediaImage
Image to show with the media card.
JSON representation:

    {
      // Union field image can be only one of the following:
      "large": { object (Image) },
      "icon": { object (Image) }
      // End of list of possible types for union field image.
    }

Union field `image` can be only one of the following:

Field | Description
---|---
`large` | A large image, such as the cover of the album, etc.
`icon` | A small image icon displayed to the right of the title. It's resized to 36x36 dp.
Collection
A card for presenting a collection of options to select from.
JSON representation:

    {
      "title": string,
      "subtitle": string,
      "items": [ { object (CollectionItem) } ],
      "imageFill": enum (ImageFill)
    }

Field | Description
---|---
`title` | Title of the collection. Optional.
`subtitle` | Subtitle of the collection. Optional.
`items[]` | List of items. min: 2, max: 10.
`imageFill` | How the image backgrounds of collection items will be filled. Optional.
CollectionItem
An item in the collection.

JSON representation:

    {
      "key": string
    }

Field | Description
---|---
`key` | Required. The NLU key that matches the entry key name in the associated Type.
List
A card for presenting a list of options to select from.
JSON representation:

    {
      "title": string,
      "subtitle": string,
      "items": [ { object (ListItem) } ]
    }

Field | Description
---|---
`title` | Title of the list. Optional.
`subtitle` | Subtitle of the list. Optional.
`items[]` | List of items. min: 2, max: 30.
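The item-count bounds (2-10 for a Collection, 2-30 for a List) can be checked when building the payload. A sketch, with our own helper name:

```python
def make_list(items, title=None, subtitle=None):
    """Build a List card from NLU entry keys, enforcing min 2 / max 30 items."""
    if not 2 <= len(items) <= 30:
        raise ValueError(f"List needs between 2 and 30 items, got {len(items)}")
    card = {"items": [{"key": k} for k in items]}
    if title is not None:
        card["title"] = title
    if subtitle is not None:
        card["subtitle"] = subtitle
    return card
```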
ListItem
An item in the list.

JSON representation:

    {
      "key": string
    }

Field | Description
---|---
`key` | Required. The NLU key that matches the entry key name in the associated Type.
Suggestion
Input suggestion to be presented to the user.
JSON representation:

    {
      "title": string
    }

Field | Description
---|---
`title` | Required. The text shown in the suggestion chip. When tapped, this text will be posted back to the conversation verbatim as if the user had typed it. Each title must be unique among the set of suggestion chips. Max 25 chars.
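Both suggestion-chip constraints above (unique titles, at most 25 characters each) lend themselves to a pre-send check. An illustrative helper:

```python
def make_suggestions(titles):
    """Build Suggestion chips, enforcing title uniqueness and the 25-char limit."""
    if len(set(titles)) != len(titles):
        raise ValueError("suggestion titles must be unique")
    for title in titles:
        if len(title) > 25:
            raise ValueError(f"title {title!r} exceeds 25 chars")
    return [{"title": t} for t in titles]
```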
Diagnostics
Diagnostics information related to the conversation round.
JSON representation:

    {
      "actionsBuilderEvents": [ { object (ExecutionEvent) } ]
    }

Field | Description
---|---
`actionsBuilderEvents[]` | List of events with details about processing of the conversation round throughout the stages of the Actions Builder interaction model. Populated for Actions Builder & Actions SDK apps only.
ExecutionEvent
Contains information about execution event which happened during processing Actions Builder conversation request. For an overview of the stages involved in a conversation request, see https://developers.google.com/assistant/conversational/actions.
JSON representation:

    {
      "eventTime": string,
      "executionState": { object (ExecutionState) },
      "status": { object (Status) },
      "warningMessages": [ string ],

      // Union field EventData can be only one of the following:
      "userInput": { object (UserConversationInput) },
      "intentMatch": { object (IntentMatch) },
      "conditionsEvaluated": { object (ConditionsEvaluated) },
      "onSceneEnter": { object (OnSceneEnter) },
      "webhookRequest": { object (WebhookRequest) },
      "webhookResponse": { object (WebhookResponse) },
      "webhookInitiatedTransition": { object (WebhookInitiatedTransition) },
      "slotMatch": { object (SlotMatch) },
      "slotRequested": { object (SlotRequested) },
      "slotValidated": { object (SlotValidated) },
      "formFilled": { object (FormFilled) },
      "waitingUserInput": { object (WaitingForUserInput) },
      "endConversation": { object (EndConversation) }
      // End of list of possible types for union field EventData.
    }

Field | Description
---|---
`eventTime` | Timestamp when the event happened. A timestamp in RFC3339 UTC "Zulu" format, with nanosecond resolution and up to nine fractional digits. Examples: `"2014-10-02T15:01:23Z"` and `"2014-10-02T15:01:23.045123456Z"`.
`executionState` | State of the execution during this event.
`status` | Resulting status of particular execution step.
`warningMessages[]` | List of warnings generated during execution of this Event. Warnings are tips for the developer discovered during the conversation request. These are usually non-critical and do not halt the execution of the request. For example, a warning might be generated when the webhook tries to override a custom Type which does not exist. Errors are reported as a failed status code, but warnings can be present even when the status is OK.

Union field `EventData` contains detailed information specific to the different types of events that may be involved in processing a conversation round. The field set here defines the type of this event. `EventData` can be only one of the following:

Field | Description
---|---
`userInput` | User input handling event.
`intentMatch` | Intent matching event.
`conditionsEvaluated` | Condition evaluation event.
`onSceneEnter` | OnSceneEnter execution event.
`webhookRequest` | Webhook request dispatch event.
`webhookResponse` | Webhook response receipt event.
`webhookInitiatedTransition` | Webhook-initiated transition event.
`slotMatch` | Slot matching event.
`slotRequested` | Slot requesting event.
`slotValidated` | Slot validation event.
`formFilled` | Form filling event.
`waitingUserInput` | Waiting-for-user-input event.
`endConversation` | End-of-conversation event.
ExecutionState
Current state of the execution.
JSON representation:

    {
      "currentSceneId": string,
      "sessionStorage": { object },
      "slots": { object (Slots) },
      "promptQueue": [ { object (Prompt) } ],
      "userStorage": { object },
      "householdStorage": { object }
    }

Field | Description
---|---
`currentSceneId` | ID of the scene which is currently active.
`sessionStorage` | State of the session storage: https://developers.google.com/assistant/conversational/storage-session
`slots` | State of the slots filling, if applicable: https://developers.google.com/assistant/conversational/scenes#slot_filling
`promptQueue[]` | Prompt queue: https://developers.google.com/assistant/conversational/prompts
`userStorage` | State of the user storage: https://developers.google.com/assistant/conversational/storage-user
`householdStorage` | State of the home storage: https://developers.google.com/assistant/conversational/storage-home
Slots
Represents the current state of the scene's slots.

JSON representation:

    {
      "status": enum (SlotFillingStatus),
      "slots": {
        string: { object (Slot) },
        ...
      }
    }

Field | Description
---|---
`status` | The current status of slot filling.
`slots` | The slots associated with the current scene. An object containing a list of `"key": value` pairs.
SlotFillingStatus
Represents the current status of slot filling.
Enum | Description
---|---
`UNSPECIFIED` | Fallback value when the usage field is not populated.
`INITIALIZED` | The slots have been initialized but slot filling has not started.
`COLLECTING` | The slot values are being collected.
`FINAL` | All slot values are final and cannot be changed.
Slot
Represents a slot.
JSON representation:

    {
      "mode": enum (SlotMode),
      "status": enum (SlotStatus),
      "value": value,
      "updated": boolean,
      "prompt": { object (Prompt) }
    }

Field | Description
---|---
`mode` | The mode of the slot (required or optional). Can be set by the developer.
`status` | The status of the slot.
`value` | The value of the slot. Changing this value in the response will modify the value in slot filling.
`updated` | Indicates if the slot value was collected on the last turn. This field is read-only.
`prompt` | Optional. This prompt is sent to the user when needed to fill a required slot. This prompt overrides the existing prompt defined in the console. This field is not included in the webhook request.
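The writable/read-only split above can be illustrated with a webhook-style update: a handler may set `status`, `value`, and an override `prompt`, while `updated` stays untouched. This sketch (helper name and prompt text are ours) mirrors the `Slots.slots` map as plain dicts:

```python
def invalidate_slot(slots, name, prompt_text):
    """Mark a filled slot INVALID and attach an override prompt.

    Returns a new slots map; the original is left unmodified.
    """
    slot = dict(slots[name])
    slot["status"] = "INVALID"     # writable per SlotStatus semantics
    slot["value"] = None           # writable: clears the collected value
    slot["prompt"] = {"firstSimple": {"speech": prompt_text, "text": prompt_text}}
    # "updated" is read-only, so it is deliberately not touched here.
    return {**slots, name: slot}
```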
SlotMode
Represents the mode of a slot, that is, if it is required or not.
Enum | Description
---|---
`MODE_UNSPECIFIED` | Fallback value when the usage field is not populated.
`OPTIONAL` | Indicates that the slot is not required to complete slot filling.
`REQUIRED` | Indicates that the slot is required to complete slot filling.
SlotStatus
Represents the status of a slot.
Enum | Description
---|---
`SLOT_UNSPECIFIED` | Fallback value when the usage field is not populated.
`EMPTY` | Indicates that the slot does not have any values. This status cannot be modified through the response.
`INVALID` | Indicates that the slot value is invalid. This status can be set through the response.
`FILLED` | Indicates that the slot has a value. This status cannot be modified through the response.
Status
The Status
type defines a logical error model that is suitable for different programming environments, including REST APIs and RPC APIs. It is used by gRPC. Each Status
message contains three pieces of data: error code, error message, and error details.
You can find out more about this error model and how to work with it in the API Design Guide.
JSON representation | |
---|---|
{ "code": integer, "message": string, "details": [ { "@type": string, field1: ..., ... } ] } |
Fields | |
---|---|
code |
The status code, which should be an enum value of |
message |
A developer-facing error message, which should be in English. Any user-facing error message should be localized and sent in the |
details[] |
A list of messages that carry the error details. There is a common set of message types for APIs to use. An object containing fields of an arbitrary type. An additional field |
UserConversationInput
Information related to user input.
JSON representation:

    {
      "type": string,
      "originalQuery": string
    }

Field | Description
---|---
`type` | Type of user input. E.g. keyboard, voice, touch, etc.
`originalQuery` | Original text input from the user.
IntentMatch
Information about triggered intent match (global or within a scene): https://developers.google.com/assistant/conversational/intents
JSON representation:

    {
      "intentId": string,
      "intentParameters": {
        string: { object (IntentParameterValue) },
        ...
      },
      "handler": string,
      "nextSceneId": string
    }

Field | Description
---|---
`intentId` | Intent id which triggered this interaction.
`intentParameters` | Parameters of the intent which triggered this interaction. An object containing a list of `"key": value` pairs.
`handler` | Name of the handler attached to this interaction.
`nextSceneId` | Scene to which this interaction leads.
ConditionsEvaluated
Results of conditions evaluation: https://developers.google.com/assistant/conversational/scenes#conditions
JSON representation:

    {
      "failedConditions": [ { object (Condition) } ],
      "successCondition": { object (Condition) }
    }

Field | Description
---|---
`failedConditions[]` | List of conditions which were evaluated to 'false'.
`successCondition` | The first condition which was evaluated to 'true', if any.
Condition
Evaluated condition.
JSON representation:

    {
      "expression": string,
      "handler": string,
      "nextSceneId": string
    }

Field | Description
---|---
`expression` | Expression specified in this condition.
`handler` | Handler name specified in the evaluated condition.
`nextSceneId` | Destination scene specified in the evaluated condition.
OnSceneEnter
Information about execution of onSceneEnter stage: https://developers.google.com/assistant/conversational/scenes#onEnter
JSON representation:

    {
      "handler": string
    }

Field | Description
---|---
`handler` | Handler name specified in the onSceneEnter event.
WebhookRequest
Information about a request dispatched to the Action webhook: https://developers.google.com/assistant/conversational/webhooks#payloads
JSON representation:

    {
      "requestJson": string
    }

Field | Description
---|---
`requestJson` | Payload of the webhook request.
WebhookResponse
Information about a response received from the Action webhook: https://developers.google.com/assistant/conversational/webhooks#payloads
JSON representation:

    {
      "responseJson": string
    }

Field | Description
---|---
`responseJson` | Payload of the webhook response.
WebhookInitiatedTransition
Event triggered by destination scene returned from webhook: https://developers.google.com/assistant/conversational/webhooks#transition_scenes
JSON representation:

    {
      "nextSceneId": string
    }

Field | Description
---|---
`nextSceneId` | ID of the scene the transition is leading to.
SlotMatch
Information about matched slot(s): https://developers.google.com/assistant/conversational/scenes#slot_filling
JSON representation:

    {
      "nluParameters": {
        string: { object (IntentParameterValue) },
        ...
      }
    }

Field | Description
---|---
`nluParameters` | Parameters extracted by NLU from user input. An object containing a list of `"key": value` pairs.
SlotRequested
Information about currently requested slot: https://developers.google.com/assistant/conversational/scenes#slot_filling
JSON representation:

    {
      "slot": string,
      "prompt": { object (Prompt) }
    }

Field | Description
---|---
`slot` | Name of the requested slot.
`prompt` | Slot prompt.
SlotValidated
Event which happens after webhook validation has finished for slot(s): https://developers.google.com/assistant/conversational/scenes#slot_filling
FormFilled
Event which happens when a form is fully filled: https://developers.google.com/assistant/conversational/scenes#slot_filling
WaitingForUserInput
Event which happens when the system needs user input: https://developers.google.com/assistant/conversational/scenes#input
EndConversation
Event which informs that the conversation with the agent has ended.