# Build fulfillment (Dialogflow)

Last updated (UTC): 2025-07-26.

- Fulfillment enables your Actions project to receive user input, process it, and provide responses, driving the conversation.
- You can build fulfillment using the Actions SDK with Node.js or Java/Kotlin, or by referring to the Conversation webhook format.
- To handle user requests, create functions that process input and use `conv.ask()` to provide responses.
- Manage conversation state with `conversationToken` (HTTP/JSON API) or `conv.data` (Node.js client library).
- For the main invocation intent, orient the user with possible actions using `conv.ask()`.

| **Learn more about building responses**
|
| The documentation here describes how to build fulfillment generally.
| See [Surface capabilities](/assistant/df-asdk/surface-capabilities) and
| [Responses](/assistant/df-asdk/responses)
| for more information on how to handle specific portions of your conversation.

Fulfillment defines the conversational interface for your Actions project to obtain
user input, as well as the logic to process that input and eventually fulfill the Action.

Overview
--------

Your fulfillment receives requests from the Assistant, processes each request, and
responds. This back-and-forth request and response process drives the
conversation forward until you eventually fulfill the initial user request.

The following steps describe how you can build fulfillment using the Actions
SDK with the Node.js or Java/Kotlin client library:

1. [Initialize the ActionsSdkApp object](#initialize_the_actionssdkapp_object).
2. [Create functions to handle requests](#create_functions_to_handle_requests) in your fulfillment logic.

| **Note:** If you are not using our client libraries, you can refer to the [Conversation webhook format](/assistant/df-asdk/reference/conversation-webhook-json) guide.

Building dialogs
----------------

### Initialize the `ActionsSdkApp` object

The following code instantiates
[`ActionsSdkApp`](//actions-on-google.github.io/actions-on-google-nodejs/2.12.0/index.html)
and does some boilerplate Node.js setup for Google Cloud Functions:

Actions SDK (Node.js)

```javascript
'use strict';

const {actionssdk} = require('actions-on-google');
const functions = require('firebase-functions');

const app = actionssdk({debug: true});

app.intent('actions.intent.MAIN', (conv) => {
  conv.ask('Hi!');
});

// More intent handling if needed
exports.myFunction = functions.https.onRequest(app);
```

Actions SDK (Java)

```java
ResponseBuilder responseBuilder = getResponseBuilder(request).add("Hi!");
return responseBuilder.build();
```

JSON

```json
{
  "expectUserResponse": true,
  "expectedInputs": [
    {
      "inputPrompt": {
        "richInitialPrompt": {
          "items": [
            {
              "simpleResponse": {
                "textToSpeech": "Hi!"
              }
            }
          ]
        }
      },
      "possibleIntents": [
        {
          "intent": "actions.intent.TEXT"
        }
      ]
    }
  ],
  "conversationToken": "{\"data\":{}}",
  "userStorage": "{\"data\":{}}"
}
```

### Create functions to handle requests

When users speak a phrase, you receive a request from Google Assistant. To
fulfill the intents that come in requests, create functions that handle each
triggered intent.

| **Note:** See [intents](/assistant/df-asdk/reference/intents) for more information on the intents you can provide fulfillment for.

To handle requests:

1. Carry out any logic required to process the user input.

2. Call the `conv.ask()` function, passing the response you want to show
   as an argument.

The following code shows how to build a simple response:

Actions SDK (Node.js)

```javascript
conv.ask(`Hi! Say something, and I'll repeat it.`);
```

Actions SDK (Java)

```java
ResponseBuilder responseBuilder =
    getResponseBuilder(request).add("Hi! Say something, and I'll repeat it.");
return responseBuilder.build();
```

JSON

```json
{
  "expectUserResponse": true,
  "expectedInputs": [
    {
      "inputPrompt": {
        "richInitialPrompt": {
          "items": [
            {
              "simpleResponse": {
                "textToSpeech": "Hi! Say something, and I'll repeat it."
              }
            }
          ]
        }
      },
      "possibleIntents": [
        {
          "intent": "actions.intent.TEXT"
        }
      ]
    }
  ],
  "conversationToken": "{\"data\":{}}",
  "userStorage": "{\"data\":{}}"
}
```

### Handling intents

Once you have functions to handle each triggered intent, use `app.intent` to
assign the handlers to intents:

Actions SDK (Node.js)

```javascript
app.intent('actions.intent.TEXT', (conv) => {
  // handle text intent.
});
app.intent('actions.intent.MAIN', (conv) => {
  // handle main intent.
});
```

Actions SDK (Java)

```java
@ForIntent("actions.intent.MAIN")
public ActionResponse main(ActionRequest request) {
  // handle main intent
  // ...
}

@ForIntent("actions.intent.TEXT")
public ActionResponse text(ActionRequest request) {
  // handle text intent
  // ...
}
```

### Ending conversations

When you no longer want any user input in return and want to end the conversation,
call the `conv.close()` function.
This function tells Google Assistant to speak the text back to the user and end the conversation by closing the microphone.

Handling the main invocation intent
-----------------------------------

When users trigger the `actions.intent.MAIN` intent, you normally don't
need to do any user input processing. If your action package contains many
actions and covers many use cases, it's a good idea to orient the user by telling
them a few things that they can do.

Call the `conv.ask()` function, passing your response as an argument. Google
Assistant speaks your response to the user and then waits for the user to
trigger one of the intents that you specified.

The following snippet shows how to handle a simple welcome intent:

Actions SDK (Node.js)

```javascript
// handle the initialTrigger
function handleMainIntent(conv, input) {
  conv.ask(input);
}
```

Actions SDK (Java)

```java
private ActionResponse handleMainIntent(ResponseBuilder rb, String input) {
  return rb.add(input).build();
}
```

### Conversation state

If you are using the [conversation HTTP/JSON webhook API](/assistant/df-asdk/reference/webhooks), you can maintain the
state of your conversations with a JSON-formatted object (`conversationToken`)
that is passed back and forth between you and Google Assistant. If you are
using the [Node.js client library](/assistant/df-asdk/reference/nodejsv2/overview), you
can write to and read from the `conv.data` field directly. This field
is passed back and forth automatically between requests and responses.

Actions SDK (Node.js)

```javascript
conv.data = {something: 10};
let value = conv.data.something;
```

Actions SDK (Java)

```java
ResponseBuilder rb = getResponseBuilder(request);
rb.getConversationData().put("something", 10);
Object value = rb.getConversationData().get("something");
```
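To make the state round trip concrete, here is a minimal standalone sketch of how `conv.data` maps onto the `conversationToken` string that the HTTP/JSON webhook API echoes between responses and requests. The `buildToken` and `parseToken` helpers are hypothetical, for illustration only; the client library does this serialization for you.

```javascript
// Hypothetical helper: serialize state into the token sent with a response.
function buildToken(data) {
  return JSON.stringify({data});
}

// Hypothetical helper: restore state from the token on the next request.
function parseToken(token) {
  return JSON.parse(token).data;
}

// Turn 1: store a value, as with `conv.data.something = 10`.
const token = buildToken({something: 10});
console.log(token);  // {"data":{"something":10}}

// Turn 2: the Assistant sends the token back; read the value out.
const restored = parseToken(token);
console.log(restored.something);  // 10
```

An empty state serializes to `{"data":{}}`, which is exactly the `conversationToken` value shown in the JSON examples above.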
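Finally, the `conv.ask()`/`conv.close()` contract described under "Ending conversations" can be sketched without the client library. The mock `conv` below is hypothetical (the real object comes from actions-on-google), but it shows the one behavior that matters: `ask()` keeps the microphone open for another turn, while `close()` ends the conversation.

```javascript
// Hypothetical mock of the `conv` object, for illustration only.
function makeMockConv() {
  return {
    responses: [],
    expectUserResponse: true,
    ask(text) { this.expectUserResponse = true; this.responses.push(text); },
    close(text) { this.expectUserResponse = false; this.responses.push(text); },
  };
}

// A text-intent handler that keeps the mic open until the user says "bye".
function handleTextIntent(conv, input) {
  if (input === 'bye') {
    conv.close('OK, see you later!');  // mic closes; conversation ends
  } else {
    conv.ask(`You said ${input}.`);    // mic stays open for another turn
  }
}

const conv = makeMockConv();
handleTextIntent(conv, 'hello');
console.log(conv.expectUserResponse);  // true
handleTextIntent(conv, 'bye');
console.log(conv.expectUserResponse);  // false
```

In the real webhook payload this flag surfaces as `expectUserResponse`, which is `true` in the JSON examples above and becomes `false` when you end the conversation.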