GenerateContentRequest

class GenerateContentRequest


A request to generate content.

Summary

Nested types

GenerateContentRequest.Builder

Public properties

candidateCount: Int
The max unique responses to return.

image: ImagePart?
The image to be used for generation.

maxOutputTokens: Int
The maximum number of tokens that can be generated in the response.

promptPrefix: PromptPrefix?
An experimental optional field for the prefix of the prompt.

seed: Int
The seed for the random number generator.

temperature: Float
The degree of randomness in token selection.

text: TextPart
The text prompt to be used for generation.

topK: Int
How many tokens to select from among the highest probabilities.

Public companion functions

builder

fun builder(image: ImagePart, text: TextPart): GenerateContentRequest.Builder
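A minimal usage sketch of the builder pattern above. The stub types below stand in for the real SDK classes so the snippet is self-contained; the fluent setter names (setCandidateCount, setTemperature, and so on) are assumptions modeled on common Kotlin builder conventions, not confirmed API. Only the documented companion signature and the documented defaults and ranges are taken from this page.

```kotlin
// Stub types standing in for the real SDK classes so this sketch is
// self-contained; the actual TextPart/ImagePart ship with the SDK.
data class TextPart(val text: String)
class ImagePart(val bytes: ByteArray)

class GenerateContentRequest private constructor(
    val image: ImagePart?,
    val text: TextPart,
    val candidateCount: Int,
    val maxOutputTokens: Int,
    val temperature: Float,
    val topK: Int,
    val seed: Int,
) {
    class Builder(private val image: ImagePart?, private val text: TextPart) {
        // Documented defaults.
        private var candidateCount = 1
        private var maxOutputTokens = 256
        private var temperature = 0.0f
        private var topK = 3
        private var seed = 0

        // Hypothetical fluent setters; names are assumed, not confirmed API.
        // The require() calls enforce the documented ranges.
        fun setCandidateCount(n: Int) = apply { require(n in 1..8); candidateCount = n }
        fun setMaxOutputTokens(n: Int) = apply { require(n in 1..256); maxOutputTokens = n }
        fun setTemperature(t: Float) = apply { require(t in 0.0f..1.0f); temperature = t }
        fun setTopK(k: Int) = apply { require(k >= 1); topK = k }
        fun setSeed(s: Int) = apply { require(s >= 0); seed = s }

        fun build() = GenerateContentRequest(
            image, text, candidateCount, maxOutputTokens, temperature, topK, seed
        )
    }

    companion object {
        // Mirrors the documented companion function signature.
        fun builder(image: ImagePart, text: TextPart): Builder = Builder(image, text)
    }
}
```

Any property not set on the builder keeps its documented default, so a request built without calling setTopK still samples from the top 3 tokens.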

Public properties

candidateCount

val candidateCount: Int

The max unique responses to return. The allowed range is [1, 8]. Identical candidates are deduplicated before returning, so requesting N candidates may yield anywhere from 1 to N unique results. When setting this to greater than one, it is common to also set a higher temperature; otherwise, the multiple candidates can be very similar or even identical. The default value is 1.

image

val image: ImagePart?

The image to be used for generation. The image is optional; if provided, it is added to the beginning of the prompt, followed by the text content.

maxOutputTokens

val maxOutputTokens: Int

The maximum number of tokens that can be generated in the response. The allowed range is [1, 256]. Specify a lower value for shorter responses and a higher value for potentially longer responses. When a candidate's generation is stopped because it reached this limit, its FinishReason will be MAX_TOKENS. Different candidates in a single result can have different FinishReasons. Setting a lower value can be helpful if you want to make inference time-bound, for example to unblock the UI: if you request 4 candidates and 3 finish quickly but the last one is taking very long, a lower max output token limit stops the long-running candidate and returns the three completed ones. The default value is 256.

promptPrefix

val promptPrefix: PromptPrefix?

An experimental, optional field for the prefix of the prompt. It can be used to provide a prefix shared across multiple generation requests. When promptPrefix is set, the system can cache its processing of this prefix on supported devices, potentially reducing inference time. The promptPrefix is prepended to the text to form the full prompt. promptPrefix is not supported for image input and should not be set if an image is provided.
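The caching benefit of a shared prefix can be illustrated with a generic memoization sketch. This is not the SDK's on-device cache (which is internal); PrefixCache and its process callback are hypothetical names introduced only for this example.

```kotlin
// Illustrative prefix cache: process a shared prompt prefix once and reuse
// the result across requests, so only each request's own text is reprocessed.
class PrefixCache(private val process: (String) -> List<Int>) {
    private val cache = HashMap<String, List<Int>>()

    fun tokensFor(prefix: String, text: String): List<Int> {
        // The expensive prefix work runs at most once per distinct prefix.
        val prefixTokens = cache.getOrPut(prefix) { process(prefix) }
        // Only the request-specific suffix is processed every time.
        return prefixTokens + process(text)
    }
}
```

With a prefix shared across many requests, the prefix cost is paid once rather than per request, which is the same intuition behind the promptPrefix field.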

seed

val seed: Int

The seed for the random number generator. The allowed range is any non-negative integer. Passing a fixed positive seed is useful for getting stable, deterministic results for the same input across runs. The default value is 0, which has the special meaning of using a different seed each time.
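The "0 means a fresh seed each time" convention can be modeled as follows. This is illustrative only: the SDK's actual seeding is internal, and rngFor is a hypothetical helper, not part of the API.

```kotlin
import kotlin.random.Random

// Hypothetical helper modeling the documented seed semantics:
// a positive seed is deterministic, 0 picks a different seed each run.
fun rngFor(seed: Int): Random {
    require(seed >= 0) { "seed must be non-negative" }
    return if (seed == 0) Random(System.nanoTime()) else Random(seed.toLong())
}
```

A fixed positive seed yields the same draws across runs, while seed 0 varies from run to run.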

temperature

val temperature: Float

The degree of randomness in token selection. The allowed range is [0.0f, 1.0f]. Lower temperatures are good for prompts that require a less open-ended or creative response, while higher temperatures can lead to more diverse or creative results. A temperature of 0.0f means that the highest-probability token is always selected. The default value is 0.0f.

text

val text: TextPart

The text prompt to be used for generation.

topK

val topK: Int

How many tokens to select from among the highest probabilities. A smaller value makes the output less random, while a larger value allows more diversity. The theoretical range for topK is from 1 to the size of the model's vocabulary. The default value is 3. A topK of 3 means that the next token is sampled, using temperature, from among the three most probable tokens.
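The interplay of topK and temperature described above can be sketched with a generic top-K temperature sampler. This is a standard illustration of the technique, not the SDK's actual implementation; the token scores here are arbitrary example values.

```kotlin
import kotlin.math.exp
import kotlin.random.Random

// Generic top-K temperature sampling: keep the K highest-scoring tokens,
// rescale their scores with temperature, then sample one of them.
fun sampleTopK(
    logits: Map<String, Double>,  // token -> raw score
    topK: Int,
    temperature: Double,
    rng: Random,
): String {
    // 1. Keep only the topK highest-scoring tokens; nothing outside this
    //    set can ever be emitted.
    val kept = logits.entries.sortedByDescending { it.value }.take(topK)
    // 2. Temperature 0 degenerates to greedy selection of the best token.
    if (temperature == 0.0) return kept.first().key
    // 3. Softmax over the kept tokens with scores divided by temperature;
    //    lower temperature sharpens the distribution, higher flattens it.
    val weights = kept.map { exp(it.value / temperature) }
    var r = rng.nextDouble() * weights.sum()
    for ((entry, w) in kept.zip(weights)) {
        r -= w
        if (r <= 0.0) return entry.key
    }
    return kept.last().key
}
```

With topK = 3, only the three most probable tokens can ever be emitted; temperature then controls how evenly the choice spreads across those three.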