[null,null,["最后更新时间 (UTC):2025-07-26。"],[[["\u003cp\u003eReturns an \u003ccode\u003eee.Model\u003c/code\u003e representing a Vertex AI model endpoint for making predictions within Earth Engine.\u003c/p\u003e\n"],["\u003cp\u003eEnables integration with pre-trained or custom Vertex AI models for Earth Engine analysis.\u003c/p\u003e\n"],["\u003cp\u003eAccepts various parameters for configuring model inputs, outputs, and prediction behavior.\u003c/p\u003e\n"],["\u003cp\u003eSupports customization of data types, projections, tile sizes, and payload formats for model interaction.\u003c/p\u003e\n"],["\u003cp\u003eThis functionality is currently in public preview and subject to potential changes.\u003c/p\u003e\n"]]],["This document describes the `ee.Model.fromVertexAi` method, which retrieves a model from a Vertex AI endpoint. Key actions include specifying the `endpoint`, input properties, and output properties/bands. Users can define `inputShapes`, `inputTileSize`, and `inputOverlapSize` for image predictions. Control over data handling involves `inputTypeOverride`, `proj`, `fixInputProj`, `outputTileSize`, `outputMultiplier`, `maxPayloadBytes`, and `payloadFormat`. The method returns a Model, with detailed argument specifications provided.\n"],null,["# ee.Model.fromVertexAi\n\nReturns an ee.Model from a description of a Vertex AI model endpoint. (See https://cloud.google.com/vertex-ai).\n\n\u003cbr /\u003e\n\n| **Warning:** This method is in public preview and may undergo breaking changes.\n\n| Usage | Returns |\n|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|---------|\n| `ee.Model.fromVertexAi(endpoint, `*inputProperties* `, `*inputTypeOverride* `, `*inputShapes* `, `*proj* `, `*fixInputProj* `, `*inputTileSize* `, `*inputOverlapSize* `, `*outputTileSize* `, `*outputBands* `, `*outputProperties* `, `*outputMultiplier* `, `*maxPayloadBytes* `, `*payloadFormat*`)` | Model |\n\n| Argument | Type | Details |\n|---------------------|---------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|\n| `endpoint` | String | The endpoint name for predictions. |\n| `inputProperties` | List, default: null | Properties passed with each prediction instance. Image predictions are tiled, so these properties will be replicated into each image tile instance. Defaults to no properties. |\n| `inputTypeOverride` | Dictionary, default: null | Types to which model inputs will be coerced if specified. Both Image bands and Image/Feature properties are valid. |\n| `inputShapes` | Dictionary, default: null | The fixed shape of input array bands. For each array band not specified, the fixed array shape will be automatically deduced from a non-masked pixel. |\n| `proj` | Projection, default: null | The input projection at which to sample all bands. Defaults to the default projection of an image's first band. 
|\n| `fixInputProj` | Boolean, default: null | If true, pixels will be sampled in a fixed projection specified by 'proj'. The output projection is used otherwise. Defaults to false. |\n| `inputTileSize` | List, default: null | Rectangular dimensions of pixel tiles passed in to prediction instances. Required for image predictions. |\n| `inputOverlapSize` | List, default: null | Amount of adjacent-tile overlap in X/Y along each edge of pixel tiles passed in to prediction instances. Defaults to \\[0, 0\\]. |\n| `outputTileSize` | List, default: null | Rectangular dimensions of pixel tiles returned from AI Platform. Defaults to the value in 'inputTileSize'. |\n| `outputBands` | Dictionary, default: null | A map from output band names to a dictionary of output band info. Valid band info fields are 'type' and 'dimensions'. 'type' should be a ee.PixelType describing the output band, and 'dimensions' is an optional integer with the number of dimensions in that band e.g., \"outputBands: {'p': {'type': ee.PixelType.int8(), 'dimensions': 1}}\". Required for image predictions. |\n| `outputProperties` | Dictionary, default: null | A map from output property names to a dictionary of output property info. Valid property info fields are 'type' and 'dimensions'. 'type' should be a ee.PixelType describing the output property, and 'dimensions' is an optional integer with the number of dimensions for that property if it is an array e.g., \"outputBands: {'p': {'type': ee.PixelType.int8(), 'dimensions': 1}}\". Required for predictions from FeatureCollections. |\n| `outputMultiplier` | Float, default: null | An approximation to the increase in data volume for the model outputs over the model inputs. If specified this must be \\\u003e= 1. This is only needed if the model produces more data than it consumes, e.g., a model that takes 5 bands and produces 10 outputs per pixel. |\n| `maxPayloadBytes` | Long, default: null | The prediction payload size limit in bytes. Defaults to 1.5MB (1500000 bytes.) |\n| `payloadFormat` | String, default: null | The payload format of entries in prediction requests and responses. One of: \\['SERIALIZED_TF_TENSORS, 'RAW_JSON', 'ND_ARRAYS', 'GRPC_TF_TENSORS', 'GRPC_SERIALIZED_TF_TENSORS', 'GRPC_TF_EXAMPLES'\\]. Defaults to 'SERIALIZED_TF_TENSORS'. |"]]
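
As a rough illustration of how these parameters fit together, the following Code Editor (JavaScript) sketch connects to a hypothetical Vertex AI endpoint and runs an image prediction. The project, endpoint ID, input bands, tile sizes, and output band name are placeholder assumptions, not values from this page; adapt them to your own deployed model.

```js
// A minimal sketch, assuming a hypothetical Vertex AI endpoint that takes
// three image bands per pixel and returns a single float output band.
var model = ee.Model.fromVertexAi({
  // Hypothetical endpoint name; use your own project, region, and endpoint ID.
  endpoint: 'projects/my-project/locations/us-central1/endpoints/1234567890',
  inputTileSize: [8, 8],        // required for image predictions
  inputOverlapSize: [1, 1],     // pixels shared with adjacent tiles on each edge
  proj: ee.Projection('EPSG:4326').atScale(30),
  fixInputProj: true,           // sample inputs in the projection above
  outputBands: {
    // 'dimensions' is omitted here because the output is a scalar band.
    probability: {type: ee.PixelType.float()}
  },
  payloadFormat: 'SERIALIZED_TF_TENSORS'
});

// Select an input image whose bands match what the model expects.
var inputImage = ee.ImageCollection('COPERNICUS/S2_SR_HARMONIZED')
    .filterDate('2023-06-01', '2023-06-30')
    .first()
    .select(['B4', 'B3', 'B2']);

// Run tiled image prediction and display the result.
var predictions = model.predictImage(inputImage);
Map.addLayer(predictions, {min: 0, max: 1}, 'predictions');
```

For table-style models, the analogous call is `model.predictProperties()` on a FeatureCollection, in which case `outputProperties` (rather than `outputBands` and `inputTileSize`) describes the model outputs.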