TensorFlow is an open source ML platform that supports advanced ML methods such as deep learning. This page describes TensorFlow-specific features in Earth Engine. Although TensorFlow models are developed and trained outside Earth Engine, the Earth Engine API provides methods for exporting training and testing data in TFRecord format and importing/exporting imagery in TFRecord format. See the TensorFlow examples page for more information about how to develop pipelines for using TensorFlow with data from Earth Engine. See the TFRecord page to learn more about how Earth Engine writes data to TFRecord files.
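As a minimal sketch of the export side of such a pipeline, the snippet below (Code Editor JavaScript) samples an image at random points and exports the resulting table in TFRecord format. The image asset, bucket name, and sampling parameters are placeholders, not values from this page.

```javascript
// Hypothetical sketch: exporting training data in TFRecord format.
// The image, bucket, and parameters below are placeholders.
var image = ee.Image('USGS/SRTMGL1_003');
var points = ee.FeatureCollection.randomPoints(image.geometry(), 100);

// Sample the image at the points to build a training table.
var training = image.sampleRegions({
  collection: points,
  scale: 30
});

// Export the table as TFRecord for use in a TensorFlow pipeline.
Export.table.toCloudStorage({
  collection: training,
  description: 'training_export',
  bucket: 'my-bucket',            // placeholder bucket name
  fileNamePrefix: 'training_data',
  fileFormat: 'TFRecord'
});
```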
ee.Model

The ee.Model package handles interaction with TensorFlow-backed machine learning models.
Interacting with models hosted on AI Platform
A new ee.Model instance can be created with ee.Model.fromAiPlatformPredictor(). This is an ee.Model object that packages Earth Engine data into tensors, forwards them as predict requests to Google AI Platform, then automatically reassembles the responses into Earth Engine data types. Note that depending on the size and complexity of your model and its inputs, you may wish to adjust the minimum number of nodes of your AI Platform model to accommodate a high volume of predictions.
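A minimal sketch of connecting to a hosted model from the Code Editor follows. The project, model, and version names, the tile and overlap sizes, the projection, and the output band specification are all placeholders; substitute the values your model was trained with.

```javascript
// Hypothetical sketch: connecting to a model hosted on AI Platform.
// All names and parameter values below are placeholders.
var model = ee.Model.fromAiPlatformPredictor({
  projectName: 'my-project',
  modelName: 'my-model',
  version: 'v1',
  // Shape of the tiles Earth Engine sends to the model, in pixels.
  inputTileSize: [144, 144],
  // Extra pixels of overlap between adjacent tiles.
  inputOverlapSize: [8, 8],
  // Projection and scale at which to sample the inputs.
  proj: ee.Projection('EPSG:4326').atScale(30),
  // Use the model's fixed input projection (see Image Predictions below).
  fixInputProj: true,
  // Name and pixel type of each band the model returns.
  outputBands: {'impervious': {'type': ee.PixelType.float()}}
});
```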
    
Earth Engine requires AI Platform models to use TensorFlow's SavedModel format. Before a hosted model can interact with Earth Engine, its inputs and outputs need to be compatible with the TensorProto interchange format, specifically serialized TensorProtos in base64. To make this easier, the Earth Engine CLI has the model prepare command, which wraps an existing SavedModel in the required operations to convert input/output formats.
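A sketch of such a CLI invocation is below. The bucket paths and the tensor/band names in the input and output mappings are placeholders; check `earthengine model prepare --help` for the exact flags your CLI version accepts.

```shell
# Hypothetical sketch: wrapping an existing SavedModel so that its inputs
# and outputs use base64-encoded serialized TensorProtos.
# Paths and node names are placeholders.
earthengine model prepare \
  --source_dir gs://my-bucket/savedmodel \
  --dest_dir gs://my-bucket/eeified_model \
  --input '{"array": "serving_default_input:0"}' \
  --output '{"output:0": "impervious"}'
```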
To use a model with ee.Model.fromAiPlatformPredictor(), you must have sufficient permissions to use the model. Specifically, you (or anyone who uses the model) need at least the ML Engine Model User role. You can inspect and set model permissions from the models page of the Cloud Console.
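Permissions can also be granted from the command line. The sketch below assumes the gcloud CLI; the model name, region, and member are placeholders.

```shell
# Hypothetical sketch: granting the ML Engine Model User role on a
# hosted model with gcloud. Model, region, and member are placeholders.
gcloud ai-platform models add-iam-policy-binding my-model \
  --region=us-central1 \
  --member='user:someone@example.com' \
  --role='roles/ml.modelUser'
```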
    
Regions
You should use regional endpoints for your models, specifying the region at model creation, version creation, and in ee.Model.fromAiPlatformPredictor(). Any region will work (don't use global), but us-central1 is preferred. Don't specify the REGIONS parameter. If you are creating a model from the Cloud Console, ensure the regional box is checked.
Costs
Image Predictions
Use model.predictImage() to make predictions on an ee.Image using a hosted model. The return type of predictImage() is an ee.Image which can be added to the map, used in other computations, exported, etc. Earth Engine will automatically tile the input bands and adjust the output projection for scale changes and overtiling as needed. (See the TFRecord doc for more information on how tiling works.) Note that Earth Engine will always forward 3D tensors to your model, even when bands are scalar (the last dimension will be 1).

Nearly all convolutional models will have a fixed input projection (that of the data on which the model was trained). In this case, set the fixInputProj parameter to true in your call to ee.Model.fromAiPlatformPredictor().

When visualizing predictions, use caution when zooming out on a model that has a fixed input projection, for the same reason as described here: zooming to a large spatial scope can result in requests for too much data and may manifest as slowdowns or rejections by AI Platform.
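Putting the pieces together, a minimal prediction sketch might look like the following (Code Editor JavaScript). The model configuration, the Landsat scene, and the band list are placeholders standing in for whatever your model was actually trained on.

```javascript
// Hypothetical sketch: image prediction with a hosted model.
// All names, shapes, and bands below are placeholders.
var model = ee.Model.fromAiPlatformPredictor({
  projectName: 'my-project',
  modelName: 'my-model',
  version: 'v1',
  inputTileSize: [144, 144],
  inputOverlapSize: [8, 8],
  proj: ee.Projection('EPSG:4326').atScale(30),
  fixInputProj: true,
  outputBands: {'impervious': {'type': ee.PixelType.float()}}
});

// Select the bands the model expects (placeholders).
var input = ee.Image('LANDSAT/LC08/C02/T1_TOA/LC08_044034_20170614')
    .select(['B2', 'B3', 'B4', 'B5', 'B6', 'B7']);

// predictImage returns an ee.Image; Earth Engine handles tiling and
// output projection adjustments automatically.
var predictions = model.predictImage(input.float());

// The result behaves like any other ee.Image.
Map.addLayer(predictions, {min: 0, max: 1}, 'predictions');
```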