Provides an interface for performing text rewriting.
It supports both standard and streaming inference, as well as methods for preparing and cleaning up model resources.
Typical usage:
 RewritingRequest request = RewritingRequest.builder(text).build();
 ListenableFuture<RewritingResult> future = rewriter.runInference(request);
 Futures.addCallback(future, new FutureCallback<>() {
     @Override
     public void onSuccess(RewritingResult result) {
         Log.d(TAG, result.getResults(0).getText());
     }
     @Override
     public void onFailure(Throwable t) {
         Log.e(TAG, "Failed to rewrite", t);
     }
 }, executor);
Public Method Summary

| Return type | Method |
|---|---|
| ListenableFuture<Integer> | checkFeatureStatus(): Checks the current availability status of the rewriting feature. |
| void | close(): Releases resources associated with the rewriting engine. |
| ListenableFuture<Void> | downloadFeature(DownloadCallback callback): Downloads the required model assets for the rewriting feature if they are not already available. |
| ListenableFuture<String> | getBaseModelName(): Returns the name of the base model used by this rewriter instance. |
| ListenableFuture<Void> | prepareInferenceEngine(): Prepares the inference engine for use by loading necessary models and initializing runtime components. |
| ListenableFuture<RewritingResult> | runInference(RewritingRequest request): Performs asynchronous rewriting on the provided input request. |
| ListenableFuture<RewritingResult> | runInference(RewritingRequest request, StreamingCallback streamingCallback): Performs streaming rewriting inference on the provided input request. |
Inherited Method Summary
Public Methods
public ListenableFuture<Integer> checkFeatureStatus ()
Checks the current availability status of the rewriting feature.
Returns
- a ListenableFuture resolving to a FeatureStatus indicating feature readiness.
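A common pattern is to gate inference on the reported status. This is a minimal sketch; the FeatureStatus constants used below (AVAILABLE, DOWNLOADABLE) and the helper methods runRewrite() and startDownload() are illustrative assumptions, so consult the FeatureStatus reference for the exact set of values.

```java
// Sketch: decide whether to rewrite immediately or download models first.
// FeatureStatus.AVAILABLE / DOWNLOADABLE are assumed constant names.
Futures.addCallback(rewriter.checkFeatureStatus(), new FutureCallback<Integer>() {
    @Override
    public void onSuccess(Integer status) {
        if (status == FeatureStatus.AVAILABLE) {
            runRewrite();      // models are on-device; safe to infer now
        } else if (status == FeatureStatus.DOWNLOADABLE) {
            startDownload();   // fetch models first via downloadFeature()
        }
    }

    @Override
    public void onFailure(Throwable t) {
        Log.e(TAG, "Feature status check failed", t);
    }
}, executor);
```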
public void close ()
Releases resources associated with the rewriting engine.
This should be called when the rewriter is no longer needed. Can be safely called multiple times.
public ListenableFuture<Void> downloadFeature (DownloadCallback callback)
Downloads the required model assets for the rewriting feature if they are not already available.
Use this method to proactively download models before inference. The provided DownloadCallback is invoked to report progress and completion status.
Parameters
| Parameter | Description |
|---|---|
| callback | a non-null DownloadCallback for receiving download updates. |
Returns
- a ListenableFuture that completes when the download finishes or fails. It completes immediately if the assets are already downloaded.
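The sketch below shows a proactive download. The DownloadCallback method names used here (onDownloadProgress, onDownloadCompleted, onDownloadFailed) and the updateProgressBar() helper are assumptions for illustration; check the DownloadCallback reference for the actual interface.

```java
// Sketch: download model assets ahead of the first inference call.
ListenableFuture<Void> downloadFuture =
    rewriter.downloadFeature(new DownloadCallback() {
        @Override
        public void onDownloadProgress(long totalBytesDownloaded) {
            updateProgressBar(totalBytesDownloaded);  // hypothetical UI helper
        }

        @Override
        public void onDownloadCompleted() {
            Log.d(TAG, "Rewriting models ready");
        }

        @Override
        public void onDownloadFailed(GenAiException e) {
            Log.e(TAG, "Model download failed", e);
        }
    });
```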
public ListenableFuture<String> getBaseModelName ()
Returns the name of the base model used by this rewriter instance.
The model name may be used for logging, debugging, or feature gating purposes.
Returns
- a ListenableFuture resolving to the base model name.
public ListenableFuture<Void> prepareInferenceEngine ()
Prepares the inference engine for use by loading necessary models and initializing runtime components.
If the models haven't been downloaded yet, it will trigger the download and wait for it to complete first.
While calling this method is optional, we recommend invoking it well before the first inference call to reduce the latency of the initial inference.
Returns
- a cancellable ListenableFuture that completes when the engine is ready.
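A typical use is warming up the engine during screen initialization so the first rewrite is fast, and cancelling if the screen is torn down first. This is a sketch; the lifecycle placement is an assumption about your app structure.

```java
// Warm up early, e.g., in onCreate(), well before the first inference.
ListenableFuture<Void> prepareFuture = rewriter.prepareInferenceEngine();

// Later, e.g., in onDestroy(), if preparation is no longer needed:
prepareFuture.cancel(true);
rewriter.close();
```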
public ListenableFuture<RewritingResult> runInference (RewritingRequest request)
Performs asynchronous rewriting on the provided input request.
This is the standard, non-streaming version of inference. The full rewrite suggestions are returned once the model completes processing.
This method is non-blocking. To handle the result, callers should attach a listener to the returned ListenableFuture using Futures.addCallback(ListenableFuture<V>, FutureCallback<? super V>, Executor) or similar methods. The inference runs on background threads and may complete at a later time depending on model availability and input size.
The returned ListenableFuture is cancellable. If the inference is no longer needed (e.g., the user navigates away or the input changes), calling future.cancel(true) will attempt to interrupt the process and free up resources.
Note that inference requests may fail under certain conditions such as:
- Invalid or malformed input in the RewritingRequest
- Exceeded usage quota or failed safety checks
GenAiException handling should be implemented when attaching the callback to the future.
Parameters
| Parameter | Description |
|---|---|
| request | a non-null RewritingRequest containing the input text. |
Returns
- a cancellable ListenableFuture that resolves to a RewritingResult.
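The cancellation behavior described above can be sketched as follows; the pendingRewrite field, onInputChanged() entry point, and resultCallback are hypothetical names for illustration.

```java
// Sketch: cancel a stale in-flight rewrite when the user edits the input.
private ListenableFuture<RewritingResult> pendingRewrite;

void onInputChanged(String newText) {
    if (pendingRewrite != null && !pendingRewrite.isDone()) {
        pendingRewrite.cancel(true);  // interrupt the outdated inference
    }
    pendingRewrite = rewriter.runInference(
        RewritingRequest.builder(newText).build());
    Futures.addCallback(pendingRewrite, resultCallback, executor);
}
```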
public ListenableFuture<RewritingResult> runInference (RewritingRequest request, StreamingCallback streamingCallback)
Performs streaming rewriting inference on the provided input request.
Only one RewritingSuggestion will be streamed out via the StreamingCallback, while there could be multiple RewritingSuggestions in the final RewritingResult. The streamed RewritingSuggestion is not guaranteed to be the first one in the final result list.
Streaming can be interrupted by a GenAiException. In that case, consider removing any already streamed partial output from the UI.
The returned ListenableFuture is cancellable. If the inference is no longer needed (e.g., the user navigates away or the input changes), calling future.cancel(true) will attempt to interrupt the process and free up resources.
Note that inference requests may fail under certain conditions such as:
- Invalid or malformed input in the RewritingRequest
- Exceeded usage quota or failed safety checks
GenAiException handling should be implemented when attaching the callback to the future.
Parameters
| Parameter | Description |
|---|---|
| request | a non-null RewritingRequest containing the input text. |
| streamingCallback | a non-null StreamingCallback for receiving streamed results. |
Returns
- a cancellable ListenableFuture resolving to the final RewritingResult.
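The streaming flow described above can be sketched as follows. It assumes StreamingCallback is a single-method interface receiving partial text chunks (so it can be written as a lambda), and showPartial(), showFinal(), and clearPartial() are hypothetical UI helpers.

```java
// Sketch: append streamed chunks to the UI, then swap in the full result.
StringBuilder streamed = new StringBuilder();
ListenableFuture<RewritingResult> future = rewriter.runInference(
    request,
    partialText -> {
        streamed.append(partialText);
        showPartial(streamed.toString());
    });

Futures.addCallback(future, new FutureCallback<RewritingResult>() {
    @Override
    public void onSuccess(RewritingResult result) {
        // The final result may hold multiple suggestions; the streamed one
        // is not guaranteed to be first in the list.
        showFinal(result);
    }

    @Override
    public void onFailure(Throwable t) {
        // On GenAiException, clear any partial output already shown.
        clearPartial();
        Log.e(TAG, "Streaming rewrite failed", t);
    }
}, executor);
```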
