Provides an interface for performing text proofreading.
It supports both standard and streaming inference, as well as methods for preparing and cleaning up model resources.
Typical usage:
 ProofreadingRequest request = ProofreadingRequest.builder(text).build();
 ListenableFuture<ProofreadingResult> future = proofreader.runInference(request);
 Futures.addCallback(future, new FutureCallback<ProofreadingResult>() {
     @Override
     public void onSuccess(ProofreadingResult result) {
         Log.d(TAG, result.getResults(0).getText());
     }
     @Override
     public void onFailure(Throwable t) {
         Log.e(TAG, "Failed to proofread", t);
     }
 }, executor);
Public Method Summary
| Return type | Method and description |
|---|---|
| ListenableFuture<Integer> | checkFeatureStatus(): Checks the current availability status of the proofreading feature. |
| void | close(): Releases resources associated with the proofreading engine. |
| ListenableFuture<Void> | downloadFeature(DownloadCallback callback): Downloads the required model assets for the proofreading feature if they are not already available. |
| ListenableFuture<String> | getBaseModelName(): Returns the name of the base model used by this proofreader instance. |
| ListenableFuture<Void> | prepareInferenceEngine(): Prepares the inference engine for use by loading necessary models and initializing runtime components. |
| ListenableFuture<ProofreadingResult> | runInference(ProofreadingRequest request): Performs asynchronous proofreading on the provided input request. |
| ListenableFuture<ProofreadingResult> | runInference(ProofreadingRequest request, StreamingCallback streamingCallback): Performs streaming proofreading inference on the provided input request. |
Public Methods
public ListenableFuture<Integer> checkFeatureStatus ()
Checks the current availability status of the proofreading feature.
Returns
- a ListenableFuture resolving to a FeatureStatus indicating feature readiness.
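A typical pattern is to branch on the resolved status before running inference. The sketch below assumes FeatureStatus exposes integer constants such as AVAILABLE and DOWNLOADABLE; consult the FeatureStatus reference for the actual values.

```java
// Sketch: gate inference on feature availability (FeatureStatus constants
// AVAILABLE and DOWNLOADABLE are assumptions here).
ListenableFuture<Integer> statusFuture = proofreader.checkFeatureStatus();
Futures.addCallback(statusFuture, new FutureCallback<Integer>() {
    @Override
    public void onSuccess(Integer status) {
        if (status == FeatureStatus.AVAILABLE) {
            // Safe to call runInference() directly.
        } else if (status == FeatureStatus.DOWNLOADABLE) {
            // Call downloadFeature() before the first inference.
        }
    }
    @Override
    public void onFailure(Throwable t) {
        Log.e(TAG, "Status check failed", t);
    }
}, executor);
```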
public void close ()
Releases resources associated with the proofreading engine.
This should be called when the proofreader is no longer needed. Can be safely called multiple times.
public ListenableFuture<Void> downloadFeature (DownloadCallback callback)
Downloads the required model assets for the proofreading feature if they are not already available.
Use this method to proactively download models before inference. The provided DownloadCallback is invoked to report progress and completion status.
Parameters
| callback | a non-null DownloadCallback for receiving download updates. |
|---|
Returns
- a ListenableFuture that completes when the download finishes or fails; it completes immediately if the feature is already downloaded.
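A proactive download might look like the sketch below. The DownloadCallback method names shown (onDownloadStarted, onDownloadProgress, onDownloadCompleted, onDownloadFailed) are assumptions; consult the DownloadCallback reference for the actual interface.

```java
// Sketch: download model assets before the first inference.
// The callback method names below are assumptions.
ListenableFuture<Void> download = proofreader.downloadFeature(new DownloadCallback() {
    @Override
    public void onDownloadStarted(long bytesToDownload) {
        Log.d(TAG, "Download started: " + bytesToDownload + " bytes");
    }
    @Override
    public void onDownloadProgress(long totalBytesDownloaded) {
        Log.d(TAG, "Downloaded so far: " + totalBytesDownloaded);
    }
    @Override
    public void onDownloadCompleted() {
        Log.d(TAG, "Download complete");
    }
    @Override
    public void onDownloadFailed(GenAiException e) {
        Log.e(TAG, "Download failed", e);
    }
});
```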
public ListenableFuture<String> getBaseModelName ()
Returns the name of the base model used by this proofreader instance.
The model name may be used for logging, debugging, or feature gating purposes.
Returns
- a ListenableFuture resolving to the base model name.
public ListenableFuture<Void> prepareInferenceEngine ()
Prepares the inference engine for use by loading necessary models and initializing runtime components.
If the models haven't been downloaded yet, it will trigger the download and wait for it to complete first.
While calling this method is optional, we recommend invoking it well before the first inference call to reduce the latency of the initial inference.
Returns
- a cancellable ListenableFuture that completes when the engine is ready.
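Warming up the engine early, for example during screen setup, can hide model-loading latency from the first inference. A minimal sketch:

```java
// Sketch: warm up the inference engine well before the first
// runInference() call to reduce initial-inference latency.
ListenableFuture<Void> ready = proofreader.prepareInferenceEngine();
Futures.addCallback(ready, new FutureCallback<Void>() {
    @Override
    public void onSuccess(Void unused) {
        Log.d(TAG, "Proofreading engine ready");
    }
    @Override
    public void onFailure(Throwable t) {
        // Preparation may fail if the model download fails.
        Log.e(TAG, "Engine preparation failed", t);
    }
}, executor);
```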
public ListenableFuture<ProofreadingResult> runInference (ProofreadingRequest request)
Performs asynchronous proofreading on the provided input request.
This is the standard, non-streaming version of inference. The full proofreading suggestions are returned once the model completes processing.
This method is non-blocking. To handle the result, callers should attach a listener to the returned ListenableFuture using Futures.addCallback(ListenableFuture<V>, FutureCallback<? super V>, Executor) or similar methods. The inference runs on background threads and may complete at a later time depending on model availability and input size.
The returned ListenableFuture is cancellable. If the inference is no longer needed (e.g., the user navigates away or the input changes), calling future.cancel(true) will attempt to interrupt the process and free up resources.
Note that inference requests may fail under certain conditions such as:
- Invalid or malformed input in the ProofreadingRequest
- Exceeded usage quota or failed safety checks
GenAiException handling should be implemented when attaching the callback to the future.
Parameters
| request | a non-null ProofreadingRequest containing input text. |
|---|
Returns
- a cancellable ListenableFuture that resolves to a ProofreadingResult.
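The cancellation pattern described above can be sketched as follows for a caller that tracks its latest request. The pendingFuture field is hypothetical, introduced only for illustration.

```java
// Sketch: cancel a stale in-flight inference when the input changes.
// `pendingFuture` is a hypothetical field held by the caller.
if (pendingFuture != null && !pendingFuture.isDone()) {
    // cancel(true) attempts to interrupt the inference and free resources.
    pendingFuture.cancel(true);
}
pendingFuture = proofreader.runInference(request);
```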
public ListenableFuture<ProofreadingResult> runInference (ProofreadingRequest request, StreamingCallback streamingCallback)
Performs streaming proofreading inference on the provided input request.
Only one ProofreadingSuggestion will be streamed out via the StreamingCallback, while the final ProofreadingResult may contain multiple ProofreadingSuggestions. The streamed ProofreadingSuggestion is not guaranteed to be the first one in the final result list.
Streaming can be interrupted by a GenAiException. In that case, consider removing any already streamed partial output from the UI.
The returned ListenableFuture is cancellable. If the inference is no longer needed (e.g., the user navigates away or the input changes), calling future.cancel(true) will attempt to interrupt the process and free up resources.
Note that inference requests may fail under certain conditions such as:
- Invalid or malformed input in the ProofreadingRequest
- Exceeded usage quota or failed safety checks
GenAiException handling should be implemented when attaching the callback to the future.
Parameters
| request | a non-null ProofreadingRequest containing the input text. |
|---|---|
| streamingCallback | a non-null StreamingCallback for receiving streamed results. |
Returns
- a cancellable ListenableFuture resolving to the final ProofreadingResult.
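A streaming call might be wired up as sketched below. This assumes StreamingCallback is a single-method interface delivering partial text (so a lambda is usable); the payload type is an assumption, and suggestionView is a hypothetical UI element.

```java
// Sketch of streaming inference. The lambda assumes StreamingCallback
// is a single-method interface receiving partial text (an assumption);
// `suggestionView` is a hypothetical UI element.
ListenableFuture<ProofreadingResult> future =
        proofreader.runInference(request, partialText -> {
            // Render the streamed suggestion as it arrives.
            suggestionView.setText(partialText);
        });
Futures.addCallback(future, new FutureCallback<ProofreadingResult>() {
    @Override
    public void onSuccess(ProofreadingResult result) {
        // The final result may contain multiple suggestions; the streamed
        // one is not guaranteed to be first in the list.
    }
    @Override
    public void onFailure(Throwable t) {
        // Streaming was interrupted (e.g., by a GenAiException):
        // remove any already streamed partial output from the UI.
        suggestionView.setText("");
    }
}, executor);
```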
