Detect face mesh info with ML Kit on Android

You can use ML Kit to detect faces in selfie-like images and videos.

Face mesh detection API

SDK name: face-mesh-detection
Implementation: Code and assets are statically linked to your app at build time.
App size impact: ~6.4 MB
Performance: Suitable for most devices.

Try it out

Before you begin

  1. In your project-level build.gradle file, make sure to include Google's Maven repository in both your buildscript and allprojects sections.

  2. Add the dependency for the ML Kit face mesh detection library to your module's app-level gradle file, which is usually app/build.gradle:

    dependencies {
     // ...
    
     implementation 'com.google.mlkit:face-mesh-detection:16.0.0-beta1'
    }
    

Input image guidelines

  1. Images should be captured within about 2 meters (~7 feet) of the device camera, so that the face is large enough for reliable face mesh recognition. In general, the larger the face, the better the face mesh recognition.

  2. The face should be facing the camera with at least half of the face visible. Any large object between the face and the camera may result in lower accuracy.

To detect faces in a real-time application, also consider the overall dimensions of the input images. Smaller images can be processed faster, so capturing images at a lower resolution reduces latency. However, keep the accuracy requirements above in mind and make sure that the subject's face occupies as much of the image as possible.
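
For example, when streaming frames with CameraX, you can request a lower analysis resolution up front. The following is a minimal sketch, not part of the official guide: the 480x360 target matches the minimum resolution recommended later on this page, context is assumed to be an available Context, and YourImageAnalyzer refers to the analyzer shown in the media.Image section below.

Kotlin

// Request a low-resolution analysis stream so each frame can be processed quickly.
// setTargetResolution() is only a hint; CameraX picks the closest supported size.
val imageAnalysis = ImageAnalysis.Builder()
    .setTargetResolution(Size(480, 360))
    .setBackpressureStrategy(ImageAnalysis.STRATEGY_KEEP_ONLY_LATEST)
    .build()
    .also { analysis ->
        analysis.setAnalyzer(ContextCompat.getMainExecutor(context), YourImageAnalyzer())
    }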

Configure the face mesh detector

If you want to change any of the face mesh detector's default settings, specify those settings with a FaceMeshDetectorOptions object. You can change the following settings:

  1. setUseCase

    • BOUNDING_BOX_ONLY: Provides only a bounding box for a detected face mesh. This is the fastest face detector, but it has a range limitation (faces must be within about 2 meters, or 7 feet, of the camera).

    • FACE_MESH (default option): Provides a bounding box and additional face mesh info (468 3D points and triangle info). Compared to the BOUNDING_BOX_ONLY use case, latency increases by about 15%, as measured on a Pixel 3.

For example:

Kotlin

val defaultDetector = FaceMeshDetection.getClient(
  FaceMeshDetectorOptions.DEFAULT_OPTIONS)

val boundingBoxDetector = FaceMeshDetection.getClient(
  FaceMeshDetectorOptions.Builder()
    .setUseCase(UseCase.BOUNDING_BOX_ONLY)
    .build()
)

Java

FaceMeshDetector defaultDetector =
        FaceMeshDetection.getClient(
                FaceMeshDetectorOptions.DEFAULT_OPTIONS);

FaceMeshDetector boundingBoxDetector = FaceMeshDetection.getClient(
        new FaceMeshDetectorOptions.Builder()
                .setUseCase(UseCase.BOUNDING_BOX_ONLY)
                .build()
        );
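
When the detector is no longer needed, it is good practice to release it. The following is a minimal sketch, assuming the detector is held by an Activity; FaceMeshDetector is Closeable, so calling close() frees its underlying resources.

Kotlin

override fun onDestroy() {
    super.onDestroy()
    // Release the ML Kit detector when this screen goes away.
    detector.close()
}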

Prepare the input image

To detect faces in an image, create an InputImage object from a Bitmap, media.Image, ByteBuffer, byte array, or file on the device. Then, pass the InputImage object to the FaceMeshDetector's process method.

For face mesh detection, you should use an image with dimensions of at least 480x360 pixels. If you are detecting faces in real time, capturing frames at this minimum resolution can help reduce latency.

You can create an InputImage object from different sources; each is explained below.

Using a media.Image

To create an InputImage object from a media.Image object, such as when you capture an image from a device's camera, pass the media.Image object and the image's rotation to InputImage.fromMediaImage().

If you use the CameraX library, the OnImageCapturedListener and ImageAnalysis.Analyzer classes calculate the rotation value for you.

Kotlin

private class YourImageAnalyzer : ImageAnalysis.Analyzer {

    override fun analyze(imageProxy: ImageProxy) {
        val mediaImage = imageProxy.image
        if (mediaImage != null) {
            val image = InputImage.fromMediaImage(mediaImage, imageProxy.imageInfo.rotationDegrees)
            // Pass image to an ML Kit Vision API
            // ...
        }
    }
}

Java

private class YourAnalyzer implements ImageAnalysis.Analyzer {

    @Override
    public void analyze(ImageProxy imageProxy) {
        Image mediaImage = imageProxy.getImage();
        if (mediaImage != null) {
          InputImage image =
                InputImage.fromMediaImage(mediaImage, imageProxy.getImageInfo().getRotationDegrees());
          // Pass image to an ML Kit Vision API
          // ...
        }
    }
}
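
In a live camera pipeline, each ImageProxy must be closed before CameraX delivers the next frame. The following is a minimal sketch, not part of the official guide, that wires the analyzer pattern above to the face mesh detector and closes the frame once the detection task completes:

Kotlin

private class FaceMeshAnalyzer(
    private val detector: FaceMeshDetector
) : ImageAnalysis.Analyzer {

    override fun analyze(imageProxy: ImageProxy) {
        val mediaImage = imageProxy.image
        if (mediaImage == null) {
            imageProxy.close()
            return
        }
        val image = InputImage.fromMediaImage(mediaImage, imageProxy.imageInfo.rotationDegrees)
        detector.process(image)
            .addOnSuccessListener { faceMeshes ->
                // Use the detected face meshes, e.g. update an overlay view.
            }
            .addOnFailureListener { e ->
                // Detection failed; log or surface the error.
            }
            .addOnCompleteListener {
                // Closing the frame lets CameraX deliver the next one.
                imageProxy.close()
            }
    }
}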

If you don't use a camera library that gives you the image's rotation degree, you can calculate it from the device's rotation degree and the orientation of the camera sensor in the device:

Kotlin

private val ORIENTATIONS = SparseIntArray()

init {
    ORIENTATIONS.append(Surface.ROTATION_0, 0)
    ORIENTATIONS.append(Surface.ROTATION_90, 90)
    ORIENTATIONS.append(Surface.ROTATION_180, 180)
    ORIENTATIONS.append(Surface.ROTATION_270, 270)
}

/**
 * Get the angle by which an image must be rotated given the device's current
 * orientation.
 */
@RequiresApi(api = Build.VERSION_CODES.LOLLIPOP)
@Throws(CameraAccessException::class)
private fun getRotationCompensation(cameraId: String, activity: Activity, isFrontFacing: Boolean): Int {
    // Get the device's current rotation relative to its "native" orientation.
    // Then, from the ORIENTATIONS table, look up the angle the image must be
    // rotated to compensate for the device's rotation.
    val deviceRotation = activity.windowManager.defaultDisplay.rotation
    var rotationCompensation = ORIENTATIONS.get(deviceRotation)

    // Get the device's sensor orientation.
    val cameraManager = activity.getSystemService(CAMERA_SERVICE) as CameraManager
    val sensorOrientation = cameraManager
            .getCameraCharacteristics(cameraId)
            .get(CameraCharacteristics.SENSOR_ORIENTATION)!!

    if (isFrontFacing) {
        rotationCompensation = (sensorOrientation + rotationCompensation) % 360
    } else { // back-facing
        rotationCompensation = (sensorOrientation - rotationCompensation + 360) % 360
    }
    return rotationCompensation
}

Java

private static final SparseIntArray ORIENTATIONS = new SparseIntArray();
static {
    ORIENTATIONS.append(Surface.ROTATION_0, 0);
    ORIENTATIONS.append(Surface.ROTATION_90, 90);
    ORIENTATIONS.append(Surface.ROTATION_180, 180);
    ORIENTATIONS.append(Surface.ROTATION_270, 270);
}

/**
 * Get the angle by which an image must be rotated given the device's current
 * orientation.
 */
@RequiresApi(api = Build.VERSION_CODES.LOLLIPOP)
private int getRotationCompensation(String cameraId, Activity activity, boolean isFrontFacing)
        throws CameraAccessException {
    // Get the device's current rotation relative to its "native" orientation.
    // Then, from the ORIENTATIONS table, look up the angle the image must be
    // rotated to compensate for the device's rotation.
    int deviceRotation = activity.getWindowManager().getDefaultDisplay().getRotation();
    int rotationCompensation = ORIENTATIONS.get(deviceRotation);

    // Get the device's sensor orientation.
    CameraManager cameraManager = (CameraManager) activity.getSystemService(CAMERA_SERVICE);
    int sensorOrientation = cameraManager
            .getCameraCharacteristics(cameraId)
            .get(CameraCharacteristics.SENSOR_ORIENTATION);

    if (isFrontFacing) {
        rotationCompensation = (sensorOrientation + rotationCompensation) % 360;
    } else { // back-facing
        rotationCompensation = (sensorOrientation - rotationCompensation + 360) % 360;
    }
    return rotationCompensation;
}

Then, pass the media.Image object and the rotation degree value to InputImage.fromMediaImage():

Kotlin

val image = InputImage.fromMediaImage(mediaImage, rotation)

Java

InputImage image = InputImage.fromMediaImage(mediaImage, rotation);

Using a file URI

To create an InputImage object from a file URI, pass the app context and file URI to InputImage.fromFilePath(). This is useful when you use an ACTION_GET_CONTENT intent to prompt the user to select an image from their gallery app.

Kotlin

val image: InputImage
try {
    image = InputImage.fromFilePath(context, uri)
} catch (e: IOException) {
    e.printStackTrace()
}

Java

InputImage image;
try {
    image = InputImage.fromFilePath(context, uri);
} catch (IOException e) {
    e.printStackTrace();
}
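
As a complement to the ACTION_GET_CONTENT mention above, the following is a minimal sketch, not part of the official guide, that uses the AndroidX Activity Result API (whose GetContent contract wraps ACTION_GET_CONTENT). It assumes the code lives inside a ComponentActivity:

Kotlin

// Register the picker before the activity is started (e.g. as a property initializer).
private val pickImage =
    registerForActivityResult(ActivityResultContracts.GetContent()) { uri: Uri? ->
        uri?.let { selectedUri ->
            try {
                val image = InputImage.fromFilePath(this, selectedUri)
                // Pass image to the face mesh detector
            } catch (e: IOException) {
                e.printStackTrace()
            }
        }
    }

// Launch the picker, restricted to images, e.g. from a button click listener:
// pickImage.launch("image/*")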

Using a ByteBuffer or ByteArray

To create an InputImage object from a ByteBuffer or a ByteArray, first calculate the image rotation degree as described above for media.Image input. Then, create the InputImage object with the buffer or array, together with the image's height, width, color encoding format, and rotation degree:

Kotlin

val image = InputImage.fromByteBuffer(
        byteBuffer,
        /* image width */ 480,
        /* image height */ 360,
        rotationDegrees,
        InputImage.IMAGE_FORMAT_NV21 // or IMAGE_FORMAT_YV12
)
// Or:
val image = InputImage.fromByteArray(
        byteArray,
        /* image width */ 480,
        /* image height */ 360,
        rotationDegrees,
        InputImage.IMAGE_FORMAT_NV21 // or IMAGE_FORMAT_YV12
)

Java

InputImage image = InputImage.fromByteBuffer(byteBuffer,
        /* image width */ 480,
        /* image height */ 360,
        rotationDegrees,
        InputImage.IMAGE_FORMAT_NV21 // or IMAGE_FORMAT_YV12
);
// Or:
InputImage image = InputImage.fromByteArray(
        byteArray,
        /* image width */ 480,
        /* image height */ 360,
        rotationDegrees,
        InputImage.IMAGE_FORMAT_NV21 // or IMAGE_FORMAT_YV12
);
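
As an illustration of where an NV21 byte array typically comes from, the following is a minimal sketch, not part of the official guide, using the legacy android.hardware.Camera preview callback (which delivers NV21 frames by default). rotationDegrees is assumed to be computed as shown earlier, for example with getRotationCompensation():

Kotlin

val previewCallback = Camera.PreviewCallback { data, camera ->
    // The legacy camera API delivers preview frames as NV21 byte arrays by default.
    val previewSize = camera.parameters.previewSize
    val image = InputImage.fromByteArray(
        data,
        previewSize.width,
        previewSize.height,
        rotationDegrees,
        InputImage.IMAGE_FORMAT_NV21
    )
    // Pass image to the face mesh detector
}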

Using a Bitmap

To create an InputImage object from a Bitmap object, make the following declaration:

Kotlin

val image = InputImage.fromBitmap(bitmap, 0)

Java

InputImage image = InputImage.fromBitmap(bitmap, rotationDegree);

The image is represented by a Bitmap object together with its rotation in degrees.
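
For example, the following minimal sketch decodes a bundled test image (the drawable name is a placeholder) and wraps it with no additional rotation:

Kotlin

// R.drawable.selfie is a hypothetical resource; substitute any Bitmap source.
val bitmap = BitmapFactory.decodeResource(resources, R.drawable.selfie)
val image = InputImage.fromBitmap(bitmap, /* rotationDegrees = */ 0)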

Process the image

Pass the image to the process method:

Kotlin

val result = detector.process(image)
        .addOnSuccessListener { result ->
            // Task completed successfully
            // …
        }
        .addOnFailureListener { e ->
            // Task failed with an exception
            // …
        }

Java

Task<List<FaceMesh>> result = detector.process(image)
        .addOnSuccessListener(
                new OnSuccessListener<List<FaceMesh>>() {
                    @Override
                    public void onSuccess(List<FaceMesh> result) {
                        // Task completed successfully
                        // …
                    }
                })
        .addOnFailureListener(
                new OnFailureListener() {
                    @Override
                    public void onFailure(Exception e) {
                        // Task failed with an exception
                        // …
                    }
                });

Get information about the detected face mesh

If any faces are detected in the image, a list of FaceMesh objects is passed to the success listener. Each FaceMesh represents a face that was detected in the image. For each face mesh, you can get its bounding coordinates in the input image, as well as any other information you configured the face mesh detector to find.

Kotlin

for (faceMesh in faceMeshs) {
    val bounds: Rect = faceMesh.boundingBox

    // Gets all points
    val faceMeshpoints = faceMesh.allPoints
    for (faceMeshpoint in faceMeshpoints) {
      val index: Int = faceMeshpoint.index
      val position = faceMeshpoint.position
    }

    // Gets triangle info
    val triangles: List<Triangle<FaceMeshPoint>> = faceMesh.allTriangles
    for (triangle in triangles) {
      // 3 Points connecting to each other and representing a triangle area.
      val connectedPoints = triangle.allPoints()
    }
}

Java

for (FaceMesh faceMesh : faceMeshs) {
    Rect bounds = faceMesh.getBoundingBox();

    // Gets all points
    List<FaceMeshPoint> faceMeshpoints = faceMesh.getAllPoints();
    for (FaceMeshPoint faceMeshpoint : faceMeshpoints) {
        int index = faceMeshpoint.getIndex();
        PointF3D position = faceMeshpoint.getPosition();
    }

    // Gets triangle info
    List<Triangle<FaceMeshPoint>> triangles = faceMesh.getAllTriangles();
    for (Triangle<FaceMeshPoint> triangle : triangles) {
        // 3 Points connecting to each other and representing a triangle area.
        List<FaceMeshPoint> connectedPoints = triangle.getAllPoints();
    }
}
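
To make the returned data concrete, the following is a minimal sketch, not part of the official guide, that draws every detected bounding box and mesh point onto a mutable copy of the input bitmap. Each FaceMeshPoint position is given in input-image pixel coordinates (x, y), with z as relative depth:

Kotlin

fun drawFaceMeshes(bitmap: Bitmap, faceMeshes: List<FaceMesh>): Bitmap {
    // Work on a mutable copy so the original input stays untouched.
    val output = bitmap.copy(Bitmap.Config.ARGB_8888, true)
    val canvas = Canvas(output)
    val boxPaint = Paint().apply {
        style = Paint.Style.STROKE
        strokeWidth = 4f
        color = Color.GREEN
    }
    val pointPaint = Paint().apply { color = Color.RED }

    for (faceMesh in faceMeshes) {
        canvas.drawRect(faceMesh.boundingBox, boxPaint)
        for (point in faceMesh.allPoints) {
            // x and y are pixel coordinates in the input image.
            canvas.drawCircle(point.position.x, point.position.y, 2f, pointPaint)
        }
    }
    return output
}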