You can use ML Kit to label objects recognized in an image. The default model provided with
ML Kit supports 400+ different labels.
Try it out
Play around with the sample app to see an example usage of this API.
Before you begin
Include the following ML Kit pod in your Podfile:
pod 'GoogleMLKit/ImageLabeling', '8.0.0'
After you install or update your project's Pods, open your Xcode project using its
.xcworkspace. ML Kit is supported in Xcode version 12.4 or greater.
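For reference, here is a minimal Podfile sketch; the target name YourApp and the platform version are placeholders you should replace with your own project's values:

# Podfile (sketch) — adjust the platform version and target name to your project.
platform :ios, '15.5'

target 'YourApp' do
  use_frameworks!
  pod 'GoogleMLKit/ImageLabeling', '8.0.0'
end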
Now you are ready to label images.
1. Prepare the input image
Create a VisionImage object using a UIImage or a
CMSampleBuffer.
If you use a UIImage, follow these steps:
Create a VisionImage object with the UIImage. Make sure to specify the correct .orientation.
Swift
// Assuming `image` is the UIImage you want to label.
let visionImage = VisionImage(image: image)
visionImage.orientation = image.imageOrientation
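If you use a CMSampleBuffer instead, specify the orientation of the image data contained in the buffer and create the VisionImage from it. A Swift sketch, assuming cameraPosition comes from your capture session setup:

Swift
// Map the device orientation and camera position to an image orientation.
func imageOrientation(
  deviceOrientation: UIDeviceOrientation,
  cameraPosition: AVCaptureDevice.Position
) -> UIImage.Orientation {
  switch deviceOrientation {
  case .portrait:
    return cameraPosition == .front ? .leftMirrored : .right
  case .landscapeLeft:
    return cameraPosition == .front ? .downMirrored : .up
  case .portraitUpsideDown:
    return cameraPosition == .front ? .rightMirrored : .left
  case .landscapeRight:
    return cameraPosition == .front ? .upMirrored : .down
  case .faceDown, .faceUp, .unknown:
    return .up
  @unknown default:
    return .up
  }
}

let image = VisionImage(buffer: sampleBuffer)
image.orientation = imageOrientation(
  deviceOrientation: UIDevice.current.orientation,
  cameraPosition: cameraPosition)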
2. Configure and run the image labeler
To label objects in an image, pass the VisionImage object to the
ImageLabeler's processImage() method.
First, get an instance of ImageLabeler.
Swift
let labeler = ImageLabeler.imageLabeler()

// Or, to set the minimum confidence required:
// let options = ImageLabelerOptions()
// options.confidenceThreshold = 0.7
// let labeler = ImageLabeler.imageLabeler(options: options)
Objective-C
MLKImageLabeler *labeler = [MLKImageLabeler imageLabeler];

// Or, to set the minimum confidence required:
// MLKImageLabelerOptions *options =
//     [[MLKImageLabelerOptions alloc] init];
// options.confidenceThreshold = 0.7;
// MLKImageLabeler *labeler =
//     [MLKImageLabeler imageLabelerWithOptions:options];
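Then, pass the image to the processImage() method (process(_:completion:) in Swift):

Swift
labeler.process(image) { labels, error in
  guard error == nil, let labels = labels else { return }

  // Task succeeded: `labels` holds the detected ImageLabel objects.
  // ...
}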
3. Get information about labeled objects
If image labeling succeeds, the completion handler receives an array of
ImageLabel objects. Each ImageLabel object represents something that was
labeled in the image. The base model supports 400+ different labels.
You can get each label's text description, its index among all labels supported by
the model, and the confidence score of the match. For example:
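Swift
for label in labels {
  let labelText = label.text
  let confidence = label.confidence
  let index = label.index
}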
Tips to improve real-time performance
If you want to label images in a real-time application, follow these
guidelines to achieve the best framerates:
For processing video frames, use the results(in:) synchronous API of the image labeler. Call
this method from the AVCaptureVideoDataOutputSampleBufferDelegate's captureOutput(_, didOutput:from:) function to synchronously get results from the given video
frame. Keep AVCaptureVideoDataOutput's alwaysDiscardsLateVideoFrames set to true to throttle calls to the image labeler. If a new
video frame becomes available while the image labeler is running, it will be dropped. A sketch of this delegate callback is shown after these guidelines.
If you use the output of the image labeler to overlay graphics on the
input image, first get the result from ML Kit, then render the image
and overlay in a single step. By doing so, you render to the display surface
only once for each processed input frame. See updatePreviewOverlayViewWithLastFrame
in the ML Kit quickstart sample for an example.
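A minimal sketch of the synchronous per-frame flow described above, assuming labeler, cameraPosition, and the imageOrientation(deviceOrientation:cameraPosition:) helper are already set up elsewhere in your capture pipeline (the class name CameraViewController is illustrative):

Swift
extension CameraViewController: AVCaptureVideoDataOutputSampleBufferDelegate {
  func captureOutput(_ output: AVCaptureOutput,
                     didOutput sampleBuffer: CMSampleBuffer,
                     from connection: AVCaptureConnection) {
    let image = VisionImage(buffer: sampleBuffer)
    image.orientation = imageOrientation(
      deviceOrientation: UIDevice.current.orientation,
      cameraPosition: cameraPosition)

    // Synchronous call on the capture queue; frames that arrive while this
    // is running are discarded when alwaysDiscardsLateVideoFrames is true.
    guard let labels = try? labeler.results(in: image) else { return }

    // Hand the labels (and the frame) to your overlay rendering code, then
    // render the image and overlay in a single pass on the main thread.
    // ...
  }
}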