You can use ML Kit to label objects recognized in an image. The default model provided with ML Kit supports 400+ different labels.
Before you begin

Include the following ML Kit pods in your Podfile:
```
pod 'GoogleMLKit/ImageLabeling', '8.0.0'
```
After you install or update your project's Pods, open your Xcode project using its `.xcworkspace`. ML Kit is supported in Xcode version 12.4 or greater.
Now you are ready to label images.
1. Prepare the input image
Create a `VisionImage` object using a `UIImage` or a `CMSampleBuffer`.
If you use a `UIImage`, follow these steps:
Create a `VisionImage` object with the `UIImage`. Make sure to specify the correct `.orientation`.
Swift

```swift
// Create a VisionImage from an existing UIImage (named `image` here).
let visionImage = VisionImage(image: image)
visionImage.orientation = image.imageOrientation
```
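Objective-C

```objective-c
MLKVisionImage *visionImage = [[MLKVisionImage alloc] initWithImage:image];
visionImage.orientation = image.imageOrientation;
```

If you use a `CMSampleBuffer`, follow these steps:

Specify the orientation of the image data contained in the `CMSampleBuffer`. To get the image orientation:

Swift

```swift
func imageOrientation(
  deviceOrientation: UIDeviceOrientation,
  cameraPosition: AVCaptureDevice.Position
) -> UIImage.Orientation {
  switch deviceOrientation {
  case .portrait:
    return cameraPosition == .front ? .leftMirrored : .right
  case .landscapeLeft:
    return cameraPosition == .front ? .downMirrored : .up
  case .portraitUpsideDown:
    return cameraPosition == .front ? .rightMirrored : .left
  case .landscapeRight:
    return cameraPosition == .front ? .upMirrored : .down
  case .faceDown, .faceUp, .unknown:
    return .up
  }
}
```

Objective-C

```objective-c
- (UIImageOrientation)
  imageOrientationFromDeviceOrientation:(UIDeviceOrientation)deviceOrientation
                         cameraPosition:(AVCaptureDevicePosition)cameraPosition {
  switch (deviceOrientation) {
    case UIDeviceOrientationPortrait:
      return cameraPosition == AVCaptureDevicePositionFront ? UIImageOrientationLeftMirrored
                                                            : UIImageOrientationRight;
    case UIDeviceOrientationLandscapeLeft:
      return cameraPosition == AVCaptureDevicePositionFront ? UIImageOrientationDownMirrored
                                                            : UIImageOrientationUp;
    case UIDeviceOrientationPortraitUpsideDown:
      return cameraPosition == AVCaptureDevicePositionFront ? UIImageOrientationRightMirrored
                                                            : UIImageOrientationLeft;
    case UIDeviceOrientationLandscapeRight:
      return cameraPosition == AVCaptureDevicePositionFront ? UIImageOrientationUpMirrored
                                                            : UIImageOrientationDown;
    case UIDeviceOrientationUnknown:
    case UIDeviceOrientationFaceUp:
    case UIDeviceOrientationFaceDown:
      return UIImageOrientationUp;
  }
}
```

Then create a `VisionImage` object using the `CMSampleBuffer` object and orientation:

Swift

```swift
let image = VisionImage(buffer: sampleBuffer)
image.orientation = imageOrientation(
  deviceOrientation: UIDevice.current.orientation,
  cameraPosition: cameraPosition)
```

Objective-C

```objective-c
MLKVisionImage *image = [[MLKVisionImage alloc] initWithBuffer:sampleBuffer];
image.orientation =
    [self imageOrientationFromDeviceOrientation:UIDevice.currentDevice.orientation
                                 cameraPosition:cameraPosition];
```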
2. Configure and run the image labeler

To label objects in an image, pass the `VisionImage` object to the `ImageLabeler`'s `processImage()` method.
First, get an instance of `ImageLabeler`.
Swift

```swift
let labeler = ImageLabeler.imageLabeler()

// Or, to set the minimum confidence required:
// let options = ImageLabelerOptions()
// options.confidenceThreshold = 0.7
// let labeler = ImageLabeler.imageLabeler(options: options)
```
Objective-C

```objective-c
MLKImageLabeler *labeler = [MLKImageLabeler imageLabeler];

// Or, to set the minimum confidence required:
// MLKImageLabelerOptions *options =
//     [[MLKImageLabelerOptions alloc] init];
// options.confidenceThreshold = 0.7;
// MLKImageLabeler *labeler =
//     [MLKImageLabeler imageLabelerWithOptions:options];
```
Then, pass the image to the `processImage()` method:
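Swift

```swift
labeler.process(image) { labels, error in
  guard error == nil, let labels = labels else { return }

  // Task succeeded.
  // ...
}
```

Objective-C

```objective-c
[labeler processImage:image
           completion:^(NSArray *_Nullable labels,
                        NSError *_Nullable error) {
  if (error != nil) { return; }

  // Task succeeded.
  // ...
}];
```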
3. Get information about labeled objects
If image labeling succeeds, the completion handler receives an array of `ImageLabel` objects. Each `ImageLabel` object represents something that was labeled in the image. The base model supports 400+ different labels.
You can get each label's text description, its index among all labels supported by the model, and the confidence score of the match. For example:
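Swift

```swift
for label in labels {
  let labelText = label.text
  let confidence = label.confidence
  let index = label.index
}
```

Objective-C

```objective-c
for (MLKImageLabel *label in labels) {
  NSString *labelText = label.text;
  float confidence = label.confidence;
  NSInteger index = label.index;
}
```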
Tips to improve real-time performance
If you want to label images in a real-time application, follow these guidelines to achieve the best frame rates:
- For processing video frames, use the `results(in:)` synchronous API of the image labeler. Call this method from the `AVCaptureVideoDataOutputSampleBufferDelegate`'s `captureOutput(_, didOutput:from:)` function to synchronously get results from the given video frame. Keep `AVCaptureVideoDataOutput`'s `alwaysDiscardsLateVideoFrames` set to `true` to throttle calls to the image labeler; if a new video frame becomes available while the image labeler is running, it will be dropped. See the sketch after this list.
- If you use the output of the image labeler to overlay graphics on the input image, first get the result from ML Kit, then render the image and overlay in a single step. By doing so, you render to the display surface only once for each processed input frame. See `updatePreviewOverlayViewWithLastFrame` in the ML Kit quickstart sample for an example.
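As a minimal sketch of how these two guidelines combine, assuming a hypothetical `CameraViewController` that owns an `ImageLabeler` as `labeler` and a `drawOverlay(labels:)` rendering helper (neither name is part of the ML Kit API), the delegate callback might look like this:

Swift

```swift
import AVFoundation
import MLKit
import UIKit

extension CameraViewController: AVCaptureVideoDataOutputSampleBufferDelegate {

  func captureOutput(_ output: AVCaptureOutput,
                     didOutput sampleBuffer: CMSampleBuffer,
                     from connection: AVCaptureConnection) {
    // Wrap the frame, reusing the imageOrientation helper shown earlier.
    let image = VisionImage(buffer: sampleBuffer)
    image.orientation = imageOrientation(
      deviceOrientation: UIDevice.current.orientation,
      cameraPosition: .back)

    // results(in:) blocks this delegate queue. With
    // alwaysDiscardsLateVideoFrames set to true, frames that arrive
    // while labeling is still running are dropped rather than queued.
    guard let labels = try? labeler.results(in: image) else { return }

    // Render the camera frame and the label overlay in a single pass
    // (drawOverlay is a hypothetical helper, not an ML Kit call).
    drawOverlay(labels: labels)
  }
}
```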