# Pose detection

This API is offered in beta, and is not subject to any SLA or deprecation policy. Changes may be made to this API that break backward compatibility.

The ML Kit Pose Detection API is a lightweight, versatile solution for app developers to detect the pose of a subject's body in real time from a continuous video or static image. A pose describes the body's position at one moment in time with a set of skeletal landmark points. The landmarks correspond to different body parts such as the shoulders and hips. The relative positions of landmarks can be used to distinguish one pose from another.

[iOS](/ml-kit/vision/pose-detection/ios)
[Android](/ml-kit/vision/pose-detection/android)

ML Kit Pose Detection produces a full-body, 33 point skeletal match that includes facial landmarks (ears, eyes, mouth, and nose) and points on the hands and feet. Figure 1 below shows the landmarks looking through the camera at the user, so it's a mirror image. The user's right side appears on the left of the image:

**Figure 1.** Landmarks

ML Kit Pose Detection doesn't require specialized equipment or ML expertise in order to achieve great results. With this technology developers can create one-of-a-kind experiences for their users with only a few lines of code.

The user's face must be present in order to detect a pose. Pose detection works best when the subject's entire body is visible in the frame, but it also detects a partial body pose. In that case the landmarks that are not recognized are assigned coordinates outside of the image.
Key capabilities
----------------

- **Cross-platform support** Enjoy the same experience on both Android and iOS.
- **Full body tracking** The model returns 33 key skeletal landmark points, including the positions of the hands and feet.
- **InFrameLikelihood score** For each landmark, a measure that indicates the probability that the landmark is within the image frame. The score has a range of 0.0 to 1.0, where 1.0 indicates high confidence.
- **Two optimized SDKs** The base SDK runs in real time on modern phones like the Pixel 4 and iPhone X, returning results at roughly 30 and 45 fps respectively, though the precision of the landmark coordinates may vary. The accurate SDK returns results at a slower framerate, but produces more accurate coordinate values. A minimal Android setup sketch follows this list.
- **Z Coordinate for depth analysis** This value can help determine whether parts of the user's body are in front of or behind the user's hips. For more information, see the [Z Coordinate](#z_coordinate) section below.
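As a rough illustration of how the base SDK is typically wired up on Android, here is a minimal Kotlin sketch. It assumes the `com.google.mlkit:pose-detection` Gradle dependency and a `Bitmap` frame you already hold; the function name `detectPose` and the log tag are illustrative, not part of the API.

```kotlin
import android.graphics.Bitmap
import android.util.Log
import com.google.mlkit.vision.common.InputImage
import com.google.mlkit.vision.pose.PoseDetection
import com.google.mlkit.vision.pose.PoseLandmark
import com.google.mlkit.vision.pose.defaults.PoseDetectorOptions

// Minimal sketch: run the base pose detector on a single bitmap frame.
fun detectPose(bitmap: Bitmap) {
    // SINGLE_IMAGE_MODE suits one-off images; use STREAM_MODE for video.
    val options = PoseDetectorOptions.Builder()
        .setDetectorMode(PoseDetectorOptions.SINGLE_IMAGE_MODE)
        .build()
    val detector = PoseDetection.getClient(options)

    // rotationDegrees is 0 here; pass the real rotation for camera frames.
    val image = InputImage.fromBitmap(bitmap, 0)

    detector.process(image)
        .addOnSuccessListener { pose ->
            // Every landmark exposes a 3D position and an InFrameLikelihood score.
            pose.getPoseLandmark(PoseLandmark.LEFT_SHOULDER)?.let { landmark ->
                Log.d(
                    "PoseDemo",
                    "Left shoulder at ${landmark.position3D}, " +
                        "likelihood ${landmark.inFrameLikelihood}"
                )
            }
        }
        .addOnFailureListener { e -> Log.e("PoseDemo", "Pose detection failed", e) }
}
```

For video, you would typically create one detector in `STREAM_MODE` and feed it successive frames, rather than building a new client per frame.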
The Pose Detection API is similar to the [Facial Recognition API](/ml-kit/vision/face-detection) in that it returns a set of landmarks and their location. However, while Face Detection also tries to recognize features such as a smiling mouth or open eyes, Pose Detection does not attach any meaning to the landmarks in a pose or to the pose itself. You can create your own algorithms to interpret a pose; for some examples, see [Pose Classification Tips](/ml-kit/vision/pose-detection/classifying-poses).
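Because the API attaches no meaning to a pose, interpretation is your own code's job. The Kotlin sketch below, in the spirit of the heuristics described in Pose Classification Tips, computes the inner angle at a joint from landmark positions and uses it to guess whether the left arm is curled. The helper names and the 60 degree threshold are illustrative assumptions, not part of ML Kit.

```kotlin
import com.google.mlkit.vision.pose.Pose
import com.google.mlkit.vision.pose.PoseLandmark
import kotlin.math.abs
import kotlin.math.atan2

// Inner angle, in degrees, at `mid` between the segments mid->first and mid->last.
fun jointAngle(first: PoseLandmark, mid: PoseLandmark, last: PoseLandmark): Double {
    val raw = Math.toDegrees(
        (atan2(last.position.y - mid.position.y, last.position.x - mid.position.x) -
            atan2(first.position.y - mid.position.y, first.position.x - mid.position.x))
            .toDouble()
    )
    val normalized = abs(raw)
    return if (normalized > 180) 360 - normalized else normalized
}

// Illustrative heuristic: call the left arm "curled" when the elbow angle is tight.
fun isLeftArmCurled(pose: Pose): Boolean {
    val shoulder = pose.getPoseLandmark(PoseLandmark.LEFT_SHOULDER) ?: return false
    val elbow = pose.getPoseLandmark(PoseLandmark.LEFT_ELBOW) ?: return false
    val wrist = pose.getPoseLandmark(PoseLandmark.LEFT_WRIST) ?: return false
    return jointAngle(shoulder, elbow, wrist) < 60.0 // threshold chosen arbitrarily
}
```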
Pose detection can only detect one person in an image. If two people are in the image, the model will assign landmarks to the person detected with the highest confidence.

Z Coordinate
------------

The Z Coordinate is an experimental value that is calculated for every landmark. It is measured in "image pixels" like the X and Y coordinates, but it is not a true 3D value. The Z axis is perpendicular to the camera and passes between a subject's hips. The origin of the Z axis is approximately the center point between the hips (left/right and front/back relative to the camera). Negative Z values are towards the camera; positive values are away from it. The Z coordinate has no upper or lower bound.
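One hedged example of how Z might be used: since negative Z means towards the camera relative to the hip midpoint, a sufficiently negative wrist Z suggests the hand is reaching out in front of the body. Because Z is experimental and unbounded, any cutoff has to be tuned empirically; the threshold and function name below are arbitrary illustrations, not API constants.

```kotlin
import com.google.mlkit.vision.pose.Pose
import com.google.mlkit.vision.pose.PoseLandmark

// Negative Z = towards the camera relative to the hip midpoint. The -100
// "image pixel" threshold is an arbitrary value you would tune per use case.
fun isLeftWristInFrontOfHips(pose: Pose): Boolean {
    val wrist = pose.getPoseLandmark(PoseLandmark.LEFT_WRIST) ?: return false
    return wrist.position3D.z < -100f
}
```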
Sample results
--------------

The following table shows the coordinates and InFrameLikelihood for a few landmarks in the pose to the right. Note that the Z coordinates for the user's left hand are negative, since the hand is in front of the center of the subject's hips, towards the camera.
| Landmark | Type | Position | InFrameLikelihood |
|----------|----------------|------------------------------------|-------------------|
| 11 | LEFT_SHOULDER | (734.9671, 550.7924, -118.11934) | 0.9999038 |
| 12 | RIGHT_SHOULDER | (391.27032, 583.2485, -321.15836) | 0.9999894 |
| 13 | LEFT_ELBOW | (903.83704, 754.676, -219.67009) | 0.9836427 |
| 14 | RIGHT_ELBOW | (322.18152, 842.5973, -179.28519) | 0.99970156 |
| 15 | LEFT_WRIST | (1073.8956, 654.9725, -820.93463) | 0.9737737 |
| 16 | RIGHT_WRIST | (218.27956, 1015.70435, -683.6567) | 0.995568 |
| 17 | LEFT_PINKY | (1146.1635, 609.6432, -956.9976) | 0.95273364 |
| 18 | RIGHT_PINKY | (176.17755, 1065.838, -776.5006) | 0.9785348 |

Under the hood
--------------

For more implementation details on the underlying ML models for this API, check out our [Google AI blog post](https://ai.googleblog.com/2020/08/on-device-real-time-body-pose-tracking.html).

To learn more about our ML fairness practices and how the models were trained, see our [Model Card](/static/ml-kit/images/vision/pose-detection/pose_model_card.pdf).