High-performance pose detection for React Native using Google's MediaPipe models with optimized frame processing for smooth real-time tracking.
The package is available on npm as `react-native-mediapipe-posedetection`.

> ⚠️ **New Architecture Required**: This library only supports React Native's New Architecture (Turbo Modules). You must enable the New Architecture in your app to use this library.
## Features

- ✨ Real-time pose detection via `react-native-vision-camera`
- 🎯 33 pose landmarks per detected person
- 🚀 Optimized for performance (~15 FPS throttling to prevent memory issues)
- 📱 iOS & Android support
- 🔥 GPU acceleration support
- 🎨 Static image detection support
- 💪 Easy-to-use React hooks
## Requirements

- React Native: 0.74.0 or higher
- New Architecture: must be enabled
- iOS: 12.0+
- Android: API 24+
- Dependencies:
  - `react-native-vision-camera` ^4.0.0 (for real-time detection)
  - `react-native-worklets-core` ^1.0.0 (for frame processing)
## Installation

```sh
npm install react-native-mediapipe-posedetection react-native-vision-camera react-native-worklets-core
# or
yarn add react-native-mediapipe-posedetection react-native-vision-camera react-native-worklets-core
```

### Expo

If you are using Expo, you can use the built-in config plugin to automatically copy your MediaPipe model files to the native Android and iOS projects during prebuild.
1. Add your model files (e.g., `pose_landmarker_lite.task`) to a directory in your project (e.g., `./assets/models/`).
2. Update your `app.json` or `app.config.js`:
```json
{
  "expo": {
    "plugins": [
      [
        "react-native-mediapipe-posedetection",
        {
          "assetsPaths": ["./assets/models/"]
        }
      ]
    ]
  }
}
```

The plugin will copy all files from the specified `assetsPaths` to:
- Android: `android/app/src/main/assets/`
- iOS: the root of the Xcode project (also adding them to the build resources)

> Note: The `assetsPaths` are relative to your project root.
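If your project uses a dynamic config instead of `app.json`, the same plugin entry can live there too. A sketch of the equivalent `app.config.js`, using the same plugin name and `assetsPaths` key as the JSON example above:

```js
// app.config.js - equivalent of the app.json example above
module.exports = {
  expo: {
    plugins: [
      [
        'react-native-mediapipe-posedetection',
        { assetsPaths: ['./assets/models/'] },
      ],
    ],
  },
};
```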
### New Architecture Setup

If you haven't already enabled the New Architecture in your React Native app:

In your `android/gradle.properties`:

```properties
newArchEnabled=true
```

In your `ios/Podfile`:

```ruby
use_frameworks! :linkage => :static
$RNNewArchEnabled = true
```

Then reinstall pods:

```sh
cd ios && pod install
```

### Model Setup

**iOS**

1. Download the MediaPipe pose landmarker model (e.g., `pose_landmarker_lite.task`)
2. Add it to your Xcode project
3. Ensure it is included in the "Copy Bundle Resources" build phase

**Android**

The MediaPipe dependencies are included automatically. Place your model file in `android/app/src/main/assets/`.
## Usage

### Real-Time Pose Detection

```tsx
import {
  usePoseDetection,
  RunningMode,
  Delegate,
  KnownPoseLandmarks,
} from 'react-native-mediapipe-posedetection';
import { Camera, useCameraDevice } from 'react-native-vision-camera';

function PoseDetectionScreen() {
  const device = useCameraDevice('back');

  const poseDetection = usePoseDetection(
    {
      onResults: (result) => {
        // result.landmarks contains detected pose keypoints
        console.log('Number of poses:', result.landmarks.length);
        if (result.landmarks[0]?.length > 0) {
          const nose = result.landmarks[0][KnownPoseLandmarks.nose];
          console.log('Nose position:', nose.x, nose.y);
        }
      },
      onError: (error) => {
        console.error('Detection error:', error.message);
      },
    },
    RunningMode.LIVE_STREAM,
    'pose_landmarker_lite.task',
    {
      numPoses: 1,
      minPoseDetectionConfidence: 0.5,
      minPosePresenceConfidence: 0.5,
      minTrackingConfidence: 0.5,
      delegate: Delegate.GPU,
    }
  );

  if (!device) return null;

  return (
    <Camera
      style={{ flex: 1 }}
      device={device}
      isActive={true}
      frameProcessor={poseDetection.frameProcessor}
      onLayout={poseDetection.cameraViewLayoutChangeHandler}
    />
  );
}
```

### Static Image Detection

```ts
import {
  PoseDetectionOnImage,
  Delegate,
} from 'react-native-mediapipe-posedetection';

async function detectPoseInImage(imagePath: string) {
  const result = await PoseDetectionOnImage(
    imagePath,
    'pose_landmarker_lite.task',
    {
      numPoses: 2, // Detect up to 2 people
      minPoseDetectionConfidence: 0.5,
      delegate: Delegate.GPU,
    }
  );

  console.log('Detected poses:', result.landmarks.length);
  console.log('Inference time:', result.inferenceTime, 'ms');
}
```

### Using the MediapipeCamera Component

For a simpler setup, use the provided `MediapipeCamera` component:
```tsx
import { MediapipeCamera } from 'react-native-mediapipe-posedetection';

function App() {
  return (
    <MediapipeCamera
      style={{ flex: 1 }}
      cameraPosition="back"
      onResults={(result) => {
        console.log('Pose detected:', result.landmarks);
      }}
      poseDetectionOptions={{
        numPoses: 1,
        minPoseDetectionConfidence: 0.5,
      }}
    />
  );
}
```

## API Reference

### `usePoseDetection(callbacks, runningMode, model, options?)`

Hook for real-time pose detection.
**Parameters:**

- `callbacks`: `DetectionCallbacks<PoseDetectionResultBundle>`
  - `onResults: (result: PoseDetectionResultBundle) => void` - called when poses are detected
  - `onError: (error: DetectionError) => void` - called on detection errors
- `runningMode`: `RunningMode`
  - `RunningMode.LIVE_STREAM` - for camera/video input
  - `RunningMode.VIDEO` - for video file processing
  - `RunningMode.IMAGE` - for static images (use `PoseDetectionOnImage` instead)
- `model`: `string` - path to the MediaPipe model file (e.g., `'pose_landmarker_lite.task'`)
- `options`: `Partial<PoseDetectionOptions>` (optional)
  - `numPoses`: `number` - maximum number of poses to detect (default: 1)
  - `minPoseDetectionConfidence`: `number` - minimum detection confidence threshold (default: 0.5)
  - `minPosePresenceConfidence`: `number` - minimum presence confidence threshold (default: 0.5)
  - `minTrackingConfidence`: `number` - minimum tracking confidence threshold (default: 0.5)
  - `shouldOutputSegmentationMasks`: `boolean` - include segmentation masks (default: false)
  - `delegate`: `Delegate.CPU | Delegate.GPU | Delegate.NNAPI` - processing delegate (default: GPU)
  - `mirrorMode`: `'no-mirror' | 'mirror' | 'mirror-front-only'` - camera mirroring behavior
  - `fpsMode`: `'none' | number` - additional FPS throttling (default: `'none'`)

**Returns:** `MediaPipeSolution`

- `frameProcessor` - VisionCamera frame processor
- `cameraViewLayoutChangeHandler` - layout change handler
- `cameraDeviceChangeHandler` - camera device change handler
- `cameraOrientationChangedHandler` - orientation change handler
- `resizeModeChangeHandler` - resize mode change handler
- `cameraViewDimensions` - current camera view dimensions
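For orientation, the returned object has roughly the shape below. This is a sketch inferred from the field list above, not the library's published typings; the handler parameter types in particular are assumptions:

```ts
// Rough sketch of MediaPipeSolution, inferred from the field list above.
// Handler parameter types are assumptions; consult the library's
// TypeScript definitions for the authoritative shapes.
interface MediaPipeSolutionSketch {
  frameProcessor: unknown; // pass to <Camera frameProcessor={...} />
  cameraViewLayoutChangeHandler: (event: unknown) => void; // pass to onLayout
  cameraDeviceChangeHandler: (device: unknown) => void;
  cameraOrientationChangedHandler: (orientation: unknown) => void;
  resizeModeChangeHandler: (mode: unknown) => void;
  cameraViewDimensions: { width: number; height: number };
}
```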
### `PoseDetectionOnImage(imagePath, model, options?)`

Detect poses in a static image.

**Parameters:**

- `imagePath`: `string` - path to the image file
- `model`: `string` - path to the MediaPipe model file
- `options`: same as the `usePoseDetection` options

**Returns:** `Promise<PoseDetectionResultBundle>`
### Types

```ts
interface PoseDetectionResultBundle {
  inferenceTime: number; // Milliseconds
  size: { width: number; height: number };
  landmarks: Landmark[][]; // Array of poses, each with 33 landmarks
  worldLandmarks: Landmark[][]; // 3D world coordinates
  segmentationMasks?: Mask[]; // Optional segmentation masks
}

interface Landmark {
  x: number; // Normalized 0-1
  y: number; // Normalized 0-1
  z: number; // Depth (relative)
  visibility?: number; // Confidence 0-1
  presence?: number; // Presence confidence 0-1
}
```

### Pose Landmarks

Use `KnownPoseLandmarks` for easy landmark access:
```ts
import { KnownPoseLandmarks } from 'react-native-mediapipe-posedetection';

const landmarks = result.landmarks[0];
const nose = landmarks[KnownPoseLandmarks.nose];
const leftShoulder = landmarks[KnownPoseLandmarks.leftShoulder];
const rightWrist = landmarks[KnownPoseLandmarks.rightWrist];
```

Available landmarks:
- Face: `nose`, `leftEye`, `rightEye`, `leftEar`, `rightEar`, `mouthLeft`, `mouthRight`
- Upper body: `leftShoulder`, `rightShoulder`, `leftElbow`, `rightElbow`, `leftWrist`, `rightWrist`
- Hands: `leftPinky`, `rightPinky`, `leftIndex`, `rightIndex`, `leftThumb`, `rightThumb`
- Lower body: `leftHip`, `rightHip`, `leftKnee`, `rightKnee`, `leftAnkle`, `rightAnkle`
- Feet: `leftHeel`, `rightHeel`, `leftFootIndex`, `rightFootIndex`
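Because landmark `x`/`y` values are normalized to 0-1 (see the `Landmark` type above), you can convert them to pixels with `result.size` and derive joint angles directly from three landmarks. A small illustrative helper, not part of the library's API:

```ts
import { KnownPoseLandmarks } from 'react-native-mediapipe-posedetection';

// Convert a normalized landmark to pixel coordinates using the frame size
// reported in result.size.
function toPixels(
  lm: { x: number; y: number },
  size: { width: number; height: number }
) {
  return { x: lm.x * size.width, y: lm.y * size.height };
}

// Angle at joint B (in degrees) formed by segments B->A and B->C,
// e.g. the elbow angle from shoulder, elbow, and wrist.
function jointAngle(
  a: { x: number; y: number },
  b: { x: number; y: number },
  c: { x: number; y: number }
): number {
  const ab = { x: a.x - b.x, y: a.y - b.y };
  const cb = { x: c.x - b.x, y: c.y - b.y };
  const dot = ab.x * cb.x + ab.y * cb.y;
  const cross = ab.x * cb.y - ab.y * cb.x;
  return Math.abs((Math.atan2(cross, dot) * 180) / Math.PI);
}

// Usage with a result bundle:
// const pose = result.landmarks[0];
// const leftElbowAngle = jointAngle(
//   pose[KnownPoseLandmarks.leftShoulder],
//   pose[KnownPoseLandmarks.leftElbow],
//   pose[KnownPoseLandmarks.leftWrist]
// );
```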
## Performance

This library includes critical performance optimizations for React Native's New Architecture.

### Automatic FPS Throttling

To prevent memory pressure and crashes, the library automatically throttles:

- Frame processing to ~15 FPS (MediaPipe detection calls)
- Event emissions to ~15 FPS (JavaScript callbacks)

This dual-layer throttling ensures:

- ✅ Stable memory usage
- ✅ No crashes during extended use
- ✅ A smooth pose detection experience
- ✅ Efficient battery usage

The throttling is transparent and requires no configuration; 15 FPS is sufficient for smooth pose tracking in most use cases.
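To make the mechanism concrete, here is a minimal sketch of a timestamp-based throttle in the spirit of the behavior described above. It is illustrative only, with hypothetical names, and is not the library's actual implementation:

```ts
// Illustrative only: a simple timestamp-based gate, similar in spirit to
// the library's internal ~15 FPS throttling. All names are hypothetical.
function createThrottle(targetFps: number) {
  const minIntervalMs = 1000 / targetFps;
  let lastRunMs = 0;

  // Returns true when enough time has passed to process another frame.
  return function shouldProcess(nowMs: number): boolean {
    if (nowMs - lastRunMs < minIntervalMs) return false; // skip this frame
    lastRunMs = nowMs;
    return true;
  };
}

// Applied twice, this yields the dual layers described above: one gate
// before the MediaPipe detection call, and a second gate before emitting
// results to JavaScript.
const detectGate = createThrottle(15);
const emitGate = createThrottle(15);
```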
### Custom FPS Throttling

For even more control, use the `fpsMode` option:

```ts
usePoseDetection(callbacks, RunningMode.LIVE_STREAM, 'model.task', {
  fpsMode: 10, // Process frames at 10 FPS
});
```

## Migrating from the Bridge Architecture

If you were using a previous version that supported the Bridge architecture:
1. **Upgrade React Native** to 0.74.0 or higher
2. **Enable the New Architecture** (see the installation instructions)
3. **Rebuild your app completely:**

   ```sh
   # iOS
   cd ios && pod install && cd ..

   # Android
   cd android && ./gradlew clean && cd ..
   ```

The API remains the same, so your application code shouldn't need changes.
## Troubleshooting

### New Architecture errors

**Cause:** The New Architecture is not enabled or not properly configured.

**Solution:**

- Verify `newArchEnabled=true` in `android/gradle.properties`
- Verify `$RNNewArchEnabled = true` in `ios/Podfile`
- Clean and rebuild your app
- Ensure you're using React Native 0.74.0+
### Model loading fails

**Cause:** Model file not found or invalid configuration.

**Solution:**

- Verify that the model file path is correct
- Ensure the model file is included in your app bundle
- Check that the model file is a valid MediaPipe pose landmarker model
### High memory usage or crashes

**Solution:** The library automatically throttles frame processing to prevent this. If you still experience issues (a combined configuration is sketched below):

- Ensure you're on the latest version
- Reduce `numPoses` to 1
- Set `shouldOutputSegmentationMasks: false`
- Use `Delegate.CPU` instead of `Delegate.GPU` if GPU memory is limited
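A memory-conscious options object combining these tips, using only parameters documented above (values are illustrative):

```ts
import { Delegate } from 'react-native-mediapipe-posedetection';

// Illustrative memory-conscious settings; all options are documented above.
const lowMemoryOptions = {
  numPoses: 1, // track a single person
  shouldOutputSegmentationMasks: false, // skip the heaviest output
  delegate: Delegate.CPU, // fall back from GPU if its memory is limited
};
```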
### Slow or laggy detection

**Solutions** (combined in the sketch below):

- Use `pose_landmarker_lite.task` instead of `pose_landmarker_full.task`
- Set `fpsMode: 10` to lower the frame-processing rate
- Reduce `numPoses` if you don't need to detect multiple people
- Enable GPU acceleration: `delegate: Delegate.GPU`
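Putting those tips together, a speed-oriented setup might look like this; the values are illustrative, and the call belongs inside a React component as in the usage examples above:

```ts
import {
  usePoseDetection,
  RunningMode,
  Delegate,
} from 'react-native-mediapipe-posedetection';

// Inside a React component - illustrative speed-oriented configuration
// built only from the options documented above.
const poseDetection = usePoseDetection(
  {
    onResults: () => {}, // handle results here
    onError: () => {},
  },
  RunningMode.LIVE_STREAM,
  'pose_landmarker_lite.task', // lite model runs faster than the full model
  {
    numPoses: 1, // a single person is cheapest to track
    fpsMode: 10, // process fewer frames per second
    delegate: Delegate.GPU, // enable GPU acceleration
  }
);
```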
## Example App

Check out the `example` directory for a complete working app demonstrating:

- Real-time pose detection
- Pose landmark visualization
- Camera controls
- Performance optimization

Run the example:

```sh
cd example
yarn install
yarn ios # or yarn android
```

## Contributing

See the contributing guide to learn how to contribute to the repository and the development workflow.
## Credits

This library is based on react-native-mediapipe by cdiddy77. The pose detection module code was taken from that repository and upgraded to support React Native's New Architecture (Turbo Modules).
## License

MIT © EndLess728
Made with create-react-native-library