AI Animation API
Use the Kinetix AI Animation API to build creator tools!
The AI Animate API is organized around REST, providing a simple and secure way to retrieve an animation from a video or a text prompt. Our API accepts form-encoded request bodies, returns JSON-encoded responses, and uses standard HTTP response codes, authentication, and verbs.
AI Animate API transforms any avatar-based video and any text prompt into a 3D animation.
- Average processing time for a 10-second video is about 3 minutes
- No limit on input video duration; the output animation file perfectly matches the input video duration
- All input videos are re-encoded at 30 FPS to match the output's technical characteristics
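As a minimal sketch, a form-encoded call might look like the following. The endpoint URL, route, and `x-api-key` header are placeholders invented for illustration, not the documented API; the real values come from your Kinetix dashboard.

```python
# Minimal sketch of a form-encoded API call. The endpoint URL and the
# header name are PLACEHOLDERS, not the real Kinetix API -- substitute
# the values from your dashboard.
import json
import urllib.parse
import urllib.request

API_KEY = "YOUR_API_KEY"  # placeholder credential

# Form-encoded body, per the API conventions described above.
body = urllib.parse.urlencode({"prompt": "a character waving hello"}).encode()

request = urllib.request.Request(
    url="https://api.example.com/v1/animate/text",  # placeholder URL
    data=body,
    headers={"x-api-key": API_KEY},  # placeholder header name
    method="POST",
)

# Sending it would look like this (not executed here):
#   with urllib.request.urlopen(request) as response:
#       result = json.loads(response.read())  # responses are JSON-encoded
```

The same pattern applies to video input, except the body would be a multipart file upload rather than a URL-encoded prompt.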
AI Animate API handles any avatar-based video into a 3D animation with the following characteristics:
- Multiple characters in the video:
- Only 1 character will be processed and transposed.
- If the video contains more than 5 characters at the same time for at least 1 second, the AI processing is aborted and an error message is sent.
- If the video contains fewer than 5 characters, the following rules apply:
- 1. The model keeps the character associated with the longest animation file
- 2. If animation durations are equal, the model keeps the one with the largest bounding box
- Full body, half-body: API handles both setups as input.
- Character clothing: Maximize the output quality with fitted clothes that contrast with the background
- Video file size: Any size supported; the bigger the video, the longer the process
- Video formats supported: AVI, FLV, MKV, MP4, TS, MOV or WebM.
- 4K videos are not supported
- Edited video: Avoid scene cuts or complex edits that will alter character detection and animation processing
- Multiple characters in the prompt:
- Only 1 character will be processed and transposed in the output animation.
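The character-selection rules above (longest animation wins; ties broken by bounding-box size) can be sketched as a small function. The `DetectedCharacter` class and its field names are illustrative, not part of the API.

```python
from dataclasses import dataclass

@dataclass
class DetectedCharacter:
    # Class and field names are illustrative, not part of the Kinetix API.
    name: str
    animation_seconds: float  # duration of the animation extracted for this character
    bbox_area: float          # bounding-box area of the character

def pick_character(characters: list[DetectedCharacter]) -> DetectedCharacter:
    """Keep the character with the longest animation; on a duration tie,
    keep the one with the largest bounding box (rules 1 and 2 above)."""
    # Simplification: the documented abort condition is "more than 5
    # characters at the same time for at least 1 second", which would
    # require per-frame detection data rather than a simple count.
    if len(characters) > 5:
        raise ValueError("AI processing aborted: more than 5 characters")
    return max(characters, key=lambda c: (c.animation_seconds, c.bbox_area))
```

Using a tuple key makes the tiebreak explicit: duration is compared first, and bounding-box area only matters when durations are equal.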
Animation output characteristics based on the video or text input are the following:
- Only one animation file output: 1 .fbx or .glb or .usdz file
- Full body frames only: if a half-body video is provided as input, the API creates a full-body animation composed of the animated half from the input and a static other half
- Animation format: FBX, GLB, USDz
- Number of frames: 30 Frames Per Second (FPS)
(coming soon) Emote standards can be applied to the output animation on request in API call
There are 2 options to test and use the AI Animation API:
- Limited: free version of the AI Animation API that lets you output up to 20,000 animation frames in total.
- If a video input would make you exceed the 20,000-frame limit, the process is blocked and the API sends back an error message (Frame Limitation reached).
- Once that limit is reached, you are invited to switch to the Unlimited option.
- Unlimited: paid version of the AI Animation API that lets you output as many animation frames as you want.
- Price per frame starts at 0.0005€
- Kinetix refunds all the frames charged if the API process is unsuccessful
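Putting the stated figures together (30 FPS output, a 20,000-frame free cap, and a starting price of €0.0005 per frame), a back-of-envelope helper for estimating usage and cost might look like this; the function names are illustrative, not part of the API.

```python
# Back-of-envelope helpers based on the figures stated in this page.
# Function names are illustrative, not part of the Kinetix API.
FPS = 30                  # output frame rate stated above
FREE_FRAME_LIMIT = 20_000 # Limited-plan total frame cap
PRICE_PER_FRAME = 0.0005  # euros, starting price on the Unlimited plan

def frames_for(duration_seconds: float) -> int:
    """Frames produced for a video: output duration matches input, at 30 FPS."""
    return round(duration_seconds * FPS)

def estimated_cost(duration_seconds: float) -> float:
    """Starting-price cost in euros on the Unlimited plan."""
    return frames_for(duration_seconds) * PRICE_PER_FRAME

def fits_free_tier(frames_used_so_far: int, duration_seconds: float) -> bool:
    """True if this video stays within the Limited plan's 20,000-frame total."""
    return frames_used_so_far + frames_for(duration_seconds) <= FREE_FRAME_LIMIT
```

For example, a 10-second video produces 300 frames, which would cost €0.15 at the starting price.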
- A separate group containing all of the skeleton's joints, independent from any curves or other kinds of controllers.
- Clean joints with all transformation channels (translate, rotate and scale) unlocked and easily accessible, to ease the remapping process.
- Have a logical joint hierarchy (example: L_shoulder>L_arm>L_forearm>L_hand)
- Ideally, have a consistent joint orientation (+Y axis is the most optimal), and be wary of joint mirroring (example: left arm and right arm joints should share the same orientation convention rather than mirrored axes)
- Avoid "Maya exclusive" tools and solvers (e.g. lattices, ribbons, etc.) or their equivalents in Blender and other tools, if possible, for better compatibility (the character rig should be convertible to an .fbx or .glb file)
- Remove controllers if possible, as existing control rigs affecting the skeleton can hinder the retargeting process.
- For facial animation compatibility, recreate blendshapes based on Apple ARKit instead of the current facial control rig. (cf: https://arkit-face-blendshapes.com/ and https://developer.apple.com/documentation/arkit/arfaceanchor/blendshapelocation)