Which model should I choose?

Check out these tutorials to learn more about our suite of models


Before creating a video in neural frames, you'll choose an AI model, each offering a different style and level of detail. Whether you want text-to-video models for full-motion video or frame-by-frame animation for more creative control, you can explore options like DreamShaper, Juggernaut XL, or Analog Diffusion.

Overview of model selections

Explore Specialist and Allrounder models, and learn the difference between our 1.5 models and XL models.


Overview of Text-to-Video models

We offer AI video models, also known as Text-to-Video models, which allow you to generate full-motion clips directly from text. Unlike frame-by-frame animation, these models create continuous sequences where you describe the full scene and movement in your prompt. With options like Runway Gen3 Alpha, Kling 1.6, and Kling 1.6 Pro, you can choose the best balance of realism and efficiency. Features like seamless clip continuation, render history, and audio-reactive post-processing give you even more control.


Overview of Custom Models

Custom models in neural frames allow you to train AI on your own styles, objects, or even people, giving you precise control over how your visuals are generated. By uploading 20–199 images, you can teach the AI to replicate a specific aesthetic or subject, making it easier to create consistent, personalized videos. Once trained, simply use a custom keyword (e.g., SKS style) to apply your model in new creations. Whether fine-tuning artistic styles or generating AI-driven self-portraits, this feature unlocks limitless creative possibilities.
