
What's New in Reels Farm: AI Clone, GPT Image 2, and Workflow Updates

Product Updates · 8 min read

This update introduces two major additions for content teams. AI Clone expands what you can build from existing face and motion assets, and GPT Image 2 gives AI Avatar workflows another model option for stronger prompt-following and edit fidelity.

Reels Farm is built around one core goal: helping teams move faster from idea to short-form output without losing control of quality.

This release pushes that goal further with two major additions.

AI Clone is now available as its own workflow, and GPT Image 2 is now available as a model option in AI Avatars.

Quick Answer

Here is what changed in practical terms:

  1. AI Clone lets you combine a face image and a motion video to generate a clone output.
  2. Voice conversion is optional, with searchable voice selection and preview playback when enabled.
  3. GPT Image 2 is now selectable in AI Avatars for prompt-driven or reference-assisted avatar generation.
  4. The new features connect to your existing assets and content flow, including My AI Avatars, My Hooks, and saved outputs.

If your team already has avatar and hook assets in place, this update gives you more ways to reuse them productively.

What Is New: AI Clone

AI Clone is a dedicated creation area in the dashboard.

The workflow is built around two required inputs:

  • a face image
  • a motion video

For face input, users can pull from My AI Avatars or upload a file.

For motion input, users can pull from My Hooks or upload a file.

This is useful because many teams already have reusable character images and hook clips. Instead of restarting from scratch, AI Clone allows those inputs to become a new output path.

Optional Voice Conversion

Voice conversion is off by default and can be toggled on when a project needs it.

When enabled, users can:

  • select a voice category
  • search voices
  • preview voices before selecting
  • optionally reduce background noise before conversion

That structure matters for real production use. It keeps the core generation flow simple, while still giving audio control when the project requires it.

Resolution and Direction Controls

AI Clone also exposes:

  • output resolution selection (`720p` default or `1080p`)
  • an optional short direction prompt for model guidance

The optional prompt is useful for creative steering without forcing a long setup process.
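Taken together, the AI Clone setup described above reduces to two required inputs plus a handful of options. Reels Farm does not publish a public API, so the following is a purely hypothetical sketch: every function and field name is illustrative, meant only to show how the required/optional split works.

```python
# Hypothetical sketch of an AI Clone job payload. Reels Farm has not
# published an API; all names and fields here are illustrative.

VALID_RESOLUTIONS = {"720p", "1080p"}

def build_clone_request(face_source, motion_source, *,
                        resolution="720p", direction_prompt=None,
                        voice=None):
    """Assemble a clone job from the two required inputs plus options.

    face_source / motion_source: e.g. a saved asset ID from
    "My AI Avatars" / "My Hooks", or an uploaded file path.
    voice: optional dict of voice-conversion settings.
    """
    if not face_source or not motion_source:
        raise ValueError("AI Clone requires both a face image and a motion video")
    if resolution not in VALID_RESOLUTIONS:
        raise ValueError(f"resolution must be one of {sorted(VALID_RESOLUTIONS)}")

    request = {
        "face": face_source,
        "motion": motion_source,
        "resolution": resolution,           # 720p is the default
    }
    if direction_prompt:                    # optional short creative steer
        request["direction_prompt"] = direction_prompt
    if voice:                               # voice conversion is opt-in
        request["voice_conversion"] = {
            "voice_id": voice["voice_id"],
            "reduce_background_noise": voice.get("reduce_background_noise", False),
        }
    return request
```

The shape mirrors the UI: enabling voice conversion only adds one block, and the core face-plus-motion flow stays unchanged.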

Generation and Output Flow

After generation starts, users can keep browsing while the job is queued and processed.

Finished outputs appear in My AI Clones in the same workspace. That reduces context switching and makes it easier to review and reuse results quickly.

What Is New: GPT Image 2 in AI Avatars

AI Avatars now includes GPT Image 2 as a model option alongside existing model choices.

In the UI, GPT Image 2 is positioned for:

  • strong prompt following
  • high-fidelity edits

For teams already using AI Avatars, this means one more model path without changing the rest of the workflow.

The Workflow Stays Familiar

You can still use the same core AI Avatar flow:

  1. write the prompt
  2. select model
  3. choose aspect ratio
  4. optionally use a reference image
  5. generate and save strong outputs as reusable characters

The important operational detail is continuity. Teams do not need to relearn the product to test GPT Image 2.
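That continuity can be made concrete with another hypothetical sketch: switching to GPT Image 2 changes a single field while the rest of the request stays the same. As before, Reels Farm publishes no API, so the model identifiers and field names below are illustrative, not real endpoints.

```python
# Hypothetical sketch of the AI Avatar flow as a payload. Field names
# are illustrative; "default-model" is a placeholder for your current choice.

SUPPORTED_MODELS = {"gpt-image-2", "default-model"}

def build_avatar_request(prompt, model, aspect_ratio="9:16",
                         reference_image=None):
    """Steps 1-4 of the avatar flow: prompt, model, ratio, optional reference."""
    if model not in SUPPORTED_MODELS:
        raise ValueError(f"unknown model: {model}")
    request = {"prompt": prompt, "model": model, "aspect_ratio": aspect_ratio}
    if reference_image:                     # optional reference-assisted run
        request["reference_image"] = reference_image
    return request
```

Testing GPT Image 2 is then a one-field change (`model="gpt-image-2"`); prompt, aspect ratio, and reference handling are untouched.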

Why This Release Matters for Production Teams

The impact is less about having more buttons and more about better asset leverage.

Many content teams already have:

  • saved avatar faces
  • generated hooks
  • reusable product assets
  • scheduling flows in place

With AI Clone, those assets can be recombined into another output type.

With GPT Image 2, avatar generation gets another model option that can improve fit for specific prompt or edit requirements.

Combined, this can shorten the time between brief and output, especially for teams running high content volume.

Suggested Rollout Plan for Teams

If you want a clean rollout internally, use a simple sequence:

  1. Pick one campaign or product line as a pilot.
  2. Build a small test set in AI Clone from existing avatars and hooks.
  3. Run the same avatar prompts with GPT Image 2 and your current default model.
  4. Keep only the outputs that are clearly more usable.
  5. Add the winners to your reusable character and asset workflow.

This keeps the update grounded in output quality, not novelty.
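One lightweight way to keep step 3 honest is to log every side-by-side run and tally which model's output was actually kept. A minimal sketch, assuming you record a manual kept/not-kept flag per run (the data below is invented for illustration):

```python
# Minimal side-by-side tally for model pilots. The "kept" flag is a
# manual editorial decision recorded per run; the data is illustrative.
from collections import Counter

def tally_kept_outputs(runs):
    """runs: list of (prompt, model, kept) tuples from paired tests."""
    kept = Counter(model for _, model, kept_flag in runs if kept_flag)
    total = Counter(model for _, model, _ in runs)
    return {m: f"{kept[m]}/{total[m]} kept" for m in total}

runs = [
    ("launch hook v1", "gpt-image-2", True),
    ("launch hook v1", "current-default", False),
    ("product close-up", "gpt-image-2", False),
    ("product close-up", "current-default", True),
]
```

A tally like this makes the pilot decision in step 4 a count, not an impression.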

Common Mistakes to Avoid During Adoption

Testing too many variables at once

If you change face, motion, voice, and prompt direction in every run, it becomes hard to understand what actually improved.

Saving outputs without curation

The release adds creative capacity, but quality still comes from selection discipline.

Treating model choice as a branding decision

Choose GPT Image 2 or another model based on output fit for your campaign needs.

Ignoring downstream publishing needs

Generation speed only helps when the result is ready for your actual content calendar and channels.

FAQ

Is AI Clone meant to replace existing avatar or hook workflows?

It is better viewed as an extension of them. It reuses face and motion assets you already have.

Do I need voice conversion for AI Clone?

No. Voice conversion is optional.

Is GPT Image 2 a separate feature area?

No. It is a model option inside AI Avatars.

Should teams switch all generation to GPT Image 2 immediately?

Usually no. A side-by-side test on real campaign prompts is a safer way to decide where it performs best for your workflow.

Final Take

This release strengthens two parts of the Reels Farm system: reusable input leverage and model flexibility.

AI Clone creates a new path from existing face and motion assets to fresh outputs, and GPT Image 2 adds another practical option in AI Avatar generation. Teams that adopt both with a clear testing workflow should see faster creative cycles with better control.
