Frequently Asked Questions

No — we were building this back when people still called it “deepfakes”. We have watched that narrative shift from deepfakes to synthetic media to generative AI. Our product is not just a wrapper around third-party APIs.

No, we have a 2-day, commitment-free trial that gives you 20 credits. We do not ask for payment information for the trial.

To use the platform services, we charge credits — 4 credits per minute of dubbing, 2 credits per minute of text-to-speech, and 1 credit per minute of subtitling.
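As a rough sketch, the credit math works out like this (the helper below is purely illustrative and not part of the product; the rate names are our own labels):

```python
# Illustrative only: credit rates from the pricing above.
RATES = {"dub": 4, "tts": 2, "subtitle": 1}  # credits per minute

def credit_cost(minutes: float, service: str) -> float:
    """Return the credit cost for `minutes` of a given service."""
    return RATES[service] * minutes

# Example: dubbing a 10-minute video and also subtitling it
total = credit_cost(10, "dub") + credit_cost(10, "subtitle")
print(total)  # 50 credits
```

So the 20 trial credits cover, for example, 5 minutes of dubbing, or 10 minutes of text-to-speech, or 20 minutes of subtitling.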

You have the option to use our product for free (freemium plan), where certain features and speakers are restricted. You can, of course, buy a pro subscription to keep using the pro features.

We support 60+ languages and have 500+ speakers. Many of our premium speakers were developed in-house from scratch, and we own our IP in audio generative AI.

For English and Spanish, our translations are over 95% accurate; for other languages, accuracy is around 85%.

We have built an editor for seamless human-in-the-loop changes. You can edit the translations, sync the voiceovers to the video, add multiple speakers, and much more. Tools like find-and-replace and transliteration are built into the editor for faster changes.

We recommend informational content — think short animated explainer videos, software walkthroughs, support videos, and learning/training videos.

We strongly recommend that you do not use Dubverse for entertainment-type content; the output quality will not be good.

Yes, absolutely! Once the video is dubbed on our platform, you can go to “Retune” mode and assign different lines to different speakers. We also have speaker diarization in beta, which will use AI to auto-detect and auto-assign speakers in the near future.

Usually, it takes under 4 minutes to dub a video. If it’s taking longer, please report this in the support chat.

Yes, we can set up an AI voice clone for you — fill the form on this page. Please note that voice cloning is not included in any subscription plan and is billed separately.

No, you’ll have to select the voice you want from our library of over 500 speakers.

We highly recommend a human review of all outputs generated by Dubverse. We have a vendor management service with language experts who can vet the videos and make corrections for you; please ping us on the chat to initiate a review. The review service is not included in any subscription plan and is charged separately.

We have most global and Indian languages covered. Unless you speak some esoteric language that only aliens can understand, we should be able to serve you. 😉

We want to help you preserve the hard work it takes to create quality content. Commercial rights are available only on pro subscriptions; no commercial rights are granted on trial or freemium accounts.

Yes! We share all the updates and exclusive deals with our community members first. Please join here.

Many media production houses use Dubverse-generated subtitles for their series and movies, so we’d say extremely accurate.

Yes — just make sure they have an active Dubverse account. You can then add them as collaborators on your folders.

No, we do not have public APIs as of now, but we plan to release them in 2024.

Please look at the product roadmap here.

We have experts on board through our partnerships with global language service providers. We share the video’s text, context, and user persona with them, and they complete the translations on the same platform, providing a seamless experience.

These are professional language experts, ranging across age groups and domains: educators, medical translators, financial translators, and more.

Got you covered! Our AI speakers have been tested on content mixing English with other languages, and they speak the desired language flawlessly.

We exist to break barriers across languages, and we have built our systems to handle exactly this.

Yes, we preserve the sound effects to keep the output quality as close to the source video as possible.