Vocaloids were invented in 2000, with commercial release in 2004. Human singers aren’t extinct yet.
It may be possible in the future for a synthetic voice to sound fully human with a full range of emotions. But I believe that human actors and voice actors will still be used, because 1) it’s easier to explain to a human professional what you want them to do, and 2) unions exist and they will push back against it.
Acting is an art. What world is it where robots do art while humans do the tedious manual labor?
I think you’re probably right, but a world where robots do art and humans do the tedious manual labor sounds eerily similar to the world we live in. At least, it is not outside the realm of possibility.
That is the world we currently live in.
Quite some work left to do to achieve a society with universal basic income, if even the technologies developed for that purpose are twisted and used against it.
A world where profits are put over people.
Well, I meant it more like “can you imagine how horrible such a world is?” Not just “can you imagine it?”
Because yeah, you barely need to imagine it at all
Oh. Got it.
Oh no… I figured it out. Quark never left this timeline when he jumped back to Roswell! We are living in a universe where Quark secretly runs the world! It’s the only explanation for this madness!
To add to that: it’s actually very popular for Vocaloid songs to be covered by humans.
It’s a great way into the field for song composers and lyric writers who don’t have the resources or connections to get in otherwise.
I also think it will likely be quite some time before AI can accurately reproduce the range of emotions a human can. Simple emotional responses, sure, but I’m not so certain about complex ones in the near future.
Vocaloids are far from perfect singers. It’s like saying that because abstract art was invented, all forms of art in the future would be abstract.
Also, looking at some of the current uses of AI voices, there’s no doubt they can be used for mainstream VA work https://youtu.be/FigIAAYHoW8?si=16pIkeSmhOwnuGde
Vocaloids are far from perfect, but they can be damn good in the hands of a good producer. Plus, isn’t that the original point? “AI VAs were invented, so soon all VAs will be AI”?
And producing the example you provided required a big voice bank from people who are very experienced in voice work. Top Gear/The Grand Tour have over 200 episodes where the hosts have played basically the same characters throughout, spanning like 20 years. And it still ain’t perfect. It’s damn good, but there are hiccups here and there.
So to produce a good AI voiceover, you’ll need experienced people doing a lot of work. And to get experienced human actors, you will need humans acting. Hence my point.
Yeah, but Vocaloids suck, and I’ve recently heard AI singing that made me double-check because it was so good.
Does this suck? To my ears, it doesn’t. Not unmistakably human by any stretch, but still pretty good. And that was 9 years ago.
And by “AI singing” do you mean “a famous voice overlaid on another singer’s performance” or something closer to text-to-speech (text-to-song)?
I don’t understand the language, nor am I familiar with that style, so I couldn’t really judge.
I’m not sure about your second point. I’ll keep it in mind, and the next example I come across, I’ll come back here to share.
Well, if you’re talking about the newest AI-powered UTAU voicebanks, that’s because the developers finally thought about crossing the streams: instead of having the singers merely pronounce syllables at several pitches, they used that data (expanded to also include several syllable clusters) to train an AI. Unlike most trained AI models, where the voice samples come from live performances and therefore vary in quality and in how many data points exist for each individual syllable, these have the full set of voice training data prerecorded by design, so every possible combination of phonemes is as clear as possible.
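If it helps, here’s a rough sketch of that data-design difference. This is just my own toy illustration, not the actual Dreamtonics/Synthesizer V pipeline; the unit lists, pitches, and field names are all made up:

```python
# Toy sketch of "purpose-recorded voicebank" vs "scraped live performances".
# Nothing here is a real product's pipeline; everything is illustrative.
from itertools import product

SYLLABLES = ["ka", "ki", "ku", "ke", "ko"]   # tiny subset, for illustration only
CLUSTERS  = ["kya", "kyu", "kyo"]            # example syllable clusters
PITCHES   = [220.0, 261.6, 329.6]            # a few reference pitches in Hz

def designed_corpus():
    """Purpose-recorded voicebank: every (unit, pitch) combination is captured
    in studio conditions, so coverage is complete and quality is uniform by design."""
    return [{"unit": u, "pitch_hz": p, "source": "studio"}
            for u, p in product(SYLLABLES + CLUSTERS, PITCHES)]

def scraped_corpus(performances):
    """Corpus built from live performances: you only get the units that happen
    to occur, at whatever pitch and recording quality they occur in."""
    return [{"unit": unit, "pitch_hz": pitch, "source": "live", "snr_db": snr}
            for perf in performances
            for unit, pitch, snr in perf]

if __name__ == "__main__":
    print(len(designed_corpus()),
          "uniformly recorded samples, one per unit/pitch combination")
```

The point is just that the first corpus has every phoneme/pitch combination by construction, while the second one covers whatever the singer happened to perform.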
That’s very interesting. Where can I read more about it?
https://dreamtonics.com/en/synthesizer-v-ai-announcement/