• CrabLangEnjoyer@lemmy.world
    1 year ago

    A current state-of-the-art AI model from Microsoft can achieve acceptable quality with about 3 seconds of audio; commercially available services like ElevenLabs need roughly 30 minutes. Quality will obviously vary heavily, but then again they're working from a low-quality phone call, so maybe that doesn't matter much.
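    For context, this kind of short-reference ("zero-shot") cloning isn't limited to Microsoft's research model; here's a minimal sketch using the open-source Coqui TTS library's XTTS v2 model (my own example, not the model referenced above), which clones a voice from a reference clip of only a few seconds. File names are placeholders.

    ```python
    # Minimal zero-shot voice cloning sketch with Coqui TTS (XTTS v2).
    # Assumes `pip install TTS` and a short reference clip (a few seconds) at reference.wav.
    from TTS.api import TTS

    # Load the multilingual XTTS v2 model (downloads weights on first run).
    tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2")

    # Synthesize new speech in the voice of the reference speaker.
    tts.tts_to_file(
        text="This sentence was never spoken by the reference speaker.",
        speaker_wav="reference.wav",   # placeholder path to the reference clip
        language="en",
        file_path="cloned_output.wav",
    )
    ```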

    • madsen@lemmy.world
      1 year ago

      With that little, they may be able to recreate the timbre of someone’s voice, but speech carries a multitude of other identifiers and idiosyncrasies that they’re unlikely to capture from so little audio: personal vocabulary (we don’t all choose the same words and phrasings), specific pronunciations (e.g. “library” vs. “libary”), voice inflections, etc. Obviously, the more training data you have, the better the output.