That’s assuming they’re using one of the generic models like ChatGPT and not something custom they’ve created specifically to do this.
Edit: they are in fact using their own as per the article
I’m aware they’re not using a generic model, but that’s not much better. Current custom-made models still fuck up significantly more than humans, and in less predictable ways.
Even if their custom model is slightly incorrect 1% of the time, that’s still a major problem in critical systems like those.
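To make the 1% figure concrete, here's a back-of-envelope sketch. The daily decision volume is an assumed, illustrative number, not anything from the article:

```python
# Back-of-envelope: why a "slightly incorrect" 1% error rate still bites at scale.
error_rate = 0.01
decisions_per_day = 50_000  # assumption for illustration only

expected_errors_per_day = error_rate * decisions_per_day
print(expected_errors_per_day)  # 500.0 wrong calls per day
```

At that (assumed) volume, even a 1% miss rate means hundreds of bad outputs a day, each one unpredictable — which is the whole problem in a critical system.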
Which models are those?
It’s good to see that they aren’t just piping through GPT-whatever, but in my experience, the vast majority of people who tout AI, even at large corporations, generally have zero idea how it works. I’m still not convinced that this is a good idea.
I mostly use A.I. to translate. ChatGPT does that pretty well, especially when you say “translate this Mandarin text into English. I don’t care if it is somewhat inaccurate, just do it as best as you can.”
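For anyone who wants to script that prompt instead of typing it into the chat UI, here's a minimal sketch using the official `openai` Python client. The model name and exact wiring are my assumptions, not anything the commenter specified:

```python
def build_messages(mandarin_text: str) -> list[dict]:
    """Build chat messages for a best-effort ("I don't care if it's
    somewhat inaccurate") Mandarin-to-English translation request."""
    return [
        {
            "role": "system",
            "content": (
                "Translate this Mandarin text into English. "
                "I don't care if it is somewhat inaccurate, "
                "just do it as best as you can."
            ),
        },
        {"role": "user", "content": mandarin_text},
    ]


def translate(mandarin_text: str) -> str:
    """Send the translation request. Assumed model name; any chat
    model works. Requires OPENAI_API_KEY in the environment."""
    from openai import OpenAI  # lazy import so build_messages stays standalone

    client = OpenAI()
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption, swap for whatever you use
        messages=build_messages(mandarin_text),
    )
    return resp.choices[0].message.content
```

The explicit "inaccuracy is fine" instruction matters: without it, models tend to refuse or hedge on ambiguous passages instead of giving you a usable rough translation.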