Thanks for listening and echoing some of my own doubts. I was kind of getting the feeling that MS Researchers were too invested in gpt and not being realistic about the limitations. But I hadn’t really seen others trying the two instance method and discarding it as not useful.
Here’s a recent story about hallucinations: https://www.cnn.com/2023/08/29/tech/ai-chatbot-hallucinations/index.html
The TL;DR: nobody has solved hallucinations, and the problem might not be solvable.
Which, given that these are LLMs, makes sense. They're statistical models with no grounding in any notion of truth. If the input hits the right channels, the model will output something undefined.
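To put it concretely, here's a toy sketch (the tokens and probabilities are made up for illustration, and this is nothing like a real model's scale): a language model is essentially sampling from a learned next-token distribution, and nothing in that machinery checks whether the continuation is true, only whether it's statistically likely.

```python
import random

# Toy "language model": a conditional probability table over next tokens.
# Nothing here encodes whether a continuation is *true* -- only how likely
# it looked in training. "Atlantis" is as samplable as "France".
next_token_probs = {
    ("the", "capital"): {"of": 0.9, "city": 0.1},
    ("capital", "of"): {"France": 0.5, "Atlantis": 0.5},  # plausible != true
}

def sample_next(context, probs, rng=random.Random()):
    """Sample a next token purely by weight, with no fact-checking step."""
    dist = probs[context]
    tokens, weights = zip(*dist.items())
    return rng.choices(tokens, weights=weights, k=1)[0]
```

Half the time this happily tells you the capital of France is Atlantis, and there is no component whose job it is to notice.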
The Microsoft guy tries to spin this as "creativity!", but creativity requires intent. This is more like a random number generator dealing out your tarot reading and you buying into it.