Has OpenAI provided an explanation for the very clear degradation of ChatGPT's performance with the GPT-4 model? Since the last two updates it has become significantly faster (which is fundamentally useless when we are capped at 25 messages every 3 hours…), suggesting that computational power has been reduced, while at the same time it is making unprecedented reasoning errors. As a regular user, I have noticed the emergence of gross errors that did not exist before, and especially a much greater capacity ...
Since the inclusion of ChatGPT in Bing Chat, I have speculated that Microsoft was going to buy OpenAI at some point.
They're probably going to degrade performance, focus on the Microsoft partnership (Bing already has a well-established web index), and then Microsoft will outright buy it and lock it behind a subscription.
This would make sense for any other company, but OpenAI is still technically a non-profit in control of the OpenAI corporation, the part that is actually a business and can raise capital. Considering Altman claims literal trillions in wealth would be generated by future GPT versions, I don't think OpenAI the non-profit would ever sell the company part for a measly few billion.