I don’t think it’s just marketing bullshit to think of LLMs as AI… The research community generally treats them that way, too. The AI section on arXiv is usually where you find LLM papers, for example.
That’s not a crazy hype claim like the “AGI” thing, either… It doesn’t imply sentience or consciousness or any particular semblance of life (and I’d disagree with MW that it needs to be “human” in any way)… It’s just a technical term for systems that exhibit behaviors learned from training data rather than explicit programming.