Am I the only one getting agitated by the word AI (Artificial Intelligence)?

Real AI does not exist yet,
atm we only have LLMs (Large Language Models),
which do not think on their own,
but can pass Turing tests
(fool humans into thinking that they can think).

Imo AI is just a marketing buzzword,
created by rich capitalistic a-holes,
who already invested in LLM stocks,
and are now looking for a profit.

          • Thorny_Insight@lemm.ee · 9 months ago

            I don’t understand what you’re even trying to ask. AGI is a subcategory of AI. Every AGI is an AI but not every AI is an AGI. OP seems to be thinking that AI isn’t “real AI” because it’s not AGI, but those are not the same thing.

            • BlanketsWithSmallpox@lemmy.world · 9 months ago

              AI has been colloquially used to mean AGI for 40 years. About the only exception has been video games, but most people knew better than to think the Goomba was alive.

              At what point did AI get turned into AGI?

      • Pipoca@lemmy.world · 9 months ago

        One low-hanging-fruit example that comes to mind is that LLMs are terrible at board games like chess, checkers, or Go.

        ChatGPT is a giant cheater.
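        To make the “cheater” point concrete: the usual failure is that the model eventually plays a move that is illegal in the current position. A minimal sketch of how you could check for that with the python-chess package; the move list is made up for illustration, not real ChatGPT output:

```python
# Check a list of moves (e.g. copied from an LLM chess transcript) for legality.
import chess

llm_moves = ["e4", "e5", "Nf3", "Nc6", "Bb5", "Qxf7"]  # hypothetical transcript; last move is illegal

board = chess.Board()
for number, san in enumerate(llm_moves, start=1):
    try:
        board.push_san(san)  # raises ValueError if the move is illegal or unparseable
    except ValueError:
        print(f"Move {number} ({san}) is not legal here -- the model is 'cheating'.")
        break
else:
    print("All moves were legal.")
```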

        • Hotzilla@sopuli.xyz · 9 months ago

          GPT-3 was cheating and playing poorly, but the original GPT-4 already played at the level of a relatively good player, even in the midgame (positions not found on the internet, which require understanding the game, not just copying). GPT-4 Turbo probably isn’t as good; OpenAI had to make it dumber (read: cheaper).

          • Pipoca@lemmy.world · 9 months ago

            Three-year-olds aren’t all that smart, but they learn in a way that ChatGPT 3 and ChatGPT 4 don’t.

            A 3-year-old will become a 30-year-old eventually, but ChatGPT 3 just kinda stays ChatGPT 3 forever. LLMs can be trained offline, but we don’t really know if that converges to some theoretical optimum at some point, or how far away from the best possible LLM we are.

      • esserstein@sopuli.xyz · 9 months ago

        Be generally intelligent, ffs. Are you really going to argue that LLMs posit original insight in anything?

        • intensely_human@lemm.ee · 9 months ago

          Can you give me an example of a thought or statement you think exhibits original insight? I’m not sure what you mean by that.

            • intensely_human@lemm.ee · 9 months ago

              No, I don’t think they are. I don’t think you are. I think you’re looking for any possible excuse not to talk to me.

              It’s the zeitgeist of our time. People only want to talk about these topics, these super important topics, without being challenged. It’s pathetic.

              “You’re not as intelligent as you think you are”

              Oh, did you come up with that insight all on your own?

      • doctorcrimson@lemmy.world · 9 months ago (edited)

        So basically the ability to do things or learn without direction, for tasks other than what it was created to do. For example, ChatGPT doesn’t know how to play chess and Deep Blue doesn’t write poetry. Either might be able to approximate correct output if tweaked a bit and trained on thousands, millions, or billions of examples of proper output, but neither is capable of learning to think as a human would.

        • intensely_human@lemm.ee · 9 months ago

          I think it could learn to think as a human does. Humans think by verbalizing at themselves: running their own verbal output back into their head.

          Now don’t get me wrong. I’m envisioning like thousands of prompt-response generations, with many of these LLMs playing specialized roles: generating lists of places to check for X information in its key-value store. The next one’s job is to actually do that. The reason for separation is exhaustion. That output goes to three more. One checks it for errors, and sends it back to the first with errors highlighted to re-generate.

          I think that human thought is more like this big cluster of LLMs all splitting up work and recombining it this way.

          Also, you’d need “dumb”, algorithmic code that did tasks like:

          • Compile the last second’s photograph, audio intake, infrared, whatever, and send it to the processing team.

          • The processing team is a bunch of LLMs, each with a different task in its prompt: (1) describe how this affects my power supply, (2) describe how this affects my goal of arriving at the dining room, (3) describe how this affects whatever goal number N is in my hierarchy of goals, (4) which portions of this input batch don’t make sense?

          • The whole layout of all the teams, the prompts for each job, all of it could be tinkered with by LLMs promoted to examine and fiddle with that (a rough sketch of this kind of pipeline follows below).
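          A rough sketch of what one round of such a team could look like, assuming an OpenAI-style chat API (openai Python package); the model name, prompts, and the specific roles are placeholders for illustration, not a claim about how this should actually be built:

```python
# Sketch of the "team of specialized LLMs" idea: a planner, a drafter, and an
# error-checker wired together. Model name and prompts are placeholders.
from openai import OpenAI

client = OpenAI()

def ask(role_prompt: str, task: str) -> str:
    """One specialized team member: a fixed role prompt plus the current task."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": role_prompt},
            {"role": "user", "content": task},
        ],
    )
    return resp.choices[0].message.content

def answer_with_team(question: str, max_rounds: int = 3) -> str:
    # Role 1: list what information is needed. Role 2: draft an answer from that plan.
    plan = ask("List the pieces of information needed to answer the question.", question)
    draft = ask("Answer the question using this plan:\n" + plan, question)
    # Role 3: check for errors and send them back to the drafter to regenerate.
    for _ in range(max_rounds):
        critique = ask("Point out factual or logical errors, or reply OK if there are none.", draft)
        if critique.strip().upper().startswith("OK"):
            break
        draft = ask("Revise the answer to fix these problems:\n" + critique,
                    question + "\n\nPrevious answer:\n" + draft)
    return draft
```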

          So I don’t mean “one LLM is a general intelligence”. I do think it’s a general intelligence within its universe; or at least as general as a human language-processing mind is general. I think they can process language for meaning just as deep as we can, no problem. Any question we can provide an answer to, without being allowed to do things outside the LLM’s universe like going to interact with the world or looking things up, they can also provide.

          An intelligence capable of solving real-world problems needs to have, as its universe, something like the real world. So I think LLMs are the missing piece of the puzzle, and now we’ve got the pieces to build a person as capable of thinking and living as a human, at least in terms of mind and activity. Maybe we can’t make a bot that can eat a pork sandwich for fuel and gestate a baby, no. But we can do AGI that has its own body with its own set of constraints, with the tech we have now.

          It would probably “live” its life at a snail’s pace, given how inefficient its thinking is. But if we died and it got lucky, it could have its own civilization, knowing things we have never known. Very unlikely; more likely it dies before it accumulates enough wisdom to match the biochemical problem set our bodies have solved over a billion years, for handling pattern decay at levels all the way down to organelles.

          The robots would probably die. But if they got lucky and invented lubricant or whatever the thing was, before it killed them, then they’d go on and on, just like our own future. They’d keep developing, never stopping.

          But in terms of learning chess they could do both things: they could play chess to develop direct training data, and they could analyze their own games, verbalize their strategies, discover deeper articulable patterns, and learn that way too.

          I think to mimic what humans do, they’d have to dream. They’d have to take all the inputs of the day and scramble them to get them to jiggle more of the structure into settling.

          Oh, and they’d have to “sleep”. Perhaps not all or nothing, but basically they’d need to re-train themselves on the day’s episodic memories, and their own responses, and the outcomes of those responses in the next set of sensory status reports.

          Their day would be like a conversation with ChatGPT, except instead of the user entering text prompts it would be their bodies entering sensory prompts. The day is a conversation, and sleeping is re-training with that conversation as part of the data.
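          As a hypothetical sketch of that loop, where every name below (read_sensors, act, fine_tune, and so on) is a placeholder rather than a real API:

```python
# Hypothetical sketch of "the day is a conversation, sleeping is re-training".
# Every object and method here is a placeholder for illustration only.

def live_one_day(model, body):
    transcript = []                        # the day's "conversation"
    while not body.is_night():
        observation = body.read_sensors()  # sensory input plays the role of the user's text prompt
        response = model.generate(observation)
        outcome = body.act(response)       # acting in the world shapes the next observation
        transcript.append((observation, response, outcome))
    return transcript

def sleep(model, transcript):
    # "Sleep": re-train on the day's episodic memories, the model's own responses,
    # and the outcomes of those responses, producing tomorrow's slightly different model.
    return model.fine_tune(examples=transcript)

def lifetime(model, body, days):
    for _ in range(days):
        model = sleep(model, live_one_day(model, body))
    return model
```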

          But there’s probably a million problems in there to be solved yet. Perhaps they start cycling around a point, a little feedback loop, some strange attractor of language and action, and end up bumping into a wall forever mumbling about paying the phone bill. Who knows.

          Humans have the benefit of a billion years of evolution behind us, during which most of “us” (all the life forms on earth) failed, hit a dead end, and died.

          Re-creating the pattern was the first problem we solved. And maybe that’s what is required for truly free, general, adaptability to all of reality: no matter how much an individual fails, there’s always more. So reproduction may be the only way to be viable long-term. It certainly seems true of life … all of which reproduces and dies, and hopefully more of the former.

          So maybe since reproduction is such a brutally difficult problem, the only viable way to develop a “codebase” is to build reproduction first, so that all future features have to not break reproduction.

          So perhaps the robots are fucked from the get-go, because reverse-building a reproduction system around an existing macro-scale being doesn’t guarantee that you hit one of the macro-scale forms that actually can be reproduced.

          It’s an architectural requirement, within life, at every level of organization. All the way down to the macromolecules. That architectural requirement was established before everything else was built. As the tests failed, and new features were rewritten so they still worked but didn’t break reproduction, reproduction shaped all the other features in ways far too complex to comprehend. Or, more importantly than comprehending, reproduce in technology.

          Or, maybe they can somehow burrow down and find the secret of reproduction, before something kills them.

          I sure hope not because robots that have reconfigured themselves to be able to reproduce themselves down to the last detail, without losing information generation to generation, would be scary as fuck.

      • Thorny_Insight@lemm.ee · 9 months ago (edited)

        Artificial intelligence might be really good, perhaps even superhuman, at one thing, for example driving a car, but that same competence doesn’t apply across a variety of fields. Your self-driving car can’t help with your homework. With artificial general intelligence, however, it does. Humans possess general intelligence; we can do math, speak different languages, know how to navigate social situations, know how to throw a ball, can interpret sights, sounds, etc.

        With a real AGI you don’t need to develop different versions of it for different purposes. It’s generally intelligent, so it can do it all. This also includes writing its own code. This is where the worry about an intelligence explosion originates from. Once it’s even slightly better than humans at writing its code, it’ll make a more competent version of itself, which will then create an even more competent version, and so on. It’s a chain reaction which we might not be able to stop. After all, it’s by definition smarter than us and, being a computer, also a million times faster.
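        A toy numeric loop, purely to illustrate why that chain reaction compounds rather than grows linearly (the numbers are invented and stand for nothing real):

```python
# Toy illustration of recursive self-improvement: each generation applies its own
# (slightly superhuman) ability to the job of building its successor.
capability = 1.01  # hypothetical starting point: barely better than a human coder
for generation in range(1, 11):
    capability = capability ** 2  # the improver's skill is turned on improving itself
    print(f"generation {generation:2d}: capability {capability:,.2f}")
# After ten generations capability is already in the tens of thousands: runaway growth.
```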

        Edit: Another feature that AGI would most likely, though not necessarily, possess is consciousness. There’s a possibility that it feels like something to be generally intelligent.

        • intensely_human@lemm.ee · 9 months ago

          I think that the algorithms used to learn to drive cars can learn other things too, if they’re presented with training data. Do you disagree?

          Just so we’re clear, I’m not trying to say that a single, given, trained LLM is, itself, a general intelligence (capable of eventually solving any problem). But I don’t think a person at a given moment is either.

          Your Uber driver might not help you with your homework either, because he doesn’t know how. Now, if he gathers information about algebra and then sleeps and practices and gains those skills, now maybe he can help you with your homework.

          That sleep, which the human gets to count on in his “I can solve any problem because I’m a GI!” claim to having natural intelligence, is the equivalent of retraining a model into a new model that’s different from the previous day’s model in that it’s now also trained on that day’s input/output conversations.

          So I am NOT claiming that “This LLM here, which can take a prompt and produce an output” is an AGI.

          I’m claiming that “LLMs are capable of general intelligence” in the same way that “Human brains are capable of general intelligence”.

          The brain alternates between modes: interacting and retraining, in my opinion. Sleep is “the consolidation of the day’s knowledge into structures more rapidly accessible and correlated with other knowledge”. Sound familiar? That’s when ChatGPT’s new version comes out, and it’s been trained on all the conversations the previous version had with people who opted into that.

          • Thorny_Insight@lemm.ee · 9 months ago

            I’ve heard experts say that GPT-4 displays signs of general intelligence, so while I still wouldn’t call it an AGI, I’m in no way claiming an LLM couldn’t ever become generally intelligent. In fact, if I were to bet money on it, I think there’s a good chance that this is where our first true AGI systems will originate from. We’re just not there yet.

            • Cethin@lemmy.zip · 9 months ago

              It isn’t. It doesn’t understand things like we think of with intelligence. It generates output that fits a recognized input. If it doesn’t recognize the input in some form it generates garbage. It doesn’t understand context and it doesn’t try to generalize knowledge to apply to different things.

              For example, I could teach you about gravity, trees, and apples and ask you to draw a picture of an apple falling from a tree and you’d be able to create a convincing picture of what that would look like even without ever seeing it before. An LLM couldn’t. It could create a picture of an apple falling from a tree based on other pictures of apples falling from trees, but not from just the knowledge of an apple, a tree, and gravity.

      • Cethin@lemmy.zip · 9 months ago

        I wrote this for another reply, but I’ll post it for you too:

        It doesn’t understand things like we think of with intelligence. It generates output that fits a recognized input. If it doesn’t recognize the input in some form it generates garbage. It doesn’t understand context and it doesn’t try to generalize knowledge to apply to different things.

        For example, I could teach you about gravity, trees, and apples and ask you to draw a picture of an apple falling from a tree and you’d be able to create a convincing picture of what that would look like even without ever seeing it before. An LLM couldn’t. It could create a picture of an apple falling from a tree based on other pictures of apples falling from trees, but not from just the knowledge of an apple, a tree, and gravity.