“Handed directly to resident.”
Great, now we’ll have separate “California-model” AI models, like cars.
Nah… probably just missing their clothes and equipment.
Each possible origin needs a road to each possible destination. No intersections, no traffic, problem solved!
I’m sorry it troubles your mind so much that you had to make a post about it.
Personally witnessed or it didn’t happen.
Taking a step back, I wonder… we are reading this stuff now, so it affects us too. What if we have already stepped into a linguistic death-spiral of a telephone game, where each generation gets rehashed garbage from the last?
Multiple endings! Refund, pay-up, audit, and no-knock raid!
This reminds me of my old phone. I downloaded a podcast on it that had a shock-opener and for some reason was always “the next thing” the sound/music player wanted to play. So many times, from an accidental touch input or a click of the headphone button or the like, my phone would randomly scream: “WHO DOESN’T LIKE TO PEE IN THE SINK!?!?!”
Both practically and theoretically, it might be impossible. It basically comes down to trusting trust. https://www.youtube.com/watch?v=SJ7lOus1FzQ
Officially unofficial.
To assume that a GPT is right is to assume everything on the internet is right, since that is what it arose from.
Perhaps more important is to have devices fail open… if the OEM has lost interest in it, let others support the device. Make e-waste valuable and avoidable.
I assume that is the intended purpose of the Wago connector over the hot line.
Even if AI never actually takes someone’s job, it’s clear that the hype surrounding it can displace workers, and its use in screening candidates may prevent you from finding another.
Another missed opportunity to bring back the headphone jack.
I’m not talking about one-offs and the assessment noise floor, more like: “ChatGPT broke the Turing test” (as is claimed). It used to be something we tried to attain, and now we don’t even bother trying to make GPTs seem human… we actually train them to say otherwise, lest people forget. We figuratively pole-vaulted over the Turing test and are now on the other side of it, as if it were a point on a timeline instead of an academic procedure.
The natural general hype is not new… I even see it in 1970s sci-fi. It’s like once something pierced the long-thought-impossible Turing test, decades of hype pressure suddenly and freely flowed.
There is also an unnatural hype: that with one breakthrough will come another, and that the next one might yield a technocratic singularity for the first mover: money, market dominance, and control.
Which brings the tertiary effect (closer to your question)… companies are so quickly and blindly eating so many billions of dollars in first-mover costs that the corporate copium wants to believe there will be a return (or at least some cost defrayal)… so you get a bunch of shitty AI products, and pressure toward them.
No, Neo. When you’re ready… you won’t need a lighter.
Seems like asking for trouble… and lawsuits.