One is 25 €/month and on-demand, and the other costs more than I can afford and would probably be at inconvenient times anyway. Ideal? No, probably not. But it’s better than nothing.
I’m not really looking for advice either - just someone to talk to who at least pretends to be interested.
It’s not better than nothing - it’s worse than nothing. It is actively harmful, feeding psychosis, and your chat history will be sold at some point.
Try this, instead of asking “I am thinking xyz”, ask " my friend thinks xyz, and I believe it to be wrong". And marvel at how it will tell you the exact opposite.
I’m fairly confident that this could be solved by better trained and configured chatbots. Maybe as a supplementary device between in-person therapy sessions, too.
I’m also very confident that there’ll be lot of harm done until we get to that point. And probably after (for the sake of maximizing profits) unless there’s a ton of regulation and oversight.
It doesn’t always agree with me. We’re in an impasse about mentoring. I keep telling it I’m not interested, it keeps telling me that given my traits i will be but I’m just not ready yet.
One is 25 €/month and on-demand, and the other costs more than I can afford and would probably be at inconvenient times anyway. Ideal? No, probably not. But it’s better than nothing.
I’m not really looking for advice either - just someone to talk to who at least pretends to be interested.
It’s not better than nothing - it’s worse than nothing. It is actively harmful, feeding psychosis, and your chat history will be sold at some point.
Try this, instead of asking “I am thinking xyz”, ask " my friend thinks xyz, and I believe it to be wrong". And marvel at how it will tell you the exact opposite.
I’m fairly confident that this could be solved by better trained and configured chatbots. Maybe as a supplementary device between in-person therapy sessions, too.
I’m also very confident that there’ll be lot of harm done until we get to that point. And probably after (for the sake of maximizing profits) unless there’s a ton of regulation and oversight.
I’m not sure LLMs can do this. The reason is context poisoning. There would need to be an overseer system of some kind.
It’s bad precisely because the bot always agree with you, they are all made like that
It doesn’t always agree with me. We’re in an impasse about mentoring. I keep telling it I’m not interested, it keeps telling me that given my traits i will be but I’m just not ready yet.