That sounds very reasonable as a prediction. I could see it being a pretty interesting black mirror episode. I would love it to stay as fiction though.
Sorry this is months after the fact, but it’s cool to see it worked. I use software called XXX Agile, and it’s not the worst I work with, but as ported to my company it has some flaws. There’s a long-running project to switch somewhere else for document control, and people who should know much better than me are worried it will fill some gaps but open us up to way more.
Maybe it was written incorrectly, but he said he’s sure Chauvin doesn’t regret his actions. That’s an interesting point when arguing for rehabilitation, because changing a mindset like that would take copious amounts of time before they’d be safe in public. Maybe 20 years to rehabilitate.
I absolutely love the first few seasons of Alone. I can’t vouch for the rest because I haven’t watched them yet, but that’s some amazing survival type reality TV.
Should’ve retired like 40 years ago. I hope she was able to pass on that skill set already. I hope this was a case of “that’s what I wanted to do today” and not a case of “that’s what has to happen today”. When at war, I can understand that priorities change. She probably has a strong bond to going above and beyond for her country, but no one should have to feel that way. I have great respect for her, and I feel great sadness that her skills are even needed currently.
This is my biggest concern. I’m in a position where (potentially in the near future) I see AI being used as an excuse to do work quicker so we can focus on other things, while we still have to review the AI output before agreeing/signing off. In a strongly regulated environment, reviewing for accuracy takes just as long as doing the work yourself, because it comes down to revisions and document numbers, much less making a sound argument that’s actually up to date with that documentation. So either I trust the AI shortcut and open myself up to errors, or I redo all the work myself. No gain in time efficiency, just shorter timelines. I’d rather make something myself and have the AI flag things for me to check, so I’m more confident in my own work. What I do shouldn’t be faster, but it can be more error-free. That would take a lot of training, and retraining with each iteration of documentation change. I could end up a slave to change, with more expectations and no actual improvement in the tools I have (in fact, more risk of issues from the tools being used).
I found it very tone deaf. I do think this was written and shot before the harassment allegation came out, so they wouldn’t have had that attitude intentionally, given the video had only been delayed by 24 hours.
I’ve dabbled in virtual assistants because I wanted to see what they can do. Siri (it’s been years, so I don’t know if it’s improved), Alexa, and Google are all horse shit. Every time I try to use them, it works like garbage. They either trigger incorrectly or try to do something I don’t want. The few times they do work correctly, I don’t trust them because of all the other garbage experiences, so I have to double-check what they did. That negates the entire point from a time and convenience standpoint.
You’re exactly right. They are legally required to turn it over when compelled. Let’s keep that mess away from the federation. It will only get worse.
I see it as a classic case of intent vs. outcome. If someone tries to commit atrocities and fails, their moral character is just as bad. People can change and reform, but the attempt, the enthusiasm, and the time involved are all bigger signals than how badly the victim was actually affected. Past a certain point, incompetence can’t be a defense for evil.