Let’s imagine the possibilities and theoretically demo the results based on current knowledge:
1. Yes, AI made the process fast and the patient did not die unnecessarily.
2. Same, but the patient died well.
3. Same, but the patient died.
4. Same as either 1, 2, or 3, but AI made things slower.
Demo:
Pharmacy: Patient requires amoxicillin for a painful ear infection while being allergic to penicillin.
AI: Sure! You will find penicillin in Aisle 23, box number 5.
Pharmacy: The patient needs amoxicillin, actually.
AI: Sure! The patient must be having an allergic reaction to more commonly used anti-inflammatory medications.
Pharmacy: Actually, amoxicillin is more of an antibiotic. Where can I find it?
AI: Sure! While you are correct that amoxicillin is an antibiotic, it is a well-studied result that inflammation is reduced after an infection. You can find the inflammation throughout the body, including the region where the infection is located.
Pharmacy: Amoxicillin location!
AI: Sure! Amoxicillin was developed at Beecham Research Laboratories.
What baffles me is why would you use an LLM when what you need is a digital inventory manager. Not bashing your argument’s merits. On the contrary, I think it depicts very well how people will shove AI-marketed shit on already-solved problems and make everyone’s lives worse because it’s ✨modern✨.
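To put "already solved" in concrete terms: the whole demo above is a deterministic lookup plus a class-allergy flag, a few lines of boring code. Rough sketch below; the inventory data, the drug-class table, and the function name are all made up for illustration, not anyone's actual system.

```python
# A deterministic "where is it and is it safe" lookup: the kind of boring tool
# the demo above actually calls for. All data below is invented for illustration.

INVENTORY = {
    "amoxicillin": ("Aisle 23", "Box 5"),
    "ibuprofen": ("Aisle 4", "Box 2"),
}

# Illustrative drug-class mapping, not clinical reference data.
DRUG_CLASS = {
    "amoxicillin": "penicillin",
    "ibuprofen": "nsaid",
}

def locate(drug: str, patient_allergies: set[str]) -> str:
    """Return the shelf location, flagging a drug-class allergy instead of guessing."""
    if drug not in INVENTORY:
        return f"{drug} is not stocked."
    aisle, box = INVENTORY[drug]
    note = ""
    if DRUG_CLASS.get(drug) in patient_allergies:
        note = " WARNING: patient is allergic to this drug class."
    return f"{drug}: {aisle}, {box}.{note}"

print(locate("amoxicillin", {"penicillin"}))
# amoxicillin: Aisle 23, Box 5. WARNING: patient is allergic to this drug class.
```

No chatbot, no hallucinated pharmacology, and it actually flags that amoxicillin is a penicillin-class drug instead of cheerfully pointing at aisle 23.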
It’s the same crap as with blockchain.
People have no idea how sophisticated modern IT systems already are, and if you glue fancy words onto solved problems, people will cheer you for being super innovative.
Ugh, blockchain. During the pandemic I had absolutely no work to do, so my boss asked me to make a presentation on the merits of blockchain for him to present. When my response was that it’s overhyped bullshit, he was not thrilled.
I made the requested presentation, but it made me feel dirty, so I alt-texted every slide’s graphics to include the counterpoint to the bullshit benefits being presented.
My company tried to jump onto the bandwagon in 2018 or so, but it fizzled out very quickly. Fortunately.
It’s not if you actually know what it is and what it’s for… A trustless public ledger.
My question is: is it being used for inventory management? Or is it being fed the entire patient file to make sure the pharmacist doesn’t make a mistake as well, double-checking for conflicts in prescription interactions and stuff like that?
Should it be relied on as the only thing? No. Is it nice to have another set of eyes on every task? Probably? Could this be solved by hiring more pharmacy techs and an education system not driven by investors’ profit margins, one that actually develops the workforce’s technical skills? Yes.
Idk. Just sounds like shitty companies being shitty companies all the way down.
Further to this, to err is human, so why would you start to rely on something that’s confidently incorrect so often?
It’s only a matter of time before this misleads someone terribly.
Actual pharmacist here, working in pharmacy IT.
Unlike other industries, Pharmacy is not particularly thrilled about or interested in AI. In fact, my hospital explicitly blocks access to all LLMs.
I was actually kind of hoping to see what Microsoft is claiming here, and just walked away from this post more confused.
I think it’s in reference to this: https://news.microsoft.com/source/asia/features/taiwan-hospital-deploys-ai-copilots-to-lighten-workloads-for-doctors-nurses-and-pharmacists/
Looks like the benefit/headline comes from use of the entire software suite that provides access to a patient’s chart/medical history, including checks for interactions/allergies. Most of that has nothing to do with AI, but since it has a feature that generates a summary via a language model, the whole thing is marketed as an AI Copilot.
Good thing; you don’t want medical advice from an LLM.
You’re not doing great taking medical advice from a doctor either, seeing how often they’re wrong.
That’s fair, but they tend to be more right than an LLM :P
I was trying to find an article I read about a year ago, about an experiment where an AI was assisting a doctor by suggesting questions and possible diagnoses for the doctor to look into.
IIRC the result was both faster and more accurate diagnosis. Too bad I can’t find it again now :(
Listening to employees when making decisions, what a concept! It’s a shame many places don’t do that.
Is “pharmacists seeing more patients” really a measure of something good? I’m a non-native English speaker, so cut me some slack, but all I can imagine is just longer queues in the pharmacy and more tired pharmacists (and more people who now need to wait in the queue).
“Pharmacists seeing more patients” implies that the queue moves quicker.
A pharmacist only has so much time in their shift, so being able to use that time more effectively (see more people) would be a good thing.

That’s a noble goal, but does adding more people help the (long-term only, please) effectiveness? At what point does it start hindering it?
I would assume that someone like a pharmacist has to be focused all the time; the stakes are high…
Do we have precise data about how the physiological state of a pharmacist changes through a shift? Do we know whether or not the pauses between patients, which we might or might not have considered wasted time, are actually essential for their ability to stay focused and reliable? (Is the answer the same for all of them?) Or maybe they could actually still use part of that time in a productive way, right? Also, why is there a lack of people in the first place?
Focusing solely on adding more people to the equation seems to neglect factors like this. This tells me that whoever this factoid is trying to impress is not someone who I would want to trust with managing a pharmacy (or anything except maybe some production line) in the first place.
Suggestion: BS from MS about AI helping a pharmacist filling Rx
I read it as AI somehow making more people sick, therefore more of them needing to go see pharmacists, therefore pharmacists seeing more patients.
That’s a more realistic take. I for one would want the pharmacist to get AI help; that’s fine. But not to start taking double the patients. There’s a people-interaction aspect to this too. It’s health care, not care for animals to get them ready for tomorrow’s dinner. But seriously, don’t eat animals, they’ve got feelings too.
They are using AI to help the pharmacist decrypt the doctor’s handwriting.
That’s actually not a terrible idea.
What I need is AI to fix my doctor visits. Seems like those fucks expect you to be timely but then make you wait in their waiting room for 15 minutes and then an additional 30 inside the patient room. Oh sure, our time is unimportant, it’s all about you, doc.
Doctors are understaffed and underpaid because insurance is taking all the profit.
Even if this were true, did the pharmacists get a raise? Are they making more money? Or are they just seeing more patients (doing the extra emotional and mental labor that entails) and paying less attention to each one while Safeway and Walgreens pocket any increased revenue?
If anything, their tech hours got reduced.
I’ll take the non-AI using pharmacist for the win. Thank you very much.
Expert Systems are great for pharmacies, not the bullshit generators currently labeled as “AI.”
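For anyone who hasn’t run into one: an expert system here just means explicit, auditable rules with deterministic output. A minimal sketch of the idea, with a toy interaction table that is invented for the example and is not clinical data:

```python
# Toy expert-system-style check: explicit rules, deterministic output, nothing generated.
# The interaction table is a made-up toy example, not clinical reference data.

INTERACTION_RULES = {
    frozenset({"warfarin", "ibuprofen"}): "increased bleeding risk",
    frozenset({"simvastatin", "clarithromycin"}): "raised statin exposure",
}

def check_interactions(drugs: list[str]) -> list[str]:
    """Return a flag for every pair of prescribed drugs that matches a rule."""
    flags = []
    for i, first in enumerate(drugs):
        for second in drugs[i + 1:]:
            reason = INTERACTION_RULES.get(frozenset({first, second}))
            if reason:
                flags.append(f"{first} + {second}: {reason}")
    return flags

print(check_interactions(["warfarin", "metformin", "ibuprofen"]))
# ['warfarin + ibuprofen: increased bleeding risk']
```

Every flag traces back to a rule a pharmacist can read, audit, and veto, which is exactly the property a probabilistic text generator can’t offer.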
I am sure that AI leads to them seeing more people that need help
The pic being blurred and all, I thought it was going to be some dad joke around “pharmacist can see more patients.”
“Generate me 4096 images of pharmacy patients!”
I second the comment about this being a reason to reduce technician hours. I worked at the busiest store in my district for the last 15 years of my career. We went from 3 pharmacists with several hours of overlap on weekdays down to 2 pharmacists with no overlap. Tech hours were once high enough to have 5 technicians on between 10 and 6, down to only having 5 total on staff. We went from a 24-hour location down to being open only 11.5 hours a day. We were one block up from a Walgreens and one block down from a RiteAid that both ended up closing, and we got most of their customers, who walked there. We had 2 major exoduses of staff and lost a good number of long-time patients in the enshittification.
Even in a world where some new AI model could improve pharmacist throughput, it doesn’t compare to the skeleton crewing of corporate pharmacy bottom-line-go-up.
And because their LLM generated advice to people is bound to kill some of them, they can ‘see’ even more of them!