Writing customer/company-wide emails is a good example. “Make this sound better: we’re aware of the outage at Site A, we are working as quick as possible to get things back online”
Dumbing down technical information is another: “word this so a non-technical person can understand: our DHCP scope filled up and there were no more addresses available for Site A, which caused the temporary outage for some users”
Another is feeding it an article and asking for a summary; https://hackingne.ws/ does that for its Bsky posts.
Coding is another good example: “write me a Python script that moves all files in /mydir to /newdir” (a rough sketch of what that might look like is just after this list)
Asking it to summarize a theory or protocol: “explain to me why RIP was replaced with RIPv2, and what problems people have had since with RIPv2”
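For reference, the script that coding prompt asks for is only a few lines. A rough sketch of the kind of thing it produces (the /mydir and /newdir paths are just the placeholders from the prompt) looks like this:

```python
# Rough sketch of the "move all files in /mydir to /newdir" request.
# /mydir and /newdir are just the placeholder paths from the prompt.
import shutil
from pathlib import Path

src = Path("/mydir")
dst = Path("/newdir")
dst.mkdir(parents=True, exist_ok=True)

for item in src.iterdir():
    if item.is_file():  # move files only, leave subdirectories alone
        # shutil.move works across filesystems, unlike Path.rename
        shutil.move(str(item), str(dst / item.name))
```

You'd still want to skim it before running it, same as anything else it generates.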
It’s not good for summaries. It often gets important bits wrong, like embedded instructions that can’t be summarized.
My experience has been very different, though I do sometimes have to add to what it summarized. The Bsky account mentioned is a good example: most of the posts are very well summarized, but every now and then there will be one that isn’t as accurate.
The dumbed-down text is basically as long as the prompt. Plus you have to double-check it to make sure it didn’t say “outrage” instead of “outage”, just like if you wrote it yourself.
How do you know the answer about why RIP was replaced with RIPv2 is accurate, and not just a load of bullshit like putting glue on pizza?
Are you really saving time?
Yes, I’m saving time. As I mentioned in my other comment:
Yeah, normally my “Make this sound better” or “summarize this for me” is a longer wall of text that I want to simplify; I was trying to keep my examples short.
And
and helps correct my shitty grammar at times.
And
Hallucinations are a thing, so validating what it spits out is definitely needed.
How do you validate the accuracy of what it spits out?
Why don’t you skip the AI and just use the thing you use to validate the AI output?
Most of what I’m asking it about are things I have a general idea of, and AI is good at making short explanations of complex things. So typically it’s easy to spot a hallucination, and the pieces I don’t already know are easy to Google and verify.
Basically I can get a shorter response with the same outcome, and validating those small pieces saves a lot of time (I no longer have to read a 100-page white paper; instead I read a few paragraphs and then verify the small bits).
Dumbed down doesn’t mean shorter.
If the amount of time it takes to create the prompt is the same as it would have taken to write the dumbed-down text, then the only time you saved was not learning how to write dumbed-down text. Plus, you need to know what dumbed-down text should look like to tell whether the output is dumbed down but still accurate.