I found ChatGPT to be indispensable in dealing with a nonsense homework assignment. I had to write a paper on a subject I knew about, but from a specific viewpoint and in far less time than would normally be allowed. I wrote up an outline, checked my sources, etc., then told ChatGPT to flesh it out. I read through the output, made some adjustments, and reprocessed. When I was happy with the result, I had it write a closing paragraph. Once again, I read through the output, made adjustments, and reprocessed. Same with the opening paragraph.
Lastly, to remove all traces of AI, I ran it all through QuillBot and had the text made more academic in some places, more casual in others.
Lesson: know your subject before attempting this. ChatGPT can be a time saver, but only if you already understand the output. Think of it as you would an advanced spelling and grammar checker. It’s just another tool. After all, if your boss told you to write something, and you could do it in a quarter of the expected time and still produce acceptable output, would they be upset?
That’s probably the most useful way we can use language models.
But I do think there is also a use for finding key information (like a name) much faster, without needing keywords the way we do in search engines.
Once you have the keyword, any follow-up research becomes much easier, and if that’s the goal, you’ll also confirm the validity of the answer along the way…
P.S. The first time I used ChatGPT this way was very convincing. I was looking for a philosophical point of view but didn’t know whether it even had a name; it gave me the answer extremely quickly from just a few lines of explanation (ontic structural realism, for anyone interested).
My pretty clueless colleague wants a license for a programming AI. I tested some for a while, and while they can certainly ease tedious tasks or build a scaffold you can check and populate with logic, they’re not useful as an artificial colleague writing for you. I really don’t want her to work that way, because she doesn’t know what she’s doing and seldom checks her own code. I’d prefer she learn to code before delegating parts of it to an AI.
That’s just it. You have to understand the output; otherwise you get very authoritative-looking garbage.
I wrote some rudimentary Python code to do a job by brute force. On a whim, I asked ChatGPT to optimize the code. It did a pretty good job, but I still had to tweak it for my purpose.
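The original code isn’t shown, but as a hypothetical illustration of the kind of rewrite an LLM tends to suggest, here’s a brute-force pair-sum check and a faster equivalent (both functions here are made up for the example, not the commenter’s actual code):

```python
# Brute force: test every pair for a target sum -- O(n^2) comparisons.
def has_pair_brute(nums, target):
    for i in range(len(nums)):
        for j in range(i + 1, len(nums)):
            if nums[i] + nums[j] == target:
                return True
    return False

# Optimized: single pass with a set of values seen so far -- O(n) on average.
def has_pair_fast(nums, target):
    seen = set()
    for n in nums:
        if target - n in seen:  # set membership is an average O(1) lookup
            return True
        seen.add(n)
    return False
```

Even with an improvement like this, you’d still verify the two versions agree on your inputs before trusting it, which is exactly the “tweak it for my purpose” step.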