On the flip side, anytime I’ve tried to use it to write Python scripts for me, it always seems to get them slightly wrong. Nothing that a little troubleshooting can’t handle, and it certainly gets me in the ballpark of what I’m looking for, but I think it still has a little ways to go for specific coding use cases.
I think the key there is that ChatGPT isn’t able to run its own code, so all it can do is generate code that “looks” right, which in practice is close to functional but not quite. For the code it writes to work reliably, I think it would need a built-in interpreter/compiler to actually run the code, iterate with small modifications until the code runs, and then return the final result to the user.
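Roughly, the loop I’m imagining looks like this. Just a sketch: `generate_fix` stands in for whatever model call would produce a revised script, and a real version would need sandboxing and a smarter stopping rule than a fixed attempt count.

```python
import subprocess
import tempfile

MAX_ATTEMPTS = 5

def run_script(code: str) -> tuple[bool, str]:
    """Run a candidate script and report whether it exited cleanly."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    result = subprocess.run(
        ["python", path], capture_output=True, text=True, timeout=30
    )
    return result.returncode == 0, result.stderr

def generate_fix(code: str, error: str) -> str:
    """Placeholder for the model call that returns a revised script."""
    raise NotImplementedError

def iterate_until_it_runs(initial_code: str) -> str:
    code = initial_code
    for _ in range(MAX_ATTEMPTS):
        ok, error = run_script(code)
        if ok:
            return code                       # script ran without errors
        code = generate_fix(code, error)      # ask the model to patch it
    return code                               # give up, return last attempt
```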
The new code interpreter is able to run its own code, but I haven’t personally tested it to see whether its code is more often functional.
That’s the same for me with PowerShell and bash scripts.
Granted, I also never tell the AI certain details, like file names or folder locations. I always give it generic names that can’t be linked back to me so easily, and then swap in the real information once I’ve copied the script out.
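For what it’s worth, that final swap can be done mechanically. This is just a sketch; the placeholder strings and real values here are made up and would be whatever generic names you actually fed the model.

```python
from pathlib import Path

# Hypothetical mapping from the generic placeholders given to the model
# to the real values that never left my machine.
REPLACEMENTS = {
    "C:/fake/input_folder": "D:/work/reports/2023",
    "fake_user": "my_actual_username",
    "example_file.csv": "quarterly_sales.csv",
}

def fill_in_real_values(script_text: str) -> str:
    """Substitute the real names back into the generated script."""
    for placeholder, real in REPLACEMENTS.items():
        script_text = script_text.replace(placeholder, real)
    return script_text

generated = Path("generated_script.py").read_text()
Path("generated_script.py").write_text(fill_in_real_values(generated))
```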
I’ve had to do follow-ups with the AI, like when I get an error, and I just keep using the original fake information in those too.
Most of the time it’s been really good at helping correct errors that come up from what it gave me. I can remember at least one time when we went around in circles and I ended up scrapping it and restarting in another direction.