Does having to look back at 4 of your old code examples to write 10 lines count?
I didn’t google it…
Isn’t that the idea? Like, you know you had a viable solution to a complex problem previously, so why go through the trouble of solving it again if you already did? Even if you have to modify it, it saves time for new, novel problems. I’m
a viable solution to a complex problem
You mean how to structure a for loop in a bash script? Lmao
Yes
You are?
Yeah.
My company starts all new projects from a skeleton of the last project including shared directories of usual functions we’ve created over time.
Sorry, I was trying to parse the “I’m” at the end of your comment
Ah, just a typo. Or my alter ego almost escaped.
That’s the way. I’ve been programming for nigh on four decades, and it’s an almost daily occurrence: junior devs going to Stack Overflow or ChatGPT to solve an issue instead of just searching the codebase, where nine times out of ten the problem (or a very similar one) is already solved.
Been doing a whole lot less of that now that copilot is up and running. Didn’t expect it to be such a productivity booster tbh.
… I tend to get downvotes when I say Copilot has improved my work a ton.
Most of my code isn’t groundbreaking shit. I gotta write a script for my task. It’s 90% copying with 10% modifications for my use case.
It also does comments for me…
I wrote a script the other day in like 30 min, tested and working, that would’ve been 2 hrs easy.
Yeah, I was lucky that I snuck into my company’s pilot program for it.
I’m impressed at how often it predicts what I’m about to do. The code almost always needs a slight bit of editing, but it almost always at least shaves a bit of time off of whatever task I was doing.
I no longer go straight to Stack Overflow; I always ask Copilot first. Sometimes just phrasing the question in natural language, something I wouldn’t do when trying to find it via search or Stack Overflow, is kind of like rubber duck debugging, and I’ll come up with the answer while writing it out.
My fav thing is two things:
- It reuses MY OWN CODE STYLE. So if I ignore a suggestion and set up a try/catch in my own quirky way, it’ll actually reuse it later on when I’m scripting. This works best when you add comments for the sections you write FIRST. So you comment “# create array for x data” and it’ll do that, or “# try catch for query” and it’ll give you a suggestion for the next block right away (rough sketch of the pattern below).
- DEBUGGING. GitHub Copilot can see your terminal and script, so it’ll give you a detailed breakdown and suggestions. Blew my mind the first time.
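To make that concrete, here’s roughly what the comment-first pattern looks like (a made-up Python sketch on my end, not actual Copilot output; the in-memory SQLite table is only there so the snippet runs by itself):

```python
import sqlite3

# throwaway in-memory database just so the snippet runs on its own
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE readings (x REAL, y REAL)")
conn.executemany("INSERT INTO readings VALUES (?, ?)", [(1.0, 2.0), (2.0, 4.1), (3.0, 5.9)])

# create array for x data
# (after a comment like this, the next line or two is what the tool tends to suggest)
x_data = [row[0] for row in conn.execute("SELECT x FROM readings")]

# try catch for query
# (same idea: the comment cues a block in whatever style you've written before)
try:
    y_data = [row[0] for row in conn.execute("SELECT y FROM readings")]
except sqlite3.Error as err:
    print(f"query failed: {err}")
    y_data = []

print(x_data, y_data)
```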
In my experience, Copilot does a fairly good job when you already know what you’re doing, but can’t be bothered to write the code yourself.
For example, basic stuff like: read data from that file, use dplyr, remove these columns, do these calculations, plot the result using ggplot2, label the axes this way, use those colors, etc. Copilot gives you code that does roughly what you want, but you usually need to tweak it a bit to suit your preferences. Copilot also makes absurd mistakes, but fixing them is fairly easy. If this is the sort of stuff you’re doing, Copilot can indeed boost your productivity.
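For a sense of the kind of boilerplate I mean, here’s the same sort of workflow sketched in Python with pandas/matplotlib instead of dplyr/ggplot2 (the file name and column names are placeholders I made up):

```python
import pandas as pd
import matplotlib.pyplot as plt

# read the data and drop the columns we don't need
# ("measurements.csv" and every column name here are placeholders)
df = pd.read_csv("measurements.csv")
df = df.drop(columns=["notes", "operator"])

# a simple derived quantity per group
summary = df.groupby("sample")["value"].mean().reset_index()

# plot the result, label the axes, pick the colours
plt.bar(summary["sample"], summary["value"], color="steelblue")
plt.xlabel("Sample")
plt.ylabel("Mean value")
plt.title("Mean value per sample")
plt.show()
```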
However, if you don’t know how to do something a bit more exotic, like principal component analysis, and you ask Copilot to do the job for you, expect plenty of trouble. You may end up on a wild goose chase, using the wrong tools, doing unnecessary calculations and all sorts of crazy nonsense. When you know what you’re doing, you can ask for a very specific thing. When you don’t, you may end up being too ambiguous in your prompt, which will result in Copilot leading you down the wrong path.
You can do it this way too, but before implementing a single line of that garbage code, you absolutely have to ask Copilot a bunch of questions just to make sure you really understand what you’re doing, what the new functions do, where you really want to go, etc. You’re probably going to have to tweak the code before running it, and that’s why you need to know what you’re doing. That’s the one big area you can’t outsource to Copilot just yet.
But is it still faster than reading the documentation and building your own experimental tests? If you spend an hour and get a pile of broken garbage, then certainly not. If you spend a bit more time, ask plenty of questions, and make sure you know what you’re doing, then maybe it is worth it.
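For reference, the “know what you’re doing” baseline for something like PCA is roughly this scikit-learn sketch (my own illustration with toy data, not Copilot output); if you can’t sanity-check a suggestion against something like it, you’re in wild-goose-chase territory:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# toy data: 100 observations, 5 features with some built-in correlation
rng = np.random.default_rng(0)
base = rng.normal(size=(100, 2))
X = np.hstack([base, base @ rng.normal(size=(2, 3))])
X += rng.normal(scale=0.1, size=X.shape)

# standardise first: PCA is sensitive to feature scale
X_scaled = StandardScaler().fit_transform(X)

# keep the first two components and check how much variance they explain
pca = PCA(n_components=2)
scores = pca.fit_transform(X_scaled)
print("explained variance ratio:", pca.explained_variance_ratio_)
```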
I 100% agree. I especially love when Copilot literally just starts making up shit that doesn’t work or doesn’t exist. Like it can’t be wrong, it just freaking guesses… God forbid it admit that it doesn’t have enough data to answer the question.
Best part is when you say “that command doesn’t exist” it’s like “I apologize. Here is a real command to accomplish your task”
SMH
Again, to your point, I agree that Copilot is amazing if you already know how to write the code you want. We’re smart enough to know whether the suggestions will work for our task. It’s definitely not smart enough to replace you.
Right?!
myList = list( 78, 99, 15, 78, 03, 22, 12, 73 )
Nice try, you clearly googled to make this /s
Unfortunately the code is Java and the ten lines are all just boilerplate
😂
There are people who unironically believe this.
Me when I write a regrex without googling every bit of it.
“regrex” They should definitely be known as that!
lol, didn’t even see my typo
One day I’ll pull that off…
Relatable
Me running an LLM at home:
The same image, but the farmer is standing in front of a field of poppies (for opioid production)
I am researching doing the same, but know nothing about running my own yet. Did you train your LLM for programming in any way, or just download and run an open-source one? If so, which model etc. do you use?
Run an open-source one. Training requires lots of knowledge and even more hardware resources/time. Fine-tuned models are available for free online, so there is not much use in training one yourself.
Options are:
https://github.com/oobabooga/text-generation-webui
https://github.com/Mozilla-Ocho/llamafile
https://github.com/ggerganov/llama.cpp
I recommend llamafile, as it’s the easiest option to run. The GitHub repo has all the stuff you need in the “quick start” section.
Though the default is a bit restricted on Windows. Since llamafiles bundle the LLM weights with the executable, and Windows has a 4GB limit on executables, you’re restricted to very small models. Workarounds are available though!
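And if the 4GB limit bites and you end up going through llama.cpp directly instead, the Python bindings (the llama-cpp-python package; the GGUF path below is a placeholder for whatever model you download) look roughly like this:

```python
# pip install llama-cpp-python
from llama_cpp import Llama

# path is a placeholder; point it at whatever GGUF model you downloaded
llm = Llama(model_path="./models/some-7b-model.Q4_K_M.gguf", n_ctx=2048)

output = llm(
    "Q: Write a bash for loop that prints the numbers 1 to 10.\nA:",
    max_tokens=128,
    stop=["Q:"],
)
print(output["choices"][0]["text"])
```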
I’m gonna give llamafile a go! I want to run it at least once with a different set of weights, just to see it work and also to see how different weights handle the same inputs.
The reason I am asking about training is because of my work, where fine-tuning our own is going to come knocking soon, so I want to stay a bit ahead of the curve. Even though it already feels like I am late to the party.
Have a look at the llamafile models, they’re pretty cool: just rename to xxx.exe and run on Windows, or chmod +x and run on Linux.
Though the currently supported ones are limited; you could try Code Llama.
Where do you get it? Hugging face?
https://llamafile.ai (though it’s down for the moment)
https://github.com/Mozilla-Ocho/llamafile
Lots of technical details, but essentially a llamafile is an engine + model + web UI in a single executable file. You just download it, run it, and stuff happens.
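If it helps, once a llamafile is running you can also hit it from a script instead of the web UI; it serves an OpenAI-compatible endpoint locally (port 8080 is the default, and the exact request fields below are my assumption based on how the bundled llama.cpp server usually behaves):

```python
import json
import urllib.request

# assumes a llamafile is already running locally on its default port (8080)
req = urllib.request.Request(
    "http://localhost:8080/v1/chat/completions",
    data=json.dumps({
        "model": "local",  # the local server doesn't really care which name you send
        "messages": [{"role": "user", "content": "Explain a bash for loop in one sentence."}],
    }).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    reply = json.load(resp)

print(reply["choices"][0]["message"]["content"])
```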
Thanks!
I always feel bad about how much I have to look up, until I look at any programmer-based forum. Then I feel at home lol
Apparently the average developer will get this much done in a single day’s work anyways, so nice job being ahead of the curve!
Maybe you need to. 😬
RIP
Man, man
Only 10 lines of code o.O, that much!