Kudos to the people who wrote this and it’s great for people who use debuggers.
However I’d like to say that I haven’t used one in years and don’t see any reason to go back.
I’ve found that there are way simpler practices that have upped my development speed considerably. Simply think about what you’re trying to do more carefully, and read over the code until you’re sure it’s good. It’s the fastest way to iterate. Doesn’t work? Read and think again.
You can put a formatted log statement in there. You can even comment it out, which can be useful later and for other people. It’s plain and simple.
When I find myself using logs all the time, it’s either because I’m tired, and I shouldn’t be coding anyway; or impatient, which means I’m wasting time and should slow down; or I have to deal with a library that has a shitty API, which you’ll probably want to avoid using anyway. And in that case you can use the interactive console to quickly try things out.
Honestly, if there’s anything I want to get better at, it’s test-driven development. It tells you clearly whether your code is working as expected or not.
I feel like you’re missing out on a ton of awesome features by not using a debugger? Stepping back is super useful, inline/live commands save you from re-running the code to see a different value, and you can change values on the fly.
And it’s nice to say “think about your code more” but when you’re working with large teams, on legacy codebases, you don’t often have the opportunity to “think about your code” because you’re trying to decipher what someone wrote 3 years ago and they don’t even work with the company anymore.
Have you ever tried my approach? It also works for understanding existing code bases. It just takes some practice, like working a mental muscle. So it might seem strange and ineffective at first.
If you do lots of trial and error and use the debugger, you’re basically externalizing the work, which slows it down and is less satisfying too, imo.
I have indeed. We even practice pure TDD and won’t accept PRs without test coverage, but it doesn’t change the fact that sometimes bugs happen, and when they do, it tends to be much more effective to work through the problem with a debugger than to make guesses at what needs to be logged or poked into, or whatever.
If what you’re doing works for you, more power to you, but I’d never give up a tool in my toolset that makes me far more productive than I’d be without it.
I get where you’re coming from, and debuggers surely have their merits. But I’ve found value in a more deliberate approach that emphasizes understanding and careful code review. Even when faced with legacy systems or larger teams, with practice, this technique can be quite effective. It’s less about relinquishing tools, and more about harnessing those that harmonize with our individual coding styles. Granted, it’s not a one-size-fits-all approach, but it’s what works for me and others who prefer a similar path in coding.
Since adopting TDD, my debugger use has really dropped off. I think it’s partially due to TDD encouraging me to develop more pure functions and push side effects to injectable (and thus mockable) objects.
But every so often I encounter a state that I can’t understand how the code gets into, and in those cases being able to step through everything that’s going on is incredibly helpful.
I may not use my debugger every day, but when I want it, I’m sure glad it’s there.
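To illustrate what I mean by pushing side effects to injectable objects - just a rough sketch, all the names and types here are invented for the example:

using System.Collections.Generic;
using System.Linq;

// The side effect hides behind an interface, so tests can inject a mock
// instead of touching the real world.
public interface IReceiptSender
{
    void Send(string customerEmail, decimal total);
}

public class CheckoutService
{
    private readonly IReceiptSender _receiptSender;

    public CheckoutService(IReceiptSender receiptSender) => _receiptSender = receiptSender;

    // Pure: same inputs, same output, nothing mutated - easy to cover with
    // plain unit tests, and rarely something I need a debugger for.
    public static decimal CalculateTotal(IEnumerable<decimal> prices, decimal taxRate) =>
        prices.Sum() * (1 + taxRate);

    public void Checkout(string customerEmail, decimal[] prices, decimal taxRate) =>
        _receiptSender.Send(customerEmail, CalculateTotal(prices, taxRate)); // the only impure step
}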
When I have a real head-scratcher like that, I use logs with a, b, c, d. It’s rare though, and mostly due to me not paying attention or to some convoluted call graph.
Yeah, I’ve also switched to functional wherever possible. I still use objects for DI.
Same here! Testing up front has made it extremely rare that I have to go back with a debugger later.
It’s a little hard to iterate and think when you’re adding to a complicated codebase you might not have worked with in several months, or even just a portion of a project that’s seemed stable for a long time. In that scenario, debuggers can shorten the getting-up-to-speed process by quite a bit.
My favorite tool in that case is jump to definition.
I am unsure just how revolutionary this feature is, though I am definitely interested in trying it and can see its value. I’ve somewhat gotten away from JetBrains, but I do still use and promote Rider for C# development, so this is potentially a nice addition for my professional life.
am unsure just how revolutionary this feature is
It’s not. This feature has existed for dotnet for the last 10 years already, in BugAid (which then got renamed to Ozcode, which then got killed by Datadog).
Out of curiosity, can anyone here think of a bug they’ve faced which would, identifiably, have been made easier by preemptively running source as it was written? This looks like it’s preemptive compiling, which isn’t just unwise, it’s potentially dangerous.
To be honest, I used to be a big fan of Eclipse and JetBrains, but at this point I’ll just take Vim over either any day. It took a little time to get used to, but it saves me the wait time of a load and doesn’t plaster my screen with highlighting and pop-ups while I’m just trying to finish a module.
The code is already compiled. What do you mean it’s preemptively compiled? If you’re talking about it being executed, they explicitly called that out…
A prediction can also end at a function call the debugger is cautious about evaluating. That is for your own (or your software’s) well-being. Since code is executed during debugging, the debugger has to be sure that it’s not causing any mutations. When running the above example, we must confirm the evaluation of the int.TryParse method (more about that later):
As mentioned in the previous section, the predictive debugger won’t evaluate possibly impure functions to avoid side effects. As a human developer, you most certainly have more knowledge about the code, which is why we give you the possibility to forcefully evaluate a call by clicking the hint:
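(To make the quoted behaviour concrete - this is a hypothetical sketch, not the article’s actual example: the prediction runs up to the int.TryParse call, which the debugger can’t be sure causes no mutations, presumably because it writes to an out variable, so it asks for confirmation before evaluating it.)

using System;

public static class InputParser
{
    public static string Describe(string input)
    {
        // The predictive debugger pauses its prediction here and asks for
        // confirmation before evaluating int.TryParse.
        if (int.TryParse(input, out var number))
        {
            return number < 0 ? "negative" : "non-negative";
        }
        return "not a number";
    }
}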
I mean that in running it, it is by basic definition compiling what it is running into machine language. So no, code is most certainly not already compiled into machine language as I’m writing or reviewing it.
Your question doesn’t make much sense to me.
By necessity, when you’re in the debugger your code has already been compiled either way, no? Or am I missing something here?
This isn’t executing your code as you’re writing it (though it does support Edit & Continue), this is preemptively executing the next lines in your code when you’re already paused in the debugger - which means it’s been compiled and already running.
You’re misunderstanding. This is a function of the debugger. Your code has already been compiled and is currently running if you are using this feature.
Ah, you’re right, that does make more sense. So this runs while the program is in debug mode.
That relieves me a bit. I just feel like a lot of these new IDE features are things that no one specifically asked for.
preemptively running source as it was written
It’s not preemptively running source as it’s being written, it’s preemptively evaluating methods as you’re debugging it
This looks like it’s preemptive compiling, which isn’t just unwise, it’s potentially dangerous.
So I think what you might mean is preemptively evaluating methods at runtime? - which would be unwise / potentially dangerous - since it could cause side effects
For example, evaluating a method that increments something and modifies the state: if it’s preemptively called by the debugger, the state would be modified, and the actual invocation would behave differently.
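A quick hypothetical sketch of that - nothing here is from the article, it’s just to show the kind of method being described:

public class InvoiceNumberGenerator
{
    private int _counter;

    // Impure: every call advances the counter. If the debugger evaluated this
    // preemptively just to show the predicted value, the state would already be
    // modified, and the actual invocation would return a different number.
    public int NextInvoiceNumber() => ++_counter;
}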
I installed the ReSharper RC, and this is how it looks in a small project that parses an Excel file: https://i.imgur.com/g4s0P3h.png
So, in the example my debugger is still on the allTheFieldsEmpty line and hasn’t run it yet, but ReSharper has already evaluated it to false. Then it also greyed out everything in the if (allTheFieldsEmpty) block, since it knows it wouldn’t hit that.
On the next line you can see there was a warning, “Possible impure evaluation” - which is what I assume you were talking about, and it didn’t evaluate that yet. I can click the box and make it evaluate it.
The debugger inspects the method; as the article mentions, it checks for the PureAttribute, which indicates that it’s safe to evaluate.
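For reference, marking a method looks roughly like this - a hypothetical sketch, since my actual GetMappingField isn’t shown here; ReSharper’s [Pure] comes from the JetBrains.Annotations package, and the BCL also has System.Diagnostics.Contracts.PureAttribute (I haven’t checked exactly which ones the predictive debugger honours):

using System.Collections.Generic;
using JetBrains.Annotations; // System.Diagnostics.Contracts.PureAttribute also exists

public class ColumnMapper
{
    // Hypothetical mapping data - only the attribute usage is the point here.
    private readonly Dictionary<string, string> _mappings = new()
    {
        ["Name"] = "A",
        ["Email"] = "B"
    };

    // [Pure] declares that the method makes no observable state changes,
    // which is what lets the predictive debugger evaluate it eagerly.
    [Pure]
    public string GetMappingField(string columnName)
    {
        if (!_mappings.TryGetValue(columnName, out var field))
            throw new KeyNotFoundException($"No mapping defined for column '{columnName}'.");
        return field;
    }
}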
After I marked that GetMappingField method as Pure, it actually did evaluate it without any interaction, and it predicted it would throw an exception: https://i.imgur.com/zQ0K3Ge.png - seems pretty useful so far.
Don’t get me wrong, I hope this works out, but I still think it’s trusting far too many decisions to the IDE. It feels like feature bloat.
Which is why I’m asking if anyone has had an issue which this would legitimately have helped solve.
I don’t think this will necessarily help solve issues you wouldn’t be able to solve without it, but I’ve used similar tools in the past (Ozcode) and they did make debugging easier and faster.
deleted by creator
Why do you say that?
The code being debugged could be doing a once-per-run operation, using unique data, or sending a network request that isn’t supposed to happen until a user explicitly triggers it.
They explicitly call out that they won’t perform the predictive calls unless they’re sure they don’t modify state.
A prediction can also end at a function call the debugger is cautious about evaluating. That is for your own (or your software’s) well-being. Since code is executed during debugging, the debugger has to be sure that it’s not causing any mutations. When running the above example, we must confirm the evaluation of the int.TryParse method (more about that later):
As mentioned in the previous section, the predictive debugger won’t evaluate possibly impure functions to avoid side effects. As a human developer, you most certainly have more knowledge about the code, which is why we give you the possibility to forcefully evaluate a call by clicking the hint:
They do say that, but how much can it be trusted? Can they really detect all native interface calls? Be aware of all future file system checks or event-driven programming paradigms? Or of a hashset.getOrElse() where uniqueness decides future flow? I’m sure we will be experiencing, or at least seeing, bug reports related to the predictive debugger triggering mutations.
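A hypothetical C# equivalent of that case (the getOrElse call above is just shorthand): if the debugger evaluated TryRegister ahead of time, the id would already be in the set, and the real call afterwards would take the duplicate branch it never should have taken.

using System.Collections.Generic;

public class RegistrationTracker
{
    private readonly HashSet<string> _seenIds = new();

    // HashSet<T>.Add both mutates the set and reports whether the item was new.
    // A preemptive evaluation would flip the result of the real call that follows.
    public bool TryRegister(string id) => _seenIds.Add(id);
}

// Elsewhere, future flow depends on uniqueness:
// if (tracker.TryRegister(userId)) { /* first time seen: proceed */ }
// else { /* duplicate: reject */ }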
I’m struggling to see how bug reports found using this prediction approach would ever be sent as anything but bugs of the predictive debugger itself.
How would end users ever see bugs caused by a debugger the devs use? And how would users of a third-party library conflate bugs in their own code with bugs in the third-party code, when you can see which lines are which as you debug?