It saddens me that a lot of people don't use debuggers and default to adding print statements. As far as I can tell, it's for several reasons:
1. The debugger is primitive (e.g. Godot GDScript - no conditional breakpoints or watches).
2. The debugger is unstable (e.g. Android Studio - frequently hangs, or takes a needlessly long time to populate data).
3. The debugger's UI is not friendly (e.g. Android Studio - hitting a breakpoint in multiple threads causes unexpected jumps or loss of current state; VSCode C++ debugger - doesn't display information properly or easily (arrays of objects) or displays too much information (CPU registers, flags, memory addresses); C++ debugger for D - doesn't display D data types).
4. The debugger is not properly integrated into the environment - can't find symbols, libraries or source files, or finds the wrong source files, etc. Need to jump through hoops to configure those.
5. Platforms don't support debuggers properly (e.g., again, Android - ANRs when debugging the main thread, and you can't leave a debugging session overnight without some timer killing the process).
6. Developers got used to the workflow of "add a print statement, rerun, and check the console" back in high school, and nobody ever taught them a more powerful tool.
7. Developers code all day, so adding print statements by coding feels more natural than switching to the debugger's UI and way of doing things (e.g. "if (i == 100) console.log(value)" lets you stay in the code, as opposed to setting a breakpoint, finding out how to add the 'i == 100' condition, and praying that no variables were optimized out at runtime).
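Point 7 above can be made concrete with a small sketch (the function and values here are made up for illustration). The print version keeps you in the editor; the debugger version of the same idea would be a breakpoint on the `scaled` line with the condition `i === 100`, which changes no code but relies on `scaled` not being optimized out:

```javascript
// Print-debugging version of "inspect the value only on iteration 100".
function scaleAll(values) {
  const out = [];
  for (let i = 0; i < values.length; i++) {
    const scaled = values[i] * 2;        // hypothetical value under inspection
    if (i === 100) console.log(scaled);  // fires only on the iteration we care about
    out.push(scaled);
  }
  return out;
}
```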
I like Replay's features and that it's improving the state of the current tools. At the end of the day, adding print statements in Replay doesn't seem to affect the state of the application, so in that sense it's similar to gdb commands in that it's just a UI choice, but I wouldn't go as far as encouraging print-based debugging.
Outside of Replay, print-based debugging is still a primitive way of analyzing the state of the app and promoting this state of affairs reduces the pool of people who use and would hopefully improve the existing debuggers.
We all appreciated Firebug and the Chrome DevTools because of the powerful features they give us to inspect the state of the application. Imagine a person who adds print statements to their code every time they want to inspect the DOM or check the current CSS attributes. It works, but we have better tools, and we should make them even better.
I think print statements are actually useful in ways that typical debuggers are not meant to be; they make it easy to show changes over time, and they provide a tight feedback loop between observing the value of some data and performing interactions that update that data. For example, if you wanted to know how a coordinate calculation changed as you scrolled the page, print statements would be more useful than a debugger. I don't think this is exclusively why debuggers get less use, but I think that print statements aren't inherently a thing to optimize away from.
That, along with concurrent execution, is where I've found print statements to be most useful, but nothing prevents a debugger from keeping track of some value over time and then displaying those values in the UI, just like one would with a print statement.
My view is that using print statements is absolutely a subpar method of debugging and that we should, in fact, optimize away from it by creating better debuggers.
Anything you can do with a print statement can also be done with a logpoint, if your debugger has that concept. Logpoints can also sometimes be simulated with conditional breakpoints (log something and then return false).
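The "log something and then return false" trick works because the logging happens inside the condition expression itself; since the expression evaluates to false, the debugger never stops. A minimal sketch, assuming a debugger that evaluates JavaScript conditions:

```javascript
// What you'd type into the breakpoint's condition field (not the source file):
//   (console.log("value:", value), false)
// The comma expression logs, then evaluates to false, so the breakpoint
// behaves like a logpoint and execution never pauses.

// The same expression as plain JS, to show why it works:
const logpointCondition = (value) => (console.log("value:", value), false);
```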
The debugger saves so much time otherwise wasted recompiling/reloading with new print statements; IMO it's strictly better in every respect.
I've been dreaming forever about writing a debugger that basically just produces a timeline/log of program execution (branching in the case of threads or processes), lets you drill down into the stack or into a loop at any point, and surfaces the trace of your code as opposed to 17 layers of library indirection.
I mentioned this in a different thread, but I'd recommend you take a look at Pernosco, a debugging tool written by the original author of rr: https://pernos.co/about/callees/
Couldn't agree more. Debugger support in modern codebases has become a huge afterthought, which is such a shame.
It is an amazing way to discover how a codebase works. You pick a point of interest, and then you get the entire path from the beginning of the app's execution to that point as your stack trace, and every variable along the way too. Watches are great too for tracking a value changing over time.
Micro-services and Docker also took debugging many steps backwards - one advantage of a monolith is that you can easily step-through the entire execution, whereas if you have to cross process-boundaries it becomes a lot more complex to properly debug.
I'm working on a monorepo template at the moment where everything is debuggable with a single click. This includes debugging native addons in C++ and Rust for Node.js projects. It's not easy - which is why people avoid debuggers so much.
I recently set up debugging for a Rust project in IntelliJ, where the alternative was adding `dbg!()` statements, which involved ~10s recompilations. The difficulty was implementing the right pretty-printers in lldb so you could see the values of various types, because support is quite immature at the moment.
Those top-to-bottom stack traces also become a lot less useful in today's highly-meta frameworks, where functions get passed around and eventually scheduled at a point totally divorced from where they live in the code. I'm not saying this is a bad thing, it just makes debuggers somewhat less useful.
It's certainly a combination of these things. I use breakpoints all the time when I'm working with C# because I'm inside Visual Studio. It's super easy to work with the debugger there. With Source Link I can even step into other libraries of ours. Debugging C++ under VS is also not bad, and Python in PyCharm is a good experience.
But if I don't have VS or PyCharm available, I'll switch to printf debugging.
Though there are some cases where even with a good debugger I'll end up debugging by modifying the code. Sometimes it's necessary for performance reasons. Conditional breakpoints when debugging C# are extremely expensive so tossing one on a line that's executed many times may make the process far too slow. In that case it's better to compile in an if statement and then drop the breakpoint inside there. Other times the debugger is just limited in what information it can provide. Pointers to arrays in C++ are a common annoyance since the debugger has no length information.
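The "compile the condition in, then drop an unconditional breakpoint inside the branch" pattern described above for C# translates to most languages. A JavaScript sketch of the same idea (the function and data are made up), using the `debugger;` statement, which is a no-op when no debugger is attached:

```javascript
// A debugger-evaluated conditional breakpoint on the hot line would fire
// the evaluator on every iteration. Compiling the condition in means the
// check runs at full speed, and the stop is only reached in the rare case.
function processItems(items) {
  let total = 0;
  for (const item of items) {
    if (item < 0) {
      debugger; // unconditional stop, reached only for the bad case
    }
    total += item;
  }
  return total;
}
```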
My theory is that breakpoints are of limited use because they only let you go forward. But if a variable is in the wrong state, the cause lies somewhere in the past, and you can't go back with a normal debugger.
Replay allows you to go back in time, which to me is the biggest breakthrough. This actually makes breakpoints useful!
Breakpoints are a tool to stop execution and land in the present. It's the debugger that decides where you can go from there. Typically they'll allow you to go into the past, but only to inspect the stack frames, because the values on the heap get overwritten. I vaguely remember that some debuggers are able to record heap writes and thus are able to show the entire state of the app at each frame, effectively "going back" and replaying stack frames. My guess is that Replay does something similar.
Maybe 2a. Executing code (like a print) in an auto-continuing breakpoint action makes the program itself pause; especially tiresome when you're looking at a timing or performance issue.
Just my anecdote: Personally I don't like using the one in Xcode (and maybe I'm missing something obvious) because I got so used to the debugger in JS land where I get access to a live REPL which functions just like the code I write. In Xcode, I'm stuck with some lldb prompt which I don't understand and definitely doesn't function like the one in JS tooling. I'm sure it could be more useful if I invested more time into learning it, but the barrier is there.
I’ve used good debuggers in the past, but the main downside to me is that the workflow improvement is relatively minimal compared to print debugging. The “live programming” aspect of Common Lisp and Clojure, as well as the way Cider implements tracing for Clojure _is_ a major improvement, but only because they let you be more precise in what needs to be re-run for print debugging.
I think it's often that the compiler/environment do not leave enough information for the debugger, typically by optimizing out local variable names and their values. By the time you figure out the obscure settings to be able to see the live values of variables and other state, you may have done a lot more surgery on your build system and slowed things down to a crawl compared to adding a few print statements.