 

1- Panic
2- Question my ability to do shit as a developer
3- Calm down, start to look at the logs
4- Start spamming console.log()
5- Rage because there are too many console.log to keep track
6- If bug is complicated, get back to 1)
7- Making progress, adding a fix
8- Commit/push
9- Realize some other part is broken because of the fix
10- Back to 1)
11- Around 6PM, throw my hands in the air, yell "Fuck this shit", and leave.
12- Around 9AM the next morning, realize it was a typo.

 

Start spamming console.log()

console.log('here1');
console.log('here2');
console.log('here3');
console.log('blabla');
console.log('bananas');
console.log('hmmm');

 

I feel you! I'm more of a:

console.log('HI')
console.log('HELLO')
console.log('HHEEEEEEYYYYY')

kind of dev, but I see where you're coming from :D

 
 

Dude, I read this and left for like 10 minutes !! Thanks for making my day 😁

 

your answer made my day :'-) :'-) :'-)

 
  1. Grep through the logs for any obvious issues or errors. With decent logging, 50% of the RCA happens here.
  2. Try to replicate the scenario in local environments and see the bug in action.
  3. Keep adding printf statements after each line of execution. Occasionally use the debugger as well on local to triage the issue.
  4. Forget to remove printf statements when committing fix.
  5. Create hotfix commit to remove debug logs πŸ˜„
  6. Actually deploy fix to production.
 
 

Here is my approach

  1. Reproduce the bug
  2. Locate the bug (I use a bunch of print statements and pdb here)
  3. Fix the bug
  4. Test it carefully
  5. Deploy to staging and test again
  6. Deploy to production
 

Reproduce the bug

That's usually where things start to go terribly, terribly wrong :)

 

Yeah, if I can't reproduce, then it's a serious issue

 

I wouldn't say I have a pattern. Debugging is like an art; there are rarely two contexts that I can address in exactly the same way.

What I try to do all the time is to consider the context. I have lost too much time in the past by focusing too much on a single line or function, not understanding why it was failing.

In many cases, bugs are a product of a combination of factors.

Considering the context, what else was going on when the bug was produced, usually allows me to understand the cause faster.

 
  1. First thought - "Oh shit" 🀭
  2. Check the git commit and understand the code changes that went with it.
  3. Put on the headphones 🎧 with the debugging playlist
  4. Use debugger and console.log to debug
  5. Bang my head around it πŸ˜‚
  6. Realise the issue was small πŸ€¦β€β™‚οΈ
  7. Look at myself in the mirror 😏
  8. Fixing it πŸ’ͺ
  9. Writing test cases
  10. Make a hot fix with a typo
  11. Deploy and forget πŸ˜‚
 

I develop in a space where you can't trust the hardware you're running on. With that in mind:

1) check the logs
2) replicate the failure
3) come up with a minimal repro and pray that it fails consistently
4) use debugger Foo
5) consult hardware manuals for expected behaviour and interface
6) start tracing register activity and traffic to the hardware unit
7) start bit banging registers
8a) complain about the problem to coworkers
8b) learn about something seemingly unrelated that is broken right now
8c) find out your problem is a corner case of that issue
9) file a driver or hardware bug
10) participate in long email threads followed by a meeting where the HW engs explain their change has no software impact and shouldn't break anything
11) HW engs end the meeting with "well in that case it does impact SW"

 
  1. Tell the PM to discuss with QA, because Devs have no access to production.
  2. Kick the Jira report back to QA as it doesn't have reproduction data.
  3. Re-read and kick back again because it doesn't have logs attached.
  4. Raise a ticket with ops to enable remote debug in production.
  5. Connect to production debug port, add conditional breakpoints where I think the issue is.
  6. Observe the issue in production.
  7. Receive the updated Jira & confirm it reports the issue observed.
  8. Repeat 5&6 until 7 is completed.
  9. Write a test case that reproduces the bug & fails the build because of it.
  10. Confirm expected behaviour with a BA.
  11. Repeat 9&10 until happy.
  12. Assign Jira to a junior dev, with a comment of "can you fix the build for this please?"
  13. Coffee break.
 

Recent favorite is testing a fix directly in production by monkey-patching the existing code with the fix (not recommended (but totally recommended)).

The usual favorites are the ones mentioned in Aaron Patterson's puts debugging methods. I have a Vim shortcut (space + D) to drop in these puts statements:

puts "^^ #{?# * 90}" #DEBUG
p caller.join("\n") #DEBUG
puts "$$ #{?# * 90}" #DEBUG

For SQL, I keep sample queries for quick reference that do all the sorts of queries I'd mostly need in my current project (CTEs, window functions, certain joined tables, etc.)

 

If you write perfect patches maybe.

But if your app uses shared state like a database, and your patch is wrong, you can end up with nightmares.

The wrong states resulting from your mistake are left in the database. Worse, the code is normally not built to handle inconsistent states.

If you notice the mistake in the patch, you can ship a correction patch, but you also need an SQL query to correct the wrong states.

If you don't detect it soon, the code might generate more wrong states for the related entities, and the situation spreads.

By the time you notice, the database might be so inconsistent that you're better off dropping it.

 

Yes, I'm aware of the risks of this approach.

I mentioned it explicitly in the post too.

Even if I know how the patch works inside out, I don't ever try this when it involves database changes. Too risky.

 
  1. Reproduce the issue (entrust it to others if parallelism is possible/needed)
  2. Check logs and analyze the stack/error traces
  3. If there's not enough detail in the existing logs, raise the log level
  4. Analyze the code as written and see if there is an issue.
  5. If that step fails, run the code on a local machine and debug it
  6. Apply the fix, test, and write automated tests to catch that in the future.
  7. Release
 

Oh, of course, System.out.println and chase the shit out of the bug like crazy ;)

True story tho: rather than adding breakpoints everywhere, I lean more towards analyzing the business logic and identifying the expected/unexpected results, unfortunately with a brutal print.

I developed this habit when working on a few production projects and live-debugging through the pages of logs.

In production, there is no debugger or any intuitive tools (not usually), but just layers of logs to dig into.

I practice my production debugging habit in daily coding tasks. It's not as effective as using a debugger, but it keeps my brain running ;)

 

I just debugged some CI/CD issues this AM. For me, devops debugging looks a little different than my normal (code) debugging process. Here's what happened:

  1. Try to configure a new service based on an internal blog post
  2. Google error message
  3. Tweak settings
  4. Re-read blog post
  5. Reach out to the author of the blog post
  6. Figure out the issue
  7. !!MOST IMPORTANT!! - update the docs so others don't run into this issue πŸ™‚
 
  1. Fully understand the bug by describing it as simply as I can. Imagine you are trying to report it on github.
  2. Ask myself what I expected to happen, what actually happened, and what assumptions I hold that make me feel entitled to the expected result. I list these assumptions out. (rubber duck method)
  3. I go through the list of assumptions and order them by what I think are the least likely to be invalid first.
  4. I walk down the list and verify all the assumptions are actually true. Usually, it is here that I find the 'invalid assumption'. ( here is where we actually use the debugging tools )
  5. If all my assumptions hold, then it simply means I don't understand the system deeply enough and I need to go back to step 3.
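The assumption-verification step above can be sketched with `console.assert` — a minimal example with hypothetical names, where each assertion corresponds to one listed assumption and the first failure points at the invalid one:

```javascript
// Hypothetical fixture with a planted bug: the stored total doesn't match the items.
const cart = { items: [{ price: 10 }, { price: 5 }], total: 20 };

// Walk the assumption list; console.assert logs a message only when the check fails,
// so the first printed assertion is the invalid assumption.
console.assert(Array.isArray(cart.items), 'assumption 1: items is an array');
console.assert(cart.items.every(i => i.price >= 0), 'assumption 2: prices are non-negative');
const sum = cart.items.reduce((acc, i) => acc + i.price, 0);
console.assert(sum === cart.total, `assumption 3: total matches sum (${sum} vs ${cart.total})`);
```

Running this prints only the third assertion, flagging the stale `total` as the invalid assumption.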
 

Not really, I go with the wind...

Actually, my debugging starts when I write my code.

I program in Ada and, as far as possible, I declare type invariants, contracts for my procedures/functions, define specific subtypes with constraints and spread the code with Assert and such. Armed with this array of "bug traps," as soon as something fishy happens I have an exception that (usually) points an accusing finger to the culprit. This shortens debugging times a lot.

I still remember days of debugging for a dangling pointer in C that corrupted the heap and caused a segmentation fault in a totally unrelated point...

Besides that, I usually go with debugging prints. I use a debugger only in very unusual cases. I don't actually know why; I just like debug printing more.
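The "bug trap" idea isn't limited to Ada. As a rough JavaScript analogue (names here are made up for illustration), you can enforce a constrained subtype at construction time so a violation throws immediately, near the culprit, instead of corrupting state downstream:

```javascript
// Sketch of an Ada-style constrained subtype in JavaScript:
// the invariant is checked at the boundary, so a bad value
// raises right where it was produced.
function makePercentage(value) {
  if (!Number.isFinite(value) || value < 0 || value > 100) {
    throw new RangeError(`percentage out of range: ${value}`);
  }
  return value;
}

const ok = makePercentage(42);   // passes the trap
// makePercentage(150);          // would throw RangeError right here
```

The payoff is the same as described above: the exception points an accusing finger at the culprit instead of surfacing as a mystery failure somewhere unrelated.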

 

My debugging approach is still not as efficient as I would like it to be.

  1. Panic
  2. Open up the logs
  3. Push the bug back to the reporter for more explanation and reproducible steps.
  4. Run application locally and try to reproduce
  5. Swear
  6. Put log.info("what the fork is going on?!") everywhere in the code
  7. Look into the database
  8. Start to rule things out
  9. Decide I need more documentation of all the business rules there are.

I am currently in the process of getting diagrams set up for all the business rules, to be able to understand what it should be and what it is, to push back to the reporter when it functions as designed, and also to write more and more tests.

 
  • find what file has the bug
  • System.out.println() everywhere
  • look at it for an hour
  • ask stackoverflow
 

Not really a pattern. If it's in some new functionality, it's probably a typo somewhere. Logs might be useful, but it's also common for something expected not to happen, thus no logging. If possible, check by debugging or looking into the database whether some of the assumptions made might be wrong. Maybe there's a clue in the git log. Has SRE changed anything? Time to maybe add some logging statements to check assumptions. When it was Clojure, I could just add them on the fly and inspect the data directly... and then it turns out it was a typo after all, ouch.

 

Working in distributed systems with microservices, the problem is to find the causing service first. Normally I try to track where the error happens and follow the ID of the log message back to the original service where the event was emitted. Then I'll try to mock the data and debug the critical point, always going up the stack trace to find the root cause.

Since I work with Node.js I always use ndb for debugging:

kevinpeters.net/how-to-debug-java-...

 

I think about it best by ruling out parts that couldn't be the problem. I start by trying to rule out as big of chunks as I can, which helps me narrow down to the subsystem, class, function, or even couple of lines where the problem lives. As soon as I can confirm that a piece works like I expect it should, I shrink my scope a little bit and look for the next piece to confirm works on its own. Once I find the spot that's working weird, it's super important that I understand why it's doing what it's actually doing so I can make sure the fix I use actually solves the problem.

Sometimes when I'm tired, I'll find myself randomly trying to change a piece of code to see if it works, but that's a sure sign that I'm not going to get anything else productive done and I need to take a walk, because it means I don't understand why my fixes should work.

It's nice because this thought process works really well for debugging mechanical assemblies that don't quite work too:

"OK, well we've verified that every dimension on this part is right, so take that out, set it aside, and look for the issue in the now-slightly-simpler assembly."

The most important part is that, the more confused and overwhelmed I get, the smaller, slower steps I take. :)

 

console.log, sometimes also the browser debugger, but mostly console.log 😅
find the wrong variable, fix, another issue, add console.log again, repeat :D

After too many written console.log()'s, I recently began work on a kind of debug dashboard, which will hopefully give a better sense of the written logs (or rather, better visibility of the testing vars).
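The backend of such a debug dashboard can be sketched in a few lines (all names hypothetical): instead of bare console.log calls, route messages through a tagged store that a dashboard UI could later group and render:

```javascript
// Hypothetical debug-dashboard backend: a tagged log store.
// A UI could group entries by tag instead of scrolling raw console output.
const debugStore = [];

function dbg(tag, value) {
  debugStore.push({ tag, value, at: Date.now() });
  console.log(`[${tag}]`, value); // still visible in the console
  return value;                   // pass-through, so it can wrap expressions inline
}

dbg('cart', { items: 3 });
dbg('auth', 'token refreshed');

// The dashboard would query the store, e.g. all entries for one tag:
const byTag = tag => debugStore.filter(e => e.tag === tag);
```

Because `dbg` returns its value, it can be dropped into the middle of an expression without changing behaviour, e.g. `total = dbg('total', computeTotal())`.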

 

Ideally, you make sure all the tests have passed, before jumping to debug.

Next, make sure you have covered all test cases.

After that, just place breakpoints: first on the view events, then the logic, then the data layer. So debugging and separation of concerns are somehow related.

 

I wrote an article about debugging Javascript.

I mostly talked about using breakpoints instead of console logging. Truth is, they both have their place, which is something I'd change about the article.

But I hope it can help!

 

Go into the logs, get all the data as they were at that point in time, throw them in a pot, boil them into a few unit tests... and see why the problem occurred.

The problem is that the project I am currently working on, as weird as it may seem, has so many time-sensitive external dependencies that it doesn't even make sense to try to debug anything directly :-)

 

Javascript : console.log();
Python : print()
C : printf();
Go : fmt.Println()

 

I use my polyglot skillz

class AmazingCode
  function (){
    puts "hello" !important
  }.permits :world
 

Sprinkle console.log('<unique-prefix>', ...) until the bug has been fixed πŸ€ͺ
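A tiny helper can bake the unique prefix in automatically, so the sprinkled logs stay greppable and easy to filter in the browser console (a sketch — the prefix and helper names are made up):

```javascript
// Factory that stamps a unique prefix onto every log call,
// so the output can be grepped or filtered by that prefix.
function makeLogger(prefix) {
  return (...args) => {
    const line = [`[${prefix}]`, ...args].join(' ');
    console.log(line);
    return line; // returned so the formatted line can also be inspected
  };
}

const log = makeLogger('checkout-bug');
log('entering handler');
log('retry count:', 3);
```

When the bug is fixed, one search for the prefix finds every stray call to delete.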

 

This is my favorite, I call it the enterprise approach:

  1. Talk to marketing department.
  2. Convince users that the perceived bug is indeed a feature.
  3. More time for new bugs...ahm...features.
 

Usually breakpoints. Sometimes logging. Sometimes semi-randomly commenting code out until the bug doesn't appear, which probably means the issue is in the commented out code. Sometimes git-bisect.
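The commenting-out and git-bisect approaches share one idea: binary-search an ordered list of changes for the first bad one. A minimal sketch of that search (commit names and the `isBad` predicate are hypothetical; in real git-bisect the predicate is "check out and test"):

```javascript
// Binary search for the first "bad" change, the core of git-bisect.
// Invariant: the first bad change always lies in [lo, hi].
function firstBad(changes, isBad) {
  let lo = 0, hi = changes.length - 1;
  while (lo < hi) {
    const mid = Math.floor((lo + hi) / 2);
    if (isBad(changes[mid])) hi = mid; // bad at mid => first bad is at or before mid
    else lo = mid + 1;                 // good at mid => first bad is after mid
  }
  return changes[lo];
}

const commits = ['a1', 'b2', 'c3', 'd4', 'e5'];
const bad = new Set(['c3', 'd4', 'e5']);        // pretend the bug appeared at c3
const culprit = firstBad(commits, c => bad.has(c)); // → 'c3'
```

Each test halves the suspect range, which is why bisecting beats re-reading the whole diff history.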

 
 
 
 

I use 'alert' rather than console.log.

I use 'toast' rather than Log.d.

I use 'or die("Error");' when I write PHP code.

 

See stacktrace.
Read logs and see what happened.
Try to reproduce.
Use debugger to see what happens in code.
Fix.
Test locally.
Code review.
Commit.
Test on test environment.

 
  1. Cry a little bit,
  2. Give up coding forever,
  3. Give myself a TED talk about how I can do it and look through my code.
  4. Find out I just spelled a word wrong.
 
  1. Replicate the bug
  2. Document bug and how to replicate
  3. Create dev environment to fix
  4. Test fix on dev
  5. Launch fix to live
  6. Replicate bug again (hopefully unsuccessfully)
 

this is how I do it. breakpoints everywhere. lol.
debug

 
  1. Question my life choices.
  2. Go work as something else instead.
 

Console.log all the things! You have to stick with what works.

 
 

Delete half of the code.... Bug still alive?
if yes then delete half of the remaining code and repeat......

But I try not to do this using FTP on a prod server, of course

 
 
  1. reproduce the bug
  2. get the module that might be causing the bug
  3. add debugger points and repeat till you get to where it went wrong.
 

Someone shared the six stages of debugging with me long ago (super funny!) and I haven't ever forgotten it!

 

1 - Replicate the bug
2 - Narrow the location of the bug in the code with logs and debugger.
3 - Correct it (if it's a hard one, cry a little 😁 )
4 - Test it
5 - Deploy
