
One of the funnier situations (funny now, wasn't so much the first time I saw this) that I run into at new gigs or contracts is when a business has absolutely zero monitoring or alerting. Go into their backend and it's predictably a dumpster fire. Start plugging in monitoring and the business realizes everything is on fire and PANICS. It's very difficult to explain to someone who definitely doesn't want to hear it that things have actually been broken for a long time; they just didn't care enough to notice or to invest in doing it right the first time.
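To be concrete, "plugging in monitoring" at these places often starts as nothing fancier than the sketch below: hit a couple of health endpoints and email on-call when anything fails. Everything here is made up (the URLs, the addresses, the local MTA), it's just the shape of it, not any particular stack:

    import smtplib
    import urllib.request
    from email.message import EmailMessage

    # Hypothetical health endpoints -- stand-ins for whatever the backend exposes.
    CHECKS = {
        "api": "https://example.com/health",
        "campaign worker": "https://example.com/internal/campaigns/health",
    }

    ALERT_TO = "oncall@example.com"  # made-up address

    def probe(url, timeout=5):
        """Return None if the check passes, otherwise a short error string."""
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                if resp.status != 200:
                    return "HTTP %d" % resp.status
        except Exception as exc:  # timeout, DNS failure, connection refused, ...
            return str(exc)
        return None

    def main():
        failures = {name: err for name, url in CHECKS.items() if (err := probe(url))}
        if not failures:
            return
        msg = EmailMessage()
        msg["Subject"] = "[ALERT] %d check(s) failing" % len(failures)
        msg["From"] = "monitor@example.com"
        msg["To"] = ALERT_TO
        msg.set_content("\n".join("%s: %s" % (n, e) for n, e in failures.items()))
        with smtplib.SMTP("localhost") as smtp:  # assumes a local MTA is listening
            smtp.send_message(msg)

    if __name__ == "__main__":
        main()

Even something that crude, run from cron, lights up a backend that's never had a dashboard surprisingly fast.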


> It's very difficult to explain to someone who definitely doesn't want to hear it that things have actually been broken for a long time; they just didn't care enough to notice or to invest in doing it right the first time.

if no one noticed then can you really say it was broken?


Depends - IME yes, lol. Stuff like email campaigns being broken is super hard for the business to detect, as is a lot of random customer-facing stuff that doesn't directly drive revenue. Stuff that isn't a total outage but just a degradation will often be tolerated or go unnoticed for quite a long time till it actually blows up.

My favorite anti-pattern ever that happens here is when you notice a bug that's existed for a long time, you fix it, and then some other part of the system that had adapted to the buggy behavior over the years breaks, because its workaround no longer works. Then of course the business comes to you like "why did you break this?"
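A contrived sketch of what I mean (names and numbers are invented): downstream code quietly compensates for a producer bug, and that compensation becomes the new bug the moment the producer is fixed.

    # Producer: for years this returned the total in dollars, even though the
    # docs (and the name) said cents. That's the long-standing bug.
    def get_order_total_cents():
        return 19.99  # buggy: dollars, not cents

    # Consumer written *after* the bug existed: it quietly compensates for it.
    def charge_customer():
        total = get_order_total_cents()
        cents = int(round(total * 100))  # "fix" that only works while the bug exists
        return cents                     # 1999 -- correct, by accident

    # Years later someone fixes the producer to actually return cents...
    # def get_order_total_cents():
    #     return 1999
    # ...and the untouched consumer now charges 199900 cents, so the bug report
    # lands on whoever shipped the fix.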


Errors can go unnoticed or ignored until they cause a fault. For example, you may not notice missing bolts on the underside of a bridge.



