If you need an absolutely reliable product (medical safety, NASA, or whatever), you have to use ultra-high-assurance design processes that are not economically competitive in more typical application areas. If you don't use those processes, you aren't designing "without a care"; you're designing with an amount of care chosen through an engineering and business decision, based on how much product failure you're willing to tolerate. If falling back to a WDT is a cheap way to reach your acceptable failure rate, it seems like an ok option.
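To spell out what "falling back to a WDT" means: the main loop periodically kicks the watchdog, and if the kicks ever stop (a lockup), a timeout fires and the device resets. Here's a toy Python simulation of that pattern; all the names (`Watchdog`, `kick`, `on_timeout`) are mine, not a real API, and on real hardware the timeout would trigger a hardware reset rather than a callback:

```python
import threading
import time

class Watchdog:
    """Toy software watchdog: fires on_timeout if not kicked in time.

    A simulation of the WDT pattern only. A real watchdog is a hardware
    timer that forcibly resets the device when it expires.
    """

    def __init__(self, timeout_s, on_timeout):
        self.timeout_s = timeout_s
        self.on_timeout = on_timeout
        self._last_kick = time.monotonic()
        self._lock = threading.Lock()
        self._stop = threading.Event()
        self._thread = threading.Thread(target=self._watch, daemon=True)
        self._thread.start()

    def kick(self):
        # Called from the main loop to prove the firmware is still alive.
        with self._lock:
            self._last_kick = time.monotonic()

    def stop(self):
        self._stop.set()
        self._thread.join()

    def _watch(self):
        # Poll at a fraction of the timeout; fire once if the kicks stop.
        while not self._stop.wait(self.timeout_s / 10):
            with self._lock:
                expired = time.monotonic() - self._last_kick > self.timeout_s
            if expired:
                self.on_timeout()
                return

resets = []
wd = Watchdog(timeout_s=0.2, on_timeout=lambda: resets.append("reset"))

# Healthy main loop: kicks arrive well inside the timeout, nothing fires.
for _ in range(5):
    wd.kick()
    time.sleep(0.05)

# Simulated lockup: stop kicking and the watchdog "resets" the device.
time.sleep(0.5)
print(resets)
wd.stop()
```

On Linux the real thing is the `/dev/watchdog` interface: open it, write to it periodically, and the kernel/hardware reboots the machine if the writes stop.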
I worked on a thing a while back whose hardware randomly locked up every few thousand hours of operation. We never figured out why, and decided not to spend excessive resources studying it, given that it was coming due for a total redesign anyway.
We had a few hundred of these things in the field, which meant that on average we logged maybe one WDT reset per day across the whole fleet. The application area was not even slightly safety critical, and most of the resets were in the middle of the night when the device wasn't in use anyway. There was a slim possibility that a reset at the wrong time could actually inconvenience a customer and earn us a support call, but AFAIK that never happened. Nobody ever noticed the resets.
I think the above is a typical story. I wasn't involved in the management decision to ship the thing despite the lockups (relying on the WDT), but I can't say that they made a wrong choice. In mathematics we prove things and then expect to be absolutely sure of them, but engineering is different. Most engineering is about making stuff that meets cost constraints and empirically works well enough for the application, and that's what they did.