This is what happens when a company relies on amateurs for beta testing instead of trained and experienced professional programmers.
Bugs still happened, but they seemed to be much less severe and much smaller in scope (notable exceptions exist, I'm sure... I just can't think of any).
There's a lot to be said for waiting a year or more for a piece of software to be tested and properly matured (and thus properly debugged) before releasing it to the public. All this fast-tracking has lowered the overall quality of most software considerably over the last 7 years or so. In my opinion, anyway.
And long-term stability is important too. Having an OS that constantly updates itself, with no way for the user to control or stop the process, introduces a lot of variables that can make the OS inherently less stable. There is an upside, I suppose, in that important fixes or genuinely useful new features can be released much more rapidly. But isn't that what monthly hotfixes were for?
Somehow, we as users of Windows managed to get by with the old, supposedly inferior OS update/upgrade model for many years. What makes this new, rapid release model so much better? So far, all I've seen are countless examples of why it's broken and harder to manage. Pretty much every time there's an update now, something important breaks, and the update is withdrawn and re-released with fixes; if they'd taken the time to test more thoroughly, like they used to, they could've gotten the updates right the first time. Having monolithic update packs, whose contents are inseparable from one another, doesn't help either, because one bad update in the pack can spoil the rest of them.
The old model, for whatever flaws it may have had, was at least predictable and mostly reliable.