This is wrong, in my opinion. I've written several interrupt and tasking systems for MIPS chips, and maintained several others, and I can tell you that MIPS is more painful than most, due to the strange coprocessor architecture and minimal instruction set. I found writing interrupt-level debuggers 'challenging', for various arcane reasons. The coprocessor register scheme is novel, to say the least.
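To give a flavor of that scheme: CP0 registers aren't directly operable, so everything has to be shuttled through a general register with mfc0/mtc0. Here's a minimal sketch of disabling interrupts, assuming an R3000-class Status layout (the register number, mask bit, and hazard handling are from that era, so take it as illustrative):

        .set noreorder
        mfc0  $k0, $12        # read CP0 Status into a general register
        li    $k1, -2         # mask = 0xFFFFFFFE, clears IEc (bit 0)
        and   $k0, $k0, $k1   # can't modify Status in place; mask the copy
        mtc0  $k0, $12        # write it back
        nop                   # hazard slot: the write isn't visible immediately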
You would think they would have gotten this right, and made it easy, since the chip was basically designed to run UNIX. It's not like they didn't have good models to crib from, such as the 68000 architecture. Processor guys seem to have their own logic, based on a nightmarish view of efficiency, which is usually at odds with the programmer model. MIPS was also the first RISC chip to be generally available, so there are oddities associated with that premiere status.
One famous perversion with MIPS is the 'branch delay slot', a scheme in which the instruction *after* a branch is executed regardless of whether the branch is taken. This has to do with the instruction pipeline: the instruction following the branch has already been fetched by the time the branch resolves, so rather than discard it, the hardware just runs it. It probably saved 10k transistors, and may have sped up branch processing, but it caused a world of hurt for future programmers.
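For the uninitiated, here's a sketch of what that looks like (the fragment is hypothetical, assembled with .set noreorder so the assembler doesn't quietly fill the slot for you):

        .set noreorder
        beq   $t0, $zero, skip    # branch if $t0 == 0
        addiu $t1, $t1, 1         # delay slot: executes whether or not the branch is taken
        sw    $t1, 0($t2)         # reached only on fall-through
skip:

Forget that the addiu runs on *both* paths and you have exactly the kind of bug that makes this chip so much fun.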
One thing that actually is useful is the ability to profile code and then arrange branches according to their most likely outcome, so the common case falls through. Many tools (including GCC) are designed to take advantage of this.
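As a sketch of what that buys you on this chip: with profile data (or a hint like GCC's __builtin_expect), the compiler can lay out the code so the likely path takes no branch at all and the delay slot holds useful work instead of a nop. The fragment below is hypothetical:

        .set noreorder
        beq   $a0, $zero, rare    # profile says: rarely taken
        lw    $t0, 0($a1)         # delay slot filled with a load both paths need
        # ... common case continues here: no taken branch, no wasted slot
rare:
        # ... infrequent case pays the cost of the taken branch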
The best book I've found (actually, this may be out of date; I've been out of touch with MIPS for about 4 years now) is "MIPS RISC Architecture" by Gerry Kane and Joe Heinrich.