What are you running on?
My fairly average rig (dual-core 3.2 GHz Athlon, 4 GB RAM, running Fedora
18 and using the GNU C compiler) compiles and links 2100 statements, 600k of code, in 1.1 seconds. A complete regression test suite (so far amounting to 21 test scripts) runs in 0.38 seconds. All run from a console, with make driving the compile and bash handling the regression tests, natch. Put it this way: the build runs far too fast to see what's happening while it's running. The regression tests are the same, though, as you might hope, they only display script names and any deviations from expected results.
It does, since it has the same toolset. Just don't expect it to be quite as nippy, though intelligent use of make to minimise the amount of work involved in a build makes a heap of difference. However, it's quite a bit faster than my old OS-9/68000 system ever was, but then again that was cranked by a 25 MHz 68020 rather than an 800 MHz ARM.
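"Intelligent use of make" here mostly means letting make rebuild only what actually changed. A minimal sketch of that idea (the file and target names are invented for illustration, not taken from the poster's project):

```make
CC     = gcc
CFLAGS = -O2 -Wall
OBJS   = main.o parser.o codegen.o   # hypothetical modules

prog: $(OBJS)
	$(CC) $(CFLAGS) -o $@ $(OBJS)

# Each object depends only on its own source plus a shared header,
# so editing parser.c recompiles one file and relinks, nothing more.
%.o: %.c defs.h
	$(CC) $(CFLAGS) -c $<

clean:
	rm -f prog $(OBJS)
```

Touching one source file recompiles just that object and relinks, which is largely what keeps a full edit-build cycle in the one-second range even on modest hardware.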
I really cut my teeth on an ICL 1902S running a UDAS exec or George 2 and, like others have said, never expected more than one test shot per day per project: the machine was running customers' work during the day, so we basically had an overnight development slot and, if we were dead lucky, sometimes a second lunchtime slot while the ops had lunch - if we were prepared to run the beast ourselves.
You haven't really programmed unless you've punched your own cards and corrected them on a 12-key manual card punch.... but tell that to the kids of today....
Yes.
I always leave that in, controlled by a command-line option or the program's configuration file. Properly managed, the run-time overheads are small, but the payoff over the years from having well-thought-out debugging code in production programs is immense.