Errors when cross-compiling the kernel

What are you running on?

My fairly average rig (dual-core 3.2 GHz Athlon, 4GB RAM, running Fedora 18 and using the GNU C compiler) compiles and links 2100 statements (600k of code) in 1.1 seconds. A complete regression test suite (so far amounting to 21 test scripts) runs in 0.38 seconds. All run from a console, with make handling the compile and bash handling the regression tests, natch, natch.

Put it this way: the build runs way too fast to see what's happening while it's running. The regression tests are the same though, as you might hope: they only display script names and any deviations from expected results.

It does, since it has the same toolset. Just don't expect it to be quite as nippy, though intelligent use of make to minimise the amount of work involved in a build makes a heap of difference. However, it's quite a bit faster than my old OS-9/68000 system ever was, but then again that was cranked by a 25 MHz 68020 rather than an 800 MHz ARM.

I really cut my teeth on an ICL 1902S running a UDAS exec or George 2 and, like others have said, never expected more than one test shot per day per project: the machine was running customers' work during the day, so we basically had an overnight development slot and, if we were dead lucky, sometimes a second lunchtime slot while the ops had lunch - if we were prepared to run the beast ourselves.

You haven't really programmed unless you've punched your own cards and corrected them on a 12 key manual card punch.... but tell that to the kids of today....

Yes.

I always leave that in, controlled by a command-line option or the program's configuration file. Properly managed, the run-time overheads are small but the payoff over the years from having well thought-out debugging code in production programs is immense.
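A minimal sketch of the idea, assuming a flag set from a command-line option (the flag name and the -d option here are only illustrative, not lifted from any real program):

#include <stdio.h>
#include <string.h>

static int debug = 0;               /* off by default in production */

int main(int argc, char **argv)
{
    int i;

    for (i = 1; i < argc; i++)      /* a config-file read would do equally well */
        if (strcmp(argv[i], "-d") == 0)
            debug = 1;

    /* ... normal processing ... */
    if (debug)
        fprintf(stderr, "debug: saw %d argument(s)\n", argc - 1);

    return 0;
}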

--
martin@   | Martin Gregorie 
gregorie. | Essex, UK 
org       |
Reply to
Martin Gregorie

Martin,

I'm running on a quad-core Windows 7/64 system, judging by the time taken to compile the 9 programs in the NTP suite using Visual Studio 2010. These are almost always a compile from scratch, and not a recompile where little will have changed. Your 1.1 second figure would be more than acceptable, and very similar to what I see when using Embarcadero's Delphi, which is my prime development environment.

On the RPi I have used Lazarus which is similar, and allows almost common code between Windows and Linux programs.

Cards were used by the Computer Department at university when they bought an IBM 360, and a room full of card punches was rather noisy! I can't recall now whether it was noisier than the room full of 8-track paper tape Flexowriters we at the Engineering Department were using, and yes, we did patch those by hand at times. Almost all of the access to the IBM 1130 we had was hands-on by the researchers and some undergraduates.

Leaving debug code in is a good idea, except when it accounts for 90% of the program's execution time as seen by a real-time profiler. I do still try and make my own code as compact as possible, but particularly as fast as possible, and the profiler has been a big help there. I haven't done any serious debugging on the RPi, though - it's been more struggling with things like a GNU Radio build taking 19 hours and then failing!

--
Cheers, 
David 
Web: http://www.satsignal.eu
Reply to
David Taylor

I find that the instant graphical-interface cycle of make a change, compile and run encourages the youngsters to try ANYTHING to fix a problem and not use any form of version control. Then they go off fixing everything else they have now broken, because they did not acquire data first to find out where the problem may be, then use debug output or other data to prove the area of fault, then prove what the fault is, if necessary using pencil, paper and a bit of grey matter.

Most people want to put any old code down first, with no interest in algorithm or design etc.

--
Paul Carpenter          | paul@pcserviceselectronics.co.uk 
    PC Services 
  Raspberry Pi Add-ons 
 Timing Diagram Font 
  GNU H8 - compiler & Renesas H8/H8S/H8 Tiny 
 For those web sites you hate
Reply to
Paul

On 17/12/2013 10:01, Paul wrote: []

[]

If that's the case, surely they should be better trained in using the tools, rather than deliberately making the tools slower and more difficult to use? Give points for algorithm design!

(That originally came out as "give pints" - might be something in that!)

--
Cheers, 
David 
Web: http://www.satsignal.eu
Reply to
David Taylor

Certainly training would help, but the critical missing ingredient--necessitated by cumbersome tools--is the development of engineering discipline...and that is always in short supply.

--
-michael - NadaNet 3.1 and AppleCrate II: http://home.comcast.net/~mjmahon
Reply to
Michael J. Mahon

I don't know about Lazarus: but the C source is identical on the RPi since it uses the same GNU C compiler and make that all Linux systems use.

I used those at Uni, but they were feeding an Elliott 503, a set of huge grey boxes housing solid state electronics but made entirely with discrete transistors. It compiled Algol 60 direct from paper tape and, embarrassingly, no matter what I tried on the 1902S, I was never able to come near the Elliott's compile times: just shows the inherent superiority of 50 microsecond core backing store over 2800 rpm disk drives.

In that case it was done very badly. The trick of minimising overhead is to be able to use something like:

if (debug) { /* debug tests and displays */ }

rather than leaving, e.g., assertions inline in live code or, worse, having debugging code so interwoven with the logic that it can't be disabled during normal operation. I agree that the overheads of that approach are high, whereas the overheads of several "if (debug)..." statements are about as low as it's possible to get.

--
martin@   | Martin Gregorie 
gregorie. | Essex, UK 
org       |
Reply to
Martin Gregorie

I think you mean if (unlikely(debug)) { debug stuff }

If you want low impact, then tell the compiler it isn't likely so it can twiddle the branch prediction stuff.
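A sketch of what that looks like with GCC or Clang, using the kernel-style macro names (assumed here as a convention, not a requirement):

#include <stdio.h>

#define likely(x)   __builtin_expect(!!(x), 1)
#define unlikely(x) __builtin_expect(!!(x), 0)

static int debug = 0;

static void process(int value)
{
    if (unlikely(debug))    /* hint: the debug branch is the cold path */
        fprintf(stderr, "processing %d\n", value);

    /* ... normal work ... */
}

int main(void)
{
    int i;

    for (i = 0; i < 5; i++)
        process(i);
    return 0;
}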

I don't know which compiler you use, but in mine assert is only compiled into code in debug builds. There's nothing left in a non-debug build.
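That's standard C behaviour: assert() from <assert.h> expands to nothing when NDEBUG is defined, which release build configurations typically do. For example (illustrative code):

/* cc demo.c          -> a failed assert aborts with file and line      */
/* cc -DNDEBUG demo.c -> the assert disappears from the object code     */
#include <assert.h>
#include <stdio.h>

static int divide(int a, int b)
{
    assert(b != 0);              /* checked only when NDEBUG is not set */
    return a / b;
}

int main(void)
{
    printf("%d\n", divide(10, 2));
    return 0;
}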

Andy

Reply to
mm0fmf

Normally in C you use the preprocessor to eliminate all debug code at compile time when it is no longer required, so even the overhead of the if (debug) and the size of the code in the if statement is no longer there.
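A minimal sketch of that, with the DEBUG symbol and the DBG() wrapper chosen purely for illustration:

/* Build with -DDEBUG to keep the diagnostics; without it the calls
 * vanish completely, so there is no test and no code-size cost.    */
#include <stdio.h>

#ifdef DEBUG
#define DBG(...) fprintf(stderr, __VA_ARGS__)
#else
#define DBG(...) ((void)0)
#endif

int main(void)
{
    DBG("starting up\n");        /* gone entirely in non-DEBUG builds */
    puts("doing the real work");
    DBG("shutting down\n");
    return 0;
}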

Reply to
Rob
[]

Not necessarily bad, just doing a lot of stuff not necessary to the production version. But now it's as you recommend - optional - using conditional compile or boolean variables as you show.

--
Cheers, 
David 
Web: http://www.satsignal.eu
Reply to
David Taylor

In a comp.sys.raspberry-pi message, Tue, 17 Dec 2013 22:55:04, Martin Gregorie posted:

At one stage, I used an Elliott 905, with only paper tape - a 250 char/sec reader, and a punch (and console TTY, maybe?).

By sticking a short program onto the end of the Algol compiler, the compiler could be persuaded to read from a BS4421 interface, initially with a 1000 char/sec reader. By instead connecting the BS4421 to the site Network, a speed of (IIRC) about 6000 char/sec could be obtained.

Earlier, I used an ICT/ICL 1905. Its CPU had two features not commonly found in modern machines:

(1) A machine-code instruction "OBEY", (2) A compartment which in ours stored the site engineer's lunch.

--
 (c) John Stockton, nr London, UK.  Mail via homepage.  Turnpike v6.05  MIME. 
  Web   - FAQqish topics, acronyms and links; 
  Astro stuff via astron-1.htm, gravity0.htm ; quotings.htm, pascal.htm, etc.
Reply to
Dr J R Stockton

This build of which you speak is the problem with that approach: you'll have to recompile the program before you can start debugging the problem, while I can simply ask the user to set the debug flag, do it again and unset the debug flag.

Your recompile to turn assertions back on can take days in a real life situation because you may need to do full release tests and get management buy-in before you can let your user run it on live data. Alternatively, it can take at least as long to work out what combo of data and user action is needed to duplicate the bug and then make it happen on a testing system. Bear in mind that Murphy will make sure this happens on sensitive data and that as a consequence you'll have hell's delight getting enough access to the live system to work out what happened, let alone being able to get hold of sufficient relevant data to reproduce the problem.

Two real world examples. In both cases we left debugging code in the production system:

(1) The BBC's Orpheus system dealt with very complex musical data and was used by extremely bright music planning people. I provided a debug control screen for them so they could instantly turn on debug, repeat the action and turn debug off: probably took 15-20 seconds to do and I'd get the diagnostic output the next day. A significant number of times the problem was finger trouble, easy to spot because I had their input and easy to talk them through it too. If it was a genuine bug or something that needed enhancement, such as searching for classical music works by name, I had absolutely all the information we needed to design and implement the change: input, program flow, DB access, and output.

(2) We also left debugging in a very high volume system that handled call detail records for a UK telco. This used exactly the debug enabling method I showed earlier and yet it still managed to process 8000 CDRs/sec (or 35,000 phone number lookups/sec if you prefer), and that was back in 2001 running on a DEC Alpha box. As I said, the overheads of even a few tens of "if (debug)" tests per program cycle were invisible in the actual scheme of things.

My conclusion is that recompiling to remove well designed debugging code, without measuring the effectiveness of doing it, is yet another example of premature optimization.

--
martin@   | Martin Gregorie 
gregorie. | Essex, UK 
org       |
Reply to
Martin Gregorie

Indeed, but why bother unless you have actual measurements that let you quantify the trade-off between the performance increase of removing it and improved problem resolution in the live environment?

--
martin@   | Martin Gregorie 
gregorie. | Essex, UK 
org       |
Reply to
Martin Gregorie

You may have bugs, I don't! :-)

Reply to
mm0fmf

On Wed, 18 Dec 2013 23:45:02 +0000, mm0fmf declaimed the following:

It's an undocumented feature...

--
	Wulfraed                 Dennis Lee Bieber         AF6VN 
    wlfraed@ix.netcom.com    HTTP://wlfraed.home.netcom.com/
Reply to
Dennis Lee Bieber

In exams they do, and for documentation, but most coders and the like, especially students, are lazy about that and want to play with code, not write things down.

It is not the tools but the tool using them, no matter what the training.


--
Paul Carpenter          | paul@pcserviceselectronics.co.uk 
    PC Services 
  Raspberry Pi Add-ons 
 Timing Diagram Font 
  GNU H8 - compiler & Renesas H8/H8S/H8 Tiny 
 For those web sites you hate
Reply to
Paul

I don't propose to make tools slower, though maybe a bit more difficult to use, yes. What I don't like is single-stepping etc. It encourages fixing boundary errors by just adding a check or an offset, and also makes developers believe that they can get a correct algorithm by just trying test cases until it looks OK.

Reply to
Rob

The 'IPCC' approach to coding... ..I'll get my coat..

--
Ineptocracy 

(in-ep-toc'-ra-cy) - a system of government where the least capable to 
lead are elected by the least capable of producing, and where the 
members of society least likely to sustain themselves or succeed are 
rewarded with goods and services paid for by the confiscated wealth of a 
diminishing number of producers.
Reply to
The Natural Philosopher

Many thanks for that, Gregor. I'll have a play. I did see that 3.10.23+ was now the current version - and that it has drivers for DVB-T sticks. Apart from that, anything worthwhile in 3.10? Would I need to recompile my customised NTP?
--
Cheers, 
David 
Web: http://www.satsignal.eu
Reply to
David Taylor

Why not insist on them writing proper test cases before writing or compiling any code? 'Proper' involves specifying both inputs and outputs (if textual output, to the letter) and including corner cases and erroneous inputs as well as straightforward clean-path tests.

I routinely do that for my own code: write a test harness and scripts for it. These scripts include expected results either as comments or as expected results fields which the test harness checks.
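One possible shape for such a harness, purely as an illustration (the function under test and the cases are invented for the example): each test case carries its input and its expected result, and the harness reports any deviation.

#include <stdio.h>
#include <math.h>

static double to_celsius(double f) { return (f - 32.0) * 5.0 / 9.0; }

struct test { double input; double expected; };

int main(void)
{
    const struct test cases[] = {
        {  32.0,   0.0 },   /* clean path: freezing point     */
        { 212.0, 100.0 },   /* clean path: boiling point      */
        { -40.0, -40.0 },   /* corner case: the scales cross  */
    };
    const size_t ncases = sizeof cases / sizeof cases[0];
    size_t i;
    int failures = 0;

    for (i = 0; i < ncases; i++) {
        double got = to_celsius(cases[i].input);
        if (fabs(got - cases[i].expected) > 1e-9) {
            printf("FAIL case %zu: expected %g, got %g\n",
                   i, cases[i].expected, got);
            failures++;
        }
    }
    printf("%s\n", failures ? "some tests failed" : "all tests passed");
    return failures != 0;
}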

--
martin@   | Martin Gregorie 
gregorie. | Essex, UK 
org       |
Reply to
Martin Gregorie

Either that's pure bullshit or you don't test your code properly.

--
martin@   | Martin Gregorie 
gregorie. | Essex, UK 
org       |
Reply to
Martin Gregorie
