Errors when cross-compiling the kernel

Trying to cross-compile the Raspberry Pi kernel on a Debian 7 virtual PC. I've got quite a way through the process, and it seems to start compiling, but I'm getting the following error:

HOSTLD  scripts/genksyms/genksyms
CC      scripts/mod/empty.o
/home/david/kern/tools/arm-bcm2708/arm-bcm2708-linux-gnueabi/bin/../lib/gcc/arm-bcm2708-linux-gnueabi/4.7.1/../../../../arm-bcm2708-linux-gnueabi/bin/as: error while loading shared libraries: libz.so.1: cannot open shared object file: No such file or directory

Any ideas? I can't find libz.so anywhere....

Thanks, David

--
Cheers, 
David 
Web: http://www.satsignal.eu
Reply to
David Taylor

Maybe this page can help:

formatting link

Reply to
Paul Berger

It's in the zlib1g package, maybe this one isn't installed?
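
If in doubt, you can ask the packaging tools which package ships the library (standard Debian commands, nothing RPi-specific; apt-file needs installing and an "apt-file update" first):

dpkg -S libz.so.1            # searches the packages already installed on the host
apt-file search libz.so.1    # searches the whole archive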

gregor

--
 .''`.  Homepage: http://info.comodo.priv.at/ - OpenPGP key 0xBB3A68018649AA06 
 : :' : Debian GNU/Linux user, admin, and developer  -  http://www.debian.org/ 
 `. `'  Member of VIBE!AT & SPI, fellow of the Free Software Foundation Europe 
   `-   NP: Bettina Wegner: Auf der Wiese
Reply to
gregor herrmann

Thanks, Paul. I wouldn't have known about that page, as I'm more of a beginner with Linux. Now to see why it's not in the RPi download. The git fetch/checkout "download" was faulty, so I had to use the .tar download. But, the exact same .tar download /did/ compile on the RPi.

Right, more progress. From searching with Google (yes, I should have done this first, but I thought it was just me being ham-fisted) it seems that the problem is that libz.so is actually a host library, and not a Raspberry Pi one. Further, as I'm using 64-bit Linux on a virtual PC, I need to install the 32-bit version of certain libraries, so the next part of the magic spell (it seems like that at times!) is:

sudo dpkg --add-architecture i386   # enable multi-arch
sudo apt-get update

Then run:

sudo apt-get install ia32-libs
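
On later Debian releases ia32-libs was split up and may not be installable, so a narrower alternative (the package names here are my guess at the usual ones, not something from David's notes) is to pull in only the 32-bit libraries the cross-assembler complains about:

sudo apt-get install zlib1g:i386        # 32-bit libz.so.1 for the 32-bit "as"
sudo apt-get install libstdc++6:i386    # commonly needed by the same 32-bit toolchains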

It's taken many days to get this far!

--
Cheers, 
David 
Web: http://www.satsignal.eu
Reply to
David Taylor

Thanks, Gregor. As I mentioned to Paul, installing the 32-bit host libraries on the 64-bit Linux I was using fixed the compile problem. It now remains to be seen whether I am brave enough to try my own cross-compiled kernel on a real system. Yes, I will be using a spare SD card imaged from the existing working one! Very nice to be able to "backup" and "restore" cards on a house PC.

--
Cheers, 
David 
Web: http://www.satsignal.eu
Reply to
David Taylor

When I were a lad you had to write the compiler...

--
Ineptocracy

(in-ep-toc'-ra-cy) - a system of government where the least capable to lead are elected by the least capable of producing, and where the members of society least likely to sustain themselves or succeed, are rewarded with goods and services paid for by the confiscated wealth of a diminishing number of producers.

Reply to
The Natural Philosopher

didn't we already do all that a couple of months ago?

Reply to
Guesser

My first programming task was updating the Assembler on an IBM 1130 to accept free-format input, to suit the paper tape the department was using rather than punched cards.

It was, IIRC, easier than dealing with Linux!

--
Cheers, 
David 
Web: http://www.satsignal.eu
Reply to
David Taylor

Well yeah. IBM 1130. In those days machines were too simple and stupid to look out for themselves. GE-415 the same. Do whatever it was told. Had no choice.

Mel.

Reply to
Mel Wilson

Anything you're used to is easy. Anything you're not used to is hard. ;-)

--
-michael - NadaNet 3.1 and AppleCrate II: http://home.comcast.net/~mjmahon
Reply to
Michael J. Mahon
[]

Yes, there's an element (or 14) of truth in that. But waiting 15 minutes (or 5 hours if you compile on the RPi) to find that something is wrong is definitely not as productive as using e.g. Delphi on the PC. Virtually instant, and you can check out each change step by step.

I wrote up what I've found so far:

formatting link

BTW: On the 1130 the only error message you got back was "Error". Not even a line number....

--
Cheers, 
David 
Web: http://www.satsignal.eu
Reply to
David Taylor

Always compile first on a native target, if only to check the code for syntax errors...

Always use Make, to ensure that you don't recompile more than you need to at any given stage...

Then, when you have the code working in a simulator/emulator, burn your ROM or Flash...
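
For the kernel build that mostly comes for free: touch one source file, re-run make, and only the affected object and the final links get rebuilt rather than the whole tree. Roughly (the file name is purely illustrative, and ${CCPREFIX} is assumed to hold the cross-compiler prefix):

touch drivers/gpio/gpio-bcm2708.c                 # hypothetical one-file change
ARCH=arm CROSS_COMPILE=${CCPREFIX} make zImage    # recompiles that object, relinks, nothing else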

--
Ineptocracy 

(in-ep-toc'-ra-cy) - a system of government where the least capable to lead are elected by the least capable of producing, and where the members of society least likely to sustain themselves or succeed, are rewarded with goods and services paid for by the confiscated wealth of a diminishing number of producers.
Reply to
The Natural Philosopher

On 15/12/2013 15:31, The Natural Philosopher wrote: []

Good in theory, but....

When a compile takes a significant part of the day (as with compiling the kernel on the RPi), making multiple runs is extremely time consuming! Unfortunately, even if you want to change just one option, if it's your first compile it still takes almost all the working day.

What simulator would you recommend for the Raspberry Pi kernel?

BTW: the problem arises because the supplied kernel was compiled with tickless, which makes the kernel-mode GPIO/PPS work very poorly. Changing this one flag makes a worthwhile improvement, bringing the averaged NTP jitter down from 3.9 microseconds to 1.2 microseconds, with similar improvements in offset, and correcting an NTP reporting error.
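
For anyone wanting to flip the same option: the flag in question is the tickless-idle one, and in the kernel .config it looks roughly like this (symbol names vary by branch, so treat it as a sketch rather than an exact diff):

# as shipped (tickless / dynamic ticks enabled):
CONFIG_NO_HZ=y
# after the change (periodic tick), set via menuconfig or by editing .config and re-running "make oldconfig":
# CONFIG_NO_HZ is not set
# on the rpi-3.10.y branch the same choice is spelled CONFIG_NO_HZ_IDLE / CONFIG_HZ_PERIODIC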

Cheers, David

--
Cheers, 
David 
Web: http://www.satsignal.eu
Reply to
David Taylor

On Sun, 15 Dec 2013 17:43:47 +0000, David Taylor declaimed the following:

I spent nearly 6 months in the early 80s in an environment where two builds a day (for a single application) was a good day. Worse was having to message the sysop to "kill the rabble" (RABL, for ReenABLe -- a batch job I'd written designed to clear out "connection" state from a "database"); it meant I'd concluded the entire database needed to be rebuilt (not an operational database -- though the application itself wasn't considered a database app; it was a requirements traceability tool, lacking dynamic table creation -- a few added capabilities would have given it relational algebra).

We were porting a FORTRAN-IV application to something I call FORTRAN-minus-2. The ported code ended up filled with variables named inx, linx, jinx, minx, etc., as

call xyz(a-1, b+2, a+b)

had to be converted to

inx = a-1
linx = b+2
jinx = a+b
call xyz(inx, linx, jinx)

as the compiler could not handle expressions as arguments in a subroutine (or function) call.

--
	Wulfraed                 Dennis Lee Bieber         AF6VN 
    wlfraed@ix.netcom.com    HTTP://wlfraed.home.netcom.com/
Reply to
Dennis Lee Bieber

try coding in C for a 6809 then ...with 256k of memory in paged banks... all the library code was 'select which ROM bank to use, call the function, get something back in the registers, restore the ROM bank that called you and return'.

We had a DSP co-processor too. A 400 MHz digital scope, that was. I'd have KILLED for a Pi.

--
Ineptocracy 

(in-ep-toc'-ra-cy) - a system of government where the least capable to lead are elected by the least capable of producing, and where the members of society least likely to sustain themselves or succeed, are rewarded with goods and services paid for by the confiscated wealth of a diminishing number of producers.
Reply to
The Natural Philosopher

I agree with you. The kids today can only develop in an IDE where they can just compile&run with a keypress and have results in a second. We used to have to wait for hours before the project was compiled and new tests could be done.

In fact my first experience with programming was in an RJE environment where you had to submit your source (on cards) and got it back the next working day with a listing: either run results or syntax errors.

I can tell you this makes you think twice before you code something. My first program (a trivial one, of course) in fact compiled OK on the first try! But that was after spending most of the afternoon checking and double-checking (and more) to make sure it was OK, and after the teacher had assured me that it would be impossible to get it right the first time.

Having quick turnaround for compile&run IMHO leads to poor software quality, because the tendency is to get functionality OK by trial and error (running it until it no longer fails with the test cases at hand) instead of by carefully looking at the algorithm and its implementation.

Reply to
Rob

Glad you said this, because I was about to. ;-)

With one turnaround per day, plus a core dump (yes, it was core memory) on execution errors, *every* data structure in memory was painstakingly examined to find multiple problems per compile-execute cycle. Of course the stack--and the stack "residue" beyond the current top-of-stack--was one of the first data structures examined forensically.

Any detail that was not exactly as expected resulted in either finding a latent error or revising my understanding of the program's behavior, or (often) both.

The result was that after several cycles, my understanding of the implications of the code I had written and its interactions with the hardware/software environment was richly improved. My confidence in the code that worked was substantiated, and my corrections to code that failed were well thought out.

I sometimes found both compiler and OS bugs as well as my own, many of which did not actually prevent my code from getting correct answers!

When computer cycles are precious, brain cycles are required to wring the maximum amount of information from each trial run. The effects on both code quality and programmer confidence (and humility) are remarkable.

My experience managing today's programmers is that they frequently have no idea what their code actually does during execution. They are often amazed when they discover that their use of dynamic storage allocation is wasting 90% of the allocated memory, or that a procedure is being executed two orders of magnitude more frequently than they expected! And their tools and tests, combined with their inaccurate understanding of their code's behavior, prevent them from finding out.

They are very poorly prepared to program for performance, since, for example, they have no practical grasp that a cache miss costs an order of magnitude more than a hit, and a page miss, perhaps four orders of magnitude.

Interactive programming does not preclude the development of craft, but it apparently significantly impedes it.

All this becomes practically hopeless in modern application environments where one's code constantly invokes libraries that call libraries, etc., etc., until "Hello, world" requires thirty million instructions and has a working set of a hundred megabytes!

Such progress in the name of eye candy...

--
-michael - NadaNet 3.1 and AppleCrate II: http://home.comcast.net/~mjmahon
Reply to
Michael J. Mahon

On 16/12/2013 09:15, Rob wrote: []

Yes, I also remember the days of queuing, or waiting overnight for a run's output to be returned.

I disagree with you about today's development, though. My experience with C/C++ suggests that it's too slow: having to wait a few minutes to see the effect of a change encourages developers to change too much at once, rather than a line at a time. I find that with Delphi, where it really is the instant compile-and-run you criticise, I make much smaller changes and can be sure that each change has worked before introducing the next.

I hope the Raspberry Pi encourages similar developments.

(And I think that algorithms are very important. Many people seem to want to do (or to get the compiler to do) minor optimisations of code which may work well only on one processor family, whereas my own experience is that using a profiler to find out where the delays are /really/ happening most often points to regions of the program where I was not expecting any delay at all: either a less than optimum algorithm or, in one case, some debug code which had been left in.)
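
For the C/C++ case, the kind of profiling run I mean is nothing fancier than the standard GNU tools (the program name is just an example):

gcc -O2 -pg -o myprog myprog.c    # build with profiling instrumentation
./myprog                          # running it writes gmon.out in the current directory
gprof myprog gmon.out | less      # flat profile plus call graph: where the time really goes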

--
Cheers, 
David 
Web: http://www.satsignal.eu
Reply to
David Taylor

Short addition to your script:

With something like

#v+
# arch/arm/configs/bcmrpi_defconfig
export PLATFORM=bcmrpi
ARCH=arm CROSS_COMPILE=${CCPREFIX} make ${PLATFORM}_defconfig
#v-

you can use the default config instead of an existing one or going through menuconfig manually.

(Useful if you want to switch to e.g. the rpi-3.10.y branch and don't have an existing config as a starting point.)
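
From there the rest of the cross-build is the usual sequence (assuming ${CCPREFIX} is already set to the full cross-compiler prefix; ../modules is purely an example destination for the modules):

#v+
ARCH=arm CROSS_COMPILE=${CCPREFIX} make -j4 zImage modules
ARCH=arm CROSS_COMPILE=${CCPREFIX} make INSTALL_MOD_PATH=../modules modules_install
#v-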

gregor

--
 .''`.  Homepage: http://info.comodo.priv.at/ - OpenPGP key 0xBB3A68018649AA06 
 : :' : Debian GNU/Linux user, admin, and developer  -  http://www.debian.org/ 
 `. `'  Member of VIBE!AT & SPI, fellow of the Free Software Foundation Europe 
   `-   NP: Nick Cave And The Bad Seeds: Fable Of The Brown Ape
Reply to
gregor herrmann

Out of curiosity, which OS were you using?

I've used uniFlex on SWTPc boxes but don't remember jumping through those hoops (though we were writing in the Sculptor 4GL, which compiled to an intermediate interpreted form, and bloody fast too, rather than all the way to binary).

I've also got considerable time with OS-9, though on a 68000 rather than as level 1 or 2 on a 6809, but am certain that, as level 2 managed memory in 4K chunks, it was nothing like as convoluted as the stuff you're describing. In fact, once I'd replaced the Microware shell with the EFFO one, it was a real pleasure to use.

--
martin@   | Martin Gregorie 
gregorie. | Essex, UK 
org       |
Reply to
Martin Gregorie
