Embedded Linux and PCI (over AT91?)

Hello,

I'm currently making the initial design decisions (and hence the most important ones) about a project that will involve a processor running Linux. The project's initial definition is to take data from an MPEG video chipset and store the stream in one long file on a Disk-on-Key.

So the concept is simple: Write a driver for whatever video hardware we're going to choose (still not defined), and make use of the Linux environment to handle all the USB and filesystem magic. Left on the todo-list is to write a simple application to copy the data. Neat and simple. Or so I thought.
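
To make "simple application" concrete, here is a minimal sketch of the copy loop I have in mind. The device node and the mount point are made up; they depend on whatever video hardware and driver we end up with:

    /* copy the encoded stream from the (hypothetical) video driver's
     * character device to a file on the mounted Disk-on-Key */
    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    int main(void)
    {
        int in  = open("/dev/mpeg0", O_RDONLY);                 /* made-up node  */
        int out = open("/mnt/dok/stream.mpg",
                       O_WRONLY | O_CREAT | O_APPEND, 0644);    /* made-up mount */
        char buf[65536];
        ssize_t n;

        if (in < 0 || out < 0) {
            perror("open");
            return EXIT_FAILURE;
        }
        while ((n = read(in, buf, sizeof buf)) > 0) {
            if (write(out, buf, (size_t)n) != n) {
                perror("write");
                return EXIT_FAILURE;
            }
        }
        close(out);
        close(in);
        return 0;
    }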

The first headache was to choose the processor. I was told that ARM9 is the preferred architecture (for historic reasons, so this issue is open), and I'm pretty much in favor of the Atmel AT91RM9200, mainly because it appears to have some community mileage.

And now to the point: I'd like the processor to talk with some of its peripherals over a PCI bus. For example, the on-chip USB 2.0 is only full-speed, which may be too slow for a video stream, so I'd like the option to connect a high-speed USB chipset over the PCI bus (I suppose such are available).
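
A rough back-of-the-envelope check (the stream bitrate is my assumption, since the video chipset isn't chosen yet):

    full-speed USB:    12 Mbit/s on the wire, at most ~9-10 Mbit/s of bulk payload, usually less
    SD MPEG-2 stream:  roughly 2-10 Mbit/s depending on quality
    high-speed USB:    480 Mbit/s on the wire, tens of MB/s in practice

So full speed is marginal at best for the higher bitrates, which is why I want the high-speed option.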

Another thing is that my company is pretty responsive to customers' bizarre demands, so we need the flexibility to quickly acquire some Linux-supported PCI chipset that implements some unexpected interface (FireWire? IDE? SCSI?), make some small changes to the board (basically hook the device onto the on-board bus), and have the product up quickly and painlessly.

In a survey I made, I found out that this is not something people do: Very few processors I found have 16-bit PCMCIA, which is the closest I got to PCI. I think I saw some PCI connectivity on Intel's IOP (XScale), but there were too many Gigas mentioned in the datasheet to make it look like a fitting solution for a small module.

So this leads me to basically two questions:

  1. Is trying to connect a PCI bus to an AT91 inherently stupid? If so, why? (I mean, if it were such a good idea, I would expect it to be implemented on development boards.)

  2. What is the simplest way to get any simple processor to have a PCI bus, in such a way that Linux will recognize it naturally? (Assuming the answer to question #1 is "no".) A sketch of what I mean by "naturally" follows below.
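
By "recognize it naturally" I mean that once a host bridge and its Linux support are in place, a peripheral driver is just an ordinary pci_driver, the same code you would write on a PC. A minimal sketch (the vendor/device IDs and names are placeholders):

    /* placeholder PCI driver skeleton -- IDs and names are made up */
    #include <linux/module.h>
    #include <linux/pci.h>

    static const struct pci_device_id example_ids[] = {
        { PCI_DEVICE(0x1234, 0x5678) },   /* placeholder vendor/device IDs */
        { 0 }
    };
    MODULE_DEVICE_TABLE(pci, example_ids);

    static int example_probe(struct pci_dev *pdev, const struct pci_device_id *id)
    {
        return pci_enable_device(pdev);   /* BAR mapping, IRQ setup, etc. would follow */
    }

    static void example_remove(struct pci_dev *pdev)
    {
        pci_disable_device(pdev);
    }

    static struct pci_driver example_driver = {
        .name     = "example-pci",
        .id_table = example_ids,
        .probe    = example_probe,
        .remove   = example_remove,
    };

    static int __init example_init(void)
    {
        return pci_register_driver(&example_driver);
    }

    static void __exit example_exit(void)
    {
        pci_unregister_driver(&example_driver);
    }

    module_init(example_init);
    module_exit(example_exit);
    MODULE_LICENSE("GPL");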

And, by the way, the first thing I suggested was a simple i386-based single board computer. Judging by the allergic response I got, I found no reason to investigate this direction further.

Thanks in advance (and a happy new hangover) Bill

Reply to
bill.valores

There are lots of ARM processors that have good community mileage. Check whether the one you pick is supported in the mainstream kernel source.

The cm-x270 module from CompuLab is an example of a PXA270 processor with a PCI bridge chip. Apparently they got it to do bus mastering, etc. However, be aware that bolting a PCI bridge onto a simple ARM CPU with a 32-bit bus has some limitations. You will not be able to do full-bandwidth USB 2.0 through a PCI bridge on a PXA270 or AT91-class processor. You are much better off finding a CPU that includes a PCI bridge in the device. The Intel IXP4xx series does, and may be a little better for your application than an IOP CPU. You can easily test an IXP4xx by buying a Linksys NSLU2. This device includes an ARM processor and a USB 2.0 chip, which is very similar to what you want to do. I think some of the Alchemy (MIPS) CPUs also have internal PCI support.

Many of these interfaces are very high bandwidth and well beyond the capability of many ARM CPUs -- at least to support full data rates.

It's kind of like trying to bolt a semi-truck axle and wheels onto a Ford Ranger. While you can do it, it's not the best fit. The AT91 does not have enough bandwidth to process PCI data rates; therefore it is rarely done.

Use a CPU that has PCI support.

Actually, this is a good idea if you truly need the flexibility you mention. AMD (Geode) and VIA both make embedded x86 CPUs.

Cliff


Reply to
Cliff Brake

I'd avoid PCI. I suppose it's not really necessary for MPEG video, nor for other peripheral functions you might want to use.

The PC world is moving to PCI Express, in case you want to use typical PC peripherals (not really recommended for embedded projects, as these chips are not expected to be supported for many years by their manufacturers).

Moreover, I would avoid using dedicated MPEG hardware, as IMHO in embedded applications you should use as few complex chips as possible: besides cost considerations, each one carries the risk of becoming unavailable at some point in time.

I learned that Blackfin processors are often used in video applications, as they are very fast at typical DSP tasks (such as MPEG compression). There is even an exceptionally fast and rather cheap dual-core chip with two 600 MIPS cores (DSP MIPS, providing multiple results in a single instruction!). So with that you can have (µC-)Linux and MPEG compression in the same hardware (maybe even in the same chip).

Another option I would consider is using a processor in an FPGA. Here, e.g., Altera is often used for video work. The multiple "DSP units" (multiplier/adder blocks) can be used for things like MPEG compression. The peripheral interfaces can be programmed any way you like (e.g. PCI Express, by using some of the many LVDS pins). I learned that you can use either µClinux (faster) or full Linux (more "main-stream") with Altera and Xilinx FPGAs.

-Michael

Reply to
Michael Schnell

The first thing I did, of course. But when I see several people adding patches for a certain device, I get the feeling that the device has been road-tested with Linux. As opposed to looking at the driver source files (in the main kernel tree) and finding everything marked "Copyright the-vendor's-name". Since everyone can edit anything, it's a bit difficult to know who did what, but when googling for information about Linux support for a certain chip leads me only to the chip vendor (plus some I'll-get-your-Linux-working companies), I get worried.

Thanks. That's a good lead. Any idea how they did it, before I acquire one of these for reverse engineering?

I know that, and I'm fine with it. The purpose is to be able to put a product on the table quickly at a customer's request. If the USB device or hard disk runs at 10% of its speed, that's OK as long as it's fast enough for the application.

I have to admit that these processors scare me off a bit. Just looking at their development boards, I get the impression that unless I need gigatransfers, I'd better play with something easier. Nothing substantiated, just my engineering survival instincts.

The Blackfin processor is charming, but from what I understand it has no MMU (or it's not commonly used). uCLinux is out of the question. I should have mentioned that earlier.

On the other hand, TI came out with DaVinci, which runs both an ARM9 and one of these crazy DSP cores. But when we come to Linux, it's all commercial.

FPGAs have it all, except that almost no one really knows how to get them working. So yes, you can run a full-blown Linux on an FPGA, and you can integrate an MPEG compression engine in the fabric using the vast DSP resources available. Will you get a steadily working board before the blood pressure kills you? That depends on how lucky you are with your FPGA engineers. So we agree in theory, but we may differ in our levels of adventure.

Thanks for your answers, Bill.

Reply to
bill.valores

The chip is an ITE IT8152G (marked 0607-EXA MLR3C2 L). I've never found a data sheet for the device, so you will need to contact ITE directly.

The IOP line is indeed designed for high-bandwidth storage solutions. But the IXP line is a much simpler, lower-cost device which seems like it would be perfect for your application. As an example, the $80 Linksys NSLU2 uses one. Thanks to the OSS NSLU2 efforts, there is a tremendous amount of OSS technology available for the IXP processors. I would have to have some very good reasons not to use an IXP device before I went through the effort of bolting a PCI bridge onto an AT91.

Cliff

Reply to
Cliff Brake

AFAIK, there is no Blackfin with an MMU. An MMU adds some cost and reduces the processing speed, so it's very appropriate to do embedded projects without an MMU (unless there are special security demands that can only be met with memory protection).

Why?

Even with an ARM9, the MMU often is not used under Linux. In particular, task switches perform a lot better without it, due to the non-optimal cache design of the ARM9: the cache sits between the CPU and the MMU rather than between the MMU and the memory, so it needs to be completely flushed on each task switch.

I don't get your meaning here.

How did you arrive at this opinion? I have a reference list of projects with lots of Altera NIOS (processor IP core) designs, including military stuff, etc.

I do hope so, as I'm about to start such a project right now. (OK, MPEG is not part of my project, but we are considering video distribution to multiple devices, including resampling, etc.)

My FPGA designer is quite new to all this, but we are coming up to speed very fast. If it were necessary, I would be able to do all of this by myself. To get the processor running in the FPGA, you don't need to do any FPGA design: the Altera software allows you to just click together the standard components. Of course, when using non-Altera-standard stuff you need to design or buy the components. You definitely should talk to an Altera dealer's FAE and have him show you what they have (and tell you what they don't have).

For me, going with Altera is the result of really decent research, so I don't think it's an adventure.

-Michael

Reply to
Michael Schnell

Using uCLinux means that nothing is going to be really simple. There is so much free code to be reused out there, but once you want it to run under uCLinux, something bad is going to happen.

Memory protection is a blessing. Maybe it slows things down a bit, but at least you get a segmentation fault when an application does the wrong thing, rather than having the kernel crash an hour later. Not to mention what happens when software written by more than one person runs on the processor.
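
A contrived sketch of the failure mode I mean (the address is made up, purely for illustration):

    /* stray-pointer write: with an MMU the process dies immediately with
     * SIGSEGV; without memory protection the write can silently corrupt
     * whatever lives at that address, and the crash shows up much later. */
    #include <stdio.h>

    int main(void)
    {
        int *stray = (int *)0x1000;   /* bogus address, illustration only */
        *stray = 42;                  /* SIGSEGV under full Linux */
        printf("still alive\n");      /* never reached when an MMU is in use */
        return 0;
    }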

The power of Linux is community. That means that the people who maintain the code are the ones using it. It's very hard to put a finger on the difference, but you get a different feel when using software maintained by a community, as opposed to software handled by programmers who just want to close an action item and go home.

DaVinci's Linux appears to be maintained by TI. That's what I meant with commercial.

The click-click-click part in the beginning is indeed the easy part. I have no idea how well your specific piece of fun-fun software works, but from what I've seen in the FPGA world, there's always something that won't work unless you really know what you're doing (and many times nothing works. Or works only in the mornings. Or evenings. Or when no one sneezes).

You know whether a journey is an adventure only when you finish it.

And if there's something that makes me laugh, it's when I see all those fancy tools that make an FPGA design in five minutes. Indeed, you get something that works, but then there's this little annoying problem that someone is going to spend the next couple of months trying to solve.

I hope for you that I'm as wrong as one can be about FPGAs. You'll know pretty soon.

Bill

Reply to
bill.valores

How did you get this impression? Normal userland software (provided in source code) should not notice at all whether it runs on full Linux or µClinux. (Nearly) all the differences are handled in the libraries (such as libc).
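
One well-known exception that userland code does notice: without an MMU there is no fork(), only vfork(). The usual portable pattern, vfork() followed immediately by exec, behaves the same on both; a minimal sketch:

    /* vfork()+exec pattern: works identically on full Linux and µClinux */
    #include <stdio.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        pid_t pid = vfork();
        if (pid == 0) {
            /* child: exec immediately -- no other work between vfork and exec */
            execlp("ls", "ls", "-l", (char *)NULL);
            _exit(127);               /* only reached if exec fails */
        }
        if (pid > 0)
            waitpid(pid, NULL, 0);    /* parent resumes after the child execs */
        else
            perror("vfork");
        return 0;
    }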

It does prevent userland software bugs from destroying the OS. But since, in an embedded system, userland bugs prevent decent use of the system anyhow, this is not much of an additional problem. (Unless the system requires an advanced security level; but then using Linux as the single (main) OS is not recommended anyhow.)

Besides the cost, "commercial" offers have pros and cons. The "community" can be a great help, if it decides to do so; but, OTOH, if you buy something, the seller is _required_ (or can be paid) to offer support on exactly the topics you ask about.

With any technical design you need to "really know what you are doing" :). We have been doing FPGAs for at least ten years, very successfully, and have had several engineers trained in doing the designs. In the beginning the task was very similar to designing an electronic circuit and a PCB; with the newer tools (VHDL/Verilog) it's more like writing software. With the newest tools you can even write C code and convert it into an FPGA design (of course only for parts of the design, and ideally after you have debugged the C code on a processor). The only new thing now (for us) is that the processor is located within the FPGA instead of being a second complex chip. And as the processor itself is just a click-click-click thingy, I don't think the difference is that big. (For me, using a new tool chain and Linux instead of my ten-year-old home-brew OS is the challenging part.)

Thanks for your good wishes :). As we are not at all new to FPGAs, I do know what I'm talking about. Of course, doing a design in five minutes is not something anybody can hope for, but doing an FPGA is not more demanding than designing an old-fashioned PCB with the same function. In fact it's _a_lot_ easier, as the design tools do a lot of error checking for you and the programming language provides several nice ways of formulating the design. In some cases you can use ready-made IP cores instead of the ready-made complex chips that you would put on your PCB.

-Michael

Reply to
Michael Schnell
