DDK from NI

Hi everyone,

I'm wondering whether it's feasible to write our own Linux driver using the DDK (Driver Development Kit) from National Instruments, or whether we'd be better off buying a new PCIe card with off-the-shelf Linux drivers.

It seems that the RLP architecture they offer may make it relatively easy to create your own driver for your own platform. Does anyone have any experience with that?

Currently we have a PCIe-6363, which cost us ~$3,000, plus an absurd ~$4,000 for a LabVIEW update (crap!).

Any comment is appreciated.

Al

--
A: Because it messes up the order in which people normally read text. 
Q: Why is top-posting such a bad thing? 
Reply to
alb

As far as I'm concerned, NI was over and done with when they wanted me to pay *again* for a device driver rendered inoperative by a system update.

Jeroen --No NI-- Belleman

Reply to
Jeroen Belleman

Our best customer is a big aerospace conglomerate: jet engines, FADECs, aircraft power systems, helicopters, rockets. Their test rigs run a realtime Linux and their own scripting language that lets end users write their own realtime test scripts that stimulate and measure and log things. They basically sneer at LabView. We make i/o gear for them, a lot of VME and increasingly Ethernet boxes. [1]

I think LabView makes more sense for small benchtop apps. My guys do thousands of channels of very mixed, exotic i/o and run many functions on rigid, complex, millisecond-range time schedules.

If you email me, I can connect you with the corporate Fellow who invented and oversees their software architecture. He would probably give you a summary of what they do.

[1] We recently did an ethernet-based complex transducer simulator using the MicroZed board as the compute platform. Worked great.

formatting link

--
John Larkin                  Highland Technology Inc 
www.highlandtechnology.com   jlarkin at highlandtechnology dot com    
Reply to
John Larkin

The main problem with Labview IMO is that it's as hard to verify and debug as a large spreadsheet. It's also very easy to hide ignorance behind all those pretty lines and boxes.

My most recent experience was with a guy who wrote a digital lock-in application in Labview, which didn't work very well. Turns out that he was trying to extract a sine wave from the data (which wasn't very sinusoidal) using _curve_fitting_, all in Labview. He had zero idea of signals and systems, and didn't want to listen when I told him to use orthogonality instead. I gave up trying to sort out all the buried treasure in that code, and am about to do a brain transplant on the box.
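What "use orthogonality" means here is just a few lines, not a curve fit. A minimal sketch of a digital lock-in doing quadrature projection (the sample rate, frequency, and amplitudes below are made up for illustration):

```python
import math

def lockin(samples, f_ref, f_samp):
    """Digital lock-in: project the signal onto quadrature references.

    Over an integer number of reference periods, orthogonality rejects
    everything except the component at f_ref -- no curve fitting, and
    no assumption that the input is sinusoidal.
    """
    n = len(samples)
    i_acc = q_acc = 0.0
    for k, x in enumerate(samples):
        phase = 2.0 * math.pi * f_ref * k / f_samp
        i_acc += x * math.cos(phase)
        q_acc += x * math.sin(phase)
    # The factor 2/n recovers the amplitude of the f_ref component.
    i = 2.0 * i_acc / n
    q = 2.0 * q_acc / n
    return math.hypot(i, q), math.atan2(q, i)

# A deliberately non-sinusoidal input: fundamental plus a strong 3rd
# harmonic. 1000 samples at 10 kHz = exactly 10 periods of 100 Hz.
fs, f0, n = 10000.0, 100.0, 1000
sig = [1.5 * math.sin(2 * math.pi * f0 * k / fs) +
       0.8 * math.sin(2 * math.pi * 3 * f0 * k / fs) for k in range(n)]
amp, ph = lockin(sig, f0, fs)
print(round(amp, 3))   # -> 1.5, the harmonic is rejected exactly
```

The harmonic lands in an orthogonal DFT bin, so it contributes nothing to either accumulator; a least-squares sine fit on the same data would be fighting it the whole way.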

Fixing Labview code is pure turd-polishing.

Cheers

Phil Hobbs

--
Dr Philip C D Hobbs 
Principal Consultant 
Reply to
Phil Hobbs

I just did my first project using the BeagleBone and the PRU processor that is part of the chip. I needed to replace an ancient ISA-bus DMA card, as well as the rest of the aging Windows 95 computer. It took a little while to completely understand the PRU, but it worked great, and it shows how to make various custom interfaces that would be too fast for the ARM processor to handle directly.

What I was doing was a computer retrofit of a laser photoplotter I built in 1996. It needs DMA service of a byte every 5 us; there is no buffer in the photoplotter, so the DMA has to be totally deterministic. I had been sending it one bit per byte in the earlier scheme, which resulted in insanely large files (1 MByte/sq inch). In the BeagleBone implementation, it gets the data in run-length-encoded form and unpacks it in real time in the PRU. The PRU is still twiddling its thumbs most of the time waiting for the next request pulse.
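The unpacking itself is trivial work for the PRU. A host-side sketch of the idea in Python (the (count, byte) pair format here is hypothetical; the real PRU firmware defines its own encoding):

```python
def rle_encode(raster):
    """Pack a raster line as (count, byte) pairs, runs capped at 255.
    Photoplotter data is mostly long runs of identical bytes, which is
    why the files shrink so dramatically."""
    out = []
    i = 0
    while i < len(raster):
        j = i
        while j < len(raster) and raster[j] == raster[i] and j - i < 255:
            j += 1
        out.append((j - i, raster[i]))
        i = j
    return out

def rle_decode(pairs):
    """What the decoder does per 5 us request: emit the next byte of
    the current run, fetching a new pair when the run is exhausted."""
    out = []
    for count, byte in pairs:
        out.extend([byte] * count)
    return out

# A mostly-blank scanline: 1024 bytes collapse to 5 pairs (10 bytes).
line = [0x00] * 500 + [0xFF] * 24 + [0x00] * 500
packed = rle_encode(line)
assert rle_decode(packed) == line
print(len(line), "->", 2 * len(packed), "bytes")   # 1024 -> 10 bytes
```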

I can see using the PRU to accelerate a number of I/O tasks that would normally need an FPGA. I did a couple of projects before using the BeagleBoard, and one recently with the BeagleBone, that did not need fast or deterministic I/O.

Jon

Reply to
Jon Elson

Buy him this:

formatting link

As I've been ranting lately, most recent EE grads don't know much about signals & systems.

--
John Larkin         Highland Technology, Inc 

jlarkin att highlandtechnology dott com 
Reply to
John Larkin

It's interesting to use a board like this as a component. Saves a huge amount of hassle.

The MicroZed has a dual-core ARM processor. I'd love to tell Linux to run my realtime app exclusively on one core, and let the other one do the OS and Ethernet and blink LEDs and stuff. We haven't figured out how to do that yet.

--
John Larkin         Highland Technology, Inc 

jlarkin att highlandtechnology dott com 
Reply to
John Larkin

This individual is in his 50s, and has a Ph.D. in Industrial Engineering, whatever that is.

Cheers

Phil Hobbs

--
Dr Philip C D Hobbs 
Principal Consultant 
Reply to
Phil Hobbs

sched_setaffinity() ought to be able to restrict the realtime process to run only on one CPU core. You can apparently use the "taskset" utility to launch the app with the affinity pre-set.

I don't think that this, by itself, will keep *other* processes from using that core.

To do the latter, you'll need to use the "control groups" mechanism. You'd need to create two control groups, with different values in the "cpuset" subsystem control - one for Core A and one for Core B.

After creating these two control groups, run a script which locates every process on the system and moves it to the control group for Core A.

Then, when you launch your realtime process, get its PID and use this to move it to the control group for Core B.

You could add a startup-time script to create these groups and then move "init" into the Core A control group. This would cause everything else it spawns to automatically start out in that group.

formatting link
should have enough information to let you do this, I think.
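A minimal sketch of the affinity half in Python (os.sched_setaffinity() wraps the same syscall; the cgroup commands in the comments are illustrative only and assume a v1 cpuset controller mounted at /sys/fs/cgroup/cpuset, run as root):

```python
import os

# sched_setaffinity(), as described above: pid 0 means "the calling
# process". Pinning to core 0 here only so the sketch runs on any
# machine; a realtime app would pin itself to the core it reserved.
os.sched_setaffinity(0, {0})
print(os.sched_getaffinity(0))   # -> {0}

# Equivalently, launch with the affinity pre-set:
#   taskset -c 1 ./rt_app
#
# Keeping *other* processes off that core is the control-groups step:
#   mkdir /sys/fs/cgroup/cpuset/coreA /sys/fs/cgroup/cpuset/coreB
#   echo 0 > /sys/fs/cgroup/cpuset/coreA/cpuset.cpus
#   echo 1 > /sys/fs/cgroup/cpuset/coreB/cpuset.cpus
#   echo 0 > /sys/fs/cgroup/cpuset/coreA/cpuset.mems   # mems must be set
#   echo 0 > /sys/fs/cgroup/cpuset/coreB/cpuset.mems   # before adding tasks
#   for pid in $(ps -eo pid=); do echo $pid > /sys/fs/cgroup/cpuset/coreA/tasks; done
#   echo $RT_PID > /sys/fs/cgroup/cpuset/coreB/tasks
```

Note that even with all of this, the isolated core still sees kernel housekeeping (timer ticks, IPIs); the isolcpus= boot parameter takes it further if that matters.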

Reply to
David Platt

There are plenty of physics professors who don't know how a lockin works. (besides plugging in wires here and there.)

Reply to
George Herold

On a sunny day (Wed, 07 May 2014 16:23:42 -0700) it happened John Larkin wrote:

man cpuset

Reply to
Jan Panteltje

On a sunny day (Wed, 07 May 2014 16:23:42 -0700) it happened John Larkin wrote:

Actually I meant man taskset

Reply to
Jan Panteltje

I think it has to do with designing the pretty enclosures wot our stuff goes into. ;-)

Reply to
Spehro Pefhany

And Labview code that's full of bugs. (But I repeat myself.)

Cheers

Phil Hobbs

--
Dr Philip C D Hobbs 
Principal Consultant 
Reply to
Phil Hobbs

A few years ago, I did the hardware for a fairly large project.. all kinds of sensors and distributed control bits.. the supervisory control was a HMI-processor box running Labview.. my stuff was only firmware, not the GUI etc. They ended up ripping out the supervisory stuff because the Labview stuff could not be made reliable (at least by the specialist consultant they hired).

How do you handle synchronization in your bespoke code? Getting sub-millisecond synchronization of random instrumentation is not a particularly easy problem. IEEE-1588, LXI and the like are a start, but they're not easy to work with. CERN has come up with their own stuff (White Rabbit), which is kind of interesting (and open).

Best regards, Spehro Pefhany

Reply to
Spehro Pefhany

Normally I give the firmware a list of things to do, and it goes away and does them. Interleaved control and data acq is a problem best handled in hardware, preferably MSI or a CPLD if there's critical timing, such as in a digital lock-in or other SDR-type thing. That's one reason I wanted to get back up to speed in modern programmable logic.

My boxcar lock-in gizmo is going in three products so far. ;)

An interrupt-driven MCU that doesn't have a lot else to do is good for less precise tasks.

Synchronizing separate instruments, especially interleaved control and data acq, is hard.

Cheers

Phil Hobbs

--
Dr Philip C D Hobbs 
Principal Consultant 
Reply to
Phil Hobbs

Except that most of them aren't pretty.

--
John Larkin         Highland Technology, Inc 

jlarkin att highlandtechnology dott com 
Reply to
John Larkin

Yes, that's what I'm talkin' about. Different instruments, separated by meters (or maybe much more), perhaps running at different sample rates with different clocks and you want to combine the data and provide reconstructed control outputs with jitter and time offset very low (nanoseconds to microseconds), despite large latencies and large jitter in the communication links.

Synching things on a board or in a micro "ought to be easy" for a competent embedded designer. Not so easy for someone without an embedded background.

Best regards, Spehro Pefhany

Reply to
Spehro Pefhany

We invented a bus to sync arbitrary waveform generators and such. It's a single coax that can daisy-chain from a master to multiple slaves, with one slave terminating. We send a continuous clock that everyone locks to, and occasionally slip in messages and triggers.

--
John Larkin         Highland Technology, Inc 

jlarkin att highlandtechnology dott com 
Reply to
John Larkin

One method is to sprinkle samplers on the front of the instruments, and control those separately. Only works for some things, of course.

Cheers

Phil Hobbs

--
Dr Philip C D Hobbs 
Principal Consultant 
Reply to
Phil Hobbs
