DMA on DDR RAM main memory? - High Speed Data Logging Sought

Greetings,

I was interested in how a rapid data-gathering computer could do ADC on various sensors and then store quantities of data to disk continuously. At first I thought of DMA, but then considered the difficulty of programming it - handshaking on the bus, etc. Then I looked at RAM disks, and came across gamers' use of DDR RAM in large amounts as a substitute for a disk. One such product is "dimmdrive," which boasts transfer rates of 8000 MB/s. The advantage of a DDR main-memory drive like this: it's a fast bus near the CPU, and it's very easy to program with file I/O. DDR RAM is about $10 a GB, retail.

The question I have is: Is it possible to offload the DDR RAM main memory via a paging scheme plus DMA transfer to a disk? In other words, can the CPU continue to gather data with minimal interruption?

Someone may know of this being commercially available at low cost - hooray if true. Otherwise, any comments are sought about the feasibility of DMA on the DDR RAM.

Note: I believe it's acceptable to fill an area of RAM, and then move to another area under CPU control, while the DMA process stores the first area on disk.

One way to think about this is that the CPU + DDR RAM is a unit with good, high-speed software access. So that unit should probably be kept intact.
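
Roughly, the ping-pong scheme I have in mind might look like the minimal C sketch below - ReadADC(), the buffer size, and the file path are just placeholders, and real code would use a condition variable instead of these busy-waits:

#include <pthread.h>
#include <stdint.h>
#include <stdio.h>

#define BUFSAMPLES (1u << 20)            /* placeholder buffer size */

static uint16_t buf[2][BUFSAMPLES];      /* two ping-pong buffers */
static volatile int ready = -1;          /* index of the buffer waiting to be written */

extern uint16_t ReadADC(void);           /* hypothetical ADC read */

static void *writer(void *arg)           /* disk side, runs while the CPU keeps sampling */
{
    FILE *f = fopen("/data/capture.bin", "wb");   /* path is only an assumption */
    for (;;) {
        while (ready < 0)                /* crude wait; use a condition variable in real code */
            ;
        fwrite(buf[ready], sizeof buf[0][0], BUFSAMPLES, f);
        ready = -1;
    }
    return arg;
}

int main(void)
{
    pthread_t t;
    pthread_create(&t, NULL, writer, NULL);
    for (int cur = 0; ; cur ^= 1) {      /* fill one area while the other is stored to disk */
        for (size_t i = 0; i < BUFSAMPLES; i++)
            buf[cur][i] = ReadADC();
        while (ready >= 0)               /* wait until the previous buffer has been drained */
            ;
        ready = cur;
    }
}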

TIA, j

Reply to
haiticare2011

I think I'd look at Linux Mint 15, XFCE edition. If you're currently using Windows, Linux is FAST. This edition has a very friendly menu, the desktop is uncluttered, and to date I've had no issues.

Reply to
Wayne Chirnside

Thanks - there's a reason the top 10 supercomputers all use Linux, not to mention Google. What a group effort Linux is. I wonder what happened to MSFT's effort to squash it?

Reply to
haiticare2011

Well, they did their best with Windows 8 / 8.1 when they jacked the boot sector. When the admin here couldn't recover her Windows 7 backed-up files in 8.1, Linux came to the rescue. I dug into their weird backup format and, what do you know, there was an ordinary zip file about three directories in.

So, booting into Legacy boot, I brought up Linux, recovered her files in pristine shape, emailed the zip to her, then unzipped it into Windows after taking the machine off of Legacy boot.

Before I booted back to Windows I brought one .wps file up in the LibreOffice suite, and damned if the formatting wasn't dead-bang perfect.

Reply to
Wayne Chirnside

Linux is cheaper, for one thing. For performance, it's a mixed bag. My pet peeve, the Linux thread scheduler, for instance, makes it probably 20% slower than Windows on the same cluster code.

Cheers

Phil Hobbs

--
Dr Philip C D Hobbs 
Principal Consultant 
ElectroOptical Innovations LLC 
Optics, Electro-optics, Photonics, Analog Electronics 

160 North State Road #203 
Briarcliff Manor NY 10510 

hobbs at electrooptical dot net 
http://electrooptical.net
Reply to
Phil Hobbs

An ADC card with Windows or Linux seems to be the best bet. I wonder what the best ADC card is. I only envision 10 MB/sec input, max. That wouldn't preclude a RAM disk, either. Any takers? j

Reply to
haiticare2011

Any way you slice it, you eventually run into disk I/O speed limitations. If that is not fast enough for your acquisition speed, you have limited record length - mighty long, but still limited.

?-)

Reply to
josephkk

Ultimately, if you need to stream to disk then you are going to be limited by disk speed. There are a few things you can do to improve the disk speed:

  • Try to minimise non-data writes
  • Use appropriate RAID
  • Use SSD rather than HD (at higher costs)
  • Use PCI SSD rather than SATA SSD (at even higher cost)
  • Use PCI RAM disks rather than flash (at absurd cost)
  • Reduce the data to be written.

If you do things sensibly, then the OS will use DMA anyway.

First, however, you need to find out how fast your data comes in - if your ADCs are connected by USB2 at 60 MB/s max, then you don't need to store faster than that.

Then you need to find out how much data you want to store. If it is less than fits in main ram, then there will be no problem with disk speed as long as you avoid syncing to disk.

Start with a Linux system. That will be minimal cost, and faster than similar hardware running almost anything else. Set your disks up using Linux software RAID 0, which will be the fastest setup for most purposes (if you have multiple data streams going to multiple files, you should consider the XFS filesystem on a linear concatenation instead of RAID 0).

If you want the lowest overhead, you can set up your system with a tmpfs filesystem for storage. Enable lots of swap space (enough to store all your data), and make your tmpfs big enough to hold the data. Then as you write to the tmpfs files (at full RAM speed), old data gets pushed out to swap - you are still limited by disk speed in the long run, but there are no overheads for things like logs, journals, inode tables, directories, etc. Of course, there is no recovery of the data if you get a crash or power cut!
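
As a very rough sketch (the mount point, block size and ReadADC() are just placeholders), writing into a file on the tmpfs is ordinary C file I/O - the data sits in RAM until the kernel pushes old pages out to swap:

#include <stdint.h>
#include <stdio.h>

extern uint16_t ReadADC(void);                   /* hypothetical ADC read */

int main(void)
{
    /* "/mnt/capture" is assumed to be the tmpfs mount point */
    FILE *f = fopen("/mnt/capture/run1.bin", "wb");
    uint16_t block[4096];

    for (;;) {
        for (size_t i = 0; i < 4096; i++)
            block[i] = ReadADC();
        fwrite(block, sizeof block[0], 4096, f); /* no fsync(); old pages drift out to swap */
    }
}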

As for reducing the data to write, consider using a round-robin database that compresses old data.

Reply to
David Brown

Yes, thanks everyone. I asked a DMA expert, and like a good advisor, he told me to use a plug-in card on a PC. The most I got out of my research was that a RAM disk can easily be set up in DDR RAM. The beauty of that is you can open a file in C code and just write data to it at will. I don't know if the ADC cards have a way to go to disk without incurring a time penalty - I don't think that's possible. But I'm looking for recommendations on an ADC card.

The only other item on the wish list is to somehow add a memory cape plus DMA to a BeagleBone or Raspberry Pi. That would be really nice. jb

Reply to
haiticare2011

Hmmph. Is that Mb/s or MB/s? That is not so much - how much disk do you have? Nothing fancy needed.

?-)

Reply to
josephkk

I agree. And the first choice for a fast buffer store these days is a good SSD rather than a physical spinning disk. Two (or four) set up as RAID0 will get you a bit more speed and capacity more economically.

It all depends how much data you want to drown in...

--
Regards, 
Martin Brown
Reply to
Martin Brown

10 Mbit/s is really nothing; a USB TV tuner connected to an old laptop can easily store the whole transport stream at 22 Mbit/s (including 4-5 SD channels).

A three-year-old STB can simultaneously record 4 HD or 8 SD channels while playing one SD/HD recording, so this is close to 10 MByte/s of total disk loading.

Reply to
upsidedown

Practically every operating system with virtual memory will also support memory-mapped files.

If you create a huge array in memory (e.g. in C) that does not fit into physical memory all at once, the pages that have not been accessed for a while will be paged out to the page file. This mechanism has been very well tuned, since its performance is critical for virtual memory handling in the OS.

Memory-mapped files are just a mechanism to tell the OS that another file (rather than the page file) should be used as the backing store for a _specific_ memory array.

For files to be read, just associate the file with a specific array; when you execute a statement referencing a specific array location, the associated page is loaded into physical memory. A statement like

OneElem = BigArray[123456789] ;

will do the trick. No special read statements are needed.

For writing, the most effective way is to preallocate the file(s) that will receive the values, preferably as contiguous disk space (run defrag before creating the files). Associate that file with some big array, and in your program you could write e.g.

BigArray[123456789] = ReadADC () ;

or

for (size_t i = 0 ; i < 100000000000ULL ; i++ ) BigArray[ i ] = ReadADC () ;

When all physical memory pages have been used, the OS will page out the written pages - not to the page file, as with normal big arrays, but to the file you specified in the association. When you close the association at the end of the program, even the last data is written out to disk. Typically, you can also explicitly flush pages to the file, e.g. in case of power failure or for file-sharing reasons.

The paging subsystem is very effective; there is not much point in trying to invent your own.

While 100 MB to 1 GB files might be handled with 32-bit operating systems, if you want one big 1 TB file or ten smaller 100 GB storage files, you are going to need a 64-bit operating system.

Memory-mapped files are available at least on VAX/VMS, OpenVMS, Linux and Windows (NT 3.51, NT4, Windows 2000, XP, etc.), and I expect on many more modern operating systems.
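
On Linux, for example, the association is made with mmap(); a minimal sketch of the write case (the file name, sizes and ReadADC() are only placeholders, and error checks are omitted):

#include <fcntl.h>
#include <stdint.h>
#include <sys/mman.h>
#include <unistd.h>

extern uint16_t ReadADC(void);                   /* hypothetical ADC read */

int main(void)
{
    size_t nsamples = 100000000ULL;              /* placeholder: room for 100 M samples */
    size_t bytes = nsamples * sizeof(uint16_t);

    int fd = open("bigfile.bin", O_RDWR | O_CREAT, 0644);
    ftruncate(fd, bytes);                        /* preallocate the backing file */

    /* BigArray is now backed by bigfile.bin instead of the page file */
    uint16_t *BigArray = mmap(NULL, bytes, PROT_READ | PROT_WRITE,
                              MAP_SHARED, fd, 0);

    for (size_t i = 0; i < nsamples; i++)
        BigArray[i] = ReadADC();                 /* plain assignments; the OS pages out behind you */

    munmap(BigArray, bytes);                     /* closing the association flushes the last data */
    close(fd);
    return 0;
}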

Reply to
upsidedown

At that data rate, a cheap 1 TB disk will be filled in 5 hours. If you are doing a 24-hour recording, you will need five of these, so why not put them in parallel in some RAID configuration?

If the data does not constantly include full-scale data changes, even simple delta coding will easily reduce the data rate to one half.
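
A sketch of the idea in C (one-byte deltas with an escape code for the rare big steps; the exact format here is only an illustration):

#include <stdint.h>
#include <stdio.h>

/* Write a 16-bit sample as a 1-byte delta when the step from the previous
   sample is small, otherwise as an escape byte followed by the full value. */
static void put_sample(FILE *f, uint16_t sample, uint16_t *prev)
{
    int diff = (int)sample - (int)*prev;
    if (diff >= -127 && diff <= 127) {
        int8_t d = (int8_t)diff;                 /* 1 byte instead of 2 */
        fwrite(&d, 1, 1, f);
    } else {
        uint8_t esc = 0x80;                      /* -128 reserved as the escape code */
        fwrite(&esc, 1, 1, f);
        fwrite(&sample, sizeof sample, 1, f);    /* full 16-bit sample follows */
    }
    *prev = sample;
}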

Reply to
upsidedown

Interesting C code. Any doc you like on this? OK - thanks again. I have been making a mountain out of a molehill, since 10 megabytes per second can be handled by physical drives? Someone also mentioned an SSD - and I wonder if that could be spelled out in more detail. I would like some SSD setup for the Beagle. These cards (R Pi, etc.) have USB 2.0. Would that be fast enough? In the long run, a DDR RAM memory might make sense, as it's as fast as you can get at the retail level, and costs keep going down, down, down. The gamers seem to be driving that market.

Reply to
haiticare2011

Start with

formatting link

I am not so sure about the statements in the "Drawbacks" section. The second section is irrelevant for any 64 bit OS.

One warning about using memory-mapped files on 32-bit Windows systems: by default, the DLLs are loaded at assorted addresses all around the 2 (or 3) GiB user address space, and it can be hard to find a _contiguous_ free virtual address window much larger than 100 MiB for your big array (file).

If you are really going to need high speed (which 10 Mbyte/s definitely is not) and you are prepared to design your own hardware, dynamic memories are capable of quite high _sequential_ read/write speeds (10 ns per 8 bytes) by using RAS/CAS addressing (open one page and do a large number of operations before closing the page and moving to the next DRAM page).

Any random access operations are of course considerably slower, due to the full RAS/CAS sequence required.

Reply to
upsidedown

mmap(2)

"solid state drive" a drive where the only moving parts are electrons.

--
umop apisdn 


Reply to
Jasen Betts

Ha ha. OK. There are several types: SD, DDR on the CPU memory bus, cards on the PCI bus, and memory on a USB serial port.

USB thumb drives are $1 a GB - unthinkable a few years ago. DDR RAM is now $10 a GB. But it's all beach sand, right? So that route seems clear for the PC.

But how to add memory to a R Pi or Beagle?

Thanks jb

"The entire printed record of civilization up to 1970 is now generated, in terms of size in bytes, every day."

Reply to
haiticare2011

That kind of development is beyond me. I am a follower in this area. In the meantime I encourage gamers, since they use ramdisks and drive down the cost.

128 GB of DDR RAM would be very good (now about $1000).

Any idea how to add reasonably fast memory to a R Pi or Beagle?

Reply to
haiticare2011

I was thinking of the 2.5" wide type with a SATA interface.

PoP (package-on-package) RAM is hard to upgrade.

--
umop apisdn 


Reply to
Jasen Betts
