Loop bandwidth of PLL run through USB?

You can buffer and regurgitate the data with microsecond timing, so long as the overall throughput is good enough and your buffer and allowable latency are sufficient.
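Roughly, the buffering idea looks like this (a minimal single-producer/single-consumer sketch; the 4096-sample depth is just an assumption, size it to cover your worst-case latency):

#include <stdint.h>

#define RB_SIZE 4096                        /* must be a power of two */

typedef struct {
    uint16_t buf[RB_SIZE];
    volatile uint32_t head, tail;           /* head: producer, tail: consumer */
} ringbuf_t;

/* USB side drops samples in as they (jitteringly) arrive... */
static int rb_put(ringbuf_t *rb, uint16_t sample)
{
    uint32_t next = (rb->head + 1) & (RB_SIZE - 1);
    if (next == rb->tail)
        return 0;                           /* full: latency budget blown */
    rb->buf[rb->head] = sample;
    rb->head = next;
    return 1;
}

/* ...while the consumer drains them on a steady clock. */
static int rb_get(ringbuf_t *rb, uint16_t *sample)
{
    if (rb->tail == rb->head)
        return 0;                           /* empty: underrun */
    *sample = rb->buf[rb->tail];
    rb->tail = (rb->tail + 1) & (RB_SIZE - 1);
    return 1;
}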

If you try to compensate for the inevitable jitter and latency (and the occasional AWOL packet), it might be easier to close the loop without going through the PC at all, using an ARM or DSP.

Best regards, Spehro Pefhany

--
"it's the network..."                          "The Journey is the reward" 
speff@interlog.com             Info for manufacturers: http://www.trexon.com 
Embedded software/hardware/analog  Info for designers:  http://www.speff.com
Reply to
Spehro Pefhany

The USB stack is different in Win7 vs. XP. Win7 adheres to the standard, while XP did not 100%.

Cheers

Reply to
Martin Riddle

That may be. But my (few) impressions of USB performance showed that Win7 has issues. Other people put it more drastically:

formatting link

Quote "Windows XP was dramatically better in general (on my main computer for example, WinXP+SP3 is measurably up to 25 times lower latency than win7/64 on the same hardware, and equivalent device drivers, suggesting there is something fundamentally very very wrong in with Vista and Windows 7 driver architecture)".

If XP was the operating system of the future I'd be less concerned. But unfortunately it isn't.

--
Regards, Joerg 

http://www.analogconsultants.com/
Reply to
Joerg

How about some embedded PCs? There are Windows versions for them that are more suitable for this kind of work.

Reply to
LM

ARM? DSP? Ah, but you forget, just TLAs to Joerg. :)

Tim

--
Deep Friar: a very philosophical monk. 
Website: http://seventransistorlabs.com
Reply to
Tim Williams

Because you are wrong. In order to play music and video, Windows has real-time abilities. Look at the multimedia timer, which will produce accurate 1 ms intervals. Although that doesn't keep you from overloading the CPU...
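Roughly, the pattern is as follows (just a sketch using the winmm API; the empty callback and the 10 s Sleep are placeholders):

#include <windows.h>
#include <mmsystem.h>
#pragma comment(lib, "winmm.lib")

/* Sketch only: a periodic 1 ms callback via the winmm multimedia timer. */
static void CALLBACK tick(UINT id, UINT msg, DWORD_PTR user,
                          DWORD_PTR r1, DWORD_PTR r2)
{
    /* called roughly every 1 ms -- keep the work here short */
}

int main(void)
{
    timeBeginPeriod(1);                          /* request 1 ms resolution */
    MMRESULT t = timeSetEvent(1, 0, tick, 0,
                              TIME_PERIODIC | TIME_CALLBACK_FUNCTION);
    Sleep(10000);                                /* placeholder: run 10 s   */
    timeKillEvent(t);
    timeEndPeriod(1);
    return 0;
}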

--
Failure does not prove something is impossible, failure simply 
indicates you are not using the right tools... 
nico@nctdevpuntnl (punt=.) 
--------------------------------------------------------------
Reply to
Nico Coesel

Have you called Microsoft support yet ?:-}

Get real. Relying on anything Wimpows is asking for trouble. ...Jim Thompson

--
| James E.Thompson                                 |    mens     | 
| Analog Innovations                               |     et      | 
| Analog/Mixed-Signal ASIC's and Discrete Systems  |    manus    | 
| San Tan Valley, AZ 85142   Skype: Contacts Only  |             | 
| Voice:(480)460-2350  Fax: Available upon request |  Brass Rat  | 
| E-mail Icon at http://www.analog-innovations.com |    1962     | 
              
I love to cook with wine.     Sometimes I even put it in the food.
Reply to
Jim Thompson

This system has to work with a stationary, regular-size Windows PC, mainly because a lot of other stuff needs to be available on it as well.

--
Regards, Joerg 

http://www.analogconsultants.com/
Reply to
Joerg

Not really. In the 90's we built a system around NT4.0, but only after having an agreement with MS that we could re-write some of the stuff for better robustness. In the beginning I could crash the machines in less than two minutes. Later I couldn't at all; it was almost as robust as a QNX system we had also built.

--
Regards, Joerg 

http://www.analogconsultants.com/
Reply to
Joerg

Depends on who you get to work with QNX. There are alleged computer programmers who believe that just using it will make their system robust and fast, and who don't know how to adjust scheduler granularity or use a WDT properly. But it does do less of the objectionable stuff than Windows does, even on PC hardware.

Best regards, Spehro Pefhany

--
"it's the network..."                          "The Journey is the reward" 
speff@interlog.com             Info for manufacturers: http://www.trexon.com 
Embedded software/hardware/analog  Info for designers:  http://www.speff.com
Reply to
Spehro Pefhany

You can have seconds of latency if you do something stupid like using SetTimer, which posts a message to the message queue; the timer message is only retrieved after other messages have been processed. In the lower process priority classes, the actual priority of a process may even be varied, making the timing even more unpredictable.

Using real-time process priority with a high thread priority, and using the timeout parameter in the event wait APIs with multimedia timers enabled, you get quite close to 1 ms with more than 99% confidence. Of course, there is not much real work you should do at, say, priority 31, but it is quite OK to read a peripheral device, copy a byte to local memory and, if this is the end of a message, signal an event to a lower-priority thread that actually does the job.
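In code that pattern is roughly the following (a sketch; hDeviceEvent stands in for whatever event your driver or USB library signals):

#include <windows.h>
#pragma comment(lib, "winmm.lib")

/* Sketch of the high-priority polling thread described above. */
void rt_poll_loop(HANDLE hDeviceEvent)
{
    SetPriorityClass(GetCurrentProcess(), REALTIME_PRIORITY_CLASS);
    SetThreadPriority(GetCurrentThread(), THREAD_PRIORITY_TIME_CRITICAL);
    timeBeginPeriod(1);                  /* 1 ms timer/scheduler resolution */

    for (;;) {
        DWORD rc = WaitForSingleObject(hDeviceEvent, 1 /* ms */);
        if (rc == WAIT_OBJECT_0) {
            /* read the peripheral, copy the byte to local memory, and if it
             * ends a message, SetEvent() a lower-priority worker thread */
        } else if (rc == WAIT_TIMEOUT) {
            /* ~1 ms tick; do only minimal work at this priority */
        }
    }
}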

Unfortunately, the priority of various Windows components has risen in recent Windows versions, so there are even fewer usable real-time priorities left for real work, compared to less important actions such as updating the cursor position on screen, which now seems to run at quite a high priority.

Of course, in any virtual memory system, unless you can lock _all_ your pages (code, data, stack, etc.) into the working set (VirtualLock in Windows) and prevent the working set from being swapped out, there is always the possibility that there will be page faults, which can take a long time to handle. In the worst case, the missing page may have to be loaded from disk, either from the page file or from the original code file.
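The locking part looks roughly like this (a sketch; the 64 KB buffer and the working-set sizes are made-up numbers, and only this one buffer is pinned rather than the whole process):

#include <windows.h>
#include <stdio.h>

int main(void)
{
    static char rt_buffer[64 * 1024];       /* data touched by the RT path */

    /* Grow the working-set quota first, otherwise VirtualLock may fail.   */
    if (!SetProcessWorkingSetSize(GetCurrentProcess(),
                                  4 * 1024 * 1024, 16 * 1024 * 1024)) {
        printf("SetProcessWorkingSetSize failed: %lu\n", GetLastError());
        return 1;
    }
    if (!VirtualLock(rt_buffer, sizeof rt_buffer)) {
        printf("VirtualLock failed: %lu\n", GetLastError());
        return 1;
    }
    /* ... time-critical work that must never page-fault ... */
    VirtualUnlock(rt_buffer, sizeof rt_buffer);
    return 0;
}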

In addition, Windows has a stupid method of selecting pages to be paged out when free memory drops low. It selects pages more or less at random, freeing pure (unmodified) pages, which then have to be reloaded from the program image the next time they are needed, and putting dirty (modified) pages into a queue to be written to the page file. If a page is referenced again while still in the queue, it is returned to the process working set and removed from the queue. Apparently this strategy was chosen because in the old days WinNT supported processors that did not have decent LRU page-replacement support in hardware.

Reply to
upsidedown

In NT3.51 most display processing was in user mode, while only the lowest hardware-specific operations were in kernel mode, so a lot of user/kernel/user mode switching was required, slowing down display updates. In NT4.0, most of the higher-level display processing was moved to kernel mode.

In NT3.51, if you passed an illegal parameter to a display function, such as a NULL pointer where a real pointer was required, at worst only the offending process would fail.

In unpatched NT4.0, passing a NULL pointer to the kernel-mode code could cause a page fault when the pointer was accessed in kernel mode. If you get a page fault on a forbidden (non-mappable) page in _kernel_ mode, there is not much to do other than initiate the Blue Screen Of Death (BSOD) to avoid any further damage.

For decades prior to NT4, kernel-mode API writers on any operating system with memory protection have known that the first thing to do is check parameter validity before doing any actual processing. In particular, this means verifying that a user-supplied address can actually be accessed from user mode. Without such checks, user-mode code could get the unrestricted kernel-mode code to write to the user's own read-only areas, or to read/write arbitrary kernel-mode locations.
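In WDK-style code the check looks roughly like this (a sketch; UserBuf, Len and KernelBuf stand in for whatever an IOCTL handler receives):

#include <ntddk.h>

/* Sketch: probe the user-supplied buffer before the kernel touches it, so
 * a bad address fails the call instead of faulting in kernel mode. */
NTSTATUS CopyFromUser(PVOID UserBuf, SIZE_T Len, PVOID KernelBuf)
{
    __try {
        ProbeForRead(UserBuf, Len, sizeof(UCHAR));  /* raises on bad address */
        RtlCopyMemory(KernelBuf, UserBuf, Len);
    } __except (EXCEPTION_EXECUTE_HANDLER) {
        return STATUS_INVALID_PARAMETER;            /* fail the call, no BSOD */
    }
    return STATUS_SUCCESS;
}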

In later NT4 versions, such parameter checks were added and the system became usable. IMHO, only with Win2000 and XP did Windows become as reliable as NT3.51.

Reply to
upsidedown

We had to heavily patch NT4.0 before it would reach reasonable robustness. I really like XP but unfortunately Microsoft has migrated to Win 7. And I have seen some USB issues there that I didn't see in XP.

--
Regards, Joerg 

http://www.analogconsultants.com/
Reply to
Joerg

Can it be a Linux OS?

You can get Labwindows Real Time for Linux.....

Regards

Klaus

Reply to
Klaus Kragelund

Not in this case. There is going to be a whole lot of other software that must run on the machine, and much of that won't come in a Linux flavor. A virtual machine can be iffy in the hands of end users, some of whom have no clue about computers.

--
Regards, Joerg 

http://www.analogconsultants.com/
Reply to
Joerg

You can stay on XP permanently. Use Ubuntu 10.04 as your host. Load VirtualBox 3.2.12 and 4.0.4. Use 3.2 to create your vdi files. This puts them all in one folder so you can back up easily. But 3.2 loads slower than 4.0, so use 4.0 for normal operation. Use the ext4 filesystem. They have got all the bugs out and it is a far superior filesystem to the others.

Use a router so you can kill the firewall in Ubuntu. This will put your vdi files behind 2 NAT translations, so good luck to anyone trying to break in through tcp. It also allows you to use Samba freely so you can copy the vdi files to a backup computer over the LAN.

In Windows, kill Automatic Updates, Restore Point, the cache, firewall, and all the other Microsoft bloat that you won't need.

Kill all the AV programs. You won't need them. Instead, use the System File Checker program from Win98. This is a very different program from the one with the same name in later Windows versions. It allows you to list all the filetypes and folders you want to check, and makes a record of the date, filesize and CRC32 of all the important files. You can then tell instantly if a file has been added, changed, or removed. Use RootkitRevealer to check for malware that hides its files:

formatting link

You can easily pack the main XP system into a gig or so. If you set the vdi file to 3 GB, you can copy it to a backup in a few seconds. This file can be transferred to another computer so you have a byte-identical copy on your backup. Since the transfer is so fast, you can easily afford to make backup copies often. Since the main file is so small, you can make lots of them.

I put most of the datasheets and working files in two other vdi files and link them to the main XP.vdi. These won't need backing up so often so they can be as large as you wish.

Now for a very important point. You can make other vdi files for different purposes. For example, you can have a full installation of Win7 for checking compatibility. However, Win7 blocks access to some folders, so you cannot use SFC to check for malware in those folders. Win7 also has 6 GB of WINSXS folders so you need a much larger vdi which takes longer to back up.

Most important, you can have a separate vdi for all your banking and financial transactions. This vdi lacks all the communication facilities such as Windows LAN, email, usb, so it has no entry point for phishing and other malware. It has no Flash or other programs that can be compromised. It only goes to your banking sites and is never used for general browsing so it cannot be compromised. It is isolated from the other vdi's, so if one gets compromised, it will have no effect.

Use a password manager to keep the logon information secure. I prefer the Sticky Password manager and find it the best.

If your main vdi is compromised and you don't happen to notice right away, it won't do them any good. None of the keystrokes in one vdi can be detected in another, so a keystroke logger won't be able to get your logon info.

So now you can do your online purchasing with little fear of someone in Russia getting your logon credentials and wiping out your accounts.

Don't try to install .NET. It rewrites the boot area of your hard disk and wipes out Ubuntu. You will have to reinstall Ubuntu. However, if you have partitioned the disk and put your Home folder on its own partition, reinstalling Ubuntu will not wipe out your critical files.

It is often very convenient to have Linux and Windows operating simultaneously. There are some programs in Ubuntu that are very useful and not available in Windows. So you get the best of both worlds.

Now you are independent of Microsoft and have no worries about Win8 and whatever follow-on idiocies they come up with. You can make as many vdi files as you wish, so you can have different Windows versions for testing. You have a far superior means of detecting malware, so if you happen to get nailed, you can easily revert to a backup copy. You also have a quick way of recovering from a bad or unwanted installation, or a simple finger fumble that happened to wipe out some important folder.

You have byte-identical copies of all your files, so if your main computer dies, you can immediately switch to your backup and continue working while you get the main one back online.

About the only downside I have found is that you can only run one core in the vdi. VirtualBox is supposed to handle multiple cores, but I have not figured out a way to do it. However, if you need speed, you can easily make another vdi for compute-intensive work. It will then borrow a core from Ubuntu and have it all to itself when it is running.

One minor glitch: I was having problems with Flash stuttering during playback and causing a mysterious slowdown of the entire computer. It also caused unexpected crashes that exited VBox and dropped all the way back to Ubuntu.

I finally traced it to the recent Flash versions. They apparently do something needed for Win7, but that kills you when running in XP. I installed Flash Player 10.2.159.1, the last of the 10.2 series. This completely fixed all the problems I was having.

Good Luck!

JK

Reply to
John K

Use a microprocessor that talks to the Windows machine via USB, and implements the PLL?

--

Tim Wescott 
Wescott Design Services 
http://www.wescottdesign.com
Reply to
Tim Wescott

Yeah, I've looked at the Cypress PSoC for that. The bigger ones can do it without redlining. Probably some others as well. But I'd need someone to do the uC coding; that's really not my turf.

Another option would be to use the 4046, as Whit3rd mentioned. But its output would need to be heavily filtered to make a nice sine, or followed by a DDS module with the 4046 acting as the reference. Also, the XOR phase detector results in a frequency-dependent phase error, which I can't have. So I'd have to plop a PID between that and the VCO to get rid of the phase error. Possible, but it gets busy. However, someone wrote that one don't need no Ph.D. for a PID :-)
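For what it's worth, the digital version of that PID is only a few lines; a sketch with placeholder gains (err would be the averaged phase-detector output, and the return value steers the VCO or the DDS tuning word):

/* Sketch of a PID loop filter; kp/ki/kd are placeholders to be tuned.
 * The integrator term is what removes the static, frequency-dependent
 * phase error of the plain XOR detector. */
typedef struct {
    float kp, ki, kd;
    float integ;
    float prev_err;
} pidctl_t;

float pid_update(pidctl_t *p, float err, float dt)
{
    p->integ += err * dt;
    float deriv = (err - p->prev_err) / dt;
    p->prev_err = err;
    return p->kp * err + p->ki * p->integ + p->kd * deriv;
}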

--
Regards, Joerg 

http://www.analogconsultants.com/
Reply to
Joerg

Getting a clean all-analog solution certainly seems harder than a clean solution built around a micro. It sounds like you need a fairly clean sine wave, though, which means that any solution with a microprocessor in the middle would have to have a nice DAC, and probably filtering and buffering.

Lessee -- FTDI chip, 4046, op-amp, DDS and filter or honkin' big filter

-- eww, it's getting ugly, isn't it?

It's too bad there aren't any really clean ways to build a sine-wave oscillator at that frequency. Then it'd be FTDI chip, 1-gate XOR, loop filter, Wien-bridge oscillator, done...

--

Tim Wescott 
Wescott Design Services 
http://www.wescottdesign.com
Reply to
Tim Wescott

The uC would be breaking a sweat. In order to obtain a clean sine wave at 10 kHz it has to either pump data into a port driving an R-2R ladder at more than 1 MHz, or pump the data into a DAC at the same clip. The data could either come from a 1/4-cycle LUT (which gets pretty big) or be calculated on the fly (even more sweat beads). Then it has to handle all the loop-response stuff, some USB, and some housekeeping.
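For reference, the 1/4-cycle LUT approach is roughly the following (a sketch; 256 table entries and a 32-bit phase accumulator are assumptions, and the tuning word would come from the loop). At a 1 MHz update rate, 10 kHz out needs tuning of about 0.01 * 2^32, roughly 43 million.

#include <stdint.h>
#include <math.h>

#define PI         3.14159265358979
#define QTAB_BITS  8                        /* 256-entry quarter-wave table */
#define QTAB_SIZE  (1u << QTAB_BITS)

static int16_t qsine[QTAB_SIZE];            /* first quadrant of sin(), Q15 */
static uint32_t phase;                      /* 32-bit phase accumulator     */

static void dds_init(void)
{
    for (uint32_t i = 0; i < QTAB_SIZE; i++)
        qsine[i] = (int16_t)(32767.0 * sin(i * PI / (2.0 * QTAB_SIZE)));
}

/* Output frequency = tuning * Fupdate / 2^32. Call once per DAC update. */
static int16_t dds_step(uint32_t tuning)
{
    phase += tuning;
    uint32_t quadrant = phase >> 30;                       /* top 2 bits    */
    uint32_t index    = (phase >> (30 - QTAB_BITS)) & (QTAB_SIZE - 1);

    switch (quadrant) {                     /* unfold the quarter wave      */
    case 0:  return  qsine[index];
    case 1:  return  qsine[QTAB_SIZE - 1 - index];
    case 2:  return -qsine[index];
    default: return -qsine[QTAB_SIZE - 1 - index];
    }
}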

PWM-ing it could be a stretch at 10kHz.

In SMT, not really. And it gets me out of having to program a uC, although I can certainly find someone to do that.

Plus an opamp for the PID. Can't do it with XOR alone because that results in a frequency-dependent phase error.

It's a pity that all the sinewave oscillators such as XR2206 and ICL8038 have been obsoleted. Those could easily be FM-modulated, something that is a real hassle with DDS chips.

--
Regards, Joerg 

http://www.analogconsultants.com/
Reply to
Joerg
