"Emulate" USB Host

Hi there,

I'm quite sure it's not possible but still I'll ask. :-)

If I'm only interested in connecting a USB keyboard to my embedded unit, could I in some way emulate the host controller with two generic I/O pins and some firmware?

Will there be speed problems? (if the protocol is asynchronous)

No hubs or other stuff will need to be supported. Just the keyboard class (HID?)

Thanx

Reply to
Yxan

At the least, you'd need some additional transceiver hardware, I think. The data lines on USB aren't exactly 5V TTL compatible, IIRC.

Other than that, sure, it's possible to do that. Hypothetically. After all, dedicated USB host controllers manage to do it, and whatever it is they do, it can quite certainly be described as "a couple of pins and some firmware" (using some poetic license for the upper protocol layers, which aren't handled by the host controller).

But you almost certainly don't want to do that. Trying to bit-bang a protocol at 12 Mb/s is going to be well beyond tricky.
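To give a flavour of just the lowest layer: the wire uses NRZI encoding with bit stuffing (a forced transition after six consecutive ones), which a bit-banged host would have to generate and decode at line rate, on top of driving the differential pair with the right timing. A rough sketch of the encoding step alone, simplified to a single logic level rather than the real J/K differential states:

#include <stdint.h>
#include <stddef.h>

/* NRZI-encode a payload with USB bit stuffing: a '0' bit toggles the line,
 * a '1' bit leaves it alone, and after six consecutive '1's a stuffed '0'
 * (i.e. a forced toggle) is inserted.  'out' receives one line level (0/1)
 * per transmitted bit; the return value is the number of line bits produced. */
size_t usb_nrzi_encode(const uint8_t *data, size_t nbytes, uint8_t *out)
{
    int level = 1;      /* idle state, simplified to a single logic level */
    int ones = 0;
    size_t n = 0;

    for (size_t i = 0; i < nbytes; i++) {
        for (int b = 0; b < 8; b++) {          /* bits go out LSB first */
            int bit = (data[i] >> b) & 1;
            if (bit) {
                ones++;
            } else {
                level ^= 1;                    /* '0' => transition */
                ones = 0;
            }
            out[n++] = (uint8_t)level;
            if (ones == 6) {                   /* bit stuffing: force a toggle */
                level ^= 1;
                ones = 0;
                out[n++] = (uint8_t)level;
            }
        }
    }
    return n;
}

And that's just shaping the bits; the firmware still has to clock them out at 12 MHz (or 1.5 MHz low speed) without jitter, which is where pure GPIO bit-banging falls apart.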

And that's before you even start working on the higher levels of the protocol stack.

You're trying to do exactly what an entire division of a major company was explicitly told to go out of their way to make hard for you to do. And guess what, they did it --- it's friggin' hard.

--
Hans-Bernhard Broeker (broeker@physik.rwth-aachen.de)
Even if all the snow were burnt, ashes would remain.
Reply to
Hans-Bernhard Broeker

Could you elaborate on this a bit? I would like to boycott whoever it was that invented this spec from hell.

Bob

Reply to
Bob Stephens

I don't want to say it's impossible, but it would be a _huge_ task. You still have to enumerate the device, and support a 1.5 Mbit/s serial interface. If you want to support HID, you'll have to parse the descriptors coming back from the keyboard during enumeration to decode the received data (unless you intend to support only one particular keyboard).
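To give an idea of what "decoding the received data" involves even in the easiest case: keyboards have to support the HID boot protocol (so dumb hosts like a BIOS can skip report-descriptor parsing), and in that mode the interrupt IN report is a fixed 8-byte layout. A minimal sketch, assuming you've already pulled the raw report bytes off the bus somehow:

#include <stdint.h>
#include <stdio.h>

/* HID boot-protocol keyboard report: fixed 8-byte layout, so no
 * report-descriptor parsing is needed to interpret it.           */
struct boot_kbd_report {
    uint8_t modifiers;   /* bit 0 = LCtrl, 1 = LShift, 2 = LAlt, 3 = LGUI, 4..7 = right-hand equivalents */
    uint8_t reserved;
    uint8_t keys[6];     /* up to six simultaneous key usage codes, 0 = none */
};

/* Translate a usage code to ASCII for the easy cases (letters only).
 * Real code would use a full usage table and honour the shift state. */
static char usage_to_ascii(uint8_t usage)
{
    if (usage >= 0x04 && usage <= 0x1D)   /* usages 4..29 are 'a'..'z' */
        return (char)('a' + (usage - 0x04));
    return 0;
}

static void handle_report(const uint8_t raw[8])
{
    const struct boot_kbd_report *r = (const struct boot_kbd_report *)raw;
    for (int i = 0; i < 6; i++) {
        char c = usage_to_ascii(r->keys[i]);
        if (c)
            printf("key down: %c (modifiers 0x%02X)\n", c, r->modifiers);
    }
}

int main(void)
{
    uint8_t report[8] = { 0x02, 0, 0x04, 0, 0, 0, 0, 0 };  /* LShift + 'a' */
    handle_report(report);
    return 0;
}

Getting those 8 bytes off the wire in the first place (tokens, handshaking, CRC checking, timing) is where the huge task lives.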

USB is designed to offload complexity from peripherals (like keyboards) to the host. ISTR someone putting together some code to bit-bang a USB peripheral (perhaps a keyboard, but probably a custom peripheral) but I've never seen anyone try it with a host.

It would be many orders of magnitude simpler to support a standard PS/2 keyboard. You can probably even find code samples on the net that do 99% of the job for you.
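For what it's worth, here's roughly what the host side of PS/2 reception boils down to: the keyboard drives both clock and data, and the host samples data on each clock edge to collect an 11-bit frame (start bit, 8 data bits LSB first, odd parity, stop bit). The GPIO accessors below are placeholders for whatever your part provides, and a real implementation would use an edge interrupt instead of busy-waiting:

#include <stdint.h>

/* Hypothetical GPIO accessors -- replace with your part's register reads. */
extern int ps2_clock(void);   /* current level of the PS/2 clock line */
extern int ps2_data(void);    /* current level of the PS/2 data line  */

static void wait_clock_low(void)  { while (ps2_clock()) {} }
static void wait_clock_high(void) { while (!ps2_clock()) {} }

/* Receive one PS/2 frame: start(0), 8 data bits LSB-first, odd parity, stop(1).
 * Returns the scan code, or -1 on a framing/parity error. */
int ps2_get_byte(void)
{
    uint8_t data = 0;
    int parity = 0;

    wait_clock_low();                  /* start bit (should be 0) */
    if (ps2_data()) return -1;
    wait_clock_high();

    for (int i = 0; i < 8; i++) {      /* 8 data bits, LSB first */
        wait_clock_low();
        int bit = ps2_data() ? 1 : 0;
        data |= (uint8_t)(bit << i);
        parity ^= bit;
        wait_clock_high();
    }

    wait_clock_low();                  /* parity bit: total ones must be odd */
    parity ^= ps2_data() ? 1 : 0;
    wait_clock_high();

    wait_clock_low();                  /* stop bit (should be 1) */
    int stop = ps2_data();
    wait_clock_high();

    return (parity == 1 && stop) ? data : -1;
}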

Disclaimer: I haven't done anything with USB OTG (On The Go), which may make things somewhat simpler (but I doubt it).

Regards,

-=Dave

--
Change is inevitable, progress is not.
Reply to
Dave Hansen

Unless I misremember badly, it's Intel. Which makes boycotting them rather harder than one might like.

I've said this before, but I'll say this again: the only assumption about motives that can really explain why USB was designed the way it was designed, given existing technology at the time (particularly IEEE 1394, a.k.a. FireWire), is that it was predominantly a marketing scheme to be able to keep selling powerful personal computers. It was a strategic move against the replacement of PCs by pervasive computing.

As a side effect, it will sooner or later kill off the ordinary PCs' usability for applications typical to this very newsgroup. I suspect there are entire "computer stores" already without a single serial or parallel port to be found on the premises --- well, possibly excepting the debug ports of the cash register and soda vending machine.

To put it bluntly: there was *nothing* wrong with the PC keyboard interface that it would have taken something like USB to fix. There were some extensibility issues with mouse protocols (not with the port itself, mind you!), granted, but even for those, USB is serious overkill. I don't see any plausible reason a human interface device controlled by a person's hand could possibly need a 12 Mb/s data link to the computer.

--
Hans-Bernhard Broeker (broeker@physik.rwth-aachen.de)
Even if all the snow were burnt, ashes would remain.
Reply to
Hans-Bernhard Broeker

I'm not certain what Hans-Bernhard is referring to, but the statement might not be completely fair.

IIRC, Microsoft wanted "new" computers to lose some legacy hardware, specifically keyboard, mouse, serial, and printer ports as well as the ISA bus. They became one of the driving forces behind USB, along with Intel, NEC, HP, Agere, Philips, and Compaq (IIRC).

The goals of USB included replacing the aforementioned ports and bus, as well as providing true PnP so there would be no fiddling with jumpers and no consumption of system resources (specifically Interrupts and I/O ports). Furthermore, it was decided that the peripheral side of the interface should be as simple as possible, so as not to impact the cost of peripherals too much.

Writing code for the peripheral side of the interface isn't a whole lot more complex than writing code to talk to a UART. Writing the host side drivers is, however.

Ever since computers have started shipping with USB ports, many embedded systems programmers have felt it's a plot to make their lives more difficult. I don't believe that's true. It's a plot to make the average computer user's life less difficult, which makes Microsoft's and computer manufacturers' lives less difficult (in theory anyway). And it mostly works.

But embedded systems programmers are not (generally) average computer users. Average computer users simply use their peripherals, generally for what they were designed for. Embedded systems programmers bastardize serial and parallel ports for their own use, and hook up peripherals to systems for which they weren't developed. In doing so, we've ridden the cost curve down and are used to cheap, simple solutions. USB makes some of this more difficult.

Note that I don't have any particular love (or hate) for USB. I have written code for HID devices, but that was years ago at a different employer, and my income isn't dependent on its success (or failure). I just don't see it as a conspiracy against me using cheap PCs and peripherals, any more than I see protected mode as a conspiracy to keep me away from I/O port access (actually, less: keeping me away from I/O ports _is_ one of the goals of protected mode...).

Regards,

-=Dave

--
Change is inevitable, progress is not.
Reply to
Dave Hansen

Yes, Intel was in there, along with Microsoft (who I think started the ball rolling) and some PC manufacturers.

1394 and USB came along at pretty much the same time (though Apple had FireWire earlier). If you're going to fault USB with overkill, how much worse is 1394?

I don't understand. Could you expand on this?

Agreed.

This is mostly true, but incomplete. USB is designed to replace the keyboard, mouse, serial, and parallel ports as well as the ISA bus. There were _huge_ problems with I/O and interrupt conflicts in legacy systems. ISA PnP never worked very well. USB aims to reduce I/O and interrupt resource usage by concentrating it all in a single controller, and to simplify configuration (for the user) by doing it automatically when a new device is connected.

Neither did the developers of USB. Which is why keyboards and mice are generally designed to use low-speed (1.5 Mbit/s) mode. Helps reduce their cost.

Regards,

-=Dave

--
Change is inevitable, progress is not.
Reply to
Dave Hansen

Now why did you have to go and get all reasonable and rational on us? I was looking forward to a protracted rant;)

Bob

Reply to
Bob Stephens

Not worse; but better, actually. 1394 is a well-done protocol, fulfilling a real need (high-rate data streaming, esp. video), and it does its job well.

I'm faulting USB for overkill for some of its applications only: low-bandwidth, low-latency stuff like mouse and keyboard data has no business occupying the same wire as a 12 Mb/s data stream. It's essentially impossible to avoid one type of communication getting in the other's way. USB 1.1 is a complete hodgepodge. It's overkill for some of its planned applications, and severely underpowered for most of the others.

You'd have thought that the guys at Intel & friends would know that "one size fits all" doesn't ever really work. But they did it nevertheless. Which begs the question: why?

For a long time now, Microsoft and Intel have worked by the "single PC as a center of your digital world" dogma. That's what led to crazy stuff like the current typical supermarket PC: 3+ GHz CPU, a GiB of memory, thermal design problems that make working in outer space appear like a minor issue in comparison, PCs louder than your average car, and all that.

What we're looking at here is utter, total centralism. The same kind of centralism is designed into almost every aspect of the USB protocol. USB is highly asymmetric, offloading all the hard work to the central hub: by silent assumption, that's a PC (of some kind, i.e. Apple gets to play, too).

Now it's commonly accepted that dogmatic centralism is wasteful. Distributed systems are often more efficient, and it's a lot easier to specialize from a distributed general design to a one-node case than to generalize from a centralistic design to a situation where a single center simply can't cut it.

IMHO it's at least plausible to assume that in a purely open market, without the Wintel monopoly, other players would have grabbed a significant share of the market in the move from the single PC to a world of computing power delivered at the exact point it's needed.

Wait a minute: a 12 Mbit/s star (it's not actually a bus), disturbed by high-priority 1.5 Mbit/s packets and several layers of protocol overhead, is supposed to replace a 64 Mbit/s bus that was already being stretched to its limits by the hardware of that day? You gotta be kidding.

If there's a single I/O interface of the legacy PC USB 1.1 was in no way fit to replace, it's the ISA bus.

That's what rightly got us PCI. Not an entirely sweet pill, either, but in a totally different league than USB.

--
Hans-Bernhard Broeker (broeker@physik.rwth-aachen.de)
Even if all the snow were burnt, ashes would remain.
Reply to
Hans-Bernhard Broeker

For the same reason the same guys develop Wireless USB (WUSB), pretending Bluetooth has never existed.

Vadim

Reply to
Vadim Borshchev

Don't most keyboards still have PS/2 OR USB abilities? The PS/2 CAN be done on a couple of wires and some firmware...

The dominant uCs in use offer this ability; or do you have a specific USB-_only_ keyboard?

-jg

Reply to
Jim Granville

A little off the topic of this thread---but since it's getting a lot of attention...

I need to implement a USB host controller for an embedded system running VxWorks on a Xilinx Virtex II Pro FPGA. Can anyone here recommend a good choice for the external USB host chip (IP cores were too $$$) and software drivers to run it? I'll be connecting various devices (HIDs, mass storage, video, etc.) on the bus. I've been told to avoid Intel parts. Any experience or recommendations would be more than appreciated! Currently I'm thinking Cypress or Transmedia...

Regards,

Bo

Reply to
bo

So if I understand your position, you're not suggesting 1394 for keyboards. And I'm not sure why you brought up 1394 at all. Different bus for different applications. The only thing it really has in common with USB (AFAIK) is that they're both serial and both self-configuring.

Keyboard and mouse traffic doesn't really seem to get in anyone's way. The only real problem I've heard of is when you try to listen to music while burning a CD. USB 1.x can't keep up with it. Which is one of the reasons there's USB 2.0 (which I haven't tried yet -- the only USB peripheral on the new workstation the company bought for me is the mouse).

I think it's less a matter of "one size fits all" than it is of "adequate for projected uses." Capacity. Is it big enough? Apparently the answer was no (isn't it usually?): Thus USB 2.0.

As to why, I think they've stated it pretty clearly. They wanted to reduce hardware cost, and they wanted to simplify the end-user's job of installing and removing peripherals. A USB controller is much cheaper than the standard complement of 2 PS/2 ports, 1 or 2 serial ports, a parallel port, and an ISA bus, software has essentially zero manufacturing cost, and most users' CPU is idle 99.99% of the time. And USB hot PnP actually _works_ (at least in my experience). So they appear to have succeeded.

I think Intel and Microsoft build what they think will sell. Their game is to make money, not (necessarily) to innovate. If consumers demand something other than the "single PC yada yada yada," then they'll start building that.

Agreed. But the idea is to reduce the cost of the peripheral. How much do you want to pay for a basic keyboard? How successful would USB have been if the cost adder for the interface was, say, $10 instead of

I don't know where you get this. Distributed computing is _hard_. Microsoft can't even get multithreading to work well, and the myriad application vendors are worse.

You're upset because Intel and Microsoft didn't try to bootstrap an entirely new personal computing paradigm?

Oh, I see. You're upset because the new paradigm couldn't compete.

Actually, I'd describe it as point-to-point. Hubs act as repeaters.

In practice, it's not that bad. In fact, keyfob flash drives are kinda slick. To replicate that functionality with ISA, you need to install a flash card adapter ($ and compatibility issues), and 1394 would probably not be as cost-effective.

When bandwidth requirements exceeded ISA's capability, we got PCI. By the time USB came around, ISA bandwidth wasn't really an issue. USB is perfectly capable of handling any single peripheral that would normally be attached to the ISA bus. It's only when you start running a couple (or more) high-bandwidth processes at the same time that you run into trouble.

When a USB device is enumerated, it allocates bandwidth. In theory, you should not be able to attach a device to the USB if its required bandwidth would exceed what's available (remaining) on the bus. The exception is bulk traffic, which effectively says "I'll take whatever is left over at any given time." Isochronous endpoints get their bandwidth reserved but no retries, so it's usually the isochronous services that suffer when bandwidth usage gets high -- usually sound devices.
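As a rough illustration of that bookkeeping (deliberately simplified; a real host stack also budgets per-transaction overhead and bit stuffing): a full-speed frame is 1 ms and carries roughly 1500 bytes, of which at most 90% may be promised to periodic (interrupt/isochronous) endpoints. A hypothetical host might track it like this:

#include <stdint.h>
#include <stdbool.h>

#define FS_FRAME_BYTES   1500u                            /* ~12 Mbit/s over a 1 ms frame  */
#define PERIODIC_LIMIT   ((FS_FRAME_BYTES * 90u) / 100u)  /* 90% reservable for periodic   */

static unsigned periodic_reserved;   /* bytes per frame already promised */

/* Try to reserve bandwidth for a periodic (interrupt/isochronous) endpoint.
 * max_packet = wMaxPacketSize, interval_frames = polling interval in frames. */
bool reserve_periodic(unsigned max_packet, unsigned interval_frames)
{
    if (interval_frames == 0)
        interval_frames = 1;

    /* Average cost per frame; real stacks budget the worst-case frame. */
    unsigned per_frame = (max_packet + interval_frames - 1) / interval_frames;

    if (periodic_reserved + per_frame > PERIODIC_LIMIT)
        return false;        /* would exceed the bus budget: refuse the device */

    periodic_reserved += per_frame;
    return true;             /* bulk and control traffic get whatever is left  */
}

With this simplified model, a keyboard polled every 10 ms with an 8-byte report costs about one byte per frame of the budget.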

PCI is another technology that has been decried as a conspiracy to prevent embedded systems designers from doing their job. In this case the complexity resides more heavily on the hardware side rather than the software side. How much harder is it to design a PCI card than an ISA card? How many PCI slots does your system have? Software is not that much more difficult on PCI than ISA...

Regards,

-=Dave

--
Change is inevitable, progress is not.
Reply to
Dave Hansen

I'm not. That would make even less sense than a keyboard attached to the same wire as a 320 GB external USB 2.0 hard disk.

Because USB devices fall into several large categories:

1) replacement of former "legacy" ports (keyboard, mouse, printer, modem, ...)
2) high-bandwidth devices (nowadays usually USB-2.0 hi-speed), like external harddisks and video stuff.
3) stupid stuff, including USB-to-USB "network cables".

My point is that 1) and 2) don't have any business travelling over the same pair of wires, and for 2), defining a new protocol was completely superfluous already when USB was originally designed --- 1394 filled that niche nicely, at higher bandwidth, with less hassle.

If they had designed a new bus for category 1), and 1) alone, that'd have been perfectly fine with me. But by forcing category 2) into the same channel, without compelling need of doing so, they crossed the line.

One difference that makes me criticize USB is that its self-configuration mechanism is *way* over-complicated. It's this mechanism, mainly, that causes the need for a PC at the root of the tree. 1394 auto-configuration is so simple that a digital camcorder's firmware can usually get it right without breaking a sweat.

Well, at the heart of it, HID data are sent in a USB transfer mode that gives them priority over all high-rate user data on the same bus. Add that HID devices are typically low-speed, and you have each byte of mouse data bulldozing 10 or more bytes of other data out of the way. With USB-2.0, make that ~300 bytes.

This design is a bit like building a single-lane highway to be used jointly by cars and trucks, and then later declaring it good for simultaneous use by Formula-1, too.

USB 1.x can no longer keep up with CD burning anyway, these days. Not since the last 16x burners stopped being produced.

Except that one USB controller was apparently insufficient -- the current motherboard trend appears to be to move up from 6 to 8 onboard USB ports --- yes, that's more than the overall number of ports on a typical legacy PC, which USB set out to replace with a single plug. Doesn't seem to have worked out all that well, does it?

Let's say: it *can* work --- ISA PnP never stood a chance. Arguably they shouldn't even have tried that. The major difference now is that instead of hardware that doesn't have enough flexibility to work in somewhat strange PCs, you now get software that doesn't work in somewhat strange PCs.

Hot PnP? You tell that to the software of my USB scanner, which a) complains loudly each time I boot without the scanner attached, b) doesn't work at all if I plug in the scanner later, and c) reliably brings down my Linux system if I so much as plug it in.

Why should I have to pay *anything* extra for a basic keyboard, just so it can coexist on a shared cable with my video camera, printer and external harddisk? I think I shouldn't, because it shouldn't have to do that. That's where the fundamental misconception is.

Reducing the number of different port types and plugs on a typical PC was a good first idea. It started to go downhill when somebody decided that it must be reduced all the way to _one_ type of port.

I said it was *efficient*, not that it's easy. The telltale figure is those '99.99% of the time idle' you quoted yourself: that's an overgrown centralistic system, scaled to be powerful enough to do everything and then some all by itself, which doesn't actually get any work to do.

No. Because they (ab)used their effective control over the market to keep the general public from recognizing that less centralistic alternatives were even possible.

No. Because it wasn't given a fair chance to compete. Microsoft decreed that a PC without USB was, effectively, unsellable. So peripherals that aren't USB became effectively unsellable some time after that.

Yeah, but even they are bottlenecked by USB 1.1, so the good ones are 2.0 these days.

But "replacing ISA bus" would have meant to take over the interface to the main ISA bus device of a legacy PC, though: the internal harddisk. And that's been faster than USB1.1's 11 Mbit/s for years before the first USB mainboard was sold.

ISA bus has never really been much of an external interface anyway, so I highly doubt replacing it was ever, actually, part of the design goals of USB. If it had been, USB 1.1 would have been rated a complete failure even by its authors.

USB keyfobs are slick mainly by comparison to what was lacking for such a long time: a random-access, exchangeable storage medium with significantly more capacity than a floppy disk, found in essentially every PC. People got used to waiting for their CD burners to finish working, instead.

Oh, ISA _bandwidth_ was very much an issue --- such a huge issue, in fact, that the only viable solution to it was to get rid of the ISA bus completely. The ISA bus as such stopped being an issue at that point, but its bandwidth didn't.

USB-2.0: yes. USB-1.1 was too slow by almost a factor of 10. Hard disks have been faster than 1.5 MB/s continuous transfer rate for a *long* time now.
--
Hans-Bernhard Broeker (broeker@physik.rwth-aachen.de)
Even if all the snow were burnt, ashes would remain.
Reply to
Hans-Bernhard Broeker

... snip ...

Some years ago I built an extremely cheap peripheral. Its only duty was to read a memory dump from a (nuclear) pulse height analyzer, which in turn was in a peculiar ad-hoc format which sufficed to dump and reload the PHA. The physical interface consisted of the same miniature tape deck as was mounted in the PHA, and a single CMOS buffer chip, all hung off the parallel port. x86 timed loops did the clock/data separation, detected end-of-record gaps, etc., and allowed the memory dumps to be transferred to the PC and displayed and/or analyzed. It was profitable because it cost almost NIL to manufacture, and the design effort was in the software.

I am sure I was not alone in doing these things. None of this would be possible with a USB interface.
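For anyone who hasn't done this sort of thing: the timed-loop clock/data separation boils down to polling a parallel-port status bit until the clock edge shows up, then sampling the data bit. A rough sketch (the port accessor and bit assignments are hypothetical; the original hardware details aren't given here):

#include <stdint.h>

/* Hypothetical accessor for the parallel-port status register;
 * on DOS-era x86 this would be an inb() of port 0x379 or similar. */
extern uint8_t read_status_port(void);

#define CLOCK_BIT  0x10   /* assumed: tape clock wired to a status input */
#define DATA_BIT   0x20   /* assumed: tape data wired to a status input  */

/* Wait for a rising edge on the clock line, then sample the data line.
 * Returns 0/1, or -1 if no edge arrives within 'timeout' polls
 * (a long silence marks an end-of-record gap).                          */
int read_bit(unsigned long timeout)
{
    unsigned long n = 0;

    while (read_status_port() & CLOCK_BIT)        /* wait for clock low  */
        if (++n > timeout) return -1;
    while (!(read_status_port() & CLOCK_BIT))     /* wait for clock high */
        if (++n > timeout) return -1;

    return (read_status_port() & DATA_BIT) ? 1 : 0;
}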

--
Chuck F (cbfalconer@yahoo.com) (cbfalconer@worldnet.att.net)
   Available for consulting/temporary embedded and systems.
     USE worldnet address!
Reply to
CBFalconer

Funnily enough, the FTDI FT2232C chip is designed to do exactly this. Besides the standard asynchronous protocol, it can also do synchronous protocols, specially targeted at ISP and JTAG stuff.

Meindert

Reply to
Meindert Sprang

If you need HS operation, and don't have a PCI local bus, check Philips parts:

onTheGo:

formatting link
host only:
formatting link

Unfortunately, FlexStack source costs about $50K. There's a PCI board with the 1761, with evaluation software for Linux. I don't know if it comes with source for Linux as well. Availability could be an issue for both, too (I suppose)...

They also have many full-speed hosts.

Reply to
Antonio Pasini
