RJ45/8P8C alternatives

They're reasonably sheltered locations but, out here, the sun cooks *everything*. E.g., unpainted PVC pipe will turn BLACK from exposure.

I'd really, really, really like to stick with 100Mb and a "traditional" hardware interface. It's a significant compromise to move away from it (but do-able).

Joe's comments re: PHY capabilities are encouraging. I'm just not sure I can "guarantee" performance (changing a design after release is painful :< )

Reply to
Don Y

Thanks, I'll look into that. I'm just not sure how to stress test this aspect of the design -- nor how to document how the connectors should be "wired" (i.e., detailed physical placement of each conductor).

When you look at some of the BFM in high speed cabling (SCSI, enet, etc.)...

Reply to
Don Y

It seems like every "ruggedized" RJ45 just makes the mounting problem more tedious. Other than RJ45's "on pigtails", it seems like my only practical solution is a different connector body entirely.

I'm going to play with some of the connectors suggested, here -- as well as some run-of-the-mill connectors -- to see just how sloppy I can get with fabrication before signal degradation becomes an issue.

Reply to
Don Y

A lot of them may use the same IP blocks, but I would think yes: on-die or separate would probably be close to the same performance. It would probably depend on what they cut out of the on-die version to reduce the area.

That's true. The layout guy ran diff pairs between the connectors. I'm not sure about careful impedance control; I'd have to go back and look at the stackup to see exactly what he did. My point was there is a mobo RJ45 connector -> small patch cable -> backplane RJ45 -> edge connector -> card -> LAN mux -> PHY, and everything works fine. Some day I'd like to take some Cat5, re-twist it to maybe one twist every foot, and test. I could fix the FW on the embedded system to force 100M or 10M and see what happens.

Try it. Take a patch cable, whack it in half and solder a DB9M and F in line. Run several GB of TCP traffic through it and watch the error counts. If you have access to the FW you could pull out more detail on what lower level errors you are seeing.

The good thing about the RJ45 is that it is the de facto "this is an ethernet port" connector to most people. With a DB9, they will probably try to plug a serial cable into it.

--
Chisolm 
Republic of Texas
Reply to
Joe Chisolm

I'm not sure of that. E.g., does every ARM w/onboard PHY/MAC use the same implementation (i.e., is the IP licensed from ARM regardless of the actual company doing the fab)? And, are there differences between (e.g.) a NatSemi design and an Intel one (and an ARM one)?

Yes. "Tradeoffs" esp with an eye to keeping the "system" (SoC) at the right price point, power consumption, die size, etc. "Something for nothing" comes to mind... :<

Understood.

I think it might be easier (more deterministic) to just set up a producer that continually emits UDP packets (easier to do at a high rate) having a particular, known, sequential content. Then, a consumer that looks for these packets and *expects* them to have that particular content. So, it sees *all* the traffic and not only "knows" what it should be but *where* it should be.

[Obviously, arrange for a pair of these, back-to-back, so you can run FDX tests]

("Hmmm... there should have been another UDP packet right *here* and its contents should have been XXXX. As there is no elastic store

*in* the cable, the PHY must have failed to recognize the start of the packet!")
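A minimal sketch of that producer/consumer pair (Python; the port, payload size, and filler byte are arbitrary choices, not anything from this thread). Note that a frame mangled badly enough to fail the FCS check gets dropped by the MAC, so at the consumer it shows up as a *gap* in the sequence numbers rather than as a corrupted payload:

```python
import socket
import struct

PORT = 5005          # arbitrary test port
PAYLOAD = 1024       # bytes per datagram
FILLER = b"\xAA" * (PAYLOAD - 8)

def produce(dst_ip: str) -> None:
    """Blast sequence-numbered UDP datagrams with known content."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    seq = 0
    while True:
        # 8-byte big-endian sequence number, then known filler
        s.sendto(struct.pack("!Q", seq) + FILLER, (dst_ip, PORT))
        seq += 1

def consume() -> None:
    """Expect every datagram, in order, with the known content."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.bind(("", PORT))
    expected, lost, bad = None, 0, 0
    while True:
        pkt, _ = s.recvfrom(PAYLOAD)
        seq = struct.unpack("!Q", pkt[:8])[0]
        if pkt[8:] != FILLER:
            bad += 1                  # payload differs from what it *should* be
        if expected is not None and seq != expected:
            lost += seq - expected    # "there should have been a packet *here*"
        expected = seq + 1
        if seq % 100_000 == 0:
            print(f"seq={seq} lost={lost} bad={bad}")
```

Run a producer and a consumer on each end of the link to get the back-to-back FDX arrangement mentioned above.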

But, all this will ever tell me is that it adds (at least) some number of errors to a comm stream. It wouldn't let me know, for sure, if it would be AS reliable in all deployment cases. (i.e., the absence of errors doesn't mean there never WILL be any attributable to this hack)

I'd also have to drag out a much longer length of cable and simulate the types of RF/EM environments it would encounter to get a *real* handle on the cost/compromise.

Yup. Adopting some other connector implies protecting against misapplication -- or, picking something sufficiently unique (given the application domain) that nothing comparable would likely be encountered *in* that deployment.

[I rescued what I casually assumed to be a wide SCSI cable (colloquially "SCSI 3") a few weeks back. When I finally sat down to look at it, I noticed the connectors on each end were *female*! Ooops!]
Reply to
Don Y


wall mount RJ45s are just punch down, so a couple of cm shouldn't break anything


how about Neutrik? only needs a round hole and two screws


-Lasse

Reply to
Lasse Langwadt Christensen

Well, TIA-568 does require particular connectors (IEC 60603-7, in fact).

Now if you wish to argue that, the TIA-568 document itself is only moderately expen$ive.

?-)

Reply to
josephkk


Wow, one twist per foot would be a significant twist reduction from the original cable.

?-)

Reply to
josephkk

I figure if you are going to break it, go all out... For short runs, say under 10 ft, I wonder what you can get away with. I would never put it in a product like that, but it would be fun to try anyway.

--
Chisolm 
Republic of Texas
Reply to
Joe Chisolm

I second this.

At 10/100baseT speeds, we are talking about frequencies below 100 MHz, where a full wavelength in a twisted-pair cable is about 2 m.

Anything longer than about 1/10 wavelength should be treated as a transmission line. Shorter connections than that can be handled with simple RLC modeling.

Untwisted wires in an M12 connector are far shorter than that.

Of course, for 10GbaseT this would not be an optimal solution.
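A quick sanity check on those numbers (the 0.65 velocity factor is a typical figure for twisted pair, assumed here rather than taken from the thread):

```python
C = 3.0e8     # free-space speed of light, m/s
VF = 0.65     # assumed velocity factor for twisted pair
F = 100e6     # 100baseT signalling lives below ~100 MHz

wavelength = C * VF / F          # ~1.95 m in cable
lumped_limit = wavelength / 10   # ~0.2 m
print(f"lambda = {wavelength:.2f} m, lumped limit = {lumped_limit:.2f} m")
```

So a centimeter or two of untwisted wire inside a connector body is an order of magnitude below the point where transmission-line behavior starts to matter.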

Reply to
upsidedown

They sell M8 to RJ45 cables wired for ethernet (10 or 100), so if you go that route you won't be pioneering it.

--
umop apisdn
Reply to
Jasen Betts


In all fairness, Cat3 is good for 10 MHz and has about 1 to 2.2 turns per foot (3.3 to 7 turns per meter). The biggest issue that I am finding is that the blips in line impedance are a lot more disruptive.

?-)

Reply to
josephkk

The easy way to test that is to find a 10/100baseT *MANAGED* ethernet switch, and use an SNMP MIB browser to look at the errors at the MAC (Data Link) layer. There are plenty of SNMP tools available to do this. Details, if anyone wants me to scribble something on the topic.
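For instance, here is a minimal poll of the MAC-layer error counters with pysnmp (the switch address, community string, and ifIndex are placeholders; any SNMP MIB browser will show the same objects):

```python
from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                          ContextData, ObjectType, ObjectIdentity, getCmd)

# Poll error counters for port 1 (ifIndex=1) of a managed switch.
# 192.0.2.10 and "public" are placeholders for your switch and community.
errInd, errStat, errIdx, varBinds = next(getCmd(
    SnmpEngine(),
    CommunityData("public", mpModel=1),        # SNMPv2c
    UdpTransportTarget(("192.0.2.10", 161)),
    ContextData(),
    ObjectType(ObjectIdentity("IF-MIB", "ifInErrors", 1)),
    ObjectType(ObjectIdentity("EtherLike-MIB", "dot3StatsFCSErrors", 1)),
))
if errInd or errStat:
    print(errInd or errStat.prettyPrint())
else:
    for vb in varBinds:
        print(vb.prettyPrint())   # e.g. IF-MIB::ifInErrors.1 = 0
```

Sample the counters before and after a bulk transfer through the suspect cable/connector; the difference is your error count.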

I used to do this for testing some rather marginal communications links. For example, running 10 or 100baseT over greater than 100 meter cable runs. My favorite demo is to put connectors on both ends of a 1000ft roll of CAT5e or CAT6, and see if it works. 1000ft of 10baseT is easy, but 100baseT has NEXT (near end cross talk) problems.

I've never tried a DE9P/S connector pair for ethernet. However, I have run 10baseT through the common 0.093" Molex nylon "power" connectors with good results. The mechanical designer forgot to install an RJ45 jack on the case and wanted to use four spare pins on the power connector for ethernet. I found no problems with any "impedance bump" at either 10 or 100baseT. I also ran a BER (bit error rate) test over a cable with two such connectors inline, and found something like 5 errors after a 2-day run. However, a Fluke cable certifier failed the cable, which may be a problem. My guess(tm) is that a DE9P/S connector pair will work.

Google Images for "military rj45".

M12 to RJ45 ethernet adapters are available; note that the M12 has only 4 pins.

--
Jeff Liebermann     jeffl@cruzio.com 
150 Felker St #D    http://www.LearnByDestroying.com 
Santa Cruz CA 95060 http://802.11junk.com 
Skype: JeffLiebermann     AE6KS    831-336-2558
Reply to
Jeff Liebermann

Kinda sounds like the specs for the RJ11 connector used inside telco MPOE (minimum point of entry) boxes. The RJ11 connector works nicely outdoors, but does require some silicone grease to reduce oxidation. If it works for Ma Bell, it should work for your gizmo.

For panel mount, something like this perhaps?

There may not be any need, but the almost endless variety of mounting and protection configurations make the RJ45 connector a good choice. Put differently, why would you *NOT* want to use an RJ45 for ethernet? There has to be a very good reason to justify something different from the common ethernet connector standard.

I think you mean MicroUSB. MiniUSB had problems. Small connectors are a basic requirement on cell phones and portable devices. You apparently don't have that requirement.

Speaking of power, I don't mind dealing with PoE (802.3af) at 48VDC because the source is properly protected from shorts and backfeeding power. I do mind home-made kludges, where someone uses the extra 4 wires in the CAT5e cable for power. The resistance of the cable is usually high enough to prevent a fuse from blowing. I've seen a few minor meltdowns. If you're going to do your own PoE thing, please design in some protection.

--
Jeff Liebermann     jeffl@cruzio.com 
150 Felker St #D    http://www.LearnByDestroying.com 
Santa Cruz CA 95060 http://802.11junk.com 
Skype: JeffLiebermann     AE6KS    831-336-2558
Reply to
Jeff Liebermann

The NID (Network Interface Device) tends to see very little physical access. I suspect ours has been "opened" three times in 20+ years: once when installed, once when we had some line problems and once when I changed the feeds into the house.

Imagine having that RJ11 on the back porch and carrying your (wired) phone out there from time to time. Granted, it may only see one use per week. But, a helluva lot more than 3 uses per score of years.

[Our back porch RJ11's are behind waterproof covers. Even keeps the critters from turning them into egg caches, etc.]

That's really pretty large (a consequence of RJ45!). Looks like I'd need to find a clear area at least 3/4" - 1" dia. Plus, a fair bit of room *inside* the enclosure.

Because of the environment and the enclosure I am *stuck* with. DB25 was the standard for EIA232. Why did it change? Why can you now find RxTx interfaces on 3.5mm phone plugs? :-/

And, from bozos plugging telephones into the socket!

You mean, Cisco? :>

I implement alternatives 1 and 2 with some "enhancements" in the PD that allow it to reinitiate the negotiation phase while still physically connected. And, a protocol that allows the PSE and PD to renegotiate power required *and* available after the initial power class selection.

I.e., the device can effectively unplug itself and reconnect claiming a higher/lower power class. And, the PSE can elect to unconditionally shed it as a load regardless of those prior negotiations.

"I don't care what I promised you previously. This is the new reality!"

Reply to
Don Y

Thanks. I couldn't recall the acronym. I tend to use MPOE, Demarc, and NID interchangeably.

You seem to be changing the specs as we go along. Nothing was ever mentioned about surviving a non-trivial number of insertion cycles. The common RJ45 does reasonably well in this area. Insertion cycle specs vary, but I found a few data sheets offering 2500 cycles. Hopefully, that's sufficient.

The RJ11/14 connector on my former Acterna tester probably had over 1,000 insertion cycles. No problem.

Sorry, but they don't make connector shells with rectangular threads. I vaguely recall seeing a product release on a rubber gasket to fit a panel mounted RJ45. However, that would only solve half the problem. The mating plug also needs to be protected. (I still vote for the pigtail).

It was originally a DB25P/S. Then, someone had a better idea and expanded the connector to 37 pins. When IBM contrived the IBM PC, they also ran out of panel space on the plug-in cards. So, they contrived a 9-pin RS232 connector. Large companies can do things like that without moral support from standards committees. However, when the multiport serial card manufacturers needed something cheaper than a DB25 octopus, they went to the RJ45. Well, not exactly: they used the 10-pin RJ50 instead. The connector was also designed to use flat ribbon cable that was reversible for DTE and DCE wiring. I was part of that mess, although my contribution was trivial. The 3.5mm jack came after I went on to other horrors. Who needs hardware flow control anyway?

Yep. Extra credit to router manufacturers that simply ground the unused 4 pins on the ethernet connector. I don't have an answer to a bozo proof connector system.

Nope. Cisco is all 802.3af with no creative PoE. Dlink, Netgear, and Ubiquiti all invented their own PoE systems. For example, the Dlink DWL-P100 pair used 15VDC in to get 5VDC out, and Ubiquiti has various voltages. PoE over 1000baseT is yet another complexication, where the DC power needs to be shared with the data lines. Mixing systems usually results in destroying something.

That's all in the 802.3af specification. However, the real trick is for the PSE to not deliver power until it has successfully negotiated a current limit with the PD. If the PSE lines are shorted, no power, and therefore, no smoke.
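Sketched as pseudocode (the ~25 kohm signature window and per-class limits are from 802.3af; the helper functions stand in for the PSE's analog front end):

```python
# 802.3af PSE output limits per negotiated class, in watts
PSE_LIMIT_W = {0: 15.4, 1: 4.0, 2: 7.0, 3: 15.4}

def pse_port_power_up(probe_resistance, classify, apply_power):
    """No valid detection signature, no power -- and therefore, no smoke."""
    r_ohms = probe_resistance()          # low-voltage detection probe
    if not 19_000 <= r_ohms <= 26_500:   # expect the ~25 kohm PD signature
        return None                      # short/open/non-PD: stay unpowered
    pd_class = classify()                # classification current measurement
    apply_power(limit_w=PSE_LIMIT_W[pd_class])
    return pd_class
```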

"Oh no... you did what I asked."

--
Jeff Liebermann     jeffl@cruzio.com 
150 Felker St #D    http://www.LearnByDestroying.com 
Santa Cruz CA 95060 http://802.11junk.com 
Skype: JeffLiebermann     AE6KS    831-336-2558
Reply to
Jeff Liebermann

I tend to prefer TNI (Telephone Network Interface).

I didn't mention spiders nesting in the connectors, either. Nor leaf cutting wasps stuffing leaf-wrapped eggs in them! I pointed to the primary issue: weather resistance with an irregular shaped enclosure. The fact that I decided the characteristics of the enclosure were worth mentioning (and not just saying "outdoor application") and, IMO, ruled out the RJ45 form factor should suggest I can't FIT an RJ45 to the enclosure:

"an irregular-shaped (metal) enclosure"

"suitable connection without requiring new castings"

"tedious to machine *into* the case. ABANDONING it for a more PHYSICALLY CONVENIENT connector leaves me uncertain as to how "untwisting" all those connections *at* the connector would compromise the signal path"

So, to use one would mean dangling it on a pigtail. I.e., the issue is one of moving to a DIFFERENT connector and still satisfying the SPECIFIC wiring constraints that are imposed for RJ45s.

Great. If I ever want to site an Acterna tester outside, I'll know what to use!

Exactly. The RJ45 form factor imposes constraints on the type of enclosure ONTO which it can be fitted.

Bell 103 modems. RS232 came into being to allow IBM and ATT to "fit together". The original interface imposed all sorts of details on the exact timing of individual signal assertions and releases -- genuine handshakes, not just "status indications". These quickly became bastardized as both ends of the link got smarter and as the types of comms equipment evolved.

Cisco *currently* may be that way. But, legacy Cisco equipment predated the PoE standards. E.g., "here are some unused conductors in the cable; let's push power down them". Their "pre-PoE" PoE had a proprietary (in the sense of "Cisco-specific") standard. Much lower power and the initial connection between the PD and PSE was negotiated via data transactions (i.e., the PD had to "boot" almost immediately and talk to the switch lest it not be powered as intended)

PoE (and PoE+) support two power distribution alternatives -- A and B (I guess that's more creative than "1 and 2" :< ). One scheme supplies power from the PSE over the unused pairs (if "unused" in the normal course of usage). The other imposes them as a common mode voltage on the data pairs (and, thus, is the only way that power can be delivered when all 8 conductors are in use -- e.g., GbE).
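For concreteness, the pin groups each alternative uses (per 802.3af; in Alternative A the polarity can land either way depending on MDI/MDI-X):

```python
# 802.3af power delivery over the 8P8C pinout:
# Alternative A: common-mode ("phantom") power on the data pairs
# Alternative B: power on the spare pairs (unused by 10/100baseT)
POE_ALT_A = {"supply": (1, 2), "return": (3, 6)}   # data pairs, center taps
POE_ALT_B = {"supply": (4, 5), "return": (7, 8)}   # spare pairs
```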

Both alternatives require the PD to engage in the same sort of negotiation when first connecting to the cable/network. That negotiation allows the PSE to know how much power the PD is requesting (device class) as well as provide some protection against non-PD devices being connected (telephones with RJ11s "accidentally" crammed into an RJ45). Power is not AVAILABLE on the cable until the PSE determines that it should supply the power -- and how much.

No. The specification deals with *one* negotiation at the time of connection. Even data protocols built atop this "guarantee" the PD the amount of power that it initially negotiated (device class) with the PSE. So, if a PD initially requests ~7W, it can't later ask for ~15W without (electrically) disconnecting and starting a new negotiation (presumably for that 15W level).

Of course, the PSE can decide NOT to honor its request! Now, the PD is effectively "unpowered". This means that PDs have to ask for the most they will EVER need on their initial connection. That can lead to an oversized PSE "just to be safe".

I prefer to leave the policy decisions to the application. If *it* decides that it would be prudent to SHED some load (regardless of how selfish that load happens to be in its power demands), it should be able to do so. Likewise, if a load needs more power, it should be able to negotiate that regardless of its initial assumptions.

Reply to
Don Y

You should be able to use any old Ethernet MAC that supports publishing those counters; the managed switch just makes the instrumentation more likely to be there.

That is fine for testing, but at least where I am, nobody wants a managed switch for deployed systems unless you have a solution to guarantee the switch configuration pretty much without human intervention.

This might seem an odd post, but these days, the line between testing and deployment is pretty fine.

A respectable BER figure for Ethernet is 10^-10. Almost everything is even better than that.

I know this is just a test methodology, but I'd avoid 1000 feet of copper for deployment unless you have a cable management system capable of keeping *everything* off those cables.

We do use 200 foot cables as a backup for 802.11 but they're purely a security blanket at this point.

Optical is coming farther and farther downmarket. Run 4:1 media redundancy and hope for the best :)

I'd really just bypass the managed switch and go for the Fluke certifier. They're excellent. I have not measured *how* excellent, but I have had no failures due to cables since we invested in one.

And again: we run M12 in very rugged environments (outside, mobile equipment, industrial environments, filthy-dirty) and they work great.

The Fluke certifier loves 'em.

--
Les Cargill
Reply to
Les Cargill

True. However, the advantage of using a managed switch is that I can carry it around with me, "tap" into any ethernet cable, and produce immediate results without needing to install SNMP services on a target machine. Using a managed switch also allows me some control over NWAY (auto-negotiation) and allows fixing the interface rate and protocol.

A "managed" solution seems to imply that there's someone around to do the managing. That usually means me. However, you do have a point. Most of my customers fail to appreciate receiving error logs via email, and wouldn't know what to do with them anyway. However, long term monitoring is not my purpose here (although I do graphs with MRTG and RRDTool, which my customers can easily understand). Inserting a managed switch, and using SNMP to collect numbers is for testing if a creative cabling and connector scheme is likely to be usable. Think of it as a piece of test equipment, which is normally not left in the circuit.

Customer tested products? Yeah, I know the feeling. When firmware updates start to look like recall notices, I start to worry.

Well, let's see. BER = 10^-10 at 100 Mbit/sec is about 1 error every: 1 / (100*10^6 * 10^-10) = 100 seconds at the MAC layer. For an office environment, I usually see about 1 error per day. For a noisy industrial environment, maybe 10 errors per day. Yeah, that's in the ballpark: real links do considerably better than the nominal 10^-10. Since the observed errors are usually externally induced (i.e. power glitches), the error frequency for 1000baseT is the same as for 100baseT, resulting in a BER around 1*10^-11 (10x the bits, the same number of errors).
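The arithmetic, as a trivial helper for plugging in other rates (nothing here beyond the formula above):

```python
def mean_seconds_between_errors(bit_rate: float, ber: float) -> float:
    """Expected time between bit errors at a given line rate and BER."""
    return 1.0 / (bit_rate * ber)

print(mean_seconds_between_errors(100e6, 1e-10))   # 100baseT:  100 s
print(mean_seconds_between_errors(1e9, 1e-11))     # 1000baseT: 100 s
```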

I didn't have much choice. The run was through 3/4" PVC conduit, across an open field, and through a plumber's nightmare at each end. For various reasons, RF was unacceptable. The conduit was crammed full of cables and no more could be added. However, there was one existing CAT5e cable that was being used for a single POTS phone line. I proposed to use the existing pairs for ethernet, without expensive ethernet extenders. The company experts declared this gross violation of industry standards to be a capital crime, so I offered to demonstrate that it would work using a new 1000 ft roll of CAT5e cable. No problems at all at 10baseT HDX (half-duplex) or FDX (full-duplex). 100baseT was iffy in FDX due to NEXT (near end crosstalk), but worked just fine using HDX. I vaguely recall posting a rant on the topic in comp.dcom.wiring, and being accused of heresy or something.

I will admit that I had problems with duplex mismatch at one end of the link, but that was solved by replacing an old 10/100baseT switch with a newer one that correctly implemented auto-negotiation.

I have a spare pair of Ubiquiti M5-HP bullet radios, panels, PoE, etc. for such occasions. I also have at least 500ft of CAT5e handy for wi-fi backup. I use it sometimes for keeping a link up and running while I'm juggling radios, updating firmware, or trying to find sources of interference. However, mostly it's used for deploying an added WAP (wireless access point) to extend coverage while I get someone to do the CAT5e backhaul wiring.

Yep. I've done some fiber. The problem with a high fiber diet is that it's still rather expensive, and I have problems selling fiber to my cheap customers. Certainly not for residential work. We do have some fiber running around the neighborhood as part of a previous internet and CATV sharing system. The fiber parts have survived quite nicely. However, fiber is no guarantee against assault by mice, squirrels, back hoes, the water district, or falling trees.

I would love to own a Fluke certifier. I can't justify the expense, so I borrow one when the customer insists on testing. Meanwhile, I use a DC continuity tester to deal with my wiring errors, and a home-made TDR for finding cable defects and split pairs. (The fun really began when I wired one end of the cables for EIA-568B, while my accomplice wired the other ends for EIA-568A.)

That's probably the right way to do it, but I find exploring the creative and non-standard alternatives more interesting.

--
Jeff Liebermann     jeffl@cruzio.com 
150 Felker St #D    http://www.LearnByDestroying.com 
Santa Cruz CA 95060 http://802.11junk.com 
Skype: JeffLiebermann     AE6KS    831-336-2558
Reply to
Jeff Liebermann

I have yet to run into much that does not allow intervening in auto-negotiation, managed or not.

Sounds about right. Once you have the SNMP toolset, it's not hard to establish your configuration. But these things confuse people.

Sure.

Thereabouts. IMO, the Ethernet interface is no longer the source of most errors these days.

LOLz!

Yep.

Probably not in residential.

A TDR is just fine so long as you know what to look for and it supports the right cable profiles. I think the certifier we have is like $1K. Might be less - there are a lot of them.

Not having one probably cost us at least an order of magnitude more.

These days, I just want the phone not to ring :)

--
Les Cargill
Reply to
Les Cargill
