design advice


Until now I have only used Microchip controllers, for projects like IO devices, serial RF transmitter/decoder software with sampling and CRC correction, paging decoders for POCSAG and FLEX, modem control software and other small designs.

At the moment I'm designing a new "universal" controller board with the following
IO / options.

Microchip PIC18F6525 controller in a TQFP64 package
dual Atmel 4 Mbit SPI flash devices
dual serial port: RS-232, RS-485, or a combination
5 serial shift registers (74HC597) for 40 digital inputs
serial shift registers (74HC595) with ULN2803 drivers for the digital outputs
DS1307 clock chip
switching regulator

sub-board with an RTL8019 controller for an Ethernet option
sub-board with LCD display, key input, rotary encoder and an FT232BM USB chip
sub-board with a modem for telecom applications

Some combinations are possible, e.g. the display board with a modem or with Ethernet.

So far the Microchip parts have done the job for me, but I now need TCP/IP for my
next project: a reasonably simple TCP/IP and web project with a low-speed data
connection (< 100 kbit/s).

I have been following the new ARM Philips LPC newsgroup with interest.

Could someone advise me on why I should or shouldn't stay with the Microchip part,
or whether I should switch to another device for some reason?

Any advice is welcome and appreciated (I can still change the controller
for my new PCB design). Also, if I do start with a new controller, where can I
find newsgroups or tools for debugging, compiling, etc.?


Gerrit Faas

Re: design advice

TCP/IP implementation is RAM hungry.
PICs don't have enough RAM, so if you want to implement TCP/IP without
limitations you will need a uC with more RAM (internal or external).

Best regards
Re: design advice
Take a look at the Zilog Z80...
Zilog already provides TCP/IP software and a webserver for the Z80.

I'm working with the smaller eZ8, which already has 2 UARTs (with IrDA),
a single SPI, and a lot of other nice stuff, including on-chip debugging...

- Joris


Re: design advice

I think you mean the eZ80, not the Z80. The eZ80 is a 24-bit MCU, while the Z80
is an old 8-bit MCU (which is still in use).

Re: design advice

Forgot the 'e' indeed ...

Re: design advice

Thanks for the advice. I will look into the different controllers and see which
would be the most applicable.

If there are still pros and cons for Microchip vs. ARM, please share.

Kind regards,



Re: design advice

ARM is a 32-bit device, while the other suggested processors have
narrower word widths. The TCP/IP arithmetic is also 32 bits, so
there is a native advantage with an ARM. Also, a 32-bit processor
can more easily handle memories over 64 kbytes, which pretty
soon prove necessary in TCP/IP applications.

Been there - done that (TCP/IP on AT91's).
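The "TCP/IP arithmetic is 32 bits" point can be illustrated with the Internet checksum (RFC 1071), which every IP, TCP and UDP implementation computes constantly. A minimal sketch (not from any poster's code): 16-bit words are summed into a wider accumulator and the carries are folded back in, which a 32-bit register does naturally while an 8-bit MCU needs multi-byte arithmetic for every addition.

```python
def internet_checksum(data: bytes) -> int:
    """Internet checksum (RFC 1071): sum 16-bit words in a wide
    accumulator, fold the carries, return the one's complement."""
    if len(data) % 2:            # pad odd-length data with a zero byte
        data += b"\x00"
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]   # big-endian 16-bit word
    while total >> 16:                          # fold carries back into 16 bits
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

# The example data from RFC 1071 section 3:
print(hex(internet_checksum(bytes([0x00, 0x01, 0xF2, 0x03,
                                   0xF4, 0xF5, 0xF6, 0xF7]))))  # 0x220d
```

Appending the computed checksum to the data and summing again yields zero, which is how a receiver verifies a segment.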

Tauno Voipio
tauno voipio (at) iki fi

Re: design advice
When working with TCP/IP on a device with only a little memory, you should be
aware of the send and receive windows used and of the optimizations to reduce
network traffic (delayed ACKs).

I've seen a TCP/IP implementation that had little memory available, and there
the window was the limiting factor for communication with a (Windows) PC.
Basically, the device's window was much smaller than the PC's send window, so
only a little data could be sent ahead (only 8 or 16 KB, while the PC had
a window of at least 64 KB). There is an optimization (turned on by default
in Windows) that only sends ACKs at predefined time intervals (or when the
receive window reaches its capacity). The effect was that the window was only
partially filled (the embedded device was out of memory), and the device needed
to wait a long time for the ACKs from the PC (which were sent every 200 ms).
So it ran over 100 Mbps Ethernet and, due to memory limitations, only reached
800 kbps! It worked, but make sure you have enough memory...
Of course, turning off the optimization on the PC so that ACKs are sent
immediately helped, speeding it up by an order of magnitude!
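A rough back-of-the-envelope check of why this happens (illustrative numbers, not measurements from the post): TCP can have at most one window of data in flight per round trip, so when the effective round-trip time is dominated by a 200 ms delayed ACK, a small window caps throughput regardless of the link speed.

```python
def max_throughput_bps(window_bytes: int, rtt_s: float) -> float:
    """Upper bound on TCP throughput: at most one full window of
    data can be in flight per round-trip time."""
    return window_bytes * 8 / rtt_s

# 16 KB window on the embedded device, effective RTT dominated by
# the 200 ms delayed-ACK timer on the PC:
print(max_throughput_bps(16 * 1024, 0.200))   # 655360.0 bit/s, i.e. ~655 kbit/s
```

That lands in the same ballpark as the ~800 kbps the poster observed on a 100 Mbps link, and it shows why disabling delayed ACKs (shrinking the effective RTT) gave roughly an order-of-magnitude speedup.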

Hope this illustrates the memory demands TCP makes to actually get decent throughput.

- Joris

Windows 2000 TCP/IP Configuration Setting affected:

Key: Tcpip\Parameters\Interfaces\interface

Value Type: REG_DWORD (number)

Valid Range: 0-6

Default: 2 (200 milliseconds)

Description: Specifies the number of 100-millisecond intervals to use for
the delayed-ACK timer on a per-interface basis. By default, the delayed-ACK
timer is 200 milliseconds. Setting this value to 0 disables delayed
acknowledgments, which causes the computer to immediately ACK every packet
it receives. Microsoft does not recommend changing this value from the
default without careful study of the environment.


Re: design advice

Interesting observation. I wonder if sending little packets, much smaller
than Ethernet can handle (say, 200-octet packets), will trigger the
delayed-ACK mechanism on the receiving side. After all, doesn't telnet
ACK every single character?


Re: design advice


There is usually an echo with a piggy-backed ACK. A single keystroke
generates (if line mode is off) three segments: the initial character,
the echo with an ACK, and the ACK of the echo.

For bulk transfer with small segments, such as Web pages, there can
be a delay before a segment is ACKed, to provide the browser end
the opportunity to piggy-back some data with the ACK. If the sending
end waits for the ACK, this will slow the transfer noticeably. OTOH,
sending without waiting for the ACK makes it necessary to store all
un-ACKed data at the sending end.

Tauno Voipio
tauno voipio (at) iki fi

Re: design advice
When you try out telnet under Windows and you start hitting keys quite fast,
you might notice that Windows actually packs several characters into a single
TCP packet (the Nagle algorithm).

So, since networks already have delay, implementations add some additional
delay to use the network more efficiently (less overhead). The Nagle algorithm
is documented in the RFCs (RFC 896). Perhaps delayed ACKs are documented there too.
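Applications that need small writes to go out immediately (interactive protocols, request/response with tiny messages) can disable Nagle's algorithm per socket. A minimal sketch using the standard BSD-style socket option:

```python
import socket

# Nagle's algorithm coalesces small writes into fewer, larger segments.
# Setting TCP_NODELAY disables it, so each small write is transmitted
# immediately instead of waiting for outstanding data to be ACKed.
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)

# Verify the option took effect (returns non-zero when set):
print(sock.getsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY))
```

The trade-off is the same one discussed above: lower latency per write, but more packets and more overhead on the network.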

- Joris
