I need a disassembler for AMD 188

You are incorrect. The address FFF0:0000 is a 20-bit address. It represents the original 8086/88 segment:offset pair: the segment portion is shifted left 4 bits and the offset portion is added to that to create a 20-bit address.
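The arithmetic above can be sketched in a few lines (the function name is mine, just for illustration):

```python
# Segment:offset -> 20-bit physical address, as the 8086/88 bus unit forms it:
# shift the segment left 4 bits, add the offset, keep the low 20 bits.
def phys(seg, off):
    return ((seg << 4) + off) & 0xFFFFF

print(hex(phys(0xFFF0, 0x0000)))  # 0xfff00
print(hex(phys(0xFFFF, 0x0000)))  # 0xffff0 (the 8086/88 reset address)
```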

Now to go back to the original question, the EPROM is a 512K-byte ROM with addresses 0h through 07ffffh. That doesn't mean it represents those exact addresses. The board is probably designed to map the 512K address space to the upper 512K of the 20-bit address space (simply by decoding A19). That would put the far jump you saw at FFFF:0 (the 8086/88 reset address), and that would put the location of FFF0:0 at offset 07ff00h in the EPROM. Look there and I'll bet you find something that looks like reasonable assembler code.
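If the mapping is as guessed above (EPROM decoded into the top half of the 1M space, i.e. A19 = 1, base 0x80000 assumed), the EPROM file offset for any segment:offset works out like this:

```python
ROM_BASE = 0x80000  # assumed: 512K EPROM mapped to the upper 512K of the 1M space

# Physical address of seg:off, then its offset within the EPROM image.
def rom_offset(seg, off):
    physical = ((seg << 4) + off) & 0xFFFFF
    assert physical >= ROM_BASE, "address falls below the EPROM window"
    return physical - ROM_BASE

print(hex(rom_offset(0xFFFF, 0x0000)))  # 0x7fff0 -> where the reset-vector far jump sits
print(hex(rom_offset(0xFFF0, 0x0000)))  # 0x7ff00 -> where to look for the jump target
```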

I wouldn't be so quick to call someone a fool GS. You're apparently not quite the rocket scientist you attempt to portray.

Patrick Klos

Excuse me, but AFAIK, Intel has referred to the 8086/8 family as a 32-bit processor since the early 80's, when I was first writing assembler for it. The 32-bit logical addresses happened to get mapped into a 20-bit address because that's all they had space for on the original 40-pin DIPs, IIRC.

I'll check my library tomorrow, but I still have a 'preliminary' copy of the 8086/8 databook somewhere.

I was making a comment on the progression of the OP's posts. He first asked for a disassembler, then asked where the program started, then asked where the checksum routine was. The last straw was when he asked for a 'decompiler'. To me, at least, it's really unclear whether he understands what he's trying to do.

BTW, I don't claim to be a rocket scientist, never have. I have consulted for rocket engineers, including an interesting system for firing torpedoes vertically from the deck of a ship. The entire control program fit in less than 8k of carefully written 808x assembly language, with full roll, pitch and yaw control.

Gob Stopper

It's a jump elsewhere into your EPROM. Nobody says that the chip is connected so that its base address is zero.

If the PROM address 0007fff0 is located at address 000ffff0 physically (it's the address 0ffff:0 written in bus form), where's address 000fff00 (0fff0:0) on the same chip?

Please get the 80186 family reference book and learn the rest by yourself - it will be a long way to go.

The next step in code very probably sets up the chip select unit, and understanding it is a key for the rest of decoding the code.

--

Tauno Voipio
tauno voipio (at) iki fi

(-- clip clip --)

The bus address width of 8086/8088 and 80186/80188 is 20 bits. The chips *are* 8/16 bit processors with basic register width of 16 bits.

Please understand that the 80186 family is much more integrated than the base processor. Opening the chip select unit code is crucial to the rest of the disassembly, and the chip select units of different 80186 family members are different.

The first 32-bit processor in the family was the 80386. The 80286 had 16-bit registers and a 24-bit address width, with a segmented memory management unit but no paging.

Here in the far North we say of a clueless person that he's outside like a snowman ...

--

Tauno Voipio
tauno voipio (at) iki fi

In my mis-spent youth as a cowboy in Montana we'd say, "Couldn't pour piss out of a boot if the instructions were written on the heel".

Ken Asbury


I don't know about the 1980's, but at least in the late 1970's the flagship products from Intel/Motorola/Texas (i8086/MC68000/TMS9900) were all marketed as 16 bit processors.

One should remember that 1 MiB of memory was a huge amount, requiring a full 19" rack of core or something like an 8U-high 19" box using DRAMs in the late 1970's.

Paul


FWIW, I have an Intel MCS-86 Users Manual, marked "preliminary", and dated July 1978, which opens with:

Chapter 1 - Introduction

"The Intel(R) 8086, a new microcomputer, extends the midrange 8080 family into the 16-bit arena. The chip has attributes of both 8 and 16 bit processors. By executing a full set of 8080A/8085 8-bit instructions plus a powerful new set of 16-bit instructions, it enables a system designer familiar with the existing 8080 devices to boost performance by a factor of as much as 10 while using essentially the same 8080 software package and development tools."

No mention of 32-bit; in fact, Intel was more concerned about promoting compatibility with the 8-bit 8080 - so much so that they make it sound like you could run 8080 code produced with 8080 tools (you couldn't, but it is fairly easy to recode 8080 assembly source for the 8086).

I don't see anywhere in the 1979 "8086 Family Users Manual" where it actually states that it is an 8-, 16- or 32-bit processor; however, it does say:

Chapter 1 - Introduction ... Functional Description ... Microprocessors ... - Processors operate on both 8- and 16-bit data types; internal data paths are at least 16 bits wide.

It's been a while since I've had these books open - Brings back memories of how excited we were to get them... along with that "flat blue box" containing the first 8086 chipset.

--
Dunfield Development Services         http://www.dunfield.com
Low cost software development tools for embedded systems
Software/firmware development services       Fax:613-256-5821

Those compatibility claims were strongly promoted in early materials leaked out prior to the official release (I've seen some early multiple-generation photocopies :-). However, these compatibility claims quickly disappeared when the binary encoding of the instructions was released and everyone realised that it was not binary compatible with the 8080.

Paul


Even the 1978 preliminary "MCS-86 Users Manual" does fess up to the fact that the code is not binary compatible further in (quite a ways in, IIRC).

Intel promoted the devices as "source code compatible", and offered a software product to automatically translate 8080 source code into 8086 source (it is quite easy to do - my own "assembly translator" has a set of fairly simple tables to accomplish this). A number of "quirks" in the 8086 architecture can be attributed to the desire to be able to translate 8080 code to 8086 automatically (e.g. things like LAHF and SAHF, which allowed you to perform the equivalent of "PUSH PSW" and "POP PSW" with a two-instruction sequence).
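To make the table-driven idea concrete, here is a hypothetical sketch of such a translator in Python - the helper names are mine, not anyone's actual product, and only a couple of mappings are shown (the register correspondence follows Intel's documented 8080-to-8086 mapping: A->AL, B->CH, C->CL, D->DH, E->DL, H->BH, L->BL, M->[BX]):

```python
# Hypothetical 8080 -> 8086 source translation table (illustrative subset only).
REG_MAP = {"A": "AL", "B": "CH", "C": "CL", "D": "DH",
           "E": "DL", "H": "BH", "L": "BL", "M": "[BX]"}

# 8080 instructions with no single 8086 equivalent expand to short sequences,
# e.g. PUSH PSW becomes LAHF (flags+A into AH:AL) followed by PUSH AX.
SPECIAL = {
    "PUSH PSW": ["LAHF", "PUSH AX"],
    "POP PSW":  ["POP AX", "SAHF"],
}

def translate(line):
    line = line.strip().upper()
    if line in SPECIAL:
        return SPECIAL[line]
    op, _, args = line.partition(" ")
    if op == "MOV":
        dst, src = (a.strip() for a in args.split(","))
        return [f"MOV {REG_MAP[dst]},{REG_MAP[src]}"]
    return [line]  # many opcodes would map one-for-one; a real table is far larger

print(translate("MOV A,M"))   # ['MOV AL,[BX]']
print(translate("PUSH PSW"))  # ['LAHF', 'PUSH AX']
```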

Obviously Intel felt that being able to migrate 8080 applications directly to the 8086 was an important factor; however, I don't recall all that many instances where this was actually done - I'm sure there were some, but most of the people I worked with got "into" the 8086 and embraced its native instruction set fairly early on.

--
Dunfield Development Services         http://www.dunfield.com
Low cost software development tools for embedded systems
Software/firmware development services       Fax:613-256-5821

I attempted it with Intel's conversion program, and ended up doing it all by hand.

The next transition, from 80188 to ARM7TDMI was much less painful. In the meantime, the 80188 code was rewritten in C (Borland 2.5), and a 40 kbyte application moved to GCC code in a week, including a multi-thread kernel.

--

Tauno Voipio
tauno voipio (at) iki fi
