Is it a lost cause?

The reason zero is tested first is that the hardware has to handle both negative and positive zero.
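
A modern analogue, as a minimal C sketch (IEEE floating point rather than the 1620's decimal representation, so treat it as an illustration of the general point only): the hardware keeps two distinct zero bit patterns but compares them as equal, so one "is it zero?" test catches both.

#include <stdio.h>
#include <string.h>

int main(void)
{
    double pos = 0.0, neg = -0.0;

    /* The two zeros have different bit patterns (the sign bit differs)... */
    printf("same bits:     %s\n",
           memcmp(&pos, &neg, sizeof pos) == 0 ? "yes" : "no");

    /* ...but the hardware compares them as equal, so testing for zero
       first handles both the negative and the positive form.           */
    printf("compare equal: %s\n", pos == neg ? "yes" : "no");
    return 0;
}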

Reply to
Andrew Swallow

I demand that Andrew Swallow may or may not have written...

RISC OS. BBC BASIC. Only issue here is that that isn't an emulator...

--
|  _  | Darren Salt, using Debian GNU/Linux (and Android) 
| ( ) | 
Reply to
Darren Salt

I demand that "gareth G4SDW GQRP #3339" may or may not have written...

I remember doing things like

LDR R0,foo
MOV R1,#bar
ADD R0,R0,#1

rather than

LDR R0,foo
ADD R0,R0,#1
MOV R1,#bar

to avoid some single-cycle delays. I have no idea whether that applies to modern ARM CPUs, though.

--
|  _  | Darren Salt, using Debian GNU/Linux (and Android) 
| ( ) | 
Reply to
Darren Salt

The Pi _IS_ an ARM implementation. It uses an ARMv6 processor. It can run RISC OS, Linux and FreeBSD, among other OSes.

On Linux there is ample support for assembly. Since I am not well versed in ARM assembly, I'll show what gcc outputs for hello.c:

-----------------------
$ cat hello.c
#include <stdio.h>

int main (int argc, char ** argv)
{
    printf ("Hello, world\n");
}
$ cc -S hello.c
$ cat hello.s
        .cpu arm6
        .eabi_attribute 27, 3
        .eabi_attribute 28, 1
        .fpu vfpv3-d16
        .eabi_attribute 20, 1
        .eabi_attribute 21, 1
        .eabi_attribute 23, 3
        .eabi_attribute 24, 1
        .eabi_attribute 25, 1
        .eabi_attribute 26, 2
        .eabi_attribute 30, 6
        .eabi_attribute 34, 0
        .eabi_attribute 18, 4
        .file   "hello.c"
        .section        .rodata
        .align  2
.LC0:
        .ascii  "Hello, world\000"
        .text
        .align  2
        .global main
        .type   main, %function
main:
        @ Function supports interworking.
        @ args = 0, pretend = 0, frame = 8
        @ frame_needed = 1, uses_anonymous_args = 0
        stmfd   sp!, {fp, lr}
        add     fp, sp, #4
        sub     sp, sp, #8
        str     r0, [fp, #-8]
        str     r1, [fp, #-12]
        ldr     r0, .L2
        bl      puts
        mov     r0, r3
        sub     sp, fp, #4      @ sp needed
        ldmfd   sp!, {fp, lr}
        bx      lr
.L3:
        .align  2
.L2:
        .word   .LC0
        .size   main, .-main
        .ident  "GCC: (GNU) 4.8.2 20131219 (prerelease)"
        .section        .note.GNU-stack,"",%progbits
$
-------------

See? ARM assembly, an integrated part of Linux. (Yes, this one was generated by C code, so I cheated.)
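
For completeness, the generated assembly can be fed straight back through the same toolchain and run (a usage sketch, assuming the gcc install above):

$ gcc hello.s -o hello
$ ./hello
Hello, world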

-- mrr

Reply to
Morten Reistad

Pi1/+ use ARMv6, Pi2 uses ARMv7, Pi3 ARMv8.

Reply to
A. Dumas

They were talking about FORTRAN II. Was it all two's complement back then?

/BAH

Reply to
jmfbahciv

No - the machine I was running Fortran II on, the IBM 1620, was a decimal (BCD) machine with a sign-magnitude number system. There were 5 bits to a digit: 1, 2, 4, 8, and flag.

The flag bit was used as negative sign, and also as a field end marker.

--

-TV
Reply to
Tauno Voipio

In a comp.sys.raspberry-pi message of Tue, 14 Jun 2016 23:42:10, Martin Gregorie posted:

The active PC, I think, should be a real hardware counter, as opposed to an ordinary register, with hardware parallel load for jumps. Most instructions are not jumps, so it is incrementation that should be optimised.

The PDP-11 instruction set required pre-decrement of the PC to work; but I imagine that -(PC) occurred so rarely that it could have been handled by an interrupt into the OS code. But 014747 was an interesting instruction.
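
For anyone who doesn't read PDP-11 octal in their head, a small C sketch of how 014747 splits into fields, assuming the standard double-operand layout (4-bit opcode, 6-bit source, 6-bit destination) - my decode, so take it with a grain of salt:

#include <stdio.h>

int main(void)
{
    unsigned op     = 014747;             /* the instruction word, in octal    */
    unsigned opcode = (op >> 12) & 017;   /* 01 = MOV                          */
    unsigned src    = (op >> 6)  & 077;   /* 47 = mode 4, register 7: -(PC)    */
    unsigned dst    =  op        & 077;   /* 47 = mode 4, register 7: -(PC)    */

    printf("opcode %02o  src %02o  dst %02o\n", opcode, src, dst);
    /* i.e. MOV -(PC),-(PC) - the classic party trick that ends up
       copying itself backwards through memory.                      */
    return 0;
}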

FYI : I too started with Algol 60, using just an ASR33. Transfers to/from the computer were, IIRC, by bikenet. Subsequently, I became acquainted with one of the authors of the Algol 60 report.

--
 (c) John Stockton, near London.                Mail ?.?.Stockton@physics.org 
  Web  <                              > - FAQish topics, acronyms, and links.
Reply to
Dr J R Stockton

How's your pendulum clock coming along, for I'm just about to set out on Alan Timmins' 8 day longcase clock?

Reply to
gareth G4SDW GQRP #3339

As for the original post, I have to agree. They seem to be pushing Python more than anything. There are languages that can be used per age group, but for older people the areas which aren't being taught properly, or at all, are dev and compilers; both require asm, and if you're starting out you don't start on x86 unless you're mental.

RPI should've been MIPS, but that's another bag of worms.

Reply to
Luke A. Guest

I don't understand what your point is, but assembler is a red herring in this context.

I've successfully designed and written systems in COBOL, PL/1 and RPG3 on machines where I didn't know the assembler and didn't have any need to know it (ICL 2900, IBM AS/400, DEC VAX). Same goes for ANSI C on DEC VAX and Alpha Server, Intel X86 and ARM. I also use Java, Perl and awk scripts (all are compiled to an interpreted P-code - just like Python - before they are executed), and again haven't needed to know anything about the code emitted by the compilation phase. I also know and have used assemblers fairly extensively (for ICL 1900, 6800, 6809 and 68xxx kit).

In all cases, regardless of whether the language is compiled or interpreted, a high-level language or an assembler, being able to write concise, efficient, READABLE & MAINTAINABLE code in the required language is much more important than what the language is[1].

So, what's so special about assembler or using a compiler?

[1] However, high level languages are considerably more portable than any assembler and so code written in them is more likely to be reusable. For instance, POSIX-compliant ANSI C can be developed on X86 hardware and then compiled and run without any changes whatsoever on ARM kit. Been there, done it. It works.
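
As a minimal sketch of that sort of portable code (POSIX uname(), nothing platform-specific - the same source compiles and runs unchanged on X86 and ARM, only the values it reports differ):

#include <stdio.h>
#include <sys/utsname.h>

int main(void)
{
    struct utsname u;

    /* POSIX uname() fills in the OS name and machine architecture. */
    if (uname(&u) != 0) {
        perror("uname");
        return 1;
    }
    printf("%s on %s\n", u.sysname, u.machine);
    return 0;
}
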
--
martin@   | Martin Gregorie 
gregorie. | Essex, UK 
Reply to
Martin Gregorie

Simple systems for consumerist programmers

Reply to
gareth G4SDW GQRP #3339

Let's see your verified CV then.

--
Using UNIX since v6 (1975)... 

Use the BIG mirror service in the UK: 
Reply to
Bob Eager

If you think that the accounting system for a Fortune 100 financial services company is a "simple system for consumerist programmers" you really have little touch with reality.

Reply to
J. Clarke

I learned IBM 360 assembler in high school, using a genuine "green card" reference card. What a rich selection of instructions to do "just what I wanted" [half a :-) there]. I found a valid reason to use "execute": execute MVC for a variable number of bytes without the overhead of all 'dem registers for the System/370's MVCL "Move-Characters-Long" instruction.

I never needed to use translate-and-test despite it being a wonderful multi-purpose instruction.

After using a 32 bit machine (IBM 360), I found Z80 weak and tedious to program in assembler :-/ "C" to the rescue!

Years later, I finally used a 68000 based trainer in assembler. Bliss!

Reply to
Jeff Jonas

At NJIT around 2005, a pre-requisite course for the Master's in Computer Engineering was a lab still using a 68k Single Board Computer with a minimal monitor in ROM. I thought it was FUN, finally going down to that level with one of the nicest chips ever.

Reply to
Jeff Jonas

... and it's the fun factor that most critics simply don't seem to comprehend.

--
W J G
Reply to
Folderol

That's the tip of the iceberg. What was the I/O hierarchy? The 360 used channel controllers to device controllers, a far cry from each device going direct to the main bus.

The Z80 (in interrupt mode 2: native chip support) received a 7 bit interrupt vector from the interrupting device, thus 128 vectors. But many chips generated multiple interrupts. A dual serial chip used the lowest 3 bits for "status alters vector":

- one bit for channel A or B

- 2 bits for receive, transmit, status or other change

so that means there are really only 4 unique bits per interface chip, for a maximum of 16 such chips (roughly what the sketch below decodes).
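
A minimal C sketch of decoding such a vector, following the bit layout described above (the exact field positions are illustrative, not lifted from a particular SIO datasheet):

#include <stdio.h>

/* "Status alters vector": the serial chip overwrites the low bits of its
   programmed vector, so one base vector fans out into chip + channel + cause. */
static void decode(unsigned char vector)
{
    unsigned chip  = (vector >> 3) & 0x0f; /* the 4 bits left over: <= 16 chips     */
    unsigned chan  = (vector >> 2) & 1;    /* one bit: channel A or B               */
    unsigned cause =  vector       & 3;    /* two bits: rx, tx, status/other change */

    printf("chip %u, channel %c, cause %u\n", chip, chan ? 'B' : 'A', cause);
}

int main(void)
{
    decode(0x2e);   /* an arbitrary example vector byte */
    return 0;
}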

I'm not up on that part of the 360/370 but I thought the "sense interrupt" gave a status word on who interrupted.

I'm more familiar with the IBM 1130. There were 6 vectored interrupts but each interrupt supports a 16 bit register for sharing it among devices. XIO "sense interrupt" loaded the per-interrupt status word: each device set its assigned bit if it wanted attention. The bit order determined the relative priority of devices at the same interrupt level. Very deterministic and quick. No polling or searching.

The "shift left and count" instruction quickly located the most significant bit set in the status word. No need to test bit by bit. Kinda like a software priority encoder. Intel now has LZCNT: Count the Number of Leading Zero Bits which is a C/C++ Compiler Intrinsic! And the older Intel instructions BSF and BSR - Bit Scan Forward/Reverse.

I vaguely recall the 360 having PSWs for handling interrupts, unsure about alternate register sets.

-- jeffj

Reply to
Jeff Jonas

For better or for worse, the VAX had a *very* rich instruction set. Probably the richest ever.

And it had proper direct 32-bit addressing, although of course memory was cheaper by then.

--
Using UNIX since v6 (1975)... 

Use the BIG mirror service in the UK: 
Reply to
Bob Eager

Exactly. I found it fun to use an Arduino and an Asterisk box for our doorbell. Many can't understand (although my wife can now see the benefits).

--
Using UNIX since v6 (1975)... 

Use the BIG mirror service in the UK: 
Reply to
Bob Eager
