Is it a lost cause?

Well, duh. If you break a system it is pretty easy to demonstrate that and get the glory.
But how does one prove a secure system is, in fact, secure? No glory there.
Reply to
Osmium
Why not turn the problem on its head?
Base the course on a system with *very* well known easily exploited vulnerabilities, and discussion on what should have been done at the outset.
Surely it's the security mindset you want to foster, not specific methods (which may even become redundant over the lifetime of the course itself).
--
W J G
Reply to
Folderol
FWIW, my green card and my flowcharting template are both in my Struble on my desk at work. One of these days I'm going to have to work through Struble again just to get back up to speed.
Reply to
J. Clarke
When the Titan was at Cambridge, students were encouraged to try to break the security and report success, which sometimes led to a job with the department with instructions to close the hole they found. I just missed it, but by all accounts the Titan was a hard nut to crack - by comparison, hacking the 370 was discouraged as being too easy.
--
Steve O'Hara-Smith                          |   Directable Mirror Arrays 
C:>WIN                                      | A better way to focus the sun 
Reply to
Ahem A Rivet's Shot
plonk
--
Pete
Reply to
Peter Flass
I REALLY, REALLY miss Execute on x86.
Which are nothing alike. OTOH x86 can be clean today if you confine yourself to a subset of instructions.
--
Pete
Reply to
Peter Flass
If you can't get the "lower intelligence classes" to understand you, maybe you want to reexamine how you're expressing yourself.
--
Pete
Reply to
Peter Flass
At a certain point the compiler has to assume the programmer knows what he's doing when he writes something a certain way.
I think perhaps nine out of ten programmers can write some pretty complex systems without knowing assembler. There are times when you need the tenth guy around, though. I always wanted to be that guy, so I tried to learn the machine language of every machine I worked on.
--
Pete
Reply to
Peter Flass
That's about what I did with PL/I. A bit more than 256 bytes, but still not much. If I worked at it I could probably convert more assembler to PL/I, but at a certain point it's just easier to code the assembler.
--
Pete
Reply to
Peter Flass
Rudeness is a behaviour for which those who are rude need to examine how they are expressing themselves.
Reply to
gareth G4SDW GQRP #3339
Exactly.
--
New Socialism consists essentially in being seen to have your heart in  
the right place whilst your head is in the clouds and your hand is in  
Reply to
The Natural Philosopher
Any system that does not involve the programming of I/O, interrupts, DMA etc. at the I/O register level is, in computing terms, simple, for financial systems are presented with a sanitised emulation of an idealised machine, especially when RPG is involved.
There may be complexities in the application, but those complexities are not those of the workings of the underlying computer.
Reply to
gareth G4SDW GQRP #3339
I think that about the only time I *had* to use assembler due to specific HLL limitations was when I needed a TP monitor for an ICL 2903 that could handle application-specific code written in COBOL. The problem was that version of COBOL would only accept a string literal as the name of the called module. As a result I wrote a minimal TP-monitor in PLAN 3 while pushing as much code as possible over the wall to COBOL.
--
martin@   | Martin Gregorie 
gregorie. | Essex, UK 
Reply to
Martin Gregorie
Funnily enough, I took a photograph of my piece of Titan memory this morning...all 512 bytes of it.
--
Using UNIX since v6 (1975)... 

Use the BIG mirror service in the UK: 
Reply to
Bob Eager
It isn't about being able or unable to write complex systems. It's more a matter of their being efficient vs inefficient. Things like "don't put a file open in the inner loop if you can open the file once at the beginning instead". The code works either way, but it has a lot of unnecessary overhead one way that it doesn't the other.
Reply to
J. Clarke
I'm inclined to say that those aspects of computing should be considered simple these days, since they are very well established techniques. When you move on to optimising for the various cache levels and sizes, taking full advantage (or trying to) of deep pipelines with multiple execution units, considering processor affinity and handling NUMA, then you're talking about things that are complex in computing terms. So complex that the optimisation is best performed by software; the art is in designing the hardware and designing the optimisation software to take advantage of it. My hat is off to the people who do this stuff.
True enough - although there are complexities in computing that arise well above the level of the underlying computer - I would say more than there are at that level.
--
Steve O'Hara-Smith                          |   Directable Mirror Arrays 
C:>WIN                                      | A better way to focus the sun 
Reply to
Ahem A Rivet's Shot
On 2 Jul 2016 04:38:20 -0400, snipped-for-privacy@panix.com (Jeff Jonas) declaimed the following:
Sigma: four-bank, four-port, interleaved memory, which allowed the CPU and three I/O processors to chase each other through memory. IOPs came in two types, SIOP and MIOP (it's been too long; I don't recall whether SIOPs were used with tape drives, and MIOPs with disks and other random devices).
It's still a case of determining what really generated the interrupt -- from within the interrupt handler, and then dispatching to the correct sub-handler.
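The dispatch step described here can be sketched as a table of sub-handlers keyed by interrupt source. All names below are invented for illustration; on real hardware the `status` value would come from reading a controller's status register inside the top-level handler, not from a plain object.

```javascript
// Sub-handlers, one per device class (invented names).
const subHandlers = {
  tape: (unit) => `tape sub-handler servicing unit ${unit}`,
  disk: (unit) => `disk sub-handler servicing unit ${unit}`,
};

// Top-level handler: first determine what really generated the
// interrupt, then dispatch to the correct sub-handler.
function handleInterrupt(status) {
  const handler = subHandlers[status.source];
  if (!handler) {
    return `spurious interrupt: ${status.source}`; // no registered handler
  }
  return handler(status.unit);
}
```

The determination step is the expensive, error-prone part; once the source is known, dispatch is a single table lookup.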
--
	Wulfraed                 Dennis Lee Bieber         AF6VN 
    wlfraed@ix.netcom.com    HTTP://wlfraed.home.netcom.com/
Reply to
Dennis Lee Bieber
In comp.sys.raspberry-pi message , Fri, 1 Jul 2016 20:28:06, Luke A. Guest posted:
For safe programming practice enabling simple calculations, suitable for all ages outside the range of, say, 10 to 75, it is sufficiently easy to write a JavaScript sandbox in an HTML page. The basic idea is to show the result of evaluating the value of a textarea in a form - eval(Frm0.TA1.value);.
does that, etc. etc.; you will need a local copy. is simpler; please use a local copy. NOT PI-TESTED.
There must be at least one JavaScript engine in an Internet-supporting Pi distribution; with that, it might not be too difficult to make a browser support HTA coding and/or an equivalent to Windows Script Host, both of which support general un-sandboxed coding.
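The eval-a-textarea idea reduces to a small function. In an HTML page the source string would be `Frm0.TA1.value` and the result would be written back into another form field; here it is stripped to a plain function (`runSnippet` is an invented name) so the core eval step is visible.

```javascript
// Evaluate a user-supplied expression and return a printable result.
function runSnippet(source) {
  try {
    return String(eval(source)); // evaluate the expression
  } catch (e) {
    return 'Error: ' + e.message; // report syntax and runtime errors
  }
}
```

In a page this would typically be wired to a button's onclick handler, with the returned string assigned to an output field's value. Note that eval alone is not a real sandbox: the snippet runs with full access to the page's environment.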
--

 Merlyn Web Site <                       > - FAQish topics, acronyms, & links.
Reply to
Dr J R Stockton
Execute is the lightest model of self-modifying code, which is at least a bad habit, if not a clear no-no.
It was no accident that execute was not included in the x86. The main need for it was variable-port I/O, and that was addressed by routing I/O through DX.
--

-TV
Reply to
Tauno Voipio
re:
formatting link
Is it a lost cause?
closer to the sigma was series/1. the series/1 peachtree processor was significantly more capable than what was selected for the 37x5 communication controllers. early 70s, the science center ... some past posts
formatting link

strongly lobbied the communication group to choose the peachtree processor for the 3705 (rather than the processor they were designing) ... and lost.
the folklore is that the official series/1 RPS system had been done by some former kingston os/360 MFT people who moved to boca and tried to reimplement os/360 MFT ... which wasn't a good match for the series/1. the folklore is that the alternative EDX system was originally done by some summer co-op students at the IBM san jose research physics lab.
Late 70s, early 80s, one of the baby bells did a 37x5/NCP emulator on series/1 .... it encapsulated SNA RUs in a real network and emulated SNA sessions as "cross domain" (i.e. VTAM emulation with no-single-point-of-failure session management out in the distributed series/1s). Mid-80s, I got sucked into turning it into a type-1 product. We tried to put in brickwalls anticipating the corporate dirty tricks that the communication group was well known for. What the communication group then did can only be described as truth is stranger than fiction.
Old post with part of presentation that I gave in Raleigh at SNA architecture review board meeting
formatting link

part of presentation by one of the original implementors at '86 common user group meeting
formatting link

continuing the lost cause theme ... periodically repeated: in the late 80s, a senior disk engineer got a talk scheduled at the world-wide, internal-only, annual communication group conference, supposedly on 3174 performance ... but opened the talk with the statement that the communication group was going to be responsible for the demise of the disk division. The issue was that the communication group had corporate strategic "ownership" of everything that crossed datacenter walls, and they were strongly fighting off client/server and distributed computing, trying to preserve their (emulated) dumb terminal paradigm and install base. The disk division was seeing data fleeing the datacenter to more distributed computing platforms, with a drop in disk sales. The disk division had come up with a number of solutions to address the problem, but they were constantly vetoed by the communication group. some past posts
formatting link

a few short years later, the company went into the red and was being reorged into the 13 "baby blues" ... in preparation for breaking up the company.
--
virtualization experience starting Jan1968, online at home since Mar1970
Reply to
Anne & Lynn Wheeler
