2066-pin processors

Sure. However, they dissipate at least a watt apiece, and are several centimetres in size. Now integrate a billion of them onto a chip, and see what happens to your power dissipation and maximum clock speed.

Then you have to make sure that they're limiting hard, or else you won't be able to keep all the downstream gates functioning, because they're strongly nonlinear. That will cause pattern-dependent effects due to poorly controlled upper-state depletion.

Terabit communication on a single fibre is pretty easy--you need about 25 40-Gb/s channels, which will fit in one fibre pretty easily. (The record is several hundred Tb/s, depending on who you believe.) The issues are power, latency, reliability, temperature coefficient, and wavelength compatibility.

Datacom links are mostly 850 nm multimode, whereas silicon photonics are 1.5 um single mode, polarization maintaining. Getting the gozinta to connect to the gozouta is (AFAIK) still an unsolved problem.

Cheers

Phil Hobbs

--
Dr Philip C D Hobbs 
Principal Consultant 
ElectroOptical Innovations LLC / Hobbs ElectroOptics 
Optics, Electro-optics, Photonics, Analog Electronics 
Briarcliff Manor NY 10510 

http://electrooptical.net 
http://hobbs-eo.com
Reply to
Phil Hobbs

Long distance datacom is in the 1.3/1.5 um band in which the optical erbium amplifiers also work.

The problem with multimode fibers is modal dispersion, which limits the bandwidth-distance (MHz·km) product. Running 40 Gbit/s per color will limit the distance to a few meters.
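As a sanity check, the reach falls straight out of the bandwidth-distance product. The 0.7-x-bitrate bandwidth rule and the fibre figure below are illustrative assumptions on my part, not datasheet values:

/* Back-of-envelope modal-dispersion reach limit: the fibre's
 * bandwidth-distance product divided by the analogue bandwidth the
 * signal needs (taken here as roughly 0.7 x bitrate for NRZ). */
#include <stdio.h>

int main(void)
{
    double bd_MHz_km = 160.0;    /* e.g. legacy 62.5 um fibre at 850 nm */
    double rate_Mbps = 40000.0;  /* 40 Gbit/s per colour */
    double bw_MHz    = 0.7 * rate_Mbps;

    printf("reach ~ %.1f m\n", 1000.0 * bd_MHz_km / bw_MHz);  /* ~5.7 m */
    return 0;
}

Better modern fibre (OM3/OM4) buys more headroom, but the scaling is the same.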

Reply to
upsidedown

Your own TCP/IP stack, sounds dangerous :-).

If you only have ARP and UDP, that should be easier to get right, but if you also include TCP, that is another kettle of worms.
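To make that concrete, here's roughly why UDP is the tractable part: the header is a fixed eight bytes and there is no connection state to track. A minimal parse sketch in C (field layout per RFC 768; error handling trimmed for brevity):

/* Minimal UDP header parse.  Returns payload length, or -1 if the
 * datagram is malformed. */
#include <stdint.h>
#include <stddef.h>

struct udp_header {
    uint16_t src_port;   /* big-endian on the wire */
    uint16_t dst_port;
    uint16_t length;     /* header + payload, in bytes */
    uint16_t checksum;   /* optional over IPv4 (0 = unused) */
};

static uint16_t be16(const uint8_t *p) { return (uint16_t)(p[0] << 8 | p[1]); }

int udp_parse(const uint8_t *pkt, size_t len, struct udp_header *h)
{
    if (len < 8)
        return -1;
    h->src_port = be16(pkt + 0);
    h->dst_port = be16(pkt + 2);
    h->length   = be16(pkt + 4);
    h->checksum = be16(pkt + 6);
    if (h->length < 8 || h->length > len)
        return -1;               /* header lies about its own size: drop */
    return h->length - 8;
}

TCP, by contrast, drags in sequence numbers, retransmission timers, windowing, and a connection state machine--that's where the worms live.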

Reply to
upsidedown

There's no such thing as 'long distance datacom'. Datacom means within racks or between racks in a data centre. Maximum distances of a hundred metres or so are typical. AFAIK there are no multimode fibre amplifiers in common use. People sometimes talk about 'multimode EDFAs', but the ones I've seen use large-core erbium-doped fibres with carefully-prepared single-mode excitation followed by a taper to get their output into an ordinary SMF.

You don't use DWDM on multimode, because there's no good way to do add/drop multiplexing on MMF with 50-GHz resolution.

Cheers

Phil Hobbs

--
Dr Philip C D Hobbs 
Principal Consultant 
ElectroOptical Innovations LLC / Hobbs ElectroOptics 
Optics, Electro-optics, Photonics, Analog Electronics 
Briarcliff Manor NY 10510 

http://electrooptical.net 
http://hobbs-eo.com
Reply to
Phil Hobbs

There is no point in using DWDM over such short distances. Where there are existing multi-mode fibers, CWDM with 2-8 colors is handy for adding independent services--for instance, running normal data transfer on one color and safety-critical communication on another. That way there are no common electrical components in the data paths. Even bidirectional traffic using different colors could be considered CWDM.

Reply to
upsidedown

And corrections. So now you have a more accurate picture.

The level of purely technical "tricks" appears to me to be a good deal lower than it used to be. Technical computer security measures work - buffer overflow attacks, worms, and many other similar attacks are rarer and harder to achieve. When you have things like address space randomisation and non-executable stacks, you pretty much eliminate large classes of attacks. And while the attacks arrive at Gbps, so do patches (as bugs have not been eliminated!). The technical aspects of security have become better - not perfect, but better.

However, the weakest link here is still the weakest link - people. "Social engineering" is how you break security effectively and efficiently, as it always has been. It is pointless to try to find weaknesses in an OS, cpu or programming language when you can get the same effect by sending someone an email with a link to the program you want them to run, disguised as a video of cats falling off a table.

(I fully agree with you that attacks are not fun, of course.)

Why? To find out that people still don't write perfect software, or to find out that they fix security problems when they find them?

You said earlier something to the effect that simplicity was key to security, and I disagreed - I still do, in that there is no simple method to keep things secure and useful. But perhaps I should also say that overly complex software (something Adobe is famous for) is likely to contain more bugs, and therefore more security weaknesses. The "unix philosophy" of "one program, one task" is undoubtedly a reason why *nix has always had a better security record than Windows.

Agreed. It's just paying to make your machine run slowly.

You do realise this is not a personal conversation with just you reading?

Reply to
David Brown

You usually don't want to control it - you merely want the tools to get it right!

Yes, but you don't often need that. It is /occasionally/ useful to have your own named sections, but the great majority of code and data goes in the standard sections picked by the toolchain.

Yes. (The reference to "program loading" obviously only applies for systems where programs are loaded - like hosted systems, or where code is copied from slow storage to ram for execution.)

Even before that, self-modifying code was needed for the first implementation of subroutines. It was done roughly like this:

    // Call subroutine
    store "jump after_call" into sub_doSomething_return
    jump sub_doSomething
after_call:
    // rest of main code...

sub_doSomething_return:
    .space 1                      // Reserve space for one instruction
sub_doSomething:
    // The subroutine code
    jump sub_doSomething_return   // executes the stored "jump after_call"

There were a number of reasons for self-modifying code to be an efficient solution in the old days, but as well as being a security risk it is also painful to test and debug, and very inefficient on more modern processors.

FORTH also lets you redefine numbers as new words, which leads to "fun".

I know very little of Fortran, so I will take your word for it.

Post-it labels for passwords are often a very good idea. It lets you use stronger passwords without having to remember them. In most circumstances, the kind of person that might break into your house or office and thus have physical access to the post-it, is not going to be the kind of person that would bother with the password - they want to steal your computer, not impersonate you on Usenet. If you want a little extra security (perhaps appropriate for the office), encode your post-it passwords in a simple fashion. An easy way is to add an extra "Q" somewhere in the password - /you/ know that you must omit it when typing in the password, but other people do not.

Yes.

Reply to
David Brown

Of course there is a good reason - the concept does not make sense. There is no such thing as a "secure compiler" or "secure OS" - security is a /process/, not an absolute.

OS's - whether embedded or not - can provide features that help for security control, and can have more or fewer bugs (bugs can lead to security holes, and security holes are arguably just bugs).

And compilers (or toolchains) can give more features or tools towards helping writing correct, bug-free code.

But that is nothing more than saying better tools are a good idea to help write more secure code.

If your cpu supports marking sections of ram as read-only (i.e., it has an MMU or MPU), then you would do well to use it.
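For example, on an ARMv7-M part (Cortex-M3/M4) that takes only a handful of MPU register writes. A minimal sketch--the register addresses are per the ARMv7-M architecture manual, but the single-region setup, base, and size here are illustrative only:

/* Mark a 32 KB, 32 KB-aligned RAM region read-only at all privilege levels. */
#include <stdint.h>

#define MPU_CTRL (*(volatile uint32_t *)0xE000ED94)
#define MPU_RNR  (*(volatile uint32_t *)0xE000ED98)
#define MPU_RBAR (*(volatile uint32_t *)0xE000ED9C)
#define MPU_RASR (*(volatile uint32_t *)0xE000EDA0)

void protect_region(uint32_t base)
{
    MPU_RNR  = 0;               /* select region 0 */
    MPU_RBAR = base;            /* region base address (RNR picks the region) */
    MPU_RASR = (0x6u << 24)     /* AP = 0b110: read-only, privileged and user */
             | (14u << 1)       /* SIZE = 14 -> 2^(14+1) = 32 KB */
             | 1u;              /* region enable */
    MPU_CTRL = (1u << 2) | 1u;  /* PRIVDEFENA | ENABLE */
    __asm volatile ("dsb\n\tisb");  /* make the new map take effect now */
}

Even this crude setup turns a silent code-overwrite bug into an immediate, debuggable fault.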

Many people place time-to-market, low development costs, etc., above careful quality control. It is a social-economic problem, not a technical one. (Which is a shame - technical problems are usually easier to solve.)

(Will you /please/ learn to snip? It would make threads so much easier. The "you" here is plural, not personal, and applies to most regulars in this group.)

The JPEG image format could not directly contain executable code, but there was a bug in the image decoding routines in a common library that could trigger execution of data in the file.
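The bug pattern is worth spelling out, because it keeps recurring. Schematically (this is the shape of the flaw, not the actual library code): a length field read from the file is trusted when copying into a fixed-size buffer:

#include <string.h>
#include <stdint.h>
#include <stddef.h>

void parse_field_bad(const uint8_t *file_data)
{
    char field[64];
    uint16_t len = (uint16_t)(file_data[0] << 8 | file_data[1]);
    memcpy(field, file_data + 2, len);   /* BUG: len is attacker-chosen */
    (void)field;
}

void parse_field_good(const uint8_t *file_data, size_t avail)
{
    char field[64];
    if (avail < 2)
        return;
    uint16_t len = (uint16_t)(file_data[0] << 8 | file_data[1]);
    if (len > sizeof field || (size_t)len + 2 > avail)
        return;                          /* reject oversized field */
    memcpy(field, file_data + 2, len);
    (void)field;
}

A crafted file then overwrites whatever sits past the buffer, which is how "viewing an image" becomes "running code".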

However, MS's font files are actually a type of DLL, and therefore executable - the font files can /intentionally/ contain executable code (malware or otherwise).

There is no doubt that MS have made some extraordinarily bad decisions from a security (and reliability) viewpoint, and many of them still haunt us today. But there is also no doubt that they are better than they used to be.

Reply to
David Brown

No, you did not. If you think you did, you don't know what "secure" means.

No, it is not secure.

I can quite believe that you (or your company) have written software that is very secure, and for which there has never been a known security breach.

But security is never absolute.

The nearest you get is a system whose limited features and resources mean there simply isn't much that can be abused. (And that is a good reason for keeping things as simple as practical.)

It is also possible to have something that is as secure as necessary - if it is easier and cheaper for the bad guys to use other methods (attack your competitors' products, bribe the installation guy, break into your office and steal your source code), then it is good enough.

You would not trust a locksmith who tries to sell you a door lock that is "unpickable". You would not trust a bank that claims to be "unrobbable". You would not trust an encryption system that claims to be "undecipherable".

You'd be happy to accept claims that it is easier to knock down your door with a bulldozer than to pick the lock, or easier to blackmail the company boss than to break the encryption code.

So why would you claim to have an OS that is "secure"? I'd be happy with a claim that it is more secure than most other systems, or that no exploits are known. I'd be happy with a claim that it is too small and limited to be able to run external code, or that the external inputs are too restricted for any practical or useful attacks.

I think that is a wild claim without backing - especially as "insecure" is as meaningless as "secure".

If you want to claim that a lot of people release products without taking security seriously enough, and without ensuring that they are as secure as they should be, then I'd agree. But that would be a different claim.

Reply to
David Brown

We were talking about the uselessness of optical logic, which compared with modern CMOS is physically huge and therefore slow (despite press-release speed claims) as well as massively power hungry.

Cheers

Phil Hobbs

--
Dr Philip C D Hobbs 
Principal Consultant 
ElectroOptical Innovations LLC / Hobbs ElectroOptics 
Optics, Electro-optics, Photonics, Analog Electronics 
Briarcliff Manor NY 10510 

http://electrooptical.net 
http://hobbs-eo.com
Reply to
Phil Hobbs

Disagree. It's possible to design a system that's connected to the internet and absolutely can't be hacked. We do that all the time.

A bad guy might execute command-line functions to do the things our instrument does, like read a temperature, but they can't alter the code or crash the system.

We had one product recently that could be crashed by massively overflowing a serial-input buffer, but all it does is crash. It's fixed.
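For what it's worth, the standard cure is a receive ring buffer whose interrupt handler drops bytes when full rather than writing past the end. A sketch (the UART register access and any platform locking are omitted):

#include <stdint.h>

#define RX_SIZE 256u                 /* power of two for cheap wraparound */

static volatile uint8_t  rx_buf[RX_SIZE];
static volatile uint32_t rx_head, rx_tail;

void uart_rx_isr(uint8_t byte)       /* called with each received byte */
{
    uint32_t next = (rx_head + 1u) & (RX_SIZE - 1u);
    if (next == rx_tail)
        return;                      /* buffer full: discard, never overflow */
    rx_buf[rx_head] = byte;
    rx_head = next;
}

int uart_getc(void)                  /* returns -1 if nothing pending */
{
    if (rx_tail == rx_head)
        return -1;
    uint8_t b = rx_buf[rx_tail];
    rx_tail = (rx_tail + 1u) & (RX_SIZE - 1u);
    return b;
}

Flooding the port then costs you data, not control of the machine.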

If both terms are meaningless, we may as well run Windows XP.

--

John Larkin   Highland Technology, Inc   trk 

jlarkin att highlandtechnology dott com 
http://www.highlandtechnology.com
Reply to
John Larkin

That's the current philosophy, all software has bugs. I agree, which is why we need hardware to provide system security. That could be provably absolute. "Software security" probably will never be absolute.

Microsoft corporate policy: when in doubt, execute it.

--

John Larkin   Highland Technology, Inc   trk 

jlarkin att highlandtechnology dott com 
http://www.highlandtechnology.com
Reply to
John Larkin

> We had one product recently that could be crashed by massively overflowing a serial-input buffer ...

"Secure" means "always meets its guarantees". Those guarantees are usually taken to include "Cannot be crashed" and that separate tasks or functions cannot interfere with each other or observe each other's state, except as designed.

"There is no impenetrable fortress, only a fortress inadequately assailed" - Les Liasons Dangereux

If you don't understand this, then you don't understand security.

Clifford Heath.

Reply to
Clifford Heath

To you. To me, 'secure' has a range of meanings depending on the circs. If somebody's test setup is being DDOSed from outside the organization, whether or not the instruments crash isn't the highest priority issue at that moment. ;)

What's always relevant is whether an opponent can install an APT in the instrument firmware, steal the binary code, or damage the hardware. Those things can be prevented pretty easily by forcing all firmware updates to be out-of-band, e.g. a USB key with a cryptographically signed binary.
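The receive-side check is simple enough to sketch. Here ed25519_verify() stands in for whatever real signature library you use (it's a placeholder, not a specific API), with the public key baked into ROM at manufacture:

#include <stdint.h>
#include <stddef.h>
#include <stdbool.h>

extern const uint8_t vendor_pubkey[32];   /* burned in at manufacture */

/* Hypothetical verifier: true iff sig matches image under vendor_pubkey. */
bool ed25519_verify(const uint8_t *sig, const uint8_t *msg, size_t len,
                    const uint8_t *pubkey);

bool try_firmware_update(const uint8_t *image, size_t image_len,
                         const uint8_t sig[64])
{
    if (!ed25519_verify(sig, image, image_len, vendor_pubkey))
        return false;       /* unsigned or tampered image: refuse to flash */
    /* flash_write(image, image_len);  -- platform-specific, omitted */
    return true;
}

The private key never goes anywhere near the instrument, so even a fully compromised host PC can't forge an update.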

Cheers

Phil Hobbs

--
Dr Philip C D Hobbs 
Principal Consultant 
ElectroOptical Innovations LLC / Hobbs ElectroOptics 
Optics, Electro-optics, Photonics, Analog Electronics 
Briarcliff Manor NY 10510 

http://electrooptical.net 
http://hobbs-eo.com
Reply to
Phil Hobbs

Thanks for the uninvited description of which guarantees you care about.

My definition still stands. Yours just applies it.

Clifford Heath.

Reply to
Clifford Heath

If your definition were really that general, I expect you'd have been more genial about applying it. But that's just me.

Cheers

Phil Hobbs

--
Dr Philip C D Hobbs 
Principal Consultant 
ElectroOptical Innovations LLC / Hobbs ElectroOptics 
Optics, Electro-optics, Photonics, Analog Electronics 
Briarcliff Manor NY 10510 

http://electrooptical.net 
http://hobbs-eo.com
Reply to
Phil Hobbs

:)

Well, it was JL who claimed that "secure" is meaningless. So I just defined it for him... elliptically, in terms of guarantees, so we can talk about those guarantees, instead of the indirectly-defined idea of "secure".

Meaningful definitions are necessarily *always* indirect like that. We cannot even *identify* the objects named by a term without talking about the characteristics and relationships that distinguish those objects from similar ones. All meaning is relational like this.

When I worked in security, I used these categories:

(R)ights - who can initiate what actions?
(A)uthentication - how do we know who is asking?
(P)rivacy - Keep information from anyone not authorised
(I)ntegrity - Ensure that system data and operation is uncorrupted
(D)etection - What is needed to detect and report a violation

There are other sets of these backronyms, but this worked for me.

Clifford Heath.

Reply to
Clifford Heath

Clifford Heath wrote:

Organic random-number generators.

Reply to
DecadentLinuxUserNumeroUno

Time to market and easy updates over the internet have created the practice of sending premature products to the market with the attitude that we will fix them later. Having such an update feature also adds the risk of all kinds of attacks.

If products were better tested, there would be less need for updates, so such update features could be disabled or removed completely.

It is interesting that some companies keep constantly fixing their products; one would expect that by the time support ends after a few years the product would be bug-free, which is not true in practice. Take WinXP, for instance.

Reply to
upsidedown

Never heard of parallel processing?

Reply to
Robert Baer
