Another reason not to like DAB

There's nothing wrong with component reuse. The problem lies with components that weren't designed with security, reliability, robustness, accuracy, etc. in mind. Would you use a semiconductor in a particular set of environmental conditions if you didn't KNOW that the device was designed for use *in* those conditions? At the very least, you'd look at the datasheet and see what "typ" performance is specified for your conditions. If you're a more robust developer, you'd also want to see worst case numbers (barring that, something that quantifies the distribution of values *around* "typ").

Show me *any* of those parameters for *any* piece of software! :>

Reliability and security are different animals entirely. An application can be VERY reliable -- and VERY insecure.

You can take measures during development to increase reliability, accuracy/correctness and/or security. Even things as simple as "best practices" can make a remarkable difference in each of these aspects of a design/implementation.

You use similar "best practices" when designing hardware. E.g., derating components, shake 'n' bake, etc.

But, too many software development and execution environments do not address these. Or, try to address them as afterthoughts. As if you can buy a big lock to put on the front door of your house to keep it "secure" -- while ignoring the fact that a large percentage of the exterior walls are *glass*!

Coding on bare metal will probably result in a more insecure, less reliable and more costly (to develop) implementation. The whole point of an operating system is to give you an enhanced execution environment that increases your chances of producing secure, reliable and cost-effective implementations.

In my current project, I put lots of "mechanism" into the OS and "support services". So, applications don't need to reimplement things that might be commonly used (e.g., BigRational numerics), as they stand a greater chance of implementing those things incorrectly -- or, cutting corners as an expedient (e.g., "I only need 72-bit Rationals") and later erroneously reapplying those "incomplete implementations".

[Great example of this: see how many buggy floating point libraries you can encounter over the years as folks "rolled their own" before these sorts of things were readily available/standardized. Heck, it's just simple arithmetic operations -- how could they possibly get them *wrong*?? :> ]
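
[If that sounds far-fetched, here's a trivial sketch -- purely illustrative C, names made up, nothing from my actual implementation -- of how a roll-your-own Rational goes quietly wrong: the "obvious" addition overflows long before a properly reduced one would.]

/* Illustrative only: how a "rolled your own" rational type goes wrong.
 * Naive a/b + c/d = (a*d + c*b)/(b*d) overflows long before the
 * mathematically reduced result would. */
#include <stdint.h>
#include <stdio.h>

typedef struct { int64_t num, den; } rat_t;

static int64_t gcd64(int64_t a, int64_t b)
{
    while (b != 0) { int64_t t = a % b; a = b; b = t; }
    return a < 0 ? -a : a;
}

/* Naive version: the products silently overflow int64_t (undefined
 * behavior in C -- the result is garbage at best). */
static rat_t add_naive(rat_t x, rat_t y)
{
    rat_t r = { x.num * y.den + y.num * x.den, x.den * y.den };
    return r;
}

/* Better: reduce by the gcd of the denominators first.  (Still not a
 * complete implementation -- no overflow *detection*, for instance.) */
static rat_t add_reduced(rat_t x, rat_t y)
{
    int64_t g = gcd64(x.den, y.den);
    rat_t r = { x.num * (y.den / g) + y.num * (x.den / g),
                (x.den / g) * y.den };
    int64_t h = gcd64(r.num, r.den);
    r.num /= h; r.den /= h;
    return r;
}

int main(void)
{
    rat_t a = { 1, 3000000000LL }, b = { 1, 6000000000LL };
    rat_t n = add_naive(a, b), s = add_reduced(a, b);
    printf("naive:   %lld/%lld\n", (long long)n.num, (long long)n.den);
    printf("reduced: %lld/%lld\n", (long long)s.num, (long long)s.den);
    return 0;
}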

I provide individual protected execution spaces for each "job" so *you* can't "accidentally" alter or corrupt *my* data -- or, my execution. *Your* bugs affect *you* and no one else!

I provide authentication and authorization mechanisms for fine-grained resource control. E.g., I can let *you* "mute" the audio but never "increase" or "decrease" it. I can let you *set* a value but prevent you from *seeing* its current setting. At the same time, let someone else *see* it but not *set* it.
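
[A sketch of the flavor of thing I mean -- names and API entirely made up for illustration, not my actual interfaces: each handle carries only the rights it was issued with, and every operation checks them.]

#include <stdint.h>
#include <stdbool.h>
#include <stdio.h>

enum {
    RIGHT_MUTE = 1u << 0,   /* may mute/unmute           */
    RIGHT_SET  = 1u << 1,   /* may change the level      */
    RIGHT_GET  = 1u << 2,   /* may observe current level */
};

typedef struct {
    uint32_t rights;        /* granted when the handle is issued */
} handle_t;

static int  volume = 50;
static bool muted  = false;

static int audio_mute(handle_t h, bool on)
{
    if (!(h.rights & RIGHT_MUTE)) return -1;   /* "permission denied" */
    muted = on;
    return 0;
}

static int audio_set(handle_t h, int level)
{
    if (!(h.rights & RIGHT_SET)) return -1;
    volume = level;
    return 0;
}

static int audio_get(handle_t h, int *level)
{
    if (!(h.rights & RIGHT_GET)) return -1;
    *level = volume;
    return 0;
}

int main(void)
{
    handle_t you     = { RIGHT_MUTE };   /* may mute, nothing else */
    handle_t setter  = { RIGHT_SET  };   /* may set, can't see     */
    handle_t watcher = { RIGHT_GET  };   /* may see, can't set     */
    int v;

    printf("you mute:     %d\n", audio_mute(you, true));   /*  0 */
    printf("you set:      %d\n", audio_set(you, 75));      /* -1 */
    printf("setter sets:  %d\n", audio_set(setter, 75));   /*  0 */
    printf("setter gets:  %d\n", audio_get(setter, &v));   /* -1 */
    printf("watcher gets: %d (muted=%d)\n",
           audio_get(watcher, &v), (int)muted);            /*  0 */
    return 0;
}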

While none of these things GUARANTEE that the code will be more accurate/correct, reliable or secure, they provide a convenient framework for *cooperative* developers to do the sorts of things that they need to do and reap the benefits of the underlying mechanisms to make their lives easier *and* their code more predictable/inspectable.

[E.g., you could choose to share a value through some ad hoc mechanism. But, then *you* have to build that mechanism. In doing so, it makes it harder for a code audit to see where you may be introducing a vulnerability. OTOH, if it is EASIER for you to just use some preexisting mechanism to share that value, then your sharing is more readily identified in the codebase. And, the criteria that you have chosen to apply to that sharing are more readily identified: "Why are you sharing this with EVERYONE? Don't you just want to share it with Foo?" (unnecessary sharing leads to exploits) "And, why, exactly, does Foo need to see this? Can't Foo do its job *without* it??"]

Silly boy. Why would you think it hasn't happened? People have hacked pacemakers, insulin pumps, cars, gaming device$, pay phones, banks, electronics/computer companies, etc. A little time with your favorite search engine should turn up enough documentation to get you thinking...

Note that a "hack" need not mean something was "commandeered". Rather, it represents an unexpected and undesired interference with the intended, normal operation of the device/system.

In the case of the hacked Jeep, you needn't be able to steer, accelerate, brake, etc. in order to have hacked it. Simply interfering with its ability to operate as intended constitutes a hack. E.g., jabbering on the interconnect network -- even if everything you are "saying" is total gibberish -- could easily prevent the system from operating as required. Whether the system handles that assault gracefully or catastrophically is up to the implementors.

When I designed the comms for my automation system, it would have been incredibly naive to think that someone wouldn't elect to "hack" it -- to gain entry to the residence, to "spy" on what I was doing within, to track my TV/radio habits (valuable to a commercial entity -- especially if they could be gleaned without criminal activity!), etc.

And, it's also possible that someone might want to simply *disrupt* the system's proper functioning. Either for some particular exploit or simply to deny those services to the occupants. Imagine the opportunities when accessing that system is possible remotely! (which greatly increases its value to the *owner* -- at the expense of downside hacking risk!)

As long as there is only *one* such system, the risk is effectively non-existent -- a special case of security by obscurity. OTOH, if I expect others to build upon my efforts -- opening up the design in intricate detail to all sorts of potential hacking experiments -- then failing to address those issues UP FRONT means they will NEVER be adequately addressed.

Reply to
Don Y

Who are these mythical people that expect software to be perfect? It would be nice if it were just competent.

NT

Reply to
tabbypurr

One way is to have separate sensors for the safety-critical systems and the non-safety-critical passenger entertainment systems, with separate GPS receivers, so there is no need to feed data -- even unidirectionally -- from the secure system to the non-secure one.

Reply to
upsidedown

You've decided that "security" means "can't interfere with the operation of the aircraft". If a system implementation GUARANTEED that, would you consider it "secure"?

What if the system allowed "others" to make note of the credit card numbers and PINs of purchases, phone calls, etc. made in-flight through some part of that system? To the victims of such an exploit, the system would not be considered "secure" -- regardless of whether or not the flight was compromised in any way.

What if the system allowed folks (in the air, on the ground, etc.) to take note of what beverages I ordered? Or, what movies I opted to watch. Or, eavesdrop on my conversation with my land-based accountant over the in-flight telephone system. Or, snoop my web traffic while browsing in flight. Do any of these count as "security issues"?

"Security" means lots of things to different people. The "value" of information and actions varies based on the person/agency involved.

SWMBO has had a new credit card "unexpectedly" issued to her probably 5 times in the past few years ("Please use this card as a replacement for your card number XXX XXX XXX. Your balance will automatically be transferred. The old card will be deactivated in 14 days -- or, as soon as you activate this replacement card").

To *us*, this *appears* secure -- we've not incurred any "unauthorized charges". But, that doesn't mean that the "system" in place for that/those accounts is truly "secure"! The simple fact that the CC issuer opted to take on the expense of issuing new cards, mailing them and creating new accounts suggests that some INsecurity in their system made this expense preferable to the risk carried by the (obviously compromised) account(s).

Will we ever know what *other* personal information has leaked from these (suspected) breaches? Or, does the CC issuer absolve itself of any other responsibility simply because they've covered *their* potential losses??

[I.e., credit card transactions could easily be spoof-proofed. But, there is no financial incentive to do so. The actual losses are apparently small enough that the issuers can just write them off as "cost of doing business". Of course, they don't bear the cost of any incidental damages that the card holders may ultimately incur!]
Reply to
Don Y

On Fri, 24 Jul 2015 16:11:17 +1000, Sylvia Else Gave us:

It is a "delivered" set of values, and is in no way connected to the aircraft's navigation or control system.

Hell, there is even a huge delay in updates to what "a passenger" "sees". There is no "communications link". There is a one way "data handoff", and that's it. Period.

Reply to
DecadentLinuxUserNumeroUno

On Fri, 24 Jul 2015 00:57:28 -0700, Don Y Gave us:

The Internet access systems on aircraft use a satellite link. There is no "monitoring" done by "ground personnel".

The satellite service provider has ground based "satellite baseband gateways", and they are the most secure gateways in the entire world.

Reply to
DecadentLinuxUserNumeroUno

I don't doubt that the engineers intend it to be one-way, but what's the actual mechanism?

For example, and leaving aside the question of the exact protocol used on an aircraft, a program may construct a TCP/IP connection with every intention of using it only to transmit data, not to receive it. Should be safe enough.

But TCP/IP is a reliable protocol. It uses acknowledgement packets in the opposite direction. So what happens if an attacker contrives to send deliberately invalid acknowledgement packets? Could the transmission side then be induced to overwrite the end of its buffer? Hopefully, no TCP/IP implementation is quite that fragile, but it illustrates the point.
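
For what it's worth, a conforming sender is supposed to simply discard an ACK that doesn't fall just beyond data it has actually sent (RFC 793's "SND.UNA < SEG.ACK <= SND.NXT" test), so the retransmission buffer pointer can never be pushed past the end. A sketch of that check in C -- not lifted from any particular stack:

/* Sanity check a TCP sender applies before an incoming ACK is allowed
 * to advance its retransmission buffer.  An ACK outside (SND.UNA,
 * SND.NXT] is ignored, so a forged ACK can't run the buffer pointer
 * past data that was actually sent.  Sequence arithmetic is mod 2^32. */
#include <stdint.h>
#include <stdbool.h>

/* true if a < b <= c in 32-bit sequence-number space */
static bool seq_in_window(uint32_t a, uint32_t b, uint32_t c)
{
    return (uint32_t)(b - a - 1) < (uint32_t)(c - a);
}

bool ack_acceptable(uint32_t snd_una, uint32_t snd_nxt, uint32_t seg_ack)
{
    return seq_in_window(snd_una, seg_ack, snd_nxt);
}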

One way of addressing such issues is to include an element in the communication path that is indisputably one-way. For example, an optical isolator. Then if anyone has made a mistake, and is reading data when they shouldn't, it won't matter. In the TCP/IP example, the system won't work either, and the engineer would wise up quickly (perhaps retaining their job, perhaps not, depending on how much it costs to fix it).
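
And on the software side of such a diode, the sender has to assume it will never hear anything back -- so each frame carries its own sequence number and checksum, and the receiver silently discards anything damaged. A sketch (the device, framing and CRC choice are invented for illustration):

/* Send-only framing over a one-way ("data diode") link: no ACKs are
 * possible, so every frame is self-describing and self-checking. */
#include <stdint.h>
#include <string.h>
#include <unistd.h>

static uint16_t crc16_ccitt(const uint8_t *p, size_t n)
{
    uint16_t crc = 0xFFFF;
    while (n--) {
        crc ^= (uint16_t)*p++ << 8;
        for (int i = 0; i < 8; i++)
            crc = (crc & 0x8000) ? (uint16_t)((crc << 1) ^ 0x1021)
                                 : (uint16_t)(crc << 1);
    }
    return crc;
}

int send_frame(int fd, uint32_t seq, const uint8_t *data, uint8_t len)
{
    uint8_t frame[4 + 1 + 255 + 2];       /* seq + len + data + crc */
    memcpy(frame, &seq, 4);
    frame[4] = len;
    memcpy(frame + 5, data, len);
    uint16_t crc = crc16_ccitt(frame, 5u + len);
    memcpy(frame + 5 + len, &crc, 2);
    /* write() only -- by construction there is nothing to read() */
    return write(fd, frame, 5u + len + 2) < 0 ? -1 : 0;
}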

Maybe this is what has been done, but I'd like to see a clear statement to that effect from the manufacturers, rather than having to trust them to have got it right.

Sylvia.

Reply to
Sylvia Else

As has just been said, the communication is usually ARINC 429. This is not a classical bidirectional bus; it is single-direction. The position data comes from the navigation system, but on a separate physical channel and only readable. Even equipment that is in the cockpit for the pilot, like EFBs (electronic flight bags), can only read from the aircraft if they are not class 3 classified. But then they are closed systems with software certified to at least DO-178 Level C.
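
To give a feel for how simple the read-only side is: the receiver just validates and unpacks 32-bit words as they arrive and routes on the label. A rough sketch of that unpacking -- not from any real avionics code, and certified code is considerably more rigorous than this:

/* Pulling apart a received ARINC-429 word on the read-only side.
 * Conventional layout: label bits 1-8, SDI 9-10, data 11-29,
 * SSM 30-31, odd parity 32. */
#include <stdint.h>
#include <stdbool.h>

typedef struct {
    uint8_t  label;   /* octal label identifying the parameter */
    uint8_t  sdi;     /* source/destination identifier         */
    uint32_t data;    /* 19-bit data field (BNR/BCD per label)  */
    uint8_t  ssm;     /* sign/status matrix                     */
} a429_word_t;

static bool odd_parity_ok(uint32_t w)
{
    /* whole word, parity bit included, must have an odd count of ones */
    w ^= w >> 16; w ^= w >> 8; w ^= w >> 4; w ^= w >> 2; w ^= w >> 1;
    return (w & 1u) != 0;
}

bool a429_decode(uint32_t raw, a429_word_t *out)
{
    if (!odd_parity_ok(raw))
        return false;                            /* drop corrupted words */
    out->label = (uint8_t)(raw & 0xFF);          /* bits 1-8   */
    out->sdi   = (uint8_t)((raw >> 8)  & 0x3);   /* bits 9-10  */
    out->data  = (raw >> 10) & 0x7FFFF;          /* bits 11-29 */
    out->ssm   = (uint8_t)((raw >> 29) & 0x3);   /* bits 30-31 */
    return true;
}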

I have designed such systems and I know the rules set and enforced by the FAA and EASA. A commercial aircraft that does not comply with these rules will be grounded. It will only be a craft, no air.

--
Reinhardt
Reply to
Reinhardt Behm

On Fri, 24 Jul 2015 19:38:43 +1000, Sylvia Else Gave us:

You don't get it. Aircraft avionic control hardware is NOT using TCP/IP.

Start over.

Reply to
DecadentLinuxUserNumeroUno

Do you have some issue with "and leaving aside the question of the exact protocol used on an aircraft"?

It's an example. Any protocol that uses acknowledgements has the same issue.

Sylvia.

Reply to
Sylvia Else

On Fri, 24 Jul 2015 21:49:05 +1000, Sylvia Else Gave us:

Study double redundancy and things like FEC.

The Pluto probe sends data to us, and does not wait for packet receipt acknowledgements. It sends it twice, and both times chock full of FEC data.
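
The principle, for anyone who hasn't played with FEC: the redundancy lets the *receiver* repair errors on its own, so there is nothing to acknowledge. A toy Hamming(7,4) sketch in C -- real deep-space links use far heavier codes (convolutional, turbo, LDPC), but the idea is the same:

/* Hamming(7,4): 4 data bits become a 7-bit codeword; the receiver can
 * correct any single-bit error without a return channel. */
#include <stdint.h>

/* encode nibble d1..d4 into codeword p1 p2 d1 p3 d2 d3 d4 (MSB first) */
uint8_t hamming74_encode(uint8_t nibble)
{
    uint8_t d1 = (nibble >> 3) & 1, d2 = (nibble >> 2) & 1;
    uint8_t d3 = (nibble >> 1) & 1, d4 = nibble & 1;
    uint8_t p1 = d1 ^ d2 ^ d4;
    uint8_t p2 = d1 ^ d3 ^ d4;
    uint8_t p3 = d2 ^ d3 ^ d4;
    return (uint8_t)((p1 << 6) | (p2 << 5) | (d1 << 4) |
                     (p3 << 3) | (d2 << 2) | (d3 << 1) | d4);
}

/* correct any single-bit error and return the 4 data bits */
uint8_t hamming74_decode(uint8_t cw)
{
    uint8_t b[8];                           /* b[1]..b[7] = codeword bits */
    for (int i = 1; i <= 7; i++)
        b[i] = (cw >> (7 - i)) & 1;
    uint8_t s1 = b[1] ^ b[3] ^ b[5] ^ b[7];
    uint8_t s2 = b[2] ^ b[3] ^ b[6] ^ b[7];
    uint8_t s3 = b[4] ^ b[5] ^ b[6] ^ b[7];
    int pos = (s3 << 2) | (s2 << 1) | s1;   /* syndrome: 0 = no error */
    if (pos)
        b[pos] ^= 1;                        /* flip the damaged bit   */
    return (uint8_t)((b[3] << 3) | (b[5] << 2) | (b[6] << 1) | b[7]);
}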

Reply to
DecadentLinuxUserNumeroUno

That seems a very strong claim, considering that any satellite transmission can in principle be intercepted with a suitable receiver close enough to the ground station to get a reasonable signal strength.

So the security must depend on the encryption used. In which case how is this different from any other data link or internet connection?

John

Reply to
jrwalliker

Clearly, ARINC-429 isn't the only bus used. AFDX, for example.

Since you know the rules, you should be able to cite the documents that contain them so that we can all read them, and check that your interpretation is correct.

Sylvia.

Reply to
Sylvia Else

It would hardly be practical for the probe to use packet based acknowledgement given the distances (and thus times) involved. The situation is rather different from that on an aircraft.

Also, the data is not time critical, and the probe can be directed to resend any lost data again later.

Sylvia.

Reply to
Sylvia Else

Encrypted signals are only "secure" once they are encrypted (assuming the encryption is effective). Any time the signals are available in (effectively) cleartext, they can be snooped.

Seems very unlikely that "the most secure gateways in the world" would be implemented in EACH AND EVERY SEATBACK ON THE AIRCRAFT. Rather, the signals would be conveyed to a point on the aircraft from which they can be passed through a secure tunnel to ground.

By way of comparison, your employer may have a secure link to "The Internet" which prevents any and all traffic passing over that link from being snooped. But, chances are, that security is NOT imposed at *your* desk, the desk in the cubicle next to you, the desk in the cubicle next to that, etc. Rather, traffic within your organization is much more vulnerable *until* it reaches that gateway.

[E.g., an "infected computer" leaks information *before* any secure comms layer is wrapped around the data. No need to "crack" any encryption as it snoops the data before it has been encrypted!]

This also assumes all of the transactions *are* passed directly to ground. If I purchase an inflight movie, alcoholic beverage, upgraded meal, etc. is that transaction forwarded to "Airline Corporate" on the ground as it occurs? Or, is it cached locally on the aircraft and *batch* processed after the aircraft has landed? If the entertainment is provided *gratis*, what measures are taken to ensure others don't know what I am watching/listening to? ("Who cares?!" "Well, why *should* that information be shared/snoopable with others?")

[Vulnerabilities/exploits happen because information/access leaks unnecessarily. Rather than erring/rationalizing that there is no need to PROTECT a piece of information, one should, instead, ask: "Why should this be *shared*?"]
Reply to
Don Y

On Fri, 24 Jul 2015 04:58:05 -0700 (PDT), snipped-for-privacy@gmail.com Gave us:

IF one knows what modulation scheme is being utilized, perhaps. Just knowing the band isn't enough. Even then, there is frequency hopping, etc.

Not the service I mentioned, but when did you ever read about INMARSAT ever being hacked?

Most hackers need monetary motivations. INMARSAT access is not cheap, so hacking it would likely be even more expensive.

There are several factors.

Never hacked throughout its entire service tenure, so said encryption must be pretty good. It certainly speaks for itself. And again, they *are* the most secure gateways going.
Reply to
DecadentLinuxUserNumeroUno

"Reliable" in network-speak has a different connotation.

*IP* is not a "reliable" protocol but, rather, just a 'best effort' protocol. What TCP adds on top of that is the guarantee of data delivery, *intact* (uncorrupted) and in-order. A client in a TCP transaction need not "double-check" what it has received.

It never claims to do so. A comm link can always be physically/electrically/virtually *cut* -- so any such guarantee would be meaningless.

Reply to
Don Y

On Fri, 24 Jul 2015 22:19:39 +1000, Sylvia Else Gave us:

MIL-STD-1553, for one.

Reply to
DecadentLinuxUserNumeroUno

On Fri, 24 Jul 2015 06:26:19 -0700, Don Y Gave us:

You keep saying "ground".

A satellite hook goes UP fully secure, then THE SATELLITE pipes data requests DOWN to the baseband gateway, fully secure, and the result goes BACK UP to the satellite, fully secure, then back to the aircraft, fully secure. Each "seatback" has its own secure pipe to the satellite modem(s). So they ALL get their own "tunnel".

Reply to
DecadentLinuxUserNumeroUno

Safe enough in what context? Recall, we're talking about exploits. So, that suggests something isn't quite as "ideal" as had been intended. There have been successful exploits of well-established stacks in the past -- some of those compromise the *server* (e.g., SYN flood).

The basic "security problem" with TCP/IP is that it was designed for use in a non-hostile environment. It tries to deal with hardware and link failures -- not deliberate attempts to confuse and/or corrupt its implementation.

A SYN flood does something similar -- it (the adversary) tells the server that it wants to create a "connection". As TCP is a connection-oriented protocol, the server must *remember* this fact and choose to acknowledge it by replying with SYN-ACK. At this point, the adversary has tied up resources *in* the server (a record of the original connection attempt, its IP address, the server-side IP address/port, timer, etc.).

A "cooperative" client would then want to move beyond this "half-open" point in the connection's establishment by replying to the SYN-ACK with an ACK -- completing the three-way handshake.

But, instead, it can simply choose to begin creating *another* connection -- tying up more of the same resources. Until, eventually, the server runs out of "resources for potential connections" (hopefully doing so gracefully and not *crashing* the server... possibly because the implementation was "off by one" in its handling of the number of potential connections!).
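
[To make the resource concrete, a sketch of the sort of fixed-size "half-open" table the flood is aimed at. Names and sizes are invented; real stacks add timeouts, SYN cookies, etc.]

/* Every spoofed SYN claims a slot; none ever completes the handshake,
 * so legitimate clients eventually get refused. */
#include <stdint.h>

#define MAX_HALF_OPEN 128

typedef struct {
    uint32_t peer_ip;
    uint16_t peer_port;
    uint32_t iss;          /* our initial sequence number      */
    uint32_t timeout;      /* when to give up on the handshake */
    uint8_t  in_use;
} half_open_t;

static half_open_t backlog[MAX_HALF_OPEN];

/* called when a SYN arrives; returns -1 when the backlog is full */
int on_syn(uint32_t ip, uint16_t port, uint32_t now)
{
    for (int i = 0; i < MAX_HALF_OPEN; i++) {
        if (!backlog[i].in_use) {
            backlog[i].peer_ip   = ip;
            backlog[i].peer_port = port;
            backlog[i].iss       = now ^ ip;   /* placeholder ISN    */
            backlog[i].timeout   = now + 30;   /* reclaim eventually */
            backlog[i].in_use    = 1;
            /* ...queue the SYN-ACK for transmission here... */
            return i;
        }
    }
    return -1;   /* flooded: no room left for a real client's SYN */
}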

Some CAN implementations essentially "broadcast" data in a read-only fashion. But, that doesn't mean that some *other* device on the same wire can't *spoof* the legitimate sender! (There is no authentication built into CAN itself.) If a client expects a transmission to have come from a bona fide source, then any spoof of that source can coerce the client to divulge something that it wasn't planning on divulging. Or, trick it into acting as if it *were* interacting with the legitimate other party.
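
[One common band-aid, sketched below: the legitimate sender burns a couple of payload bytes on a message counter and a truncated keyed tag, and receivers drop anything that doesn't verify. The tag16() here is a toy stand-in -- a real design would use a proper MAC (CMAC/HMAC, truncated to fit the payload) plus real key management.]

#include <stdint.h>
#include <stdbool.h>

typedef struct {
    uint32_t id;
    uint8_t  len;
    uint8_t  data[8];    /* [0..4] payload, [5] counter, [6..7] tag */
} can_frame_t;

/* toy keyed mix -- NOT cryptographically sound; illustration only */
static uint16_t tag16(const uint8_t key[16], uint32_t id,
                      const uint8_t *msg, uint8_t n)
{
    uint32_t h = 0x811C9DC5u ^ id;
    for (int i = 0; i < 16; i++) h = (h ^ key[i]) * 16777619u;
    for (int i = 0; i < n;  i++) h = (h ^ msg[i]) * 16777619u;
    return (uint16_t)(h ^ (h >> 16));
}

/* receiver side: reject frames that fail the tag or reuse a counter */
bool frame_authentic(const can_frame_t *f, const uint8_t key[16],
                     uint8_t *last_counter)
{
    uint16_t tag = (uint16_t)(f->data[6] | (uint16_t)(f->data[7] << 8));
    if (tag16(key, f->id, f->data, 6) != tag)
        return false;                    /* wrong key: spoofed frame */
    uint8_t step = (uint8_t)(f->data[5] - *last_counter);
    if (step == 0 || step > 16)
        return false;                    /* replayed / out of window */
    *last_counter = f->data[5];
    return true;
}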

Without litigation requiring such disclosures (returning to my initial comment on this subject), it is unlikely that folks will voluntarily disclose these things.

Reply to
Don Y
