Auto-update protocol

I meant for the binary parts to always have the size of a flash page, and then to transfer the pages with changes -- no matter whether one byte in a page changed or all of them. That way there wouldn't be much overhead.
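That page-diff pass can be sketched in a few lines (Python used purely for illustration; the 512-byte page size and the sample images are assumptions):

```python
PAGE_SIZE = 512  # assumed flash page size; real parts range from 256 B to 4 KiB

def changed_pages(old: bytes, new: bytes, page_size: int = PAGE_SIZE):
    """Indices of pages that differ; one changed byte marks the whole page dirty."""
    total = max(len(old), len(new))
    count = (total + page_size - 1) // page_size
    return [i for i in range(count)
            if old[i * page_size:(i + 1) * page_size]
               != new[i * page_size:(i + 1) * page_size]]

old = bytes(2048)             # four 512-byte pages, all zeros
new = bytearray(old)
new[5] = 0xFF                 # dirties page 0
new[1030] = 0xFF              # dirties page 2
```

Only pages 0 and 2 would be transferred here, regardless of how many bytes within them changed.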

Sounds like you have a file system in your flash and your modules are individual files (my device only uses on-chip memory). In that case, take the above scheme, replace the old and new binary versions with module versions, put all the module images together, and place a table of contents at the beginning. Then the device can look for the right module version, and if the update is interrupted, it can skip the modules that were already updated.
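The bundle-with-TOC idea might look like this (the layout, field sizes, and module names are all invented for illustration):

```python
import struct

TOC_ENTRY = "<8sHII"   # 8-byte name, u16 version, u32 offset, u32 length

def pack_bundle(modules):
    """modules: list of (name, version, image_bytes). TOC first, images after."""
    header_len = 2 + len(modules) * struct.calcsize(TOC_ENTRY)
    toc, blobs, offset = b"", b"", header_len
    for name, version, image in modules:
        toc += struct.pack(TOC_ENTRY, name.encode(), version, offset, len(image))
        blobs += image
        offset += len(image)
    return struct.pack("<H", len(modules)) + toc + blobs

def read_toc(bundle):
    """Parse only the table of contents; the device can then seek to a module."""
    (count,) = struct.unpack_from("<H", bundle, 0)
    size = struct.calcsize(TOC_ENTRY)
    entries = []
    for i in range(count):
        name, ver, off, ln = struct.unpack_from(TOC_ENTRY, bundle, 2 + i * size)
        entries.append((name.rstrip(b"\0").decode(), ver, off, ln))
    return entries

bundle = pack_bundle([("rtos", 5, b"AAAA"), ("app", 2, b"BB")])
```

A device that already has "rtos" version 5 reads only the TOC, skips that entry, and fetches just the "app" image at its recorded offset.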

I only know that DoS attacks are hard to defend against, since they use services which should normally be available. On the device side this might not be too hard, as long as the device itself initiates all the connections it needs. But in that case the idea you mentioned earlier -- one device informing others about updates -- could be a bad one, because an attacker could use that feature to mount a DDoS attack on the server. Attacker to all devices: "there is a new update". All devices to server: "fine, send me the new update". Server: "what? new update? ... hey, not all at once ... damn it, I quit" ;-) OK, with this in mind it should be possible to implement that feature in a safe way.

But for the server it will be difficult to prevent DoS attacks (and that's all I know about that).

Ah, users. One can write "In order to make your device secure you have to replace the default key!" in letters big enough so the sentence will fill up an entire page - they won't read it. But after some security issue they'll call the support: "Why wasn't my device secure?"

Anyway, the default key in the source isn't the issue. It's not really a key; it's just something to make the device work without the user having to do any setup first (although, IMO, he should be forced to set a key).

The question is: is it possible to read the key which replaced the default key by

Step 1: trying to read it over any interface the device offers (easy to cover in software -- just don't implement a function for this)

Step 2: disassembling the device and then using a JTAG interface, or unsoldering the flash chip to read its contents (not too hard to cover if the µC has some internal flash / EEPROM which can be protected so it can't be read out over JTAG)

Step 3: micro-probing (OK, if the key is worth that effort, it should also be worth it to hire Schneier to do the security stuff ;-))

A way around this might be to upload the current image, for instance once a day (maybe at random times), to the update server. This way a corrupted image would be replaced after about a day. The compiling machine should be able to do that. It could also download the image from time to time to check it.

Of course everything can be encrypted, but why do it if you don't need to? The (unencrypted) image file can be used to get a complete binary of your software, which might then be copied to some other hardware, so someone gets a similar device with your application (i.e., without paying for it). Only if this kind of thing is a problem for you do you have to encrypt the binary.

Messing around with packets shouldn't be a problem given a secure hash, because it's more than unlikely that someone can tamper with the binary and then generate a matching secure hash that passes the check.
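That check is just a digest comparison; a sketch, with SHA-256 as a stand-in for whatever secure hash the protocol actually uses (note the expected digest itself must arrive over an authenticated channel):

```python
import hashlib

def verify_image(image: bytes, expected_digest: str) -> bool:
    # Flipping any bit in transit changes the digest; crafting a different
    # image that still matches is computationally infeasible.
    return hashlib.sha256(image).hexdigest() == expected_digest

image = b"firmware image v2"
good_digest = hashlib.sha256(image).hexdigest()
tampered = b"firmwarE image v2"   # one byte flipped
```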

Bye, Mike

Reply to
Mike Kaufmann

That's definitely the right approach, particularly if you want to scale to low wireless rates ... on 100Mb WiFi, a 1GB file takes minutes to transfer even with a single client and no interference. You can just forget about using something like 3G cellular.

Multiple devices sharing the same key is not a problem. Although one of the keys is "public" and the other "private", it actually doesn't matter which is which: public(private(text)) == private(public(text)).

The point is to authenticate the origin of the image - the client's "public" key can only decrypt a file encrypted by your private key, so successful decryption is the authentication. (Of course you need to verify by signature that the decryption was successful.)
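The exponent symmetry claimed above can be demonstrated with textbook RSA on toy numbers (illustration only -- real code needs a vetted library, large keys, and proper padding):

```python
# Toy RSA: the "public" and "private" exponents are interchangeable,
# so public(private(m)) == private(public(m)) == m.
p, q = 61, 53
n = p * q                            # modulus (3233)
e = 17                               # "public" exponent
d = pow(e, -1, (p - 1) * (q - 1))    # "private" exponent (modular inverse)

def public(m: int) -> int:
    return pow(m, e, n)

def private(m: int) -> int:
    return pow(m, d, n)

digest = 1234   # stands in for an image digest; must be < n
```

Signing is then private(digest); any holder of the public key checks that public(signature) equals the digest, which authenticates the image's origin.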

You only need to worry about more complicated key handling if you are creating programming tools for end users that themselves will need to worry about spoofing. AFAIHS, your previous comments didn't indicate that.

Regardless, it is neither desirable nor necessary to use public key encryption for the whole image (or its parts). Symmetric single-key encryption (like DES, AES, etc.) requires fewer resources and runs faster. The only parts that require public key treatment for secure communication and authentication are the symmetric key(s) and image signature(s).
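A sketch of that split: the bulk image goes through a fast symmetric cipher while only the short key and signatures would ride on PK. The SHA-256 counter-mode keystream below is a stand-in for a real cipher such as AES:

```python
import hashlib

def keystream(key: bytes, length: int) -> bytes:
    """Deterministic keystream: SHA-256 over key || counter (an AES stand-in)."""
    out, counter = b"", 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def sym_crypt(key: bytes, data: bytes) -> bytes:
    """XOR with the keystream: encryption and decryption are the same call."""
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

session_key = b"wrapped-via-PK"        # in the real scheme, PK-encrypted
image = b"a large firmware image " * 32
```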

True, but signatures for all the pieces can be bundled for en masse checking. If the pieces need to be encrypted, you only need to provide one decryption key for all of them.

I only assumed that the device stored the program for cold start and that you wanted to keep the code secret. However, since the source will be available there is no particular reason to encrypt images at all - runtime image encryption is typically only done to protect trade secrets.

What I was saying was that the decryption mechanism could be reused in the general program loader, enabling the image(s) to remain encrypted on storage. That way, it would be much harder to gain access to and reverse engineer the code.

So then the only things that require authentication are the image signatures. You don't need to encrypt the image binaries at all - just the image signature bundle.

It has been a long time since I've read any of the relevant RFCs, so I may be completely wrong here (and please feel free to correct me 8), but AFAIK, none of the popular file copy/transfer protocols allow random access.

My understanding is that protocols like FTP, TFTP, Kermit, etc. just send file blocks in order and wait for the receiver to ACK/NAK them. Some of the sliding window implementations allow OoO within the window but, AFAIK, none allows arbitrary positioning of the window or advancing the window beyond a missing block. The so-called "restartable" protocols periodically checkpoint the file/window position and restart from the last checkpoint if the same client re-attaches and requests the same file within some timeout period.
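The lockstep behavior described -- block N, ACK, block N+1, no seeking -- can be modeled in a few lines (the 8-byte blocks and in-memory "wire" are simplifications; TFTP uses 512-byte blocks over UDP):

```python
BLOCK = 8   # TFTP uses 512; shrunk here for illustration

def send_file(data: bytes):
    """Yield (block_no, payload) in strict order; a short final block ends the file."""
    n = 1
    for i in range(0, len(data), BLOCK):
        yield n, data[i:i + BLOCK]
        n += 1
    if len(data) % BLOCK == 0:
        yield n, b""        # exact multiple: an empty block marks end-of-file

def receive_file(stream) -> bytes:
    out, expected = b"", 1
    for block_no, payload in stream:
        assert block_no == expected   # strictly in order: no window, no seeking
        out += payload
        expected += 1
        if len(payload) < BLOCK:      # a short block terminates the transfer
            break
    return out
```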

If you want/need essentially random access without needing your own server application you might be forced into a remote file system protocol (like SMB, NFS, etc.). I have no experience implementing such protocols ... I've only used them.

True, but that situation is mitigated by piecewise updates.

Downloading a small header (up to 1 UDP block) once every couple of minutes would not be too bad - at least for a limited number of devices and images (I'm presuming here that "like" devices can use the same image?).

Obviously it would be far better to broadcast/multicast that updates are available and forgo client polling altogether ... but that requires an intelligent server.

Couple of possible solutions:

- have all the "like" devices on a network segment (i.e. those that can share an image) cooperate. Use a non-routable protocol to elect one to do the polling and have it tell the others when an update is available.

- run the devices in promiscuous mode and have them "sniff" update checks. This way when any device on the net polls, all devices on the segment see the result.

- combine election and sniffing so only one device actually talks to the server and the others eavesdrop on the conversation.

Sniffing might be more complicated than it's worth - ideally you'd like to extend it to cover the whole update process. But it may be worth doing for update checks because the file name, block number, etc. could be fixed and so easy to search for in sniffed packets.
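The election half of that scheme can be as simple as "lowest ID on the segment polls" (device IDs and the peer-discovery mechanism are assumed):

```python
def elect(device_ids):
    """Deterministic election: every device computes the same winner."""
    return min(device_ids)

def i_should_poll(my_id, peer_ids):
    # I poll the update server only if I win the election among
    # myself and every peer I can currently hear on the segment.
    return my_id == elect(set(peer_ids) | {my_id})
```

When the current winner disappears, the next-lowest ID wins the same computation on the next round, so no explicit hand-off protocol is needed.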

Yes. But the key is that it is scriptable. Bleary eyed, overworked IT person or dumbass noob makes no difference ... it's hard to get much simpler than " filename". If you must have a GUI you can use Javascript in a web browser so the (l)user only needs to specify the file (or better yet drop it on the browser page).

In any event, the server update process can be stupid friendly.

George

Reply to
George Neuner

Hi Mike,

[attributions elided]

>>> Reading this thread made me think about an enhancement to this.

Ah, OK. Yes, that makes more sense -- since you can't write a fraction of a page.

But, you're hoping you can get by with changing some small percentage of the pages (?)

No, no filesystem. Rather, treat *the* flash as a "block special device" (e.g., disk) and manage blocks discretely. Much like you manage pages -- except I manage things in terms of "entry points".

Yes. The difference is, I need to process a "module" at a time so that I can keep the old module "running" while its new image is being downloaded and then flashed. I also have to make sure modules are updated in a specific (not known, a priori) order.

So, I am looking at having one "module" that governs the update process *in* the device. By convention, update that module *first*. Then, activate it and let it control how the other modules are updated (i.e., it can then be very flexible in determining what happens when)

It still leaves open the possibility of having some number of modules at version N and some number at N+1 -- for a potentially indeterminate length of time :-/ (i.e., imagine server dies after 3 of 12 modules have been updated) But, at least things *should* keep running...

No, a device receiving a notification of an update doesn't act on it immediately. That's the same problem that I outlined earlier -- where all devices check for updates when first powered up (since several may power up concurrently).

Instead, you set a flag that tells yourself "start thinking about an update". This can invoke a random delay before polling the server *for* that update, etc.
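That flag-plus-jitter idea might be sketched like this (the 600-second ceiling is an arbitrary assumption):

```python
import random

MAX_DELAY_S = 600   # assumed upper bound on the random back-off

def schedule_poll(update_pending: bool, rng=random.random):
    """Seconds to wait before polling the server, or None if nothing pending.

    Spreading devices uniformly across [0, MAX_DELAY_S) keeps a broadcast
    notification from turning into a stampede on the update server.
    """
    if not update_pending:
        return None
    return rng() * MAX_DELAY_S
```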

Remember, I'm planning on updates being "alongside" a running device and not "in series" with its operation. (but, I need to be able to bound the interval in which an update "will" be processed)

The server is not essential to the continued day-to-day operation of the "devices" (and the system they represent). By way of loose analogy: imagine if your "tape" backup device dies... you can still use your computer! (but, you do so, now, at increased vulnerability) You also tend to *see* these sorts of things (vulnerabilities) a lot more than some remote "black box" with a blinking red light on it! :>

Exactly. I think the solution is just to leave that *out* and force it to be specified. E.g., for the open source devices, I can just leave a syntax error in the sources:

char secret[] = put something here

so it won't build without them putting *something* there (and, if they fail to pick a *good* something: "Hey, *you* picked that key, so why are you complaining to me?")

Not a problem.

Don't worry about this in production devices. If someone uses dynamite to blast into a bank vault, the bank can at least argue that they took reasonable measures to protect their customers' belongings. (OTOH, if they leave the door to the vault wide open... or, write the combination on a slip of paper taped to the front of the vault door...)

The "compiling machine" may not have access to that server. E.g., Cisco can't upload updates for my routers to *my* server (because they "can't get in")

I'm not using encryption to "hide" the binary from theft. Rather, to keep someone from tampering with it (e.g., like a digital signature)

Reply to
D Yuniskis

Hi George,

[attributions elided]

Yeah, but while it makes the implementation more robust (in terms of uptime), it makes the development process considerably more brittle -- trying to make sure everything "lines up" nicely. :-/

Yes. I am more concerned with the reduction in bandwidth (overall as well as) due to contention, interference, etc. Quite a different world than a wired network with switches.

So, it's doubly important to make sure I can recover from all those "Can't Happen"s that *do* happen! :>

Yes. Just like digital signatures. And, I don't have to worry about a real key server as I can embed that process in the development (and update) protocols.

Right.

Yes, this is an advantage on the "bigger" applications (what's another "module" among friends? :> ). But, for these smaller devices, it means adding a second piece of code that just "runs faster" than the one piece of cryptocode that is already *required*. So, to save resources (since updates are infrequent in these "throw-away" projects), it's easier to just wrap the PK utilities in a portable wrapper (forward thinking) and use *them* as is.

E.g., even something as simple as AES uses a bit of resources (more so than wrapping an existing PK implementation)

Yes.

If you don't encrypt the images, then you have to ensure that *every* "secret" is bundled in with the keys, signatures, etc. E.g., if your code has any "passwords" (e.g., for a telnet session) to access the device, then those must be passed to the device in that "one" encrypted bundle, etc. Encrypting the entire image gets around those "lapses" where you forget that something you have exposed *can* be used against you.

Understood. There have been devices that did this in hardware more than 30 years ago. :-/

See above. Imagine passing the contents of /etc/passwd in cleartext (easy to *sniff*).

Grrrr... (slaps head). No, you are right. I was thinking of something else. Yes, all I can do is order things in the image in such a way that I can abort a transfer once I have obtained "what I need".

This also means that I need to arrange the modules *in* the image in the order that I want them to be burned -- else I would have to open the file repeatedly to get *to* the module that I needed "next" (since I can't buffer the entire image).

No, I will live within something like TFTP. The others are more demanding in terms of resources. And, present entirely new sets of problems (stale handles, etc.)

Yes. I'm taking on extra "requirements" for these throw-away devices which wouldn't be needed, otherwise. It gets hard keeping track of which context applies the more stringent requirements on each design decision ;-)

Yes, in general, each type of device will use the same image (for the throwaway devices). There may be some deviations as I play with different images during development or to test different recovery strategies, etc.

The "devices to come" are a different story. The images there will tend to have bigger (in terms of numbers of affected bytes) discrepancies. I think I will end up breaking those images into pieces *on* the server. This will make server-side management more error-prone (e.g., device A used images 1Ai, 2Ai, 3Ai () while device B uses 1Bj, 2Bj, 3Bj -- even though 1Ai == 1Bj (etc.).

But, if I use the idea of having an "update module" as the first part of any update and make this an *active* object used *in* the update procedure, then that can explicitly go looking for whatever pieces it needs...

And it also places more constraints on the network fabric.

If you concentrate that functionality in one (elected) device, then that device must remain on-line for the protocol to work -- else you need to elect his replacement. For things like the A/V clients, I expect them to see frequent power up/down cycles. E.g., walk into the kitchen, power up the audio clients in that room so you can listen to "whatever" you were listening to in the living room. Finish your kitchen task, leave the room and the clients can be powered down. I.e., you want a protocol that can react quickly in the face of a changing network context.

So, what I have instead opted to do is have each device do semi-random polling. But, as a device becomes aware of other peers "on-line", have it increase its average polling interval proportionately -- *knowing* that each of its peers is aware of *its* presence and will inform it if they discover something on their own.

As such, the total polling traffic on the network remains reasonably constant. And, if a node goes offline, its disappearance doesn't directly affect the polling -- all of the other nodes will (eventually) increase their polling frequency to maintain the same overall polling "load" on the (update) server.
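The constant-aggregate-load property follows from scaling each device's interval with its peer count; a sketch (the base interval is an assumption):

```python
BASE_INTERVAL_S = 60.0   # interval a lone device would use

def my_interval(peers_seen: int) -> float:
    """Each known peer stretches my own polling interval proportionately."""
    return BASE_INTERVAL_S * (1 + peers_seen)

def aggregate_rate(devices: int) -> float:
    """Polls per second the whole segment presents to the update server."""
    return devices / my_interval(devices - 1)
```

Whether one device is alive or twelve, the server sees the same total polling rate; a node dropping out simply shortens everyone else's interval on the next recalculation.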

Yes, but that means every device has to process every packet. And, means it must "see" every packet (what if there is polling traffic on a different subnet -- not routed locally?)

I think the approach I outlined will work. It means each device can operate independently of any other, yet implicitly cooperate in their relations with any "shared resources" (the server).

Yes. But it still relies on the user knowing/remembering that he has to use . If it is something done infrequently, people tend to forget the prerequisites involved (I am always amazed at how few people maintain journals!)

Reply to
D Yuniskis

You're right, I'm hoping ;-)

Here the question is: can you move your modules from one block of flash to another and run (or load) them from there?

Then, when updating the update module with the update module itself, it must run out of RAM while overwriting the old update module with the new one, because if the device loses power during that step, you can't update any more modules. And I think you'll also have some start-up module which can't be updated (maybe an RTOS), since controllers need a certain address to start from.

This is good for people using the source code, but not for those using a binary. They will just end up with another default key (though, fortunately for you, complaining to whoever made that binary). I was thinking more of something like having the device demand that the user enter a new key after first power-up. But in your case this is not an option, since you want your device to start working without any kind of initial setup.

Well, it depends on the number of devices using the same key. With individual keys for each device you're of course right. One key per customer is also OK. But very likely not if one key is used for all devices (if their number is large enough to be of any interest).

Then my idea is out of the question. I just thought that if you can upload a new image to the update server, you have the necessary access rights. But if you just inform the customers about updates, e.g. by a newsletter, then you have to keep in mind that they may skip updates N+1 to N+4 and go straight to N+5, which might only work if at least N+3 was installed. Of course this may be of no concern for your application.

Bye, Mike

Reply to
Mike Kaufmann

Incrementally updating a running program can get tricky even in the best of situations where the program and runtime cooperate. It's particularly difficult if you need to update concurrently with normal execution, especially if you need to convert existing data structures before new code can use them.

In one of my former lives I helped design a modular programming language and the compiler/runtime for a programmable image co-processor board. The runtime allowed demand loading and unloading of arbitrary code modules (i.e. not overlays) under program control. I've had some ideas on how compiler and runtime could manage code modules automagically and incrementally update running programs behind the scenes without them really being aware of it, but I've never had the opportunity to actually try out any of them.

That's true, but I think you can arrange that the two different decrypt modules will never be needed simultaneously.

AES isn't particularly "simple" ... it is simply less costly than some of the PK algorithms. Cost depends on how secure you need it to be ... if you want strong protection you have to pay for it - in code and data space and in cycles. If the intent is simply to prevent casual abuse, you could as well just use a reversible "swizzler".

Point taken with the observation that sensitive information embedded in the image can be segregated such that only a small "resource" portion of the image need be encrypted.

Hopefully such an arrangement is possible ... circular dependencies in a hot patch situation are a really big PITA.

NFS clients are stateless if you don't use file locks ... but I don't know how complex their implementation is. I don't know offhand whether SMB clients are similarly stateless, you might want to take a look at some before you dismiss it.

Using hard link public names on the server would permit the clients to ignore file locking issues.

An active updater is an excellent idea that solves a bunch of problems: it can be a throw-away module that implements things like secondary fast decryption, remote file access, etc. that the application needs only while updating.

It does not solve any hot patch issues though.

George

Reply to
George Neuner

Why is it a problem that all modules do not have the same version? If the effects of a code change are localized within a module then why would you bother to update others? That just results in more work and more network traffic.

Obviously this is easier if you have some kind of file system on the device so the modules can be stored as discrete files, but even using raw flash page storage you can arrange to chunk and separate them - there's no need for contiguous storage if you have an intelligent loader.

George

Reply to
George Neuner

There's probably a semantic discrepancy, here. :-/ (something I've not clarified well enough)

Modules are free-standing bits of code. Controlled as such. E.g., the RTOS may be revised 5 times between "releases"; the "main application" revised twice; the "standard library" unchanged, etc. (I am just making up names of modules for discussion, here).

At some point, a "collection" of modules are validated together as a "supported release". I.e., imagine a chart that has a leftmost "release" column and a bunch of other columns -- one for each module -- showing the version of the module that is supported in that "release".

Note that a module may or may not change from one "release" to the next. But only certain combinations of "module versions" are (formally) supported -- though, in theory, many others "should" work (and some might *not* -- e.g., there may be a bug in RTOS version Q so it is *never* combined with other modules into a formal "release").

Now the semantic issue...

Give each of the modules a *second* "version number" (in addition to their own, independent, "versions") that is tied to the "release". Maybe call it a "release number".

So, as the user moves from "Release 1" to "Release 2", the individual module-specific versions might be changing from {1,1,3,2,1} to {1,2,5,3,2}. The first set describes Release 1; the second, Release 2.

What happens if modules A, B and C get updated successfully (resulting in {1,2,5,2,1}) but modules D and E don't? I.e., the {1,2,5,2,1} configuration *should* keep operating but was only designed to do so in a transient state -- that is, during an update *process*. It has not been tested as a "production release".

Looked at another way -- using Release Numbers -- we tried to go from a {r1,r1,r1,r1,r1} to {r2,r2,r2,r2,r2} but, instead, got stuck at {r2,r2,r2,r1,r1}.
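In code, the distinction is just set membership in a table of released vectors (module letters and version numbers taken from the example above):

```python
# Supported releases, as vectors of per-module versions for modules {A,B,C,D,E}.
RELEASES = {
    "Release 1": (1, 1, 3, 2, 1),
    "Release 2": (1, 2, 5, 3, 2),
}

def is_supported(config) -> bool:
    """True only for configurations formally tested together as a release."""
    return tuple(config) in RELEASES.values()
```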

You *could* confine all changes to a single module (since I can only hot-swap a single module at a time) and then formally test (and release) that "single module update". In which case, you would have several "sub-releases" instead of this larger "formal release".

E.g., you could conceivably have:

{1,1,3,2,1} -- "Initial Release"
{1,2,3,2,1} -- module B is updated (no change in module A)
{1,2,5,2,1} -- module C now updated
{1,2,5,3,1} -- module D now updated
{1,2,5,3,2} -- all modules updated ("Formal Full Release")

But this means formal testing (documentation, etc.) of many more combinations than the single one from "Initial Release" to "Formal Full Release"

(is that any clearer?)

Reply to
D Yuniskis

Yup! ;-)

Ideally, you change as little as is necessary.

Note that the "update" module lets me, in a pinch, throw my hands in the air and say, "Do this update off-hours as the system will be down for XXXX minutes/hours".

I think in the projects to come, some of this will be easier (in that I will have more resources available) -- though also harder (in that there will be a greater chance for more things to be "in flux").

The throwaway projects will be easier to constrain (interfaces, etc.) but, with far fewer resources, that constraint will almost be *imperative*.

But they will have to reside in the "active image" concurrently (?) E.g., I need the PK stuff to decode the keys, signatures, etc. So, need it early in the update (else can't decode the modules as they are downloaded). Then, I *quickly* need whatever code is required to decrypt the module images, themselves. And, before the process has finished, I need the PK code (again) in preparation for the *next* update (i.e., I can't discard the PK code once I have decoded the keys and use that "space" for the module decrypt code)

If you take this approach, you have to be sure folks modifying the code are aware of how *any* piece of information can be used to compromise things. I think it easier just to tell them they only have to "protect the crypto keys" and the "process" will then protect the rest.

Exactly. Rather than trying to engineer some set of dependencies on the code a priori, I think the "update module" gives me a way to defer those requirements to "update time". :>

NFS introduces security issues (to the server as well as the clients). E.g., some shops won't use NFS as it exposes bits of their server "needlessly" (hmmm... bad choice of word :< )

TFTP is the ideal transport protocol as it is easy to implement, runs on UDP, has very little overhead, is supported in lots of places, etc. If I augment it with all this other protocol stuff (encryption, module sequencing, etc.) it looks like it will do what I need done.

But you still need at least part of this to persist to the "next update". Hmmm... maybe cut that module in half and treat part of it as "update IPL" and the rest as the "updater module" (this latter part being disposable after an update completes?)

Reply to
D Yuniskis

Terminology?

They need to be loadable from persistent storage, but _not_ in memory as they are not needed simultaneously.

See below.

Yes, you need at least some of the decrypt code to persist, but since you are modular and your modules can be unloaded, you can easily arrange that it doesn't all need to be *active* at the same time.

Consider the following scenario:

=---------

- Each module is verifiable by a strong signature. Computing the signature over the encrypted version (to verify the download) is sufficient, but you can also compute one over the plaintext version as a check on the decryption.

- The device maintains a manifest of installed modules to be compared to potential updates.

1) Load the PK decryption module.

2) Download a PK encrypted manifest of the latest release. The manifest contains signatures for the modules that comprise the release, a symmetric encryption key for decrypting them, and a plaintext signature for itself (to check that PK decrypted it properly).

3) Compare the update manifest to the running image manifest and note differences. If the manifests match, go back to step 2.

4) Unload the PK decryption module.

5) Load the symmetric decryption module.

6) Download and store new (updated) modules.

Ideally you shouldn't replace memory resident code until you've collected the whole set of updates, but obviously that depends on whether you have enough local storage to hold multiple versions.

7) Unload the symmetric decryption module.

8) Save the new manifest and the updated module file locations for your boot loader. Delete the old modules.

9) Finally, replace the memory resident modules with their updated versions. Could be a warm boot or more complex if the program needs to continue running through the replacement.

Rinse, Repeat.

=---------
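Step 3 of the scenario above -- diffing the update manifest against the installed one -- reduces to a dictionary comparison (module names and signature strings are invented):

```python
def modules_to_fetch(installed: dict, update: dict) -> list:
    """Names of modules whose signature in the update manifest differs."""
    return sorted(name for name, sig in update.items()
                  if installed.get(name) != sig)

installed = {"rtos": "sig-a1", "app": "sig-b1", "libc": "sig-c1"}
update    = {"rtos": "sig-a1", "app": "sig-b2", "libc": "sig-c1"}
```

Only the changed modules ("app" here) need to be downloaded, decrypted, and flashed; identical signatures are skipped.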

This process is sound and I *think* it meets your requirements for updating on the client side as you've described them. It may not meet your sensibilities but I can't do anything about that 8-)

WRT symmetric encryption: as you say, this project is a prototype for more capable devices - but even if you don't use it later the reason I'm pushing symmetric encryption now is that you have expressed much concern about the speed of the update process. Symmetric encryption implementations are lighter weight in terms of memory use and far more performant than PK ... attributes that are important to you now on your current low(er) powered devices and also maybe in the future as you contemplate WiFi.

With this process you can make download decryption configurable and tweak it later to eliminate one of the modules or replace them with different implementations.

George

Reply to
George Neuner

Nope.

I understand your semantics. What I don't understand is why you think these semantics require modules which haven't changed to be "rebranded" and thus have to be included in the download/update process.

Formal releases can be specified by manifest using individual module versions ... I see no need for additional "image coherence" version numbers to be placed in the binaries.

See my post in the other thread.

George

Reply to
George Neuner

Yes -- though *knowing* this, you can try to arrange things so that those that aren't likely to change are grouped together (not as likely to be adjacent to things that *do* change often)

Yes.

You don't "update" a module until you know you have it

*and* that it has been flashed correctly. I.e., if power fails, the "volatile" state defaults to "use the old one" (meaning you have to recheck the "new" one, again).

There are *lots* of races/hazards :-(

Yes, the startup points to the "old" version. As above.

The real gotcha comes if power glitches just as you are making that final flash that says "use the new stuff".
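One common way around that gotcha is an A/B arrangement where the "use the new stuff" switch is a single atomic flag write and the old image is never touched until the new one verifies. The decision logic at boot is tiny (a sketch; the flag and verification mechanism are assumptions):

```python
def boot_choice(commit_flag_set: bool, new_image_verifies: bool) -> str:
    """Select an image at boot; an interrupted update always falls back."""
    if commit_flag_set and new_image_verifies:
        return "new"
    return "old"   # flag never written, or new image fails its signature
```

A power glitch before the flag write leaves the device on the old image; a glitch that corrupts the new image is caught by the signature check, again falling back.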

Yes. For these "throw away" projects, I don't care -- I won't release a binary, so *someone* will have to build a binary from the sources (isn't that the purpose of FOSS? :> ) and *they* can deal with those headaches.

For the "projects to come", I will build custom images (as the sources will *not* be available)

Again, for the throwaway projects, I will build *my* images with the keys that *I* want in them (and others will build their own images). I just don't want to bother with the countless newbie questions for which "RTFM" is the answer.

In the "projects to come" market, a customer would not want to miss an update. It's not like updates are just "adding fluff"; if there is an update released, it is to address a particular need.

For the throwaway projects, if folks want to cherry-pick which updates they want to install, then they can deal with going from N to N+3 however they see fit :> (I suspect they will take the easy way out and simply install N+1, then N+2, then N+3)

Reply to
D Yuniskis


I deliberately chose an example in which a module was *not* updated. Note that "module A" starts off as "1" and ends up as "1". I.e., it never needs to be reflashed/updated.

OTOH, the other 4 modules *are* each updated.

The point is that {1,2,5,2,1}, for example, is not a supported configuration. It is only intended to exist in transition to the "real" update, {1,2,5,3,2}.

I'll look for it. Perhaps I'm missing something...

Reply to
D Yuniskis

I think that's the problem (see below)

There is no (separate) "persistent storage". The images are XIP (execute in place) -- they are flashed and execute out of the flash.

I.e., the only way to "discard" part of the image ("unload" it in your description below) is to erase that portion of the flash.

That's the misunderstanding! If it's *in* the device, it sits in the address space. There's no "secondary storage" to load/unload from/to.

PK module sits in memory. Whether or not it is *active* is just a function of whether or not it has been "CALLed"

No "unloading". Module just "RETurns" when done.

See above. I.e., this module resides in memory alongside the PK module. Just one or the other is typically "executing" at any time.

There isn't enough RAM to hold more than ~5% of the image (I need to keep the device *running* which uses most of the RAM resources -- modules have to be really small so they can have minimal impact on that RAM usage "while being updated"). There's no "scratch memory" (disk, etc.) to spool things into. RAM + FLASH is all there is.

The point is conserving *ROM* on the first projects. They are SoC so once you use up FLASH, there's no more to play with. If I can eliminate one module, then its space in the FLASH makes room for one of the "update" copies of "a module".

Reply to
D Yuniskis

Got it!

George

Reply to
George Neuner
