Watermarking

I see your point. But, I think it would be easy to identify that portion as being "never accessed" (?) I.e., it would be comparable to embedding a text string in the binary -- that is never *referenced* (e.g., RCS info)

The Unisite's code image (on disk) *looks* like this. Until you realize it goes through a custom disk controller before it gets to the processor (and memory). So, you just look at the image *after* it is "in memory".

I.e., unless you have a secure processor, this is all you need to analyze.

Reply to
D Yuniskis

It was copied out of a working program, but I did forget to copy the forward declarations.

Yes, the whole point is that index1[] and index2[] will light up in diffs of branded executables. The idea is to make a counterfeiter think those index arrays are some kind of customer ID string ... that's why they are defined as chars. But when the counterfeiter starts messing with the string, the program (hopefully) stops working.

Other structures, like the ptr[] table, will be identical in all instances - they might be identifiable in a dump from a structured executable (coff, elf, etc.), but if you use an a.out structure, all the data will be anonymous.

Given enough working samples, someone might notice the pair relationship between the indexes. However, they can be combined into a single 2-D array that will interleave their elements in a diff listing and help hide the relationships.

char index2[] = {1, 0, };
char index1[] = {1, 0, };

int (*F1)(int) = (int (*)(int)) func_addr(index2[index1[0]]);

becomes (I think ...)

char index[][2] = { {1,1,}, {0,0,} };

int (*F1)(int) = (int (*)(int)) func_addr(index[index[0][1]][0]);

index2[] becomes the leftmost column and index1[] the rightmost. You could do it either way, but because index1[] has its elements in a fixed order, by putting it last, most of its elements seem to reference unrelated data in a sequential dump of the structure, so even if someone guesses that pairs of characters are somehow related, the actual relationships are shrouded.

I haven't tried to clean it up but I believe macros can sufficiently hide the nasty call syntax and the complicated look-ups.

I initially defined F1, F2, etc. as macros that did the function look-up on demand, but I settled on using explicit function pointer variables because the look-up was so heavy (3 array indexes).

The call to func_addr() is completely unnecessary; you can eliminate it by making the address table a global. I initially chose to use a function because doing so would place the address table in a different load segment than the globals. But in any event, the point is to obfuscate as much as possible and really to force disassembly to figure out how to bypass the branding.

George

Reply to
George Neuner

AFAIK, you can't easily mess with a vtable except in a leaf class. The order of functions is determined partly by standard (new is 1st, delete is 2nd, etc.) and partly by the total ordering of definitions as seen over the entire class hierarchy.

Also, there is no standard vtable implementation - some compilers use arrays, others use hash tables. And in an array implementation, it's possible for complicated classes to have more than 1 vtable.

George

Reply to
George Neuner

The goal is to identify the *source* of any counterfeit products that find their way onto the market. When you are looking at products with 5-10+ man-year development efforts, it is *quite* attractive for a class II thief to take the "cheap road" and just copy your design intact. Especially with all the pac-rim houses operating currently -- where developing "from scratch" something with more than a man-year or two (software... forget the hardware, packaging, etc. as that is pretty straightforward to copy) just doesn't make sense in markets that move as fast as today's.

I've gone back through all of your posts. All I see is the idea of using encryption. Have I missed something?

From the examples you gave, you aren't using a secure processor. Rather, just using encryption as an "opaque envelope" to allow you to distribute your executables without them being easily inspected. This is no different than allowing the device to fetch its updates via SSL, etc. -- it just hides the content of the message WHILE IN TRANSIT (e.g., you can put Update5.zip on your web site for folks to download without worrying about *who* is downloading it).

To include a watermark in each different executable, you would have to either:

- ensure each user is only "given" an (encrypted) executable with a watermarked image having his *unique* fingerprint (if the user could get his hands on *another* encrypted instance of the binary FINGERPRINTED FOR SOME OTHER USER, then the value of the fingerprint/watermark is lost -- how can you assert that *his* instance was the instance used to produce the counterfeit since he could "load" anyone else's encrypted instance [i.e., all devices share a common crypto key])

- ensure each user (device) has a unique crypto key such that his instance (again, avoiding the term "copy") of the encrypted binary is accessible to him, alone (i.e., to avoid the ambiguity mentioned above). In which case, there is no need for the binary to be "watermarked" as the device itself is already uniquely identifiable -- by the unique crypto key (one would still want to avoid having identical binaries as that would probably provide an avenue for differential cryptanalysis on the part of an attacker).

And, as I said before, this just protects the binary "during transport". It does nothing for the binary once it resides *in* the device(s) in question. (e.g., the binary is already in prototypes distributed "from the manufacturer" so what does encryption buy them?)

When firms have a couple more commas in their income statements than me, I tend to assume they are doing *something* right! :>

Reply to
D Yuniskis

Hmmm... but, they would still be so "localized" that I suspect it would focus their attention (attacks) on those "few things" (granted, one could extend the idea to larger scale).

One thing I learned early in my career was that if things *break* easily, they can be easily reverse engineered. I.e., the simplest thing to reverse engineer is something that crashes reliably when tampered with.

If, OTOH, the "keys" (for want of a better word) that you're describing are infrequently invoked (i.e., so there is no *straightforward* way to verify whether or not your attempt at subversion has "succeeded"), then the thief has to invest considerably more effort to gain a particular level of confidence in his (ahem) "creation" -- i.e., that it won't crap out as soon as the first customer buys it!

It's in ROM so the executables are stripped (why put anything *else* in the ROM that doesn't contribute to run-time?)

Understood. This is akin to using 0x1234 as TRUE and 0x3412 as FALSE -- and combining two uint8's to form the actual value that your code examines (though one wouldn't typically do that as it imposes a burden on the programmer)

Yes, but it assumes the *need* for such a mechanism. I.e., using it without an underlying need puts a burden on the developer (this was the reason I mentioned vtables as they aren't visible -- without picking nits -- to the developer; like my idea of rearranging the declarations of auto variables and exploiting the resulting variation in stack frame layouts)

Understood. I'm just hoping to find something that can be done "alongside" the development instead of injecting something into the process. Writing clear code is hard enough for most folks. Adding deliberate obfuscation just makes it that much more fragile and vulnerable to error. We used to call the "tamper-proofing" activity "RE-bugging" -- an indication of how hard it was to get it right -- and always did it *after* the executable was known to operate correctly... *two* test cycles! :< Obviously, if you can come up with a scheme that can be surreptitiously used *during* development, then the developer can actually debug "production code" instead of having to add this second "post-processing" step.

Reply to
D Yuniskis

Yes. But, if you analyze the class hierarchies, you can tweak even the vtables of the base classes as long as you ensure each derived class is "compensated accordingly".

Recall, each time you swap a pair of vtable entries, you get a bit of "fingerprint"/"watermark". With a fair number of classes, you can quickly get hundreds of bits. (I'm not sure how sparsely populated "watermark space" needs to be in order to make it "non-trivial" for one watermarked image to be converted to another... I guess it would depend on the techniques being used). For preproduction prototypes, I suspect 100 bits would be more than enough (assuming each bit isn't trivial to "flip")

Yes. This approach would require hacking the compiler (i.e., *a* compiler). That makes it less attractive. But, I don't see any way of manipulating tables that the developer wouldn't otherwise be aware of.

Reply to
D Yuniskis

Hello Don,

[...]

Well, your answer to my question:

|Have you any numbers about the cost to get the content of a flash
|microcontroller if its "copy protection" is used? For example, we are
|using Freescale 9S08, S12, Coldfire V2 and I could also imagine to use
|a STM32.

...was somewhat vague. Therefore I wrote: "If you tell me what it costs to get a Flash ROM image from one of these, we can continue the effort / benefit discussion". This statement is still valid. If you don't know how "secure" such a single chip device is, it makes no sense to discuss its suitability.

BTW my question whether your product uses external memory at all is also unanswered. External memory would rule out the encryption approach.

[...]

this applies also to the other methods you discuss in this thread.

[...]

from my experience, the correlation between income or company size and smartness is lower than many people might expect.

Oliver

--
Oliver Betz, Munich
despammed.com might be broken, use Reply-To:
Reply to
Oliver Betz

Definitely a compiler level hack - I don't see any simple way to do it after the fact. Swapping vtable entries would also require changing method call sites - the indexes or hash keys would need to match.

One thing I forgot to mention is that some [really good] C++ compilers can incorporate the whole vtable hierarchy into each class so that the class stands alone. This allows any method to be dispatched with a single lookup regardless of the position of its implementing class in the hierarchy.

George

Reply to
George Neuner

I write and hack compilers for fun, have hacked them for business and I was part of a team that wrote a compiler for a user programmable DSP+FPGA signal/image processing PC board. Hacking compilers is tricky business and it is all too easy to wind up with an unreliable tool.

I understand the reluctance to mess with a working executable, but I think adding a post production step with a custom tool is preferable to mucking with compilers. Maybe I've been blessed by good people, but IME it isn't all that hard to get someone to commit to using a provided macro or template system. Trusting them to extend or maintain it is a different issue, but again, IME it hasn't been a big deal.

I've never needed to obfuscate an executable, but I've dealt with very flexible and adaptive programs and I understand your issues with other developers and reliability.

I was principal developer for 3 different lines of industrial QA apps, 2 of which are FDA approved for food and pharmaceutical production as well as general industrial use. These apps were developed and maintained by teams of 3-8 people over the 10 years I was involved with them. These programs have hundreds of runtime options: for equipment enumeration and interfacing, for customizing the operator UI, for inspection tuning, performance tuning, logging, security, etc. Despite nearly every operation being conditional, they are still required to have near perfect inspection reliability (zero false negatives, less than 0.1% false positives) and 99.9% uptime. [Knock wood, I've never once had a production system crash in the field due to my software. Hardware reliability I can't control, but my software can be as perfect as my manager allows. 8-)]

The pharma apps are my masterpieces, but my claim to fame is compact discs. If you bought any kind of pre-recorded CD or DVD - music, game, program, etc. - between 1995 and 2001, the odds are about 50% that it passed through one of my QA apps during production.

George

Reply to
George Neuner

Exactly. It also constrains your development: what if you don't have sources to the compiler? What if the vendor is unwilling to make a "special" for you (or, unwilling to go through the certification of that "special")? What if you move to an entirely different hardware/software platform? etc.

I think the drawback there is that it potentially exports that technology from your organization. I.e., there is no reason why a developer needs to know what happens to his sources after he has written/debugged them. If your "transformations" don't semantically alter the executable, he shouldn't be able to tell whether this was "part of the compiler" or not.

It's not really obfuscation. The sources and binaries still are 100% legitimate (ideally). You just want an easy way of making N "functionally identical yet physically different" instances of an executable from one set of sources.

I wouldn't think of trying to deploy a watermarking/fingerprinting system in an FDA environment. I don't know how to get through the validation with *different* executables! Unless the changes were confined to "dead code" -- which, in itself, is disallowed. :<

(there are other application domains that would also be reluctant to adopt any sort of watermarking because of their insistence on verifiable "images")

I am haunted by an autopilot (marine) I designed some 30+ years ago. After returning from our test run, an examination of the actual course taken showed an "S" in the plot at one particular place. Did my software "divide by zero" (or something similar)? Or, was this the spot where we stopped to fish (which requires constantly readjusting the boat's direction to keep it pointed into the swells)?

My boss wasn't worried about it (since the rest of the trip -- I think 7 legs? -- went uneventfully) but the image of that "S" is burned into my memory... :-/

Is there an easy/high-speed way to verify prerecorded media is "playable"? E.g., discs that see lots of circulation (e.g., "Blockbusters", public library, etc) that need to be verified as "undamaged" before being reintroduced into circulation?

Reply to
D Yuniskis

I've played with some of the ideas discussed here -- reordering auto variables, using multiple library versions, etc. -- and it looks pretty easy to get the required degree of "differentness" between images.

The library approach is the most reliable -- you can be *sure* to get different images (which isn't guaranteed when you start mucking with variable declarations). And, it seems easiest to be able to predict/guarantee behaviorally (write the different library versions with "constant performance" as an explicit goal).

Unfortunate as it also requires the most *deliberate* effort (though it can be done in parallel with the regular development) -- whereas the "reorder variable" technique would take very little effort beyond a preprocessor.

Reply to
D Yuniskis

Doesn't sound like a bug to me ... ship it!

There's no way for you to check the integrity of the stamped aluminum cookie other than to try to play it. On the production line, where the orientation of the disc is fixed, checking of the recording is done using 2-dimensional laser imaging.

However, checking for scratches in the plastic coating can be done optically. You need high resolution and low-angle offset lighting. An undamaged disc appears as "near-black" to the camera - scratches in the coating reflect more light into the camera.

 __
|  |
|  |
|  |  -- camera
|  |
 /\
o  o  -- lights
________
        \_ disc

Scratch check is pretty simple to implement from an image processing point of view - threshold to remove the camera bias, erode a bit to reduce/eliminate jitter and noise, blob scan for anything visible and compare blob sizes to your rejection criteria.

You need at least 1Kx1K resolution to catch defects that can affect playback with no oversampling, (ideally) square pixels, and very stable symmetric lighting all around. Way back when, the system I worked on used a fiber ring and laser, but I think it probably could be done now with sufficiently bright LEDs. We used 4 512x512 box cameras, one aimed at each quadrant (b_tch to align but even with the mount much cheaper than a single 1Kx1K camera). Square pixel PAL cameras tend to be a little more expensive than NTSC. You need a good low noise frame grabber too ... I don't remember what we used but something like the Matrox Solios eA/XA would probably work well.

If you're handy with mount construction, I'd guess you could piece together a decent system for less than $5000 (might be less if you can talk your way into an engineering sample on the frame grabber).

Probably you were looking for an answer like: "stick it in the player and run this software" ... Sorry.

George

Reply to
George Neuner

Not uncommon if your autopilot uses a fluxgate or other magnetic compass sensor as its primary heading reference. If you are in relatively shallow water and pass over a modern-era wreck that isn't a danger to surface navigation, there is usually enough steel around to cause a significant amount of compass deviation. I've had a complete 360 deg. turn caused by such a wreck and innumerable S wiggles. With a little local knowledge you soon learn to avoid the more troublesome ones.

--
Ian Malcolm.   London, ENGLAND.  (NEWSGROUP REPLY PREFERRED)
ianm[at]the[dash]malcolms[dot]freeserve[dot]co[dot]uk
[at]=@, [dash]=- & [dot]=. *Warning* HTML & >32K emails --> NUL:
Reply to
IanM

Hmmmm... it was a cascade control loop. A conventional autopilot: clear magnetic disc floating in liquid with optical sensors to tell when it is "aligned" properly while the whole assembly is "motor driven" -- i.e., to "set" course, turn on a servo loop that keeps the "compass" nulled as the boat is steered onto the desired heading. Thereafter, any deviations from this null activate the rudder servos.

Onto this (*my* "claim to fame") was a software servo loop that took LORAN-C coordinates of "destination" and kept tweaking the "motor drive" to update the "new" course (i.e., instead of a conventional autopilot that seeks to maintain a constant heading, my goal was to reach a desired *destination*).

Ah, that's possible! It is also possible that the anomalies in the recorded track happened when we were in "manual" control (fishing for Blues). It could also have been an anomaly in the LORAN receiver.

It's been 30+ years. I haven't heard of any *deaths* so I don't lose too much sleep over it! :> (though I really would have liked a resolution "back then")

Reply to
D Yuniskis

I've seen significant compass deviation passing under bridges or over tunnels, pipes, electrical cables and even over shallow net anchorages (steel nets placed to snag anchors where the bottom is too soft to hold).

My Koden unit (forget which model but it was the high end one at the time) had 3 precision modes - trading accuracy for speed - but I almost always used it at highest precision because there are a lot of reefs and shoals in my area. I found that in the fast mode, the unit considered a waypoint to be roughly a 300ft circle. I had to be able to steer compass courses through channels as narrow as 30ft (lotsa fun at night in pea soup fog). At high precision my unit could repeatedly find a station set waypoint within 5ft up to about 10kt.

If your unit had different precision modes and the mode was changed unexpectedly that might have affected your plot.

George

Reply to
George Neuner

Flying or floating pointed directly at the destination has a problem (not a big one, but a significant one).

Current or wind drift forces the craft off track and the control system then corrects by turning toward the destination. That is good; however, it is not the shortest track to the destination. It results in a track shaped like a "?".

The primary reason that VORs transmitted direction information as well as fixing a position was to be able to fly a direct track. The earlier NDBs (non-directional beacons) would lead to longer tracks if the drift was not corrected.

Compass errors are interesting. In my poor starving student days I flew a summer in the arctic and encountered two really nasty compass errors almost daily. In Labrador there is an iron vein a mile or so wide that runs a couple hundred miles: pure havoc. The second is compass error near the north magnetic pole: up to several degrees of error per flight hour.

Regards,

Walter..

--
Walter Banks
Byte Craft Limited

Reply to
Walter Banks

This is often called a dog curve, as a dog running toward his moving master will trace such a curve.

There is another good reason: The radials of a VOR are not dependent on the alignment of the aircraft compass system. Flying a defined direction (QDM) toward a NDB needs a correct reference direction from the compass system.

It is possible to fly straight using an NDB, but it is more difficult than following a VOR radial. Also, a VOR display responds quicker than an ADF (NDB receiver).

--

Tauno Voipio, (MSEE avionics and CFII)
tauno voipio (at) iki fi
Reply to
Tauno Voipio

Not to mention that flying outbound from a VOR is a lot easier than from an NDB. When I was flying in the arctic the only navaids available were low-powered NDBs located just a little too far apart. We got good at nailing an outbound heading.

w..

Reply to
Walter Banks

Hi George,

George Neuner wrote:
> On Mon, 17 May 2010 09:52:20 -0700, D Yuniskis
> wrote:
>> Is there an easy/high-speed way to verify prerecorded
>> media is "playable"? E.g., discs that see lots of
>> circulation (e.g., "Blockbusters", public library, etc)
>> that need to be verified as "undamaged" before being
>> reintroduced into circulation?

Why is that? Can't you just "read" it in a DVD-R (or whatever)? Verify no read errors, etc.?

No doubt because that is faster than "reading" the medium...

Ha! No need to check for scratches:

Scratches? Yes or Hell Yes!

Yeah, something the functional equivalent of "playing it and 'watching' it" -- except at high speed. (e.g., a library probably loans thousands of DVDs daily. It's just not practical to "watch" every one as it is returned...)

Reply to
D Yuniskis

I think the water is too deep at this point -- we're out in the Atlantic (SE of P-town).

Depending on where you are operating on the chains, the geometries can conspire to give you really crappy -- or really *good* -- data. E.g., there are areas where the ambiguities inherent in the geometry can cause you to be "here" -- or "over there" :>

If "here" and "there" are too close...

In the prototype run, I set all the waypoints to be marked buoys so I could verify we were where we should be (i.e., there are no street corners on the open ocean :> ). I can recall coming close enough to actually fear we were going to hit the buoys (we'd run at about 25 kts to cover as much ground as possible -- the entire trip was several hundred Nmiles). At one point, the "next leg" was almost "back the way we came" (i.e., turn *really* hard to starboard). We passed just to the left of the buoy (narrowly missing another craft that was sitting nearby), then heard the "groan" that comes from trying to make too sharp a turn too quickly as the boat turned around and appeared ready to make another run at the other craft sitting nearby :> (we ended up with the buoy on our left again -- i.e., neatly going *around* it).

Much of the rest of the trip was a blur. I had pulled an all-nighter to get the prototype built and coded in time for the trip. So, I was pretty tired. And, none of us thought to bring any *food* (though we all managed to bring BEER!). Thankfully, the boss's wife was a bit more practical and sent along some food with him! :-/ (at least *I* had an excuse for my thoughtlessness -- I think I was 18 at the time so *my* priorities didn't focus on *food*! :>)

I just took in TD's (.01uS, IIRC), did the TD-lat/lon conversion, looked at where I wanted to be, looked at where I had *been*, determined the cross-track error (to try to gauge local effects of drift) then "turned" the autopilot into the "current" appropriately such that the vector representing the boat's *steered* course and the vector representing the "local drift" would "sum" to the desired destination.

I.e., the goal was to act as a "smart helmsman" and not as a "stupid autopilot".

If I had notes to consult (I *probably* have the sources for the device here somewhere -- no doubt on 17" wide tractor feed "green stripe" paper), I could see which chains we were using and check to see if we were in a region of high GDOP, or maybe some of the trig was running in nasty areas (e.g., tan 90).

As I said, I doubt anyone has *died* from this so I won't lose sleep over it. But, I don't like things that can't be explained (I relentlessly pursue "intermittents" for this reason). Unfortunately, I wasn't in shape to follow up on this at the time ;-)

Reply to
D Yuniskis
