exposing resource usage

Is there any potential downside to intentionally exposing
(to a task/process/job/etc.) its current resource commitments?
I.e., "You are currently holding X memory, Y CPU, Z ..."

*If* the job is constrained to a particular set of quotas,
then knowing *what* it is using SHOULDN'T give it any
"exploitable" information, should it?

[Open system; possibility for hostile actors]

Re: exposing resource usage
On Thu, 13 Apr 2017 18:22:05 -0700, Don Y

Quoted text here. Click to load it


It depends on how paranoid you want to be.  If some of the resources
the process has access to may be influenced by other actors, that
could expose a covert channel.

For example, if one of the bits of information was the number of real
pages held by a task, then another task could make storage demands
that could get some of the first process's memory paged out (or
discarded).  Similarly, the first process's CPU usage will be affected
by other processes in the system.  Wiggle those back and forth, and
you can do the equivalent of sending Morse code from one process to
another.

OTOH, some of those can be exploited with help from the OS: just time
how fast you're executing a loop - if another process also starts
running a CPU-bound loop, your rate should roughly halve.
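
For concreteness, here is a minimal sketch of that timing trick (my own
illustration, not anything from a particular OS), assuming a POSIX
clock_gettime() is available.  The "receiver" just counts how many loop
iterations fit into a fixed wall-clock window; a sustained drop to roughly
half the baseline suggests a co-conspirator has started a CPU-bound loop:

    #include <stdio.h>
    #include <stdint.h>
    #include <time.h>

    static double elapsed_s(struct timespec a, struct timespec b)
    {
        return (b.tv_sec - a.tv_sec) + (b.tv_nsec - a.tv_nsec) / 1e9;
    }

    int main(void)
    {
        struct timespec start, now;

        for (int sample = 0; sample < 10; sample++) {
            uint64_t iterations = 0;

            clock_gettime(CLOCK_MONOTONIC, &start);
            do {
                iterations++;                          /* "busy" work */
                clock_gettime(CLOCK_MONOTONIC, &now);
            } while (elapsed_s(start, now) < 0.1);     /* 100 ms window */

            /* The rate roughly halves while another CPU-bound task shares
               the core: that transition can be read as one "bit". */
            printf("sample %d: %llu iterations\n",
                   sample, (unsigned long long)iterations);
        }
        return 0;
    }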

OTTH, I've always thought the whole covert channel thing was just a
bit *too* paranoid.

Unrelated to the above, I could see some of that knowledge being used
to implement a DoS attack - tune your usage right up to the limit to
maximize the possible impact on the system.

Re: exposing resource usage
On 4/13/2017 6:56 PM, Robert Wessel wrote:
Quoted text here. Click to load it

Initially, I asked with an eye towards whether or not the
system's operational status could be compromised; if an exploit
could "crash" the system or render it effectively unusable.

E.g., if I don't allow a job to directly *query* its
resource quotas, it could still deduce them by empirically
increasing its resource usage and quantifying the result
(until the increases are denied).

    "Ah, OK.  I now know that I *can* use X, Y and Z!"

For example, in most systems, there is no *penalty* to
*attempting*:
     result = seteuid(0);
And, armed with this information, *later* using this capability
to nefarious ends.

OTOH, if this attempt caused the job to abend, it's harder
for a potential attacker to explore that capability without
tipping its hand that it is trying to do something that it
*shouldn't*!
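
As a hedged sketch of that "probe until denied" idea (my own illustration;
it assumes plain malloc() is the rationed resource and that the allocator
actually reports denial rather than silently overcommitting):

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    #define CHUNK (1024 * 1024)             /* probe in 1 MiB steps */

    int main(void)
    {
        size_t total = 0;

        for (;;) {
            void *p = malloc(CHUNK);
            if (p == NULL)
                break;                      /* denied: quota (or RAM) found */
            memset(p, 0, CHUNK);            /* touch it so it is really committed */
            total += CHUNK;                 /* never freed: this is a probe */
        }
        printf("Deduced usable memory: about %zu MiB\n",
               total / (1024 * 1024));
        return 0;
    }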

Quoted text here. Click to load it

I'd have considered this a second-order problem.  However, see below...

Quoted text here. Click to load it

To be clear, I'm only talking about exposing YOUR resource usage
to *you*.  Of course, you could always communicate this (overtly
or covertly) to a co-conspirator.

But, you'd not be able to infer anything about other "non-cooperative"
jobs in the system.

The temptation is to dismiss this as an impractical issue.  *But*,
all vulnerabilities can be seen as falling in that sort of category!

Famous last words:  "What are the odds..."

<frown>

Quoted text here. Click to load it

In my initial read of this, I considered it unimportant:  if A and B want
to talk to each other, then let them open a channel to exchange information.

But, your point addresses a "false security" that could be exploited by
willing co-conspirators.

E.g., application A is relatively benign and is granted broad communication
privileges to The Outside World (e.g., a web browser).  OTOH, application B
handles sensitive information (that is protected by the OS from "local leaks")
and tightly constrained in its communication channels:  only having access to
The User (console) and "www.mybank.com" (because it -- and app A -- rely
on a service to provide the connections to The Outside World; A having
broad capabilities with this service but B *only* having a capability to
www.mybank.com!)

B is "sold" as a secure application because it "physically" can't disclose
the secrets that you've entrusted to it (bank account number, passphrase)
to any other parties!

However, A is published by "AAA Corporation" and B is similarly published by
"BBB Corporation (a wholly-owned subsidiary of AAA Corporation)".  So, the
"guarantees" that your OS provides are effectively useless (though they
would prevent application C from learning anything from A *or* B!)

Beyond that, if the system itself is widely disclosed, an attacker *might*
be able to infer the resource uses of other jobs by manipulating its
resource demands and noticing how those are met by the system in light of
the "well known" characteristics of the other aspects of The System
and its other apps.

[I.e., you can put units in a clean room and instrument them to your
heart's content without ever letting anyone/anything KNOW that you
are "quantifying" their behaviors in this way]

Quoted text here. Click to load it

One would, presumably, ensure that the "reservations" placed on
resources for critical aspects of the system's performance would
keep the system functional.  For certain "optional" aspects (apps)
of the system, though, it would be impossible to guarantee a
certain level of performance.

OTOH, a savvy user would be able to see which apps are misbehaving
(with the help of an intelligent agent) -- either intentionally
*or* innocently -- and take steps to remove their deleterious
effects.

(sigh)  Thanks for bringing that to my attention!  It shoots to
shit my NEXT goal of providing a way for jobs to make themselves
aware of what other things are happening on the host  :<

Re: exposing resource usage
On Thu, 13 Apr 2017 20:56:19 -0500, Robert Wessel

Quoted text here. Click to load it

IIRC, some high security systems require that a process should _not_ be
able to determine whether it is running on a single CPU machine, a
multiple CPU system or in a virtual machine, nor, in any case, what the
CPU speed is.

Regarding quotas in more normal systems, quotas are used to keep
rogue processes from overclaiming resources.  In practice the sum of
specific quotas for all processes can be much greater than the total
resources available.  Thus, a process may have to handle a denied
request even if its own quota would have allowed it.

Only when the sum of all (specific) quotas in a system is less than
the total available resources should you be able to claim resources
without checks, as long as your claim is less than the quota allocated
to that process.

But how would a process know if the sum of quotas is less or more
than the resources available?  Thus, the only safe way is to check
for failed resource allocations in every case.
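
A tiny illustration of that point (the helper and sizes are invented):
even a caller that believes it is within its own quota still checks every
allocation and degrades gracefully when the system-wide pool is
overcommitted:

    #include <stdio.h>
    #include <stdlib.h>

    /* Hypothetical helper: asks for n_samples, settles for fewer on denial. */
    static double *make_buffer(size_t n_samples, size_t *granted)
    {
        while (n_samples > 0) {
            double *buf = calloc(n_samples, sizeof *buf);
            if (buf != NULL) {
                *granted = n_samples;
                return buf;
            }
            /* Denied despite being "under quota": retry with half the size. */
            fprintf(stderr, "buffer of %zu samples denied, retrying\n",
                    n_samples);
            n_samples /= 2;
        }
        *granted = 0;
        return NULL;
    }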



Re: exposing resource usage
On 4/14/2017 12:24 AM, snipped-for-privacy@downunder.com wrote:
Quoted text here. Click to load it

In practice, that's almost impossible to guarantee.  Especially if you can
access other agencies that aren't thusly constrained.  E.g., issue a
request to a time service, count a large number of iterations of a
loop, access the time service again...

Quoted text here. Click to load it

But, that can be to limit the "damage" done by malicious processes
as well as processes that have undetected faults.  It can also be
used to impose limits on tasks that are otherwise unconstrainable
(e.g., how would you otherwise limit the resources devoted to
solving a particular open-ended problem?)

Quoted text here. Click to load it

Yes, but how the "failure" is handled can vary tremendously -- see
below.

Quoted text here. Click to load it

That assumes the application is bug-free.

Quoted text here. Click to load it

How a resource request that can't be *currently* satisfied is
handled need not be an outright "failure".  The "appropriate"
semantics are entirely at the discretion of the developer.

When a process goes to push a character out a serial port
while the output queue/buffer is currently full (i.e., "resource
unavailable"), it's common for the process to block until the
call can progress as expected.

When a process goes to reference a memory location that has
been swapped out of physical memory, the request *still*
completes -- despite the fact that the reference may take
thousands of times longer than "normal" (who knows *when* the
page will be restored?!)

When a process goes to fetch the next opcode (in a fully preemptible
environment), there are no guarantees that it will retain ownership
of the processor for the next epsilon of time.

When a process wants to take a mutex, it can end up blocking
in that operation, "indefinitely".

Yet, developers have no problem adapting to these semantics.

Why can't a memory allocation request *block* until it can
be satisfied?  Or, any other request for a resource that is
in scarce supply/overcommitted, currently?
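
A minimal sketch of what such "block until it can be satisfied" semantics
could look like, using a toy byte-pool reservation counter and POSIX
condition variables; the pool, its size, and the pool_alloc()/pool_free()
names are assumptions for illustration, not any real RTOS interface:

    #include <pthread.h>
    #include <stddef.h>

    #define POOL_SIZE (64 * 1024)

    static pthread_mutex_t pool_lock  = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  pool_freed = PTHREAD_COND_INITIALIZER;
    static size_t pool_in_use = 0;

    /* Blocks (sleeps, doesn't spin) until n bytes can be reserved.
       Assumes n <= POOL_SIZE, or the caller would wait forever. */
    void pool_alloc(size_t n)
    {
        pthread_mutex_lock(&pool_lock);
        while (pool_in_use + n > POOL_SIZE)
            pthread_cond_wait(&pool_freed, &pool_lock);
        pool_in_use += n;
        pthread_mutex_unlock(&pool_lock);
    }

    void pool_free(size_t n)
    {
        pthread_mutex_lock(&pool_lock);
        pool_in_use -= n;
        pthread_cond_broadcast(&pool_freed);   /* wake any blocked requesters */
        pthread_mutex_unlock(&pool_lock);
    }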

This is especially true in cases where resources can be overcommitted
as you may not be able to 'schedule' the use of those resources
to ensure that the "in use" amount is always less than the
"total available".

Re: exposing resource usage
On Fri, 14 Apr 2017 00:53:06 -0700, Don Y

Quoted text here. Click to load it

Exactly for that reason, the process is not allowed to ask for the
time-of-day.

Quoted text here. Click to load it

There can be many reasons why the Tx queue is full.  For instance, on a
TCP/IP or CANbus connection, the Tx queue can fill up if the
physical connection is broken.  In such cases, buffering outgoing
messages for seconds, minutes or hours can be lethal when the
physical connection is restored and all the buffered messages are sent at
once.  So, it is important to kill the buffered Tx queue as
soon as the line fault is detected.

Quoted text here. Click to load it

This is not acceptable in a hard real time system, unless the
worst case delay can be firmly established.  For this reason, hard
RT systems seldom use virtual memory, or at least they lock
the pages used by the high priority tasks into the process working
set.

Quoted text here. Click to load it

There is a guarantee for the highest priority process only, but not
for other processes.  Still, hardware interrupts (such as the page fault
interrupt) may change the order even for the highest priority process.
For that reason, you should try to avoid page fault interrupts, e.g. by
locking critical pages into the working set.
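
On a POSIX system, one common way to do that locking is mlockall(); a
minimal sketch (whether it is available, and the privileges it needs,
depend on the OS and configuration):

    #include <stdio.h>
    #include <sys/mman.h>

    int main(void)
    {
        /* Pin everything currently mapped, and anything mapped later,
           so the time-critical work below cannot take page faults. */
        if (mlockall(MCL_CURRENT | MCL_FUTURE) != 0) {
            perror("mlockall");       /* typically needs elevated privileges */
            return 1;
        }

        /* ... time-critical work runs here without paging surprises ... */

        munlockall();
        return 0;
    }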


Quoted text here. Click to load it

For this reason, I try to avoid mutexes as much as possible by
concentrating on the overall architecture.

Quoted text here. Click to load it

As it is done in early architectural design.  Trying to add last-ditch
kludges during the testing phase is an invitation to disaster.

Quoted text here. Click to load it

Not OK for any HRT system, unless there is a maximum acceptable value
for the delay.


Quoted text here. Click to load it
Overcommitment is a no-no for HRT as well as high reliability systems.

These days the hardware is so cheap that for an RT / high reliability
system, I recommend 40-60 % usage of CPUs and communication
links.  Going much higher than that is going to cause problems sooner
or later.

 A 90-100 % utilization might be OK for a time sharing system or
mobile phone apps or for viewing cat videos :-)



Re: exposing resource usage
On Fri, 14 Apr 2017 12:53:54 +0300, snipped-for-privacy@downunder.com wrote:


Quoted text here. Click to load it

I just realized how old I am (still one of the youngest in the CAE and
especially SED newsgroups).  During my career in various forms of
computing, price/performance has improved by a ratio of one to
a million, depending on how you interpret Moore's law (is the
price/performance ratio doubling every 18 or 24 months?).  With such
huge ratios, it is cost effective to do things one way now and, only
2-4 years later, in a completely different way.

Things that required dedicated designs and optimization in the past
do not make sense these days, unless you are making several million
copies and want to save a single cent from the production cost.

For low volume products, it doesn't make sense to use too much
optimization these days.  Thus a person with long experience really
needs to think about how many "clever" features should be used.



Re: exposing resource usage
On 4/14/2017 5:33 AM, snipped-for-privacy@downunder.com wrote:
Quoted text here. Click to load it

I started work on "embedded" products with the i4004 -- with clock
rates in the hundreds of kilohertz and instruction execution times
measured in tens of microseconds -- for *4* bit quantities!  *Simple*
operations (e.g., ADD) on "long ints" were on the order of a MILLIsecond.
Memory was measured in kiloBITS, etc.

Now, I can run an 8 core 2GHz box with 32GB of RAM and 4T of secondary
store for "lunch money"  :<

Quoted text here. Click to load it

IME, the *hidden* cost of saving pennies (typically, reduced reliability)
far outweighs the recurring cost of all but trivial designs.  Let the
machine (system) carry most of the load so the developer (and user)
aren't burdened/inconvenienced by it.

In the (development) time it takes me to *save* a few pennies, the product
costs have FALLEN by those same few pennies.

[There are, of course, special circumstances that defy these generalizations.
But, most folks waste effort nickel-and-diming designs needlessly.]

Re: exposing resource usage
On Fri, 14 Apr 2017 10:22:28 -0700, Don Y

Quoted text here. Click to load it

You need to consider the input/output speeds.  Essentially the 4004
was a calculator chip on steroids.  The input speed for a manual
calculator is about 100 ms/decimal digit and one expects that the
result is displayed in a second, so you could do quite complicated
computations even with a 1 ms (long) decimal add time.

I just calculated that the 4004 would have been sufficient to handle
summation of data from a slow card reader (300 CPM, cards per minute),
so with ten 8-digit decimal numbers on each card, you would have to
handle 50 long decimal numbers each second.  Using a medium speed (1000
CPS, characters per second) paper tape, this would be 125 long decimal
integers/s, which would be quite hard for the 4004 to handle.

Simple decimal computers in the 1960's  often used a 4 bit BCD ALU and
handled decimal digits serially.   This still required a lot of DTL or
TTL chips and the CPU cost was still significant.  

With the introduction of LSI chips, the cost dropped significantly in
a few years.

Any programmable calculator today will outperform any 1960's decimal
computer by a great margin, at a tiny fraction of the cost.

If things were done in one way in the past with different constraints,
implementing it today the same way might not make sense.

The 4004 had a nice 4 KiB program space. Small applications even in
the 1980's didn't need more and reprogramming a 4 KiB EPROM took just
5 minutes :-)


Re: exposing resource usage
On 4/17/2017 10:24 AM, snipped-for-privacy@downunder.com wrote:
Quoted text here. Click to load it

We used it to plot current position based on real-time receipt of
LORAN-C coordinates:
    <https://en.wikipedia.org/wiki/Loran-C>

Each "coordinate axis" (i.e., X & Y, latitude & longitude, etc.)
in LORAN consists of a family of hyperbolic "lines of constant
time difference":
   <https://en.wikipedia.org/wiki/File:Crude_LORAN_diagram.svg>
between a master transmitter and one of its slaves (A&B in the
diagram).  With families from *two* such slaves INTERSECTING,
you can "uniquely" [1] determine your location on the globe
(knowing the latitude and longitude of the master and associated
slaves, the shape of the earth, propagation time of radio waves
and "conic sections").

[1] This is a lie as a single hyperbolic curve from one family
(time-difference coordinate #1) can intersect another hyperbolic
curve from another family (TD coordinate #2) at *two* points,
unlike a (latitude,longitude) tuple that is unique.  To confirm
this, print two copies of the above sample and skew them so
AB is not parallel to AC (assume C is the renamed B on the second
instance)

Coordinates are processed at a rate of 10GRI (10 sets of
transmissions -- GRI is the time between transmissions from the master;
    <https://en.wikipedia.org/wiki/Loran-C#LORAN_chains_.28GRIs.29>).
Each GRI is typically about 50-100ms, so 10GRI is 500-1000ms.

It's a fair bit of work to resolve two hyperbolae on an oblate
sphere mapped to a scaled Mercator projection and drive two
stepper motors to the corresponding point before the next
"fix" arrives.

This is the second generation (8085-based) version (bottom, center):
  <http://www.marineelectronicsjournal.com/Assets/lew%20and%20jim%20best%20copy.jpg>

By then, the code space had soared to a whopping 12KB (at one time,
close to $300 of EPROM!) -- with all of 512 bytes of RAM!!

Quoted text here. Click to load it

The Z80 was still a 4b ALU (multiple clocks to process 8b data)

Quoted text here. Click to load it

Of course!  I suspect I could reproduce the software for the plotters
in a long weekend, now.  No need to write a floating point library,
multiplex PGD displays, scan keypads, drive motor coils, count
*bits* of storage, etc.  Just use <math.h> and a graphics library to
plot line segments on a display "instantaneously".  Load a set
of maps from FLASH, etc.

Quoted text here. Click to load it

You were using 1702's in the mid 70's -- 2Kb (not KB!) parts.


Re: exposing resource usage
On 14.4.2017 г. 15:33, snipped-for-privacy@downunder.com wrote:
Quoted text here. Click to load it

The good thing about aging is that we don't notice it a lot ourselves as
long as we are healthy. The outside world takes care of keeping us up
to date of course...

Hardware has always been ahead of software, and as hardware becomes
faster for the same tasks done 30+ years ago, the gap is allowed to
widen - to scary dimensions, I would say.  But this is how evolution works,
I guess; eventually some balance will be reached.  Not that we have
that moment in sight, as far as I can see.

Dimiter





Re: exposing resource usage
Hi Dimiter,

On 4/14/2017 11:26 AM, Dimiter_Popoff wrote:
Quoted text here. Click to load it

...*if* you let it!

Quoted text here. Click to load it

It's unfair to suggest that software hasn't ALSO evolved/improved
(in terms of "concepts per design" or some other bogo-metric).

One can do things in software, now, "in an afternoon" that would have
taken weeks/months/years decades ago.  And, with a greater "first pass
success rate"!

The thing that has been slowest to evolve is the meatware driving
the software design methodologies.  It's (apparently?) too hard for
folks to evolve their mindsets as fast as the hardware/software
technologies.  Too easy ("comforting/reassuring"?) to cling to
"the old way" of doing things -- even though that phraseology
(the OLD way) implicitly acknowledges that there *are* NEW way(s)!


Re: exposing resource usage
On 14.4.2017 г. 22:45, Don Y wrote:
Quoted text here. Click to load it

Hi Don,


well we can choose to ignore the reminders of course and we do it - to
the extent possible :). Staying busy doing new things is the best recipe
I know of.

Quoted text here. Click to load it

Oh, I am not saying that; of course software has also evolved.  Just not
at the same quality/pace ratio; the pace might have been even higher than
with hardware.

Quoted text here. Click to load it

Yes, of course.  But this is mainly because we still have to do more or
less the same stuff we did during the 80s, now having resources several
orders of magnitude faster.
I am not saying this is a bad thing, we all do
what is practical; what I am saying is that there is a lot of
room for software to evolve in terms of efficiency.  For example, today's
systems use gigabytes of RAM, most of which stays untouched for ages;
this is a resource evolution will eventually find useful things for.

Quoted text here. Click to load it

Well this is part of life of course. But I think what holds things
back most is the sheer bulkiness of the task of programming complex
things, we are almost at a point where no single person can see the
entire picture (in fact in almost all cases there is no such person).
As an obvious consequence things get messy just because too many people
are involved.
I suppose until we reach a level where software will evolve on its
own, things will stay messy and probably get messier than they are
today.  Not sure how far we are from this point; maybe not too
far.  I don't have anything like that working at my desk of course,
but I can see how I could pursue it based on what I already have - if
I could afford the time to try.

Dimiter

------------------------------------------------------
Dimiter Popoff, TGI             http://www.tgi-sci.com
------------------------------------------------------
http://www.flickr.com/photos/didi_tgi/






Re: exposing resource usage
Hi Dimiter,

On 4/14/2017 2:27 PM, Dimiter_Popoff wrote:
Quoted text here. Click to load it

I think that last statement goes to the heart of the matter; folks
don't have (make?) the time to try new things.  They're rushed by
development schedules (safer to stick with an estimate for an "old"
approach with which you have experience than guesstimate on something
completely novel), support for legacy products (even if "legacy"
is 6 months ago!), new "business practices", etc.  Many developers
are more comfortable sitting on their laurels (even if they don't
HAVE any!  :> ) than reaching out to explore new application domains
and solution spaces.  And, there are often external pressures
(boss/client/customer/peer) trying to coerce their efforts in a
certain direction (easier to just "go along" than to "make a stand").

Those folks who are more "independent" often have to worry about mouths
to feed, etc. -- can't risk botching a project if it means delaying (or
losing!) your income stream!

Finally, I don't think many folks watch to see what's happening in
the universities and other research domains -- places where folks
don't typically have these same pressures (i.e., they don't have
to produce a 'product' timed to a 'market' so have more leeway to
experiment with new ideas without penalty).  If a developer tries
a new language or development strategy, he feels like he's made
a HUGE effort -- compared to his peers.  The idea of moving into an
entirely new application domain and design approach is just too
big of a leap, for most.

As with the advice of measuring before optimization, they're guilty
of coming to conclusions -- before even attempting the experiment!

[Think about your own experience.  *If* you could, would you approach
your current products differently?  If starting FROM SCRATCH??
Different hardware, software, feature sets, etc.?  And, likely, the
reason you don't make those radical changes for your *next* product
is simply because it would be too big of an investment along with the
psychological abandoning of your *previous* investment.  Hard to do
when you're busy living life, today!]

Re: exposing resource usage
On 4/14/2017 2:53 AM, snipped-for-privacy@downunder.com wrote:
Quoted text here. Click to load it

But then it also can't be allowed to ask a whole class of questions that
would let it *infer* the amount of elapsed time.  E.g., send an HTTP
request to *google's* time service; wait; send another request...

It's very difficult to apply "filters" that can screen arbitrary
information from the results to which they are applied!  :>

Quoted text here. Click to load it

Yet a typical API offers only *one* way of handling ALL of those conditions!
Much cleaner to let the application decide how *it* wants the
"potential block" to be implemented based on *its* understanding
of the problem space.

In my case, I allow a timer-object to be included in service
requests (everything is a service or modeled as such).  If the
request can't be completed (and isn't "malformed"), the task
blocks until it can be satisfied *or* the timer expires (at
which time, the "original error" is returned).

So, if you want the task to continue immediately with the
error indication, you pass a NULL timer to the service,
effectively causing it to return immediately -- with PASS/FAIL
status (depending on whether or not the request was satisfied).

[Note that the timer doesn't limit the duration of the service request;
merely the length of time the task can be BLOCKED waiting for that
request to be processed]
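
For illustration only (this is not the poster's actual API), here is a
sketch of those semantics built on POSIX pthread_cond_timedwait(): a NULL
timer hands back the "original error" immediately, otherwise the caller
blocks until the resource frees up or its slack time (an absolute deadline
here) runs out.  The competitorA/competitorB pseudo-code below would call
something shaped like this:

    #include <errno.h>
    #include <pthread.h>
    #include <stdbool.h>
    #include <stddef.h>
    #include <time.h>

    typedef enum { SVC_OK, SVC_UNAVAILABLE, SVC_TIMEOUT } svc_status;

    static pthread_mutex_t lock  = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  freed = PTHREAD_COND_INITIALIZER;
    static bool resource_busy = false;

    svc_status acquire_resource(const struct timespec *deadline)
    {
        svc_status s = SVC_OK;

        pthread_mutex_lock(&lock);
        while (resource_busy) {
            if (deadline == NULL) {           /* NULL timer: fail right away */
                s = SVC_UNAVAILABLE;
                break;
            }
            /* Block, but only up to the caller's slack time. */
            if (pthread_cond_timedwait(&freed, &lock, deadline) == ETIMEDOUT) {
                s = SVC_TIMEOUT;              /* return the "original" error */
                break;
            }
        }
        if (s == SVC_OK)
            resource_busy = true;
        pthread_mutex_unlock(&lock);
        return s;
    }

    void release_resource(void)
    {
        pthread_mutex_lock(&lock);
        resource_busy = false;
        pthread_cond_signal(&freed);
        pthread_mutex_unlock(&lock);
    }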

Quoted text here. Click to load it

Or, not!  How often do you unplug a network cable momentarily?
Should EVERYTHING that is pending be unceremoniously aborted
by that act?  Even if you reconnect the cable moments later?

Quoted text here. Click to load it

The key, here, is to know what the worst case delay is likely to be.
E.g., if two RT processes vie for a (single) shared resource, there
is a possibility that one will own the resource while another is
wanting it.  But, if the maximum time the one can hold it fits
within the "slack time" budget of the other competitor, then
there is no reason why the competitor can't simply block *in*
the request.  This is MORE efficient than returning FAIL and
having the competitor *spin* trying to reissue the request
(when BLOCKED, the competitor isn't competing for CPU cycles,
so the process holding the resource has more freedom to "get
its work done" and release the resource!)

Again, the *application* should decide how a potentially FAILed
request is handled.  And, as one common solution is to spin reissuing
that request, exploiting this *in* the service makes life
easier for the developer AND yields a more reliable product.

competitorA:
     ...
     result = request_resource(<parameters>, SLACK_TIME_A)
     if (SUCCESS != result) {
         fail()
     }
     ...

competitorB:
     ...
     result = request_resource(<parameters>, SLACK_TIME_B)
     if (SUCCESS != result) {
         fail()
     }
     ...

Quoted text here. Click to load it

Again, that depends on the application and the rest of the activities
in the system.

E.g., I diarize recorded telephone conversations as a low-priority
task -- using whatever resources happen to be available at the time
(which I can't know, a priori).  If the process happens to fault
pages in *continuously* (because other processes have "dibs" on
the physical memory on that processor), then the process runs slowly.

But, the rest of the processes are unaffected by its actions (because
the page faults are charged to the diarization task's ledger, not
"absorbed" in "system overhead").

OTOH, if the demands (or *resource reservations*) of the rest of the system
allow for the diarization task to have MORE pages resident and, thus,
reduce its page fault overhead, the process runs more efficiently
(which means it can STOP making demands on the system SOONER -- once
it has finished!)

Quoted text here. Click to load it

You can't always do that.  Larger systems tend to require more
sharing.  Regardless of how you implement this (mutex, monitor,
etc), the possibility of ANOTHER process having to wait increases
with the number of opportunities for conflict.

Rather than rely on a developer REMEMBERING that he may not be
granted the resource when he asks for it AND requiring him to
write code to spin on its acquisition, let the system make that
effort more convenient and robust for him.  So, all he has to
do is consider how long he is willing to wait (BLOCK) if need be.

Quoted text here. Click to load it

How is making a consistent "time constrained, blocking optional"
API a kludge?

Quoted text here. Click to load it

See above.  Task A is perfectly happy to be BLOCKED while tasks B, C, D
and Q all vie for the processor.  Yet, that doesn't preclude their use in
a RT system.

Quoted text here. Click to load it

Nonsense.  Its a fact of life.

Do you really think our missile defense system quits when ONE deadline
is missed?  ("Oh, My God!  A missile got through!  Run up the white flag!!")

The typical view of HRT is naive.  It assumes hard deadlines "can't be missed".
That missing a network packet -- or character received on a serial port -- is
as consequential as missing a bottle as it falls off the end of a conveyor
belt... or, missing an orbital insertion burn on a deep space probe.

A *hard* deadline just means you should STOP ALL FURTHER WORK on any
task that has missed its hard deadline -- there is nothing more to
be gained by pursuing that goal.

The *cost* of that missed deadline can vary immensely!

If Windows misses a mouse event, the user may be momentarily puzzled
("why didn't the cursor move when I moved my mouse").  But, the
consequences are insignificant ("I guess I'll try again...")

OTOH, if a "tablet press monitor" (tablet presses form tablets/pills
by compressing granulation/powder at rates of ~200/second) happens
to "miss" deflecting a defective tablet from the "good tablets"
BARREL, the press must be stopped and the barrel's contents
individually inspected to isolate the defective tablet from the
other "good" tablets.  (this is a time consuming and expensive
undertaking -- even for tablets that retail for lots of money!)

In each case, however, there is nothing that can be done (by the
process that was SUPPOSED to handle that situation BEFORE the
deadline) once the deadline has passed.

Quoted text here. Click to load it

Again, that depends on the "costs" of those "problems".

SWMBO's vehicle often misses button presses as it is "booting up".
OTOH, the video streamed from the backup camera appears instantly
(backing up being something that you often do within seconds of
"powering up" the car).  It's annoying to not be able to access
certain GPS features in those seconds.  But, it would be MORE
annoying to see "jerky" video while the system brings everything
on-line.

Or, reduce this start-up delay by installing a faster processor...
or, letting the software "hibernate" for potentially hours/days/weeks
between drives (and adding mechanisms to verify that the memory
image hasn't been corrupted in the meantime).

Quoted text here. Click to load it


Re: exposing resource usage
On Thu, 13 Apr 2017 18:22:05 -0700, Don Y wrote:

Quoted text here. Click to load it

It's one less barrier to some outside actor getting that information, and
therefore using it in an attack (if I know to do something that might make
a task overflow its stack, for instance, I'll have something concrete to
try to help me break in).

How hard are you going to work to keep that information out of the hands  
of outside actors?

--  

Tim Wescott
Wescott Design Services
Re: exposing resource usage
On 4/14/2017 10:54 AM, Tim Wescott wrote:
Quoted text here. Click to load it

Of course.  The challenge in the design of an OPEN system is coming to a
balance between what you do *for* the developer (to allow him to more
efficiently design more robust applications) vs. the "levers" that you
can unintentionally expose to a developer.

In *closed* systems, the system design can tend to assume the developers
are not malicious; that every "lever" thus provided is exploited to improve
cost, performance, etc.  Any flaws in the resulting system are consequences
of developer "shortcomings".

In an open system, you have all the same possibilities -- PLUS the possibility
of a malicious developer (or user!) exploiting one of those levers in a
counterproductive manner.

Quoted text here. Click to load it

The only way to completely prevent exploits is to completely deny access.
But, that's contrary to the goal of an open system.
