Tone Deafness

Just change the cadence of the tone. One beep per second, 4 beeps per second, etc.

Reply to
miso

The problem is these aren't designed with the entire population in mind. E.g., a fighter pilot in combat is far more focused (and motivated! :> ) than a 70 year old retiree with failing hearing, vision, attention, etc. Or, someone with a hearing impairment...

I've canvassed lots of research papers trying to sort out how these sorts of things affect "real people". And, coerced different people to be guinea pigs as I tested out various approaches. Not very scientific and no "hard and fast rules". E.g., how often can a person be "interrupted"? How far apart in space can those cues be sited? How do you manage the inevitable "overload"?

But, it brought me to the "three layer" model which also seems intuitive (and, very easy for most people to relate to -- as they ultimately will have to configure it!)

I'll chase it down. Thanks!

Reply to
Don Y

My favorite is a feature on Ford vehicles: you can speed-limit the car. I found this out when I rented a Ford Focus from ZipCar. I'm a left-lane, heavy-footed driver, so I punched the accelerator. It limits at 80 MPH, and it warns you as you approach that limit.
Reply to
T
Reply to
T

Many just quit from a lack of fuel. ;-)

But if they quit before the tank is completely empty, that would at best protect against sucking up crud that floats on top of the fuel. The intake pipe is always at the bottom of the tank, so crud from the bottom will get sucked in even with a full tank. Luckily, most have an intake filter.

--
Stef    (remove caps, dashes and .invalid from e-mail address to reply by mail) 

Reading is to the mind what exercise is to the body.
Reply to
Stef

Only if they catch the first beep. A single tone vs. a modulated tone is better, and it should last long enough to be clear.

Mine will do that for frost alert which seems quite reasonable to me. Not sure what it does for low oil warning yet - this is the first car I have had with electronic only indication and no physical dip stick.

That is scary. What model of car has such an insane behaviour?

Mine has a warning light that goes beep with 100 mile range remaining and will beep every time the car is started when range is below 100.

How does that work? Engine management should never allow a modern car engine to stall on an automatic. My (manual clutch) car has smarts to switch the engine off when it is stationary at traffic lights.

I once had a weird immobiliser fault that did that to us in a busy intersection - nothing to do apart from get out and wait for a tow truck. The fault never recurred, but the failed unit was referred to the maker as the local repair guy couldn't do a satisfactory reset on it.

With no electrics you can't even put the hazard flashers on!

--
Regards, 
Martin Brown
Reply to
Martin Brown

It's not on all Ford vehicles, and it is also available on vehicles from a variety of other manufacturers. There are also innumerable aftermarket kits you can install to do the same thing. The primary market is parents, rental agencies, and fleet users. In some locations such things are required by law for larger vehicles (for example, all larger trucks and buses in the UK have a speed governor limiting them to 90 km/h*), although those tend not to be (end-user) programmable like the typical ones for cars.

There have even been examples where different keys had different limits. Some Corvettes limited the speed and engine power when started with the valet key, for example.

Some other cars have taken a monitoring approach instead - at least one model of BMW allowed you to set a limit, and if it was exceeded, an indicator on the dash would light and stay lit until it was reset (which needed a password). So it wouldn't prevent your kid from going fast, but you'd know about it the next time you drove the car.

*FSVO "all": The limit has changed over time, and I don't know what the grandfathering rules are, so it's possible some older vehicles with slightly higher limits are still on the road, plus there are vehicles that have a lower limit.
Reply to
Robert Wessel

Too lazy to read all the posts, but a 1 second long solid tone is very different from 3 short tones.

Or as they use in the movies, blink (beep) once for yes, twice for no.
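The cadence-coding idea here (one solid tone vs. several short beeps at the same pitch) can be sketched as data rather than prose. This is a minimal illustration, not anything proposed in the thread; the alert names and timings are invented:

```python
# Sketch: encode alerts as beep cadences so listeners can distinguish
# them by rhythm even when the tone frequency is identical.
# Alert names and timings below are invented for illustration.

def cadence(on_ms, off_ms, count):
    """Return a list of (state, duration_ms) segments for one alert cycle."""
    segments = []
    for i in range(count):
        segments.append(("on", on_ms))
        if i < count - 1:          # no trailing silence inside the cycle
            segments.append(("off", off_ms))
    return segments

# Hypothetical alert vocabulary: same pitch, different rhythm.
ALERTS = {
    "low_fuel": cadence(1000, 0, 1),    # one solid 1-second tone
    "low_oil":  cadence(150, 150, 3),   # three short beeps
    "frost":    cadence(250, 750, 2),   # slow double beep
}

def total_on_time(segments):
    """Total sounding time (ms) in one cycle."""
    return sum(d for s, d in segments if s == "on")
```

Driving a real piezo from such a table is then just walking the segment list with a timer.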

Scott

Reply to
NotReallyMe

On a sunny day (Fri, 16 Aug 2013 10:23:30 -0700) it happened NotReallyMe wrote in :

beep beep beep

Reply to
Jan Panteltje

It is a 2005 VW Jetta!

Reply to
Charlie E.

So maybe it could sound off when stopped, instead of creating the impression it's the sort of thing that breaks at higher speeds.

--

Reply in group, but if emailing remove the last word.
Reply to
Tom Del Rosso

LCD displays are cheap enough that they could use one to display the actual problem rather than just an idiot light that says 'CHECK ENGINE'. A 16x2 would do, but a 20x4 would be better. Use the aural alarm to alert you to the problem, with two distinct signals: one for immediate problems, and one for things that need attention soon. If the engine is low on oil, or overheating, you can't wait. Other low fluids can damage the vehicle, but I don't need a 'CHECK ENGINE' light coming on to tell me the Freon level is low. If it's already lit for that minor problem, it can't alert you to more important things.
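The routing described above is easy to sketch: each fault maps to a one-line message (clipped to the LCD width) plus one of the two audible signals. The fault names, messages, and severities here are invented for illustration:

```python
# Sketch of severity routing for a character LCD: each fault gets a
# 20-character message row plus one of two audible signals (urgent vs.
# advisory). Fault names and severities are made up for illustration.

LCD_WIDTH = 20

FAULTS = {
    "oil_pressure_low": ("STOP: OIL PRESSURE", "urgent"),
    "coolant_overtemp": ("STOP: OVERHEATING", "urgent"),
    "refrigerant_low":  ("A/C REFRIGERANT LOW", "advisory"),
    "washer_fluid_low": ("WASHER FLUID LOW", "advisory"),
}

def display_lines(fault):
    """Return (lcd_row, beep_style) for a fault."""
    text, severity = FAULTS[fault]
    row = text[:LCD_WIDTH].ljust(LCD_WIDTH)   # clip/pad to one LCD row
    beep = "fast" if severity == "urgent" else "slow"
    return row, beep
```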

--
Anyone wanting to run for any political office in the US should have to 
have a DD214, and a honorable discharge.
Reply to
Michael A. Terrell

Or as Wile E. Coyote once said to the Road Runner, "Beep! Beep!, your @#$%^ ass!"

--
Anyone wanting to run for any political office in the US should have to 
have a DD214, and a honorable discharge.
Reply to
Michael A. Terrell

While it is focused on military applications, a lot of what it has to say is quite generally applicable. While it does not deal much with people with impairments, it does address environmental interference.

Could you expand a bit on the three layer model?

Reply to
josephkk

Agreed. I'm just saying that designing to standards like this is pretty much like designing for what a "nominal" person is expected to be -- not really the population as a whole.

(E.g., young children, elderly, folks with various "disabilities", etc. aren't addressed, here)

I think I explained it up-thread. But, basically, treat audio as existing in three "layers":

- background is stuff that we know is there yet essentially "ignore". Like conversations happening around us IN WHICH WE ARE NOT PARTICIPANTS. Or, "background music". Or, the TV chattering away while you are doing something else. It's there; you recognize that it's there; you'd notice if it suddenly went missing; you *might* catch something of interest "happening" in that layer (e.g., if you manage to hear folks talking about *you*!); but, for the most part, you ignore it.

- foreground/focus is that with which you are actively engaged. The conversation you are participating in. A TV broadcast you are listening to (over the background noise of other conversations nearby). A musical score. You are PAYING ATTENTION to its content as you are "interested" in it (at least, for the moment).

- distractions/annunciators/interruptions/alerts. An asynchronous layer of events that compete for your attention/focus. A young kid wandering into the room while you're watching TV asking for something to eat. A phone ringing. Doorbell. Fire alarm. etc. Each tries to distract your "focus" and become your *new* focus.

Thinking in this sort of framework, a user's abilities can then be mapped to relative volume levels (if the background is too loud, you can't concentrate on the focus/foreground; if an annunciator is too soft, it won't stand out against the foreground and background; etc.) and temporal/physical displacements (annunciators occurring too close to each other in time/space can't be successfully and reliably resolved -- you are "overloaded" by too many distractions).

I.e., it takes some amount of "processing power" to "register" an annunciator (interruption), recognize what it signifies, and then evaluate that significance in the context of your current "focus" (do I really want to be bothered answering that phone call, now?).

A *second* interruption occurring before a previous interruption has been "handled" rapidly overloads your ability to *remember* which interruptions are "enqueued" (remember, an interruption need not be a persistent sound: "the dryer signal went off", "someone rang the doorbell", "dinner is ready", etc.).

And, how close together such events can be for a *particular* individual varies -- with the individual (some folks are a bit more sluggish to react), with the current focus (dealing with an interruption while totally engrossed in an activity vs. just casually watching TV), with the nature and familiarity of the alert ("what the hell is that sound?").

While a particular person may not be able to decide, a priori, how close together (in time/space) alerts can occur for his/her abilities, he/she can still relate to the idea of dealing with sound on these three "layers". It doesn't require a technical description of how the brain processes sound, The Cocktail Party Effect, etc.

I contend that a successful audio display is one that provides an effective way for the user (listener) to manage that "alert" layer of events -- swapping them into his "focus", etc. -- and that builds a predictable framework minimizing the need for the user to "remember" what has occurred (but been ignored/deferred).

[I think we are better able to remember visual events than aural ones -- we have "language" that comes to our aid to distill a visual image into a summary of what it represents. Audio events aren't always as easily and deterministically distilled ("it was a beep of some sort"; "it was a screeching sort of sound"; etc.) So, you have to take care to give the user some way of quickly "resolving" the nature of an alert ("it was the telephone ringing"; "it was the doorbell"; "it sounded like water running"; "IT CAME FROM OVER THERE"; etc.) so he doesn't have to try to remember the actual *sound*]
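The alert layer's spacing/overload constraint can be sketched in a few lines. This is my own minimal interpretation, not an actual design from the thread; the spacing value stands in for the per-user configuration described above, and alerts that arrive "too soon" are queued rather than dropped, so the system remembers them instead of the user:

```python
# Sketch of the alert layer: annunciators compete for the user's focus,
# and the scheduler enforces a per-user minimum spacing between them,
# queueing (not dropping) anything that arrives too soon.
# The spacing parameter is a stand-in for the per-user configuration.

from collections import deque

class Annunciator:
    def __init__(self, min_spacing_s):
        self.min_spacing = min_spacing_s
        self.pending = deque()       # alerts deferred, remembered for the user
        self.last_fired = None

    def submit(self, name, now):
        """Offer a new alert; fire it immediately if spacing allows."""
        self.pending.append(name)
        return self.poll(now)

    def poll(self, now):
        """Fire the next queued alert if enough time has elapsed, else None."""
        if not self.pending:
            return None
        if self.last_fired is not None and now - self.last_fired < self.min_spacing:
            return None              # too soon: user is still "handling" the last one
        self.last_fired = now
        return self.pending.popleft()
```

For example, with 5 s spacing, a doorbell arriving 2 s after a phone alert is held back and delivered on a later poll rather than colliding with it.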

Make sense?

Reply to
Don Y

This is one of the major issues addressed (rather extensively) by MIL-STD-1472.

It is free for the asking, though reasonably long (about 100 pages IIRC)

?-)

Reply to
josephkk

It's not the piezo that's the problem, but the frequency they emit.

There are two different mechanisms for sound localisation - one for sounds from 800 Hz down and one for sounds at frequencies of 1.6 kHz and higher.

Neither works very well at around 1 kHz - where the ear is most sensitive - and many beepers are designed by people who are unaware of this and go for the most easily audible signal. It got to be a real problem in the open-plan offices at Cambridge Instruments, where nobody could tell whose phone was buzzing at around 1 kHz.
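The rule of thumb here fits in a tiny helper: phase (ITD) cues dominate below ~800 Hz, level (ILD) cues above ~1.6 kHz, and neither works well in the gap around 1 kHz where many beepers sit. The band edges are the ones stated above; the helper itself is just an illustration:

```python
# Sketch: which binaural localization cue applies at a given frequency,
# using the ~800 Hz and ~1.6 kHz boundaries mentioned above.

def localization_cue(freq_hz):
    if freq_hz <= 800:
        return "time/phase (ITD)"
    if freq_hz >= 1600:
        return "level (ILD)"
    return "ambiguous"           # the ~1 kHz dead zone

def good_beeper_frequency(freq_hz):
    """A beeper is easy to localize if it avoids the ambiguous band."""
    return localization_cue(freq_hz) != "ambiguous"
```

By this rule a 1 kHz phone buzzer is the worst possible choice, while either a 440 Hz or a 3 kHz tone would localize.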

But mainly offer a decent range of frequencies.

No comment.

--
Bill Sloman, Sydney
Reply to
Bill Sloman

Actually, maximum sensitivity is around 3 kHz.

The above-mentioned sound localization is largely in azimuth angle. What's missing is distance. The number of harmonics is also important. The farther a sound travels, especially in an office environment, the more the upper harmonics are attenuated, so one can estimate distance from the spectrum.

This was discovered in the Bell System Technical Journal articles on the design of the telephone ring sound (not the ringer circuit). The bells were designed to yield adequate harmonics, and the two bells were tuned a bit apart from one another.

I don't recall the specific BSTJ issue, but I read the articles in the early 1970s, and recall that the research was done in the late 1940s or early 1950s, when the classic black beauty desk phones were being designed.

More recently (2005?) we had some phones with pure-tone rings, and one could not tell whose phone it was. There was plenty of signal, so all the pure-tone analysis mechanisms had plenty of SNR. It turns out that amplitude isn't all that useful for determining distance in a complex environment. Those phones are now junked.
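The harmonic-attenuation distance cue can be illustrated with a toy model (the coefficient below is invented, not a measured absorption figure): if attenuation grows roughly with frequency squared, then the level of a harmonic relative to its fundamental falls with distance, which is exactly the cue a pure-tone ring cannot provide:

```python
# Toy illustration (invented coefficient, not measured data): higher
# harmonics lose more energy over distance than the fundamental, so the
# harmonic-to-fundamental level ratio drops as the source gets farther
# away -- a distance cue absent from pure tones.

def harmonic_ratio_db(fundamental_hz, harmonic_n, distance_m,
                      alpha_db_per_m_per_khz2=0.05):
    """Excess attenuation (dB) of harmonic n relative to the fundamental,
    using a crude absorption model that grows with frequency squared."""
    def absorb(f_hz):
        return alpha_db_per_m_per_khz2 * (f_hz / 1000.0) ** 2 * distance_m
    return absorb(harmonic_n * fundamental_hz) - absorb(fundamental_hz)

near = harmonic_ratio_db(500, 4, 2.0)    # 4th harmonic of 500 Hz, 2 m away
far  = harmonic_ratio_db(500, 4, 20.0)   # same harmonic, 20 m away
```

With these toy numbers the 2 kHz harmonic is noticeably duller at 20 m than at 2 m, while a pure 500 Hz tone would show no spectral change at all.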

Joe Gwinn

Reply to
Joe Gwinn

I would suggest starting with any of the blackjack dealers in the Las Vegas casinos who have to endure the constant barrage of slot machine sounds.

Spend an hour on those floors and you won't be able to tell a ding from a dong.

Reply to
mpm

On a sunny day (Sun, 18 Aug 2013 09:26:31 -0700 (PDT)) it happened mpm wrote in :

It was not so bad, but it got noisy when I set off the alarms. :-)

Reply to
Jan Panteltje

There are *lots* of mechanisms involved in localizing sound ("placing" it in space).

Essentially, these boil down to time, level (amplitude) and spectra. These can further be grouped into monaural cues and binaural cues -- the latter involving comparisons "between ears" (Interaural - 'I')

At low frequencies, we "hear" the difference in phase between signals arriving at the "near" ear vs. the "far" ear -- the difference in transit times of the acoustic wave, since the signal has farther to travel to reach the "far" ear than the near ear.

At high frequencies, the distance between the "ears" approaches (exceeds) the half-wavelength of the signals involved so we process the *time* difference between signals arriving near and far. In each case, the brain processes temporal differences -- Interaural Time Differences (ITD).

Additionally, the level/amplitude of signals arriving at the far ear is diminished relative to those at the near ear. While some of this is *technically* a distance phenomenon, it is more accurately described as a consequence of the head "absorbing" some of the sound that would otherwise have reached the far ear (had the head not been in the way!) -- Interaural Level (or Amplitude, depending on the researcher -- some even referring to this as Intensity!) Differences.

As with ITD's, the ILD's are also frequency dependent. Higher frequencies being more readily attenuated by the head's presence than lower ones.

ITD and ILD mechanisms most readily resolve location left to right (azimuth). "We" can resolve to a few degrees of accuracy things located (more-or-less) in front of us. This corresponds to time differences on the order of tens of microseconds.
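The tens-of-microseconds figure can be checked with the classic Woodworth approximation for ITD, which is not from the thread but is a standard textbook formula: ITD ≈ (r/c)(θ + sin θ), with head radius r ≈ 8.75 cm and speed of sound c ≈ 343 m/s:

```python
# Sketch: Woodworth's spherical-head approximation for interaural time
# difference. A source a few degrees off-center yields an ITD of a few
# tens of microseconds, matching the figure quoted above.

import math

def itd_seconds(azimuth_deg, head_radius_m=0.0875, c=343.0):
    """Approximate ITD for a distant source at the given azimuth."""
    theta = math.radians(azimuth_deg)
    return (head_radius_m / c) * (theta + math.sin(theta))

itd_3deg = itd_seconds(3.0)    # a source 3 degrees off-center
```

At 3 degrees this works out to roughly 27 microseconds, growing to several hundred microseconds for a source off to one side.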

[IIRC, we can sense a difference of a "tenth" -- 0.0001" -- in the geometry of our teeth! But, that's a different subject entirely! :> ]

Distance is handled by looking at the frequency response of the signal *and* echos from our environment (that we "learn"). Far things tend to be softer and "muddier"; near things louder and "brighter". If the sound is sensed as "in motion", we tend to think of it as nearer (for a given amount of detectable motion).

[We tend to have the most trouble accurately resolving distance!]

All of these mechanisms are largely listener INDEPENDENT. And, the aspect of 3-space localization that they all fail to address is elevation and front-back resolution (is that sound actually in front of me in this general direction? or, *behind* me in a similar direction??)

Here, the pinnae come into play -- those parts of our bodies that we visualize when someone says "ears".

Sounds coming from the *back* of the (or *a*) pinna are shaded by that pinna in much the way the head shades near/far sounds.

All the wacky folds in the pinnae's cartilages aren't there for structural support (e.g., like the fold in a blade of grass gives it rigidity). Rather, they create resonances and notches in the frequency response of our "hearing system". And, explain why some folks are better at localizing sound than others!

The interaural differences vary with the elevation of a particular sound source (frequency) because of the irregular nature of the pinnae folds. For a given/"known" sound, we can resolve elevation (not nearly as well as azimuth) by noting these differences.

In practice, we don't "know" sounds to that degree of accuracy. But, can readily *compare* a sound (to itself!) by altering the orientation of our head (and, thus, ears).

Unfortunately, our individual brains tune themselves to *our* pinnae. So, you can't synthesize an artificial frequency response for listener A and expect listener B to "hear" a sound behind those transforms at the same intended point in space. I.e., to simulate control of elevation and front/back resolution in an auditory display, you need to know the *actual* listener's HRTF (Head Related Transfer Function), which is largely controlled by the detailed shapes of the pinnae and the upper body (off of which acoustic waves reflect/echo).

[Of course, outside of nature, we can confuse the brain into thinking something "unnatural" by force-feeding it signals (audible or electrical) that "tell" it something that makes its normal processing come to a wrong conclusion! The brain makes assumptions, of course! :> ]

Indoors, you rely on echos as a big distance cue. Having multiple tones (a tone and its harmonics, etc.) makes it easier for you to resolve the differences.

This appears in other sensory phenomena as well. E.g., one of my favorite examples is the "Jawbreaker" (?) game found on some windows phones/PDAs. It appears as a screen full of circles ("balls" -- "jawbreakers"!). Your goal is to locate large contiguous clusters of same-colored balls which you can remove in a single action.

When configured for multiple colors (red, green, blue, yellow, purple) you would *think* the differences would be very obvious! But, the colors are presented at the same "value"! As a result, they tend to blend together much more than if there were more variety in value in *addition* to "just hue". E.g., if the blues were *darker* than the yellows, they would stand apart more from them!

[You'd have to be able to manipulate the colors used in the game to truly appreciate the effect this has!]
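The hue-vs-value point is easy to demonstrate numerically: pure hues all share the same HSV "value", so they differ only in hue, while darkening one of them adds a value difference on top. This is my own illustration of the effect, using Python's standard colorsys module:

```python
# Sketch: the pure game colors all share HSV value 1.0, so they differ
# only in hue; darkening one (e.g. blue) adds a value difference that
# helps it stand apart from the others.

import colorsys

def hsv_value(r, g, b):
    """HSV value (brightness) of an RGB color, components in 0..1."""
    return colorsys.rgb_to_hsv(r, g, b)[2]

pure = {
    "red":    (1.0, 0.0, 0.0),
    "green":  (0.0, 1.0, 0.0),
    "blue":   (0.0, 0.0, 1.0),
    "yellow": (1.0, 1.0, 0.0),
    "purple": (1.0, 0.0, 1.0),
}

values = {name: hsv_value(*rgb) for name, rgb in pure.items()}
darker_blue = hsv_value(0.0, 0.0, 0.5)   # half-value blue stands apart
```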
Reply to
Don Y
