Photon counting for the masses

That was a bit silly of me. She lived 93 years. She knew me for 50-something.

--
"For a successful technology, reality must take precedence 
over public relations, for nature cannot be fooled."
                                       (Richard Feynman)
Reply to
Fred Abse

The requirement was that they provide insurance coverage that included *birth control*. (They would not be required to pay for it directly, but the cost would be absorbed by the insurance company.)

But "birth control" is not controversial these days, except for Catholic Bishops (who never get pregnant, nor do altar boys). So they hyped this into an abortion controversy, based on the emergency contraceptive mifepristone (RU-486).

This drug *could*, in fact, be used as an abortifacient (and aspirin could be used as a murder weapon). As an emergency contraceptive, however, it simply prevents the fertilized egg from implanting to start the pregnancy.

But the Religious Right has their own definitions for these things, so they are calling "failure to implant" an "abortion". Seems that as long as they can define their own terms they are always right...

By the way, natural failure to implant is exceedingly common, affecting at least 50% of fertilizations. The woman never even knows it happened. If that's "abortion" then God must be the biggest abortion provider ever!

Best regards,

Bob Masta DAQARTA v6.02 Data AcQuisition And Real-Time Analysis

formatting link
Scope, Spectrum, Spectrogram, Sound Level Meter Frequency Counter, FREE Signal Generator Pitch Track, Pitch-to-MIDI Science with your sound card!

Reply to
Bob Masta

Typical leftist crap. Insurance is free? Have you ever heard of "self-insurance"?

Irrelevant.

"Could be", my ass. RU-486 *IS* an abortifacient.

Try to learn something:

formatting link

RU 486, which is also known as mifepristone, is a synthetic steroid drug made from norethindrone, the active ingredient of Norplant. Its only purpose is to *induce abortions*, and it has been *authorized for this purpose* by the Food and Drug Administration (FDA) up to 49 days gestation.

Pure nonsense.

Reply to
krw

Don't forget Penrose tiling! That was my first exposure to the guy, many years ago. For those who don't know about this, it is an aperiodic tiling scheme: You could tile your bathroom with it (or anything of any size), and there is an obvious pattern, but there is no repeating unit... you can't cut it into squares (like tile commonly comes), you'd have to place each tile individually. It works by using 2 or more different-shaped pieces. Google for examples.
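The no-repeating-unit property can even be made quantitative. Under the standard kite-and-dart substitution rules, each deflation step turns a kite into two kites plus a dart, and a dart into a kite plus a dart, so the kite:dart ratio tends to the golden ratio. An irrational tile ratio rules out any repeating unit, since a periodic tiling would force a rational one. A minimal sketch:

```python
# Penrose kite-and-dart deflation: each kite splits into 2 kites + 1 dart,
# each dart into 1 kite + 1 dart. The kite:dart ratio converges to the
# golden ratio, which is irrational -- so no repeating unit can exist
# (a periodic tiling would force a rational tile ratio).

kites, darts = 1, 0          # start from a single kite
for step in range(20):
    kites, darts = 2 * kites + darts, kites + darts

phi = (1 + 5 ** 0.5) / 2
print(kites / darts)          # ~1.618..., the golden ratio
print(phi)
```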

Very cool stuff!

Best regards,

Bob Masta

Reply to
Bob Masta

The best predictor of chess performance is absolute lifetime number of games played. An article in Scientific American said the same, circa 1976. That, and the ability to recognize a situation from previous experience. IOW, an efficient look-up table. Intelligence scores were only weakly related.

formatting link
"Log number of games played was the strongest predictor of latest performance rating."
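For illustration only, here is that functional form fitted with a plain least-squares line; the games and ratings below are made-up numbers, not data from the study:

```python
# Illustrative only: fit rating = a*log10(games) + b, the functional
# form the quoted study reports. All data here is hypothetical.
import math

games   = [100, 300, 1000, 3000, 10000]    # hypothetical career totals
ratings = [1450, 1600, 1780, 1910, 2080]   # hypothetical Elo ratings

# simple least-squares fit of rating against log10(games)
x = [math.log10(g) for g in games]
n = len(x)
xbar = sum(x) / n
ybar = sum(ratings) / n
a = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, ratings)) / \
    sum((xi - xbar) ** 2 for xi in x)
b = ybar - a * xbar
print(f"rating ~= {a:.0f} * log10(games) + {b:.0f}")
```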

Brains burn about 20W (though some say up to 40W).

Whole humans burn about 120W (2,400 kCal/day), average.
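Those figures check out on the back of an envelope (1 kcal = 4184 J, 86,400 s per day):

```python
# Sanity-check the figures above: 2,400 kcal/day expressed in watts.
kcal_per_day = 2400
watts = kcal_per_day * 4184 / 86400
print(round(watts))                  # ~116 W, close to the 120 W quoted

brain_fraction = 20 / watts
print(round(brain_fraction * 100))   # the brain's ~20 W is about 17% of that
```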

-- Cheers, James Arthur

Reply to
dagmargoodboat

Although he was a bit put out about one of its applications by Kleenex.

formatting link

His legal spat with Kleenex was viewed with some amusement.

formatting link

That is a review of one of his deeper, less popular books, "The Road to Reality", which assumes graduate-level maths and physics.

--
Regards,
Martin Brown
Reply to
Martin Brown

More quantum gobbledegook from the mystickal fringe of unreality.

Evolution just demands that the next step along the way is not deleterious to the survival of the species - no more and no less. Provided that condition is met the genetic variation survives.

Sickle cell anaemia is an example of a genetic disease that is actually beneficial if the only alternative is to die as an infant of malaria, but not otherwise.

Mainly to prevent you from misleading others that you know what you are talking about when it is obvious that you do not.

Packing more flip-flops into a box probably won't do anything at all interesting, but making neuronal computational elements with extremely high network interconnection almost certainly will. It has recently been determined that the critical genetic change between chimpanzees and our ancestors was replication of the SRGAP2 gene. Subsequently, about 2.5 million years ago, it got copied again, producing the genus Homo.

They even have a mechanism, and have already spliced the human version into mice, where it alters brain connectivity in the same way as it does in humans (next they will splice it into a marmoset). Basically the extra copies sabotage brain development and slow it down, allowing many more connections to form in the developing neocortex, the part of the brain responsible for higher-order functions such as language and abstract reasoning. Reported in Cell and New Scientist this week under the tagline "The Humanity Gene", which might be pushing it a bit far.

formatting link
formatting link

Sorry both are behind a paywall unless you are a subscriber. You can still see the abstract summary and one figure on these searches.

It is certainly a powerful driver for the argument that our brain is fundamentally a connection machine and it is the step change in K the number of connections rather than N the number of neurons that provided the additional computational power and memory in humans.

--
Regards,
Martin Brown
Reply to
Martin Brown

OK, *you* explain how memory works, how a single cell can learn and execute sophisticated life strategies, how a bat can sonar-locate mosquitoes in mid-air using chemical computing elements. Explain consciousness while you're at it. By your own rules, you're not allowed to include any new ideas.

Sometimes having too much imagination is bad, but having none is much worse. When confronted by great mysteries, some less-than-pedestrian thinking might be allowed. You sound like Sloman's soul mate here.

I disagree. One of the arguments for intelligent design is that there are lots of mechanisms, especially in cellular chemistry, that couldn't have evolved incrementally. There are books on the subject that make specific and convincing cases. One of the claimed virtues of a quantum computer is that it can explore a huge range of solutions simultaneously.

All those proteins are folded. THINK about it.

Nobody knows how DNA actually works, or how a brain works. Nobody ever will if they refuse to think.

I've known a number of enthusiasts for neural networks. None of them got anything to work.

It has recently been

This is electronics at the level of swapping out power supplies and circuit boards. Not component level, no deep understanding. Sure, if you do it enough, it sometimes works. Cells are incredibly robust, which is why you get the equivalent of shaking a box full of broken watches and sometimes get a working watch... set to the correct time.

Hand waving. The connections are too slow to have very many in the chain before you become dinner for some simpler critter.

--

John Larkin                  Highland Technology Inc
www.highlandtechnology.com   jlarkin at highlandtechnology dot com   

Precision electronic instrumentation
Picosecond-resolution Digital Delay and Pulse generators
Custom timing and laser controllers
Photonics and fiberoptic TTL data links
VME  analog, thermocouple, LVDT, synchro, tachometer
Multichannel arbitrary waveform generators
Reply to
John Larkin

I would be prepared to make the conjecture that it depends on having local hysteresis computing units with multiple inputs and outputs and interconnections to nearby (and not so nearby) units in a single step. The configuration of the paths stores the information and it gets shuffled around and/or reinforced depending on how often it is needed.

It will be possible to work it out in detail one day, but until then we have to make educated guesses, build computer models, and test hypotheses about them to see if they are correct.

There are already plenty of computer neural networks capable of learning from a training set with feedback and making useful decisions. Fuzzy logic was fashionable for a while and embodied some of the same sort of feedback mechanisms - it helped Shinkansens stop with the doors lined up with the queues every time and saved energy.

formatting link
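A minimal sketch of a network learning from a training set with feedback: a single perceptron trained by error-driven weight updates on toy data. Illustrative only, and not the fuzzy-logic controller mentioned above:

```python
# A perceptron learns a decision rule from a training set via feedback:
# each wrong answer nudges the weights toward the right one. Toy data.

training = [  # (inputs, target): learn logical AND
    ((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1),
]
w = [0.0, 0.0]
bias = 0.0
rate = 0.1

for epoch in range(20):
    for (x1, x2), target in training:
        out = 1 if w[0] * x1 + w[1] * x2 + bias > 0 else 0
        err = target - out              # the feedback signal
        w[0] += rate * err * x1
        w[1] += rate * err * x2
        bias += rate * err

for (x1, x2), target in training:
    out = 1 if w[0] * x1 + w[1] * x2 + bias > 0 else 0
    print((x1, x2), out)                # matches the AND targets
```

AND is linearly separable, so the perceptron convergence theorem guarantees this training loop settles on a correct rule.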

This is like insisting that St Paul's cathedral could never have been built because there is no visible scaffold. The first life didn't have much by way of competitors and gradually became more complex, competitive and also cooperative with increasing time. We have geological timescales to work with here and not the statutory week.

Some of the oldest organisms have such specialised extremophile niches that they have survived to this day as archaea. Slime moulds are about the most primitive known species that turn cooperative when put under stress.

It is a form of acoustic chirp radar. Human ears are actually pretty good at detecting the location of a snapped twig from relative timing too. Owls are even better and can home in on an unsuspecting mouse.

Resonant hairs in a cochlear spiral work pretty well at turning sound frequencies into spatially separated stimuli. You are of course being completely unreasonable to demand complete explanations of how everything works when your "it's all quantum magick" claim is called.

I didn't say anything against new ideas. What I object to is this "it is all quantum mysticism" mumbo jumbo that you invoke at every possible opportunity. It has no predictive power whatsoever.

Mostly written by people who think the world is 6000 years old.

This again is another of those false arguments beloved by YECs and other anti-science groups. Just because we can't see how it could be done doesn't mean that we have to take it at face value that it can't. It doesn't have to be incremental either once there are retroviruses around that can splice chunks of gene in and out and they are also very old.

You don't get to see the scaffold that supported the structures we now see after millions more years of evolution unless we are very lucky.

Indeed and if you accidentally create the right chemical environment an autocatalytic infectious agent can spontaneously arise fairly quickly and exploit it. BSE was an example of this in our lifetime.

It depends what you mean by actually works.

They have a very good idea of how the information is transcribed and used to codify proteins. The experimental DNA biologists are close to being able to write the DNA code for a living cell ab initio, from scratch.

I suspect some of what is presently classified as junk DNA will turn out to be relevant, but that is a minor detail in the grand scheme.

The selecting of base pairs and amino acids from the mix is governed by quantum rules allowing a match in just 3 compares for peptide building. It probably plays a part in protein folding and enzymes too.
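Whatever one makes of the quantum framing, the "3 compares" is the ordinary triplet genetic code: each group of three bases selects one amino acid. A minimal sketch using a five-entry slice of the 64-entry codon table and a hypothetical message:

```python
# Triplet codon lookup: three bases select one amino acid.
# Only a tiny slice of the 64-entry standard genetic code is shown,
# and the mRNA string below is hypothetical.

CODONS = {
    "AUG": "Met", "UUU": "Phe", "GGC": "Gly",
    "AAA": "Lys", "UGA": "STOP",
}

def translate(mrna):
    """Read the message three bases at a time until a stop codon."""
    peptide = []
    for i in range(0, len(mrna) - 2, 3):
        aa = CODONS.get(mrna[i:i + 3], "?")
        if aa == "STOP":
            break
        peptide.append(aa)
    return peptide

print(translate("AUGUUUGGCAAAUGA"))   # ['Met', 'Phe', 'Gly', 'Lys']
```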

I have seen one or two reasonably impressive systems. The main barrier to using them in some critical applications is that you cannot get them to tell you why they made a particular decision. The advice they give can still be helpful. You use Google without a second thought, and it is using network interconnection metrics to order its searches.

The point here is that it looks very much like the key requirement for human intelligence is the large power hungry brain and the hugely increased number of neuron interconnections available in the neocortex.

Don't you think it is even slightly interesting that these forks occurred at the times the genome was altered in a particular way?

It may only require a pair or a very short chain to do the job. That is one possible advantage of having a huge number of connections per cell: you store a lot of states with very short loops.

We can already measure using functional MRI the time between making a decision in the brain and the physical action occurring. What is really interesting is how musicians learn to do things so automatically and precisely that their external physical timing is near perfect without having to wait for closed loop feedback.

--
Regards,
Martin Brown
Reply to
Martin Brown

Given the interconnect delays of electrochemical propagation (if, indeed, that's what's used) the computing cluster must be small. So just a small group of neurons may do important subroutines, like store and pattern-match an image, or perform some realtime algorithm like tracking a tennis ball. Given that, it makes sense to pack as much computation as possible into each neuron or small cluster. Speed is necessary for survival.

Maybe single neurons or small clusters do the heavy computing, and the network is mostly responsible for i/o. That makes the internal machinery of a neuron very smart. Quantum computing makes sense here, because it can potentially do stuff like pattern matching with enormous parallel/simultaneous capacity. Imagine a string of DNA or RNA or something that encodes an image. Imagine that a fresh scene is acquired and coded into another protein sequence. Now get them close to one another and see what matches.

Maybe it's not the paths; that's computer-science thinking. Maybe information is stored inside cells as the states of proteins. After all, single-cell critters can be pretty clever, with no neurons or interconnects at all.

Hey, I'm just a circuit designer. But faced with the workings of a brain, nobody has much more than hand-waving explanations (you just did a few) and I can wave hands as well as anyone.

But I bet inside-the-cell computing is key to memory, pattern recognition, algorithm execution, and consciousness. And it's necessarily parallel quantum computing.

Some of this stuff could be tested. It won't be tested if it's rejected from the start. The history of biology is peppered with cases of institutional rejection of things that turned out to be true. Lack of imagination.

--

John Larkin                  Highland Technology Inc
Reply to
John Larkin

The "paths" are slow but wide. You get pretty close to constant time delay for one given thing, on the appropriate time scales.

No, a single neuron most likely has very little internal state. It's the connectedness of the neurons that allows for emergent phenomena. Neurons are born and die.

What they find is that regions of the brain - specific geographic locations - act as "sub assemblies". If there's damage to that region, the person loses that ability.

As the instrumentation gets better, I'd guess they will get more resolution.

Charlie Rose's "Brain Series" is a good entry point into this...

but it is the paths. Cut the path, lose the function. Whether it's computer science thinking ( btw, nothing particularly wrong with that ) is irrelevant.

It's more likely electrochemical - neurotransmitters are pretty well understood. They can make them go away and function is lost; restore them, it returns.

That's about as good as science gets.

There's a big hole in the middle between those two scales. That's the problem. We're working to fill in that hole pretty hard now.

Probably not. The combinatorics of that architecture don't support the kind of complexity needed. You're back to the "path" thing.

It is being tested *now*. We're not up on the constraints; the bio people know them better.

Is that optimal? Probably not. But interdisciplinary stuff is happening with greater frequency.

According to what I've seen, it's early days, but progress is quite rapid. Check the Charlie Rose "Brain" series...

formatting link

Seriously, it's mainly bounded by instrumentation. And they're getting better at that...

Also seriously - the CS metaphors work better than board design/layout metaphors.

-- Les Cargill

Reply to
Les Cargill

They didn't need no stinkin' scaffolding! Dirt piled against the walls was good enough to support the workers, and a lot safer.

--
You can't have a sense of humor, if you have no sense.
Reply to
Michael A. Terrell

They might not have needed it, but Wren did have to use scaffolding, if only to hide the building from the customer (aka the clergy). It was too late when they found out that what he was building differed rather fundamentally from what they thought they were getting. See for example:

formatting link

--
Regards,
Martin Brown
Reply to
Martin Brown

You have no basis for that statement, other than prejudice.

Single-cell critters can do complex things, including learning and adapting to infrequent and improbable events to survive. If an amoeba has internal states, why wouldn't a neuron?

Why waste all that cellular machinery to make a neuron just a logic gate? Why wouldn't a neuron be as functional as, say, a helper t-cell?

Well, that's life. We forget things. We probably have enough redundancy that we don't need to forget much.

Lose the i/o, for sure.

--

John Larkin         Highland Technology, Inc

jlarkin at highlandtechnology dot com
http://www.highlandtechnology.com

Precision electronic instrumentation
Picosecond-resolution Digital Delay and Pulse generators
Custom laser drivers and controllers
Photonics and fiberoptic TTL data links
VME thermocouple, LVDT, synchro   acquisition and simulation
Reply to
John Larkin

Single-cell critters can build human beings. And do.

(And every cell in the resulting human has the machinery (DNA) that did it).

--James

Reply to
dagmargoodboat

Good point.

Exactly. You could argue that a single-cell fertilized egg is smarter than any subsystem of the resulting human body, because it created (dare I say designed?) them all.

The idea that a neuron, as complex as that fertilized egg, would be used as a dumb neural-network threshold comparator is literally insane. So it follows that neural networks are insane models of a living nervous system.

--

John Larkin         Highland Technology, Inc
Reply to
John Larkin

Not... really - SFAIK that's really how the science reads on the subject. I'd be fine either way :)

Dunno - I just understand the present theories as being about connectedness. Neural interconnects are "cheap"; electronic ones aren't...

That's because they're interconnected... which is why I brought it up - you get redundancy...

-- Les Cargill

Reply to
Les Cargill

Seems to me that slow critters get eaten by fast critters, so processing speed matters. Given that nerve transmission is ionic and travels at meters per second, a brain is like an FPGA, speed-limited by interconnects and not by processing elements. So it makes sense to keep the processing very local. Inside a CLB/cell is pretty local.
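The interconnect-limit argument is easy to put numbers on. A back-of-envelope sketch, assuming a ~10 cm cross-brain path and typical axonal conduction velocities (roughly 1 m/s unmyelinated, up to ~100 m/s myelinated):

```python
# One-way signal delay across a ~10 cm brain at typical axonal
# conduction velocities. Figures are ballpark, for illustration.
distance_m = 0.1
for velocity in (1, 10, 100):               # m/s
    delay_ms = distance_m / velocity * 1e3
    print(f"{velocity:3d} m/s -> {delay_ms:5.1f} ms per hop")
# At 1 m/s a single cross-brain hop costs ~100 ms, comparable to a whole
# reaction time -- so long chains of slow links are indeed expensive.
```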

--

John Larkin         Highland Technology, Inc
Reply to
John Larkin

That's a different system than the cognitive parts of us. And yeah, it's fast. Very fast. We pretty much share that system with most chordates of any account.

My dog can catch food about 100 msec after it leaves my hand. Fast. But the people who know these things say that the great leap was bipedalism - once you get neural critical mass to be up on two legs, the rest is pretty easy.

Speed of critters matters more on the Discovery channel than in real life. Lotta slow ones out there.

Yes. But the computation model isn't like that. Neural systems work more like a CAM than a bespoke logic system. That's just a metaphor; it's not that much like a CAM either. It is like a CAM in that it trains.

They are *specialized cells*. Neurons are different in different contexts. So there probably is some smarts in 'em - I just don't know what that means, exactly.

We'll see. It's early days yet.

-- Les Cargill

Reply to
Les Cargill

Tackling just this one piece. No single cell "learns and executes strategies"; that is a temporal behavior of a complete brain. Nor does any single cell "recognize grandma", a much more temporally local event. Your reduction to single cells is irrelevant, incorrect, fatuous, and completely false to facts. It does not even rise to red herring or strawman level.

Just for you.

?-)

Reply to
josephkk
