Temperature Measurement

The standard toolchain for AVRs (especially Arduinos) is gcc. It has supported 64-bit "long long int" for a long time, but only recently (gcc 10) gained 64-bit double support. (Of course you can do 64-bit calculations even if the compiler doesn't support them natively, it's just vastly more work.)

But it does sound like you are asking a /lot/ from such a small cpu.

It's a useful alternative to the less regulated bogomips unit.

Yes, that's the norm for Cortex-M4F.

The NXP i.MX RT10xx family are Cortex-M7 with (IIRC) 64-bit floating point support. These are fast chips - 528 MHz cores with dual issue. Lots of oomph. And not too expensive (since they have no flash, which is the killer for cheap dies for this kind of device).

There is the general rule that simpler chips are easier to work with and have less that can go wrong (in hardware and software) - /if/ they can do the job you need.

If you need high reliability, there are other options like Cortex-R devices with dual lock-step cores, ECC ram, etc.

Reply to
David Brown

Texas TMP61 is 10k at 25'C and claims to be linear +/-1%. I've never used one, but had an email yesterday as it happens.
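Since a linear PTC like that converts with a simple first-order formula (no Steinhart-Hart fit as with an NTC), a minimal sketch — the 10k at 25'C is from above, but the slope here is an assumed illustrative value; real coefficients belong to the datasheet:

```c
/* Linear PTC conversion sketch: R(T) ~= R25 * (1 + TC*(T - 25)).
 * TC below is an assumed illustrative slope, not a guaranteed TMP61 value. */
float ptc_temp_c(float r_ohms)
{
    const float R25 = 10000.0f;  /* nominal resistance at 25'C */
    const float TC  = 0.0064f;   /* assumed slope, fraction per 'C */
    return 25.0f + (r_ohms / R25 - 1.0f) / TC;
}
```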

PRT is best but more complicated.

--
Cheers 
Clive
Reply to
Clive Arthur

I remember implementing a 32-bit floating point library for a 6800. IIRC[1] the sin/cos CORDIC routine took ~40 ms, which wasn't a problem. An 8-bit AVR would probably get that to around 1 ms (1 MHz 6800, 16 MHz AVR with multiply hardware).

I think I've still got the source code somewhere, but it is in a special purpose macro language that looks a little like a /very/ simple version of C.

[1] Hey, it was a summer vacation job, back in 1976 - what do you expect!
Reply to
Tom Gardner

If it's a temperature measurement, the calculation may be complicated, but there's no pressure to do it often.

A temperature sensor with a 100msec time constant is fast. Thermistors tend to be an order of magnitude slower.

--
Bill Sloman, Sydney
Reply to
Bill Sloman

I am wondering why one would need 64-bit calculations in some embedded system when both the ADCs and DACs are typically 16 bit or less. While some extra bits for intermediate results are nice to have, 64 is overkill, especially if floating point. A 32-bit float is accurate to 6-7 (decimal) digits, while a 64-bit float is accurate to 16-17 digits.

Sure, you may have to rearrange some equations to avoid too large numbers (when using integers) or loss of precision (when using floats). IIRC much of this was already in Numerical Analysis 101 course.

I haven't recently checked whether there are any x86 processors aimed at the embedded market. In x86, the 387 floating point instructions are executed as 80-bit values on the internal stack. In addition to the normal add/sub/mul/div instructions, there are also transcendental functions (trigonometric and exponentials) in the instruction set.

The transcendental functions are not that expensive to implement separately if basic floating point operations are available. These can be calculated with 3rd-4th order polynomials for 32-bit accuracy and 6th-8th order for 64-bit accuracy.
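As a sketch of that kind of expansion, a truncated odd polynomial for sine on [-pi/2, pi/2] — plain Taylor coefficients here; a minimax fit would give better accuracy at the same order:

```c
/* sin(x) ~= x - x^3/6 + x^5/120 - x^7/5040, evaluated in Horner form.
 * Only valid on [-pi/2, pi/2]; range reduction handled elsewhere. */
float poly_sin(float x)
{
    float x2 = x * x;
    return x * (1.0f + x2 * (-1.0f/6.0f + x2 * (1.0f/120.0f - x2 / 5040.0f)));
}
```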

If 64 bits are needed after all, how often are they needed? It is a different thing if a lot of high-precision floats are needed at, say, a 48 kHz sampling rate, or if you just generate some display values for human consumption a few times a second. In the latter case, software FP is sufficient.
Reply to
upsidedown

If you are doing calculations in 64bit floating point from real world sensors then you almost certainly do not understand what you are doing. There are only a tiny number of metrology problems where you have more than 7 significant figures of true precision in raw measurement data.

AFAIK none of them involve motors or flow gauges.

Cortex M7 is about the first one where you get double precision reals.

If they could figure out how to scale their problem so that it can be done in 64 bit integers then a Cortex M3 could hack it.

--
Regards, 
Martin Brown
Reply to
Martin Brown

For relatively coarse stuff such as dew point calculation for non-hermetic laser and APD/MPPC/SiPM cooling, we use the very nice Sensirion SHTC3 I2C T/H sensor.

BTW we tend to put in a transistor so that the MCU can power-cycle all the I2C stuff when it gets wedged. Anyone else do that?

Cheers

Phil Hobbs

--
Dr Philip C D Hobbs 
Principal Consultant 
ElectroOptical Innovations LLC / Hobbs ElectroOptics 
Optics, Electro-optics, Photonics, Analog Electronics 
Briarcliff Manor NY 10510 

http://electrooptical.net 
http://hobbs-eo.com
Reply to
Phil Hobbs

If you do not need full IEEE float compliance (Not-a-Number, INFinity, denormals), a float implementation is easy if you have 64-bit integer hardware. Mul/div is trivial; add/sub is a bit more complicated, but it helps if you have a fast barrel shifter (denormalize/normalize).
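A sketch of why multiply is the easy case: for IEEE-style single-precision bit patterns, a non-IEEE multiply (truncating, ignoring NaN/Inf/denormals and zero, as in the subset described above) is just multiply significands, add exponents, renormalize by at most one bit:

```c
#include <stdint.h>

/* Non-IEEE software float multiply sketch: 1 sign bit, 8-bit biased
 * exponent, 23-bit stored fraction with implicit leading 1. Truncates
 * instead of rounding; zeros, NaN, Inf and denormals are not handled. */
uint32_t soft_fmul(uint32_t a, uint32_t b)
{
    uint32_t sign = (a ^ b) & 0x80000000u;
    int32_t  ea = (int32_t)((a >> 23) & 0xFF) - 127;
    int32_t  eb = (int32_t)((b >> 23) & 0xFF) - 127;
    uint64_t ma = (a & 0x7FFFFFu) | 0x800000u;   /* restore implicit 1 */
    uint64_t mb = (b & 0x7FFFFFu) | 0x800000u;
    uint64_t m  = ma * mb;                       /* up to 48-bit product */
    int32_t  e  = ea + eb;
    if (m & (1ull << 47)) { m >>= 24; e += 1; }  /* product in [2,4) */
    else                  { m >>= 23; }          /* product in [1,2) */
    return sign | ((uint32_t)(e + 127) << 23) | ((uint32_t)m & 0x7FFFFFu);
}
```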

Reply to
upsidedown

I avoid I2C whenever possible. SPI devices don't hang up. Well, unless they are designed by Analog Devices.

--

John Larkin         Highland Technology, Inc 

Science teaches us to doubt. 

  Claude Bernard
Reply to
jlarkin

"I'm" not asking anything of the AVR. I'm the voice suggesting it be upgraded to an ARM. Even with that there seems to be a push to go with as little processor as possible to facilitate "verification", whatever that means exactly. I've asked why a CM0 is easier to verify than a CM3 or a CM4F. So far, no response.

That is the same sort of statement that others make... with no definition of what would constitute "simpler". Engineering is based on measurable quantities, not abstract concepts. So is more memory less simple? Is faster speed less simple?

Yes, I'm aware of the R series. I'm not a reliability engineer and no one else on the team is either. In particular, no one on this team knows the details of how to design medical equipment. My concern is they are making a lot of decisions and building hardware that might never be approved for use because they don't understand the process.

--

  Rick C. 

  --- Get 1,000 miles of free Supercharging 
  --- Tesla referral code - https://ts.la/richard11209
Reply to
Ricketty C


No one yet knows how much processing will be required, so there is concern that this MCU may run out of oomph. That's an SI unit, you know.

Processing isn't just execution time. Tables take memory space, as does code. I would never have started this project in C with only 32 kB flash and 2 kB RAM. Once a project is complete the processor can always be downsized to suit, but that seldom happens, so spare capacity can provide for future updates.
--

  Rick C. 

  --+ Get 1,000 miles of free Supercharging 
  --+ Tesla referral code - https://ts.la/richard11209
Reply to
Ricketty C

I wrote a math package for the 68K. It was signed 32.32 format, which is all any real-world application needs. It was very fast, and conversion to/from integers was especially fast. Adds didn't need normalization. It was saturating, in that it handled all exceptions in the most reasonable way. That was great for control loops.

Divide was the worst thing, the fix for that being: don't divide.
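A sketch of the saturating behaviour described above, in a signed 32.32 format stored in an int64_t — an illustration of the idea, not the original 68K code:

```c
#include <stdint.h>

/* Signed 32.32 fixed point: 32 integer bits, 32 fraction bits. */
typedef int64_t fix32_32;
#define FIX_ONE ((fix32_32)1 << 32)

/* Saturating add: overflow clamps to the rail instead of wrapping,
 * which is the behaviour you want in a control loop. */
fix32_32 fix_sat_add(fix32_32 a, fix32_32 b)
{
    fix32_32 s = (fix32_32)((uint64_t)a + (uint64_t)b);
    /* Same-sign operands whose sum flips sign means signed overflow. */
    if (((a ^ s) & (b ^ s)) < 0)
        return (a < 0) ? INT64_MIN : INT64_MAX;
    return s;
}

/* Saturating multiply: 64x64 -> 128 via the gcc/clang __int128 extension
 * here; on a small CPU you'd compose 32-bit partial products instead. */
fix32_32 fix_mul(fix32_32 a, fix32_32 b)
{
    __int128 p = ((__int128)a * (__int128)b) >> 32; /* drop extra fraction bits */
    if (p > INT64_MAX) return INT64_MAX;
    if (p < INT64_MIN) return INT64_MIN;
    return (fix32_32)p;
}
```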

The HP5370 time interval counter was and is an amazing instrument. It has one 6800 processor inside. That's astonishing. I'd love to see the source code.

I wrote an RTOS for the 6800. That was a pain. The 6800 wouldn't even push/pop the index register.

--

John Larkin         Highland Technology, Inc 

Science teaches us to doubt. 

  Claude Bernard
Reply to
jlarkin

I'm glad that you know more about the calculations than the people coding them. Perhaps you would be interested in joining the team and showing them what they are doing wrong? Let me know and I'll have them reach out to you.

Or an M0 as some are pushing hard for. I'm pushing for creating a set of requirements rather than taking another shot in the dark.

This may be very workable on the ATMEGA from a software perspective. Right now the only issue I know of is the I/O count. But I don't see where anyone has made any sort of estimate of the flash or RAM needed, much less processor speed.

I expect to discuss this in today's meeting.

--

  Rick C. 

  -+- Get 1,000 miles of free Supercharging 
  -+- Tesla referral code - https://ts.la/richard11209
Reply to
Ricketty C

I had an interesting Temp. control design once.

During the mid 90's I worked in Winnipeg, MB Canada for 4 years as Operations Mgr / Test Eng for an automated 2-way meter reading company called Iris Systems Ltd that later sold all their patents to Itron in Seattle. It used our custom 928 MHz ISM band cell network from the power meter directly into any utility Co.'s Sun database, on 6 kHz shared bandwidth MUX'd in time diversity with a complex protocol developed in-house. The transmitter had to create a 1 ppm TCXO out of a 25 cent AT-cut Xtal.

I developed 3rd-order polynomial curve fits for every angle cut of AT curves, then another polynomial to generate any curve from 2 test points on the curve at 40'C, 70'C. This linear differential value could be used to predict the entire curve from -40 to 70'C, which was our outdoor environmental design spec.
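A sketch of that kind of 3rd-order fit: frequency deviation in ppm as a cubic in (T - 25'C), evaluated in Horner form. The coefficients are illustrative placeholders, not values for any particular AT angle cut:

```c
/* AT-cut frequency deviation sketch: df/f (ppm) ~= a1*dT + a2*dT^2 + a3*dT^3,
 * with dT = T - 25'C (the curves' inflection region). Coefficients a1..a3
 * come from the curve fit for a given angle cut; values here are arbitrary. */
double at_cut_ppm(double temp_c, double a1, double a2, double a3)
{
    double dt = temp_c - 25.0;
    return ((a3 * dt + a2) * dt + a1) * dt;  /* Horner evaluation */
}
```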

As Hobbs said, "An 0603 thermistor with one side soldered to a copper pour ... has a thermal TC of around 100 milliseconds."

I made a cheap OCXO design and used styrofoam around a 2oz copper FPC Lexan film with a 0603 thermistor to external Rref values and 1206 heater resistors to heat up one side of the crystal to match resistance, and measured the Xtal connected to a header with a CMOS XO while the XTAL was in a mini copper foil around the case, so the frequency shift of 10T was about 4 seconds from room temp to +40'C; took a reading of the PPM error on the f counter, then switched to 70'C and did the same. Removed the crystal from the header in 10 seconds and sorted the Xtals in bins for prototype (100) level test. The bins, then matched with sorted varicaps, were used by the MC6805 to control the voltage and tune the TCXO design within 1 ppm over a 110'C range for about $1.

So we used an SMT thermistor and heater like an OCXO with a removable Xtal, then rapidly binned the parts. Using my algorithm in the final product, measuring the temperature to make the XTAL < 1 ppm only took 10 seconds to predict and correct the error over a 110'C range using only 2 precise points on the curve.

Those who understand, imagine the family of 3rd-order curves going thru 25'C.

My contribution was defining the precise polynomials for all AT crystal curves.

I used a DOS program I got from MJT Microwave Journal to define the coefficients.

Meanwhile my regular job as Ops Mgr for SMT microwave prototype assembly covered the 10k part number Master Registry (the internal part numbering system), the Foxbase stock-room pick lists from BOMs, purchasing, QA, the inventory control system and the entire SMT prototype shop operations. We had a total staff of 15 HW designers, 15 software gurus, 15 other support staff and 3 mgrs/fund raisers. We used optical detection for the power meter rotation of existing meters, and all the electronics of radio and power detection fit behind the metal label in almost every design. However, most of the industry rejected this and went to new digital meters without 2-way direct connection to the head office database, which still required the labour of meter readers.

Now you can buy these 1 ppm TCXO's for 1~2$.

Reply to
Anthony Stewart

If you follow that route, have a look at the NXP LPC804 which has been mentioned here before. It is an M0+ with on-chip programmable logic, single-cycle multiplier and the flash/eeprom and SRAM are ECC error protected.

John

Reply to
jrwalliker

I did an electronic metering project with Niagara Mohawk. They said that they wanted meter readers to go around and look at things, so wanted proximity readout. My readout box also inductively powered the meter so it could be read if the power was down. I used a 6805 too!

--

John Larkin         Highland Technology, Inc 
picosecond timing   precision measurement  

jlarkin att highlandtechnology dott com 
http://www.highlandtechnology.com
Reply to
John Larkin

I think the LPC804 is in too small a package, but there are other, similar LPC8xx parts with more I/O. I'm not looking to push any parts on the team. I just want them to consider the options rather than blindly picking something. They picked the present processor by looking at one aspect, the fact that it is in the Arduino Uno. Hopefully we can put together a set of required features and pick a part intelligently.

I've taken note of the ECC on the Flash. That may be an important feature. Flash is the one significant failure item in such devices.

--

  Rick C. 

  -++ Get 1,000 miles of free Supercharging 
  -++ Tesla referral code - https://ts.la/richard11209
Reply to
Ricketty C

Saturation, in hardware or software, gives me the warm fuzzies.

Long division was always a pain, starting at primary school!

I acquired a carcass, but the only bit I was able to rescue was the OCXO.

I have a successor, the 53301, and it is an underappreciated piece of kit. One day I might get around to thinking about how many of its functions can/can't be duplicated by modern equipment.

I can see that would be a pain for an RTOS, but at least the pain was localised in a small number of places (if not executions).

The Z80 looked better until you tried to use it for just about anything. Its instruction map was very sparse and non-orthogonal.

My coding style used a lot of linked-list structures for device drivers, so the 16-bit pointer to the next structure is somewhere in the structure. Chaining along the linked list was trivial on the 6800, but a nightmare with the Z80's index registers. So bad that I usually ended up using the 8080 subset, which made me an easy convert to the RISC religion.

All the early processors were a pain in one way or another, strongly encouraging one coding style and penalising others.

6809s and the ARM1 were the nicest.
Reply to
Tom Gardner

Multiply by the inverse value :-)

Some early RISC processors (such as Alpha) had an inverse instruction implemented in hardware but no divide instruction. Apparently it is easier to implement the inverse instruction in a single cycle.
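The usual software version of that trick is a Newton-Raphson reciprocal: start from a rough estimate and refine, since each iteration roughly doubles the number of correct bits. A sketch, with a magic-constant bit trick standing in for a hardware reciprocal-estimate instruction:

```c
#include <stdint.h>
#include <string.h>

/* Divide via reciprocal: crude seed from exponent arithmetic on the bit
 * pattern (an assumed approximation in the spirit of the fast inverse
 * square root trick), then Newton-Raphson refinement r' = r*(2 - d*r).
 * Only meant for positive, normal d. */
double nr_divide(double n, double d)
{
    uint64_t bits;
    memcpy(&bits, &d, sizeof bits);
    bits = 0x7FDE000000000000ull - bits;   /* rough 1/d seed */
    double r;
    memcpy(&r, &bits, sizeof r);
    for (int i = 0; i < 5; i++)
        r = r * (2.0 - d * r);             /* quadratic convergence */
    return n * r;
}
```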

Reply to
upsidedown

I had + and - infinity, just the max values. Anything divided by something small (including zero), or any giant sum, maxed out.

I sort of cheated and shuffled some and used the hardware. Lost some bits that nobody ever noticed.

Square root was annoying.
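The usual fallback when there's no hardware support is the bit-by-bit shift-and-subtract square root — a sketch returning floor(sqrt(x)):

```c
#include <stdint.h>

/* Classic binary integer square root: test one result bit per iteration,
 * working down from the highest power of four. No multiplies needed. */
uint32_t isqrt32(uint32_t x)
{
    uint32_t root = 0;
    uint32_t bit  = 1u << 30;          /* highest power of 4 <= 2^31 */
    while (bit > x)
        bit >>= 2;
    while (bit != 0) {
        if (x >= root + bit) {
            x -= root + bit;
            root = (root >> 1) + bit;  /* set this result bit */
        } else {
            root >>= 1;
        }
        bit >>= 2;
    }
    return root;
}
```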

I wrote it with a pencil, on regular paper, while I was staying with a lady friend in Juneau. I mailed in a few sheets a day, to the guys in New Orleans, who typed it and eventually assembled and ran it. They claimed it had one bug.

Geez, for a geeky EE, I've done some weird stuff.

68K was/is a thing of beauty, like a 32-bit PDP-11 with a lot of registers.
--

John Larkin         Highland Technology, Inc 
picosecond timing   precision measurement  

jlarkin att highlandtechnology dott com 
http://www.highlandtechnology.com
Reply to
John Larkin
