Processor choice

Thanks. I already have Studio installed but I have to run it in a friggin winders VM. I don't know whether I'll need an ADC or not until I reverse engineer this thing. Probably not.

Thanks, John

Reply to
neonjohn

Unfortunately, no. It uses an 8 pin micro with the numbers ground off. It is programmed to permanently turn off after 6 hours.

John

Reply to
neonjohn

Size requirement. This thing has to be small enough to easily flush down a toilet and not get stuck in bends or internal protuberances. Price isn't a real consideration. I'll probably make this guy 100 of them, but I expect to commercialize it, as this pipe locator is the standard of the industry.

The Atmel product catalog disagrees.

The GNU C compiler allows one to insert assembler mnemonics in the code. I absolutely will not be using this feature. All this controller has to do is turn the oscillator on and off in a certain pattern.

John

Reply to
neonjohn

Correct. It is not fear; it's a choice. I'm officially retired and this is to be a fun project. Assembly is NOT fun. All this thing has to do is turn an oscillator on and off in a specific pattern. Less than a page of C code. I enjoy writing C code, so that's what it'll be.

John

Reply to
neonjohn

Right. So "8 pin" has nothing to do with it. In fact, the size of the microcontroller isn't directly a requirement either - it is the size of the complete system that matters. You could have a big microcontroller that can generate the sine waves directly, as an alternative to a small microcontroller and external oscillator. Quite possibly the battery will be the biggest part.
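As an aside, "generate the sine waves directly" usually means a DDS-style phase accumulator feeding a DAC or PWM. A generic sketch, assuming a fixed-rate timer tick and an invented dac_write() output hook (table size and step value are illustrative, not a recommendation):

#include <stdint.h>
#include <math.h>

extern void dac_write(uint8_t v);    /* assumed hardware output hook */

static uint8_t sine[256];            /* one cycle, built once at startup */

static void init_table(void)        /* call once before enabling the timer */
{
    const double two_pi = 6.283185307179586;
    for (int i = 0; i < 256; i++)
        sine[i] = (uint8_t)(127.5 + 127.5 * sin(two_pi * i / 256.0));
}

/* DDS: the 32-bit phase accumulator wraps naturally; the step size sets
   the output frequency as tick_rate * step / 2^32. */
static uint32_t phase, step = 0x01000000u;   /* = tick_rate / 256 here */

void tick(void)                      /* call from a fixed-rate timer ISR */
{
    phase += step;
    dac_write(sine[phase >> 24]);    /* top 8 bits index the table */
}

On a small flash-only part the table would of course be precomputed as const data rather than built with math.h at startup.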

Disagrees with what?

Reply to
David Brown

Thanks much, but I don't need that kind of power. All this thing has to do is turn an oscillator on and off in a specific sequence. Right now I'm thinking I'll just put the sequence in ROM and have the little loop read the ROM every 10ms or so. Maybe every 100ms.
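A minimal sketch of that ROM-table loop, with the pattern contents, pin hook, and tick period all invented for illustration (on a Harvard-architecture part the const table may need a flash-access attribute such as AVR's PROGMEM):

#include <stdint.h>

/* Hypothetical on/off pattern; 1 = oscillator enabled for one tick. */
static const uint8_t pattern[] = { 1, 0, 1, 1, 0, 0, 1, 0 };

extern void set_oscillator(int on);  /* assumed hardware hook */
extern void delay_ms(unsigned ms);   /* assumed timing hook   */

int main(void)
{
    for (;;) {
        for (uint8_t i = 0; i < sizeof pattern; i++) {
            set_oscillator(pattern[i]);
            delay_ms(10);            /* "every 10ms or so" */
        }
    }
}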

This device has to be quite small. Think of a ping-pong ball squished out into a rounded rectangular tube with closed ends. I also have to make room for the inductive charging circuit.

I may have to temperature compensate the oscillator in the code. I hope not and will try analog compensation first.

Thanks for your advice. John

Reply to
neonjohn

I agree entirely. I think it is amazing how many people are trying to recommend a specific microcontroller when the OP has given little useful information about the requirements, and no indication that he has thought through what he actually wants to build. At this stage it is not clear if the best solution is a microcontroller with a DAC that can generate the signals directly in analogue, something with a fast PWM connected to external MOSFET drivers to get more power, a small controller switching an oscillator on and off, or a 556 timer chip generating a pulse pattern. The OP appears to have leapt to "it needs an 8-pin microcontroller" because the device he is copying has an unknown 8-pin chip that he believes is a microcontroller - and then others here have mostly decided he needs an 8-/bit/ microcontroller for it.

No - having a good C compiler frees you from having to use the brain-dead assembly for the brain-dead microcontroller. It lets you write code in a practical language instead.

If you are stuck in the previous century, you might have a microcontroller that is so severely limited that C becomes impractical. I have programmed an AVR Tiny with /no/ ram, 1K flash, and 32 bytes of eeprom - I programmed it in C, using gcc, and the results were within spitting distance of the size I could get using assembly.

No sane developer chooses a PIC for a completely new project without an exceptional reason (such as their excellent temperature robustness). Modern microcontrollers are far smaller and cheaper, with lower power consumption - and high-quality, zero-cost C development tools.

And if one were to compile a list of the top reasons to pick assembly over C, "future support" would not make the first hundred. Assembly is, by definition, the least portable and future-proof option you have.

No, somebody messed up trying to get gcc installed on their system, or messed up using gcc.

But no one is suggesting the guy sends a Raspberry Pi down his toilet!

To suggest you are mixing things up here would be the understatement of the year.

Reply to
David Brown

I'm curious--why do you need to charge a battery that you can't recover?

Reply to
John S

It might be (one-time) charging a supercap with the expectation that if you can't find the device (through the soil) in the time it takes for the cap to discharge, you just flush another down, after it.

This avoids the issue of having to deal with batteries entirely.

Regardless, it allows you to "pot" the device so you don't have to worry about infiltrants.

Reply to
Don Y

Is that all? If so, why not ditch the microcontroller and use a handful of 555 timer chips to implement the oscillator modulation?

I built such a circuit many years ago, using one 555 to create an audio tone, and the other 555 to FM-modulate the audio tone by a frequency ratio of four with a period of a few seconds.
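For anyone sketching the same thing: the standard astable-mode frequency of a 555 is f = 1.44 / ((Ra + 2*Rb) * C), and driving pin 5 (control voltage) from the second 555 is one common way to get the slow FM described above. A quick helper using the usual datasheet component names (the example values are arbitrary):

#include <stdio.h>

/* Standard 555 astable frequency: f = 1.44 / ((Ra + 2*Rb) * C). */
static double ne555_astable_hz(double ra_ohms, double rb_ohms, double c_farads)
{
    return 1.44 / ((ra_ohms + 2.0 * rb_ohms) * c_farads);
}

int main(void)
{
    /* e.g. Ra = 10k, Rb = 68k, C = 10 nF -> roughly a 1 kHz audio tone */
    printf("%.0f Hz\n", ne555_astable_hz(10e3, 68e3, 10e-9));
    return 0;
}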

Joe Gwinn

Reply to
Joe Gwinn

Good thought. Thanks.

Reply to
John S

Same story, more or less.

Yes, you need four times as many lines. The rest is true regardless of programming language, but with a twist: in assembly code, everything is visible. In compiled languages, much is hidden from those not literate in assembly. I solved a lot of intractable bugs in compiled-language programs by dropping down to the assembly-code level, sometimes into the depths of the operating system, and one time into CPU microcode.

C++ and object-oriented design claim that hiding all the details is an advantage. This is arguably true when the issue is to get some understanding of the architecture of a million-line program. But all that hiding can cripple debugging.

In a case maybe ten years ago, I helped the software folk figure out an intractable bug where the realtime system would run for a while and then just crash, with incomprehensible error messages. I suggested that they find someone in their lab who knew how to use a Linux kernel debugger, as this would allow them to backtrack from the smoking crater. It took that fellow maybe 15 minutes to solve the problem: the system had two major sensors that were used in parallel, and these were indexed in the relevant tables as zero and one. Over time, System One was replaced by System Two. The crash occurred when someone mistakenly tried to invoke System One, which no longer existed.

Yes, I know that one is supposed to do range checks and all, but that's irrelevant to this story, which is a demonstration of the problems with hidden information: in C++, this mechanism was buried somewhere, and the C++ application debuggers were rendered useless.

Yes. What I do is write bespoke assembly-coded functions, callable from C (or, in the old days, Fortran), to do what the compiled language cannot.

And sometimes it's useful to debug the compiler output at the assembly level. And to tweak the C code such that the compiler yields efficient assembly code.

Joe Gwinn

Reply to
Joe Gwinn

On a sunny day (Sat, 27 Feb 2021 18:05:06 +0100) it happened David Brown wrote in :

To me at least, asm is a high level language. That it requires some more brain cells to write, perhaps - but I never encountered that as a problem.

Scope pic:

formatting link
Written in PIC 18 asm, it has a Fourier transform too, in 16 and 32 bit integer. One afternoon, or was it 2? No way are you pulling that off in C!! The F transform was originally C code - it is still in the source. I made one coding error in the whole thing (forgot a sign extension somewhere), easily found by comparing test data against the C version.

It is easy! (asm).

Sigh. I got a bunch of these 18F14K22; not much I cannot do with those. I do not even have the Microchip software, only the Linux assembler - I wrote the programmer software too (in C, on the PC).

Yes, C is portable, up to a point. But hardware changes; as more gets integrated you go FPGA, and then you can have the rest of the logic in the same chip and code any processor you like (but not in C!!).

Nope. Did a google; lots of complaints. Maybe it will be fixed.

Strawman. You wanted a C environment; I showed you that it can suck big time.

Take libforms (xforms) for example: to keep things working I had to go back some versions and create a new library called 'libzorms' (z as in z80, or the last word on that), and that even runs on my raspies. And forget about Imake working - I had to write the makefiles manually, not a big deal once you have done one. Otherwise all the GUI C programs I wrote using it would no longer work. So much for portability.

Reply to
Jan Panteltje

One of the goals of HLLs is to lower the level of expertise required to implement a given set of functionality *correctly*. You're not taxed with remembering if THIS opcode affects the PSW (and *how*) vs. THAT opcode. You don't have to worry about *where* an ISR can interrupt a particular instruction's execution. etc.

The thinking being that the compiler writer can get those details right, *once*, instead of you having to KEEP remembering them each time you write a line of code.

But, I agree; there's lots of detail that *can* be important that gets lost "between the CR and LF"... This isn't usually important in desktop apps but can be critical in a deeply embedded implementation.

What I find more insidious is the "implied" operations that aren't apparent (though familiarity with the language lets you realize what you *missed* when you first looked at the code).

And, compilers, in general, often implement optimizations that make the resulting ASM more puzzling ("Ahhh, so *that's* what it's doing!")

This is the technique that I've historically used for interfacing to the OS and drivers as many compilers did *not* support an ASM directive (gcc is late to the embedded software world)
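For reference, here is what that directive looks like in GNU C today (gcc-specific extended-asm syntax, not portable C; the nop is just a placeholder instruction):

/* Emit one 'nop' and tell the optimizer not to cache memory values
   across it - the classic minimal use of gcc's extended inline asm. */
static inline void barrier_nop(void)
{
    __asm__ volatile ("nop" ::: "memory");
}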

If changing the C (or any HLL) makes an appreciable difference in performance, you should rethink why you wrote the code the way you did, originally. Perhaps you missed some inherent inefficiency in your implementation. (OTOH, I've seen compilers "reduce" logic in ways that I wouldn't have considered when coding the logic -- for clarity).

But, in general, I let the compiler writer worry about generating more efficient code -- in the next release, etc.

Reply to
Don Y

This is the standard case for using HLLs. But the discussion was what to do when one needed to do something that the HLL cannot do.

That too.

Not usually a problem in practice. Sometimes we turn optimization off, if only to see if it is the optimizer that's causing the problem.

This is the academic theory for sure, but it isn't actually true. Here are two examples from my personal experience: C-vs-Pascal, and Ada83-vs-.

  • C-vs-Pascal: Many years ago, I was given the task of choosing which language would be used to implement an Air Traffic Control automation system. This was in the days when Ada83 was coming, but was not yet available. The contenders were K&R C, and Pascal. The software would end up with something like 100,000 lines of source code. The target computers were SBCs with Motorola 68000-series processors.

The Ada folk preferred Pascal, because it was the base language from which Ada was being created, and because it was seen as an architecturally cleaner language (no GOTO commands or pointers!).

The non-Ada folk preferred C, because it had a large base with many successes in implementation of such systems, and GOTO and pointers were very useful, actually essential in practice.

When my name was published, my phone blew off the hook. Everybody was lobbying me. Feelings were running high ... and were evenly balanced.

There were six or so prior benchmarks comparing C and Pascal available at the time. All six benchmarks concluded that C code occupied 2/3 the space of Pascal code, and took 2/3 the time to execute, a surprise. This was over all target computers and software development chains then in use, so the 2/3 factor was somehow intrinsic to the two languages. Why?

The answer was in the introduction sections of K&R (C) and Jensen and Wirth (Pascal). C was intended as a system implementation language (originally for UNIX itself), while Pascal was intended as a student's first language, used for teaching students how to program.

So, C was optimized for quality of generated code, at the expense of compilation speed, while Pascal was optimized for compilation speed, at the expense of generated-code quality. C code would be compiled once and run many, many times, while Pascal code would be compiled many times (while being debugged) and run just once, to be graded.

The syntax of Pascal reflected this as well - Pascal can be compiled in a single pass, so one must be careful to declare things in the correct order, which becomes intractable in programs of any complexity. C instead requires a multi-pass compiler, so declaration order isn't so important, and C scales to larger and more complex programs.

  • Ada83-vs-: Basically, Ada83 was not suited to multiprocessing (no notion of shared memory) or to asynchronous communications (only rendezvous was in Ada83's architecture). Nor were the early compilers usable, but that's a different issue. Anyway, in the days of the Ada Mandate we had to use Ada, so we did. My solution was what is now called middleware, implemented as Ada plus assembly-coded Ada subroutines as needed to do things that Ada could not. This middleware was also ten times faster than Ada in message handling, because one thing we could do in the assembly code was violate strong typing, needed to implement shared-memory pass-pointer communications, in turn needed to handle the river of data that radars generate - there was no time to copy and recopy data, nor any place to keep all those copies.

We don't always have that option. See above.

And awkwardly-designed software can crush any computer, however fast it may be.

More generally, all compilers have odd corners where they do especially stupid things, and it can be necessary to find and fix such things. Usually a simple paraphrase of the C/C++ source code will eliminate bloated stuff. We generally find these areas when overall performance isn't good enough, and start to look for hot spots.

Joe Gwinn

Reply to
Joe Gwinn

I am not sure why the requirement of 8 pins... but anyway, you may want to consider the TI MSP430FR2422. There is a nice development environment in Eclipse that uses GCC. It is not 8 pins but 16 pins in a nice flat pack. I like this processor for a lot of applications. It has a nice complement of functions, e.g. UART/I2C/SPI, timers, PWM, etc.

Reply to
Three Jeeps

A developer can write code that represents the way he *thinks* about the problem. The compiler doesn't care about how a human mind wants to see the problem presented; it is free to write an equivalent version of the code that may not be the "intuitive" form that the developer considers.

E.g., "if (bob || !fred)" can also be implemented as "if !(!bob && fred)". The former is more intuitive. But, the compiler knows what it has "on hand" at any given point in the instruction stream and may find the latter cheaper to implement.

A developer reading that second implementation has to stop and think about what is *effectively* being computed, in spite of the actual sequence of opcodes being issued.
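The equivalence is just De Morgan's law, and is easy to confirm exhaustively (bob and fred here are the placeholder names from above):

#include <assert.h>

int main(void)
{
    /* Check all four input combinations of the two boolean forms. */
    for (int bob = 0; bob <= 1; bob++)
        for (int fred = 0; fred <= 1; fred++)
            assert((bob || !fred) == !(!bob && fred));
    return 0;
}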

[When writing ASM for multiple register/accumulator architectures, I was always looking for something that I already had "on hand" to manipulate to form a result instead of having to clear space in a register/accumulator to load the "raw" value, again.]

Similarly, in ASM, I can ensure opcodes are executed in the order that *I* deem most important. If conditional operators have different execution times, I can arrange the code so the more common path is the faster one.

A fun puzzle is considering the optimum way of expressing something as trivial as detecting a leap day; is there any difference between the execution times of:

if (((year % 4) == 0) && (month == 2) && (day == 29))

vs.

if ((month != 2) || (day != 29) || ((year % 4) != 0))

How would a compiler decide the "proper" ordering of evaluation? How would a human reading the code best understand what's happening?
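Written out as compilable C (keeping the simplified year % 4 rule, with the leap day on Feb 29), the two forms are:

#include <stdbool.h>

/* Positive form: short-circuit order is year, month, day. */
static bool is_leap_day(int year, int month, int day)
{
    return ((year % 4) == 0) && (month == 2) && (day == 29);
}

/* Complement: short-circuit order is month, day, year. The month test
   rejects roughly 11 of every 12 dates on the first comparison. */
static bool not_leap_day(int year, int month, int day)
{
    return (month != 2) || (day != 29) || ((year % 4) != 0);
}

Which ordering wins depends entirely on the input distribution - exactly the sort of decision an asm programmer makes explicitly and a compiler has to guess at.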

That's a different question. You talked about changing the *C* to generate more efficient code, not changing C to a different language!

How does that pertain to "tweak[ing] the C code such that the compiler yields efficient assembly code"?

I can argue that I *don't* have the option (every project has a deadline, and there's no guarantee a more optimized implementation will arrive before it). But I can also claim that almost every piece of good code continues to be "worked" even after release. And, over time, the quality of the (machine) code that actually gets executed improves "for free" (i.e., without the developer having to rewrite anything).

That's just a reason to avoid working in those areas. Or, letting other -- more qualified -- people handle the portions of a task that MUST happen in those areas.

That's why folks write apps and different folks write OSs, drivers, etc.

It's, also, why specs are important ahead of time -- so the writer knows about the boundary conditions to which his code will be subjected and can plan on them in the *natural* flow of the algorithm(s).

[It's also why "programmers" are usually the least qualified to actually deal with UI/UX issues!]
Reply to
Don Y


Yea, I used to do that too. I got really tired of writing double-ended queue handling routines, various math packs, etc., and then grad school taught me about productivity, error density, and error types of various programming languages. Yea, I could be more productive in C, and yea, pointers will hang you if you slip up, but personal experience for me says give me a *really* good reason why you want to program in assembly - usually speed and efficiency. Been there, done that, and the real reasons are few and far between. But, to each their own...

Reply to
Three Jeeps

Pointers are an amusing example of the different mindsets in ASM vs. HLL coding. You think nothing of using an autoincrement offset addressing mode in ASM to *reference* a parameter -- or a function/subroutine. Yet the same approach in (e.g.) C is considered tantamount to running with scissors (while carrying a glass of acid in the other hand)!

Hmmm.. the *compiler* can use pointers (in its generated code) -- but you can't?
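The direct C counterpart of that addressing mode is the *p++ idiom; a trivial sketch (the function and its names are made up for illustration):

#include <stddef.h>

/* Copy n bytes; on many targets the compiler turns the *dst++/*src++
   pair into the same post-increment addressing an asm programmer
   would write by hand. */
static void copy_bytes(char *dst, const char *src, size_t n)
{
    while (n--)
        *dst++ = *src++;
}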

Reply to
Don Y

For $40... I'm not cleaning it. Maybe this, and the risk of contracting E. coli, are the reasons they are one-time use / disposable.

Reply to
mpm
