Integrated TFT controller in PIC MCUs

Microchip PIC32 MCUs can be used for graphics applications (as I read on their website), but I couldn't work out whether they have a real integrated TFT controller (as in NXP's LPC MCUs) or something different.

I would prefer NXP's solution over Microchip's for several reasons:

- ubiquitous ARM core (instead of MIPS)

- true integrated TFT controller

- lower cost (though I'm not sure about this)

Both solutions offer a free-to-use graphics library: NXP delivers SEGGER's (precompiled), Microchip a proprietary one (with source code).

What do you think?

The project will use a typical 4.3" 480x272 TFT display with an RGB interface, and I'm interested in developing a good HMI.
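
With an integrated controller of the LPC type, the panel is simply refreshed from a framebuffer in RAM, so the application side ends up writing pixels into memory. A minimal sketch of that in C (my own illustration - the buffer, its placement and the RGB565 format are assumptions, not taken from any particular LPC or PIC32 part):

#include <stdint.h>

#define LCD_WIDTH   480
#define LCD_HEIGHT  272

/* Placeholder: on a real part this would be a region of SRAM/SDRAM that
 * the LCD controller's DMA engine is pointed at. */
static uint16_t framebuffer[LCD_HEIGHT][LCD_WIDTH];

/* Pack 8-bit R, G, B into a 16-bit RGB565 pixel. */
static inline uint16_t rgb565(uint8_t r, uint8_t g, uint8_t b)
{
    return (uint16_t)(((r & 0xF8u) << 8) | ((g & 0xFCu) << 3) | (b >> 3));
}

/* Plot one pixel, clipping against the panel dimensions. */
static inline void put_pixel(int x, int y, uint16_t colour)
{
    if ((unsigned)x < LCD_WIDTH && (unsigned)y < LCD_HEIGHT)
        framebuffer[y][x] = colour;
}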

Reply to
pozz

Personally, I would also go for ARM.

Reply to
Tomas D.

Why? What's the diff?

--

Rick
Reply to
rickman

Most go for ARM just because "everybody else does". In fact ARM is a crippled architecture (too few registers for a load/store machine). MIPS has enough, I think (32, but I have only looked at it, never used it - I use Power).

Dimiter

--
Dimiter Popoff, TGI

Reply to
Dimiter_Popoff

On Wednesday, 7 January 2015 12:28:59 UTC+1, pozz wrote:

there's also STM,

formatting link

-Lasse

Reply to
langwadt

On 07/01/2015 20:59, Dimiter_Popoff wrote:

What about the TFT controller and graphics libraries? Has anyone made a comparison?

Reply to
pozz

I don't think I would say that ARM is "crippled" by having too few registers - it can be nice to have 32 registers available (like MIPS or PPC), but 16 mostly orthogonal registers work well enough for most code. Having 32 registers helps some kinds of code, but it also means you lose up to three bits in each instruction, and context switching for interrupts is less efficient (since twice as many registers must be saved). It's a trade-off.

Certainly MIPS is a nice architecture, especially their newer microcontroller versions. But there are many good reasons for not picking Microchip PIC32 devices, even if you like the cpu core.

Reply to
David Brown

On 08/01/2015 13:53, David Brown wrote:

I'm interested in those "good reasons for not picking Microchip PIC32 devices"...

Reply to
pozz

Me too... (as someone not having any experience with the PIC32 family)

Reply to
Dombo

Note - I haven't used PIC32 devices myself either, so don't take this as more than opinion from someone who has read about them, talked about them, considered them, but never tried them. My comments may therefore be inaccurate or outdated. And if other posters with real experience contradict me, they are probably right.

One reason, which has already been mentioned, is simply popularity - since Cortex M3/M4 devices dominate the market that the PIC32 competes in, you get more tools, more testing, more familiarity, more developers and more existing code.

The development tools for the devices are gcc (which is good), using a library written by Microchip (which may or may not be good). Microchip provides them in a free version and a paid-for version - with the free version having optimisation disabled (or at least severely limited). Given that gcc is free and open source software (and not written by Microchip), I believe this is very much against the spirit of the licence for the compiler - and I think it is a poor way to provide "demo" or "eval" versions of the tools.

The chips themselves suffered from a number of major hardware issues when they came out - and from what I have heard from a user, they still do even after several years. A key point is that the USB interface is limited to 12 Mbps (or at least, has severe bugs at 480 Mbps). While 12 Mbps is fine for many uses, this long-lasting problem shows a severe failure in development, testing and quality control that is very off-putting.

Other than that, I think many reasons fall into the category of opinion rather than being more general - but they will still be "good" reasons if you agree with the opinion. (Just as "cpus should have more than 16 core registers" is a good reason for disliking ARMs, if that is your opinion.)

I think the whole PIC32 system has been a terrible blow to the microcontroller world - it was done poorly, rushed to market, had poor free versions of its tools (leading people to see it as a very slow cpu), and I believe it has greatly reduced the chance of MIPS being a serious player in the microcontroller market. MIPS makes a series of cores that are competitive with or better than many of ARM's cores in terms of speed, features and MIPS per mW - and the microcontroller world would be a better place with more choice and competition.

Reply to
David Brown

True. But what ARM chip do you have with 480 Mbps USB? You can't say that's a disadvantage if there are no alternatives.

Reply to
edward.ming.lee

Results of arithmetic calculations are not exactly what I would call an opinion.

16 registers - one of which is reserved for the PC - are too few for a load/store machine. Clearly it will work, but under equal conditions it will be slower than if it had 32 registers, sometimes much slower.

Say a pipeline has 6 stages and you have to do some calculation, e.g. a MAC, in a loop. In order to overcome the data dependencies and get the throughput the ALU can achieve (say 2 cycles per MAC), you need 20+ registers (I think it was 24, but it is a while since I last did it) for the operands alone, let alone pointers, counters etc.
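
To make the register-pressure point concrete, here is a rough C sketch (my own illustration, not code from this thread) of such a MAC loop: hiding the MAC latency means unrolling with several independent accumulators, and the accumulators, loaded operands, pointers and counter are all live at once:

#include <stdint.h>

int64_t dot_product(const int32_t *a, const int32_t *b, int n)
{
    /* Four independent accumulators so consecutive MACs do not depend
     * on each other's result; a deeper pipeline would need more. */
    int64_t acc0 = 0, acc1 = 0, acc2 = 0, acc3 = 0;
    int i;

    for (i = 0; i + 4 <= n; i += 4) {
        /* 8 loaded operands + 4 accumulators + 2 pointers + the loop
         * counter are all live here - comfortably more than 13 values. */
        acc0 += (int64_t)a[i]     * b[i];
        acc1 += (int64_t)a[i + 1] * b[i + 1];
        acc2 += (int64_t)a[i + 2] * b[i + 2];
        acc3 += (int64_t)a[i + 3] * b[i + 3];
    }
    for (; i < n; i++)                  /* remaining elements */
        acc0 += (int64_t)a[i] * b[i];

    return acc0 + acc1 + acc2 + acc3;
}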

Then just about anything one writes beyond some very basic complexity needs access to more than 14 variables within a context - and I have been programming with the register model in my head all the time for decades now, not just "read about it" or done it occasionally, so I know what I am talking about. Having to save/restore registers all the time will impact performance, sometimes severely (several times over), as in the MAC example above, even if the VPA compiler provides some virtual registers for programming convenience. (Without them, programming using just r0-r13 would be pretty limiting; it was OK on the 68k, where one could use r0-r14 and operands did not have to be in registers, i.e. it was not a load/store machine.)

Dimiter

--
Dimiter Popoff, TGI

Reply to
Dimiter_Popoff

16 is a fact, but the rest is nothing but opinion.

More registers can be slower too.

More registers => more bits in an instruction to specify a register => more to read from code memory => slower

More registers => more data to save/restore on an interrupt or context switch => higher interrupt latency and context switching overhead.
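
A back-of-the-envelope illustration of both points (my arithmetic, assuming a three-operand fixed 32-bit instruction word and 32-bit registers):

#include <stdio.h>

int main(void)
{
    /* Encoding cost: bits spent on register specifiers per instruction. */
    int fields = 3;              /* e.g. rd, rs1, rs2                      */
    int bits16 = fields * 4;     /* 16 registers -> 4 bits each = 12 bits  */
    int bits32 = fields * 5;     /* 32 registers -> 5 bits each = 15 bits  */

    /* Context cost: bytes moved on a full register save or restore. */
    int ctx16 = 16 * 4;          /* 64 bytes  */
    int ctx32 = 32 * 4;          /* 128 bytes */

    printf("register-specifier bits per instruction: %d vs %d\n", bits16, bits32);
    printf("full context save/restore: %d vs %d bytes\n", ctx16, ctx32);
    return 0;
}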

Wouter

Reply to
Wouter van Ooijen

On Thursday, 8 January 2015 at 22:57:22 UTC+1, snipped-for-privacy@gmail.com wrote:

a quick Google search indicates at least the LPC4300, SAM3 and SAM4; if you can make do with an external PHY, many more

-Lasse

Reply to
langwadt

Yes, 32 is about the optimum, I suppose. But I have not really analyzed that; what I have analyzed - and demonstrated with an example which you chose to ignore - is the comparison of 32 vs. 16.

Not really. Load/store machines typically have a fixed 32-bit instruction word; using that for only 16 registers is simply a waste of space. You will have less information packed into each opcode, thus you will need more opcodes to do the same job, thus *more* memory fetches. If you become familiar with the Power architecture instruction set you will find very little room for performance improvement; the person who did it knew what he was doing really well.

Not at all. You can always save/restore even just 1 register if you want to on a 32-register machine; what you *cannot* do on ARM is have more than 13 registers to use in an IRQ handler when you need them - so you will need *more* memory accesses in that case.

And since the latency which matters is the IRQ latency (tasks get switched once per millisecond or so, whereas IRQ latencies may well have to be in the few-microsecond range), the time to save/restore all registers is negligible (e.g. 32 cycles of 2.5 ns once per ms on a 400 MHz Power core, or 0.008% of the time).
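
A quick check of that figure (just reproducing the arithmetic stated above):

#include <stdio.h>

int main(void)
{
    double cycle_ns  = 1e9 / 400e6;     /* 2.5 ns per cycle at 400 MHz      */
    double save_ns   = 32 * cycle_ns;   /* 32 cycles = 80 ns per full save  */
    double period_ns = 1e6;             /* one task switch per millisecond  */

    printf("overhead = %.3f%%\n", 100.0 * save_ns / period_ns);  /* 0.008% */
    return 0;
}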

Dimiter

--
Dimiter Popoff, TGI

Reply to
Dimiter_Popoff

OTOH, the PIC32MX also comes in PDIP variants so you can breadboard them.

The only ARM MCUs I know of in PDIP are the LPC810 (cute little MCU BTW) and the LPC1114FN28 (which is not available from Farnell UK and anyway only comes in a 600 mil wide package). Neither of these has USB support.

You might be confusing the MX with the MZ here. The MX is designed for 12 Mbps only and the MZ is apparently 480 Mbps capable, but is the one with the severe bugs. Someone posted a link to the MZ errata list a while back - it was one hell of a read when you looked at what simply didn't work. I don't know what the current situation is with the MZ, however.

Do current versions of the MIPS ISA allow you to push a set of registers onto the stack in one instruction as you can with ARM or do you still have to push (and pop) them one after the other manually in your handlers ?

What's the code density for MIPS versus ARM like for the latest MIPS cores ?

The MIPS ISA in the PIC32MX seems a little too basic to me after being used to what the ARM ISA offers.

Simon.

--
Simon Clubley, clubley@remove_me.eisner.decus.org-Earth.UFP 
Microsoft: Bringing you 1980s technology to a 21st century world
Reply to
Simon Clubley

Nothing about TFT controller?

Reply to
pozz

The Freescale Kinetis K20 is one that I know of (the 120 MHz versions support high-speed USB). And of course there are vast numbers of "application processor" ARMs (as distinct from "microcontroller" ARMs) that have 480 Mbps USB - and unlike most MIPS "application processor" devices, many are easily available in small quantities.

But that's not the point - broken 480 Mbps USB /is/ a disadvantage of the PIC32, since that was a major marketing and selling point of the device. Had Microchip done the right thing - disabled that part of the hardware and removed it from the documentation - then they would have had a nice microcontroller with good, working 12 Mbps USB, and plans for future versions with 480 Mbps. That would have worked well.

Reply to
David Brown

It is a /fact/ that more core registers help some types of code - and in some cases, significantly so. But it is /opinion/ as to whether the advantages there are more important than the disadvantages I mentioned. It depends very much on the type of code, as well as the implementation (as you say below, a superscalar or heavily pipelined cpu needs more registers to keep its execution units busy).

For microcontrollers, such as the Cortex M devices, I think 16 registers is a good balance for a lot of typical code. For bigger processors, such as the Cortex A devices, I could agree that 32 registers would be a better choice. But most of those devices have an additional register set (of 32 x 64-bit, IIRC) for the SIMD "Neon" instructions - covering much of this need.

Reply to
David Brown

So far PIC32MX is way ahead in terms of package options and family members. Just take a look at the 1xx/2xx family.

Indeed. PIC32MX are all Full-Speed (12 Mbps) USB. The latest families/members also have quite short errata.

It's the PIC32MZ that had a rough start, but even in that regard (errata) it is not a world leader in any way. I'd say quite an average start for the times we're currently in.

There are new MZ versions expected, as well as some MK family, so the future is not so dark. The most disappointed are probably those that hoped to design something with a brand new family. That's a proven risk.

All in all, Microchip might not be cutting edge and their tech staff does not shine, but they do offer pretty good overall value. Constantly trying to reduce errata on newer spins, even if not always successfully, is good.

Actually, the PIC32MX is quite good now that it has matured, and is a very nice and needed alternative.

No, because it is a load/store architecture. I don't really have any experience with the new microMIPS, but I'm not expecting that to change.

The PowerPC multiple load/store instructions are microcoded, slow, and deprecated.

Even ARM went away from LDM/STM in their 64-bit architecture. It just doesn't scale.

Which ARM? Which MIPS?

MIPS16e, as present in the PIC32MX (MIPS M4K core), is comparable to Thumb2. MicroMIPS, as present in the newer PIC32MZ (MIPS 14K core), is even better than MIPS16e.

One benefit is that you can always switch to MIPS32 mode for performance reasons, unlike the pure Thumb2 MCUs, like Cortex-M.

So you don't like load/store architectures, and you won't like AArch64, either.

There is a reason why VAX and 68K didn't survive.

Just my personal opinion / eurocents.

Reply to
Vladimir Ivanov
