DS-5 opinions/reviews

Hi,

I'm debating purchasing a DS-5 license to do some bare metal RTOS development work. Anyone with first-hand experience with the product, its quality/robustness and the quality/responsiveness of the ARM Ltd folks in resolving "issues"?

[I'd much prefer using workarounds than having to debug frequent patch releases! "The bug you know is better than the one you haven't yet *found*!"]

Also, any *experience* re: this approach vs. the GCC/GDB route? I'm afraid that working with a commercial product and developing a framework around it would burden future developers with leaner development budgets. So, the GCC/GDB/Eclipse route might be less painful, overall (given that I'd end up building an environment that fits with those "more available" tools)

Thx!

--don

Reply to
Don Y

what would you gain from not using gcc like everyone else except a lighter wallet?

-Lasse

Reply to
lasselangwadtchristensen

I'll let you review the DS5 datasheet and note the differences between it and GCC/GDB/Eclipse.

I've recommendations from two colleagues who speak very highly of it (usage, support, code size, speed and quality). When GCC is brought up, both recall how much more "work" was required in getting, installing, and begging for assistance/bug fixes; all to "save a DEVELOPMENT dollar".

[No idea how much your time is valued but it doesn't take long at all for me to "lose" the price of a better tool while trying to save a buck. Unfortunately, neither of my colleagues had to make the "make/buy" decision as their tools were "subsidized"]

My biggest concern is the "inertia" that a (paid) COTS solution would impose on future developers. I.e., I surely am not going to build tools and practices that work in *two* different development environments when I'm only *using* one of them! (That would be the worst of both worlds!) Yet, I realize many folks probably don't have pockets as deep as mine (or, perhaps, a different *attitude* towards "spending development dollars"). OTOH, I don't feel obliged to cater to their future "thrift" when it comes at *my* current expense (*time*).

I was hoping other folks (besides my two colleagues) might have *informed* opinions to offer as well...
Reply to
Don Y

my point is that DS-5 might be better in some ways, but it will limit who will consider using whatever it is you are making. If you are the only one who is going to use it, that doesn't matter.

imagine if Linux had required a $1000 compiler license

-Lasse

Reply to
lasselangwadtchristensen

That was exactly the point I made in my OP: "I'm afraid that working with a commercial product and developing a framework around it would burden future developers with leaner development budgets."

Then only folks willing to shell out the $1K would be capable of kernel hacking "out-of-the-box"! It doesn't seem uncommon for FOSS projects to "arbitrarily" pick their own tools, build systems, etc. How is burdening a prospective developer with having to acquire and install (even if they are "free") a different version of make(1), a different VC system, testing framework, etc. any different than saying "shell out $X instead of your *time* if you want to play"? Haven't they, in effect, said their time and effort is worth enough that they should pick the tools and approach that *they* consider most effective for the problem?

Note that my approach doesn't impact folks working in userland. How often do you hack the RTOS of a product you are developing? Are you capable of understanding the bare metal interfaces to do so? Do you even know where to *begin*? Or, do you spend your effort on the *application* side?? (which can use entirely different tools)

Reply to
Don Y


if Linus had used a $1000 compiler you would most likely never have heard of Linux

-Lasse

Reply to
lasselangwadtchristensen

I don't use Linux -- so how would that have affected me?

Should I ensure all my hardware designs use thru-hole technology so Joe Hobbyist can build boards with his Weller soldering iron/pencil? And, nice *fat* traces so he can get the boards built for pennies?

Should I only use components that are available in Qty 1? And, keep total product cost under $100 for folks with shallow pockets? Limit the complexity to things that can be accomplished with a PIC and $30 demo board?

Should I only use languages that someone has "blessed" as "mainstream"? Should I shun any technology that requires folks to spend money??

FOSS adherents seem to be more and more "tightwads" -- yet they insist their *employers* pay them with hard currency (instead of free software).

Reply to
Don Y

Disclaimer - I haven't used ARM's own development tools at all.

The IDE for DS5 is Eclipse, the same as for most free and commercial gcc-based ARM toolchains.

The compilers for DS5 are, I think, still Keil's tools. They are a good deal poorer than gcc in C and C++ feature support. ARM is dropping the Keil toolchain entirely for future versions of their development tools, and moving to clang/llvm because they are more flexible, give better code, and have better support for modern standards. (ARM chose clang/llvm rather than gcc because of the licensing.)

I can't tell you what ARM's support would be like. I do know that price is no indication of quality of support, nor is it any indication that you will be up and running any faster or slower than with a free or low-price toolkit. I have seen people spend weeks getting top-price commercial toolchains working, and seen them fighting with support people. I have also seen people getting completely free systems up and running in under an hour. Usually it comes down to the experience of the user with the processor in question, and with that type of toolchain - the price (or lack thereof) is mostly irrelevant.

For debugger hardware (and to a lesser extent, debugger software), on the other hand, there can be very big differences in quality, features and speed. If you actually need the speed and features, that can be worth paying for.

Reply to
David Brown

That's the problem. Canvassing colleagues only turns up a few working under DS5. But, those that do seem to have good things to say about it...

The switch is already present in DS5 -- AFAICT you have a choice of the Linaro GCC compiler or the newer LLVM implementation.

Nor can you predict what sort of support I'd likely receive in the FOSS/embedded community. How many folks are hacking (existing, "running") kernels vs. tinkering with userland apps (vs. never even LOOKING at the sources!)? I didn't notice a lot of folks chiming in with their first-hand experience with A5 cores when I asked, recently! How many *fewer* are developing custom RTOS's on those? And deploying in a distributed environment?

[Of course, no guarantee that any of the ARMLtd folks have the requisite experience/knowledge, either. But, they probably have a bigger incentive to get me to a solution -- no per unit royalties until product moves out the door!]

So far, I can only comment on what the few colleagues who've used their toolchain have had to say. I've done a fair bit of work with gcc/gdb and know there is always a significant effort in getting the remote stubs working properly, adapting to your particular RTOS interfaces, etc.
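For those who haven't been down that road: the target has to speak GDB's remote serial protocol over whatever link you give it. A minimal sketch of just the packet framing is below -- getDebugChar()/putDebugChar() are hypothetical UART hooks you'd supply for your board, and a real stub still has to add register/memory access, breakpoints, escaping/run-length handling and the hooks into your RTOS's exception and context-switch paths, which is where the real effort goes.

  /* Sketch of GDB remote-serial-protocol packet framing: $<data>#<2-hex checksum>,
     acknowledged with '+' (good) or '-' (resend). Escaping and RLE are omitted. */
  extern char getDebugChar(void);            /* hypothetical: blocking UART read  */
  extern void putDebugChar(char c);          /* hypothetical: blocking UART write */

  static int hex(char c)
  {
      if (c >= '0' && c <= '9') return c - '0';
      if (c >= 'a' && c <= 'f') return c - 'a' + 10;
      if (c >= 'A' && c <= 'F') return c - 'A' + 10;
      return -1;
  }

  /* Read one packet into buf; ack it and return 0 if the checksum matches. */
  static int get_packet(char *buf, int len)
  {
      while (getDebugChar() != '$')          /* hunt for start-of-packet */
          ;
      unsigned char sum = 0;
      int i = 0;
      for (;;) {
          char c = getDebugChar();
          if (c == '#')                      /* checksum follows */
              break;
          sum += (unsigned char)c;           /* modulo-256 running checksum */
          if (i < len - 1)
              buf[i++] = c;
      }
      buf[i] = '\0';
      int hi = hex(getDebugChar());
      int lo = hex(getDebugChar());
      unsigned char ref = (unsigned char)((hi << 4) | lo);
      putDebugChar((sum == ref) ? '+' : '-');
      return (sum == ref) ? 0 : -1;
  }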

The risk of the DS5 approach is that it might be too closed to allow me to make those sorts of tweaks (hard to imagine since they aren't also pushing an integrated OS/toolchain solution -- unlike folks like IAR, etc.).

There's a huge difference debugging userland applications with a running OS -- or, even modifying an existing OS to add features -- vs. working from bare metal, up. I.e., starting with *just* silicon.

I suspect ARM's models aren't supported (sold) in FOSS gcc/gdb/eclipse implementations, so you're stuck with debug *hardware* entirely. How many hours of poking at stubs are required before you've paid off the cost of a commercial tool? (*cost* seems to be the only issue keeping folks from the ARM tools) How many protoboard revisions do I have to run through with the gdb approach due to its reliance on "real" hardware?

Time to bake the cookies...

Reply to
Don Y

AFAIK Keil's own ARM compiler was dropped in favour of ARM's some time around the Keil 5 timeframe. For some time the only real difference between the two was that MDK only supported microcontroller cores.

-a

Reply to
Anders.Montonen

A day or two of fighting broken DRM is usually enough to make people see the real value of GCC's freedom.

My own experience with DS-5 only extends to a couple of days of following tutorials for Altera's Cyclone-V, but it seemed like a pretty polished package. On the other hand, at least for Cortex-M development I don't know if it would give a lot more than Eclipse combined with the "GNU Tools for ARM Embedded Processors"[1] toolchain, supported by the GNU ARM Eclipse[2] plugin and either a J-Link or OpenOCD and cheap JTAG dongle.
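For anyone curious what that route looks like in practice, a typical session with one of the cheap dongles is roughly the following -- the interface/target config names are placeholders, you'd pick the ones matching your probe and chip:

  $ openocd -f interface/jlink.cfg -f target/stm32f4x.cfg
  $ arm-none-eabi-gdb firmware.elf
  (gdb) target extended-remote localhost:3333
  (gdb) monitor reset halt
  (gdb) load
  (gdb) break main
  (gdb) continue

OpenOCD exposes a GDB server on port 3333 by default; "monitor" passes commands straight through to it.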

-a

[1] [2]
Reply to
Anders.Montonen

The only time I've had a problem with licensing on a software product was when trying to move from one machine to another (where the license was tied to the MAC of the NIC, etc.). A call to Support and proof of ownership solved that problem.

[Note in Europe I think you are still dealing with dongle'd products?]

I think most licensing woes come from folks using pirated/cracked software and later discovering that a crack was imperfect or was invalidated by an update they downloaded, etc.

I'm working with A series parts and doing bare-metal development. E.g., just getting the VM system up and running is an "effort". I believe ARM's core/device models should make that a lot easier without relying on "real" hardware -- especially if you haven't yet settled on a particular manufacturer's device, designed/laid out PCB, and fabbed prototype quantities.
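To give a flavour of that "effort": even a flat, identity-mapped bring-up of the ARMv7-A MMU involves a page table, domain/translation-base registers and the right barriers before you dare flip the enable bit. A rough sketch (short-descriptor 1MB sections; the attribute bits and lack of caching here are purely illustrative and would need checking against the manuals for a real part):

  #include <stdint.h>

  /* 4096 x 1MB sections covering 4GB; short-descriptor tables must be 16KB aligned. */
  static uint32_t pagetable[4096] __attribute__((aligned(16384)));

  void mmu_enable_flat(void)
  {
      /* Identity map: section descriptor, AP=read/write, domain 0, no cache (illustrative). */
      for (uint32_t i = 0; i < 4096; i++)
          pagetable[i] = (i << 20) | 0xC02;

      uint32_t sctlr;
      asm volatile("mcr p15, 0, %0, c2, c0, 2" :: "r"(0));                   /* TTBCR = 0: TTBR0 only   */
      asm volatile("mcr p15, 0, %0, c2, c0, 0" :: "r"((uint32_t)pagetable)); /* TTBR0 = table base      */
      asm volatile("mcr p15, 0, %0, c3, c0, 0" :: "r"(1));                   /* DACR: domain 0 = client */
      asm volatile("mcr p15, 0, %0, c8, c7, 0" :: "r"(0));                   /* invalidate unified TLB  */
      asm volatile("dsb" ::: "memory");

      asm volatile("mrc p15, 0, %0, c1, c0, 0" : "=r"(sctlr));
      sctlr |= 1u;                                                           /* SCTLR.M: enable the MMU */
      asm volatile("mcr p15, 0, %0, c1, c0, 0" :: "r"(sctlr) : "memory");
      asm volatile("isb");
  }

Debugging code like that -- and then the caches, aborts and TLB behaviour on top of it -- is exactly where a good model (or good debug hardware) starts to pay for itself.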

If, OTOH, ARM makes those models freely available (or not-freely but with ample support for integration with "third party" tools), then that opens the door for other approaches.

Regardless, I would assume (hope) that ARM would have more of an incentive (and *specific* expertise on ARM-licensed devices) to get you to production, as most of their revenues are probably derived from recurring license fees (i.e., per unit device sales). What incentive does someone supporting GCC/GDB have to resolve some issue deep in the ARM IP?

Dunno. So far I've just had good recommendations from the two colleagues using the toolchain. Once I get unpacked, I'll download the evaluation copy of the product and start throwing some of my code at it to see how well it compares (code quality, size, speed) with the other compilers (ARM and not) I have available. As far as evaluating their "Support", I can only, so far, rely on my colleagues' experiences (but neither of them work on bare metal so I suspect the sorts of questions/problems they encounter are different from those I am likely to encounter).

Reply to
Don Y

Most of my problems have been with dongles for products that haven't been supported by their manufacturers for ages. Getting parallel port dongles to run on modern 64-bit Windows systems is no fun. But protected software also prevents you from, e.g., spinning up another CI server whenever you need one, and networked licenses always seem to mysteriously fail at the most inopportune moment. Given the choice, I will always pick the Free option.

You may want to check out QEMU.
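It's worth noting QEMU also has a built-in GDB stub, so you can get a "no hardware at all" debug loop going; the machine type and image name below are just placeholders:

  $ qemu-system-arm -M vexpress-a9 -m 256M -nographic -kernel kernel.elf -S -gdb tcp::1234
  $ arm-none-eabi-gdb kernel.elf
  (gdb) target remote localhost:1234
  (gdb) continue

(-S holds the CPU at reset until the debugger attaches; -gdb opens QEMU's GDB server on the given port.)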

ARM have long contracted companies like CodeSourcery/Mentor to develop ARM support for the GNU toolchain, and have in recent years even taken over the development work themselves (AFAIK, previously they were afraid of stepping on the toes of third-party compiler vendors). If there are errata that need workarounds, they get added to GCC immediately.

Most GCC development by far is done on a commercial basis, either sponsored by the chip makers directly, or by vendors like Mentor or RedHat. The incentives are the same as for any other compiler.

-a

Reply to
Anders.Montonen

Did you not see the link to ARM GCC?


It is maintained by ARM employees.

They want to sell more of their chips (or, more accurately, they want their customers to sell more of their chips). That is their incentive.

--

John Devereux
Reply to
John Devereux

I (USA) haven't seen a dongled product in over 20 years. The last I can recall using were some of DataI/O's products (DASH-STRIDES, DASH-PCB, ABEL, etc.).

Please read -- and understand -- the DS5 datasheet (paying particular attention to "Fast Models"). AFAICT, the only "third party support" approach that comes close is (ARM's) Foundation Models (FVP) offering.

"FVPs, as their name suggests, are fixed. They are a black box on which you can test your software, safe in the knowledge that when the hardware arrives, you can port it over easily and quickly. FVPs are binaries derived from Fast Models, though unlike Fast Models are not customizable.

"Fast Models give you the flexibility to add complex peripherals, infrastructure and ARM CoreLink interconnects along with a host of other ARM and third-party IP blocks. This gives software teams working on custom SoCs the ability to complete the majority of their software and integration ahead of the silicon availability."

And, that's *still* a paid, closed product (with all the same potential issues you raised, above). Yet, it is only intended for folks doing things like Linux application development (and, at that, probably only on "generic ARM hardware" or, at best, the sorts of hardware that the ARM folks envisioned for that *fixed* platform emulation)

How are the gcc/gdb folks (regardless of who is "backing" them) going to support me, there? It would be like asking a generic tool vendor why the code from their compiler (verified as "correct" by you *and* them) isn't running on some vendor's particular "chip".

But there is more to development than getting the right *code* out of a compiler (given the "input source" provided). Did you note my OP reference to "bare metal"?

Do you, for example, contact your (x86-target) compiler vendor regarding questions about why (YOUR CODE) isn't properly interacting with the MMU on *Intel's* CPU? Or, the interrupt controller? Camera interface (remember, we're talking about SoC's, here)? "Yes, I agree your compiler is generating the correct ASM code for the HLL sources that I'm feeding it. But, my code doesn't work; where's my bug?? (or, the hardware issue that I'm currently unaware of)"

Do I have to build at least one *physical* instance of every "system" for which I want to develop code before I can start troubleshooting it (with an ICE)? How helpful will the "GCC/GDB" folks be in that regard?

Reply to
Don Y

Did you see my reference to "models"? Do the ARM GCC folks provide (virtual) *hardware* support? Or, refer you to your silicon vendor for that??

Please see my (coincident) reply to Anders...

Reply to
Don Y

All the setup problems can be solved in FOSS by throwing them a preconfigured virtual machine. You can't do that with proprietary software unless you're very sure of the licences.

On a lower level, a package manager solves a lot of the same problems - saying 'take Ubuntu xx.xx, type apt-get install libfoo bar2 xbaz and you're ready'. Doesn't matter if the project uses weird tools, just add a few more characters to the apt-get line. Again it's more pain when you have to mess about with downloading from behind paywalls, setting up licence servers, etc. etc.

Theo

Reply to
Theo Markettos

I think it depends.

Models can be really useful, and ARM's are the obvious choice. If you're modifying the processor, or implementing the ISA yourself, you need a model. Models are also very handy for verification. Models can, however, be slow (e.g., we have a MIPS formal model that runs at ~120 KIPS; some models run at mere IPS - memory system modelling is a lot more complicated).

However, most end-users aren't dealing with implementing the ISA. Indeed, most end-users aren't even building silicon. If you're just writing software, it's the core that matters and a Cortex An in chip X is the same as a Cortex An in chip Y. The cache setup and the SoC peripherals will be different, but so will the model from your intended SoC.

So you have more visibility and rigour in a model, but a Cortex An emulator combined with a Cortex An implementation may be sufficient, and likely faster. In that case it doesn't matter so much what the implementation is, since you are only interested in the core.

Though I don't know the state of verification of emulators - I get the feeling that Qemu and friends are rather ad-hoc, not formally checked against the ISA spec (verification gets rather complicated where memory consistency and concurrency are involved).

Theo

Reply to
Theo Markettos

The goal isn't to create "virtual products" (that need to perform exactly like the silicon in all regards). Rather, to be able to verify the integrity of the software -- and, to some extent, proposed *hardware* -- implementation(s) without having to physically implement *every* board/design... only to discover that stepping up to a larger memory complement or more performant processor would have been a better investment.

Or, understand what power consumption is likely to be (given an actual, "modeled" performance level -- how many opcodes actually *are* executed to perform this particular task?)

[I.e., the "physical hardware" approach to development gets expensive very quickly when you have to redo two dozen designs -- schematic, layout, fab -- just to add memory or beef up the processor, etc. Or, take advantage of newly announced -- but not yet available -- devices. Ask your hardware folks what it costs -- time and money -- to change the SoC on a board and just "reconnect the I/O's"!]

We routinely debug hardware "at DC" or "in a simulator". We "single step" code to verify its proper operation. Each approach *models* the physical characteristics of the mechanism being evaluated.

Most users (developers) aren't exposed to bare metal. When was the last time you tinkered with the scheduling algorithm in your RTOS? Or, tweaked the paging algorithm in the virtual memory implementation? I don't see the logic of crippling a development approach just so some *potential* future "developer/maintainer doesn't have to spend any money".

E.g., another approach that "solves" the problem is to *close* the RTOS implementation and treat it entirely as a "binary component": "You don't need to understand or tinker with anything inside this box." Just like many video, wireless, network drivers in Linux. This gives their developers the leeway to approach their tasks with whatever tools they choose! Don't want to use their binary? Write your own, from scratch, using whatever tools you want on the metal!

But then you can only test the "instruction set" -- not the peripherals tied to that core.

E.g., the "ARMulator" would let you test your ARM binaries... but any twiddling with cache/MMU controls was out of the question. Code that adjusted the "address decoder" did nothing, etc.

Exactly. AFAICT, ARM's models are essentially the same IP that is used to fab the silicon, placed in a wrapper that can be plugged into an IDE. If that is, indeed, the case (as expressed by ARM), then this also gives you a reference against which to measure other vendors' actual silicon: "the model claims it should behave, thusly". I doubt anyone (vendor) would lend much credence to how a FOSS model performed ("then, the model is flawed!")

But, I'm not "developing Linux apps" so a *canned* model is unlikely to do more than an instruction set simulator would. Hence the appeal of their "Fast Models" product.

Dunno. I stumbled on a colleague using DS5 at an offsite and he raved about it. While discussing it, another colleague chimed in about his experiences with it and how effective it had been at getting code up and running ("debugged") long before hardware or even silicon was available. When I countered with gcc/gdb/eclipse, both frowned as if they had a bad taste in their mouths.

But, we didn't have time to "play" with the tools there so I'll have to look at an evaluation copy to get a better feel for what they are talking about. At the very least, I can probably run my code samples through their toolchain and look at size/speed/memory utilization, etc. Then, be on a better footing to "intelligently" discuss the pros and cons and explore the model side at our *next* offsite (formally put it on the agenda instead of trying to squeeze it in during a lunch break).

I was hoping folks, here, would have more experience with the tools to comment, first-hand.

Morning tea...

Reply to
Don Y

The issue is, what sort of (self-imposed!) obligation do I have to pick tools that are "affordable" (with no concern for other issues that might affect their overall utility/effectiveness)?

E.g., given that, IN REAL TERMS, few folks will ever be tinkering at this level (how many Linux kernel hackers are there? How many userland contributors? How many "simple users"? Seriously... offer up REAL estimates of each of these!), is it really worth burdening *my* efforts just to make it "inexpensive" for folks who *might* want to tinker in the future?

My hardware designs are all "open" (schematics, films, etc). But, does that mean I have to use FOSS tools to *create* them, as well? On the off chance that someone will want to *modify* a design? Is it acceptable to produce TIFFs (i.e., un-editable) of schematic pages so they don't have to purchase those same tools? Even if that means more effort required for them to create an editable document in their tool-of-choice?

Should I only use thru-hole components for those folks who can't afford the tools for SMT work? (or, should I leave the burden of making a thru-hole version in *their* lap: "here's the schematic, YOU see if you can find this SoC in a DIP...") What about components that are really only available (or affordable) in large production quantities? Does everything have to be a PIC or a PC?

Do I have to create the molds for the various plastic enclosures using FOSS CAD tools in case someone wants to add another mounting boss? Ditto any sheet-metal work?

At what point do you say, "the likelihood of someone tinkering with this thing is low enough that *they* can afford to bear the costs if they choose to do so"?

I'm not sure how many FOSS projects you've looked at "under the hood". Why so many different implementation language choices (yet, never a clear analysis of why "this is better than that")? Or, build systems (I have more INCOMPATIBLE "makes" here than I can count!)? Different file compressors (I've even got cpio archives!)? Different (incompatible) VCS's?

Is the criterion "as long as a FREE piece of software/hardware can be acquired to do the job, then you are at liberty to use whatever you want"? I.e., within those constraints, the initial developer(s) are free to choose whatever environment/implementation they want?

I'm surprised they are willing to pay for the actual *components* (chips) and don't insist on those being "free", as well! :>

Reply to
Don Y
