Embedded folk have stronger imperatives than that. Either it doesn't matter, because the embedded system in question is in a radar system or some such and can drink as much power as it likes (in which case the off-the-shelf chip will probably do the trick), or it matters so much that there's no way that it will get a look-in: the guy who does (whatever) in less silicon, with fewer watts/longer battery life, will walk away with the business. Mostly that means fixed point, rather than even binary floating point. The other big "embedded" (i.e., non-user-programmed) user of floating point is gaming graphics, and I suspect that that's a field safe from the scourge of decimal arithmetic for the foreseeable future, too. (Most of that stuff doesn't even bother with the full binary IEEE FP spec, of course: just what it takes to get the job done.)
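(For the avoidance of doubt about what "fixed point" means here, a throwaway sketch of my own, in Python only because it's short; the point is that it's all integer arithmetic with an implied scale factor, so no FPU of any flavour is required:

    # Q16.16 fixed point: plain integers with an implied scale of 2^16.
    SCALE = 1 << 16

    def to_fix(x):
        return int(round(x * SCALE))

    def fix_mul(a, b):
        # The raw product carries 32 fractional bits; shift back to 16.
        return (a * b) >> 16

    def to_float(a):
        return a / SCALE

    a, b = to_fix(3.25), to_fix(0.5)
    print(to_float(fix_mul(a, b)))   # 1.625, done entirely in integer ops

That's the sort of thing that wins on silicon and watts.)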
Also: if on-chip parallel processing ever gets useful, then there may well be significant pressure to spend the gates/area/power on more binary functional units rather than fewer decimal ones. Outside of vector co-processors and GPUs, I have my doubts about that.
So: Python and C# (both popular languages) are going to add 128-bit decimal float to their spec, and Java already has it? This works fine in software on binary computers, I expect. I also don't expect that there will be *any* applications that would get a significant performance boost from a hardware implementation, so customer pull will (IMO) be minimal. The only reason that this is being thought of at all is that no-one can think of anything better to do with the available transistor budget...
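To illustrate what "works fine in software" means, here's the standard party trick using Python's decimal module, which is a pure software implementation running on ordinary binary hardware:

    from decimal import Decimal

    # Binary FP can't represent 0.1 exactly, hence the classic surprise:
    print(0.1 + 0.2 == 0.3)                                    # False
    # Software decimal gets it right, no special hardware involved:
    print(Decimal("0.1") + Decimal("0.2") == Decimal("0.3"))   # True

For the money-counting applications that actually motivate decimal arithmetic, that's already fast enough.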