Language feature selection

APL must be one of the most cryptic languages that is actually used (i.e., excluding languages on this list).

Reply to
David Brown

If the changes you want can be created with a "preprocessor", then I think it has less value -- the same preprocessor (with a different back-end) could be applied to some other language (assuming the syntax you've chosen is compatible with both).

E.g., I build state machines "in-line" with my code by letting a preprocessor deal with their (abbreviated) syntax instead of forcing the SYNTAX of the implementation to be compatible with the application's language.
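[For a rough flavour of the idea -- not the actual tool described above -- here is a minimal sketch of how a table-driven shorthand can be expanded by the C preprocessor itself. The state names and the trivial "actions" are invented purely for illustration.]

#include <stdio.h>

/* The abbreviated "description" of the machine: one line per state. */
#define STATE_TABLE(X) \
    X(IDLE)            \
    X(RUNNING)         \
    X(DONE)

/* Expand the table once into an enum... */
#define AS_ENUM(name) STATE_##name,
enum state { STATE_TABLE(AS_ENUM) STATE_COUNT };

/* ...and again into a matching name table for tracing. */
#define AS_STRING(name) #name,
static const char *state_name[] = { STATE_TABLE(AS_STRING) };

int main(void)
{
    for (int s = 0; s < STATE_COUNT; s++)
        printf("%d -> %s\n", s, state_name[s]);
    return 0;
}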

OTOH, if you are actually changing the nature of the language by adding some characteristic (e.g., support for infix notation in overloaded arithmetic operators in C++), then I can't see an easy way of doing it -- short of rewriting the compiler and/or runtime, etc.

[Or, some intermediary document that *drives* the operation of the compiler]

In Limbo:

// define channel in file scope so visible to producer/consumer
// if the producer and consumer exist in different modules, then
// the definition must be consistent across them
pipe: chan of (...stuff...)

init(...)
{
    ...
    // instantiate channel
    pipe = chan of (...stuff...)
    ...
    spawn producer(someargs, pipe)
    spawn consumer(otherargs, pipe)
    ...
}

producer( args, gozeout: chan of (...stuff...) )
{
    ...
    while (...) {
        ...
        gozeout <-= ...stuff...
        ...
    }
}

If your "expansion" can be restated as syntactic shorthands (i.e., handled by a preprocessor), then how well that works depends on how much you want to contort the underlying syntax to support the *changed* syntax. E.g., you could replace parens with angle brackets EVERYWHERE -- or only in specific cases -- but that makes the syntax of the exposed "language" significantly (though superficially) different from the underlying language. [However, it could conceivably "make sense" if the additions you were making were more "consistent" in that new representation]

In FORTH, for example, "pick a word (identifier), any word" (practically). You're effectively treating (FORTH) "words" as your "alphabet" though subject to the syntax restrictions of FORTH.

I guess I don't see how you can make *significant* changes to a language without (the potential risk of) reimplementing that language entirely.

Consider the "channel" example above -- through to "running code"...

[A pair of machines to build, today -- now that the memory tests have finished...]
Reply to
Don Y

Non-traditional in today's environment only, or non-traditional over the whole of computing history?

You sound like you are asking for language features which would be considered non-traditional in today's world only.

As such, I would offer you the strong typing seen in the Wirth languages or (especially) Ada. A person who only knows JavaScript and C would probably consider C to be strongly typed. A person with Wirth languages (or especially Ada) in their background would have a different viewpoint.
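(By way of illustration, a small made-up C fragment: every line below is accepted silently by a standard C compiler, while a Wirth-style language or Ada would reject most of it outright.)

#include <stdio.h>

int main(void)
{
    int    celsius = 1000;
    char   small   = celsius;   /* silently narrowed to fit a char         */
    double ratio   = 7 / 2;     /* integer division: ratio ends up as 3.0  */

    if (celsius)                /* any integer doubles as a boolean        */
        printf("%d %.1f\n", small, ratio);
    return 0;
}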

I think we are losing something if we continue the move away from strongly typed languages for various application domains. I really like the discipline that strongly typed languages force upon you as I think you produce better code as a result.

Simon.

PS: I'll let you decide if you consider Ada to be a Wirth language or not. He didn't design it but it's very strongly based on his language concepts.

--
Simon Clubley, clubley@remove_me.eisner.decus.org-Earth.UFP 
Microsoft: Bringing you 1980s technology to a 21st century world
Reply to
Simon Clubley

To me, "traditional" means COBOL, FORTRAN and assembly.

Reply to
Grant Edwards

On punch cards or paper tape. :)

Reply to
Grant Edwards

The main language feature I like is the ability to interact with and test code without having to recompile or deal with a complex debugger. Forth provides that with an interpreter that can run on the host; in some cases the entire tool runs on the target, so there is no host/target dichotomy.

Last year I was able to work on a TI ARM kickstart board remotely by using a simple terminal emulator over my network. The entire tool ran on the ARM board, with the files and editor on my laptop. Mind you, this was not a cell-phone-class ARM processor; it was a Stellaris Cortex-M3 device with a few kilobytes of RAM.

--

Rick C
Reply to
rickman

The @ construct, in its various forms, ties a physical address to a symbolic variable. More than any other single thing, this construct gives many high-level languages the ability to broaden their range of potential applications from high level down to close to the machine.

It is language independent and very easy to add to compilers without changing the basic form of the language.

w..

Reply to
Walter Banks

Do you mean pointer variables, like *x in C?

True in that the input programs can look about the same as before. But it can change the range of behaviours possible to the programs, making them more flexible (maybe good) but less predictable (maybe bad). So it's a trade-off like lots of other things are.

Reply to
Paul Rubin

Do you mean having a compiler extension for:

volatile uint8_t REG @ 0x1234;

rather than the standard C:

#define REG (*((volatile uint8_t*) 0x1234))

Certainly the "@" syntax is neater, and certainly it is nice if it means "REG" turns up in the linker map file and debugging data. But it is hardly a breakthrough, and does not allow anything that cannot be done in normal C syntax just as efficiently (assuming the compiler implementation is sane).

Most embedded compilers don't have anything equivalent to the "@" syntax - yet people seem to manage to use them perfectly well for "close to the machine" programming.

Reply to
David Brown

Yes and no. It limits their scope downwards ... but that actually isn't as limiting as it sounds. It does not prevent, e.g., modular systems.

I think you're confusing the closure with the function, and the scope of the closure with the scope of the function.

A closure can be defined over a function anywhere the function is in scope. A function F exported from module X may be used by a closure in module Y which imports X. Similarly a closure defined in module X may be exported from X as an opaque object.

Recall that a module may require "initialization" when it is imported. Closures defined for export would be created at that time.

You can do this even with stack bound closures. Consider that imported namespaces need to be available before the importing module's code can execute. However, even in the case of the 1st (top) module, the *process* invoking it already exists, and therefore there already is a stack.

With appropriate language support, a module which exports closures can construct them on the process stack at the point when the module is 1st imported. Then they would be available anywhere "below" the site of the import (subject to visibility).

You would need to be careful of multiple imports in such a scenario, but that is simply a namespace issue: additional imports would simply reference the existing namespace and the closures created by the 1st import. [This would not be any different given heap closures - all the imports would still reference the same objects.]

In any case, the functions involved cannot be unloaded (at least not easily) if they will be needed by something that is still running.

My point, though, is that the function and a closure that uses it are 2 different things. Their lifetimes necessarily are linked, but their visibility scopes may be very different.
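(As a rough, hypothetical sketch of that distinction -- C has no real closures, so the "closure" here is just a function pointer plus a captured environment, and every name below is invented. Both "modules" are collapsed into one file for brevity.)

#include <stdio.h>
#include <stdlib.h>

typedef struct closure {
    int (*fn)(void *env, int arg);   /* the underlying function           */
    void *env;                       /* the environment captured over it  */
} closure_t;

/* "Module X": the plain function, visible to any importer. */
static int add_offset(void *env, int arg)
{
    return arg + *(int *)env;
}

/* "Module Y": builds a closure over X's function and hands it out as an
 * opaque object.  The closure's visibility is Y's business; the function's
 * lifetime is X's.  The two scopes need not coincide. */
closure_t *make_adder(int offset)
{
    closure_t *c = malloc(sizeof *c);
    int *captured = malloc(sizeof *captured);
    *captured = offset;
    c->fn = add_offset;
    c->env = captured;
    return c;
}

int closure_call(closure_t *c, int arg)
{
    return c->fn(c->env, arg);
}

int main(void)
{
    closure_t *add5 = make_adder(5);
    printf("%d\n", closure_call(add5, 37));   /* prints 42 */
    return 0;
}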

George

Reply to
George Neuner

I saw it as an abstract question. My comment was about a clean, simple way to interface directly to the machine, completely separate from the language proper. Developers in a surprising number of languages have found some roundabout way to accomplish this, demonstrating the need for it.

The C constant-pointer kludge, although functionally similar, generally misses out on proper symbolic debugging support.

uint8_t REG @ 0x1234; is just as viable a declaration without the volatile.

Many if not most non-open-source compilers for embedded systems support some form of @.

The concept, as I have found, is just as useful in many of the languages I have used.

w..

Reply to
Walter Banks

I was thinking of something functionally similar to the outcome of constant pointers, without the baggage. Being able to access specific physical locations through C pointer (*x) variables is not the same as a simple address assignment to a variable, with source-level debugging and symbol-table support for that variable at that address.

I wasn't specifically thinking of C when I posted the comment. It is a valid comment in many languages.

w..

Reply to
Walter Banks

Far and away the most common reason to need variables at specific addresses is to access hardware registers. In such cases, the C "constant pointer kludge" is perfectly good, and decent compilers will generate solid code for it. (Often they will be able to generate more optimal code than if you use alternatives such as symbols in assembly or linker-defined addresses.)

It is true that the variable is not a normal variable available for symbolic debugging - but you don't /want/ it to be available in the debugger like a normal variable, because it is /not/ a normal variable. With normal variables, you want to be able to hover your mouse over the variable "nextCharacter" in your code and have the debugger show you the value. But you absolutely do not want that to happen when you hover over "UART_DATA_REG", because reading it could trigger a move from a FIFO, setting a CTS signal, or whatever.

An IDE for embedded development will usually have a window for "registers" or "IO registers" - /that/ is where you want to see this sort of thing, because that window and the software behind it should be aware of details about what can be read or written in different ways.

But what use is that?

No, they don't. /Some/ have a type of "@" syntax. /Some/ have other extensions to declare something as an I/O register at a fixed address, and/or linker scripts.

The "@" form is most common in compilers for brain-dead 8-bit CISC architectures, which are rapidly becoming a thing of the past. With more modern cpu architectures, it is usually more efficient to access peripherals as structs with a base address, rather than a series of explicit addresses. In many compilers, the "@" syntax or other special I/O register syntax is tightly tied to fundamental types - you can use it for uint8_t, uint16_t and perhaps uint32_t, but not for "struct uart_t".
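(For illustration, a minimal sketch of the struct-at-a-base-address style. The register layout and the base address below are invented, not taken from any real part.)

#include <stdint.h>

typedef struct {
    volatile uint32_t DATA;     /* offset 0x00 */
    volatile uint32_t STATUS;   /* offset 0x04 */
    volatile uint32_t CTRL;     /* offset 0x08 */
} uart_regs_t;

#define UART0 ((uart_regs_t *)0xE0001000u)   /* hypothetical base address */

static inline void uart0_enable(void)
{
    /* one base address in a register; the rest is offset addressing */
    UART0->CTRL |= 1u;
}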

It is certainly possible that compilers for modern cpus /could/ have a "@" syntax that works on structs, arrays, and other types. But there is /nothing/ that could be done with that syntax that could not also be done perfectly well with the "constant pointer kludge". And since these definitions are almost always buried inside an automatically generated chip-specific header file, "ugly" is of little relevance.

"what constitutes a volatile access is implementation dependent".

Other languages that don't have something equivalent to the "C constant pointer kludge" may need an extension of some sort to efficiently access I/O registers. But which languages would that be, and are they actually used in embedded programming?

Reply to
David Brown

In Ada (actually used in embedded programming), it is always as simple as this:

Some_IO_Reg : Some_IO_Reg_Type;             -- type defined as appropriate
for Some_IO_Reg'Address use 16#0000_0800#;  -- or whatever value is needed

so no kludge is required anywhere. Register types are easily and clearly described using bit level record representation clauses. In fact, pointers are almost never needed (and that is a very good thing).

-Britt

Reply to
Britt

I only know a little Ada, and haven't used it for anything serious, but I remember having seen something like that. So no need for an "@" extension there.

I don't feel there is much wrong with the standard C method of making constant pointers such as I described. Walter called it a "kludge" - I specifically use quotation marks because I don't see it as a kludge. The only problem I see is that it defines the names as macros which don't obey scope.

Reply to
David Brown

Agreed. Apart from the scope issue, I don't see any problems with the standard C "kludge".

Reply to
Grant Edwards

I referred to it as a kludge because it declares a variable as a pointer to an address, expecting the compiler to optimize it as an address reference, and in many compilers/linkers it fails to provide the source-level debugging support I would like. I wasn't using "kludge" as an insult, but to describe a less-than-optimal way of getting what is desired.

w..

Reply to
Walter Banks

On 09.03.2017 at 16:36, David Brown wrote:

Well, here is one detail: it's not entirely standard, because it involves the explicitly implementation-defined conversion of an integer constant to a pointer. I.e. in principle the same construct on two different C compilers could yield different results.

And given the fact that a hardware register, pretty much by definition, _is_ an entirely scope-less, global object, that's not much of an obstacle, either. If one were truly worried about scope, it would always be possible to encapsulate all accesses in an accessor module --- at the likely cost of at least one function call overhead.
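(A minimal sketch of that accessor-module idea, with a hypothetical status register; the names and the address are made up. Shown as one file for brevity, with the header/implementation split indicated in comments.)

/* reg_access.h -- clients see only these declarations */
#include <stdint.h>

uint8_t reg_status_read(void);
void    reg_status_write(uint8_t value);

/* reg_access.c -- the raw address never escapes this module */
#define STATUS_REG (*((volatile uint8_t *)0x1234u))   /* hypothetical address */

uint8_t reg_status_read(void)        { return STATUS_REG; }
void    reg_status_write(uint8_t v)  { STATUS_REG = v; }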

Reply to
Hans-Bernhard Bröker

Since Don asked for new useful features: there's new Ada functionality in this area currently working its way through the Ada standards committee, and it looks like it might be ratified for the next version of Ada.

Representing register bitfields as Ada bitfields currently has one major limitation in that you cannot directly update multiple register bitfields in Ada in a single R-M-W sequence without either using a temporary variable or resorting to C style bitmasks.

In my original proposal, which is currently working its way through the ARG, you would be able to specify multiple bitfields to be updated as part of a single assignment statement, which would be translated into a single R-M-W sequence by the compiler. No C-style bitmasks or temporary variables required.
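(For comparison, here is the C-style mask-and-temporary version of such an update -- register and field names invented -- i.e. the pattern the proposal would let you express as a single Ada assignment.)

#include <stdint.h>

#define CTRL_REG   (*((volatile uint32_t *)0x40001000u))  /* hypothetical register */
#define MODE_MASK  (0x3u << 0)    /* two-bit MODE field    */
#define SPEED_MASK (0x7u << 4)    /* three-bit SPEED field */

void set_mode_and_speed(uint32_t mode, uint32_t speed)
{
    uint32_t tmp = CTRL_REG;                  /* read               */
    tmp &= ~(MODE_MASK | SPEED_MASK);         /* clear both fields  */
    tmp |= (mode << 0) | (speed << 4);        /* modify             */
    CTRL_REG = tmp;                           /* one write back     */
}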

See the linked proposal for details. If the formal language at the beginning puts you off, then skip down to the appendix section, which contains my original proposal.

Simon.

--
Simon Clubley, clubley@remove_me.eisner.decus.org-Earth.UFP 
Microsoft: Bringing you 1980s technology to a 21st century world
Reply to
Simon Clubley

That's true (and I have been a bit imprecise about distinguishing between the "standard C method", meaning "the method used by most people in C", and "standard C", meaning "fully defined by the C standards").

But any "sane" compiler will, in practice, implement these things in the same way - so it is a much more portable solution than an @ extension.

Yes.

Reply to
David Brown
