C++ and stuff on embedded processors, again

For hard real time, pauses may also be fine. "Real time" just means that the timings have fixed upper limits. As long as the pauses can never cause timings to be missed, they do not cause a problem.

Garbage collection will /always/ take some time (as will any other method of recycling resources) - you just have to make sure it does not take /too/ long at the wrong time.

Hotspot at run time means spending time tracking statistics about how often code sections are run - so there is an overhead there. It can also be predicted at compile time (such as by looking at which functions are called within big loops, or even just manual annotations in the code) - these have no run-time overhead, but can be less accurate.
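For example, gcc lets you write such annotations by hand (a rough sketch - the function names here are invented, and how much the hints actually buy you depends on the compiler and target):

#include <stdio.h>

__attribute__((cold)) static void log_error(const char *msg)
{
    fprintf(stderr, "%s\n", msg);       /* rarely taken path */
}

__attribute__((hot)) static void process_sample(int sample)
{
    printf("%d\n", sample);             /* the inner-loop work */
}

void run(const int *samples, int n)
{
    for (int i = 0; i < n; i++) {
        if (__builtin_expect(samples[i] < 0, 0)) {   /* tell gcc this branch is unlikely */
            log_error("negative sample");
            continue;
        }
        process_sample(samples[i]);
    }
}

Profile-driven optimisation effectively fills in the same sort of hints from measured statistics rather than the programmer's guesses.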

My understanding was that C# was byte-compiled, much like Java usually is. But I am sure it can also be fully ahead-of-time compiled (again, like Java), and I have no idea of the quality or features of typical C# VM's.

Profile-driven optimisation has been around for a very long time, where run-time statistics are used to improve the next compilation of the code. It is unlikely that dynamic run-time JIT on native code will do a better job there, but it is possible - especially before compilers typically did things like function cloning and link-time optimisations.

I think it is surprising that you think many people think "JIT == slow". I suspect most people think JIT is a way to make byte-code execution in VM's run faster.

Reply to
David Brown

You got me, it was a half-trolling, half-serious statement. IMO dynamic linking is certainly becoming less important by the day; compared to, say, 1990, RAM and disk space are basically unlimited. It's like, it's hard to care about "wasting" run-time memory when you often have gigabytes of the stuff available. Whether that's a good or bad mentality is another discussion...

I mean, if you allow yourself to broaden your definition of "application" 99% of the "applications" an average home user executes on a day-to-day basis are some interpreted JavaScript that gets sent down the pipe at them from a server.

Yes, my point was that the stuff that's required to be written in C/C++ for performance reasons is a relatively small percentage of the codebase. The resource files and scripting stuff take up 95% of the rest of it. Modern AAA titles sometimes require 10 or 20 gigabytes of disk space; only a small percentage of that is compiled C++.

Reply to
bitrex

OK. In this group, it is hard to tell - there are plenty of people (not you) that make posts /far/ wilder than any troll!

I disagree entirely.

There are other good reasons for dynamic linking, even if you ignore the efficiency aspect. On *nix, the use of common shared libraries as part of the distribution means that bug fixes, security fixes, and other improvements can be made on libraries and fix the problem for all software that uses it. It also means common versions for common functions. For example, I use subversion for versioning control. On my Linux system, all clients (IDE plugins, command-line clients, file-manager plugins, etc.) use the same system libraries, and therefore agree on the same format for meta information stored in the filesystem. On my Windows system, these different tools use their own libraries - sometimes I am unlucky and they have different major versions and different metadata formats, causing all sorts of "fun".

Dynamic linking done right is a useful feature.

(That does not mean I disagree with the comments in this thread about the complications of getting it right, such as deciding which side of the API handles memory allocation and freeing.)

I don't define it that broadly - at best, that would be a web application.

Also note that I don't think it is a /bad/ thing that there is an increasing proportion of applications written in interpreted, byte-coded or JIT'ed languages. It means that the developers can spend more time on functionality and testing, and less time farting around tracking their mallocs and frees, or hand-writing yet another hashmap implementation. I just don't agree that compiled code is obsolete for applications.

It is not - most of the codebase is going to be in C or C++ (or, these days, perhaps other compiled languages like Go or Rust).

An even tinier percent of that is scripting code - the movies, pictures, textures, 3-D descriptions, etc., make up the overwhelming bulk of most games - and that is not code at all.

Reply to
David Brown

C++11 and later's smart pointers seem like a reasonable compromise to me. They are deterministic, have little to no overhead, and, used religiously for resource acquisition, ensure that the application developer will _never_ leak memory.

When you have the advantage that your language makes a distinction between stack memory and heap memory the solution to garbage collection is pretty straightforward - don't make garbage. With non-reference counting smart pointers most of the work in ensuring that you don't is done at compile time, not runtime. A program that uses the reference counting smart pointers everywhere will be less efficient than one that uses a dedicated garbage collector, yeah. So don't do that, that's bad design, there's no reason to.
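A minimal sketch of the non-reference-counting case (the type and names are made up; the point is that ownership lives in the type, so the cleanup point is fixed at compile time):

#include <memory>
#include <vector>

struct Sensor {
    std::vector<int> samples;
};

std::unique_ptr<Sensor> make_sensor()
{
    return std::make_unique<Sensor>();   // one heap allocation, ownership is explicit
}

void use_sensor()
{
    auto s = make_sensor();              // s is the sole owner
    s->samples.push_back(42);
}                                        // destructor runs here - freed deterministically, no GC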

Reply to
bitrex

I've got a buddy in the biz, I think I'll ask to refresh my memory. ;-)

I don't see how it could be that way. Most game dev companies simply don't employ that many hotshot C/C++ devs, and it doesn't take a shitload of code, relatively speaking, to write a graphics/physics engine.

Dragon Age: Origins had nearly a million words worth of character dialogue alone. I guess that's technically a resource but I'd guess the scripting code to organize and execute all that appropriately ain't small, either.

Reply to
bitrex

On a sunny day (Tue, 03 Oct 2017 14:40:19 +0200) it happened David Brown wrote in :

I have not done any game coding for eons... I do write a whole lot of video related stuff. Sometimes 'video' is more efficient than keeping logs and then doing a replay of say positions of vessels or planes.

So, in that context, I added yuv format output to xgps_mon (the app for xgpspc). So if you run it like:

  xgpspc_mon -y | ffmpeg -f yuv4mpegpipe -i - -f avi -vcodec mpeg4 bp76.avi

just all day... yesterday's 'log':

  633649398 Oct  3 07:52 bp76.avi

  # mediainfo bp76.avi
  General
  Complete name            : bp76.avi
  Format                   : AVI
  Format/Info              : Audio Video Interleave
  File size                : 604 MiB
  Duration                 : 11h 50mn
  Overall bit rate         : 119 Kbps
  Writing application      : Lavf54.6.100

  Video
  Format                   : MPEG-4 Visual
  Format profile           : Simple@L1
  Format settings, BVOP    : No
  Format settings, QPel    : No
  Format settings, GMC     : No warppoints
  Format settings, Matrix  : Default (H.263)
  Codec ID                 : FMP4
  Duration                 : 11h 50mn
  Bit rate                 : 119 Kbps
  Width                    : 800 pixels
  Height                   : 480 pixels
  Display aspect ratio     : 1.667
  Frame rate               : 1.000 fps
  Resolution               : 24 bits
  Colorimetry              : 4:2:0
  Scan type                : Progressive
  Bits/(Pixel*Frame)       : 0.309
  Stream size              : 603 MiB (100%)
  Writing library          : Lavc54.23.100

So, note the frame rate of 1 fps :-) fast enough for boats and planes and every sensor. Now play it back with:

  mplayer -fps 25 bp76.avi

(or any other frame rate, for faster playback with mplayer). I think a file size of 633,649,398 bytes for a whole day of video is cool.

This excerpt from today for example detects the coastguard patrolling (2):

formatting link

Try downloading it (25 MB) with: wget

formatting link
and then play it with:

  mplayer -fps xgpspc_mon_trace.avi

I bought the world database of all ship MMSI numbers cheap on ebay; every ship is looked up in real time, and some lookups result in alerts...

Also note the storm alert, derived from the air-pressure changes.

Anyways, video is fun :-)

Reply to
Jan Panteltje

This excerpt from today for example detects the coastguard patrolling (2):

formatting link

Try downloading it (25 MB) with: wget

formatting link
and then play it with:

Errata: mplayer -fps 25 xgpspc_mon_trace.avi

Reply to
Jan Panteltje

Yes indeed - smart pointers here make it much harder to get things wrong (but don't assume they will /never/ leak - programmers can be very creative in finding ways to add bugs). They do not make memory management entirely deterministic, however. They make the destructor run at a known time, but that typically ends up calling "free" or a similar memory deallocator - and such functions can have significant variations in run time. Similarly, allocation is run at a known time, but the underlying "malloc" may have very unpredictable timing. It is the varying run-time of "malloc" and "free" that makes them unpopular in realtime embedded systems, and the same applies when you have C++ smart pointers.

Exactly. Usually unique_ptr pointers have exactly zero cost at run-time or in code space.

Yes, there is good reason to do it - if you need a shared pointer that can be accessed by more than one bit of code, use it. But don't use such pointers if you don't need them.
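Roughly the cost difference (a sketch, not measurements - the names are invented): copying a shared_ptr bumps an atomic reference count, while handing over a unique_ptr is just a pointer move.

#include <memory>

struct Widget { int x = 0; };

void take_ownership(std::unique_ptr<Widget> w)
{
    // w is destroyed at the end of this scope - no counting anywhere
}

void keep_a_copy(std::shared_ptr<Widget> w)
{
    // the copy made to pass w did an atomic increment; its destructor does a decrement
}

void example()
{
    auto u = std::make_unique<Widget>();
    take_ownership(std::move(u));        // transfer of ownership, essentially free

    auto s = std::make_shared<Widget>();
    keep_a_copy(s);                      // pays for the shared ownership it actually needs
}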

Reply to
David Brown

And most of the people writing the game code are ordinary C++ developers, not hotshots - you keep them for the low-level libraries.

It is not just a "technical" distinction, it is a /real/ distinction. The resources outweigh the scripting by many orders of magnitude, and dialogue is a resource.

But maybe modern games make use of a higher proportion of scripting for handling things like user responses to the dialogues. And I suppose that for games with large contents, the scripting part eventually will outweigh the main code, as the scripting scales with the content while the low-level stuff remains reasonably constant.

Reply to
David Brown

Agreed, and precisely. I was too brief (for once)!

Yes, so the question becomes: are the enhancements worth it, and can the "image warm-up phase" be tolerated?

The answers are usually "yes". Proviso: it is more likely to be true for long-lived applications like web servers and IDEs, and less likely to be true for short-lived applications.

Hmm. That sounds like you know how to circumvent the halting problem!

Another issue is that in trying to optimise everything, you can fail to fully optimise that which really matters. Doubly so if it changes during the course of execution.

Yes, that's the case. I wonder if that "bytecode AoT optimisation/specialisation" is responsible for the inordinately long time it takes to install windows updates.

Caveats: I don't run windows any more, and my C# info is largely limited to a talk by Anders Hejlsberg before C# was unleashed. I wasn't interested in C# since he couldn't give a useful answer on why C# was /significantly/ better than Java. (Other people present independently came to the same conclusion.)

It is a red-queen's race, where the result should be continuously re-evaluated.

That's merely what I've observed over the years :(

Reply to
Tom Gardner

Agreed.

No, unfortunately. But you can certainly get somewhere:

void foo(void)
{
    initialise();
    for (int i = 0; i < 1000000; i++) {
        doSomething();
    }
    tidyUp();
}

It does not take a magical compiler to see that any effort optimising initialise() and tidyUp() is negligible compared to doSomething().

Absolutely true - see above. Compile-time hotspot analysis avoids run-time overheads but is not necessarily accurate.

Nah, that's just Microsoft's patented "look and feel". They don't want you to mistake your system for a Linux machine.

Reply to
David Brown

Yep. The distinction between "coder" and all the other people who work on a project has blurred a lot now that most everything is handled via interpreted scripts on large projects. It saves time once you have a VM sandbox where nothing a user can do can "break" the build in any material way, so you have artists, playtesters, quality assurance people all with their own revision of the game writing scripts for it. The revisions that make the cut get merged into the trunk and then redistributed.

It's not a good use of resources for a QA person or a C/C++ coder to notice something wrong with basic gameplay that doesn't jibe with what the spec says it's supposed to be - something that isn't an "engine" issue - and then have to notify the code team and wait for their fix to check whether the problem is rectified. Just DIY, and if the result works for you, post it for review.

Reply to
bitrex

The halting theorem just says there's no general algorithm to solve it for all inputs. Once you start putting any kind of qualifications on "all inputs" the situation improves a lot...

Reply to
bitrex

I've written a few of them, but in a smallish bare-metal instrument program, we mostly use a main state-machine loop that uses flags to coordinate with interrupt routines. We find that we don't need a multitasking kernel - that is, we don't need to suspend/resume persistent processes.
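A stripped-down sketch of the pattern (the names are made up, and the real ISRs would be hooked into the vector table):

#include <stdint.h>

static volatile uint8_t adc_ready;      /* set by the ADC interrupt */
static volatile uint8_t timer_tick;     /* set by a periodic timer interrupt */

void adc_isr(void)   { adc_ready = 1; }
void timer_isr(void) { timer_tick = 1; }

int main(void)
{
    for (;;) {                           /* the main state-machine loop */
        if (adc_ready) {
            adc_ready = 0;
            /* read and process the new sample */
        }
        if (timer_tick) {
            timer_tick = 0;
            /* advance the state machine, update outputs */
        }
        /* nothing to suspend or resume - each pass just polls the flags */
    }
}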

If we need nasties like TCP/IP, web page service, local file storage, we sometimes cut over to Linux. But we have done that stuff bare-metal, too.

--

John Larkin         Highland Technology, Inc 
picosecond timing   precision measurement  

jlarkin att highlandtechnology dott com 
http://www.highlandtechnology.com
Reply to
John Larkin

There might be some validity to that for small/academic problems. But once you have something the size of, say, an enterprise server, it is far less likely to be usefully valid.

Reply to
Tom Gardner

Indeed, but that is effectively one microbenchmark. It falls down when there are many many possible microbenchmarks.

It doesn't need to be "accurate" merely "better". Besides, when caches and networks are involved, the concept of "accurate" becomes moot :(

I recently told an NI salesman that I wasn't clever enough to run windows, so they should create more linux clients. He laughed - until I pointed out that I didn't know how to protect against viruses nor how to recover machines that MS had bricked.

Reply to
Tom Gardner

@Jan: tried to look up the fdesign app you referenced, but came up short. Do you have a link?

Cheers

Klaus

Reply to
Klaus Kragelund

On a sunny day (Tue, 3 Oct 2017 13:12:30 -0700 (PDT)) it happened Klaus Kragelund wrote in :

fdesign is just part of libforms (also called 'xforms', not to be confused with 'xforms' hehe).

Although libforms can be installed on a Raspberry Pi as a binary with apt-get install libform-dev, it also requires a lot of other stuff (I have it working on the raspi). That binary does _not_ seem to have fdesign.

Other people than the original developers seem to be maintaining libforms these days,

So I have taken the liberty to put an older source code package on my site: wget

formatting link
If you unpack that with:

  tar -zxvf xforms-1.0.tar.gz
  cd xforms-1.0

you will already see the fdesign directory. Next is:

  xmkmf -a

(for that you need to have 'imake' installed; apt-get install imake on Debian-likes). Better see the README00.

Not sure which one I am running now on this PC.... (I only update things when I buy new hardware, and that was already a few years ago for this PC.)

I also have an html manual..

But maybe you should try these links first, before playing with my older version ;-)

formatting link
formatting link
formatting link

So I have _not_ tested the latest versions; I normally compile from source anyway, but my raspberry was an apt-get.

If you get stuck ask again.

Reply to
Jan Panteltje

I like the state machine design pattern a lot for embedded, and you definitely don't need C++ or OOP to make that work.

The place where I feel OOP has a place is in defining the abstraction between the state machine which organizes the logic of the application, and the "drivers" which actually do the dirty work of interfacing the embedded system to hardware.

An ideal state machine has no conception of global variables or "mutable state"; it's a purely functional structure. Using OOP for the interfacing allows you to have an abstraction layer to the hardware that's extensible and easy to modify, but all the mutable state is wrapped up there, and ideally one can maintain a strong firewall between the physical implementation of some task like file storage and the logic defining how that resource is used in the application.
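A bare-bones sketch of that split (all names invented): the transition function is pure, and the only mutable thing is the driver object behind the interface.

#include <cstdint>

// Hypothetical hardware abstraction - the state machine never touches registers directly.
struct Storage {
    virtual bool write(uint32_t addr, uint8_t value) = 0;
    virtual ~Storage() = default;
};

enum class State { Idle, Logging, Error };

// Pure logic: the next state depends only on the inputs, not on any global mutable state.
State step(State current, bool start_requested, bool write_ok)
{
    switch (current) {
    case State::Idle:    return start_requested ? State::Logging : State::Idle;
    case State::Logging: return write_ok ? State::Logging : State::Error;
    case State::Error:   return State::Error;
    }
    return State::Error;
}

// The dirty work is confined to whatever concrete driver implements Storage.
void run_once(State& s, Storage& storage, bool start_requested)
{
    bool ok = storage.write(0x1000, 0xAB);
    s = step(s, start_requested, ok);
}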

Reply to
bitrex

GCC is pretty awesome at "knowing" what you're trying to do and optimizing it nowadays. With move assignment and construction it avoids making unnecessary copies like the plague; even if you return some big data structure like a vector from a function by value instead of reference it definitely won't copy it if it doesn't have to.
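For example (a toy case - whether the copy is elided outright or turned into a move can vary with compiler and flags, but with modern GCC it won't be a deep copy):

#include <vector>

// Returned by value, but the vector's buffer is moved or the copy elided entirely.
std::vector<int> make_table()
{
    std::vector<int> v;
    v.reserve(1000);
    for (int i = 0; i < 1000; i++) {
        v.push_back(i * i);
    }
    return v;
}

int main()
{
    std::vector<int> table = make_table();   // no element-by-element copy happens here
    return static_cast<int>(table.size() % 256);
}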

Tail call optimization has been around a while, but I've found that even with recursive functions which aren't obviously tail-call optimizable - but could be if, say, an accumulator were used - the compiler will suss it out and rework the function so that it can be optimized in the generated asm.
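A toy example of what I mean (whether GCC actually rewrites the first form into a loop depends on the optimisation level; the second form is explicitly tail-recursive):

// Not a tail call: the multiply happens after the recursive call returns.
unsigned long long fact(unsigned n)
{
    return (n <= 1) ? 1 : n * fact(n - 1);
}

// Accumulator version: the recursive call is the last thing done, so it can become a loop.
unsigned long long fact_acc(unsigned n, unsigned long long acc = 1)
{
    return (n <= 1) ? acc : fact_acc(n - 1, acc * n);
}

In my experience -O2 turns the second into a plain loop, and it can often manage the first too.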

Reply to
bitrex
