filling remaining array elements with fixed value

[...]

[Example that uses list comprehensions and includes the "special values" and curly braces and whatnot.]

If this were comp.lang.python, I'd be obliged to post a version that uses iterators and itertools instead of list comprehensions, but we'll spare the denizens of c.a.e...
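For reference, a minimal sketch of what such an itertools-based version might look like (the byte values, the 1024-element length, and the eight-values-per-line output format are illustrative assumptions, not anything from the original post):

from itertools import chain, islice, repeat

SPECIAL = ("0x0A", "0x0B", "0x0C")   # leading values that differ from the filler
LENGTH = 1024                        # total number of array elements

# Pad the special values out to LENGTH with 0xFF, lazily.
data = islice(chain(SPECIAL, repeat("0xFF")), LENGTH)

print("/* Automatically generated file, do not edit. */")
print("#include <stdint.h>")
print("uint8_t data[] = {")
print(",\n".join("    " + ", ".join(chunk)
                 for chunk in zip(*[iter(data)] * 8)))  # eight values per line
print("};")

The idea is the same (mostly 0xFF with a couple of exceptions), just expressed with lazy iterators instead of building a full list first.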

--
Grant Edwards               grant.b.edwards at gmail.com
Yow! I've got a COUSIN who works in the GARMENT DISTRICT ...
Reply to
Grant Edwards

Even that's a bit tricky. The compiler doesn't need the linebreaks, and at the end of the day, conceptually you've got an array of all 0xFF with a couple of exceptions. So build the array that way:

data = ['0xFF'] * 1024
data[0:3] = ['0x0A', '0x0B', '0x0C']
assert len(data) == 1024
print("""
/* Automatically generated document, do not edit. */
#include <stdint.h>
uint8_t data[] = {{
{0}
}};
""".format(','.join(data)))

Memory and processor cycle inefficient as all hell. Still don't care.

--
Rob Gaddi, Highland Technology -- www.highlandtechnology.com 
Email address domain is currently out of order.  See above to fix.
Reply to
Rob Gaddi

Very unsure, of course, which was my point: having a virtual machine snapshot from 2014, virtualizing a 2014 machine, will not help me in 2029, if there are no machines/hypervisors that can run that snapshot.

David Brown advised using KVM for virtualization, because KVM can "cross-virtualize", for example running an x86 VM in emulation on a processor of a different architecture. I will look into that, thanks David!

--
Niklas Holsti 
Tidorum Ltd 
niklas holsti tidorum fi 
      .      @       .
Reply to
Niklas Holsti

Thanks, David, for your helpful answer.

Today they will... but will they, in 2029, or 2040? I am unsure.

This emulation ability is certainly a step towards a solution.

As I understand your suggestion, it involves the following steps I should take to set up a long-term maintenance system that does not assume the current host-PC architecture and OS will survive until 2040:

  1. Find a virtual HW composition, probably based on x86, that a) QEMU can emulate, b) is supported by the OSes on which our tools run, and c) runs our tools, too.

  2. Configure KVM+QEMU to emulate this virtual HW.

  3. Install our OSes and tools on a VM using this virtual, emulated HW.

  4. Maintain KVM and QEMU, using their source code, to keep them working on future host PCs and to preserve their ability to emulate the HW composition defined in step 1.

This looks possible in principle. Far from easy, though.
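To make steps 1 and 2 a little more concrete, here is a purely illustrative sketch (not project-specific) of freezing the emulated HW composition in a single script, so the same VM can be recreated on any future host that can still build and run QEMU. The binary name, flags, memory size, and disk-image path are assumptions, and flag spellings vary between QEMU versions:

#!/usr/bin/env python3
# Illustrative sketch: pin down the emulated hardware composition in one
# place, so the VM definition itself becomes part of the archive.
import subprocess

QEMU = "qemu-system-x86_64"        # plain x86-64 system emulator
DISK = "devhost.qcow2"             # placeholder name for the archived tool-VM disk image

cmd = [
    QEMU,
    "-accel", "tcg",               # pure software emulation; no KVM/VT-x required
    "-machine", "pc",              # generic i440FX-style PC
    "-cpu", "qemu64",              # generic 64-bit CPU model
    "-m", "4096",                  # 4 GiB of guest RAM
    "-drive", f"file={DISK},format=qcow2",
    "-nographic",                  # serial console instead of a graphical display
]
subprocess.run(cmd, check=True)

On a current host one could add KVM acceleration for speed, but leaving it out, as above, is what keeps the machine definition independent of the host's virtualization HW.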

Good advice, but unfortunately not fully possible for us, because the customers require us to use some closed-source tools. Fortunately, the compiler is open source (GNAT Pro).

The project in question is part of the ESA/EUMETSAT Meteosat Third Generation programme, which intends to build six satellites of two different types, but will keep only one or two in orbit at any given time -- the rest of the built satellites will be stored (i.e. "archived") and launched later, as and when the flying ones are retired.

Yes, we plan to archive some physical computers for the development and maintenance environment, but we will do that as late in the project as possible -- when the *next* generation of PCs no longer supports our (frozen) tools.

--
Niklas Holsti 
Tidorum Ltd 
niklas holsti tidorum fi 
      .      @       .
Reply to
Niklas Holsti

Hi Don,

I think we're in violent agreement. :)

Within the last six months I was asked to come on board a project that required porting from one assembly language to another that was rife with very tedious macros (another tool "feature" that can be grossly misapplied, IMO), and for which the lead engineer used several tools he was adept with to build and test the project, including his own customized Scheme interpreter, GNU make, a central code autogenerator based on awk, etc. And all this on XP using MS Visual SourceSafe!

It was a nightmare.

--
Randy Yates 
Digital Signal Labs 
http://www.digitalsignallabs.com
Reply to
Randy Yates

In the event of your virtualization software no longer running the VM, or no longer running on the hardware of the day...

Could you not run the obsolete virtualization software as a virtual machine on the new virtualization software? :)

--

John Devereux
Reply to
John Devereux

Correct: only certain privileged instructions are trapped and emulated.

It's reasonable to worry that new chips won't support some mode that you need going forward, but consider that Intel's "i" series processors today can software emulate a Pentium/MMX faster than the actual chip ever ran.

As long as new chips retain ISA compatibility, or there is a decent emulator available, there should not be a problem. At worst, you might need to run the VM software on top of the emulator.

As an example, VMware still has downloadable "player" versions [which run VMs but don't create them] for every generation of their software. If you need to run a v1.1 VM created for an 80386, you can. These players can be run on top of QEMU or Bochs x86 emulators.

Long term, it's more likely that you can keep a VM in service than an actual computer. I know DonY has had good luck keeping old machines going for decades, but in my experience his is an unusual case.

If you are dealing only with software tools and don't need to keep special hardware, then VMs definitely are the way to go.

If you do need to keep hardware, remember that every successful bus architecture ever made is still available in an industrial backplane. It may cost you a limb, but it's possible to keep all your old bus-tied hardware and still run your software in a VM on a modern CPU. You may be able to combine your old development systems into (perhaps many) fewer boxes, though if you have incompatible hardware setups you may need to use a bare-metal hypervisor rather than an OS-hosted one.

YMMV, George

Reply to
George Neuner

I think, to some degree, it's "only natural" for folks to come up with solutions that fit *their* abilities/visions/expectations/etc. "Why *personally* take on extra work/risk for no "personal" gain?"

In the corporate setting, you're not rewarded for anticipating future needs -- just meet the deadline/target cost/etc. If you comply with any *explicit* requirements on your methodology, then you're golden.

In the "independent" setting, you face similar (though different) constraints. E.g., if I give a fixed bid, then any "extra costs" above whatever the "cheapest/quickest" way I can do it come out of *my* pocket (extra time and/or expense). On a T&M job, then the costs of "doing it right" (whatever *that* means!) get passed directly to the client -- in a very *obvious* manner ("Why am I paying you to buy all these tools and come up with these 'elaborate' schemes? Can't you just use...?")

In my case, I only contractually agree to provide sources, schematics and hardware prototypes as deliverables. How I *get* to that point is entirely up to me! And, as I only agree to support a design to the extent of *bug* fixes (i.e., I make no guarantees as to my willingness to take on enhancements, derived products, etc.), then I just have to make sure whatever approach I use is viable for the duration of the support aspect of the contract (of course, bugs can, theoretically, turn up at *any* future time -- a flaw in my contracts! :< )

But, it often (esp in the FOSS world) appears that *no* consideration for others is made in the choice of tools. E.g., writing something in perl to do what a sed script could just as easily do. Or, using the "newest" compressor when the savings over a more traditional compressor are negligible ("Yippee! You saved 2KB! That will trim my download time by a few milliseconds and save me half of a 4KB disk block!")

Or, some convoluted build scheme (e.g., some even require you to use on-line servers to do the build) instead of just "make" (and, let's not forget all the variations on make!). Jaluna's build system was, by far, one of the most needlessly complex!

And, even if you buy into the reasoning for whatever choices the original/previous developer made, there's never anything that describes the process and/or *why* it is (or needs to be) the way it is!

These folks are the ones who should be *forced* to perform some maintenance aspect on *their* project 5 or 10 years after release ("What do you mean, you can't do it? Didn't *you* come up with this scheme? If *anyone* should be able to do it, it should be *you*, right?" :> )

On a related -- though different -- note, I have not yet found a *good* way to provide a "roadmap" to the code in my projects. E.g., for the hardware, I can draw a block diagram (or, do a hierarchical design) that shows "The Whole" and lets the viewer drill down to the detail of interest.

But, I've not been able to come up with a similar mechanism for software. Especially when it's a "system" and not "just a program". I.e., *after* you understand the system and have had some experience navigating the codebase, you can *probably* find your way around. But, when exposed to it *cold*, it's just too overwhelming: "Where do I start?"

Reply to
Don Y

The essence of the problem is that *someone* must provide the support for whatever tools -- actual hardware, software, emulation, etc. If you are "lucky" (or very conservative) and pick something that *stays* "mainstream", then your *chances* of benefiting from SOMEONE ELSE providing that support (*ignorant* of your needs) are good.

OTOH, if you are *unlucky* and make a choice that the "market" eventually abandons, then you need to be in a position to "support yourself".

There have been a lot of different processors, languages, etc. in the past 15 years (reflecting your 15-year timeframe backwards). How many of them are still "supported"? Can you find a 68040 emulator? 99000? 32000? Z380? etc. (*other* than "hobbyist" attempts)

Supporting *compilers* (assemblers, linkage editors, etc.) is almost always possible -- even if you have to roll your own. These are just "text processing" applications, of sorts. And, if push comes to shove, they needn't be very *speedy* (e.g., preserve their binaries and documentation for the CPU/OS on/in which they execute and you can always write a *simulator* that can be dog-slow as it slogs through the executable).

Interactive applications (tools) are the big risk. Not just because they can be tedious to use "at reduced speed" (run your IDE on a 100MHz PC someday and see how much fun *that* is! :> ). But, also, because (IME) many desktop apps and toolkits (libraries) have inherent races that you *aren't* victimized by solely because the machine is fast enough to make these "critical regions" small enough that you don't encounter them (often). Slow the processor down and, suddenly, those regions grow to a size where your "human speed" actions can easily trip them up.

But, by far, the *biggest* risk is the actual silicon itself. Can you be sure to find components 5, 10, 15 years hence? Will those components be the *same*, functionally, as the ones you have specified today? Are there aspects of your design that subtly (invisibly?) rely on some characteristic of *these* devices OF WHICH YOU MAY NOT BE AWARE?

[I was asked to come up with a new design for a *hand tool* many years ago because a change in vendors had caused one of the new components for the *old* design to be BETTER than it had been, previously. As a result, the production line was unable to build the old design as it had adapted to the flaws in the old component!]

If you do a "big buy" and warehouse the "spares", are you sure they are operational at the time of purchase? Will your storage techniques ensure their *continued* functionality years later? What if your warehouse catches fire? While you can cheaply duplicate your sources, binaries, tools, etc. keeping an off-site backup of your *inventory* essentially means *doubling* your inventory!

Long term support is not an enviable position to be in. Ideally, make it someone else's problem! :>

--don

Reply to
Don Y

DonY had to start preparing for long-term support long before emulators, hypervisors, etc. were available. And, do so *without* a "support staff" onto which he could pass that responsibility. :>

I would discourage archiving hardware just because it *is* hard to keep it running. Especially vintage 80's hardware (where things were far less "standardized" than in today's machines). And, there *are* alternatives, today.

I started playing with this option (at your suggestion, George) many months ago. I'm now at a point in my career where I can shed those support *requirements* and see how I *might* have done things.

One thing I discovered was it is much easier to set up a machine that *just* runs VM's (than to try to have that capability alongside your "regular workstation tools"). So, I set aside one of the smaller servers for that role. And, in keeping with my preference for small spindles, I've opted to just build "small systems" on individual *removable* ~140GB drives. So, I can pull a "system" and set it on the shelf, "cold" (instead of leaving the drive on-line where it can be the victim of a power glitch or careless "rm -r *", etc.).

Unfortunately, it takes a *lot* of time to set up all these VM's and the various tools that the tools *in* each of them require! I've had to rethink how to partition the "systems" so I don't end up having to add -- and maintain -- Tool_X to several different VM's. I haven't been able to sort out if I can run multiple VM's with the *illusion* of a single unified desktop (e.g., have schematic and PCB tools in one VM yet *see*/manipulate those objects as well as "drag and drop" between that and another VM hosting software devel tools).

(Having a personal IT department would be *so* nice!)

Keep in mind any devices used for your development activities that sit *outside* the "PC" also are of concern! If you can't "talk" to your target at some future date, some of those most precious tools that run *on* the PC may be of little use!
Reply to
Don Y

Yes -- and saying "just use a virtual machine", as some have said to me (outside USENET, I mean) is not enough.

In my case, I only need to keep a SW maintenance environment (compiler, linker, testing tools) working. An emulator for the host computer could be sufficient, and should not be too hard to maintain if it is written portably in a standard mainstream language and does not rely on specific HW support.

It seems that current hypervisors require some specific virtualization support from their host processors, which could become a problem in the long term. The KVM website says that it requires virtualization HW, but I'm not sure whether that also applies to the KVM+QEMU combination. On the other hand, perhaps QEMU alone is sufficient -- I won't need to run multiple VMs on the same host.

I would not want to roll my own Ada compiler, however. (Well, actually I would like to do that, but it would be too expensive and/or take too long.)

But the simulator/emulator has to be complete and accurate enough to run the operating system on which the compiler/linker or other tool runs. That requires simulation of a whole computer, not just a processor. QEMU can simulate some systems, but I don't yet know if it can simulate a system on which my development OS and tools can run. Of course I could extend QEMU...

Fortunately I won't need any such.

Fortunately, again, that is not my problem. In fact, all the target systems will be built soon, and those that are not deployed at once will be moth-balled for future use. In nitrogen at controlled temperature and humidity, I believe.

--
Niklas Holsti 
Tidorum Ltd 
niklas holsti tidorum fi 
      .      @       .
Reply to
Niklas Holsti

Of course -- that just changes the problem from "how do I support my tools" to "how do I support the VM that my tools *rely* upon".

Is *all* of your testing done without dealing with the "real world"? I.e., passing test cases (const's) to the UUT and verifying the results are "as expected"?

I think (speaking without detailed knowledge of your specifics) that QEMU or similar "simulator" can probably do the job for you. The problem then becomes ensuring that QEMU will run on "whatever" a workstation looks like in 2029!

No. It only needs to *emulate* the features of the OS that the applications require! E.g., it probably doesn't need to support signals (directly), timing primitives, limited IPC/pipe support, etc. It almost certainly wouldn't need to know how to talk to *real* "devices", etc. Even filesystem support could be hacked (as *you* know where all reads and writes for a particular compiler invocation should be directed!)

If you're using GNAT, then just grep the sources for all calls to the OS/filesystem/etc. (elide stdlib and its ilk and see where the "undefined references" occur).
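Purely as an illustration of that kind of survey (the regex patterns, file extensions, and directory handling below are assumptions, not anything specific to the GNAT sources):

#!/usr/bin/env python3
# Illustrative sketch: walk a source tree and flag lines that mention common
# OS-interface calls, as a first cut at what an emulated OS (or a thin
# syscall shim) would actually have to provide.
import re
import sys
from pathlib import Path

OS_CALLS = re.compile(
    r"\b(open|read|write|close|lseek|stat|fork|exec\w*|mmap|pipe|signal|kill)\b")

root = Path(sys.argv[1]) if len(sys.argv) > 1 else Path(".")
for path in root.rglob("*"):
    if not path.is_file() or path.suffix not in {".c", ".h", ".adb", ".ads"}:
        continue
    for lineno, line in enumerate(
            path.read_text(errors="replace").splitlines(), start=1):
        if OS_CALLS.search(line):
            print(f"{path}:{lineno}: {line.strip()}")

The output is noisy, of course, but it gives a rough upper bound on the OS surface the tools actually touch.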

You *don't* use gdb or any other interactive tools for debugging?

Any problem that can be someone ELSE's is preferable to problems that must be *yours*! :>

Note that there *are* groups who actually are focused on issues of preserving *media* for (VERY) long periods of time. Most of those solutions tend to require a bit of an investment, though.

OTOH, that sort of investment may be acceptable to the folks underwriting your effort.

I've recently been lamenting how much "old research" is essentially "lost" due to poor preservation techniques. Even things like microfiche which *tried* to make such preservation (of *paper*) more practical have proven ineffective after just *decades*. It's disheartening to imagine how much will be "reinvented", needlessly, as other things "slip away" due to inattention, disinterest, etc.

Reply to
Don Y

If the day comes when we can't get a computer to read a simple file of bytes, there will be lots of bigger problems than your particular case! I know Windows tries to make that sort of thing more difficult with each generation, but fortunately we have Linux, the BSDs, and other Unix systems - these are going to be around for a long time to come, and old versions can still run fine on new hardware.

But you should make a point of sticking to mature and stable filesystems - prefer ext3 rather than btrfs, for example (FreeBSD will work with ext3, albeit without journalling, giving you a second source).

If it were easy, your customers would not be paying you big money to solve the problem! But yes, that's pretty much what I had in mind.

If you want extra points, get a PPC based computer and check you can run the VM on that too. While I would not expect that in 2040 we will have PPC computers but no x86 compatibles, this would give you a second working system for extra confidence.

Well, you get as close as you can. If you can avoid dealing with node-locking, licence restrictions, etc., that will be fewer problems to worry about.

Reply to
David Brown

Correct me if I am wrong, but isn't that still an array of 8 char? ;)

--

Rick
Reply to
rickman

For a long time, I used 1/2 inch 9 track 1600 bpi (no need for head alignment as with 800 bpi) open reel ANSI magnetic tapes for storing source files. No file archives or compressing, just plain sequential text files. These could be readable on any mainframe or minicomputer of the time and I assumed also in the future.

Unfortunately I was wrong: in Finland, for instance, there is only a single functioning 1/2 inch tape drive left, in a computer museum, and who knows how long that will keep working.

So in reality, you need to do the copying to any mature technology about every 10 years.

Realistically, CD-ROM (and DVD/Blu-ray) file systems on physical discs would be the most likely media to be readable in 2040. How would you connect any current magnetic drive or SSD to a computer in 2040?

The question is just as relevant today, when I have 5 or 8 channel paper tapes or 1/2 inch magnetic tapes: into which holes on my modern laptop do I feed these tapes? :-)

Reply to
upsidedown

If you want hardware you will be able to buy in 2040, put your app on an 8051... the CPU that will never die... at least until they stop making microwave ovens. lol
--

Rick
Reply to
rickman

I see the smiley, but as you probably know, such chains of simulations have been used in the past, typically when a computer manufacturer (say IBM) comes out with a new architecture but wants to keep its current customers happy by running their old programs without recompilation. The most recent such case was perhaps when Apple switched from PowerPC to Intel for its PCs.

Can we expect that the machines in 2040 will be able to run today's x86 executables, or will backward compatibility break at some point, perhaps because of Moore's law coming to a stop? The first break will certainly be covered by SW to simulate the old architecture (x86) on the new (whatever it is), but that simulation SW may not be widely used for more than a few years, and will then perhaps rot.

Some professional or industrial organization could define a "long term persistent" architecture that is designed to be simple to simulate and simple for porting tools, letting performance suffer as it will. This would be "deep time" thinking in the computer domain. But perhaps the x86 architecture already plays this role, de facto.

Returning to John's suggestion and smiley, I believe it would be easier to maintain a single simulator of a very old machine, running on new machines, than a chain of simulators of a series of different machines.

--
Niklas Holsti 
Tidorum Ltd 
niklas holsti tidorum fi 
      .      @       .
Reply to
Niklas Holsti

The problem with 9 track tape is that it requires regular maintenance (e.g., "retensioning" periodically) to preserve the integrity of the data recorded thereon. Of course, unless you buy a second "dummy" transport and remove the head, that retensioning puts wear on the media and the head. No big deal if you *only* use the transport and media for archival storage and restoration -- but, if you also regularly have it in service... :<

And, *expected* life is more like 5-8 years if you start with good media and keep it stored properly (avoid heat and humidity). OTOH, I still have an original X Windows 10.4 distribution on a 7" reel that was readable as recently as last year (haven't tried it since).

The biggest killer for low density tape (I have an 800/1600/3200 transport) was how little you could store on them! E.g., less than 100MB on a 10 inch reel -- lots of space for very little data!

That's about right. OTOH, there is no reason that you *have to* discard the source medium. If push comes to shove and your "new" archive is lost/corrupt/inaccessible, you can *hope* that you may be able to recover from the predecessor.

Much "consumer" CD/DVD media is not suited to long term storage. Again, you're in the 10 year ballpark if well cared for. A bigger problem may be finding a *good*, reliable drive that will still function after that period of time.

Again, if used regularly, there is a risk of the laser diode going south. Or, the mechanism gumming up from *lack* of use. Or, one of the countless little plastic parts cracking, etc.

Don't you have a paper tape reader/punch? (I have two -- one standalone and one in the ASR-33...). You can always keep an optical reader "in a tiny box" to gain access to them.

Along with a large collection of "tape (and other media) drives". (sheesh! talk about a potpourri of "experiments in marketing"... there have got to be more media forms than anyone can count!)

[I draw the line on Hollerith cards, though...]
Reply to
Don Y

Yes; real world HW is not my problem, that's for the higher levels in the supply chain. All our testing is on a simulated target system.

But our full testing system is fairly complicated, involving a target-processor and equipment simulator, a special test language (in fact several), a queue of tests to be run, a supervisor to run them, lots of I/O log files, etc. And optionally Eclipse, although I think I will avoid that if possible.

Yep. Or in 2040 or so, which is the target for maintenance.

However, I should be frank that this discussion about VMs is only theoretical for me, at the moment. The original customer requirements asked for maintenance until 2040, but at contract time this was reduced to optional extended maintenance packages, each of limited duration. At present, our plan is to archive host PCs, purchased as late as possible, and hope that they will stay functional as long as required.

Good point!

Right, the full OS is not needed. But if we include the build system (gnatmake or gprbuild) and the testing system, these certainly use signals and IPC, and require concurrent processes in possibly different virtual memory spaces. Not so simple as the compiler.

I prefer not to. We may use the GPS IDE for convenience, and perhaps some other GNAT Pro interactive tools, and perhaps even Eclipse for the testing system, but we try to stay with tools that allow command-line usage and shell scripting. So the core development tools are not interactive.

I know. I don't think that preserving the bits and bytes of the development tools on readable media will be a problem (as long as the company survives and remembers its responsibility for this). I am worried about *interpreting* (running) those bits and bytes in the future.

I'm not sure how seriously the customers take the 2040 date. As I said, the long-term maintenance requirement was descoped from the tender stage to the contract, but it remains as the planned end-of-life date. It seems to me likely that when the moth-balled, 10-20 year-old satellites are dusted off and launched, their HW will have some glitches for which SW work-arounds may be needed.

--
Niklas Holsti 
Tidorum Ltd 
niklas holsti tidorum fi 
      .      @       .
Reply to
Niklas Holsti

I'm sure that the file of bytes can be *read*, but can it be interpreted correctly?

I'm doubtful that 2040's hardware will run 2014's Linux/x86 without an emulator like QEMU. With an emulator, yes, but the emulator must emulate a computer system with peripherals, complete enough to run the tools we need.

Are we sure that ext3 will be supported in 2040? Possibly not in the native 2040 OS, but hopefully in our emulator.

Ok. (But the "big money" is not really there, because -- as I said in another post -- the long-term maintenance requirement was descoped from tender phase to contract phase.)

--
Niklas Holsti 
Tidorum Ltd 
niklas holsti tidorum fi 
      .      @       .
Reply to
Niklas Holsti
