1000-year data storage for an autonomous robotic facility

If we are using the evolutionary model, several sites with different technologies must be used.

Some of these sites are successful, some are not, but of course we do not know in advance which system will survive and which will fail.

Reply to
upsidedown

Yeah, that should be doable.

Those can certainly be designed to last 1000 years.

Yeah, I don't see any need for overhead crane beams.

If you can get the electronics that drives everything to last 1000 years by replacement of what fails, the mechanical stuff they need to move parts around should be easy enough.

Obviously with multiple devices that move parts around, so when one fails you just stop using that one etc.

It doesn't actually. The approach the Egyptians took lasted fine, even when the locals chose to strip off the best of the decoration to use in their houses etc.

Of course it's unlikely that you could actually afford something that big and hard to destroy.

That conflicts with your other proposal of a tomb-like thing in the Australian desert. It's going to be hard to stop those involved in checking that it's working from telling anyone about it.

There is going to be one hell of a temptation for one of them to spill the beans to 60 Minutes etc.

At which time you have just blown your disguise as a tomb.

It's more likely to just attract vandals who watch the video.

Or they might just point and laugh instead.

Reply to
Rod Speed

Oddly, I've got some radios from the 1920s that are still working fine (one Atwater Kent had the pot metal tuning mechanism disintegrate, but if you tuned each capacitor by hand it still worked fine). But radios of essentially the same technology from the 30s and 40s are all dead. Parts like electrolytic capacitors do not have long life. The "improvement" of tubes with cathode coatings also limited their useful life. Today, since short-lifetime parts are just too convenient to ignore, nobody builds for any extended life. Electronic lifetimes just keep getting shorter and shorter.

Some years ago I started a project for an electronic grandfather "superclock". The idea was not simply to build an accurate clock, but to build one that would still be running just as accurately several hundred years from now. (Same idea as a mechanical grandfather clock... ever notice the similarity of a tall grandfather clock to a relay rack... get the picture?)

But I soon discovered that building electronics with a several-hundred-year life is not so simple. Making sure all your capacitors are of materials that don't degrade, that active parts have a decent lifetime, and all the rest takes some careful consideration, even if the electronics ends up shielded in air-tight containers. Sure, you can pick out things like ceramic and glass capacitors and other items that will work for hundreds of years, but using ONLY those items to build a complex device takes some serious design thought.

Reply to
benj

I'm thinking there may be a different way to do this. The basic problem is that an electronic system can currently be built to last about 50 years before MTBF says problems will begin. With redundancy and spares, this might be extended to 100 years. The building will last somewhat longer, but probably no more than 100 years before maintenance problems arrive.

Rather than replace all the individual components, I suggest you consider replacing the entire building and all the machinery every 50-100 years. Instead of one building, you build two buildings, in alternation. When the first facility approaches the end of its designed life, construction on a 2nd facility begins adjacent to the first. It would be an all-new design, building upon the lessons learned from its predecessor, but also taking advantage of any technological progress from the previous 100 years. Instead of perpetually cloning obsolete technology, this method allows you to benefit from progress. When the new facility is finished, the severed heads are moved from the old facility to the new. The old equipment can then be scrapped, and the building torn down to await the next reconstruction in 100 years.

Note: The 100-year interval is arbitrary, my guess(tm), and probably wrong. The MTBF may also increase with technical progress over time.
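Back-of-the-envelope, the redundancy arithmetic looks something like this. A Python sketch with made-up failure rates and a constant-hazard exponential model, which is itself a questionable assumption for wear-out failures, so treat the numbers as illustrative only:

import math

# Toy model: constant hazard rate (exponential life). Real electronics
# wear out rather than failing memorylessly, so this is only a sketch.
MTBF_YEARS = 50          # assumed MTBF of one complete facility
INTERVAL_YEARS = 100     # rebuild-everything interval from the post
INTERVALS = 10           # 10 x 100 years = 1000 years

def p_unit_survives(years, mtbf=MTBF_YEARS):
    """Probability a single facility is still working after `years`."""
    return math.exp(-years / mtbf)

def p_interval_ok(redundant_units):
    """Probability at least one of the redundant facilities lasts the interval."""
    p_one_fails = 1.0 - p_unit_survives(INTERVAL_YEARS)
    return 1.0 - p_one_fails ** redundant_units

for n in (1, 2, 3, 4):
    per_interval = p_interval_ok(n)
    over_1000y = per_interval ** INTERVALS
    print(f"{n} parallel facilities: {per_interval:.3f} per century, "
          f"{over_1000y:.3f} over 1000 years")

Even with four parallel facilities the 1000-year survival probability is tiny under these assumptions, which is the argument for rebuilding every interval rather than relying on redundancy alone.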

It's called a finite state machine. Every state, including failure modes, must have a clearly defined output state, which in this case defines the appropriate action. These are very efficient and quite reliable, but require that all possible states be considered. That's not easy. A friend previously did medical electronics and used finite state machines. Every possible combination of front panel control and input was considered before the machine's servo would move. Well, that was the plan, but some clueless operator, who couldn't be bothered to read the instructions, found a sequence of front panel button pushing that put the machine into an undefined and out-of-control state. You'll have the same problem. Some unlikely combination of inputs, supposedly impossible even under worst-case operating conditions, will happen and ruin everything. I've seen state diagrams and tables for fairly simple machines cover a wall of an office.
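For what it's worth, a minimal sketch of the idea in Python, with hypothetical states and inputs. The important part is the explicit catch-all, so that any combination nobody thought of lands in a fault state instead of doing something undefined:

# Minimal FSM sketch: every (state, input) pair maps to an explicit
# next state and action; anything not in the table drops to FAULT
# instead of doing something undefined.
TRANSITIONS = {
    ("IDLE",   "start_button"): ("MOVING", "enable_servo"),
    ("MOVING", "limit_switch"): ("IDLE",   "disable_servo"),
    ("MOVING", "estop"):        ("FAULT",  "cut_power"),
    ("IDLE",   "estop"):        ("FAULT",  "cut_power"),
    ("FAULT",  "reset_button"): ("IDLE",   "disable_servo"),
}

def step(state, event):
    # Undefined combinations are treated as faults rather than ignored,
    # which is exactly what the clueless-operator story argues for.
    return TRANSITIONS.get((state, event), ("FAULT", "cut_power"))

state = "IDLE"
for event in ["start_button", "reset_button", "limit_switch"]:
    state, action = step(state, event)
    print(event, "->", state, action)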

Maybe. I have some good stories of alarms going awry. The short version is that too few and too many alarms are both a problem. Too few, and there's not enough warning to be able to prevent a failure. Too many, and the humans that are expected to fix the problem treat it as "normal" and, over a period of time, ignore the alarm (Chicken Little effect). I've seen it happen. I did some process control work at a local cannery. Plenty of sensors and alarms everywhere. Because maintenance was overextended, the sensors were constantly getting clogged with food residue. Rather than keep them clean, someone simply increased the sensor sensitivity so that it would work through the encrusted layers. The result was constant false alarms, as the overly sensitive sensors failed to distinguish between a line stoppage and another layer of filth. The false alarms were far worse when the sensors were cleaned, which served as a good excuse to never clean them. I managed to fix the problem just before the cannery closed and moved to Mexico. Hint: building and planning alarm systems is not easy.
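One common band-aid is hysteresis plus a persistence timer, so a single noisy reading doesn't trip anything and the alarm doesn't chatter around a single threshold. A small Python sketch with invented thresholds and sample counts:

# Sketch of alarm hysteresis plus a persistence timer: the alarm only
# trips if the reading stays past the trip threshold for a while, and
# only clears once it drops back below a separate, lower threshold.
TRIP_LEVEL = 80.0    # alarm when the reading stays above this
CLEAR_LEVEL = 60.0   # alarm clears only below this (hysteresis band)
PERSIST_SAMPLES = 3  # reading must be high this many samples in a row

def run_alarm(readings):
    alarmed, high_count = False, 0
    for r in readings:
        if r > TRIP_LEVEL:
            high_count += 1
        elif r < CLEAR_LEVEL:
            high_count = 0
            alarmed = False
        else:
            high_count = 0   # dead band: neither trips nor clears
        if high_count >= PERSIST_SAMPLES:
            alarmed = True
        yield alarmed

print(list(run_alarm([70, 85, 90, 82, 88, 65, 55, 50])))

Of course none of this helps if the sensors themselves are caked in food residue; it only keeps a healthy sensor from crying wolf.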

I suspect that you are not involved in running a volunteer organization. In terms of reliability, volunteers can be anything from totally wonderful to an absolute disaster. Because the usual "carrot and stick" financial incentives are lacking with volunteers, there's very little you can do to motivate or control them. If you demand that they do something they consider personally repulsive, they'll just walk away. Please talk with someone who runs a volunteer organization for additional clues.

--
Jeff Liebermann     jeffl@cruzio.com 
150 Felker St #D    http://www.LearnByDestroying.com 
Santa Cruz CA 95060 http://802.11junk.com 
Skype: JeffLiebermann     AE6KS    831-336-2558
Reply to
Jeff Liebermann

That's just plain wrong when it's designed to last 1000 years in the first place without any maintenance.

That's much harder to achieve with an autonomous system with no humans involved.

Impossible with an autonomous system with no humans involved.

But that does necessarily involve keeping humans involved in doing that for 1000 years, just to keep your head. Good luck with that.

And how do you propose to recruit a new crew of humans the next time you need to replace everything except the heads?

Not if there are no humans involved.

Reply to
Rod Speed

Re 1000 year data storage: Could Intel or some other company use modern equipment but old design rules to make the integrated circuits have a much longer expected lifetime?

It seems possible that if the dimensions of the devices were made larger, things would last longer.

I know that making flash memory cells just a few times larger and using only single-level cells increases the number of reliable write/erase cycles hundreds of times (from about 1,000 to hundreds of thousands), while at the same time raising the data decay time from about a year to about 10 years. Refreshing every year would only require thousands of write cycles, well within the hundreds of thousands possible.

I think the functions besides memory storage last a couple of tens of years now, but I don't know whether making things a few times larger and tuning the manufacturing process would get to 1000 years. (For example, I don't know if the memory cells themselves would last 1000 years, but data decay would not be a problem, since only hundreds of rewrites per cell would be needed for refresh and hundreds of thousands are possible. Actually, millions of rewrite cycles are likely to be possible.)
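A quick sanity check of that refresh arithmetic, using the rough numbers above rather than any datasheet values:

# Back-of-the-envelope check of the refresh arithmetic in the post.
MISSION_YEARS = 1000
REFRESH_INTERVAL_YEARS = 1  # rewrite everything once a year for margin
ENDURANCE_CYCLES = 100_000  # assumed write/erase endurance per cell

refreshes_needed = MISSION_YEARS // REFRESH_INTERVAL_YEARS
print(f"Refreshes over {MISSION_YEARS} years: {refreshes_needed}")
print(f"Fraction of endurance used: {refreshes_needed / ENDURANCE_CYCLES:.1%}")
# -> about 1000 rewrites, roughly 1% of a 100k-cycle endurance budget,
#    so cycle wear is not the limiting factor; retention and power are.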

Changing the designed circuit speed, the actual clock rate, and operating voltage can also improve expected lifetime.

A long-term power source would still be an issue unless things can be made not to need refresh. I don't know how things scale, so I used the numbers for actual products to get back to a 10-year decay time. I don't know whether you would have to make things logarithmically or linearly bigger, or in one, two, or three dimensions, or whether making things much bigger than the old stuff would increase the expected lifetime at all.
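For the temperature side of it at least, the usual first-order guess is an Arrhenius acceleration factor. A quick Python sketch with an assumed activation energy of 0.7 eV; the real value depends on the failure mechanism, so this is illustrative only:

import math

K_BOLTZMANN_EV = 8.617e-5   # Boltzmann constant in eV/K
EA_EV = 0.7                 # assumed activation energy (mechanism-dependent)

def arrhenius_factor(t_hot_c, t_cold_c, ea=EA_EV):
    """How much longer a part lasts at t_cold_c versus t_hot_c (Arrhenius model)."""
    t_hot = t_hot_c + 273.15
    t_cold = t_cold_c + 273.15
    return math.exp((ea / K_BOLTZMANN_EV) * (1.0 / t_cold - 1.0 / t_hot))

# e.g. running at 25 C instead of 65 C, or at a cryo-ish -60 C instead of 25 C
print(arrhenius_factor(65, 25))   # roughly 25x life extension
print(arrhenius_factor(25, -60))  # enormous on paper; other mechanisms take over

Voltage derating has its own acceleration models, and at some point mechanisms the model doesn't cover (packaging, solder, diffusion) dominate, so the big cold-temperature numbers should not be taken literally.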

Reply to
Mark F

Mark F wrote

Yes, but how much longer is less clear.

And particularly if the design was to minimise diffusion somehow.

I guess that since it's a cryo facility, one obvious way to get a longer life is to run the ICs at that very low temp too etc.

You'd be better off with some form of ROM instead, life-wise.

Much longer than that with core.

Like I said, ROM is more viable for very long term storage.

Yes, that's the big advantage of ROM and core.

Reply to
Rod Speed

I've seen both extremes. On the short side, I have a fair collection of older Buffalo LinkStation HD-H250LAN NAS boxes, which use a 40x40mm, 12V 60mA fan. The fans last about 9 to 12 months. I stock replacements. Nothing wrong with the NAS board or hard disk drive. They just kill fans. I've tried 3 different brands and 3 different types (bushing, bearing, maglev), and they all die. A thermistor probe and about an hour of tinkering found the problem. The air flow is too slow for the fan to cool itself. The operating temperature was hot enough to soften the thin plastic (about 65C). There was also a slight torque on the softened plastic frame, which eventually deformed the frame sufficiently to cause the ends of the blades to drag on something. This slowed the fan even more, which caused additional heat rise. There's a fairly large tangle of wires and connectors partially blocking the air flow. However, the grand prize for lousy thermal design goes to the package designer, who sized the air intake ports much smaller than the exhaust port and located them so that the air flow bypasses the devices that produce the heat.

At the other extreme, I have several servers that have been operating 24x7 for many years. My SCO ODT 3.2v4.2 486DX266 server has been running essentially unchanged since about 1991. I did have a power supply failure in about 1996, but the fan continues to operate. I can't say the same for the CPU fan, which also died in 1996 and was replaced with a bigger heat sink and no fan. The only thing unique about the arrangement is that about every 3-4 years I drag them outside and blow out the accumulated dust. The fans are nothing special and probably all cheap bushing types.

I sometimes sleep in the office, so no illuminated fans for me.

My previous rant on the ATX thermal design (from 2004):

There's not much wrong with today's fans. What's wrong is that they are sometimes designed into inappropriate packages, with too high a heat buildup, obstructed air flow, or clogging dirt and dust causing drag. If properly sized, packaged, and cleaned, they'll last forever.
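For what it's worth, the first-order airflow sizing is just the heat capacity of air. A small Python sketch using the common rule of thumb CFM ~ 1.76 x watts / degC of allowed air temperature rise (sea-level air, made-up load numbers):

# First-order airflow sizing: how much air is needed to carry away a
# given heat load at an acceptable temperature rise. Illustrative only.
def cfm_required(watts, delta_t_c):
    """Rule-of-thumb airflow (CFM) for a heat load and allowed air temp rise."""
    return 1.76 * watts / delta_t_c

# A small NAS dissipating ~15 W, allowing a 10 C rise through the box:
print(f"{cfm_required(15, 10):.1f} CFM")   # ~2.6 CFM: tiny, IF the air actually
                                           # flows over the hot parts and the
                                           # intakes aren't smaller than the exhaust

The arithmetic says almost any fan is enough; the failures come from the packaging details the formula ignores.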

--
Jeff Liebermann     jeffl@cruzio.com 
150 Felker St #D    http://www.LearnByDestroying.com 
Santa Cruz CA 95060 http://802.11junk.com 
Skype: JeffLiebermann     AE6KS    831-336-2558
Reply to
Jeff Liebermann


The OP said nothing about humans (the *robots* use the software during the 1000 yrs), or why the facility needed to be autonomous for 1000 yrs.

If the facility's tech can be modified per outside developments, does it still qualify as autonomous?

Did you keep the machinery to read them, too?

Mark L. Fergerson

Reply to
alien8752

He did however imply that there would be humans around in the future to thaw him out and upload the contents of his head.

He wasn't proposing that his robots do that.

He did say that later; essentially he believes that's the most likely way to ensure that his frozen head will still be around in 1000 years for the humans that have worked out how to upload the contents.

Yes, if it can operate by itself.

You don't need to if you have multiple generations; you only need to keep the machinery for the latest generation.

Reply to
Rod Speed
