disable journalling?

This subthread is rife with misinformation. I'm not going to try to correct it as it is not relevant to my question, but it is pretty obvious that some people's memories have been distorted by time.

Reply to
Rob

Golly. You are right.. it was all so long ago..

--
Ineptocracy 

(in-ep-toc'-ra-cy): a system of government where the least capable to  
lead are elected by the least capable of producing, and where the  
members of society least likely to sustain themselves or succeed, are  
rewarded with goods and services paid for by the confiscated wealth of a  
diminishing number of producers.
Reply to
The Natural Philosopher

CP/M used an extent-based file system with 128-byte sectors. It did not track how much of a sector's contents was occupied, so file sizes were accounted in whole sectors only.
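
For the curious, here is a sketch of the 32-byte CP/M 2.2 directory entry as I remember it (the field names are my own labels, so treat it as a memory aid rather than gospel):

    #include <stdint.h>

    /* One 32-byte CP/M 2.2 directory entry (from memory; names mine). */
    struct cpm_dirent {
        uint8_t user;       /* user number 0-15; 0xE5 marks a deleted entry */
        uint8_t name[8];    /* file name, space padded */
        uint8_t type[3];    /* file type, space padded */
        uint8_t ex;         /* extent number (low bits) */
        uint8_t s1, s2;     /* reserved / extent number (high bits) */
        uint8_t rc;         /* count of 128-byte records in this extent */
        uint8_t al[16];     /* allocation block numbers (8-bit on small disks) */
    };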

--

Tauno Voipio
Reply to
Tauno Voipio

What misinformation?

As I said, I never knowingly saw FAT8, but another poster mentioned it and it's mentioned in the Wikipedia article on FAT filing systems. In that era I was working on 1900 mainframes and SWTPC microcomputers.

The latter used the 6800 chip and ran TSC Flex 2, which used a sector-chaining scheme on its disks. Track 0 contains the label, boot sector and directory. A directory entry contains the address of the file's first data sector as a two-byte track/sector key. Each sector in the file contains either the track/sector key of the next sector or zero, which shows it is the last sector in the file.
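
In C terms, each 256-byte FLEX sector looked something like this (a sketch from memory; the record-number bytes in particular are as I recall them, so don't quote me):

    #include <stdint.h>

    /* One 256-byte FLEX data sector (layout from memory). */
    struct flex_sector {
        uint8_t next_track;  /* forward link: track of the next sector...  */
        uint8_t next_sector; /* ...and its sector; both zero = end of file */
        uint8_t rec_hi;      /* 16-bit sector-in-file record number,       */
        uint8_t rec_lo;      /* if memory serves                           */
        uint8_t data[252];   /* file data */
    };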

Since there was no clustering used in FAT8 or FAT12, it follows that, apart from track zero (which contained the volume label, boot sector and, presumably, the file allocation tables), the data part of the disk (track 1 onwards) can't contain more than 256 addressable sectors, because a FAT8 entry has only 8 bits.
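
To put numbers on that: 256 sectors, whatever their size, makes for a tiny disk; even at 512 bytes a sector that's only 128KB.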

FAT12 pushed the sector limit up to 4096 sectors, which could easily handle a 1.2MB disk using 512-byte sectors. Since you obviously know different, kindly enlighten us.

--
martin@   | Martin Gregorie 
gregorie. | Essex, UK 
org       |
Reply to
Martin Gregorie

Well, there's the statement that you won't find wear-levelling code in any OS drivers. JFFS2, for instance, is a flash filing system for Linux with wear levelling.

For discrete flash chips, the app/filing system handles it (JFFS2 for Linux). For SD cards, some SD card controllers handle it. I can't say whether my employer's SD controller IP handles it or not, as that's commercially confidential information. If you cross my palm with sufficient cash then I can sell you some IP and the user config manuals and you can find out for yourself!

Checking the specs of some commercially available SD cards, the only mention of wear levelling is in a SanDisk document. Whether all SanDisk cards do wear levelling is something you'll have to ask SanDisk.

The thing to note is that the SD card's preferred filing system is FAT, FAT32 or exFAT, depending on the card size, and the spec describes how to format the device. Given the lack of definitive information from vendors, it's a reasonable assumption that any wear levelling an SD card implements will be tuned to make FAT more reliable. There is every possibility that the opaque, proprietary wear levelling manufacturer A implements will fight against the usage pattern of your filing system (ext2/3/4/whatever), resulting in shorter life, while a card from manufacturer B may work better.

For the OP with a Pi in a hard-to-reach location and worries about flash life, I would set up a similar Pi and SD card system and thrash the bejesus out of it to simulate 12 months' usage in a short space of time. If at the end of the test the system is still working, then you need to schedule a visit to the Pi once a year to swap the SD card for a new one. That would give you some degree of comfort whilst you consider alternatives.
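
Something as crude as this, left running, would do for the thrashing (a sketch only; the file size and pass count are plucked from the air, so tune to taste):

    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    /* Crude SD card thrasher: repeatedly write, sync and delete a file
       so the card's erase blocks see real wear. Sketch only. */
    int main(void)
    {
        static char buf[1024 * 1024];            /* 1MB of junk per write */
        memset(buf, 0xA5, sizeof buf);

        for (long pass = 0; ; pass++) {
            int fd = open("thrash.tmp", O_CREAT | O_WRONLY | O_TRUNC, 0644);
            if (fd < 0) { perror("open"); return 1; }
            for (int i = 0; i < 64; i++)         /* 64MB per pass */
                if (write(fd, buf, sizeof buf) != (ssize_t)sizeof buf) {
                    perror("write");             /* first sign of a dying card */
                    return 1;
                }
            fsync(fd);                           /* force it out to the flash */
            close(fd);
            unlink("thrash.tmp");
            if (pass % 16 == 0)
                printf("pass %ld\n", pass);
        }
    }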

I've been using CF and SD cards since 2003 for photography/phones and have never seen one fail from use. I have an 8GB SD card in an original Asus eeePC701, used as the primary disk instead of the on-board SSD. That machine has been used a couple of days per week, every week, since 2008 and the SD card (ext3) is still working fine business.

My massive personal experience of a handful of SD cards may not be representative of all use cases. But as this is Usenet, I'll argue I'm right till someone else invokes Godwin's Law and the thread dies!

Reply to
mm0fmf

Correct! As far as I can recall, the data block pointers were all stored in the directory table structure. Any files using more than 16 blocks (sectors, clusters?) just used extra directory entries known as extents, IIRC. There was no common FAT mechanism in this system.

It always struck me as an 'inelegant' method of storage but I suppose it did add a level of redundancy. I can't remember whether the directory tables were duplicated or not (which would have been an essential requirement to take full advantage of this redundancy feature, imo).

--
Regards, J B Good
Reply to
Johny B Good

Apologies for that. It was my fault for dragging the discussion 'off topic'. I guess my rant at the major OEMs' ineptitude regarding the limits of MS file systems didn't help much either.

All I wanted to do was make Martin appreciate the tiny 4K block size of the file systems we have today. Afaicr, it's always been that size for ext2/ext3 anyway. The mention of MS's FAT systems was purely to highlight that much larger block sizes were routinely in use on the vast majority of PCs. From that perspective, NTFS is a vast improvement over the earlier FAT-based systems.

--
Regards, J B Good
Reply to
Johny B Good

I'm pretty certain that SD media is just that: flash RAM chip(s) with the absolute minimum of 'packaging'. You only have to look at a tiny micro-SD card with 4GB capacity to realise that there just isn't the room for additional 'wear levelling' controller chips.

If there is any sort of wear levelling involved, it can only reside in the card reader chips, and I'm not aware whether this is true of any card reader. It might be, for all I know. I guess I could try googling this question to satisfy my own curiosity.

Undoubtedly true.

If some form of 'wear levelling' were in use, this could help; otherwise, no. The fact that the FreeNAS / NAS4Free boot images assume the worst in this regard rather suggests a lack of any such wear levelling mechanism being in routine use.

What I remember about disabling journalling, alongside the issue of reserved blocks, comes from my use of tune2fs under a Knoppix live CD when prepping ext2 disk volumes to maximise usable data space when mounted in my FreeNAS box.

Afaicr, it was just a straightforward command to enable/disable journalling. I had to unmount the disk volume to reduce the excessive amount of 'reserved space' to a more reasonable 400MB, but I don't think that was necessary just to turn the journalling on and off.
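
If memory serves, the commands were along these lines (check the man page rather than trusting my recall; /dev/sdb1 is just a stand-in device name):

    tune2fs -O ^has_journal /dev/sdb1   # drop the journal (ext3 -> ext2)
    tune2fs -O has_journal /dev/sdb1    # put it back
    tune2fs -m 1 /dev/sdb1              # cut reserved space to 1%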

--
Regards, J B Good
Reply to
Johny B Good

4K is just a default; you can override it when creating a filesystem. For a small fs mke2fs may choose 1K blocks automatically.
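For example, mke2fs -b 1024 /dev/sdb1 (a stand-in device name) forces 1K blocks regardless of what the size heuristics would pick.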
--
http://www.greenend.org.uk/rjk/
Reply to
Richard Kettlewell

ORLY?

Transistors are routinely made with 22nm dimensions. Work out how many would fit in 1mm sq of silicon and contrast that with the fact that a Z80 CPU had less than 10000 transistors in it. There's acres (hectares) of space.
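
Back of the envelope: 1mm is 10^6 nm, so a 1mm square offers roughly (10^6 / 22)^2, call it 2 x 10^9, sites at 22nm pitch, against fewer than 10^4 transistors in the Z80.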

Reply to
mm0fmf

Or you could chase part numbers found in someone's blog post about disassembling an SD card, and discover there's an entire ARM core in there.

Omitting wear levelling support would be a bizarre decision anyway given the other hoops any flash controller must jump through to provide the storage model expected by the host.
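
Once a controller maintains a logical-to-physical block map at all (which it must, to hide bad blocks and erase-before-write), wear levelling is little more than picking the least-worn free block on each rewrite. A toy sketch, with invented names and nothing like real firmware (initialise map[] to -1 before use):

    #include <stdint.h>

    #define NBLOCKS 1024          /* physical erase blocks (toy scale) */

    /* Toy flash translation layer: logical -> physical map plus
       per-block erase counts. Invented names, illustrative only. */
    struct ftl {
        int      map[NBLOCKS];    /* logical -> physical, -1 = unmapped */
        uint32_t erases[NBLOCKS]; /* erase count per physical block */
        uint8_t  in_use[NBLOCKS]; /* physical block currently mapped? */
    };

    /* Pick the least-worn free physical block. */
    static int pick_block(const struct ftl *f)
    {
        int best = -1;
        for (int p = 0; p < NBLOCKS; p++)
            if (!f->in_use[p] && (best < 0 || f->erases[p] < f->erases[best]))
                best = p;
        return best;              /* -1: no free block left */
    }

    /* Remap a logical block before rewriting it, spreading the wear. */
    int remap_for_write(struct ftl *f, int logical)
    {
        int p = pick_block(f);
        if (p < 0)
            return -1;
        int old = f->map[logical];
        if (old >= 0) {           /* retire the old copy... */
            f->in_use[old] = 0;
            f->erases[old]++;     /* ...it gets erased for reuse */
        }
        f->in_use[p] = 1;
        f->map[logical] = p;
        return p;                 /* caller writes to physical block p */
    }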

--
http://www.greenend.org.uk/rjk/
Reply to
Richard Kettlewell

Clusters. Cluster sizes range from 1k to 32k bytes.

Well, there actually is, but only in RAM. The first time CP/M needs to allocate a block on the disk, it walks the directory to build an allocation bitmap for the drive. It then uses this bitmap to find free clusters.
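
In outline, the walk is simple (a sketch; the names are mine, not DRI's, and it assumes the small-disk format with 8-bit block numbers in the last 16 bytes of each 32-byte entry):

    #include <stdint.h>
    #include <string.h>

    /* Sketch: rebuild the in-RAM allocation bitmap by walking the raw
       32-byte directory entries, as CP/M does at first allocation. */
    void build_bitmap(const uint8_t *dir, int nentries,
                      uint8_t *bitmap, int nblocks)
    {
        memset(bitmap, 0, (nblocks + 7) / 8);
        for (int i = 0; i < nentries; i++) {
            const uint8_t *e = dir + 32 * i;
            if (e[0] == 0xE5)               /* deleted entry: blocks free */
                continue;
            for (int j = 16; j < 32; j++) { /* the 16 allocation bytes */
                uint8_t b = e[j];           /* 8-bit block number */
                if (b != 0 && b < nblocks)  /* 0 = unused slot */
                    bitmap[b / 8] |= (uint8_t)(1u << (b % 8));
            }
        }
    }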

For removable media, CP/M also keeps checksums of each of the directory sectors to detect a disk change.

The one thing it never struck me as was redundant; I'm not sure why it would strike you that way. Since the allocation bitmap isn't kept on disk, it can't disagree with the information in the directory: a prime benefit of the lack of redundancy.

The directory is not duplicated. And since I have no idea why the CP/M scheme would strike you as redundant, I have no idea what you mean by taking advantage of this redundancy.

--
roger ivie 
rivie@ridgenet.net
Reply to
Roger Ivie

It's simple: each directory entry has its own list of data pointers to each block address used by the file data. There's no dependency on a separate FAT like there is with FAT-based FSs.

If both FAT tables get zeroed out, the only info each directory entry retains is the first cluster address, which is also the entry point into the now-empty FAT. The actual cluster addresses used beyond that first one can only be guessed at from the file size data and an assumption that the file was unfragmented.

Of course, if the directory database was erased or corrupted in the CP/M FS you'd lose access to the file data, but this would also be true with a FAT-based system. The FATs represent an additional point of failure, which is one of the reasons why they're normally duplicated.

Using FATs simplified the file directory table compared to the messy CP/M setup. BTW, I'm surprised that the directory table wasn't duplicated in the CP/M FS, if only to guard against power-outage-induced corruption.

--
Regards, J B Good
Reply to
Johny B Good

I appreciate that the 4K block size was a default that could be changed to suit file-size usage requirements. The same applies to FAT and NTFS as well. It's just that the default in FAT16 and FAT32 would automatically increase with disk volume size to hold the FATs to a maximum size limit (with FAT32 this was 8MB per FAT, afair, until it hit the 32KB cluster-size limit, at which point the FATs would be allowed to grow beyond the 8MB limit). NTFS can stick with its default 4KB cluster size indefinitely, afaik.
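
For scale: FAT32 entries are 4 bytes each, so an 8MB FAT holds 2^21 (about two million) cluster entries; at 4KB per cluster that covers an 8GB volume before the cluster size has to grow.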

--
Regards, J B Good
Reply to
Johny B Good

Bloody Hell, well I never! I'm obviously a little bit out of touch (I did say I was going to google to satisfy my curiosity).

There's no point in asking whether it was a full-size SD card or a micro SD, since if this is true of the full-size card, the micro SD is obliged to follow suit. It's getting harder and harder to appreciate just how much processing power can be squeezed into a one millimetre square of silicon these days.

I was pretty certain the early flash memory cards didn't possess such luxuries as 'wear levelling' (and quite possibly this was true of the earliest cards). Now I'm not so sure.

My assumption was that any wear levelling would be implemented by the card reader controller chips to save sacrificing any silicon 'real estate' on the media itself to controller functions at the expense of memory capacity.

I guess I was overlooking the need to incorporate an interface controller just to get the contact count on the card down to a practical level. From there I suppose it's only a very small stretch to include wear levelling in the controller.

That just leaves me wondering about boot images designed for flash boot media that have an overriding obsession with minimising write activity at all costs. The implication is that this was regarded as essential with the earlier flash memory products, the present obsession now being merely a historical artifact.

I guess I should start googling into this before offering any more advice on the subject. :-(

--
Regards, J B Good
Reply to
Johny B Good

There is what appears at first sight to be quite a good, though disorganised, description of flash memory technology here:

formatting link

From reading that, it's obvious that the main parts of a flash memory device require different assembly lines: the controller is a standard IC chip built up on single-crystal silicon, but while the memory cells may be on a single-crystal substrate, the actual 'memory bit' in each cell is polycrystalline silicon wrapped in various insulating layers of silicon dioxide and silicon nitride. Polycrystalline silicon is not used in standard IC chips.

I suspect that, since the process of making the controller IC is different from that used to make the memory cells, they are two separate chips that follow different process paths until they are assembled onto a carrier and packaged. But, what do I know.

--
martin@   | Martin Gregorie 
gregorie. | Essex, UK 
org       |
Reply to
Martin Gregorie

On Sun, 09 Mar 2014 04:26:22 +0000, Johny B Good declaimed the following:

Early Compact Flash may not have; the interface was a close match to parallel ATA hard drives, and may have expected the host to run tests for bad-block remapping.

SD (and many others) have devolved to a serial data stream interface, which means a more complex protocol just to transfer the data.

And, perversely, Class 10 cards are often NOT the best fit for a randomly accessed file system. Class 2/4/6 cards are rated for writing to a fragmented file system, while Class 10 is rated for writing to an unfragmented file system (essentially, streaming continuous video to a freshly formatted card; vs writing photos wherein the user may have deleted some shots).

--
	Wulfraed                 Dennis Lee Bieber         AF6VN 
    wlfraed@ix.netcom.com    HTTP://wlfraed.home.netcom.com/
Reply to
Dennis Lee Bieber

'Extents' was what the extra directory entries were called. This rather kludgy mechanism meant that deleting a file required removing all the matching directory entries, which in turn meant that if someone put a file called ????????.??? on the disc, it couldn't be deleted without deleting every file on the disc. It was this kludge that made me decide to write CP/N instead of porting CP/M for the Torch.

--
Steve O'Hara-Smith                          |   Directable Mirror Arrays 
C:>WIN                                      | A better way to focus the sun 
The computer obeys and wins.                |    licences available see 
You lose and Bill collects.                 |    http://www.sohara.org/
Reply to
Ahem A Rivet's Shot

I did once see a file called *.* on a system. I can't remember now whether it was an early version of DOS or of VMS. The trick was to rename it, with confirmation, before deleting the renamed version.

--
Alan Adams, from Northamptonshire 
alan@adamshome.org.uk 
http://www.nckc.org.uk/
Reply to
Alan Adams
