Corrupted SD Card

One of the preloaded SD cards I bought (Win10, yet!) failed during first boot. It produced an endless error loop that triggered repeated reboot attempts, and since the keyboard was locked out at that point, there was no way to bail out.

I also have several R/C plane TXs that use a low-capacity SD card. So far, half (2 of 4) of the cards supplied have had some sort of problem. The difficulties arose when a mandatory software update file was loaded onto the cards and the TXs could not read it properly.

It also seems that even some of the name-brand cards can have difficulties when they are pushed close to their rated speeds.

Reply to
Charlie

Thanks for that. The card checks out fine under h2testw, but has failed 4 times: twice with cloned images of another good card, and once each with fresh images of Raspbian Jessie and Ubuntu MATE. The latter two would boot until you expanded the filing system, then fail during the boot sequence, as with the full cloned images.

---druck

Reply to
druck

On 10/04/2016 20:25, druck wrote: []

If a card works with h2testw, I'm very surprised that it fails elsewhere. It makes me wonder whether one set of hardware is OK and the other not...

--
Cheers, 
David 
Web: http://www.satsignal.eu
Reply to
David Taylor

.. and .. No chance the SD card is too big? IIRC there is a 32 GB limit for some devices.

--
Cheers, 
David 
Web: http://www.satsignal.eu
Reply to
David Taylor

+1

The clones could be bad, but that doesn't explain why the fresh images (checksum verified?) also fail. It could be the hardware used to write the clones/images to the card, or the utility used for that. Swap out the card reader, use a different utility?
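
For what it's worth, one way to rule the write path in or out is to hash the image and then read the same number of bytes straight back off the raw card and compare. A quick Python sketch of the idea (untested here; the image path and /dev/sdX are placeholders, and reading the device needs root):

import hashlib
import os
import sys

def sha256_of(path, length=None, chunk=1024 * 1024):
    # Stream a SHA-256 over a file or raw device, stopping after
    # `length` bytes -- the card is usually bigger than the image.
    h = hashlib.sha256()
    remaining = length
    with open(path, "rb") as f:
        while remaining is None or remaining > 0:
            n = chunk if remaining is None else min(chunk, remaining)
            data = f.read(n)
            if not data:
                break
            h.update(data)
            if remaining is not None:
                remaining -= len(data)
    return h.hexdigest()

if __name__ == "__main__":
    image, device = sys.argv[1], sys.argv[2]  # e.g. jessie.img /dev/sdX
    size = os.path.getsize(image)
    match = sha256_of(image) == sha256_of(device, length=size)
    print("match" if match else "MISMATCH - bad write or bad read path")

Eject and re-insert the card before the read-back so it isn't served from the OS cache. If that reports a match but the card still fails in the Pi, the write path is probably in the clear.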

--
Cheers 
Dave.
Reply to
Dave Liquorice

Well, the Raspberry Pi 2 and Raspberry Pi 3 in which they failed are working perfectly with other cards. So strange.

---druck

Reply to
druck

It's only 16GB.

---druck

Reply to
druck

Used both the built-in card reader on the laptop and a USB3 external reader (on a USB2 port). Used both Win32DiskImager under Windows and dd from a Linux VM. So I think I've covered most of the bases on that.

---druck

Reply to
druck

Yes, strange. I hope you find the cause eventually.

--
Cheers, 
David 
Web: http://www.satsignal.eu
Reply to
David Taylor

Looks like it. Was each physical reader/writer used with each utility?

Does the h2testw utility read/write/verify every single cell on the card? That would need that level of access, i.e. bypassing the glue in the card that makes it look like a disk.
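
As I understand it, h2testw works at the file level: it fills the card's free space with files of deterministic pseudorandom data, then reads them back and compares. That exercises every externally visible block, but nothing behind the wear-leveller. A toy Python sketch of the same fill-and-verify idea (path and block count are placeholders):

import hashlib
import os
import sys

BLOCK = 1024 * 1024  # 1 MiB test blocks

def pattern(i):
    # Deterministic pseudorandom data per block, so nothing needs storing:
    # 32-byte digest repeated 32768 times = 1 MiB.
    return hashlib.sha256(i.to_bytes(8, "little")).digest() * (BLOCK // 32)

def fill_and_verify(path, blocks):
    with open(path, "w+b") as f:
        for i in range(blocks):              # write phase
            f.write(pattern(i))
        f.flush()
        os.fsync(f.fileno())                 # push the data towards the card
        f.seek(0)
        for i in range(blocks):              # read-back phase
            if f.read(BLOCK) != pattern(i):
                print("block %d corrupt" % i)
                return False
    return True

if __name__ == "__main__":
    # e.g. a scratch file on the mounted card, and a block count
    print(fill_and_verify(sys.argv[1], int(sys.argv[2])))

The catch is making sure the read-back really comes off the card rather than the OS page cache; in practice you'd need O_DIRECT, or to unmount and remount the card between the write and verify phases, which is effectively what h2testw's separate verify pass gives you.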

--
Cheers 
Dave.
Reply to
Dave Liquorice

This "glue" is running on the host computer not the card.

Reply to
mm0fmf

That is incorrect.

Reply to
Rob

I think Dave is referring to the wear-levelling code, which maintains the mapping between externally visible flash blocks and the permanent IDs assigned to physical flash blocks. Wear levelling is managed by altering this mapping and copying block content between physical blocks as necessary. You'd need to disable wear-levelling to do the sort of exhaustive cell-by-cell checks on all flash blocks that he's talking about.

Presumably the 'glue' you are referring to is OS-specific and, in the Linux/UNIX case, maps logical block numbers to HDD side/track/sector references when an HDD is being used. This mapping process must be different for flash because: (a) the logical blocks used by Linux are much smaller than the externally visible flash blocks (IIRC a common flash block size is 4K while Linux typically uses 512-byte blocks, so 8 LBNs must be mapped onto each flash block), and (b) it can only access the blocks that the on-card wear-leveller makes externally visible.
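
To make that concrete, here's a toy model of the kind of remapping I mean -- purely illustrative Python, not how any real card firmware works; the numbers just reproduce the 8-LBNs-per-4K-block arithmetic above:

FLASH_BLOCK = 4096          # externally visible flash block size
SECTOR = 512                # Linux logical block size
LBNS_PER_BLOCK = FLASH_BLOCK // SECTOR   # = 8, as above

class ToyWearLeveller:
    def __init__(self, visible, spare):
        # More physical blocks than visible ones; the spares are how
        # the card can hide failing blocks without shrinking.
        self.physical = [bytes(FLASH_BLOCK)] * (visible + spare)
        self.erases = [0] * (visible + spare)
        self.map = list(range(visible))          # visible -> physical
        self.free = list(range(visible, visible + spare))

    def write_block(self, visible, data):
        # A write goes to the least-worn free physical block, then the
        # mapping is updated and the old block joins the free pool.
        assert len(data) == FLASH_BLOCK
        new = min(self.free, key=lambda p: self.erases[p])
        self.free.remove(new)
        self.physical[new] = data
        self.erases[new] += 1
        self.free.append(self.map[visible])
        self.map[visible] = new

    def read_sector(self, lbn):
        # 8 consecutive LBNs land in the same visible flash block.
        visible, offset = divmod(lbn, LBNS_PER_BLOCK)
        block = self.physical[self.map[visible]]
        return block[offset * SECTOR:(offset + 1) * SECTOR]

The point being that the host only ever sees the map side, so an exhaustive test run from the host can't tell which physical block it actually touched.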

--
martin@   | Martin Gregorie 
gregorie. | Essex, UK 
org       |
Reply to
Martin Gregorie

OT... does this mean that the number of externally visible flash blocks - or the storage size of the device - may change over time?

fruit

Reply to
fruit

No, it means that the internal number of flash blocks can be larger than the externally visible number.

Reply to
Rob

On Tue, 12 Apr 2016 11:29:49 +0100, fruit declaimed the following:

This may be of interest... especially the part about the number of open allocation units, as it applies to non-FAT file systems (or really random FAT output too).

[link]

Under a Linux file system, cards that support only 1-4 open allocation units are likely to wear out faster and perform more slowly (even if class 10 -- those are acknowledged to be optimized for single-file streaming, i.e. video) than cards that support lots of open units.
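
A crude way to see the effect on a given card is to compare sequential against scattered small synchronous writes. A rough Python sketch (the scratch-file path, sizes, and the O_SYNC usage are my assumptions -- nowhere near a proper benchmark):

import os
import random
import time

def time_4k_writes(path, count=256, size=4096, sequential=True):
    # Time `count` 4 KiB synchronous writes to a scratch file on the
    # card, either back-to-back or scattered across a wider span.
    span = count * size * 16
    with open(path, "wb") as f:          # pre-size the scratch file
        f.truncate(span)
    fd = os.open(path, os.O_WRONLY | getattr(os, "O_SYNC", 0))
    offsets = (range(0, count * size, size) if sequential
               else random.sample(range(0, span, size), count))
    data = os.urandom(size)
    t0 = time.perf_counter()
    for off in offsets:
        os.pwrite(fd, data, off)
    os.close(fd)
    return count * size / (time.perf_counter() - t0) / 1e6  # MB/s

if __name__ == "__main__":
    import sys
    print("sequential %.2f MB/s" % time_4k_writes(sys.argv[1], sequential=True))
    print("scattered  %.2f MB/s" % time_4k_writes(sys.argv[1], sequential=False))

On a card optimized for streaming you'd expect the scattered figure to fall off much harder than on one that handles many open allocation units.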

--
	Wulfraed                 Dennis Lee Bieber         AF6VN 
    wlfraed@ix.netcom.com    HTTP://wlfraed.home.netcom.com/
Reply to
Dennis Lee Bieber

Exactly so. AFAIK, as physical blocks fail they'll get removed from the physical block list, but that shouldn't affect the number of visible blocks as long as there are more usable physical blocks than visible blocks.

--
martin@   | Martin Gregorie 
gregorie. | Essex, UK 
org       |
Reply to
Martin Gregorie

Depends on what the OP means by disk. When most people refer to a disk, they mean a thing which supports files and folders. That is a function of the filing system, which runs on the host.

Reply to
mm0fmf

In this case (when trying to test if a card is OK) it indicates the layer that translates from a disk block number to a flash block on the card. This is on the card, not on the Pi.

I've never heard of people meaning "filesystem" when they say "disk", at least not in the Linux world.

Reply to
Rob

Thanks both

fruit

Reply to
fruit
