Question:

I have an old PE2950III with 6 drives on RAID 6 behind a PERC6/i, which I am using as a Proxmox VM host. On a recent reboot, fsck was forced, and it came up with a bunch of errors like the ones shown here:

I've done a bit of research, and everywhere I look, it seems to suggest that the drive may be failing and should be replaced. However, omreport from Dell doesn't suggest any of the physical drives may be failing, and is reporting the virtual disk is in good health, too.

I'm using omreport, which is part of OpenManage from Dell; it doesn't report anything bad. I've taken advice and checked against megaclisas-status, which also doesn't report anything bad. I've set megacli -t long -vv to run, and am now checking megacli -vv -s every now and then. So far so good as well, so I really don't know what else to check. Are there any other tools available that I can use to check the physical drives' health status, to try to track down the faulty drive to replace?

Answer:

It seems that a common problem with PERC6/i controllers is that they allow too many disk errors before marking a disk as failed and taking it offline. This can lead to instability and crashes (somewhat defeating the purpose of some RAID configurations). I had this problem on a PE2950, and megactl was very helpful in monitoring the situation. But don't be misled by the spurious errors generated within the OS.

You probably have a bad disk that needs to be replaced ASAP. Make sure you run megasasrpt on every boot to create the required node in /dev/, then use megasasctl -vv in a cron job to keep an eye on disk errors and act accordingly; megasasctl -vv will also help identify the disk by slot, model, and serial number. In my case, hot-swapping an unformatted and uninitialized replacement drive in a RAID 0 array went without a hitch while the server was running. Hopefully, it's as easy for you with RAID 6. I never did figure out how to use smartctl on individual disks in an array, but post-removal inspection didn't turn up any flags on the drive, so it probably wouldn't have helped.

So it looks like the PERC6/i is based on an LSI chipset, which means megacli should work on it. megacli and megaclisas-status should give you what you need.
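Since the goal is to track down which physical drive is throwing errors, one common approach is to list the per-drive error counters the controller keeps and flag any that are non-zero. Here is a minimal sketch: the `check_pd_errors` helper and the sample output below are hypothetical, and the field names (`Slot Number`, `Media Error Count`, and so on) follow the usual layout of `MegaCli -PDList -aALL` output, so verify them against what your controller actually prints.

```shell
# Sketch: scan `MegaCli -PDList -aALL`-style output for drives with non-zero
# error counters. check_pd_errors is a hypothetical helper, not a real tool.
check_pd_errors() {
  # Reads PDList-format text on stdin; prints the slot number and counter
  # line for every media/other/predictive-failure counter above zero.
  awk '
    /^Slot Number:/ { slot = $3 }
    /^(Media|Other) Error Count:|^Predictive Failure Count:/ {
      if ($NF + 0 > 0) printf "slot %s: %s\n", slot, $0
    }
  '
}

# On a real system you would pipe the controller output in, for example:
#   megacli -PDList -aALL | check_pd_errors
# Here we feed a hypothetical sample instead:
check_pd_errors <<'EOF'
Slot Number: 0
Media Error Count: 0
Other Error Count: 0
Predictive Failure Count: 0
Slot Number: 3
Media Error Count: 187
Other Error Count: 2
Predictive Failure Count: 1
EOF
```

As for the smartctl question: on many LSI-based controllers, smartctl can address individual disks behind the RAID with `smartctl -a -d megaraid,N /dev/sda`, where N is the drive's device ID, though support varies by controller and driver.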
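The cron job suggested in the answer could look something like the fragment below. The schedule, file path, and binary location are assumptions, and the `megasasctl -vv` invocation is taken from the answer above rather than verified here; cron will mail any output to the MAILTO address, so filtering for error-like lines means you only get mail when something looks wrong.

```shell
# Hypothetical /etc/cron.d/raid-check (assumed path and schedule): every
# 30 minutes, report any megasasctl output lines that mention errors or
# failures. Cron mails non-empty output to MAILTO.
MAILTO=root
*/30 * * * * root /usr/sbin/megasasctl -vv 2>&1 | grep -iE 'error|fail'
```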