MD raid5 recovery
wergor 09.10.2018 - 22:28
wergor
connoisseur de mimi
Does anyone still remember this thread? https://www.overclockers.at/storage...as-jetzt_240782 This time one of the other drives got hit. smartctl once again shows no errors:
smartctl 6.6 2016-05-31 r4324 [x86_64-linux-4.15.0-36-generic] (local build)
Copyright (C) 2002-16, Bruce Allen, Christian Franke, www.smartmontools.org
=== START OF INFORMATION SECTION ===
Model Family: Western Digital Red
Device Model: WDC WD30EFRX-68AX9N0
Serial Number: WD-WCC1T1263163
LU WWN Device Id: 5 0014ee 2b3588b96
Firmware Version: 80.00A80
User Capacity: 3,000,592,982,016 bytes [3.00 TB]
Sector Sizes: 512 bytes logical, 4096 bytes physical
Device is: In smartctl database [for details use: -P show]
ATA Version is: ACS-2 (minor revision not indicated)
SATA Version is: SATA 3.0, 6.0 Gb/s (current: 6.0 Gb/s)
Local Time is: Tue Oct 9 22:10:53 2018 CEST
SMART support is: Available - device has SMART capability.
SMART support is: Enabled
=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED
General SMART Values:
Offline data collection status: (0x84) Offline data collection activity
was suspended by an interrupting command from host.
Auto Offline Data Collection: Enabled.
Self-test execution status: ( 0) The previous self-test routine completed
without error or no self-test has ever
been run.
Total time to complete Offline
data collection: (39540) seconds.
Offline data collection
capabilities: (0x7b) SMART execute Offline immediate.
Auto Offline data collection on/off support.
Suspend Offline collection upon new
command.
Offline surface scan supported.
Self-test supported.
Conveyance Self-test supported.
Selective Self-test supported.
SMART capabilities: (0x0003) Saves SMART data before entering
power-saving mode.
Supports SMART auto save timer.
Error logging capability: (0x01) Error logging supported.
General Purpose Logging supported.
Short self-test routine
recommended polling time: ( 2) minutes.
Extended self-test routine
recommended polling time: ( 397) minutes.
Conveyance self-test routine
recommended polling time: ( 5) minutes.
SCT capabilities: (0x70bd) SCT Status supported.
SCT Error Recovery Control supported.
SCT Feature Control supported.
SCT Data Table supported.
SMART Attributes Data Structure revision number: 16
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE
1 Raw_Read_Error_Rate 0x002f 200 200 051 Pre-fail Always - 3839
3 Spin_Up_Time 0x0027 189 176 021 Pre-fail Always - 5508
4 Start_Stop_Count 0x0032 100 100 000 Old_age Always - 31
5 Reallocated_Sector_Ct 0x0033 200 200 140 Pre-fail Always - 0
7 Seek_Error_Rate 0x002e 200 200 000 Old_age Always - 0
9 Power_On_Hours 0x0032 039 039 000 Old_age Always - 44535
10 Spin_Retry_Count 0x0032 100 253 000 Old_age Always - 0
11 Calibration_Retry_Count 0x0032 100 253 000 Old_age Always - 0
12 Power_Cycle_Count 0x0032 100 100 000 Old_age Always - 30
192 Power-Off_Retract_Count 0x0032 200 200 000 Old_age Always - 16
193 Load_Cycle_Count 0x0032 200 200 000 Old_age Always - 14
194 Temperature_Celsius 0x0022 113 103 000 Old_age Always - 37
196 Reallocated_Event_Count 0x0032 200 200 000 Old_age Always - 0
197 Current_Pending_Sector 0x0032 200 200 000 Old_age Always - 375
198 Offline_Uncorrectable 0x0030 200 200 000 Old_age Offline - 0
199 UDMA_CRC_Error_Count 0x0032 200 200 000 Old_age Always - 3
200 Multi_Zone_Error_Rate 0x0008 200 199 000 Old_age Offline - 0
SMART Error Log Version: 1
No Errors Logged
SMART Self-test log structure revision number 1
Num Test_Description Status Remaining LifeTime(hours) LBA_of_first_error
# 1 Short offline Completed without error 00% 20553 -
# 2 Short offline Completed without error 00% 20530 -
# 3 Short offline Completed without error 00% 20506 -
# 4 Extended offline Completed without error 00% 20492 -
# 5 Short offline Completed without error 00% 20481 -
# 6 Short offline Completed without error 00% 20458 -
# 7 Short offline Completed without error 00% 20434 -
# 8 Short offline Completed without error 00% 20410 -
# 9 Short offline Completed without error 00% 20386 -
#10 Short offline Completed without error 00% 20362 -
#11 Short offline Completed without error 00% 20338 -
#12 Extended offline Completed without error 00% 20325 -
#13 Short offline Completed without error 00% 20314 -
#14 Short offline Completed without error 00% 20290 -
#15 Short offline Completed without error 00% 20266 -
#16 Short offline Completed without error 00% 20242 -
#17 Short offline Completed without error 00% 20218 -
#18 Short offline Completed without error 00% 20194 -
#19 Short offline Completed without error 00% 20170 -
#20 Extended offline Completed without error 00% 20156 -
#21 Short offline Completed without error 00% 20146 -
SMART Selective self-test log data structure revision number 1
SPAN MIN_LBA MAX_LBA CURRENT_TEST_STATUS
1 0 0 Not_testing
2 0 0 Not_testing
3 0 0 Not_testing
4 0 0 Not_testing
5 0 0 Not_testing
Selective self-test flags (0x0):
After scanning selected spans, do NOT read-scan remainder of disk.
If Selective self-test is pending on power-up, resume after 0 minute delay.
How would you proceed? Put in a replacement drive as fast as possible and rebuild? Try to mount the RAID read-only with the 3 remaining drives and pull backups (a rough sketch of that follows after the examine output below)? Test the old drive and, if it comes out fine (which seems rather unlikely), risk a rebuild?
server@homeserver:~$ sudo mdadm --examine /dev/sd[abcd]1 >> raid.status
mdadm: only give one device per ARRAY line: <ignore> and /dev/md/0
mdadm: No md superblock detected on /dev/sdb1.
server@homeserver:~$ cat raid.status
/dev/sda1:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x0
Array UUID : 38c6ec9d:66fb4bfb:4045343f:7df0d971
Name : localhost.localdomain:0
Creation Time : Mon Sep 9 09:43:02 2013
Raid Level : raid5
Raid Devices : 4
Avail Dev Size : 5860268032 (2794.39 GiB 3000.46 GB)
Array Size : 8790400512 (8383.18 GiB 9001.37 GB)
Used Dev Size : 5860267008 (2794.39 GiB 3000.46 GB)
Data Offset : 262144 sectors
Super Offset : 8 sectors
Unused Space : before=262064 sectors, after=1024 sectors
State : clean
Device UUID : 1210f5ff:16254505:d1bbc6a3:3e9ae35f
Update Time : Tue Oct 9 20:31:55 2018
Checksum : 771b1cd1 - correct
Events : 39665
Layout : left-symmetric
Chunk Size : 512K
Device Role : Active device 0
Array State : AAAA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdc1:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x0
Array UUID : 38c6ec9d:66fb4bfb:4045343f:7df0d971
Name : localhost.localdomain:0
Creation Time : Mon Sep 9 09:43:02 2013
Raid Level : raid5
Raid Devices : 4
Avail Dev Size : 5860268032 (2794.39 GiB 3000.46 GB)
Array Size : 8790400512 (8383.18 GiB 9001.37 GB)
Used Dev Size : 5860267008 (2794.39 GiB 3000.46 GB)
Data Offset : 262144 sectors
Super Offset : 8 sectors
Unused Space : before=262064 sectors, after=1024 sectors
State : clean
Device UUID : 10351ce1:c10425d1:395944e5:1ddaca3e
Update Time : Tue Oct 9 20:31:55 2018
Checksum : 4486fec4 - correct
Events : 39665
Layout : left-symmetric
Chunk Size : 512K
Device Role : Active device 2
Array State : AAAA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdd1:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x0
Array UUID : 38c6ec9d:66fb4bfb:4045343f:7df0d971
Name : localhost.localdomain:0
Creation Time : Mon Sep 9 09:43:02 2013
Raid Level : raid5
Raid Devices : 4
Avail Dev Size : 5860268032 (2794.39 GiB 3000.46 GB)
Array Size : 8790400512 (8383.18 GiB 9001.37 GB)
Used Dev Size : 5860267008 (2794.39 GiB 3000.46 GB)
Data Offset : 262144 sectors
Super Offset : 8 sectors
Unused Space : before=262064 sectors, after=1024 sectors
State : clean
Device UUID : 13a0e327:c25d2525:d04159a1:37586378
Update Time : Tue Oct 9 20:31:55 2018
Checksum : d4fc2979 - correct
Events : 39665
Layout : left-symmetric
Chunk Size : 512K
Device Role : Active device 3
Array State : AAAA ('A' == active, '.' == missing, 'R' == replacing)
server@homeserver:~$
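For reference, a rough sketch of the read-only option mentioned above, assuming the array is /dev/md0, the remaining members are sda1/sdc1/sdd1 and /mnt is a free mount point (all of these names are guesses and may differ on the actual box):
sudo mdadm --stop /dev/md0                                          # stop a half-assembled array first
sudo mdadm --assemble --run /dev/md0 /dev/sda1 /dev/sdc1 /dev/sdd1  # --run starts it despite the missing member
sudo mount -o ro /dev/md0 /mnt                                      # mount read-only and copy the important data off
Pulling the data off before touching anything else keeps the rebuild risk-free for the files that matter most.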
Edited by wergor on 09.10.2018, 22:33
Hansmaulwurf
u wot m8?
"Test the old drive and, if it comes out fine (which seems rather unlikely), risk a rebuild?"
If you do a rebuild, then imho only with a new drive right away.
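A rough sketch of how the swap could look, assuming the new disk shows up as /dev/sdb and the surviving members use GPT (the device names are assumptions, double-check them before running anything):
sudo sgdisk -R /dev/sdb /dev/sda     # copy the partition table from a surviving member onto the new disk
sudo sgdisk -G /dev/sdb              # give the copy fresh random GUIDs
sudo mdadm /dev/md0 --add /dev/sdb1  # add the new partition; md kicks off the rebuild on its own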
wergor
connoisseur de mimi
I've just brought the RAID back online with 3 of 4 drives (active, degraded). The new drive arrives tomorrow. Does the RAID have to be unmounted for the rebuild?
davebastard
Vinyl-Sammler
Not as far as I know; that's the whole point of RAID, increasing availability. It will be slower, but it should keep working.
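The resync then runs in the background while the filesystem stays mounted; progress can be watched with something like:
cat /proc/mdstat              # shows rebuild progress and an estimated finish time
sudo mdadm --detail /dev/md0  # per-device state (spare rebuilding, active sync, ...)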