Monday, 9 December 2013

LDOM FAILED BECAUSE RPOOL WENT INTO SUSPENDED STATE

LDOM FAILED AFTER UAT

SPARC M5-32, No Keyboard
Copyright (c) 1998, 2013, Oracle and/or its affiliates. All rights reserved.
OpenBoot 4.35.3, 96.0000 GB memory available, Serial #83405637.
Ethernet address 0:14:4f:f8:ab:45, Host ID: 84f8ab45.



Boot device: /virtual-devices@100/channel-devices@200/disk@0:a  File and args:
SunOS Release 5.11 Version 11.1 64-bit
Copyright (c) 1983, 2012, Oracle and/or its affiliates. All rights reserved.
\
Hostname: MURMXPSP
VxVM sysboot INFO V-5-2-3409 starting in boot mode...
WARNING: couldn't allocate SDT table for module vxfs   [message repeated 23 times]
WARNING: couldn't allocate FBT table for module vxfs
WARNING: VxVM vxdmp V-5-3-1391 APM for array type OTHER_DISKS is not available
NOTICE: VxVM vxdmp V-5-0-34 [Info] added disk array OTHER_DISKS, datype = OTHER_DISKS

NOTICE: VxVM vxdmp V-5-0-34 [Info] added disk array 000295700464, datype = EMC

NOTICE: VxVM vxdmp V-5-0-0 [Info] removed disk array FAKE_ENCLR_SNO, datype = FAKE_ARRAY

VxVM sysboot INFO V-5-2-3390 Starting restore daemon...
Dec  9 11:33:11 vxvm:vxconfigd: V-5-1-16765 Selecting configuration database copy from emc0_11fd from disks: emc0_11fd emc0_11f1
Dec  9 11:33:12 vxvm:vxconfigd: V-5-1-16766 Trying to import the disk group MXP-ARCH using configuration database copy from emc0_11fd
Dec  9 11:33:12 vxvm:vxconfigd: V-5-1-16254 Disk group import of MXP-ARCH succeeded.
Dec  9 11:33:12 vxvm:vxconfigd: V-5-1-16765 Selecting configuration database copy from emc0_11ef from disks: emc0_11ef emc0_11fc emc0_11ee emc0_11fb emc0_11f0
Dec  9 11:33:12 vxvm:vxconfigd: V-5-1-16766 Trying to import the disk group MXP-DATA using configuration database copy from emc0_11ef
Dec  9 11:33:13 vxvm:vxconfigd: V-5-1-16254 Disk group import of MXP-DATA succeeded.
Dec  9 11:33:13 vxvm:vxconfigd: WARNING V-365-1-1 This host is not entitled to run Veritas Storage Foundation/Veritas Cluster Server.
As set forth in the End User License Agreement (EULA) you must complete one of the two options set forth below. To comply with this condition of the EULA and
 stop logging of this message, you have 52 days to either:
- make this host managed by a Management Server (see http://go.symantec.com/sfhakeyless for details and free download), or
- add a valid license key matching the functionality in use on this host using the command 'vxlicinst' and validate using the command 'vxkeyless set NONE'.


Dec  9 11:33:15 svc.startd[11]: svc:/system/VRTSperl-runonce:default: Method "/opt/VRTSperl/bin/runonce" failed with exit status 127.
Dec  9 11:33:15 svc.startd[11]: svc:/system/VRTSperl-runonce:default: Method "/opt/VRTSperl/bin/runonce" failed with exit status 127.
Dec  9 11:33:15 svc.startd[11]: svc:/system/VRTSperl-runonce:default: Method "/opt/VRTSperl/bin/runonce" failed with exit status 127.
Dec  9 11:33:15 svc.startd[11]: system/VRTSperl-runonce:default failed: transitioned to maintenance (see 'svcs -xv' for details)
Dec  9 11:33:16 MURMXPSP sendmail[1998]: My unqualified host name (MURMXPSP) unknown; sleeping for retry

MURMXPSP console login:
MURMXPSP console login: root
Password:
Dec  9 11:33:20 MURMXPSP login: ROOT LOGIN /dev/console
Last login: Mon Dec  9 11:28:11 on console
Oracle Corporation      SunOS 5.11      11.1    July 2013
You have new mail.
-bash-4.1# zpool status
  pool: rpool
 state: DEGRADED
status: One or more devices are unavailable in response to persistent errors.
        Sufficient replicas exist for the pool to continue functioning in a
        degraded state.
action: Determine if the device needs to be replaced, and clear the errors
        using 'zpool clear' or 'fmadm repaired', or replace the device
        with 'zpool replace'.
        Run 'zpool status -v' to see device specific details.
  scan: resilvered 1.17M in 0h0m with 0 errors on Thu Dec  5 14:04:10 2013
config:

        NAME        STATE     READ WRITE CKSUM
        rpool       DEGRADED     0     0     0
          mirror-0  DEGRADED     0     0     0
            c3d0s0  ONLINE       0     0     0
            c3d1s0  UNAVAIL      0     0     0

errors: No known data errors
-bash-4.1# vxdisk list
DEVICE       TYPE            DISK         GROUP        STATUS
c3d0s2       auto:ZFS        -            -            ZFS
c3d1s2       auto:ZFS        -            -            ZFS
emc0_11ee    auto:sliced     MXP-DATA1    MXP-DATA     online thinrclm
emc0_11ef    auto:sliced     MXP-DATA2    MXP-DATA     online thinrclm
emc0_11f0    auto:sliced     MXP-DATA3    MXP-DATA     online thinrclm
emc0_11fa    auto:sliced     MXP-DATA4    MXP-DATA     online thinrclm
emc0_11fb    auto:sliced     MXP-DATA5    MXP-DATA     online thinrclm
emc0_11fc    auto:sliced     MXP-DATA6    MXP-DATA     online thinrclm
emc0_11fd    auto:sliced     MXP-ARCH1    MXP-ARCH     online thinrclm
emc0_11f1    auto:sliced     MXP-ARCH2    MXP-ARCH     online thinrclm
emc0_11f2    auto:sliced     -            -            online thinrclm
emc0_11f3    auto:sliced     -            -            online thinrclm
emc0_11f4    auto:sliced     -            -            online thinrclm
emc0_11f5    auto:sliced     -            -            online thinrclm
emc0_11f6    auto:sliced     -            -            online thinrclm
emc0_11f7    auto:sliced     -            -            online thinrclm
emc0_11f8    auto:sliced     -            -            online thinrclm
emc0_11f9    auto:sliced     -            -            online thinrclm

-bash-4.1# zpool detach rpool c3d1s0
cannot detach c3d1s0: pool I/O is currently suspended


-bash-4.1# zpool status
  pool: rpool
 state: SUSPENDED
status: One or more devices are unavailable in response to IO failures.
        The pool is suspended.
action: Make sure the affected devices are connected, then run 'zpool clear' or
        'fmadm repaired'.
        Run 'zpool status -v' to see device specific details.
   see: http://support.oracle.com/msg/ZFS-8000-HC
  scan: resilvered 1.17M in 0h0m with 0 errors on Thu Dec  5 14:04:10 2013
config:

        NAME        STATE     READ WRITE CKSUM
        rpool       UNAVAIL      0     0     0
          mirror-0  UNAVAIL      0     0     0
            c3d0s0  UNAVAIL      0     0     0
            c3d1s0  UNAVAIL      0     0     0
-bash-4.1#
-bash-4.1#
-bash-4.1# zpool status
  pool: rpool
 state: SUSPENDED
status: One or more devices are unavailable in response to IO failures.
        The pool is suspended.
action: Make sure the affected devices are connected, then run 'zpool clear' or
        'fmadm repaired'.
        Run 'zpool status -v' to see device specific details.
   see: http://support.oracle.com/msg/ZFS-8000-HC
  scan: resilvered 1.17M in 0h0m with 0 errors on Thu Dec  5 14:04:10 2013
config:

        NAME        STATE     READ WRITE CKSUM
        rpool       UNAVAIL      0     0     0
          mirror-0  UNAVAIL      0     0     0
            c3d0s0  UNAVAIL      0     0     0
            c3d1s0  UNAVAIL      0     0     0
-bash-4.1# Dec  9 11:34:16 MURMXPSP sendmail[1998]: unable to qualify my own domain name (MURMXPSP) -- using short name
Dec  9 11:34:16 MURMXPSP sendmail[1998]: [ID 702911 mail.alert] unable to qualify my own domain name (MURMXPSP) -- using short name

-bash-4.1# zpool status -v
  pool: rpool
 state: SUSPENDED
status: One or more devices are unavailable in response to IO failures.
        The pool is suspended.
action: Make sure the affected devices are connected, then run 'zpool clear' or
        'fmadm repaired'.
   see: http://support.oracle.com/msg/ZFS-8000-HC
  scan: resilvered 1.17M in 0h0m with 0 errors on Thu Dec  5 14:04:10 2013
config:

        NAME        STATE     READ WRITE CKSUM
        rpool       UNAVAIL      0     0     0
          mirror-0  UNAVAIL      0     0     0
            c3d0s0  UNAVAIL      0     0     0
            c3d1s0  UNAVAIL      0     0     0

device details:

        c3d0s0    UNAVAIL         experienced I/O failures
        status: FMA has faulted this device.
        action: Run 'fmadm faulty' for more information. Clear the errors
                using 'fmadm repaired'.

        c3d1s0    UNAVAIL         experienced I/O failures
        status: FMA has faulted this device.
        action: Run 'fmadm faulty' for more information. Clear the errors
                using 'fmadm repaired'.
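
The action text above points at FMA. Before (or instead of) 'zpool clear', the faulted devices can be listed and marked repaired with the commands the message itself names; a rough sketch (the FMRI/label argument is whatever 'fmadm faulty' reports, not shown here):

-bash-4.1# fmadm faulty                     (list outstanding faults and their UUIDs/FMRIs)
-bash-4.1# fmadm repaired <fmri-or-label>   (mark the device as repaired)
-bash-4.1# zpool clear rpool                (clear the pool's error counters)

In this case the root pool itself went into the SUSPENDED state, and the recovery below was done after a reboot into single-user mode.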

Problem:

1) After rebooting the guest domain, the virtual disk service in the secondary service domain failed to serve the guest's disks. We logged in to the secondary domain manually and checked its disk status; it was fine.

2) We then removed the secondary disk from the LDom (see the sketch after this list).

3) We tried to boot the guest from the primary disk alone.

4) The zpool still had not picked up the changed disk configuration: it kept looking for the secondary disk, which was no longer available, so the pool hit I/O errors and went into the SUSPENDED state.

5) We booted into single-user mode.
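
Removing the secondary disk from the guest (step 2) is done from the control domain. A minimal sketch, assuming hypothetical names for the guest domain (MURMXPSP), the vdisk (vdisk8_s) and its backend volume/service (vol8_s@primary-vds0); substitute the names from the real configuration, and note that a vdisk still in use may need the guest stopped or the -f flag:

(on the control domain)
# ldm list -o disk MURMXPSP                 (identify the vdisk serving the failed mirror half)
# ldm remove-vdisk vdisk8_s MURMXPSP        (detach the vdisk from the guest)
# ldm remove-vdsdev vol8_s@primary-vds0     (drop the backing volume from the virtual disk server)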

SPARC M5-32, No Keyboard
Copyright (c) 1998, 2013, Oracle and/or its affiliates. All rights reserved.
OpenBoot 4.35.3, 96.0000 GB memory available, Serial #83405637.
Ethernet address 0:14:4f:f8:ab:45, Host ID: 84f8ab45.



{0} ok devalias
net                      /virtual-devices@100/channel-devices@200/network@2
vdisk_11fd               /virtual-devices@100/channel-devices@200/disk@11
vdisk_11fc               /virtual-devices@100/channel-devices@200/disk@10
vdisk_11fb               /virtual-devices@100/channel-devices@200/disk@f
vdisk_11fa               /virtual-devices@100/channel-devices@200/disk@e
vdisk_11f9               /virtual-devices@100/channel-devices@200/disk@d
vdisk_11f8               /virtual-devices@100/channel-devices@200/disk@c
vdisk_11f7               /virtual-devices@100/channel-devices@200/disk@b
vdisk_11f6               /virtual-devices@100/channel-devices@200/disk@a
vdisk_11f5               /virtual-devices@100/channel-devices@200/disk@9
vdisk_11f4               /virtual-devices@100/channel-devices@200/disk@8
vdisk_11f3               /virtual-devices@100/channel-devices@200/disk@7
vdisk_11f2               /virtual-devices@100/channel-devices@200/disk@6
vdisk_11f1               /virtual-devices@100/channel-devices@200/disk@5
vdisk_11f0               /virtual-devices@100/channel-devices@200/disk@4
vdisk_11ef               /virtual-devices@100/channel-devices@200/disk@3
vdisk_11ee               /virtual-devices@100/channel-devices@200/disk@2
vdisk8_p                 /virtual-devices@100/channel-devices@200/disk@0
vnet8_hb2                /virtual-devices@100/channel-devices@200/network@4
vnet8_hb1                /virtual-devices@100/channel-devices@200/network@3
vnet8_pro                /virtual-devices@100/channel-devices@200/network@2
vnet8_s                  /virtual-devices@100/channel-devices@200/network@1
vnet8_p                  /virtual-devices@100/channel-devices@200/network@0
net                      /virtual-devices@100/channel-devices@200/network@0
disk                     /virtual-devices@100/channel-devices@200/disk@0
virtual-console          /virtual-devices/console@1
name                     aliases
{0} ok boot vdisk8_p -s
Boot device: /virtual-devices@100/channel-devices@200/disk@0  File and args: -s
SunOS Release 5.11 Version 11.1 64-bit
Copyright (c) 1983, 2012, Oracle and/or its affiliates. All rights reserved.
\
Booting to milestone "milestone/single-user:default".
Hostname: MURMXPSP
Requesting System Maintenance Mode
SINGLE USER MODE

Enter user name for system maintenance (control-d to bypass): WARNING: couldn't allocate SDT table for module vxfs   [message repeated 23 times]
WARNING: couldn't allocate FBT table for module vxfs


Enter user name for system maintenance (control-d to bypass): root
Enter root password (control-d to bypass):
single-user privilege assigned to root on /dev/console.
Entering System Maintenance Mode

Dec  9 12:14:28 su: 'su root' succeeded for root on /dev/console
Oracle Corporation      SunOS 5.11      11.1    July 2013
You have new mail.
-bash-4.1# zpool status
  pool: rpool
 state: DEGRADED
status: One or more devices are unavailable in response to persistent errors.
        Sufficient replicas exist for the pool to continue functioning in a
        degraded state.
action: Determine if the device needs to be replaced, and clear the errors
        using 'zpool clear' or 'fmadm repaired', or replace the device
        with 'zpool replace'.
        Run 'zpool status -v' to see device specific details.
  scan: resilvered 1.17M in 0h0m with 0 errors on Thu Dec  5 14:04:10 2013
config:

        NAME        STATE     READ WRITE CKSUM
        rpool       DEGRADED     0     0     0
          mirror-0  DEGRADED     0     0     0
            c3d0s0  ONLINE       0     0     0
            c3d1s0  UNAVAIL      0     0     0

errors: No known data errors
-bash-4.1# zpool clear rpool

-bash-4.1# zpool status
  pool: rpool
 state: DEGRADED
status: One or more devices are unavailable in response to persistent errors.
        Sufficient replicas exist for the pool to continue functioning in a
        degraded state.
action: Determine if the device needs to be replaced, and clear the errors
        using 'zpool clear' or 'fmadm repaired', or replace the device
        with 'zpool replace'.
        Run 'zpool status -v' to see device specific details.
  scan: resilvered 1.17M in 0h0m with 0 errors on Thu Dec  5 14:04:10 2013
config:

        NAME        STATE     READ WRITE CKSUM
        rpool       DEGRADED     0     0     0
          mirror-0  DEGRADED     0     0     0
            c3d0s0  ONLINE       0     0     0
            c3d1s0  UNAVAIL      0     0     0

errors: No known data errors
-bash-4.1# zpool detach rpool c3d1s0
-bash-4.1# zpool status
  pool: rpool
 state: ONLINE
  scan: resilvered 1.17M in 0h0m with 0 errors on Thu Dec  5 14:04:10 2013
config:

        NAME      STATE     READ WRITE CKSUM
        rpool     ONLINE       0     0     0
          c3d0s0  ONLINE       0     0     0


6) We cleared the pool errors and detached the unavailable mirror half (zpool clear / zpool detach, as shown above).
7) Then we allocated the secondary disk back to the guest domain.
8) We re-attached the secondary disk to the rpool mirror (see the sketch below).
9) The server is now running fine.
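
A minimal sketch of steps 7 and 8, assuming hypothetical names for the backend device, the volume/service (vol8_s@primary-vds0), the vdisk (vdisk8_s) and the guest domain (MURMXPSP); the guest-side device name c3d1s0 matches the zpool output above:

(on the control domain)
# ldm add-vdsdev <backend-device> vol8_s@primary-vds0   (export the secondary disk's backend again)
# ldm add-vdisk vdisk8_s vol8_s@primary-vds0 MURMXPSP   (present it to the guest as a vdisk)

(inside the guest domain)
-bash-4.1# zpool attach rpool c3d0s0 c3d1s0             (re-mirror rpool onto the secondary disk)
-bash-4.1# zpool status rpool                           (wait until the resilver completes)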

