Wednesday 26 November 2014

Well-known services

Well-known services:-

FLOGI (fabric login) - FFFFFE
PLOGI (port login) - FFFFFC
PRLI (process login)
SCN (State Change Notification)
RSCN (Registered State Change Notification)

Fibre Channel Layers

FC layers:-

1) FC-0: Physical layer
2) FC-1: 8b/10b encoding & decoding
3) FC-2: Framing & flow control
4) FC-3: Common services
5) FC-4: Upper-layer protocol mapping


Physical layer:-
Single-mode cables: 9 microns (supports up to 5 km)
Multimode cables: 50 microns or 62.5 microns (supports up to 200 meters)

LC connectors (1/2-inch size, used with SFP (small form-factor pluggable) transceivers)
SC connectors (1-inch size, used with GBIC (gigabit interface converter) transceivers)



FC-1 (8b/10b encoding & decoding):-
A transmission word is a collection of 4 characters (4 × 10 = 40 bits).
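As a worked example of the 8b/10b overhead (assuming the 1.0625 Gb/s line rate of 1 Gb Fibre Channel, which the notes above do not state): every 8-bit byte is transmitted as a 10-bit character, so the usable byte rate is simply the line rate divided by 10.

```shell
# 8b/10b: each 8-bit byte becomes a 10-bit transmission character.
# At a 1.0625 Gb/s line rate (1G FC), usable bytes/s = line rate / 10.
line_rate_bps=1062500000
echo $(( line_rate_bps / 10 ))   # prints 106250000 (~100 MB/s)
```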

FC-2: Framing & flow control

SCSI payload: 50 KB
FC payload: up to 2112 bytes
Frame size: up to 2148 bytes
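The 2148-byte frame size is consistent with the 2112-byte payload plus the fixed framing fields (SOF 4 B, frame header 24 B, CRC 4 B, EOF 4 B); a quick sanity check:

```shell
# Max FC frame = SOF(4) + frame header(24) + payload(2112) + CRC(4) + EOF(4)
sof=4; header=24; payload=2112; crc=4; eof=4
echo $(( sof + header + payload + crc + eof ))   # prints 2148
```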


FC-3 layer: common services

FC-4 layer: upper-layer protocol mapping

Maps SCSI data onto FC frames (and back to SCSI at the receiving end)
Maps IDE data onto FC frames (and back to IDE at the receiving end)

Fabrics & it's addressing

Fabric:-
A fabric is a well-designed, intelligent & self-configuring network.

Physical Addressing:-

N: Node Port
F: Fabric Port
E: Expansion Port
G: Generic Port
U: Universal Port
NL: Node Loop Port
FL: Fabric Loop Port

Logical addressing:-
It is a 24-bit number (3 × 8 = 24):
Domain ID (switch number) : Area ID (switch port number) : Arbitrated loop physical address (AL_PA)
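The 24-bit address splits into three 8-bit fields, which can be peeled apart with plain shell arithmetic; the sample FC ID 0x010203 below is made up for illustration:

```shell
# Split a 24-bit FC ID into Domain : Area : AL_PA (8 bits each).
fcid=0x010203           # hypothetical example address
domain=$(( (fcid >> 16) & 0xFF ))
area=$((   (fcid >> 8)  & 0xFF ))
alpa=$((    fcid        & 0xFF ))
echo "$domain:$area:$alpa"   # prints 1:2:3
```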

FC Addressing

FC addressing:-
IPv4 (4 × 8 = 32 bits), 4 octets
IPv6 (128 bits), 16 octets
WWN: World Wide Name
WWNN: World Wide Node Name (8 × 8 = 64 bits), 8 octets
WWPN: World Wide Port Name (8 × 8 = 64 bits), 8 octets

Fibre Channel Topologies

Fibre Channel:-
It is a gigabit-speed networking technology.
It is primarily used for storage networks.
It transfers data in serial mode.

Topologies:-
1) Point-to-point topology
2) FC-AL (Fibre Channel Arbitrated Loop) topology (126 devices max)
3) Switched fabric topology (239 devices max)


Point-to-point topology:-

1) Good performance
2) No addressing mechanism

Disadvantages:-
1) Scalability is not possible.

FC-AL topology:-
1) Up to 126 devices can be connected.
2) While one device is communicating, all the others are blocked.
FC hub:-
An FC hub is implemented to eliminate blocking; here the bandwidth is shared.


Switched fabric topology:-
1) Up to 239 devices can be connected.
2) Each switch in a fabric has a unique ID.
3) In a fabric only one principal switch is configured; the rest of the
switches are called subordinate switches.

Principal switch:-
1) It is unique in the fabric.
2) It maintains the domain integrity of the fabric.
Rules for principal switch selection:-
1) Lowest domain ID
2) Lowest WWN
3) Manual assignment

Wednesday 15 October 2014

How to log in to a specific blade of a SUN BLADE 6000 MODULAR SYSTEM through the console?

login as: root
Using keyboard-interactive authentication.
Password:
Oracle(R) Integrated Lights Out Manager
Version 3.1.1.10 r72831
Copyright (c) 2012, Oracle and/or its affiliates. All rights reserved.
Warning: password is set to factory default.
-> ls
 /
    Targets:
        CH
        STORAGE
        Servers
        System
        CMM
    Properties:
    Commands:
        cd
        show
-> cd CH
/CH
-> ls
 /CH
    Targets:
        CMM
        MIDPLANE
        BL0 (BL0)
        BL1 (BL1)
        NEM0
        NEM1
        FM0
        FM1
        FM2
        FM3
        FM4
        FM5
        PS0
        PS1
        T_AMB
        HOT
        VPS
        OK
        SERVICE
        TEMP_FAULT
        LOCATE
    Properties:
        type = Chassis
        ipmi_name = /CH
        product_name = SUN BLADE 6000 MODULAR SYSTEM
        product_part_number = xxxxxxxxx
        product_serial_number = xxxxxxxx
        product_manufacturer = ORACLE CORPORATION
        fru_serial_number = xxxxxxx
        fault_state = OK
        clear_fault_action = (none)
        power_state = On
    Commands:
        cd
        set
        show
        start
        stop
-> cd BL0
/CH/BL0
-> ls
 /CH/BL0
    Targets:
        SP
        SYS
        PRSNT
        STATE
        ERR
        VPS
    Properties:
        type = Blade
        ipmi_name = BL0
        product_name = SPARC T3-1B
        product_part_number = xxxxxxxxxxx
        product_serial_number = xxxxxxxx
        system_identifier = BL0
        fru_name = ASSY,BLADE,SPARC T3-1B
        fru_version = FW 3.0.16.5.b
        fru_part_number = xxxxxxx
        fru_serial_number = xxxxxxxxxx
        fru_extra_1 = FW 3.0.16.5.b
        fault_state = OK
        load_uri = (none)
        clear_fault_action = (none)
    Commands:
        cd
        load
        set
        show
-> cd SP
/CH/BL0/SP
-> ls
 /CH/BL0/SP
    Targets:
        cli
        network
    Properties:
        type = Service Processor
    Commands:
        cd
        reset
        show
-> cd cli
/CH/BL0/SP/cli
-> start
Are you sure you want to start /CH/BL0/SP/cli (y/n)? y
start: Connecting to /CH/BL0/SP/cli using Single Sign On

Oracle(R) Integrated Lights Out Manager
Version 3.0.16.5.b r70648
Copyright (c) 2012, Oracle and/or its affiliates. All rights reserved.
Warning: password is set to factory default.

SUN6000BLADE BL0-> ls
 /
    Targets:
        HOST
        STORAGE
        SYS
        SP
    Properties:
    Commands:
        cd
        show
SUN6000BLADE BL0-> cd HOST
/HOST

SUN6000BLADE BL0-> ls
 /HOST
    Targets:
        bootmode
        console
        diag
        domain
        tpm
    Properties:
        autorestart = reset
        autorunonerror = false
        bootfailrecovery = poweroff
        bootrestart = none
        boottimeout = 0
        hypervisor_version = Hypervisor 1.10.3.a 2011/09/14 13:24
        macaddress = 00:21:28:80:cc:90
        maxbootfail = 3
        obp_version = OpenBoot 4.33.4 2011/11/17 13:45
        post_version = POST 4.33.4 2011/11/17 14:31
        send_break_action = (Cannot show property)
        status = Solaris running
        sysfw_version = Sun System Firmware 8.1.4.e 2012/01/14 17:39
    Commands:
        cd
        set
        show
SUN6000BLADE BL0-> start console
Are you sure you want to start /HOST/console (y/n)? y
Serial console started.  To stop, type #.
TESTSERVER console login:

Tuesday 14 October 2014

How to clear the buffer cache in Linux?

1) Check the buffer size.
# top
top - 10:03:51 up 3 days, 20:03,  3 users,  load average: 0.53, 0.78, 0.83
Tasks: 487 total,   1 running, 486 sleeping,   0 stopped,   0 zombie
Cpu(s):  0.7%us,  0.4%sy,  0.0%ni, 99.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Mem:  132350512k total, 119378720k used, 12971792k free,  1817552k buffers
Swap: 157286396k total,       20k used, 157286376k free, 110546064k cached
2) Release the buffers & cached pages.
# sync && echo 3 > /proc/sys/vm/drop_caches

3) Check the buffer size again.
# top
top - 10:15:41 up 3 days, 20:15,  3 users,  load average: 0.73, 0.76, 0.80
Tasks: 485 total,   1 running, 484 sleeping,   0 stopped,   0 zombie
Cpu(s):  0.7%us,  0.3%sy,  0.0%ni, 99.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Mem:  132350512k total,  2502388k used, 129848124k free,    13256k buffers
Swap: 157286396k total,       20k used, 157286376k free,   137032k cached
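As a quick sanity check on what top prints, used + free should add up to the total (values in kB, taken from the first top output above):

```shell
# From the first top output: Mem total/used/free in kB
total=132350512
used=119378720
free_kb=12971792
echo $(( used + free_kb ))   # prints 132350512, matching the total
```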

Friday 3 October 2014

How to configure a VNC server in Solaris 10?

1) Check whether vncserver is available.
#which vncserver
/usr/bin/vncserver
2) Log in as the required user & check the X GUI path availability; if it is not set, export it as below.
 $PATH=$PATH:/usr/X/bin:/usr/X11/bin
 $export PATH
3) Start the VNC server configuration just by executing vncserver.
 $vncserver
Warning: TESTSERVER:1 is taken because of /tmp/.X1-lock
Remove this file if there is no X server TESTSERVER:1
New 'TESTSERVER:2 ()' desktop is TESTSERVER:2
Creating default startup script /export/home/user1/.vnc/xstartup
Starting applications specified in /export/home/user1/.vnc/xstartup
Log file is /export/home/user1/.vnc/TESTSERVER:2.log
4) Create the VNC user password.
$ vncpasswd
Password:
Verify:
5) Share both the session & the password.
eg:-
xx.xx.xx.xx:1
passwd:abc1234
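A note on the session string: display :N corresponds to TCP port 5900 + N, so a session like xx.xx.xx.xx:1 means the viewer actually connects to port 5901.

```shell
# VNC display :N listens on TCP port 5900 + N
display=1
echo $(( 5900 + display ))   # prints 5901
```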

6) To kill the VNC server:
$ vncserver -kill :2
Killing Xvnc process ID 433
7) To set the geometry & color depth:
$vncserver -geometry 1024x768 -depth 24

8) To disable X access control (allow remote X clients) in a graphical session:
$/usr/X/bin/xhost +

HOW TO INSTALL & CONFIGURE SOLARIS CLUSTER ON SOLARIS 10?


1) Download the cluster package, extract it & install it on both nodes.
# ./installer
Unable to access a usable display on the remote system. Continue in command-line mode? (Y/N)
Y

   Welcome to Oracle(R) Solaris Cluster; serious software made simple...
   Before you begin, refer to the Release Notes and Installation Guide for the
   products that you are installing. This documentation is available at http:
   //www.oracle.com/technetwork/indexes/documentation/index.html.
   You can install any or all of the Services provided by Oracle Solaris
   Cluster.

   Copyright (c) 2000, 2013, Oracle and/or its affiliates. All rights reserved.
   <Press ENTER to Continue>


Installation Type
-----------------
   Do you want to install the full set of Oracle Solaris Cluster Products and
   Services? (Yes/No) [Yes] {"<" goes back, "!" exits} Yes
   Install multilingual package(s) for all selected components [Yes] {"<" goes
   back, "!" exits}:

Checking System Status
    Available disk space...        : Checking .... OK
    Memory installed...            : Checking .... OK
    Swap space installed...        : Checking .... OK
    Operating system patches...    : Checking .... OK
    Operating system resources...  : Checking .... OK

System ready for installation

   Enter 1 to continue [1] {"<" goes back, "!" exits} 1


Screen for selecting Type of Configuration
1. Configure Now - Selectively override defaults or express through
2. Configure Later - Manually configure following installation

   Select Type of Configuration [1] {"<" goes back, "!" exits} 2
Ready to Install
----------------
The following components will be installed.
Product: Oracle Solaris Cluster
Uninstall Location: /var/sadm/prod/SUNWentsyssc33u2
Space Required: 236.44 MB
---------------------------------------------------
        Java DB
           Java DB Server
           Java DB Client
        Oracle Solaris Cluster 3.3u2
           Oracle Solaris Cluster Core
           Oracle Solaris Cluster Manager
        Oracle Solaris Cluster Agents 3.3u2
           Oracle Solaris Cluster HA for Java(TM) System Application Server
           Oracle Solaris Cluster HA for Java(TM) System Message Queue
           Oracle Solaris Cluster HA for Java(TM) System Messaging Server
           Oracle Solaris Cluster HA for Java(TM) System Calendar Server
           Oracle Solaris Cluster HA for Java(TM) System Directory Server
           Oracle Solaris Cluster HA for Java(TM) System Application Server EE (HADB)
           Oracle Solaris Cluster HA for Instant Messaging
           Oracle Solaris Cluster HA/Scalable for Java(TM) System Web Server
           Oracle Solaris Cluster HA for Apache Tomcat
           Oracle Solaris Cluster HA for Apache
           Oracle Solaris Cluster HA for DHCP
           Oracle Solaris Cluster HA for DNS
           Oracle Solaris Cluster HA for MySQL
           Oracle Solaris Cluster HA for Sun N1 Service Provisioning System
           Oracle Solaris Cluster HA for NFS
           Oracle Solaris Cluster HA for Oracle
           Oracle Solaris Cluster HA for Agfa IMPAX
           Oracle Solaris Cluster HA for Samba
           Oracle Solaris Cluster HA for Sun N1 Grid Engine
           Oracle Solaris Cluster HA for Solaris Containers
           Oracle Solaris Cluster Support for Oracle RAC
           Oracle Solaris Cluster HA for Oracle E-Business Suite
           Oracle Solaris Cluster HA for SAP liveCache
           Oracle Solaris Cluster HA for WebSphere Message Broker
           Oracle Solaris Cluster HA for WebSphere MQ
           Oracle Solaris Cluster HA for Oracle 9iAS
           Oracle Solaris Cluster HA for SAPDB
           Oracle Solaris Cluster HA for SAP Web Application Server
           Oracle Solaris Cluster HA for SAP
           Oracle Solaris Cluster HA for PostgreSQL
           Oracle Solaris Cluster HA for Sybase ASE
           Oracle Solaris Cluster HA for BEA WebLogic Server
           Oracle Solaris Cluster HA for Siebel
           Oracle Solaris Cluster HA for Kerberos
           Oracle Solaris Cluster HA for Swift Alliance Access
           Oracle Solaris Cluster HA for Swift Alliance Gateway
           Oracle Solaris Cluster HA for Informix
           Oracle Solaris Cluster HA for xVM Server SPARC Guest Domains
           Oracle Solaris Cluster HA for PeopleSoft Enterprise
           Oracle Solaris Cluster HA for Oracle Business Intelligence Enterprise
Edition
           Oracle Solaris Cluster HA for TimesTen
           Oracle Solaris Cluster HA for Oracle External Proxy
           Oracle Solaris Cluster HA for Oracle Web Tier Agent
           Oracle Solaris Cluster HA for SAP NetWeaver
        Oracle Solaris Cluster Geographic Edition 3.3u2
           Oracle Solaris Cluster Geographic Edition Core Components
           Oracle Solaris Cluster Geographic Edition Manager
           Sun StorEdge Availability Suite Data Replication Support
           Hitachi Truecopy Data Replication Support
           SRDF Data Replication Support
           Oracle Data Guard Data Replication Support
           Oracle Solaris Cluster Geographic Edition Script-Based Plugin Replica
Support
           Oracle Solaris Cluster Geographic Edition Sun ZFS Storage Appliance
Replication
        Quorum Server
        Java(TM) System High Availability Session Store 4.4.3

1. Install
2. Start Over
3. Exit Installation
   What would you like to do [1] {"<" goes back, "!" exits}?
Oracle Solaris Cluster
|-1%--------------25%-----------------50%-----------------75%--------------100%|

Installation Complete

Software installation has completed successfully. You can view the installation
summary and log by using the choices below. Summary and log files are available
in /var/sadm/install/logs/.

Your next step is to perform the postinstallation configuration and
verification tasks documented in the Postinstallation Configuration and Startup
Chapter of the Java(TM) Enterprise System Installation Guide. See: http:
//download.oracle.com/docs/cd/E19528-01/820-2827.
   Enter 1 to view installation summary and Enter 2 to view installation logs
   [1] {"!" exits} !
In order to notify you of potential updates, we need to confirm an internet connection. Do you want to proceed [Y/N] : Y
An internet connection was not detected. If you are using a Proxy please enter it now.
Enter HTTP Proxy Host : ^C#

2) Apply the latest patch for the same.
# patchadd xxxxxx-xx
Validating patches...
Loading patches installed on the system...
Done!
Loading patches requested to install.
Done!
Checking patches that you specified for installation.
Done!

Approved patches will be installed in this order:
xxxxxx-xx

Checking installed patches...
Executing prepatch script...
Installing patch packages...

Patch xxxxxx-xx has been successfully installed.
See /var/sadm/patch/145333-27/log for details
Executing postpatch script...
Patch packages installed:
  SUNWcvmr
  SUNWsccomu
  SUNWsccomzu
  SUNWscderby
  SUNWscdev
  SUNWscgds
  SUNWscmasa
  SUNWscmasar
  SUNWscmasasen
  SUNWscmasau
  SUNWscmasazu
  SUNWscmautil
  SUNWscmd
  SUNWscr
  SUNWscrtlh
  SUNWscsal
  SUNWscsmf
  SUNWscspmu
  SUNWsctelemetry
  SUNWscu
  SUNWscucm
  SUNWsczr
  SUNWsczu
  SUNWudlmr

3) Post-configure the Sun cluster.
# scinstall
  *** Main Menu ***
    Please select from one of the following (*) options:
      * 1) Create a new cluster or add a cluster node
        2) Configure a cluster to be JumpStarted from this install server
        3) Manage a dual-partition upgrade
        4) Upgrade this cluster node
      * 5) Print release information for this cluster node
      * ?) Help with menu options
      * q) Quit
    Option:  1
  *** New Cluster and Cluster Node Menu ***
    Please select from any one of the following options:
        1) Create a new cluster
        2) Create just the first node of a new cluster on this machine
        3) Add this machine as a node in an existing cluster
        ?) Help with menu options
        q) Return to the Main Menu
    Option:  1
  *** Create a New Cluster ***

    This option creates and configures a new cluster.
    You must use the Oracle Solaris Cluster installation media to install
    the Oracle Solaris Cluster framework software on each machine in the
    new cluster before you select this option.
    If the "remote configuration" option is unselected from the Oracle
    Solaris Cluster installer when you install the Oracle Solaris Cluster
    framework on any of the new nodes, then you must configure either the
    remote shell (see rsh(1)) or the secure shell (see ssh(1)) before you
    select this option. If rsh or ssh is used, you must enable root access
    to all of the new member nodes from this node.
    Press Control-D at any time to return to the Main Menu.

    Do you want to continue (yes/no) [yes]?
  >>> Typical or Custom Mode <<<
    This tool supports two modes of operation, Typical mode and Custom
    mode. For most clusters, you can use Typical mode. However, you might
    need to select the Custom mode option if not all of the Typical mode
    defaults can be applied to your cluster.
    For more information about the differences between Typical and Custom
    modes, select the Help option from the menu.
    Please select from one of the following options:
        1) Typical
        2) Custom
        ?) Help
        q) Return to the Main Menu
    Option [1]:  2
  >>> Cluster Name <<<
    Each cluster has a name assigned to it. The name can be made up of any
    characters other than whitespace. Each cluster name should be unique
    within the namespace of your enterprise.
    What is the name of the cluster you want to establish [TestCluster]?  TestCluster
  >>> Cluster Nodes <<<
    This Oracle Solaris Cluster release supports a total of up to 16
    nodes.
    List the names of the other nodes planned for the initial cluster
    configuration. List one node name per line. When finished, type
    Control-D:
    Node name:  NODEA
    Node name:  NODEB
    Node name (Control-D to finish):  ^D

    This is the complete list of nodes:
        NODEA
        NODEB
    Is it correct (yes/no) [yes]?

    Attempting to contact "NODEB" ... done
    Searching for a remote configuration method ... done
    The Oracle Solaris Cluster framework is able to complete the
    configuration process without remote shell access.
  >>> Authenticating Requests to Add Nodes <<<
    Once the first node establishes itself as a single node cluster, other
    nodes attempting to add themselves to the cluster configuration must
    be found on the list of nodes you just provided. You can modify this
    list by using claccess(1CL) or other tools once the cluster has been
    established.
    By default, nodes are not securely authenticated as they attempt to
    add themselves to the cluster configuration. This is generally
    considered adequate, since nodes which are not physically connected to
    the private cluster interconnect will never be able to actually join
    the cluster. However, DES authentication is available. If DES
    authentication is selected, you must configure all necessary
    encryption keys before any node will be allowed to join the cluster
    (see keyserv(1M), publickey(4)).
    Do you need to use DES authentication (yes/no) [no]?
  >>> Minimum Number of Private Networks <<<
    Each cluster is typically configured with at least two private
    networks. Configuring a cluster with just one private interconnect
    provides less availability and will require the cluster to spend more
    time in automatic recovery if that private interconnect fails.
    Should this cluster use at least two private networks (yes/no) [yes]?
  >>> Point-to-Point Cables <<<
    The two nodes of a two-node cluster may use a directly-connected
    interconnect. That is, no cluster switches are configured. However,
    when there are greater than two nodes, this interactive form of
    scinstall assumes that there will be exactly one switch for each
    private network.
    Does this two-node cluster use switches (yes/no) [no]?
  >>> Cluster Transport Adapters and Cables <<<
    Transport adapters are the adapters that attach to the private cluster
    interconnect.
    Select the first cluster transport adapter:
        1) igb0
        2) igb1
        3) igb2
        4) igb3
        5) ixgbe2
        6) ixgbe3
        n) Next >
    Option:  9
    Adapter "ixgbe2" is an Ethernet adapter.
    Searching for any unexpected network traffic on "ixgbe2" ... done
    Verification completed. No traffic was detected over a 10 second
    sample period.
    The "dlpi" transport type will be set for this cluster.
    Name of adapter (physical or virtual) on "NODEB" to which "ixgbe2"
Invalid adapter name.
    Name of adapter (physical or virtual) on "NODEB" to which "ixgbe2"
    Select the second cluster transport adapter:
        1) igb0
        2) igb1
        3) igb2
        4) igb3
        5) ixgbe2
        6) ixgbe3
        n) Next >
    Option:  10
    Adapter "ixgbe3" is an Ethernet adapter.
    Searching for any unexpected network traffic on "ixgbe3" ... done
    Verification completed. No traffic was detected over a 10 second
    sample period.
    The "dlpi" transport type will be set for this cluster.
    Name of adapter (physical or virtual) on "NODEB" to which "ixgbe3"
  >>> Network Address for the Cluster Transport <<<
    The cluster transport uses a default network address of 172.16.0.0. If
    this IP address is already in use elsewhere within your enterprise,
    specify another address from the range of recommended private
    addresses (see RFC 1918 for details).
    The default netmask is 255.255.240.0. You can select another netmask,
    as long as it minimally masks all bits that are given in the network
    address.
    The default private netmask and network address result in an IP
    address range that supports a cluster with a maximum of 32 nodes, 10
    private networks, and 12 virtual clusters.
    Is it okay to accept the default network address (yes/no) [yes]?
    Is it okay to accept the default netmask (yes/no) [yes]?
    Plumbing network address 172.16.0.0 on adapter ixgbe2 >> NOT DUPLICATE ... d
    Plumbing network address 172.16.0.0 on adapter ixgbe3 >> NOT DUPLICATE ... d
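As an aside, the default 255.255.240.0 netmask quoted by scinstall above is a /20, so the transport range spans 2^(32-20) = 4096 addresses:

```shell
# 255.255.240.0 = /20  ->  2^(32-20) addresses in the transport range
prefix=20
echo $(( 1 << (32 - prefix) ))   # prints 4096
```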
  >>> Set Global Fencing <<<
    Fencing is a mechanism that a cluster uses to protect data integrity
    when the cluster interconnect between nodes is lost. By default,
    fencing is turned on for global fencing, and each disk uses the global
    fencing setting. This screen allows you to turn off the global
    fencing.
    Most of the time, leave fencing turned on. However, turn off fencing
    when at least one of the following conditions is true: 1) Your shared
    storage devices, such as Serial Advanced Technology Attachment (SATA)
    disks, do not support SCSI; 2) You want to allow systems outside your
    cluster to access storage devices attached to your cluster; 3) Oracle
    Corporation has not qualified the SCSI persistent group reservation
    (PGR) support for your shared storage devices.
    If you choose to turn off global fencing now, after your cluster
    starts you can still use the cluster(1CL) command to turn on global
    fencing.
    Do you want to turn off global fencing (yes/no) [no]?
  >>> Resource Security Configuration <<<
    The execution of a cluster resource is controlled by the setting of a
    global cluster property called resource_security. When the cluster is
    booted, this property is set to SECURE.
    Resource methods such as Start and Validate always run as root. If
    resource_security is set to SECURE and the resource method executable
    file has non-root ownership or group or world write permissions,
    execution of the resource method fails at run time and an error is
    returned.
    Resource types that declare the Application_user resource property
    perform additional checks on the executable file ownership and
    permissions of application programs. If the resource_security property
    is set to SECURE and the application program executable is not owned
    by root or by the configured Application_user of that resource, or the
    executable has group or world write permissions, execution of the
    application program fails at run time and an error is returned.
    Resource types that declare the Application_user property execute
    application programs according to the setting of the resource_security
    cluster property. If resource_security is set to SECURE, the
    application user will be the value of the Application_user resource
    property; however, if there is no Application_user property, or it is
    unset or empty, the application user will be the owner of the
    application program executable file. The resource will attempt to
    execute the application program as the application user; however a
    non-root process cannot execute as root (regardless of property
    settings and file ownership) and will execute programs as the
    effective non-root user ID.
    You can use the "clsetup" command to change the value of the
    resource_security property after the cluster is running.

Press Enter to continue:
  >>> Quorum Configuration <<<
    Every two-node cluster requires at least one quorum device. By
    default, scinstall selects and configures a shared disk quorum device
    for you.
    This screen allows you to disable the automatic selection and
    configuration of a quorum device.
    You have chosen to turn on the global fencing. If your shared storage
    devices do not support SCSI, such as Serial Advanced Technology
    Attachment (SATA) disks, or if your shared disks do not support
    SCSI-2, you must disable this feature.
    If you disable automatic quorum device selection now, or if you intend
    to use a quorum device that is not a shared disk, you must instead use
    clsetup(1M) to manually configure quorum once both nodes have joined
    the cluster for the first time.
    Do you want to disable automatic quorum device selection (yes/no) [no]?  yes
  >>> Global Devices File System <<<
    Each node in the cluster must have a local file system mounted on
    /global/.devices/node@<nodeID> before it can successfully participate
    as a cluster member. Since the "nodeID" is not assigned until
    scinstall is run, scinstall will set this up for you.
    You must supply the name of either an already-mounted file system or a
    raw disk partition which scinstall can use to create the global
    devices file system. This file system or partition should be at least
    512 MB in size.
    Alternatively, you can use a loopback file (lofi), with a new file
    system, and mount it on /global/.devices/node@<nodeid>.
    If an already-mounted file system is used, the file system must be
    empty. If a raw disk partition is used, a new file system will be
    created for you.
    If the lofi method is used, scinstall creates a new 100 MB file system
    from a lofi device by using the file /.globaldevices. The lofi method
    is typically preferred, since it does not require the allocation of a
    dedicated disk slice.
    The default is to use lofi.
 For node "NODEA",
    Is it okay to use this default (yes/no) [yes]?

 For node "NODEB",
    Is it okay to use this default (yes/no) [yes]?
    Configuring global device using lofi on NODEB: done

    Is it okay to create the new cluster (yes/no) [yes]?
    During the cluster creation process, cluster check is run on each of
    the new cluster nodes. If cluster check detects problems, you can
    either interrupt the process or check the log files after the cluster
    has been established.
    Interrupt cluster creation for cluster check errors (yes/no) [no]?
  Cluster Creation
    Log file - /var/cluster/logs/install/scinstall.log.1896
    Started cluster check on "NODEA".
    Started cluster check on "NODEB".
    cluster check failed for "NODEA".
    cluster check failed for "NODEB".
The cluster check command failed on both of the nodes.
Refer to the log file for details.
The name of the log file is /var/cluster/logs/install/scinstall.log.1896.

    Configuring "NODEB" ... done
    Rebooting "NODEB" ... done
    Configuring "NODEA" ... done
    Rebooting "NODEA" ...
Log file - /var/cluster/logs/install/scinstall.log.1896

Rebooting ...
updating /platform/sun4v/boot_archive

NOTE:-
Both servers reboot automatically.

Now add the quorum device manually:-
#clq add <shared device 1g lun>
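Once the quorum device is added, the configuration can be verified with the cluster CLI. A sketch of environment-specific admin commands (clq above is shorthand for clquorum; run as root on a cluster node):

```shell
# Verify quorum configuration and overall cluster health
clquorum list -v     # list configured quorum devices
clquorum status      # show vote counts and quorum device status
cluster status       # overall cluster / node / transport status
```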

Files:-
/usr/cluster/bin
/var/cluster/logs
/etc/hosts
/etc/vfstab
/global/.devices/node@1 & ...
#cd /etc/cluster
ccr                  locale               qd_userd_door        remoteconfiguration  syncsa.conf
clpl                 nodeid               ql                   security             vp
eventlog             original             release              solaris10.version    zone_cluster





 

HOW TO CONFIGURE MULTIPLE SOLARIS PUBLISHERS ON SOLARIS 11.2?

1) Download the latest SRUs & unzip them to the required location.
#unzip -d /IPS/SOL11.2_SRU2.8/repo p19691311_1100_SOLARIS64_1of2.zip
#unzip -d /IPS/SOL11.2_SRU2.8/repo p19691311_1100_SOLARIS64_2of2.zip
2) Rebuild the repository.
# pkgrepo -s /IPS/SOL11.2_SRU2.8/repo rebuild
Initiating repository rebuild.
3) Verify the repository.
# pkgrepo -s /IPS/SOL11.2_SRU2.8/repo verify

4) Create a new pkg/server instance.
# svccfg -s pkg/server add sol11-2sru2-8
# svcs -a|grep sol11-2sru2-8
5) Configure the required properties.
# svccfg -s svc:/application/pkg/server:sol11-2sru2-8
svc:/application/pkg/server:sol11-2sru2-8> listprop
svc:/application/pkg/server:sol11-2sru2-8> addpg pkg application
svc:/application/pkg/server:sol11-2sru2-8> listprop
pkg  application
svc:/application/pkg/server:sol11-2sru2-8> addpg general framework
svc:/application/pkg/server:sol11-2sru2-8> listprop
pkg      application
general  framework
svc:/application/pkg/server:sol11-2sru2-8> addpropvalue general/enabled boolean: true
svc:/application/pkg/server:sol11-2sru2-8> listprop
pkg                                application
general                            framework
general/enabled                   boolean     true
general/complete                  astring
restarter                          framework            NONPERSISTENT
restarter/logfile                 astring     /var/svc/log/application-pkg-server:sol11-2sru2-8.log
restarter/start_pid               count       2193
restarter/start_method_timestamp  time        1412052471.802291000
restarter/start_method_waitstatus integer     256
restarter/contract                count
restarter/auxiliary_state         astring     fault_threshold_reached
restarter/next_state              astring     none
restarter/state                   astring     maintenance
restarter/state_timestamp         time        1412052471.832000000
restarter_actions                  framework            NONPERSISTENT
restarter_actions/enable_complete time        1412052471.850069000

svc:/application/pkg/server:sol11-2sru2-8> setprop pkg/port=8082
svc:/application/pkg/server:sol11-2sru2-8> listprop
pkg                                application
pkg/port                          count       8082
general                            framework
general/enabled                   boolean     true
general/complete                  astring
restarter                          framework            NONPERSISTENT
restarter/logfile                 astring     /var/svc/log/application-pkg-server:sol11-2sru2-8.log
restarter/start_pid               count       2193
restarter/start_method_timestamp  time        1412052471.802291000
restarter/start_method_waitstatus integer     256
restarter/contract                count
restarter/auxiliary_state         astring     fault_threshold_reached
restarter/next_state              astring     none
restarter/state                   astring     maintenance
restarter/state_timestamp         time        1412052471.832000000
restarter_actions                  framework            NONPERSISTENT
restarter_actions/enable_complete time        1412052471.850069000
svc:/application/pkg/server:sol11-2sru2-8> setprop pkg/inst_root="/IPS/SOL11.2_SRU2.8/repo"
svc:/application/pkg/server:sol11-2sru2-8> listprop
pkg                                 application
pkg/port                           count       8082
pkg/inst_root                      astring     /IPS/SOL11.2_SRU2.8/repo
general                             framework
general/complete                   astring
general/enabled                    boolean     false
restarter                           framework           NONPERSISTENT
restarter/logfile                  astring     /var/svc/log/application-pkg-server:sol11-2sru2-8.log
restarter/start_pid                count       2193
restarter/start_method_timestamp   time        1412052471.802291000
restarter/start_method_waitstatus  integer     256
restarter/contract                 count
restarter/auxiliary_state          astring     disable_request
restarter/next_state               astring     none
restarter/state                    astring     disabled
restarter/state_timestamp          time        1412052841.027547000
restarter_actions                   framework           NONPERSISTENT
restarter_actions/enable_complete  time        1412052471.850069000
restarter_actions/auxiliary_tty    boolean     true
restarter_actions/auxiliary_fmri   astring     svc:/network/ssh:default
restarter_actions/disable_complete time        1412052841.048321000
svc:/application/pkg/server:sol11-2sru2-8> end
6)to set the publishers
#pkg set-publisher -G '*' -g http://xx.xx.xx.xx:8082/ solaris
7)to check publisher
#pkg publisher
PUBLISHER                   TYPE     STATUS P LOCATION
solaris                     origin   online F http://xx.xx.xx.xx:8081/
solaris                     origin   online F http://xx.xx.xx.xx:8082/
8)pkg update
#pkg update --accept


Tuesday 23 September 2014

Sun Cluster configuration information for resource group - script

#!/usr/bin/bash
#########################################
######### TESTING SCRIPT#######
########## SUN CLUSTER Info##############
#####VERSION=1.0############################
##DESIGN&IMPLEMENTED:CHITTIBABU MIRIYALA#
#########################################
echo "From:chittibabu.oracle@gmail.com" >"/tmp/output1"
echo "To:chittibabu.oracle@gmail.com">>"/tmp/output1"
echo "Subject:SUNCLUSTER CONFIGURATION INFO: TESTNODE1 & TESTNODE2 (testdbrg) ">>"/tmp/output1"
echo "Content-type: text/html">>/tmp/output1
echo "<html>">>"/tmp/output1"
echo "<body>">>"/tmp/output1"
echo "<table width=100%>">>"/tmp/output1"
echo "<tr bgcolor="#00FF00"><pre>" >>/tmp/output1
/usr/cluster/bin/clrg  status testdbrg >>"/tmp/output1"
echo "</pre></tr>"  >>/tmp/output1
echo "<tr bgcolor="#D8BFD8"><pre>" >>/tmp/output1
/usr/cluster/bin/clrs status -g testdbrg >>"/tmp/output1"
echo "</pre></tr>"  >>/tmp/output1
echo "<tr bgcolor="#D2B48C"><pre>" >>/tmp/output1
/usr/cluster/bin/clrg  show -v testdbrg >>"/tmp/output1"
echo "</pre></tr>"  >>/tmp/output1
echo "</table>" >>"/tmp/output1"
echo "</body>">>"/tmp/output1"
echo "</html>">>"/tmp/output1"
cat "/tmp/output1"|mail chittibabu.oracle@gmail.com
>"/tmp/output1"
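To mail this report automatically, the script could be scheduled from root's crontab. This is a sketch only; the path /scripts/clusterinfo.sh is a hypothetical location for the script above.

```shell
# Hypothetical crontab entry: run the cluster report daily at 07:00
0 7 * * * /scripts/clusterinfo.sh >/dev/null 2>&1
```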

Saturday 20 September 2014

how to release zfs cache memory online?

problem:-
the server has 128 GB of RAM and the application uses only 97 GB, but the top command shows 127.3 GB used and 0.7 GB free.
solution:-
ZFS is consuming the remaining memory as file-data cache (the ARC); it can be released online and capped permanently.
1)TO CHECK KERNEL MEMORY STATUS
# echo "::memstat" |mdb -k
Page Summary                Pages                MB  %Tot
------------     ----------------  ----------------  ----
Kernel                     299058              2336   10%
ZFS File Data             1221918              9546   40%
Anon                      1457002             11382   47%
Exec and libs                1484                11    0%
Page cache                  27856               217    1%
Free (cachelist)            12098                94    0%
Free (freelist)             66007               515    2%
Total                     3085423             24104
Physical                  3071311             23994
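The MB column is just the page count times the page size; on sun4u/sun4v SPARC the base page size is 8 KB. A quick sketch to cross-check the ZFS File Data row above (page count taken from the memstat output):

```shell
# Sketch: convert a memstat page count to MB, assuming 8 KB pages (SPARC).
zfs_pages=1221918
page_kb=8
zfs_mb=$(( zfs_pages * page_kb / 1024 ))
echo "ZFS File Data: ${zfs_mb} MB"   # -> ZFS File Data: 9546 MB
```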
2)MAKE A LARGE FILE IN /tmp DIRECTORY
 # cd /tmp
 # ls
crontab.1647             crontab.1945             gdm-auth-cookies-.raWbc  hsperfdata_oracle        hsperfdata_root          sh1647.1
# du -sh *
   0K   crontab.1647
   0K   crontab.1945
   8K   XYZ
 104K   hsperfdata_oracle_ABC
   8K   hsperfdata_oracle_BAC
   8K   sh1647.1
 # mkfile 50G test
3)CHECK MEMORY STATUS
# echo "::memstat" |mdb -k
-bash: fork: retry: Resource temporarily unavailable
-bash: fork: retry: Resource temporarily unavailable
Page Summary                Pages                MB  %Tot
------------     ----------------  ----------------  ----
Kernel                     288926              2257    9%
ZFS File Data              570593              4457   18%
Anon                      1397923             10921   45%
Exec and libs                 613                 4    0%
Page cache                 752637              5879   24%
Free (cachelist)            10285                80    0%
Free (freelist)             64446               503    2%
Total                     3085423             24104
Physical                  3071311             23994
making it permanent
a. check memory usage per user. Out of 128 GB, 97 GB is accounted for; the rest has "disappeared" into the ZFS ARC.
     Example:
          prstat -s size -a
          NPROC USERNAME  SWAP   RSS MEMORY      TIME  CPU
              32 sybase     96G   96G    75%  42:38:04 0.2%
              72 root      367M  341M   0.3%   9:38:11 0.0%
               6 daemon   7144K 9160K   0.0%   0:01:01 0.0%
               1 smmsp    2048K 6144K   0.0%   0:00:22 0.0%
b. check total physical memory:
          prtdiag | grep -i Memory
          Memory size: 131072 Megabytes
c. approx 75% of the physical memory is used under typical load. Add a few percent for headroom (call it 80%).
d. the remaining 25% of 128 GB is 32 GB = 34359738368 bytes.
e. configure the ZFS ARC cache limit in /etc/system:
   set zfs:zfs_arc_max=34359738368
f. reboot the system
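The sizing arithmetic in steps c-e can be checked with a quick shell calculation. ram_gb and arc_pct are example inputs here; read the real memory size from prtdiag on your system.

```shell
# Sketch: compute zfs_arc_max as a percentage of physical RAM.
# ram_gb and arc_pct are hypothetical inputs (see 'prtdiag | grep -i Memory').
ram_gb=128
arc_pct=25
arc_max=$(( ram_gb * 1024 * 1024 * 1024 * arc_pct / 100 ))
echo "set zfs:zfs_arc_max=$arc_max"   # -> set zfs:zfs_arc_max=34359738368
```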

HOW TO CONFIGURE HALF DUPLEX TO FULL DUPLEX IN SOLARIS 10 (CE) NIC CARDS?

1)check current duplex of NIC card
# dladm show-dev
hme0            link: down      speed: 0     Mbps       duplex: unknown
hme1            link: down      speed: 0     Mbps       duplex: unknown
ce0             link: up        speed: 100   Mbps       duplex: half
ce1             link: up        speed: 100   Mbps       duplex: half
2)changing half duplex to full duplex
#ndd -set /dev/ce instance 0
# ndd -set /dev/ce adv_1000fdx_cap 0
# ndd -set /dev/ce adv_1000hdx_cap 0
# ndd -set /dev/ce adv_100fdx_cap 1
# ndd -set /dev/ce adv_100hdx_cap 0
# ndd -set /dev/ce adv_10fdx_cap 0
# dladm show-dev
hme0            link: down      speed: 0     Mbps       duplex: unknown
hme1            link: down      speed: 0     Mbps       duplex: unknown
ce0             link: down      speed: 0     Mbps       duplex: unknown
ce1             link: up        speed: 100   Mbps       duplex: half
# ndd -set /dev/ce adv_10hdx_cap 0
# ndd -set /dev/ce adv_autoneg_cap 0
# dladm show-dev
hme0            link: down      speed: 0     Mbps       duplex: unknown
hme1            link: down      speed: 0     Mbps       duplex: unknown
ce0             link: up        speed: 100   Mbps       duplex: full
ce1             link: up        speed: 100   Mbps       duplex: half
#kstat -m ce

to make the settings persistent across reboots, store them in the driver configuration file:
#more /platform/sun4u/kernel/drv/ce.conf
adv_cap_autoneg=1 adv_cap_1000fdx=0 adv_cap_1000hdx=0 adv_cap_100fdx=1
adv_cap_100hdx=0 adv_cap_100T4=0 adv_cap_10fdx=0 adv_cap_10hdx=0;

NOTE:-
for NIC card 2 (instance 1)
#ndd -set /dev/ce instance 1
# ndd -set /dev/ce adv_1000fdx_cap 0
# ndd -set /dev/ce adv_1000hdx_cap 0
# ndd -set /dev/ce adv_100fdx_cap 1
# ndd -set /dev/ce adv_100hdx_cap 0
# ndd -set /dev/ce adv_10fdx_cap 0
# ndd -set /dev/ce adv_10hdx_cap 0
# ndd -set /dev/ce adv_autoneg_cap 0
LOGS:-
Sep 14 02:05:54 TESTSERVER genunix: [ID 408822 kern.info] NOTICE: ce0: no fault external to device; service available
Sep 14 02:05:54 TESTSERVER genunix: [ID 611667 kern.info] NOTICE: ce0: xcvr addr:0x01 - link up 100 Mbps half duplex
Sep 14 02:06:05 TESTSERVER genunix: [ID 408822 kern.info] NOTICE: ce0: no fault external to device; service available
Sep 14 02:06:05 TESTSERVER genunix: [ID 611667 kern.info] NOTICE: ce0: xcvr addr:0x01 - link up 100 Mbps half duplex
Sep 14 02:06:35 TESTSERVER in.mpathd[225]: [ID 594170 daemon.error] NIC failure detected on ce0 of group ipmp0
Sep 14 02:06:35 TESTSERVER in.mpathd[225]: [ID 832587 daemon.error] Successfully failed over from NIC ce0 to NIC ce1
Sep 14 02:07:21 TESTSERVER genunix: [ID 408822 kern.info] NOTICE: ce0: no fault external to device; service available
Sep 14 02:07:21 TESTSERVER genunix: [ID 611667 kern.info] NOTICE: ce0: xcvr addr:0x01 - link up 100 Mbps full duplex
Sep 14 02:07:38 TESTSERVER in.mpathd[225]: [ID 299542 daemon.error] NIC repair detected on ce0 of group ipmp0
Sep 14 02:07:38 TESTSERVER in.mpathd[225]: [ID 620804 daemon.error] Successfully failed back to NIC ce0

=======
3) kstat command to get parameters
# kstat -p|grep ce0
ce:0:ce0:alignment_err  0
ce:0:ce0:brdcstrcv      19331
ce:0:ce0:brdcstxmt      108
ce:0:ce0:cap_1000fdx    1
ce:0:ce0:cap_1000hdx    1
ce:0:ce0:cap_100T4      0
ce:0:ce0:cap_100fdx     1
ce:0:ce0:cap_100hdx     1
ce:0:ce0:cap_10fdx      1
ce:0:ce0:cap_10hdx      1
ce:0:ce0:cap_asmpause   0
ce:0:ce0:cap_autoneg    1
ce:0:ce0:cap_pause      0
ce:0:ce0:class  net
ce:0:ce0:code_violations        0
ce:0:ce0:collisions     0
ce:0:ce0:crc_err        0
ce:0:ce0:crtime 183.2032178
ce:0:ce0:excessive_collisions   0
ce:0:ce0:first_collision        0
ce:0:ce0:ierrors        0
ce:0:ce0:ifspeed        100000000
ce:0:ce0:ipackets       38196
ce:0:ce0:ipackets64     38196
ce:0:ce0:ipackets_cpu00 32832
ce:0:ce0:ipackets_cpu01 4707
ce:0:ce0:ipackets_cpu02 348
ce:0:ce0:ipackets_cpu03 309
ce:0:ce0:late_collisions        0
ce:0:ce0:lb_mode        0
ce:0:ce0:length_err     0
ce:0:ce0:link_T4        0
ce:0:ce0:link_asmpause  0
ce:0:ce0:link_duplex    2
ce:0:ce0:link_pause     0
ce:0:ce0:link_speed     100
ce:0:ce0:link_up        1
ce:0:ce0:lp_cap_1000fdx 0
ce:0:ce0:lp_cap_1000hdx 0
ce:0:ce0:lp_cap_100T4   0
ce:0:ce0:lp_cap_100fdx  0
ce:0:ce0:lp_cap_100hdx  0
ce:0:ce0:lp_cap_10fdx   0
ce:0:ce0:lp_cap_10hdx   0
ce:0:ce0:lp_cap_asmpause        0
ce:0:ce0:lp_cap_autoneg 0
ce:0:ce0:lp_cap_pause   0
ce:0:ce0:multircv       174
ce:0:ce0:multixmt       0
ce:0:ce0:norcvbuf       0
ce:0:ce0:noxmtbuf       0
ce:0:ce0:obytes 2799515
ce:0:ce0:obytes64       2799515
ce:0:ce0:oerrors        0
ce:0:ce0:opackets       25469
ce:0:ce0:opackets64     25469
ce:0:ce0:pci_bad_ack_err        0
ce:0:ce0:pci_dmarz_err  0
ce:0:ce0:pci_dmawz_err  0
ce:0:ce0:pci_drto_err   0
ce:0:ce0:pci_err        0
ce:0:ce0:pci_parity_err 0
ce:0:ce0:pci_rma_err    0
ce:0:ce0:pci_rta_err    0
ce:0:ce0:peak_attempts  0
ce:0:ce0:promisc        off
ce:0:ce0:qos_mode       0
ce:0:ce0:rbytes 2516702
ce:0:ce0:rbytes64       2516702
ce:0:ce0:rev_id 17
ce:0:ce0:rx_allocb_fail 0
ce:0:ce0:rx_hdr_drops   0
ce:0:ce0:rx_hdr_pkts    38056
ce:0:ce0:rx_inits       0
ce:0:ce0:rx_len_mm      0
ce:0:ce0:rx_msgdup_fail 0
ce:0:ce0:rx_mtu_drops   0
ce:0:ce0:rx_mtu_pkts    140
ce:0:ce0:rx_new_hdr_pgs 1189
ce:0:ce0:rx_new_mtu_pgs 35
ce:0:ce0:rx_new_nxt_pgs 0
ce:0:ce0:rx_new_pages   1224
ce:0:ce0:rx_no_buf      0
ce:0:ce0:rx_no_comp_wb  0
ce:0:ce0:rx_nocanput    0
ce:0:ce0:rx_nxt_drops   0
ce:0:ce0:rx_ov_flow     0
ce:0:ce0:rx_pkts_dropped        0
ce:0:ce0:rx_rel_bit     38196
ce:0:ce0:rx_rel_flow    0
ce:0:ce0:rx_split_pkts  0
ce:0:ce0:rx_tag_err     0
ce:0:ce0:rx_taskq_waits 0
ce:0:ce0:snaptime       10593.455837
ce:0:ce0:tx_allocb_fail 0
ce:0:ce0:tx_ddi_pkts    90
ce:0:ce0:tx_dma_bind_fail       0
ce:0:ce0:tx_dma_hdr_bind_fail   0
ce:0:ce0:tx_dma_pld_bind_fail   0
ce:0:ce0:tx_dvma_pkts   7
ce:0:ce0:tx_hdr_pkts    25458
ce:0:ce0:tx_inits       0
ce:0:ce0:tx_max_pend    32
ce:0:ce0:tx_msgdup_fail 0
ce:0:ce0:tx_no_desc     0
ce:0:ce0:tx_nocanput    0
ce:0:ce0:tx_queue0      13186
ce:0:ce0:tx_queue1      670
ce:0:ce0:tx_queue2      2453
ce:0:ce0:tx_queue3      9236
ce:0:ce0:tx_starts      25545
ce:0:ce0:tx_uflo        0
ce:0:ce0:xcvr_addr      1
ce:0:ce0:xcvr_id        2121811
ce:0:ce0:xcvr_inits     1
ce:0:ce0:xcvr_inuse     1
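In the kstat output above, link_duplex is reported as a number. A small sketch of the usual interpretation for the ce driver (the mapping 1 = half, 2 = full is an assumption based on the outputs shown here):

```shell
# Decode the ce driver's link_duplex kstat value (assumed mapping).
duplex_code=2   # e.g. taken from: kstat -p ce:0:ce0:link_duplex
case $duplex_code in
  1) duplex="half" ;;
  2) duplex="full" ;;
  *) duplex="unknown" ;;
esac
echo "link_duplex=$duplex_code -> $duplex duplex"
```

This matches the dladm show-dev output below, which reports ce0 as full duplex while kstat shows link_duplex 2.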
======
# dladm show-dev
hme0            link: down      speed: 0     Mbps       duplex: unknown
hme1            link: down      speed: 0     Mbps       duplex: unknown
ce0             link: up        speed: 100   Mbps       duplex: full
ce1             link: up        speed: 100   Mbps       duplex: full

Tuesday 26 August 2014

Sample mail script to attach a file to an auto-generated mail

#!/usr/bin/ksh
df -h > /scripts/df.txt
export MAILTO="chittibabu.miriyala@gmail.com"
export CONTENT="/scripts/df.txt"
export SUBJECT="sending files example "
(
 echo "Subject: $SUBJECT"
 echo "MIME-Version: 1.0"
 echo "Content-Type: text/plain"
 echo "Content-Disposition:attachment; filename=df_output.txt"
 cat $CONTENT
) | sendmail -f yyy@gmail.com $MAILTO

Smart way to implement a file system list auto-generated script

#!/bin/bash
MAILTO=chittibabu.miriyala@gmail.com
mail $MAILTO <<EOF
From: $MAILTO
To: $MAILTO
Subject: mail testing for list of directories
`cd /oracle;ls -l`
`df -h`
EOF

Password aging auto-generated mail

#!/usr/bin/bash
#########################################
######### Chittibabu Generated SCRIPT########
######### PASSWD AGING CHECK##############
##VERSION=1.0############################
##DESIGN&IMPLEMENTED:CHITTIBABU MIRIYALA#
##TESTED BY :MAHESH KUMAR#################
#########################################
Node=`uname -n`
echo "From:chittibabu.miriyala@gmail.com" >"/tmp/output1"
echo "To:chittibabu.miriyala@gmail.com">>"/tmp/output1"
#Reply-To:chittibabu.miriyala@gmail.com
echo "Subject:PASSWD AGING : $Node">>"/tmp/output1"
echo "Content-type: text/html">>"/tmp/output1"
echo "<html>">>"/tmp/output1"
echo "<body>">>"/tmp/output1"
echo "<table width=100%>">>"/tmp/output1"
echo "<tr BGCOLOR="#FFFF00"><td colspan="6"> <h1> $Node</h1> </td></tr>">>"/tmp/output1"
NDAYS=`/usr/bin/perl -e 'printf("%d\n", time / (3600 * 24))'`
for i in `cat /etc/passwd|cut -d ":" -f 1|egrep -v "uucp|daemon|bin|sys|adm|lp|dladm|netadm|netcfg|smmsp|gdm|zfssnap|upnp|xvm|mysql|openldap|webservd|postgres|svctag|unknown|nobody|noaccess|nobody4|ftp|dhcpserv|aiuser|pkg5srv"`
do
#echo " user name : $i"
if [ `passwd -s $i |awk '{print $2}'` ==  "PS" ]
then
LAST_CHANGE=`grep "^${i}:" /etc/shadow|cut -d ":" -f 3`
DELTA=`echo $NDAYS - $LAST_CHANGE|bc`
MAX=`logins -x -l $i|grep PS|awk '{print $4}'`
con=`echo " $MAX - $DELTA"|bc`
if  [ $MAX = -1 ]
then
echo "<TR BGCOLOR="#00FF00">">>"/tmp/output1"
echo "<td>$i USER PASSWD STATUS</td><td>NO PASSWD EXPIRY</td>">>"/tmp/output1"
continue
fi
if [ $DELTA -le $MAX ] && [ $con -le 3 ]
then
echo "<TR BGCOLOR="#FF0000">">>"/tmp/output1"
echo "<td>$i USER PASSWD STATUS</td><td> PASSWD WILL EXPIRE IN  $con DAYS</td>">>"/tmp/output1"
else
echo "<TR BGCOLOR="#00FF00">">>"/tmp/output1"
echo "<td>$i USER PASSWD STATUS</td><td> PASSWD WILL EXPIRE IN  $con DAYS</td>">>"/tmp/output1"
fi
else
echo "<TR BGCOLOR="#FF0000">">>"/tmp/output1"
        echo "<td>$i USER PASSWD STATUS</td><td>BAD</td>">>"/tmp/output1"
fi
done
echo "</table>">>"/tmp/output1"
echo "</body>">>"/tmp/output1"
echo "</html>">>"/tmp/output1"
cat "/tmp/output1"|mail chittibabu.miriyala@gmail.com
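The expiry arithmetic in the loop above works in days since the epoch. A sketch with fixed example values (all three inputs are hypothetical, for illustration only):

```shell
# Sketch of the password-aging arithmetic used in the script above.
NDAYS=16300        # hypothetical 'today' in days since the epoch
LAST_CHANGE=16250  # hypothetical field 3 of /etc/shadow for the user
MAX=90             # hypothetical max age from 'logins -x'
DELTA=$(( NDAYS - LAST_CHANGE ))   # days since the last password change
LEFT=$(( MAX - DELTA ))            # days remaining until expiry
echo "password expires in $LEFT days"
```

With these inputs the password changed 50 days ago, so 40 days remain; the script flags the user in red once LEFT drops to 3 or below.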

LDoms configuration auto-generated mail

# cat /usr/bin/genpactldmconfig
#!/usr/bin/bash
#ldom script
Node=`uname -n`
echo "From:chittibabu.miriyala@gmail.com" >"/tmp/output1"
echo "To:chittibabu.miriyala@gmail.com">>"/tmp/output1"
#Reply-To:chittibabu.miriyala@gmail.com
echo "Subject:Guest VM Config Info: $Node">>"/tmp/output1"
echo "Content-type: text/html">>"/tmp/output1"
echo "<html>">>"/tmp/output1"
echo "<body>">>"/tmp/output1"
echo "<table width=100%>">>"/tmp/output1"
#echo "<tr BGCOLOR="#FFFF00"><td colspan="6"> <h1>NAME</h1> </td></tr>">>"/tmp/output1"
echo "<tr BGCOLOR="#FFFF00"><td> <h6>NAME</h6> </td><td> <h6>STATE</h6> </td><td> <h6>FLAGS</h6></td><td> <h6>CONS</h6> </td><td> <h6>VCPU</h6></td><td><h6>MEMORY</h6> </td><td> <h6>UTIL</h6> </td><td colspan="3"><h6>UPTIME</h6></td></tr>">>"/tmp/output1"
LDM=`ldm list |grep -v NAME  |awk '{print $1}'`
for i in `echo $LDM`
do
if [ "`ldm list $i|grep -v NAME|awk '{print $2}'`" == "active" ]
then
        echo "<TR BGCOLOR="#00FF00">">>"/tmp/output1"
        f=`ldm list $i|grep -v NAME`
        for y in `echo $f`
        do
        echo "<td> $y</td>">>"/tmp/output1"
        done
        echo "</tr>">>"/tmp/output1"
else
        echo "<TR BGCOLOR="#FF0000">">>"/tmp/output1"
        f=`ldm list $i|grep -v NAME`
        for y in `echo $f`
        do
        echo "<td> $y</td>">>"/tmp/output1"
        done
        echo "</tr>">>"/tmp/output1"
fi
done
echo "</table>">>"/tmp/output1"
echo "<table width=100%>">>"/tmp/output1"
#echo "<tr BGCOLOR="#FFFF00"><td colspan="6"> <h1> $Node</h1> </td></tr>">>"/tmp/output1"
echo "<tr BGCOLOR="#FFFF00"><td> <h6>Pool NAME</h6> </td><td> <h6>SIZE</h6> </td><td> <h6>ALLOC</h6></td><td> <h6>FREE</h6> </td><td> <h6>CAP</h6></td><td><h6>DEDUP</h6> </td><td> <h6>HEALTH</h6> </td><td> <h6>ALTROOT</h6> </td></tr>">>"/tmp/output1"
POOL=`zpool list|grep -v NAME  |awk '{print $1}'`
for i in `echo $POOL`
do
if [ `zpool list $i |grep -v NAME|awk '{print $7}'` == "ONLINE" ]
then
        echo "<TR BGCOLOR="#00FF00">">>"/tmp/output1"
        f=`zpool list $i|grep -v NAME`
        for y in `echo $f`
        do
        echo "<td> $y</td>">>"/tmp/output1"
        done
        echo "</tr>">>"/tmp/output1"
else
        echo "<TR BGCOLOR="#FF0000">">>"/tmp/output1"
        f=`zpool list $i|grep -v NAME`
        for y in `echo $f`
        do
        echo "<td> $y</td>">>"/tmp/output1"
        done
        echo "</tr>">>"/tmp/output1"
fi
done
echo "</table>">>"/tmp/output1"
echo "<table width=100%>">>"/tmp/output1"
#echo "<tr BGCOLOR="#FFFF00"><td colspan="6"> <h1>NAME</h1> </td></tr>">>"/tmp/output1"
echo "<tr BGCOLOR="#FFFF00"><td> <h6>NAME</h6> </td><td> <h6>STATE</h6> </td><td> <h6>FLAGS</h6></td><td> <h6>CONS</h6> </td><td> <h6>VCPU</h6></td><td><h6>MEMORY</h6> </td><td> <h6>UTIL</h6> </td><td colspan="3"><h6>UPTIME</h6></td></tr>">>"/tmp/output1"
LDM=`ldm list |grep -v NAME  |awk '{print $1}'`
for i in `echo $LDM`
do
if [ "`ldm list $i|grep -v NAME|awk '{print $2}'`" == "active" ]
then
        echo "<TR BGCOLOR="#00FF00">">>"/tmp/output1"
        f=`ldm list $i|grep -v NAME`
        for y in `echo $f`
        do
        echo "<td> $y</td>">>"/tmp/output1"
        done
        echo "</tr>">>"/tmp/output1"
        echo "<TR BGCOLOR="#CCEEFF">">>"/tmp/output1"
        echo "<td colspan=10><pre>">>"/tmp/output1"
        ldm list -l $i >>"/tmp/output1"
        echo " </pre></td>">>"/tmp/output1"
        echo "</tr>">>"/tmp/output1"

else
        echo "<TR BGCOLOR="#FF0000">">>"/tmp/output1"
        f=`ldm list $i|grep -v NAME`
        for y in `echo $f`
        do
        echo "<td> $y</td>">>"/tmp/output1"
        done
        echo "</tr>">>"/tmp/output1"
        echo "<TR BGCOLOR="#FF0000">">>"/tmp/output1"
        echo "<td colspan=10><pre>">>"/tmp/output1"
        ldm list -l $i >>"/tmp/output1"
        echo " </pre></td>">>"/tmp/output1"
        echo "</tr>">>"/tmp/output1"
fi
done
echo "</table>">>"/tmp/output1"
echo "<table width=100%>">>"/tmp/output1"
#echo "<tr BGCOLOR="#FFFF00"><td colspan="6"> <h1> $Node</h1> </td></tr>">>"/tmp/output1"
echo "<tr BGCOLOR="#FFFF00"><td> <h6>Pool NAME</h6> </td><td> <h6>SIZE</h6> </td><td> <h6>ALLOC</h6></td><td> <h6>FREE</h6> </td><td> <h6>CAP</h6></td><td><h6>DEDUP</h6> </td><td> <h6>HEALTH</h6> </td><td> <h6>ALTROOT</h6> </td></tr>">>"/tmp/output1"
POOL=`zpool list|grep -v NAME  |awk '{print $1}'`
for i in `echo $POOL`
do
if [ `zpool list $i |grep -v NAME|awk '{print $7}'` == "ONLINE" ]
then
        echo "<TR BGCOLOR="#00FF00">">>"/tmp/output1"
        f=`zpool list $i|grep -v NAME`
        for y in `echo $f`
        do
        echo "<td> $y</td>">>"/tmp/output1"
        done
        echo "</tr>">>"/tmp/output1"
        echo "<TR BGCOLOR="#CCEEFF">">>"/tmp/output1"
        echo "<td colspan=10><pre>">>"/tmp/output1"
        zpool status -v  $i >>"/tmp/output1"
        echo " </pre></td>">>"/tmp/output1"
        echo "</tr>">>"/tmp/output1"

else
        echo "<TR BGCOLOR="#FF0000">">>"/tmp/output1"
        f=`zpool list $i|grep -v NAME`
        for y in `echo $f`
        do
        echo "<td> $y</td>">>"/tmp/output1"
        done
        echo "</tr>">>"/tmp/output1"
        echo "<TR BGCOLOR="#CCEEFF">">>"/tmp/output1"
        echo "<td colspan=10><pre>">>"/tmp/output1"
        zpool status -v  $i >>"/tmp/output1"
        echo " </pre></td>">>"/tmp/output1"
        echo "</tr>">>"/tmp/output1"

fi
done
echo "</table>">>"/tmp/output1"
echo "<table width=100%>">>"/tmp/output1"
echo "<tr BGCOLOR="#FFFF00"><td> <h6>LDOM CONFIGURATION INFORMATION </h6> </td></tr>">>"/tmp/output1"
echo "<TR BGCOLOR="#CCEEFF">">>"/tmp/output1"
echo "<td><pre>">>"/tmp/output1"
ldm list-config >>"/tmp/output1"
echo " </pre></td>">>"/tmp/output1"
echo "</tr>">>"/tmp/output1"
echo "</table>">>"/tmp/output1"
echo "<table width=100%>">>"/tmp/output1"
echo "<tr BGCOLOR="#FFFF00"><td> <h6>LDOM SERVICE INFORMATION </h6> </td></tr>">>"/tmp/output1"
echo "<TR BGCOLOR="#CCEEFF">">>"/tmp/output1"
echo "<td><pre>">>"/tmp/output1"
ldm list-services >>"/tmp/output1"
echo " </pre></td>">>"/tmp/output1"
echo "</tr>">>"/tmp/output1"
echo "</table>">>"/tmp/output1"
echo "<table width=100%>">>"/tmp/output1"
echo "<tr BGCOLOR="#FFFF00"><td> <h6>LDOM I/O INFORMATION </h6> </td></tr>">>"/tmp/output1"
echo "<TR BGCOLOR="#CCEEFF">">>"/tmp/output1"
echo "<td><pre>">>"/tmp/output1"
ldm list-io -l >>"/tmp/output1"
echo " </pre></td>">>"/tmp/output1"
echo "</tr>">>"/tmp/output1"
echo "</table>">>"/tmp/output1"
echo "</body>">>"/tmp/output1"
echo "</html>">>"/tmp/output1"
cat "/tmp/output1"|mail chittibabu.miriyala@gmail.com