Wednesday 15 October 2014

How to log in to a specific blade of a SUN BLADE 6000 MODULAR SYSTEM through the console?

login as: root
Using keyboard-interactive authentication.
Password:
Oracle(R) Integrated Lights Out Manager
Version 3.1.1.10 r72831
Copyright (c) 2012, Oracle and/or its affiliates. All rights reserved.
Warning: password is set to factory default.
-> ls
 /
    Targets:
        CH
        STORAGE
        Servers
        System
        CMM
    Properties:
    Commands:
        cd
        show
-> cd CH
/CH
-> ls
 /CH
    Targets:
        CMM
        MIDPLANE
        BL0 (BL0)
        BL1 (BL1)
        NEM0
        NEM1
        FM0
        FM1
        FM2
        FM3
        FM4
        FM5
        PS0
        PS1
        T_AMB
        HOT
        VPS
        OK
        SERVICE
        TEMP_FAULT
        LOCATE
    Properties:
        type = Chassis
        ipmi_name = /CH
        product_name = SUN BLADE 6000 MODULAR SYSTEM
        product_part_number = xxxxxxxxx
        product_serial_number = xxxxxxxx
        product_manufacturer = ORACLE CORPORATION
        fru_serial_number = xxxxxxx
        fault_state = OK
        clear_fault_action = (none)
        power_state = On
    Commands:
        cd
        set
        show
        start
        stop
-> cd BL0
/CH/BL0
-> ls
 /CH/BL0
    Targets:
        SP
        SYS
        PRSNT
        STATE
        ERR
        VPS
    Properties:
        type = Blade
        ipmi_name = BL0
        product_name = SPARC T3-1B
        product_part_number = xxxxxxxxxxx
        product_serial_number = xxxxxxxx
        system_identifier = BL0
        fru_name = ASSY,BLADE,SPARC T3-1B
        fru_version = FW 3.0.16.5.b
        fru_part_number = xxxxxxx
        fru_serial_number = xxxxxxxxxx
        fru_extra_1 = FW 3.0.16.5.b
        fault_state = OK
        load_uri = (none)
        clear_fault_action = (none)
    Commands:
        cd
        load
        set
        show
-> cd SP
/CH/BL0/SP
-> ls
 /CH/BL0/SP
    Targets:
        cli
        network
    Properties:
        type = Service Processor
    Commands:
        cd
        reset
        show
-> cd cli
/CH/BL0/SP/cli
-> start
Are you sure you want to start /CH/BL0/SP/cli (y/n)? y
start: Connecting to /CH/BL0/SP/cli using Single Sign On

Oracle(R) Integrated Lights Out Manager
Version 3.0.16.5.b r70648
Copyright (c) 2012, Oracle and/or its affiliates. All rights reserved.
Warning: password is set to factory default.

SUN6000BLADE BL0-> ls
 /
    Targets:
        HOST
        STORAGE
        SYS
        SP
    Properties:
    Commands:
        cd
        show
SUN6000BLADE BL0-> cd HOST
/HOST

SUN6000BLADE BL0-> ls
 /HOST
    Targets:
        bootmode
        console
        diag
        domain
        tpm
    Properties:
        autorestart = reset
        autorunonerror = false
        bootfailrecovery = poweroff
        bootrestart = none
        boottimeout = 0
        hypervisor_version = Hypervisor 1.10.3.a 2011/09/14 13:24
        macaddress = 00:21:28:80:cc:90
        maxbootfail = 3
        obp_version = OpenBoot 4.33.4 2011/11/17 13:45
        post_version = POST 4.33.4 2011/11/17 14:31
        send_break_action = (Cannot show property)
        status = Solaris running
        sysfw_version = Sun System Firmware 8.1.4.e 2012/01/14 17:39
    Commands:
        cd
        set
        show
SUN6000BLADE BL0-> start console
Are you sure you want to start /HOST/console (y/n)? y
Serial console started.  To stop, type #.
TESTSERVER console login:
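The walk above can be condensed: ILOM accepts full target paths, so two start commands reach a blade's console once you are logged in to the CMM. A sketch that builds and prints the recipe (the CMM address is a placeholder; BL0 is the blade slot used above):

```shell
# Build and print the condensed login recipe for reaching a blade console.
BLADE=BL0   # blade slot, as in the session above
RECIPE="ssh root@<cmm-ilom-address>    # log in to the chassis CMM ILOM
-> start /CH/$BLADE/SP/cli     # single sign-on into the blade's SP CLI
-> start /HOST/console         # attach to the blade's host console
   (type #. to detach from the console)"
printf '%s\n' "$RECIPE"
```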

Tuesday 14 October 2014

How to clear the buffer cache in Linux?

1) Check the buffer and cache usage.
# top
top - 10:03:51 up 3 days, 20:03,  3 users,  load average: 0.53, 0.78, 0.83
Tasks: 487 total,   1 running, 486 sleeping,   0 stopped,   0 zombie
Cpu(s):  0.7%us,  0.4%sy,  0.0%ni, 99.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Mem:  132350512k total, 119378720k used, 12971792k free,  1817552k buffers
Swap: 157286396k total,       20k used, 157286376k free, 110546064k cached
2) Release the buffers and cache.
# sync && echo 3 > /proc/sys/vm/drop_caches

3) Check the buffer and cache usage again.
# top
top - 10:15:41 up 3 days, 20:15,  3 users,  load average: 0.73, 0.76, 0.80
Tasks: 485 total,   1 running, 484 sleeping,   0 stopped,   0 zombie
Cpu(s):  0.7%us,  0.3%sy,  0.0%ni, 99.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Mem:  132350512k total,  2502388k used, 129848124k free,    13256k buffers
Swap: 157286396k total,       20k used, 157286376k free,   137032k cached
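The Buffers and Cached figures that top reports come from /proc/meminfo; a small sketch to read them directly in MB (assumes a Linux host; no root needed, unlike the drop itself):

```shell
# Summarize buffer/page-cache usage from /proc/meminfo (values there are in kB).
MEMSTAT=$(awk '/^(MemFree|Buffers|Cached):/ {printf "%s %d MB\n", $1, $2/1024}' /proc/meminfo)
printf '%s\n' "$MEMSTAT"
```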

Friday 3 October 2014

How to configure a VNC server in Solaris 10?

1) Check whether vncserver is available.
#which vncserver
/usr/bin/vncserver
2) Log in as the required user and check that the X binaries are in PATH; if not, export them as below.
 $PATH=$PATH:/usr/X/bin:/usr/X11/bin
 $export PATH
3) Start the VNC server configuration simply by executing vncserver.
 $vncserver
Warning: TESTSERVER:1 is taken because of /tmp/.X1-lock
Remove this file if there is no X server TESTSERVER:1
New 'TESTSERVER:2 ()' desktop is TESTSERVER:2
Creating default startup script /export/home/user1/.vnc/xstartup
Starting applications specified in /export/home/user1/.vnc/xstartup
Log file is /export/home/user1/.vnc/TESTSERVER:2.log
4) Create the VNC user password.
$ vncpasswd
Password:
Verify:
5) Share both the session address and the password with the user.
eg:-
xx.xx.xx.xx:1
passwd:abc1234
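A VNC viewer reaches display :N on TCP port 5900 + N, so the :1 session above answers on port 5901. A quick sketch of the arithmetic (the host is the elided example address from step 5):

```shell
# Convert a VNC session spec host:display into the viewer's TCP port.
SPEC="xx.xx.xx.xx:1"        # example session from step 5
HOST=${SPEC%:*}             # everything before the last colon
DISPLAY_NO=${SPEC##*:}      # display number after the colon
PORT=$((5900 + DISPLAY_NO))
echo "connect to $HOST on TCP port $PORT"   # TCP port 5901
```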

6) To kill the VNC server session:
$ vncserver -kill :2
Killing Xvnc process ID 433
7) To set the geometry and color depth:
$vncserver -geometry 1024x768 -depth 24

8) To allow X client access in a graphical session (note: xhost + disables access control):
$/usr/X/bin/xhost +

HOW TO INSTALL & CONFIGURE SOLARIS CLUSTER ON SOLARIS 10?


1) Download the cluster package, extract it, and install it on both nodes.
# ./installer
Unable to access a usable display on the remote system. Continue in command-line mode?(Y/N)
Y

   Welcome to Oracle(R) Solaris Cluster; serious software made simple...
   Before you begin, refer to the Release Notes and Installation Guide for the
   products that you are installing. This documentation is available at http:
   //www.oracle.com/technetwork/indexes/documentation/index.html.
   You can install any or all of the Services provided by Oracle Solaris
   Cluster.

   Copyright (c) 2000, 2013, Oracle and/or its affiliates. All rights reserved.
   <Press ENTER to Continue>


Installation Type
-----------------
   Do you want to install the full set of Oracle Solaris Cluster Products and
   Services? (Yes/No) [Yes] {"<" goes back, "!" exits} Yes
   Install multilingual package(s) for all selected components [Yes] {"<" goes
   back, "!" exits}:

Checking System Status
    Available disk space...        : Checking .... OK
    Memory installed...            : Checking .... OK
    Swap space installed...        : Checking .... OK
    Operating system patches...    : Checking .... OK
    Operating system resources...  : Checking .... OK

System ready for installation

   Enter 1 to continue [1] {"<" goes back, "!" exits} 1


Screen for selecting Type of Configuration
1. Configure Now - Selectively override defaults or express through
2. Configure Later - Manually configure following installation

   Select Type of Configuration [1] {"<" goes back, "!" exits} 2
Ready to Install
----------------
The following components will be installed.
Product: Oracle Solaris Cluster
Uninstall Location: /var/sadm/prod/SUNWentsyssc33u2
Space Required: 236.44 MB
---------------------------------------------------
        Java DB
           Java DB Server
           Java DB Client
        Oracle Solaris Cluster 3.3u2
           Oracle Solaris Cluster Core
           Oracle Solaris Cluster Manager
        Oracle Solaris Cluster Agents 3.3u2
           Oracle Solaris Cluster HA for Java(TM) System Application Server
           Oracle Solaris Cluster HA for Java(TM) System Message Queue
           Oracle Solaris Cluster HA for Java(TM) System Messaging Server
           Oracle Solaris Cluster HA for Java(TM) System Calendar Server
           Oracle Solaris Cluster HA for Java(TM) System Directory Server
           Oracle Solaris Cluster HA for Java(TM) System Application Server EE (HADB)
           Oracle Solaris Cluster HA for Instant Messaging
           Oracle Solaris Cluster HA/Scalable for Java(TM) System Web Server
           Oracle Solaris Cluster HA for Apache Tomcat
           Oracle Solaris Cluster HA for Apache
           Oracle Solaris Cluster HA for DHCP
           Oracle Solaris Cluster HA for DNS
           Oracle Solaris Cluster HA for MySQL
           Oracle Solaris Cluster HA for Sun N1 Service Provisioning System
           Oracle Solaris Cluster HA for NFS
           Oracle Solaris Cluster HA for Oracle
           Oracle Solaris Cluster HA for Agfa IMPAX
           Oracle Solaris Cluster HA for Samba
           Oracle Solaris Cluster HA for Sun N1 Grid Engine
           Oracle Solaris Cluster HA for Solaris Containers
           Oracle Solaris Cluster Support for Oracle RAC
           Oracle Solaris Cluster HA for Oracle E-Business Suite
           Oracle Solaris Cluster HA for SAP liveCache
           Oracle Solaris Cluster HA for WebSphere Message Broker
           Oracle Solaris Cluster HA for WebSphere MQ
           Oracle Solaris Cluster HA for Oracle 9iAS
           Oracle Solaris Cluster HA for SAPDB
           Oracle Solaris Cluster HA for SAP Web Application Server
           Oracle Solaris Cluster HA for SAP
           Oracle Solaris Cluster HA for PostgreSQL
           Oracle Solaris Cluster HA for Sybase ASE
           Oracle Solaris Cluster HA for BEA WebLogic Server
           Oracle Solaris Cluster HA for Siebel
           Oracle Solaris Cluster HA for Kerberos
           Oracle Solaris Cluster HA for Swift Alliance Access
           Oracle Solaris Cluster HA for Swift Alliance Gateway
           Oracle Solaris Cluster HA for Informix
           Oracle Solaris Cluster HA for xVM Server SPARC Guest Domains
           Oracle Solaris Cluster HA for PeopleSoft Enterprise
           Oracle Solaris Cluster HA for Oracle Business Intelligence Enterprise
Edition
           Oracle Solaris Cluster HA for TimesTen
           Oracle Solaris Cluster HA for Oracle External Proxy
           Oracle Solaris Cluster HA for Oracle Web Tier Agent
           Oracle Solaris Cluster HA for SAP NetWeaver
        Oracle Solaris Cluster Geographic Edition 3.3u2
           Oracle Solaris Cluster Geographic Edition Core Components
           Oracle Solaris Cluster Geographic Edition Manager
           Sun StorEdge Availability Suite Data Replication Support
           Hitachi Truecopy Data Replication Support
           SRDF Data Replication Support
           Oracle Data Guard Data Replication Support
           Oracle Solaris Cluster Geographic Edition Script-Based Plugin Replica
Support
           Oracle Solaris Cluster Geographic Edition Sun ZFS Storage Appliance
Replication
        Quorum Server
        Java(TM) System High Availability Session Store 4.4.3

1. Install
2. Start Over
3. Exit Installation
   What would you like to do [1] {"<" goes back, "!" exits}?
Oracle Solaris Cluster
|-1%--------------25%-----------------50%-----------------75%--------------100%|

Installation Complete

Software installation has completed successfully. You can view the installation
summary and log by using the choices below. Summary and log files are available
in /var/sadm/install/logs/.

Your next step is to perform the postinstallation configuration and
verification tasks documented in the Postinstallation Configuration and Startup
Chapter of the Java(TM) Enterprise System Installation Guide. See: http:
//download.oracle.com/docs/cd/E19528-01/820-2827.
   Enter 1 to view installation summary and Enter 2 to view installation logs
   [1] {"!" exits} !
In order to notify you of potential updates, we need to confirm an internet connection. Do you want to proceed [Y/N] : Y
An internet connection was not detected. If you are using a Proxy please enter it now.
Enter HTTP Proxy Host : ^C#

2) Apply the latest patch for the same.
# patchadd xxxxxx-xx
Validating patches...
Loading patches installed on the system...
Done!
Loading patches requested to install.
Done!
Checking patches that you specified for installation.
Done!

Approved patches will be installed in this order:
xxxxxx-xx

Checking installed patches...
Executing prepatch script...
Installing patch packages...

Patch xxxxxx-xx has been successfully installed.
See /var/sadm/patch/145333-27/log for details
Executing postpatch script...
Patch packages installed:
  SUNWcvmr
  SUNWsccomu
  SUNWsccomzu
  SUNWscderby
  SUNWscdev
  SUNWscgds
  SUNWscmasa
  SUNWscmasar
  SUNWscmasasen
  SUNWscmasau
  SUNWscmasazu
  SUNWscmautil
  SUNWscmd
  SUNWscr
  SUNWscrtlh
  SUNWscsal
  SUNWscsmf
  SUNWscspmu
  SUNWsctelemetry
  SUNWscu
  SUNWscucm
  SUNWsczr
  SUNWsczu
  SUNWudlmr

3) Post-install configuration of the Sun cluster.
# scinstall
  *** Main Menu ***
    Please select from one of the following (*) options:
      * 1) Create a new cluster or add a cluster node
        2) Configure a cluster to be JumpStarted from this install server
        3) Manage a dual-partition upgrade
        4) Upgrade this cluster node
      * 5) Print release information for this cluster node
      * ?) Help with menu options
      * q) Quit
    Option:  1
  *** New Cluster and Cluster Node Menu ***
    Please select from any one of the following options:
        1) Create a new cluster
        2) Create just the first node of a new cluster on this machine
        3) Add this machine as a node in an existing cluster
        ?) Help with menu options
        q) Return to the Main Menu
    Option:  1
  *** Create a New Cluster ***

    This option creates and configures a new cluster.
    You must use the Oracle Solaris Cluster installation media to install
    the Oracle Solaris Cluster framework software on each machine in the
    new cluster before you select this option.
    If the "remote configuration" option is unselected from the Oracle
    Solaris Cluster installer when you install the Oracle Solaris Cluster
    framework on any of the new nodes, then you must configure either the
    remote shell (see rsh(1)) or the secure shell (see ssh(1)) before you
    select this option. If rsh or ssh is used, you must enable root access
    to all of the new member nodes from this node.
    Press Control-D at any time to return to the Main Menu.

    Do you want to continue (yes/no) [yes]?
  >>> Typical or Custom Mode <<<
    This tool supports two modes of operation, Typical mode and Custom
    mode. For most clusters, you can use Typical mode. However, you might
    need to select the Custom mode option if not all of the Typical mode
    defaults can be applied to your cluster.
    For more information about the differences between Typical and Custom
    modes, select the Help option from the menu.
    Please select from one of the following options:
        1) Typical
        2) Custom
        ?) Help
        q) Return to the Main Menu
    Option [1]:  2
  >>> Cluster Name <<<
    Each cluster has a name assigned to it. The name can be made up of any
    characters other than whitespace. Each cluster name should be unique
    within the namespace of your enterprise.
    What is the name of the cluster you want to establish [TestCluster]?  TestCluster
  >>> Cluster Nodes <<<
    This Oracle Solaris Cluster release supports a total of up to 16
    nodes.
    List the names of the other nodes planned for the initial cluster
    configuration. List one node name per line. When finished, type
    Control-D:
    Node name:  NODEA
    Node name:  NODEB
    Node name (Control-D to finish):  ^D

    This is the complete list of nodes:
        NODEA
        NODEB
    Is it correct (yes/no) [yes]?

    Attempting to contact "NODEB" ... done
    Searching for a remote configuration method ... done
    The Oracle Solaris Cluster framework is able to complete the
    configuration process without remote shell access.
  >>> Authenticating Requests to Add Nodes <<<
    Once the first node establishes itself as a single node cluster, other
    nodes attempting to add themselves to the cluster configuration must
    be found on the list of nodes you just provided. You can modify this
    list by using claccess(1CL) or other tools once the cluster has been
    established.
    By default, nodes are not securely authenticated as they attempt to
    add themselves to the cluster configuration. This is generally
    considered adequate, since nodes which are not physically connected to
    the private cluster interconnect will never be able to actually join
    the cluster. However, DES authentication is available. If DES
    authentication is selected, you must configure all necessary
    encryption keys before any node will be allowed to join the cluster
    (see keyserv(1M), publickey(4)).
    Do you need to use DES authentication (yes/no) [no]?
  >>> Minimum Number of Private Networks <<<
    Each cluster is typically configured with at least two private
    networks. Configuring a cluster with just one private interconnect
    provides less availability and will require the cluster to spend more
    time in automatic recovery if that private interconnect fails.
    Should this cluster use at least two private networks (yes/no) [yes]?
  >>> Point-to-Point Cables <<<
    The two nodes of a two-node cluster may use a directly-connected
    interconnect. That is, no cluster switches are configured. However,
    when there are greater than two nodes, this interactive form of
    scinstall assumes that there will be exactly one switch for each
    private network.
    Does this two-node cluster use switches (yes/no) [no]?
  >>> Cluster Transport Adapters and Cables <<<
    Transport adapters are the adapters that attach to the private cluster
    interconnect.
    Select the first cluster transport adapter:
        1) igb0
        2) igb1
        3) igb2
        4) igb3
        5) ixgbe2
        6) ixgbe3
        n) Next >
    Option:  9
    Adapter "ixgbe2" is an Ethernet adapter.
    Searching for any unexpected network traffic on "ixgbe2" ... done
    Verification completed. No traffic was detected over a 10 second
    sample period.
    The "dlpi" transport type will be set for this cluster.
    Name of adapter (physical or virtual) on "NODEB" to which "ixgbe2"
Invalid adapter name.
    Name of adapter (physical or virtual) on "NODEB" to which "ixgbe2"
    Select the second cluster transport adapter:
        1) igb0
        2) igb1
        3) igb2
        4) igb3
        5) ixgbe2
        6) ixgbe3
        n) Next >
    Option:  10
    Adapter "ixgbe3" is an Ethernet adapter.
    Searching for any unexpected network traffic on "ixgbe3" ... done
    Verification completed. No traffic was detected over a 10 second
    sample period.
    The "dlpi" transport type will be set for this cluster.
    Name of adapter (physical or virtual) on "NODEB" to which "ixgbe3"
  >>> Network Address for the Cluster Transport <<<
    The cluster transport uses a default network address of 172.16.0.0. If
    this IP address is already in use elsewhere within your enterprise,
    specify another address from the range of recommended private
    addresses (see RFC 1918 for details).
    The default netmask is 255.255.240.0. You can select another netmask,
    as long as it minimally masks all bits that are given in the network
    address.
    The default private netmask and network address result in an IP
    address range that supports a cluster with a maximum of 32 nodes, 10
    private networks, and 12 virtual clusters.
    Is it okay to accept the default network address (yes/no) [yes]?
    Is it okay to accept the default netmask (yes/no) [yes]?
    Plumbing network address 172.16.0.0 on adapter ixgbe2 >> NOT DUPLICATE ... d
    Plumbing network address 172.16.0.0 on adapter ixgbe3 >> NOT DUPLICATE ... d
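As a quick check on the sizing quoted above, 255.255.240.0 is a /20 mask, leaving 12 host bits over the 172.16.0.0 base (the 32-node/10-network/12-cluster limits themselves come from scinstall's internal allocation, not from this arithmetic alone):

```shell
# 255.255.240.0 = /20 -> 12 host bits -> 4096 addresses in the transport range.
MASK_BITS=20
HOST_BITS=$((32 - MASK_BITS))
ADDRS=$((1 << HOST_BITS))
echo "host bits: $HOST_BITS, addresses: $ADDRS"   # host bits: 12, addresses: 4096
```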
  >>> Set Global Fencing <<<
    Fencing is a mechanism that a cluster uses to protect data integrity
    when the cluster interconnect between nodes is lost. By default,
    fencing is turned on for global fencing, and each disk uses the global
    fencing setting. This screen allows you to turn off the global
    fencing.
    Most of the time, leave fencing turned on. However, turn off fencing
    when at least one of the following conditions is true: 1) Your shared
    storage devices, such as Serial Advanced Technology Attachment (SATA)
    disks, do not support SCSI; 2) You want to allow systems outside your
    cluster to access storage devices attached to your cluster; 3) Oracle
    Corporation has not qualified the SCSI persistent group reservation
    (PGR) support for your shared storage devices.
    If you choose to turn off global fencing now, after your cluster
    starts you can still use the cluster(1CL) command to turn on global
    fencing.
    Do you want to turn off global fencing (yes/no) [no]?
  >>> Resource Security Configuration <<<
    The execution of a cluster resource is controlled by the setting of a
    global cluster property called resource_security. When the cluster is
    booted, this property is set to SECURE.
    Resource methods such as Start and Validate always run as root. If
    resource_security is set to SECURE and the resource method executable
    file has non-root ownership or group or world write permissions,
    execution of the resource method fails at run time and an error is
    returned.
    Resource types that declare the Application_user resource property
    perform additional checks on the executable file ownership and
    permissions of application programs. If the resource_security property
    is set to SECURE and the application program executable is not owned
    by root or by the configured Application_user of that resource, or the
    executable has group or world write permissions, execution of the
    application program fails at run time and an error is returned.
    Resource types that declare the Application_user property execute
    application programs according to the setting of the resource_security
    cluster property. If resource_security is set to SECURE, the
    application user will be the value of the Application_user resource
    property; however, if there is no Application_user property, or it is
    unset or empty, the application user will be the owner of the
    application program executable file. The resource will attempt to
    execute the application program as the application user; however a
    non-root process cannot execute as root (regardless of property
    settings and file ownership) and will execute programs as the
    effective non-root user ID.
    You can use the "clsetup" command to change the value of the
    resource_security property after the cluster is running.

Press Enter to continue:
  >>> Quorum Configuration <<<
    Every two-node cluster requires at least one quorum device. By
    default, scinstall selects and configures a shared disk quorum device
    for you.
    This screen allows you to disable the automatic selection and
    configuration of a quorum device.
    You have chosen to turn on the global fencing. If your shared storage
    devices do not support SCSI, such as Serial Advanced Technology
    Attachment (SATA) disks, or if your shared disks do not support
    SCSI-2, you must disable this feature.
    If you disable automatic quorum device selection now, or if you intend
    to use a quorum device that is not a shared disk, you must instead use
    clsetup(1M) to manually configure quorum once both nodes have joined
    the cluster for the first time.
    Do you want to disable automatic quorum device selection (yes/no) [no]?  yes
  >>> Global Devices File System <<<
    Each node in the cluster must have a local file system mounted on
    /global/.devices/node@<nodeID> before it can successfully participate
    as a cluster member. Since the "nodeID" is not assigned until
    scinstall is run, scinstall will set this up for you.
    You must supply the name of either an already-mounted file system or a
    raw disk partition which scinstall can use to create the global
    devices file system. This file system or partition should be at least
    512 MB in size.
    Alternatively, you can use a loopback file (lofi), with a new file
    system, and mount it on /global/.devices/node@<nodeid>.
    If an already-mounted file system is used, the file system must be
    empty. If a raw disk partition is used, a new file system will be
    created for you.
    If the lofi method is used, scinstall creates a new 100 MB file system
    from a lofi device by using the file /.globaldevices. The lofi method
    is typically preferred, since it does not require the allocation of a
    dedicated disk slice.
    The default is to use lofi.
 For node "NODEA",
    Is it okay to use this default (yes/no) [yes]?

 For node "NODEB",
    Is it okay to use this default (yes/no) [yes]?
    Configuring global device using lofi on NODEB: done

    Is it okay to create the new cluster (yes/no) [yes]?
    During the cluster creation process, cluster check is run on each of
    the new cluster nodes. If cluster check detects problems, you can
    either interrupt the process or check the log files after the cluster
    has been established.
    Interrupt cluster creation for cluster check errors (yes/no) [no]?
  Cluster Creation
    Log file - /var/cluster/logs/install/scinstall.log.1896
    Started cluster check on "NODEA".
    Started cluster check on "NODEB".
    cluster check failed for "NODEA".
    cluster check failed for "NODEB".
The cluster check command failed on both of the nodes.
Refer to the log file for details.
The name of the log file is /var/cluster/logs/install/scinstall.log.1896.

    Configuring "NODEB" ... done
    Rebooting "NODEB" ... done
    Configuring "NODEA" ... done
    Rebooting "NODEA" ...
Log file - /var/cluster/logs/install/scinstall.log.1896

Rebooting ...
updating /platform/sun4v/boot_archive

NOTE:-
The two servers are rebooted automatically.

Now add the quorum device manually:-
#clq add <shared 1G LUN device>
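For example (a sketch; the DID device name d4 is a placeholder, and cldev/clq are the short forms of cldevice/clquorum in Solaris Cluster 3.3; the guard lets the block degrade gracefully when run off-cluster):

```shell
# Register a shared LUN as the quorum device (run as root on one cluster node).
if command -v clq >/dev/null 2>&1; then
  QSTATE="cluster"
  cldev list -v     # identify a shared DID device, e.g. d4
  clq add d4        # hypothetical device name: pick one from the cldev output
  clq status        # verify the quorum votes are present
else
  QSTATE="no-cluster"
  echo "Solaris Cluster CLI not found; run this on a cluster node" >&2
fi
```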

Files:-
/usr/cluster/bin
/var/cluster/logs
/etc/hosts
/etc/vfstab
 /global/.devices/node@1 & ...
#cd /etc/cluster
ccr                  locale               qd_userd_door        remoteconfiguration  syncsa.conf
clpl                 nodeid               ql                   security             vp
eventlog             original             release              solaris10.version    zone_cluster

HOW TO CONFIGURE MULTIPLE SOLARIS PUBLISHERS ON SOLARIS 11.2?

1) Download the latest SRUs and unzip them into the required location.
#unzip -d /IPS/SOL11.2_SRU2.8/repo p19691311_1100_SOLARIS64_1of2.zip
#unzip -d /IPS/SOL11.2_SRU2.8/repo p19691311_1100_SOLARIS64_2of2.zip
2) Rebuild the repository.
# pkgrepo -s /IPS/SOL11.2_SRU2.8/repo rebuild
Initiating repository rebuild.
3) Verify the repository.
 # pkgrepo -s /IPS/SOL11.2_SRU2.8/repo verify

4) Create a new pkg/server instance.
# svccfg -s pkg/server add sol11-2sru2-8
# svcs -a|grep sol11-2sru2-8
5) Configure the required properties.
# svccfg -s svc:/application/pkg/server:sol11-2sru2-8
svc:/application/pkg/server:sol11-2sru2-8> listprop
svc:/application/pkg/server:sol11-2sru2-8> addpg pkg application
svc:/application/pkg/server:sol11-2sru2-8> listprop
pkg  application
svc:/application/pkg/server:sol11-2sru2-8> addpg general framework
svc:/application/pkg/server:sol11-2sru2-8> listprop
pkg      application
general  framework
svc:/application/pkg/server:sol11-2sru2-8> addpropvalue general/enabled boolean: true
svc:/application/pkg/server:sol11-2sru2-8> listprop
pkg                                application
general                            framework
general/enabled                   boolean     true
general/complete                  astring
restarter                          framework            NONPERSISTENT
restarter/logfile                 astring     /var/svc/log/application-pkg-server:sol11-2sru2-8.log
restarter/start_pid               count       2193
restarter/start_method_timestamp  time        1412052471.802291000
restarter/start_method_waitstatus integer     256
restarter/contract                count
restarter/auxiliary_state         astring     fault_threshold_reached
restarter/next_state              astring     none
restarter/state                   astring     maintenance
restarter/state_timestamp         time        1412052471.832000000
restarter_actions                  framework            NONPERSISTENT
restarter_actions/enable_complete time        1412052471.850069000

svc:/application/pkg/server:sol11-2sru2-8> setprop pkg/port=8082
svc:/application/pkg/server:sol11-2sru2-8> listprop
pkg                                application
pkg/port                          count       8082
general                            framework
general/enabled                   boolean     true
general/complete                  astring
restarter                          framework            NONPERSISTENT
restarter/logfile                 astring     /var/svc/log/application-pkg-server:sol11-2sru2-8.log
restarter/start_pid               count       2193
restarter/start_method_timestamp  time        1412052471.802291000
restarter/start_method_waitstatus integer     256
restarter/contract                count
restarter/auxiliary_state         astring     fault_threshold_reached
restarter/next_state              astring     none
restarter/state                   astring     maintenance
restarter/state_timestamp         time        1412052471.832000000
restarter_actions                  framework            NONPERSISTENT
restarter_actions/enable_complete time        1412052471.850069000
svc:/application/pkg/server:sol11-2sru2-8> setprop pkg/inst_root="/IPS/SOL11.2_SRU2.8/repo"
svc:/application/pkg/server:sol11-2sru2-8> listprop
pkg                                 application
pkg/port                           count       8082
pkg/inst_root                      astring     /IPS/SOL11.2_SRU2.8/repo
general                             framework
general/complete                   astring
general/enabled                    boolean     false
restarter                           framework           NONPERSISTENT
restarter/logfile                  astring     /var/svc/log/application-pkg-server:sol11-2sru2-8.log
restarter/start_pid                count       2193
restarter/start_method_timestamp   time        1412052471.802291000
restarter/start_method_waitstatus  integer     256
restarter/contract                 count
restarter/auxiliary_state          astring     disable_request
restarter/next_state               astring     none
restarter/state                    astring     disabled
restarter/state_timestamp          time        1412052841.027547000
restarter_actions                   framework           NONPERSISTENT
restarter_actions/enable_complete  time        1412052471.850069000
restarter_actions/auxiliary_tty    boolean     true
restarter_actions/auxiliary_fmri   astring     svc:/network/ssh:default
restarter_actions/disable_complete time        1412052841.048321000
svc:/application/pkg/server:sol11-2sru2-8> end
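The interactive session above can be condensed into a non-interactive sketch (instance name, port, and inst_root as used in this example; run as root on the repository server, then refresh and enable the instance with svcadm rather than toggling general/enabled by hand):

```shell
# One-pass version of the svccfg dialogue above, plus enabling the service.
svccfg -s pkg/server add sol11-2sru2-8
svccfg -s svc:/application/pkg/server:sol11-2sru2-8 <<'EOF'
addpg pkg application
setprop pkg/port=8082
setprop pkg/inst_root="/IPS/SOL11.2_SRU2.8/repo"
EOF
svcadm refresh svc:/application/pkg/server:sol11-2sru2-8
svcadm enable svc:/application/pkg/server:sol11-2sru2-8
```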
6) Set the publishers.
#pkg set-publisher -G '*' -g http://xx.xx.xx.xx:8082/ solaris
#pkg set-publisher -G '*' -g http://xx.xx.xx.xx:8082/ solaris
7) Check the publishers.
#pkg publisher
PUBLISHER                   TYPE     STATUS P LOCATION
solaris                     origin   online F http://xx.xx.xx.xx:8081/
solaris                     origin   online F http://xx.xx.xx.xx:8082/
8) Update packages.
#pkg update --accept