1) Download the cluster package, extract it, and install it on both nodes.
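For example, the media might be unpacked and the installer started roughly as follows (the archive name and media directory are assumptions here; adjust them to the file you actually downloaded and to your platform):
# unzip <solaris-cluster-3.3u2-media>.zip   # archive name depends on your download
# cd Solaris_sparc                          # or Solaris_x86 on x86 systems
The transcript below shows the installer run itself: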
# ./installer
Unable to access a usable display on the remote system. Continue in command-line mode?(Y/N)
Y
Welcome to Oracle(R) Solaris Cluster; serious software made simple...
Before you begin, refer to the Release Notes and Installation Guide for the
products that you are installing. This documentation is available at
http://www.oracle.com/technetwork/indexes/documentation/index.html.
You can install any or all of the Services provided by Oracle Solaris
Cluster.
Copyright (c) 2000, 2013, Oracle and/or its affiliates. All rights reserved.
<Press ENTER to Continue>
Installation Type
-----------------
Do you want to install the full set of Oracle Solaris Cluster Products and
Services? (Yes/No) [Yes] {"<" goes back, "!" exits} Yes
Install multilingual package(s) for all selected components [Yes] {"<" goes
back, "!" exits}:
Checking System Status
Available disk space... : Checking .... OK
Memory installed... : Checking .... OK
Swap space installed... : Checking .... OK
Operating system patches... : Checking .... OK
Operating system resources... : Checking .... OK
System ready for installation
Enter 1 to continue [1] {"<" goes back, "!" exits} 1
The installer then prompts for the configuration type:
1. Configure Now - Selectively override defaults or express through
2. Configure Later - Manually configure following installation
Select Type of Configuration [1] {"<" goes back, "!" exits} 2
Ready to Install
----------------
The following components will be installed.
Product: Oracle Solaris Cluster
Uninstall Location: /var/sadm/prod/SUNWentsyssc33u2
Space Required: 236.44 MB
---------------------------------------------------
Java DB
Java DB Server
Java DB Client
Oracle Solaris Cluster 3.3u2
Oracle Solaris Cluster Core
Oracle Solaris Cluster Manager
Oracle Solaris Cluster Agents 3.3u2
Oracle Solaris Cluster HA for Java(TM) System Application Server
Oracle Solaris Cluster HA for Java(TM) System Message Queue
Oracle Solaris Cluster HA for Java(TM) System Messaging Server
Oracle Solaris Cluster HA for Java(TM) System Calendar Server
Oracle Solaris Cluster HA for Java(TM) System Directory Server
Oracle Solaris Cluster HA for Java(TM) System Application Server EE (HADB)
Oracle Solaris Cluster HA for Instant Messaging
Oracle Solaris Cluster HA/Scalable for Java(TM) System Web Server
Oracle Solaris Cluster HA for Apache Tomcat
Oracle Solaris Cluster HA for Apache
Oracle Solaris Cluster HA for DHCP
Oracle Solaris Cluster HA for DNS
Oracle Solaris Cluster HA for MySQL
Oracle Solaris Cluster HA for Sun N1 Service Provisioning System
Oracle Solaris Cluster HA for NFS
Oracle Solaris Cluster HA for Oracle
Oracle Solaris Cluster HA for Agfa IMPAX
Oracle Solaris Cluster HA for Samba
Oracle Solaris Cluster HA for Sun N1 Grid Engine
Oracle Solaris Cluster HA for Solaris Containers
Oracle Solaris Cluster Support for Oracle RAC
Oracle Solaris Cluster HA for Oracle E-Business Suite
Oracle Solaris Cluster HA for SAP liveCache
Oracle Solaris Cluster HA for WebSphere Message Broker
Oracle Solaris Cluster HA for WebSphere MQ
Oracle Solaris Cluster HA for Oracle 9iAS
Oracle Solaris Cluster HA for SAPDB
Oracle Solaris Cluster HA for SAP Web Application Server
Oracle Solaris Cluster HA for SAP
Oracle Solaris Cluster HA for PostgreSQL
Oracle Solaris Cluster HA for Sybase ASE
Oracle Solaris Cluster HA for BEA WebLogic Server
Oracle Solaris Cluster HA for Siebel
Oracle Solaris Cluster HA for Kerberos
Oracle Solaris Cluster HA for Swift Alliance Access
Oracle Solaris Cluster HA for Swift Alliance Gateway
Oracle Solaris Cluster HA for Informix
Oracle Solaris Cluster HA for xVM Server SPARC Guest Domains
Oracle Solaris Cluster HA for PeopleSoft Enterprise
Oracle Solaris Cluster HA for Oracle Business Intelligence Enterprise
Edition
Oracle Solaris Cluster HA for TimesTen
Oracle Solaris Cluster HA for Oracle External Proxy
Oracle Solaris Cluster HA for Oracle Web Tier Agent
Oracle Solaris Cluster HA for SAP NetWeaver
Oracle Solaris Cluster Geographic Edition 3.3u2
Oracle Solaris Cluster Geographic Edition Core Components
Oracle Solaris Cluster Geographic Edition Manager
Sun StorEdge Availability Suite Data Replication Support
Hitachi Truecopy Data Replication Support
SRDF Data Replication Support
Oracle Data Guard Data Replication Support
Oracle Solaris Cluster Geographic Edition Script-Based Plugin Replica
Support
Oracle Solaris Cluster Geographic Edition Sun ZFS Storage Appliance
Replication
Quorum Server
Java(TM) System High Availability Session Store 4.4.3
1. Install
2. Start Over
3. Exit Installation
What would you like to do [1] {"<" goes back, "!" exits}?
Oracle Solaris Cluster
|-1%--------------25%-----------------50%-----------------75%--------------100%|
Installation Complete
Software installation has completed successfully. You can view the installation
summary and log by using the choices below. Summary and log files are available
in /var/sadm/install/logs/.
Your next step is to perform the postinstallation configuration and
verification tasks documented in the Postinstallation Configuration and Startup
Chapter of the Java(TM) Enterprise System Installation Guide. See:
http://download.oracle.com/docs/cd/E19528-01/820-2827.
Enter 1 to view installation summary and Enter 2 to view installation logs
[1] {"!" exits} !
In order to notify you of potential updates, we need to confirm an internet connection. Do you want to proceed [Y/N] : Y
An internet connection was not detected. If you are using a Proxy please enter it now.
Enter HTTP Proxy Host : ^C#
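Before patching, it is worth confirming that the core packages landed on both nodes, for example:
# pkginfo | grep SUNWsc                  # list the installed cluster packages
# /usr/cluster/bin/scinstall -pv         # print the cluster release and package versions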
2) Apply the latest patch for the cluster software on both nodes.
# patchadd xxxxxx-xx
Validating patches...
Loading patches installed on the system...
Done!
Loading patches requested to install.
Done!
Checking patches that you specified for installation.
Done!
Approved patches will be installed in this order:
xxxxxx-xx
Checking installed patches...
Executing prepatch script...
Installing patch packages...
Patch xxxxxx-xx has been successfully installed.
See /var/sadm/patch/145333-27/log for details
Executing postpatch script...
Patch packages installed:
SUNWcvmr
SUNWsccomu
SUNWsccomzu
SUNWscderby
SUNWscdev
SUNWscgds
SUNWscmasa
SUNWscmasar
SUNWscmasasen
SUNWscmasau
SUNWscmasazu
SUNWscmautil
SUNWscmd
SUNWscr
SUNWscrtlh
SUNWscsal
SUNWscsmf
SUNWscspmu
SUNWsctelemetry
SUNWscu
SUNWscucm
SUNWsczr
SUNWsczu
SUNWudlmr
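To confirm the patch revision on each node (the patch ID is masked as xxxxxx-xx above), a quick check is:
# showrev -p | grep xxxxxx               # show the installed revision of the patch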
3) Post-installation: configure the cluster with scinstall.
# scinstall
*** Main Menu ***
Please select from one of the following (*) options:
* 1) Create a new cluster or add a cluster node
2) Configure a cluster to be JumpStarted from this install server
3) Manage a dual-partition upgrade
4) Upgrade this cluster node
* 5) Print release information for this cluster node
* ?) Help with menu options
* q) Quit
Option: 1
*** New Cluster and Cluster Node Menu ***
Please select from any one of the following options:
1) Create a new cluster
2) Create just the first node of a new cluster on this machine
3) Add this machine as a node in an existing cluster
?) Help with menu options
q) Return to the Main Menu
Option: 1
*** Create a New Cluster ***
This option creates and configures a new cluster.
You must use the Oracle Solaris Cluster installation media to install
the Oracle Solaris Cluster framework software on each machine in the
new cluster before you select this option.
If the "remote configuration" option is unselected from the Oracle
Solaris Cluster installer when you install the Oracle Solaris Cluster
framework on any of the new nodes, then you must configure either the
remote shell (see rsh(1)) or the secure shell (see ssh(1)) before you
select this option. If rsh or ssh is used, you must enable root access
to all of the new member nodes from this node.
Press Control-D at any time to return to the Main Menu.
Do you want to continue (yes/no) [yes]?
>>> Typical or Custom Mode <<<
This tool supports two modes of operation, Typical mode and Custom
mode. For most clusters, you can use Typical mode. However, you might
need to select the Custom mode option if not all of the Typical mode
defaults can be applied to your cluster.
For more information about the differences between Typical and Custom
modes, select the Help option from the menu.
Please select from one of the following options:
1) Typical
2) Custom
?) Help
q) Return to the Main Menu
Option [1]: 2
>>> Cluster Name <<<
Each cluster has a name assigned to it. The name can be made up of any
characters other than whitespace. Each cluster name should be unique
within the namespace of your enterprise.
What is the name of the cluster you want to establish [TestCluster]? TestCluster
>>> Cluster Nodes <<<
This Oracle Solaris Cluster release supports a total of up to 16
nodes.
List the names of the other nodes planned for the initial cluster
configuration. List one node name per line. When finished, type
Control-D:
Node name: NODEA
Node name: NODEB
Node name (Control-D to finish): ^D
This is the complete list of nodes:
NODEA
NODEB
Is it correct (yes/no) [yes]?
Attempting to contact "NODEB" ... done
Searching for a remote configuration method ... done
The Oracle Solaris Cluster framework is able to complete the
configuration process without remote shell access.
>>> Authenticating Requests to Add Nodes <<<
Once the first node establishes itself as a single node cluster, other
nodes attempting to add themselves to the cluster configuration must
be found on the list of nodes you just provided. You can modify this
list by using claccess(1CL) or other tools once the cluster has been
established.
By default, nodes are not securely authenticated as they attempt to
add themselves to the cluster configuration. This is generally
considered adequate, since nodes which are not physically connected to
the private cluster interconnect will never be able to actually join
the cluster. However, DES authentication is available. If DES
authentication is selected, you must configure all necessary
encryption keys before any node will be allowed to join the cluster
(see keyserv(1M), publickey(4)).
Do you need to use DES authentication (yes/no) [no]?
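As the screen notes, the node authentication list can be adjusted later with claccess(1CL). A short sketch, using a hypothetical extra node name NODEC:
# claccess show                          # display the current add-node authentication settings
# claccess allow -h NODEC                # permit NODEC to add itself to the cluster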
>>> Minimum Number of Private Networks <<<
Each cluster is typically configured with at least two private
networks. Configuring a cluster with just one private interconnect
provides less availability and will require the cluster to spend more
time in automatic recovery if that private interconnect fails.
Should this cluster use at least two private networks (yes/no) [yes]?
>>> Point-to-Point Cables <<<
The two nodes of a two-node cluster may use a directly-connected
interconnect. That is, no cluster switches are configured. However,
when there are greater than two nodes, this interactive form of
scinstall assumes that there will be exactly one switch for each
private network.
Does this two-node cluster use switches (yes/no) [no]?
>>> Cluster Transport Adapters and Cables <<<
Transport adapters are the adapters that attach to the private cluster
interconnect.
Select the first cluster transport adapter:
1) igb0
2) igb1
3) igb2
4) igb3
5) ixgbe2
6) ixgbe3
n) Next >
Option: 9
Adapter "ixgbe2" is an Ethernet adapter.
Searching for any unexpected network traffic on "ixgbe2" ... done
Verification completed. No traffic was detected over a 10 second
sample period.
The "dlpi" transport type will be set for this cluster.
Name of adapter (physical or virtual) on "NODEB" to which "ixgbe2" is connected?
Invalid adapter name.
Name of adapter (physical or virtual) on "NODEB" to which "ixgbe2" is connected?
Select the second cluster transport adapter:
1) igb0
2) igb1
3) igb2
4) igb3
5) ixgbe2
6) ixgbe3
n) Next >
Option: 10
Adapter "ixgbe3" is an Ethernet adapter.
Searching for any unexpected network traffic on "ixgbe3" ... done
Verification completed. No traffic was detected over a 10 second
sample period.
The "dlpi" transport type will be set for this cluster.
Name of adapter (physical or virtual) on "NODEB" to which "ixgbe3" is connected?
>>> Network Address for the Cluster Transport <<<
The cluster transport uses a default network address of 172.16.0.0. If
this IP address is already in use elsewhere within your enterprise,
specify another address from the range of recommended private
addresses (see RFC 1918 for details).
The default netmask is 255.255.240.0. You can select another netmask,
as long as it minimally masks all bits that are given in the network
address.
The default private netmask and network address result in an IP
address range that supports a cluster with a maximum of 32 nodes, 10
private networks, and 12 virtual clusters.
Is it okay to accept the default network address (yes/no) [yes]?
Is it okay to accept the default netmask (yes/no) [yes]?
Plumbing network address 172.16.0.0 on adapter ixgbe2 >> NOT DUPLICATE ... done
Plumbing network address 172.16.0.0 on adapter ixgbe3 >> NOT DUPLICATE ... done
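Once the cluster is running, the private-network settings accepted here can be reviewed with:
# cluster show-netprops                  # private network address, netmask, max nodes and private networks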
>>> Set Global Fencing <<<
Fencing is a mechanism that a cluster uses to protect data integrity
when the cluster interconnect between nodes is lost. By default,
fencing is turned on for global fencing, and each disk uses the global
fencing setting. This screen allows you to turn off the global
fencing.
Most of the time, leave fencing turned on. However, turn off fencing
when at least one of the following conditions is true: 1) Your shared
storage devices, such as Serial Advanced Technology Attachment (SATA)
disks, do not support SCSI; 2) You want to allow systems outside your
cluster to access storage devices attached to your cluster; 3) Oracle
Corporation has not qualified the SCSI persistent group reservation
(PGR) support for your shared storage devices.
If you choose to turn off global fencing now, after your cluster
starts you can still use the cluster(1CL) command to turn on global
fencing.
Do you want to turn off global fencing (yes/no) [no]?
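As the screen notes, global fencing can also be changed after the cluster is up with the cluster(1CL) command. A sketch, with property values to be checked against your release's cluster(1CL) man page:
# cluster show -t global | grep global_fencing   # check the current setting
# cluster set -p global_fencing=nofencing        # turn global fencing off
# cluster set -p global_fencing=pathcount        # turn it back on (path-count-based fencing)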
>>> Resource Security Configuration <<<
The execution of a cluster resource is controlled by the setting of a
global cluster property called resource_security. When the cluster is
booted, this property is set to SECURE.
Resource methods such as Start and Validate always run as root. If
resource_security is set to SECURE and the resource method executable
file has non-root ownership or group or world write permissions,
execution of the resource method fails at run time and an error is
returned.
Resource types that declare the Application_user resource property
perform additional checks on the executable file ownership and
permissions of application programs. If the resource_security property
is set to SECURE and the application program executable is not owned
by root or by the configured Application_user of that resource, or the
executable has group or world write permissions, execution of the
application program fails at run time and an error is returned.
Resource types that declare the Application_user property execute
application programs according to the setting of the resource_security
cluster property. If resource_security is set to SECURE, the
application user will be the value of the Application_user resource
property; however, if there is no Application_user property, or it is
unset or empty, the application user will be the owner of the
application program executable file. The resource will attempt to
execute the application program as the application user; however a
non-root process cannot execute as root (regardless of property
settings and file ownership) and will execute programs as the
effective non-root user ID.
You can use the "clsetup" command to change the value of the
resource_security property after the cluster is running.
Press Enter to continue:
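A sketch of how the resource_security property could be inspected and changed later; clsetup is the documented interactive path, and the direct cluster set form shown here is an assumption to verify against your release's cluster(1CL) man page:
# cluster show -t global | grep resource_security   # check the current value
# cluster set -p resource_security=SECURE           # assumed direct form; clsetup offers the same change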
>>> Quorum Configuration <<<
Every two-node cluster requires at least one quorum device. By
default, scinstall selects and configures a shared disk quorum device
for you.
This screen allows you to disable the automatic selection and
configuration of a quorum device.
You have chosen to turn on the global fencing. If your shared storage
devices do not support SCSI, such as Serial Advanced Technology
Attachment (SATA) disks, or if your shared disks do not support
SCSI-2, you must disable this feature.
If you disable automatic quorum device selection now, or if you intend
to use a quorum device that is not a shared disk, you must instead use
clsetup(1M) to manually configure quorum once both nodes have joined
the cluster for the first time.
Do you want to disable automatic quorum device selection (yes/no) [no]? yes
>>> Global Devices File System <<<
Each node in the cluster must have a local file system mounted on
/global/.devices/node@<nodeID> before it can successfully participate
as a cluster member. Since the "nodeID" is not assigned until
scinstall is run, scinstall will set this up for you.
You must supply the name of either an already-mounted file system or a
raw disk partition which scinstall can use to create the global
devices file system. This file system or partition should be at least
512 MB in size.
Alternatively, you can use a loopback file (lofi), with a new file
system, and mount it on /global/.devices/node@<nodeid>.
If an already-mounted file system is used, the file system must be
empty. If a raw disk partition is used, a new file system will be
created for you.
If the lofi method is used, scinstall creates a new 100 MB file system
from a lofi device by using the file /.globaldevices. The lofi method
is typically preferred, since it does not require the allocation of a
dedicated disk slice.
The default is to use lofi.
For node "NODEA",
Is it okay to use this default (yes/no) [yes]?
For node "NODEB",
Is it okay to use this default (yes/no) [yes]?
Configuring global device using lofi on NODEB: done
Is it okay to create the new cluster (yes/no) [yes]?
During the cluster creation process, cluster check is run on each of
the new cluster nodes. If cluster check detects problems, you can
either interrupt the process or check the log files after the cluster
has been established.
Interrupt cluster creation for cluster check errors (yes/no) [no]?
Cluster Creation
Log file - /var/cluster/logs/install/scinstall.log.1896
Started cluster check on "NODEA".
Started cluster check on "NODEB".
cluster check failed for "NODEA".
cluster check failed for "NODEB".
The cluster check command failed on both of the nodes.
Refer to the log file for details.
The name of the log file is /var/cluster/logs/install/scinstall.log.1896.
Configuring "NODEB" ... done
Rebooting "NODEB" ... done
Configuring "NODEA" ... done
Rebooting "NODEA" ...
Log file - /var/cluster/logs/install/scinstall.log.1896
Rebooting ...
updating /platform/sun4v/boot_archive
NOTE:
scinstall automatically reboots both servers.
Because automatic quorum device selection was disabled earlier, add the quorum device manually:
# clq add <shared device 1G LUN>
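A minimal sketch of that manual quorum step, assuming the shared 1 GB LUN appears as DID device d4 (check the cldevice output for the real name on your systems):
# cldevice list -v                       # map DID devices to the shared LUN
# clquorum add d4                        # add the shared disk as a quorum device
# clquorum status                        # confirm the quorum device is online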
Key files and directories:
/usr/cluster/bin
/var/cluster/logs
/etc/hosts
/etc/vfstab
/global/.devices/node@1 & ...
# cd /etc/cluster
# ls
ccr        locale     qd_userd_door   remoteconfiguration   syncsa.conf
clpl       nodeid     ql              security              vp
eventlog   original   release         solaris10.version     zone_cluster
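After both nodes have rebooted and the quorum device is in place, the cluster state can be verified from either node:
# clnode status                          # both nodes should report Online
# clquorum status                        # node and disk quorum votes
# cluster status                         # summary of nodes, quorum, transport, and device groups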