
How to Build a Secure Cluster


Using Trusted Extensions with Oracle Solaris Cluster 4.1
by Subarna Ganguly

This article discusses how to enable Trusted Extensions on an Oracle Solaris Cluster 4.1 cluster and configure a labeled-branded Exclusive-IP type of zone cluster.
Published February 2013 (updated March 2013)

About Trusted Extensions


Support for Trusted Extensions on Oracle Solaris Cluster 4.1 extends the Trusted Extensions concept of security containers in Oracle Solaris 11
(also known as non-global zones) to zone clusters. These special zone clusters, or Trusted Zone Clusters, are cluster-wide security containers.
Oracle Solaris Trusted Extensions confine applications and data to a specific security label within a non-global
zone. To provide high availability, Oracle Solaris Cluster extends that feature to a clustered set of systems in the
form of labeled zone clusters. The zones (or nodes) in the zone clusters are a brand of their own and are known
as labeled-branded zones.
Oracle Solaris Cluster 4.1 supports both Shared-IP and Exclusive-IP types of labeled-branded zone clusters.
Configuration Assumptions
To enable Trusted Extensions on an Oracle Solaris Cluster 4.1 cluster and configure a labeled-branded
Exclusive-IP type of zone cluster using the procedure provided in this article, you must have the following:


A two-node cluster must already be installed and configured with Oracle Solaris 11.1 and Oracle Solaris Cluster
4.1. For instructions about installing a two-node cluster, see "How to Install and Configure a Two-Node Cluster."
For more details, see the Oracle Solaris Cluster Software Installation Guide.
All repositories that are needed for Oracle Solaris and Oracle Solaris Cluster must be configured on the cluster nodes.
The cluster hardware must be a supported configuration for the Oracle Solaris Cluster 4.1 software.
Each node must have two spare network interfaces or virtual interfaces that are used as private interconnects (also known as transports) and at least
one other network interface or virtual interface that is connected to the public network subnet. These interfaces are used by the zone cluster; a command for listing the available interfaces follows this list.
Shared disk storage must be connected to the two nodes.
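To check which physical links and VNICs are available for these roles, you can list them on each node. This is a quick check only, not part of the configuration itself:

# dladm show-phys
# dladm show-vnic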
Figure 1 illustrates the configuration discussed in this article.
Note: Although not required, it is recommended that you have console access to the nodes during installation, configuration, and administration.

Figure 1
Nomenclature
Global cluster name: test
Global cluster node names: ptest1 and ptest2
Global cluster private interconnects: vnic11 (on net1) and vnic55 (on net5) on each node
Global cluster public subnet: 10.134.98.0
Global cluster public interface: net0
Zone cluster name: TX-zc-xip
Zone cluster node names: vztest1d and vztest2d
Zone cluster private interconnects: vnic1 (on net1) and vnic5 (on net5) on each node
Zone cluster public subnet: 10.134.99.0
Zone cluster public interface: net3
Prerequisites
To create a cluster-wide security container, or in other words, to create a labeled-branded zone cluster, ensure that you meet the following
prerequisites:
Ensure that the cluster nodes are configured and are healthy.
The command in Listing 1 displays the nodes, quorum status, transport information, and other data that reflect the health of the cluster. Per the
configuration shown in Figure 1, the node names are ptest1 and ptest2.
# cluster status

=== Cluster Nodes ===

--- Node Status ---

Node Name                                       Status
---------                                       ------
ptest1                                          Online
ptest2                                          Online

=== Cluster Transport Paths ===

Endpoint1               Endpoint2               Status
---------               ---------               ------
ptest1:vnic55           ptest2:vnic55           Path online
ptest1:vnic11           ptest2:vnic11           Path online

=== Cluster Quorum ===

--- Quorum Votes Summary from (latest node reconfiguration) ---

            Needed   Present   Possible
            ------   -------   --------
            2        3         3

--- Quorum Votes by Node (current status) ---

Node Name       Present       Possible       Status
---------       -------       --------       ------
ptest1          1             1              Online
ptest2          1             1              Online

--- Quorum Votes by Device (current status) ---

Device Name     Present       Possible       Status
-----------     -------       --------       ------
d1              1             1              Online

Listing 1
Ensure that the appropriate Oracle Solaris and Oracle Solaris Cluster versions are installed on each node.
# more /etc/release
                            Oracle Solaris 11.1 SPARC
  Copyright (c) 1983, 2012, Oracle and/or its affiliates.  All rights reserved.
                           Assembled 19 September 2012
# more /etc/cluster/release
              Oracle Solaris Cluster 4.1 0.18.2 for Solaris 11 sparc
  Copyright (c) 2000, 2012, Oracle and/or its affiliates.  All rights reserved.
Ensure that the correct Oracle Solaris and Oracle Solaris Cluster publishers are set.
You can see how the publishers are set using the command shown in the following example.
# pkg publisher
PUBLISHER        TYPE     STATUS P LOCATION
solaris          origin   online F http://solaris-server.xyz.com/solaris11/dev/
ha-cluster       origin   online F http://cluster-server.xyz.com:1234/
Ensure that Network Information Service (NIS) is disabled for the name-service switch and that the NIS services are offline. If a Trusted Extensions (TX) LDAP
server is available, add ldap after files (a sketch for doing so follows Listing 2). If there are no Trusted Extensions LDAP servers in the network, set the switches as shown in Listing 2.
# svcs -a | grep nis
disabled       11:36:39 svc:/network/nis/domain:default
disabled       11:37:11 svc:/network/nis/client:default
# /usr/sbin/svccfg -s svc:/system/name-service/switch listprop config/netmask
config/netmask  astring     "cluster files"
# /usr/sbin/svccfg -s svc:/system/name-service/switch listprop config/host
config/host  astring     "cluster files"
# /usr/sbin/svccfg -s svc:/system/name-service/switch listprop config/automount
config/automount  astring     "files"
# /usr/sbin/svcadm refresh svc:/system/name-service/switch
# nscfg import -f name-service/switch
Listing 2
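If you do need to add ldap after files, you can set the switch property with svccfg and then refresh the service. The following is a minimal sketch, assuming the config/host property and a reachable Trusted Extensions LDAP server; adjust the property and value to match your environment:

# /usr/sbin/svccfg -s svc:/system/name-service/switch \
    setprop config/host = astring: '"cluster files ldap"'
# /usr/sbin/svcadm refresh svc:/system/name-service/switch
# nscfg import -f name-service/switch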
Ensure that the /etc/hosts file has the names and addresses of all hosts that the cluster nodes are going to access, including the following:
Package publishers
Default routers
Any required NFS or application servers
Add the cluster private host names to the /etc/hosts file.
First, check the private host name for each cluster node, as shown in Listing 3 and Listing 4.
On the cluster node ptest1, the node ID is 1. Therefore, the private host name is clusternode1-priv and the IP address is 172.16.2.1.
# more /etc/cluster/nodeid
1
# ipadm show-addr
ADDROBJ                 TYPE     STATE   ADDR
lo0/v4                  static   ok      127.0.0.1/8
sc_ipmp0/static1        static   ok      10.134.98.214/24
sc_ipmp0/zoneadmd.v4    static   ok      10.134.98.219/8
sc_ipmp0/zoneadmd.v4a   static   ok      10.134.98.218/24
vnic11/?                static   ok      172.16.0.65/26
vnic55/?                static   ok      172.16.0.129/26
clprivnet0/?            static   ok      172.16.2.1/24
lo0/v6                  static   ok      ::1/128
Listing 3
On the cluster node ptest2, the node ID is 2. Therefore, the private host name is clusternode2-priv and the IP address is 172.16.2.2.
# more /etc/cluster/nodeid
2
# ipadm show-addr
ADDROBJ                 TYPE     STATE   ADDR
lo0/v4                  static   ok      127.0.0.1/8
sc_ipmp0/static1        static   ok      10.134.98.215/24
sc_ipmp0/zoneadmd.v4    static   ok      10.134.98.221/24
sc_ipmp0/zoneadmd.v4a   static   ok      10.134.98.222/8
vnic11/?                static   ok      172.16.0.66/26
vnic55/?                static   ok      172.16.0.130/26
clprivnet0/?            static   ok      172.16.2.2/24
lo0/v6                  static   ok      ::1/128
Listing 4

Then add the following lines to the /etc/hosts file of each node.
# vi /etc/hosts
172.16.2.1 clusternode1-priv
172.16.2.2 clusternode2-priv
Installing and Enabling Trusted Extensions
On each cluster node, install the Trusted Extensions package.
# pkg install system/trusted/trusted-extensions
Verify the installation of the Trusted Extensions package, as shown in Listing 5.
# pkg info trusted-extensions
Name: system/trusted/trusted-extensions
Summary: Trusted Extensions
Category: Desktop (GNOME)/Trusted Extensions
State: Installed
Publisher: solaris
Version: 0.5.11
Build Release: 5.11
Branch: 0.175.0.0.0.1.0
Packaging Date: Wed Oct 12 14:36:05 2011
Size: 5.45 kB
FMRI: pkg://solaris/system/trusted/trusted-extensions@0.5.11,5.11-0.175.0.0.0.1.0:20111012T143605Z
Listing 5
To enable permission-based access to untrusted systems, such as default routers or NFS servers, allow connections between untrusted hosts and the
cluster nodes.
Make a copy of the /etc/pam.d/other file before making any changes.
# cp /etc/pam.d/other /etc/pam.d/other.orig
Modify the following entries in the /etc/pam.d/other file.
pam_roles: Allows remote login by roles
pam_tsol_account: Allows unlabeled hosts to contact Trusted Extensions systems
# pfedit /etc/pam.d/other
...
account requisite pam_roles.so.1 allow_remote
...
account required pam_tsol_account.so.1 allow_unlabeled
Enable Trusted Extensions on each node and reboot.
# svcadm enable -s labeld
# svcs -a | grep labeld
online 11:53:49 svc:/system/labeld:default
# init 6
Change the hostmodel property to weak on each node, as shown in Listing 6.
When Trusted Extensions is enabled, the hostmodel property for both IPv4 and IPv6 is set to strong.
# ipadm show-prop | more
...
ipv4  hostmodel   rw   strong   strong   weak   strong, src-prio, weak
ipv6  hostmodel   rw   strong   strong   weak   strong, src-prio, weak
...
# ipadm set-prop -p hostmodel=weak ipv4
# ipadm set-prop -p hostmodel=weak ipv6
Listing 6
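To confirm that the change took effect, you can query the property directly (a quick check):

# ipadm show-prop -p hostmodel ipv4
# ipadm show-prop -p hostmodel ipv6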
Add the external hosts that the cluster nodes require as admin_low host types, as shown in Listing 7.
These external hosts, such as package publishers, default routers, and NFS or application servers, are used by the cluster nodes to configure zone
clusters. Access to these hosts is required even though they do not run Trusted Extensions.
On each node, type the following commands. The following example shows the command for the default router.
# netstat -rn

Routing Table: IPv4
  Destination          Gateway              Flags  Ref     Use    Interface
  -------------------- -------------------- ----- ----- -------- ----------
default              10.134.98.1            UG       5      2810  sc_ipmp0
10.134.98.0          10.134.98.214          U        7        97  sc_ipmp0
127.0.0.1            127.0.0.1              UH       2      2058  lo0
172.16.0.64          172.16.0.65            U        3     26390  vnic11
172.16.0.128         172.16.0.129           U        3     25821  vnic55
172.16.2.0           172.16.2.1             U        3       173  clprivnet0

Routing Table: IPv6
  Destination/Mask           Gateway            Flags Ref   Use    If
  -------------------------- ------------------ ----- --- ------- -----
::1                          ::1                UH      2       0  lo0
# tncfg -t admin_low add host=10.134.98.1
# tncfg -t admin_low
tncfg:admin_low> info
    name=admin_low
    host_type=unlabeled
    doi=1
    def_label=ADMIN_LOW
    min_label=ADMIN_LOW
    max_label=ADMIN_HIGH
    host=10.134.98.1/32
tncfg:admin_low> exit
Listing 7
Add the cluster transport interface/adapter and the IP addresses of the private hosts to the cipso host template.
On ptest1, type the command shown in Listing 8:
# ipadm show-addr
ADDROBJ                 TYPE     STATE   ADDR
lo0/v4                  static   ok      127.0.0.1/8
sc_ipmp0/static1        static   ok      10.134.98.214/24
sc_ipmp0/zoneadmd.v4    static   ok      10.134.98.219/8
sc_ipmp0/zoneadmd.v4a   static   ok      10.134.98.218/24
vnic11/?                static   ok      172.16.0.65/26
vnic55/?                static   ok      172.16.0.129/26
clprivnet0/?            static   ok      172.16.2.1/24
lo0/v6                  static   ok      ::1/128
Listing 8

On ptest2, type the command shown in Listing 9:


# ipadm show-addr
ADDROBJ                 TYPE     STATE   ADDR
lo0/v4                  static   ok      127.0.0.1/8
sc_ipmp0/static1        static   ok      10.134.98.215/24
sc_ipmp0/zoneadmd.v4    static   ok      10.134.98.221/24
sc_ipmp0/zoneadmd.v4a   static   ok      10.134.98.222/8
vnic11/?                static   ok      172.16.0.66/26
vnic55/?                static   ok      172.16.0.130/26
clprivnet0/?            static   ok      172.16.2.2/24
lo0/v6                  static   ok      ::1/128
Listing 9

The output in Listing 8 and Listing 9 shows that the following addresses are configured on the transport endpoints:
172.16.0.65, 172.16.0.129, 172.16.0.66, 172.16.0.130
The following addresses for the private host names are hosted on the clprivnet0 interfaces:
172.16.2.1 and 172.16.2.2
On each node, type the following commands:
# tncfg -t cipso
tncfg:cipso> add host=172.16.2.1
tncfg:cipso> add host=172.16.2.2
tncfg:cipso> add host=172.16.0.65
tncfg:cipso> add host=172.16.0.66
tncfg:cipso> add host=172.16.0.129
tncfg:cipso> add host=172.16.0.130
tncfg:cipso> exit

The entries above are stored in the /etc/security/tsol/tnrhdb file.
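You can verify the resulting template with a one-shot tncfg command (a quick check; the output lists the hosts added above):

# tncfg -t cipso info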


Creating and Configuring a Trusted Zone Cluster
To create a Trusted Zone Cluster or labeled-branded zone cluster, use the Zone Cluster wizard of the clsetup utility. The utility is menu-driven and
self-explanatory.
If you do not want to use the Zone Cluster wizard of the clsetup utility, follow the steps below:
On each node, create a VNIC on each of the physical interfaces on which the private interconnects of the global cluster are created. These VNICs serve
as the transport interfaces of the Exclusive-IP (XIP, henceforth) zone cluster.
# dladm create-vnic -l net1 vnic1
# dladm create-vnic -l net5 vnic5
net1 and net5 are the physical interfaces on which the transport links of the global cluster are created.
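To confirm that the VNICs were created over the intended physical links, you can list them (a quick check):

# dladm show-vnic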
On one of the nodes, configure the zone cluster, as shown in Listing 10.
# clzc configure TX-zc-xip
TX-zc-xip: No such zone cluster configured
Use 'create' to begin configuring a new zone cluster.
clzc:TX-zc-xip> create
clzc:TX-zc-xip> set zonepath=/zones/TX-zc-xip
clzc:TX-zc-xip> set brand=labeled
clzc:TX-zc-xip> set enable_priv_net=true
clzc:TX-zc-xip> set ip-type=exclusive
clzc:TX-zc-xip> add node
clzc:TX-zc-xip:node> set physical-host=ptest1
clzc:TX-zc-xip:node> set hostname=vztest1d
clzc:TX-zc-xip:node> add net
clzc:TX-zc-xip:node:net> set physical=net3
clzc:TX-zc-xip:node:net> end
clzc:TX-zc-xip:node> add privnet
clzc:TX-zc-xip:node:privnet> set physical=vnic1
clzc:TX-zc-xip:node:privnet> end
clzc:TX-zc-xip:node> add privnet
clzc:TX-zc-xip:node:privnet> set physical=vnic5
clzc:TX-zc-xip:node:privnet> end

clzc:TX-zc-xip:node> end
clzc:TX-zc-xip> add node
clzc:TX-zc-xip:node> set physical-host=ptest2
clzc:TX-zc-xip:node> set hostname=vztest2d
clzc:TX-zc-xip:node> add net
clzc:TX-zc-xip:node:net> set physical=net3
clzc:TX-zc-xip:node:net> end
clzc:TX-zc-xip:node> add privnet
clzc:TX-zc-xip:node:privnet> set physical=vnic1
clzc:TX-zc-xip:node:privnet> end
clzc:TX-zc-xip:node> add privnet
clzc:TX-zc-xip:node:privnet> set physical=vnic5
clzc:TX-zc-xip:node:privnet> end
clzc:TX-zc-xip:node> end
clzc:TX-zc-xip> verify
clzc:TX-zc-xip> commit
Jul 31 15:43:19 ptest1 Cluster.RGM.rgmdstarter: did_update called
Jul 31 15:43:19 ptest1 Cluster.RGM.rgmdstarter: new cluster TX-zc-xip added
Listing 10
On each node, type the following commands:
# tncfg -z TX-zc-xip
tncfg:TX-zc-xip> set label=PUBLIC
tncfg:TX-zc-xip> exit
From one of the nodes, install the zone cluster TX-zc-xip. You can do this from the node ptest1.
# clzc install TX-zc-xip
On each node, run the txzonemgr utility. Ensure that the DISPLAY environment variable is set.
For example:
# DISPLAY=scc60:2
# export DISPLAY
# txzonemgr
Select the global zone. Then, select the option to configure a per-zone name service.
To perform the sysid configuration on an Exclusive-IP labeled-brand zone cluster, perform the following steps for one zone cluster node at a time:
Boot the zone cluster node.
# zoneadm -z TX-zc-xip boot
Unconfigure the Oracle Solaris instance and reboot the zone.
# zlogin TX-zc-xip
# sysconfig unconfigure
# reboot
The zlogin session terminates during the reboot.
Issue the zlogin command and progress through the interactive screens.
# zlogin -C TX-zc-xip
Open the console connections to the zone cluster nodes. Open new terminal windows for each node and follow through the interactive sysconfig
screens to set up the host name, IP address, LDAP server (if applicable), DNS, and locale. Ensure that you do not enable NIS. When finished, exit
the zone console.
From the global zone, halt the zone cluster node.
# zoneadm -z TX-zc-xip halt
Repeat the preceding steps for the other zone cluster node.
From one of the nodes, boot the zone cluster.
# clzc boot TX-zc-xip
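Before proceeding, you can verify that both zone cluster nodes come up (a quick check):

# clzc status TX-zc-xip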
Log in to the zone cluster nodes and set the root password.
# zlogin TX-zc-xip
# passwd
# exit
Setting Up IPMP
Create IPMP groups on both zone cluster nodes.
On zone cluster node vztest1d, create an IPMP group for the public interface.
# ipadm show-addr
# ipadm delete-addr net3/v4
# ipadm delete-addr net3/v6
# ipadm create-ipmp XIPZCipmp0
# ipadm add-ipmp -i net3 XIPZCipmp0
# ipadm create-addr -T static -a local=10.134.99.192/24 XIPZCipmp0
# ipadm show-if
# ipmpstat -g
On zone cluster node vztest2d, create an IPMP group for the public interface.
# ipadm show-addr
# ipadm delete-addr net3/v4
# ipadm delete-addr net3/v6
# ipadm create-ipmp XIPZCipmp0
# ipadm add-ipmp -i net3 XIPZCipmp0
# ipadm create-addr -T static -a local=10.134.99.195/24 XIPZCipmp0
# ipadm show-if
# ipmpstat -g
On vztest1d, type the following command:

# ipmpstat -g
GROUP        GROUPNAME    STATE   FDT   INTERFACES
XIPZCipmp0   XIPZCipmp0   ok      --    net3

On vztest2d, type the following command:

# ipmpstat -g
GROUP        GROUPNAME    STATE   FDT   INTERFACES
XIPZCipmp0   XIPZCipmp0   ok      --    net3
Add the IP addresses of the transport interfaces and the public network IP addresses from each zone cluster node to the cipso template.
On vztest1d, run the following:
# ipadm show-addr
ADDROBJ          TYPE     STATE   ADDR
lo0/v4           static   ok      127.0.0.1/8
XIPZCipmp0/v4    static   ok      10.134.99.192/24
vnic1/?          static   ok      172.16.4.1/26
vnic5/?          static   ok      172.16.4.65/26
clprivnet1/?     static   ok      172.16.3.193/26
lo0/v6           static   ok      ::1/128

On vztest2d, run the following:
# ipadm show-addr
ADDROBJ          TYPE     STATE   ADDR
lo0/v4           static   ok      127.0.0.1/8
XIPZCipmp0/v4    static   ok      10.134.99.195/24
vnic1/?          static   ok      172.16.4.2/26
vnic5/?          static   ok      172.16.4.66/26
clprivnet1/?     static   ok      172.16.3.194/26
lo0/v6           static   ok      ::1/128
Log in to the global zone nodes. Add the following addresses to the cipso template using the tncfg command.
10.134.99.192
10.134.99.195
172.16.4.1
172.16.4.65
172.16.3.193
172.16.4.2
172.16.4.66
172.16.3.194
In addition, you can add other external hosts to the cipso template. These external hosts must be trusted, and they are the hosts that the
Trusted Zone Cluster nodes contact or communicate with. For two-way communication, also add the public interfaces of the zone cluster nodes to the
cipso templates of those external hosts.
For example:
# tncfg -t cipso
tncfg:cipso> add host=10.134.99.192
The zone cluster is now ready to be configured for a failover application.
Configuring a Failover Application
The procedure for configuring a failover application is similar to that for configuring a regular zone cluster. Note that pxfs file systems cannot be
mounted inside a labeled-branded zone cluster in read-write mode.
The following procedure describes how to create a failover resource group in the Trusted Zone Cluster with an IP address resource and a storage
resource. It uses an example that makes the following assumptions:
Solaris Volume Manager is used to create a file system for the storage resource.
On each global cluster node (ptest1 and ptest2), a non-shared disk slice must be selected to create the local metadb.
In this example, each node has the rpool on the local disk c3t0d0 and another local disk, c3t1d0, on which a slice s4 of size 1 GB is reserved. The
metadb is created on that slice.
Create the metadb on the slice.
# metadb -a -c 3 -f c3t1d0s4
Create a device group (testdg) and file system (/testfs) in the global zone. On one of the nodes, ptest1, select the DID disks that are going to
be added to the device group.
This example uses the following disks:
/dev/did/rdsk/d6
/dev/did/rdsk/d7
# metaset -s testdg -a -h ptest1 ptest2
# metaset -s testdg -a -m ptest1 ptest2
# metaset -s testdg -a /dev/did/rdsk/d6 /dev/did/rdsk/d7
# metainit -s testdg d0 1 1 /dev/did/rdsk/d6s0
# metainit -s testdg d1 1 1 /dev/did/rdsk/d7s0
# metainit -s testdg d10 -m d0
# metattach -s testdg d10 d1
# newfs /dev/md/testdg/rdsk/d10
Select an IP address to use as a logical host name.
This example uses test-5, which is available for use in the zone cluster.
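The logical host name must resolve on the cluster nodes, typically to an address on the zone cluster public subnet (10.134.99.0 in this example). An /etc/hosts entry might look like the following; the address shown is a hypothetical placeholder:

# pfedit /etc/hosts
10.134.99.200   test-5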
Add the selected IP address and the newly created file system to the Trusted Zone Cluster, as shown in Listing 11.
# clzc configure TX-zc-xip
clzc:TX-zc-xip> add net
clzc:TX-zc-xip:net> set address=test-5
clzc:TX-zc-xip:net> verify
clzc:TX-zc-xip:net> end
clzc:TX-zc-xip> commit

clzc:TX-zc-xip> add fs
clzc:TX-zc-xip:fs> set dir=/testfs
clzc:TX-zc-xip:fs> set raw=/dev/md/testdg/rdsk/d10
clzc:TX-zc-xip:fs> set special=/dev/md/testdg/dsk/d10
clzc:TX-zc-xip:fs> set options=rw,logging
clzc:TX-zc-xip:fs> set type=ufs
clzc:TX-zc-xip:fs> info
fs:
dir: /testfs
special: /dev/md/testdg/dsk/d10
raw: /dev/md/testdg/rdsk/d10
type: ufs
options: [rw,logging]
cluster-control: true
clzc:TX-zc-xip:fs> verify
clzc:TX-zc-xip:fs> end
clzc:TX-zc-xip> commit
clzc:TX-zc-xip> exit
Listing 11
Log in to the zone cluster nodes and create the file system mount points.
# zlogin TX-zc-xip
# mkdir /testfs
# reboot
Create a resource group (testrg) with the logical host name resource (test-5) and the storage resource (has-res).
From one of the nodes, log in to the zone cluster.
# zlogin TX-zc-xip
# cd /usr/cluster/bin
# ./clrt register SUNW.HAStoragePlus
# ./clrg create testrg
# ./clrslh create -g testrg -h test-5 test-5
# ./clrs create -g testrg -t SUNW.HAStoragePlus -p FilesystemMountPoints=/testfs has-res
# ./clrg manage testrg
# ./clrg online testrg
# ./clrg status

=== Cluster Resource Groups ===

Group Name       Node Name       Suspended      Status
----------       ---------       ---------      ------
testrg           vztest1d        No             Online
                 vztest2d        No             Offline

# ./clrs status

=== Cluster Resources ===

Resource Name    Node Name    State     Status Message
-------------    ---------    -----     --------------
has-res          vztest1d     Online    Online
                 vztest2d     Offline   Offline

test-5           vztest1d     Online    Online - LogicalHostname online.
                 vztest2d     Offline   Offline

Listing 12

View the resource group switchover to another node, as shown in Listing 13.
# ./clrg switch -n vztest2d testrg
# ./clrs status

=== Cluster Resources ===

Resource Name    Node Name    State     Status Message
-------------    ---------    -----     --------------
has-res          vztest1d     Offline   Offline
                 vztest2d     Online    Online

test-5           vztest1d     Offline   Offline
                 vztest2d     Online    Online - LogicalHostname online.

Listing 13

To this resource group, which contains a network resource and a storage resource, you can add an application resource that is intended for use in a
trusted environment.
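For example, a simple application could be placed under cluster control with the SUNW.gds (Generic Data Service) resource type. The following is a minimal sketch, not part of the tested procedure; the start command and resource name are hypothetical placeholders, and Network_aware is set to false so that no port list is required:

# ./clrt register SUNW.gds
# ./clrs create -g testrg -t SUNW.gds \
    -p Start_command="/opt/myapp/bin/start-myapp" \
    -p Network_aware=false \
    myapp-rs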
See Also
Please refer to the following links for more information:
Download Oracle Solaris Cluster
Access the Oracle Solaris Cluster documentation
See all Oracle Solaris Cluster technical resources
Also see the following resources:
Trusted Extensions Configuration and Administration
Trusted Extensions User's Guide
Official Oracle Solaris blog
About the Author
Subarna Ganguly has worked at Sun and Oracle for over 12 years, first in the Education and Training group, primarily training customers and
internal engineers on Oracle Solaris networking and Oracle Solaris Cluster products, and then as a quality engineer for the Oracle Solaris Cluster
product.
Revision 1.4, 03/11/2013
