Installing Oracle Database 12c RAC on OEL 6.4: An Illustrated Step-by-Step Guide

Published by: 风哥 | Category: ITPUX技术网 | Updated: 2022-02-12 | Views: 3406


Oracle Database 12c has been out for about a week, and over the past few days I have tried single-instance, Oracle Restart, and RAC installations, with a few misadventures along the way: the Oracle 12c Restart install took four hours and ended with my laptop freezing, and the RAC install failed when I tried to use the HAIP feature. Oracle 12c RAC also introduces the Flex Cluster concept, which I have not yet gotten working.

Below is a walkthrough of a traditional (non-Flex) Oracle 12c RAC installation.

Environment Overview

OS: Oracle Enterprise Linux 6.4 (RAC nodes), Oracle Enterprise Linux 5.8 (DNS server), Openfiler 2.3 (SAN storage)

DB: GI and Database 12.1.0.1

Required Media

linuxamd64_12c_database_1of2.zip

linuxamd64_12c_database_2of2.zip

linuxamd64_12c_grid_1of2.zip

linuxamd64_12c_grid_2of2.zip

-- Only the Oracle media are listed here; prepare the operating system and other software yourself.

Operating System Information

RAC node servers:

(node1 shown as an example)

[root@12crac1 ~]# cat /etc/redhat-release

Red Hat Enterprise Linux Server release 6.4 (Santiago)

[root@12crac1 ~]# uname -a

Linux 12crac1.linuxidc.com 2.6.39-400.17.1.el6uek.x86_64 #1 SMP Fri Feb 22 18:16:18 PST 2013 x86_64 x86_64 x86_64 GNU/Linux

[root@12crac1 ~]# grep MemTotal /proc/meminfo

MemTotal: 2051748 kB

[root@12crac1 ~]# grep SwapTotal /proc/meminfo

SwapTotal: 5119996 kB

[root@12crac1 ~]# df -h

Filesystem Size Used Avail Use% Mounted on

/dev/sda3 45G 16G 27G 38% /

tmpfs 2.0G 652M 1.4G 32% /dev/shm

/dev/sda1 194M 50M 135M 27% /boot

Network configuration:

Note: as the output below shows, each node has five NICs. eth0 is the public interface, and eth1 through eth4 were added as private interfaces with the intention of using the HAIP feature.

During the installation experiments, however, HAIP left the ASM instance on node 2 unable to start, so in the end only eth1 was used.

The HAIP failure may well be a bug; it still needs careful troubleshooting.

(Node 1)

[root@12crac1 ~]# ifconfig

eth0 Link encap:Ethernet HWaddr 00:0C:29:75:36:ED

inet addr:192.168.1.150 Bcast:192.168.1.255 Mask:255.255.255.0

inet6 addr: fe80::20c:29ff:fe75:36ed/64 Scope:Link

UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1

RX packets:64 errors:0 dropped:0 overruns:0 frame:0

TX packets:56 errors:0 dropped:0 overruns:0 carrier:0

collisions:0 txqueuelen:1000

RX bytes:7014 (6.8 KiB) TX bytes:6193 (6.0 KiB)

eth1 Link encap:Ethernet HWaddr 00:0C:29:75:36:F7

inet addr:192.168.80.150 Bcast:192.168.80.255 Mask:255.255.255.0

inet6 addr: fe80::20c:29ff:fe75:36f7/64 Scope:Link

UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1

RX packets:12 errors:0 dropped:0 overruns:0 frame:0

TX packets:12 errors:0 dropped:0 overruns:0 carrier:0

collisions:0 txqueuelen:1000

RX bytes:720 (720.0 b) TX bytes:720 (720.0 b)

eth2 Link encap:Ethernet HWaddr 00:0C:29:75:36:01

inet addr:192.168.80.151 Bcast:192.168.80.255 Mask:255.255.255.0

inet6 addr: fe80::20c:29ff:fe75:3601/64 Scope:Link

UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1

RX packets:9 errors:0 dropped:0 overruns:0 frame:0

TX packets:10 errors:0 dropped:0 overruns:0 carrier:0

collisions:0 txqueuelen:1000

RX bytes:540 (540.0 b) TX bytes:636 (636.0 b)

eth3 Link encap:Ethernet HWaddr 00:0C:29:75:36:0B

inet addr:192.168.80.152 Bcast:192.168.80.255 Mask:255.255.255.0

inet6 addr: fe80::20c:29ff:fe75:360b/64 Scope:Link

UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1

RX packets:5 errors:0 dropped:0 overruns:0 frame:0

TX packets:10 errors:0 dropped:0 overruns:0 carrier:0

collisions:0 txqueuelen:1000

RX bytes:300 (300.0 b) TX bytes:636 (636.0 b)

eth4 Link encap:Ethernet HWaddr 00:0C:29:75:36:15

inet addr:192.168.80.153 Bcast:192.168.80.255 Mask:255.255.255.0

inet6 addr: fe80::20c:29ff:fe75:3615/64 Scope:Link

UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1

RX packets:1 errors:0 dropped:0 overruns:0 frame:0

TX packets:9 errors:0 dropped:0 overruns:0 carrier:0

collisions:0 txqueuelen:1000

RX bytes:60 (60.0 b) TX bytes:566 (566.0 b)

lo Link encap:Local Loopback

inet addr:127.0.0.1 Mask:255.0.0.0

inet6 addr: ::1/128 Scope:Host

UP LOOPBACK RUNNING MTU:16436 Metric:1

RX packets:0 errors:0 dropped:0 overruns:0 frame:0

TX packets:0 errors:0 dropped:0 overruns:0 carrier:0

collisions:0 txqueuelen:0

RX bytes:0 (0.0 b) TX bytes:0 (0.0 b)

(Node 2)

[root@12crac2 ~]# ifconfig

eth0 Link encap:Ethernet HWaddr 00:0C:29:A1:81:7C

inet addr:192.168.1.151 Bcast:192.168.1.255 Mask:255.255.255.0

inet6 addr: fe80::20c:29ff:fea1:817c/64 Scope:Link

UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1

RX packets:126 errors:0 dropped:0 overruns:0 frame:0

TX packets:62 errors:0 dropped:0 overruns:0 carrier:0

collisions:0 txqueuelen:1000

RX bytes:10466 (10.2 KiB) TX bytes:6193 (6.0 KiB)

eth1 Link encap:Ethernet HWaddr 00:0C:29:A1:81:86

inet addr:192.168.80.154 Bcast:192.168.80.255 Mask:255.255.255.0

inet6 addr: fe80::20c:29ff:fea1:8186/64 Scope:Link

UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1

RX packets:23 errors:0 dropped:0 overruns:0 frame:0

TX packets:27 errors:0 dropped:0 overruns:0 carrier:0

collisions:0 txqueuelen:1000

RX bytes:2081 (2.0 KiB) TX bytes:1622 (1.5 KiB)

eth2 Link encap:Ethernet HWaddr 00:0C:29:A1:81:90

inet addr:192.168.80.155 Bcast:192.168.80.255 Mask:255.255.255.0

inet6 addr: fe80::20c:29ff:fea1:8190/64 Scope:Link

UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1

RX packets:1 errors:0 dropped:0 overruns:0 frame:0

TX packets:10 errors:0 dropped:0 overruns:0 carrier:0

collisions:0 txqueuelen:1000

RX bytes:60 (60.0 b) TX bytes:636 (636.0 b)

eth3 Link encap:Ethernet HWaddr 00:0C:29:A1:81:9A

inet addr:192.168.80.156 Bcast:192.168.80.255 Mask:255.255.255.0

UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1

RX packets:0 errors:0 dropped:0 overruns:0 frame:0

TX packets:0 errors:0 dropped:0 overruns:0 carrier:0

collisions:0 txqueuelen:1000

RX bytes:0 (0.0 b) TX bytes:0 (0.0 b)

eth4 Link encap:Ethernet HWaddr 00:0C:29:A1:81:A4

inet addr:192.168.80.157 Bcast:192.168.80.255 Mask:255.255.255.0

UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1

RX packets:0 errors:0 dropped:0 overruns:0 frame:0

TX packets:0 errors:0 dropped:0 overruns:0 carrier:0

collisions:0 txqueuelen:1000

RX bytes:0 (0.0 b) TX bytes:0 (0.0 b)

lo Link encap:Local Loopback

inet addr:127.0.0.1 Mask:255.0.0.0

inet6 addr: ::1/128 Scope:Host

UP LOOPBACK RUNNING MTU:16436 Metric:1

RX packets:0 errors:0 dropped:0 overruns:0 frame:0

TX packets:0 errors:0 dropped:0 overruns:0 carrier:0

collisions:0 txqueuelen:0

RX bytes:0 (0.0 b) TX bytes:0 (0.0 b)

Confirm that the firewall and SELinux are disabled

(Node 1 shown; do the same on both nodes)

[root@12crac1 ~]# iptables -L

Chain INPUT (policy ACCEPT)

target prot opt source destination

Chain FORWARD (policy ACCEPT)

target prot opt source destination

Chain OUTPUT (policy ACCEPT)

target prot opt source destination

If the firewall is not yet disabled, disable it as follows:

[root@12crac1 ~]# service iptables stop

[root@12crac1 ~]# chkconfig iptables off

[root@12crac1 ~]# getenforce

Disabled

If SELinux is not yet disabled, change it as follows:

[root@12crac1 ~]# cat /etc/selinux/config

-- change the line to SELINUX=disabled

# This file controls the state of SELinux on the system.

# SELINUX= can take one of these three values:

# enforcing - SELinux security policy is enforced.

# permissive - SELinux prints warnings instead of enforcing.

# disabled - No SELinux policy is loaded.

SELINUX=disabled

# SELINUXTYPE= can take one of these two values:

# targeted - Targeted processes are protected,

# mls - Multi Level Security protection.

SELINUXTYPE=targeted
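The SELinux edit can also be scripted with sed. A minimal sketch, tried on a scratch file first; the same expression works against /etc/selinux/config as root:

```shell
# Try the sed expression on a scratch copy before touching the real file.
cfg=$(mktemp)
printf 'SELINUX=enforcing\nSELINUXTYPE=targeted\n' > "$cfg"
sed -i 's/^SELINUX=.*/SELINUX=disabled/' "$cfg"
grep '^SELINUX=' "$cfg"        # SELINUX=disabled
rm -f "$cfg"
# For the real file (requires root; takes full effect after a reboot):
#   sed -i.bak 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
#   setenforce 0
```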

DNS server:

[root@dns12c ~]# cat /etc/redhat-release

Red Hat Enterprise Linux Server release 5.8 (Tikanga)

[root@dns12c ~]# uname -a

Linux dns12c.linuxidc.com 2.6.32-300.10.1.el5uek #1 SMP Wed Feb 22 17:37:40 EST 2012 x86_64 x86_64 x86_64 GNU/Linux

[root@dns12c ~]# grep MemTotal /proc/meminfo

MemTotal: 494596 kB

[root@dns12c ~]# grep SwapTotal /proc/meminfo

SwapTotal: 3277252 kB

[root@dns12c ~]# ifconfig

eth0 Link encap:Ethernet HWaddr 00:0C:29:7A:FD:82

inet addr:192.168.1.158 Bcast:192.168.1.255 Mask:255.255.255.0

UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1

RX packets:114941 errors:0 dropped:0 overruns:0 frame:0

TX packets:6985 errors:0 dropped:0 overruns:0 carrier:0

collisions:0 txqueuelen:1000

RX bytes:11015974 (10.5 MiB) TX bytes:1151788 (1.0 MiB)

lo Link encap:Local Loopback

inet addr:127.0.0.1 Mask:255.0.0.0

UP LOOPBACK RUNNING MTU:16436 Metric:1

RX packets:104 errors:0 dropped:0 overruns:0 frame:0

TX packets:104 errors:0 dropped:0 overruns:0 carrier:0

collisions:0 txqueuelen:0

RX bytes:9531 (9.3 KiB) TX bytes:9531 (9.3 KiB)

iptables and SELinux are disabled on this host as well.

SAN server:

Deployed with Openfiler 2.3; three LUNs are presented here: one 5 GB and two 8 GB.

Deployment and Installation

1. Configure the DNS service

Perform the following steps on the DNS server:

Install the three bind packages:

[root@dns12c ~]# rpm -ivh /mnt/Server/bind-9.3.6-20.P1.el5.x86_64.rpm

[root@dns12c ~]# rpm -ivh /mnt/Server/bind-chroot-9.3.6-20.P1.el5.x86_64.rpm

[root@dns12c ~]# rpm -ivh /mnt/Server/caching-nameserver-9.3.6-20.P1.el5.x86_64.rpm

Configure the master zones:

[root@dns12c ~]# cd /var/named/chroot/etc

[root@dns12c etc]# cp -p named.caching-nameserver.conf named.conf

[root@dns12c etc]# cat named.conf

options {

listen-on port 53 { any; };

listen-on-v6 port 53 { ::1; };

directory "/var/named";

dump-file "/var/named/data/cache_dump.db";

statistics-file "/var/named/data/named_stats.txt";

memstatistics-file "/var/named/data/named_mem_stats.txt";

// Those options should be used carefully because they disable port

// randomization

// query-source port 53;

// query-source-v6 port 53;

allow-query { any; };

allow-query-cache { any; };

};

logging {

channel default_debug {

file "data/named.run";

severity dynamic;

};

};

view any_resolver {

match-clients { any; };

match-destinations { any; };

recursion yes;

include "/etc/named.zones";

};

[root@dns12c etc]# cp -p named.rfc1912.zones named.zones

[root@dns12c etc]# cat named.zones

zone "linuxidc.com" IN {

type master;

file "linuxidc.com.zone";

allow-update { none; };

};

zone "1.168.192.in-addr.arpa" IN {

type master;

file "1.168.192.local";

allow-update { none; };

};

[root@dns12c ~]# cd /var/named/chroot/var/named

[root@dns12c named]# cp -p named.zero linuxidc.com.zone

[root@dns12c named]# cp -p named.local 1.168.192.local

[root@dns12c named]# cat linuxidc.com.zone

$TTL 86400

@ IN SOA dns.linuxidc.com. root.linuxidc.com. (

42 ; serial (d. adams)

3H ; refresh

15M ; retry

1W ; expiry

1D ) ; minimum

IN NS dns.linuxidc.com.

scan IN A 192.168.1.154

scan IN A 192.168.1.155

scan IN A 192.168.1.156

gns IN A 192.168.1.157

12crac1 IN A 192.168.1.150

12crac2 IN A 192.168.1.151

[root@dns12c named]# cat 1.168.192.local

$TTL 86400

@ IN SOA dns.linuxidc.com. root.linuxidc.com. (

1997022700 ; Serial

28800 ; Refresh

14400 ; Retry

3600000 ; Expire

86400 ) ; Minimum

IN NS dns.linuxidc.com.

154 IN PTR scan.linuxidc.com.

155 IN PTR scan.linuxidc.com.

156 IN PTR scan.linuxidc.com.

157 IN PTR gns.linuxidc.com.

Restart named, then verify the records with nslookup or dig.
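It is also worth sanity-checking the zone file itself: the SCAN name must carry exactly three A records for client-side round-robin. A hedged sketch, shown against an inline copy of the zone data above; on the DNS server, point $zone at /var/named/chroot/var/named/linuxidc.com.zone instead:

```shell
# Count the A records for the "scan" name; exactly three are expected.
zone=$(mktemp)
cat > "$zone" <<'EOF'
scan IN A 192.168.1.154
scan IN A 192.168.1.155
scan IN A 192.168.1.156
gns  IN A 192.168.1.157
EOF
n=$(awk '$1 == "scan" && $3 == "A" {c++} END {print c+0}' "$zone")
echo "scan A records: $n"    # 3
rm -f "$zone"
```

On a live server, `dig +short scan.linuxidc.com @192.168.1.158` should cycle through the same three addresses.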

Point both RAC nodes at the DNS server

(Node 1 shown; do the same on both nodes)

[root@12crac1 ~]# cat /etc/resolv.conf

#domain localdomain

search localdomain

nameserver 192.168.1.158

Test name resolution:

[root@12crac1 ~]# nslookup scan.linuxidc.com

Server: 192.168.1.158

Address: 192.168.1.158#53

Name: scan.linuxidc.com

Address: 192.168.1.156

Name: scan.linuxidc.com

Address: 192.168.1.154

Name: scan.linuxidc.com

Address: 192.168.1.155

[root@12crac1 ~]# nslookup 192.168.1.154

Server: 192.168.1.158

Address: 192.168.1.158#53

154.1.168.192.in-addr.arpa name = scan.linuxidc.com.

[root@12crac1 ~]# nslookup 192.168.1.155

Server: 192.168.1.158

Address: 192.168.1.158#53

155.1.168.192.in-addr.arpa name = scan.linuxidc.com.

[root@12crac1 ~]# nslookup 192.168.1.156

Server: 192.168.1.158

Address: 192.168.1.158#53

156.1.168.192.in-addr.arpa name = scan.linuxidc.com.

2. Configure /etc/hosts

Edit the /etc/hosts file: leave the first two (localhost) lines unchanged and append the hostname mappings.

(Node 1 shown; the file is identical on both nodes)

[root@12crac1 ~]# cat /etc/hosts

127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4

::1 localhost localhost.localdomain localhost6 localhost6.localdomain6

# For Public

192.168.1.150 12crac1.linuxidc.com 12crac1

192.168.1.151 12crac2.linuxidc.com 12crac2

# For VIP

192.168.1.152 12crac1-vip.linuxidc.com 12crac1-vip

192.168.1.153 12crac2-vip.linuxidc.com 12crac2-vip

# For Private IP

192.168.80.150 12crac1-priv1.linuxidc.com 12crac1-priv1

192.168.80.151 12crac1-priv2.linuxidc.com 12crac1-priv2

192.168.80.152 12crac1-priv3.linuxidc.com 12crac1-priv3

192.168.80.153 12crac1-priv4.linuxidc.com 12crac1-priv4

192.168.80.154 12crac2-priv1.linuxidc.com 12crac2-priv1

192.168.80.155 12crac2-priv2.linuxidc.com 12crac2-priv2

192.168.80.156 12crac2-priv3.linuxidc.com 12crac2-priv3

192.168.80.157 12crac2-priv4.linuxidc.com 12crac2-priv4

# For SCAN IP

# 192.168.1.154 scan.linuxidc.com

# 192.168.1.155 scan.linuxidc.com

# 192.168.1.156 scan.linuxidc.com

# For DNS Server

192.168.1.158 dns12c.linuxidc.com dns12c
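Hand-edited hosts files are a common source of cluvfy and VIP problems, so a quick duplicate check helps. A minimal sketch; note that the localhost aliases shared by the IPv4 and IPv6 lines are expected to show up as "duplicates":

```shell
# Flag IPs that appear on more than one line, and names mapped more than once.
hosts=/etc/hosts
echo "duplicate IPs:"
awk '!/^#/ && NF {print $1}' "$hosts" | sort | uniq -d
echo "duplicate names:"
awk '!/^#/ && NF {for (i = 2; i <= NF; i++) print $i}' "$hosts" | sort | uniq -d
```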

3. System configuration

Append the following kernel parameters to /etc/sysctl.conf:

fs.file-max = 6815744

kernel.sem = 250 32000 100 128

kernel.shmmni = 4096

kernel.shmall = 1073741824

kernel.shmmax = 4398046511104

net.core.rmem_default = 262144

net.core.rmem_max = 4194304

net.core.wmem_default = 262144

net.core.wmem_max = 1048576

fs.aio-max-nr = 1048576

net.ipv4.ip_local_port_range = 9000 65500

Apply the changes:

[root@12crac1 ~]# sysctl -p
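After sysctl -p, the kernel should echo back exactly the values written to /etc/sysctl.conf; reading /proc/sys directly confirms this. A minimal sketch:

```shell
# Each /proc/sys entry mirrors the corresponding sysctl.conf key.
for key in fs/file-max kernel/shmmni kernel/shmmax kernel/shmall fs/aio-max-nr; do
    printf '%-12s %s\n' "${key#*/}" "$(cat /proc/sys/$key)"
done
```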

Append the following to /etc/security/limits.conf:

grid soft nofile 1024

grid hard nofile 65536

grid soft nproc 2047

grid hard nproc 16384

grid soft stack 10240

grid hard stack 32768

oracle soft nofile 1024

oracle hard nofile 65536

oracle soft nproc 2047

oracle hard nproc 16384

oracle soft stack 10240

oracle hard stack 32768
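limits.conf only applies to new login sessions. After logging in again as grid or oracle (e.g. via `su - grid`), the effective limits can be compared against the file above; a minimal sketch, run as the target user:

```shell
# The soft/hard values should mirror the limits.conf entries above.
ulimit -Sn   # soft nofile -- expect 1024
ulimit -Hn   # hard nofile -- expect 65536
ulimit -Su   # soft nproc  -- expect 2047
ulimit -Ss   # soft stack (kB) -- expect 10240
```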

4. Configure the YUM repository and install the required packages

First delete or move the default yum configuration file, then create a new one.

(Node 1 shown; do the same on both nodes)

[root@12crac1 ~]# cd /etc/yum.repos.d

[root@12crac1 yum.repos.d]# mkdir bk

[root@12crac1 yum.repos.d]# mv public-yum-ol6.repo bk/

[root@12crac1 yum.repos.d]# vi linuxidc.repo

-- add the following content

[Oracle]

name=OEL-$releasever - Media

baseurl=file:///mnt

gpgcheck=0

enabled=1

Mount the installation DVD:

[root@12crac1 yum.repos.d]# mount /dev/cdrom /mnt

mount: block device /dev/sr0 is write-protected, mounting read-only

Now the required packages can be installed with yum:

[root@12crac1 yum.repos.d]# yum -y install binutils compat-libstdc++-33 elfutils-libelf \

elfutils-libelf-devel elfutils-libelf-devel-static gcc gcc-c++ glibc glibc-common \

glibc-devel kernel-headers ksh libaio libaio-devel libgcc libgomp libstdc++ libstdc++-devel \

make numactl-devel sysstat unixODBC unixODBC-devel pdksh compat-libcap1

5. Create users and groups

(Node 1 shown; do the same on both nodes)

Create the groups:

[root@12crac1 ~]# /usr/sbin/groupadd -g 54321 oinstall

[root@12crac1 ~]# /usr/sbin/groupadd -g 54322 dba

[root@12crac1 ~]# /usr/sbin/groupadd -g 54323 oper

[root@12crac1 ~]# /usr/sbin/groupadd -g 54324 backupdba

[root@12crac1 ~]# /usr/sbin/groupadd -g 54325 dgdba

[root@12crac1 ~]# /usr/sbin/groupadd -g 54327 asmdba

[root@12crac1 ~]# /usr/sbin/groupadd -g 54328 asmoper

[root@12crac1 ~]# /usr/sbin/groupadd -g 54329 asmadmin

Create the users:

[root@12crac1 ~]# /usr/sbin/useradd -u 54321 -g oinstall -G asmadmin,asmdba,asmoper,dba grid

[root@12crac1 ~]# /usr/sbin/useradd -u 54322 -g oinstall -G dba,backupdba,dgdba,asmadmin oracle

Set the passwords:

[root@12crac1 ~]# passwd grid

[root@12crac1 ~]# passwd oracle

6. Create the installation directories and set ownership

(Node 1 shown; do the same on both nodes)

[root@12crac1 ~]# mkdir -p /u01/app/grid

[root@12crac1 ~]# mkdir -p /u01/app/12.1.0/grid

[root@12crac1 ~]# mkdir -p /u01/app/oracle/product/12.1.0/db_1

[root@12crac1 ~]# chown -R grid.oinstall /u01

[root@12crac1 ~]# chown -R oracle.oinstall /u01/app/oracle

[root@12crac1 ~]# chmod -R 775 /u01

7. Configure environment variables

Node 1:

[root@12crac1 ~]# vi /home/grid/.bash_profile

export TMP=/tmp

export TMPDIR=$TMP

export ORACLE_HOSTNAME=12crac1.linuxidc.com

export ORACLE_BASE=/u01/app/grid

export ORACLE_HOME=/u01/app/12.1.0/grid

export ORACLE_SID=+ASM1

export PATH=/usr/sbin:$PATH

export PATH=$ORACLE_HOME/bin:$PATH

export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib

export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib

alias sqlplus="rlwrap sqlplus"

[root@12crac1 ~]# vi /home/oracle/.bash_profile

export PATH

export TMP=/tmp

export TMPDIR=$TMP

export ORACLE_HOSTNAME=12crac1.linuxidc.com

export ORACLE_UNQNAME=linuxidc12c1

export ORACLE_BASE=/u01/app/oracle

export ORACLE_HOME=$ORACLE_BASE/product/12.1.0/db_1

export ORACLE_SID=linuxidc12c1

export PATH=/usr/sbin:$PATH

export PATH=$ORACLE_HOME/bin:$PATH

export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib

export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib

alias sqlplus="rlwrap sqlplus"

alias rman="rlwrap rman"

Node 2:

[root@12crac2 ~]# vi /home/grid/.bash_profile

export TMP=/tmp

export TMPDIR=$TMP

export ORACLE_HOSTNAME=12crac2.linuxidc.com

export ORACLE_BASE=/u01/app/grid

export ORACLE_HOME=/u01/app/12.1.0/grid

export ORACLE_SID=+ASM2

export PATH=/usr/sbin:$PATH

export PATH=$ORACLE_HOME/bin:$PATH

export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib

export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib

alias sqlplus="rlwrap sqlplus"

[root@12crac2 ~]# vim /home/oracle/.bash_profile

export TMP=/tmp

export TMPDIR=$TMP

export ORACLE_HOSTNAME=12crac2.linuxidc.com

export ORACLE_UNQNAME=linuxidc12c2

export ORACLE_BASE=/u01/app/oracle

export ORACLE_HOME=$ORACLE_BASE/product/12.1.0/db_1

export ORACLE_SID=linuxidc12c2

export PATH=/usr/sbin:$PATH

export PATH=$ORACLE_HOME/bin:$PATH

export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib

export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib

alias sqlplus="rlwrap sqlplus"

alias rman="rlwrap rman"
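The two nodes' profiles differ only in hostname, SID, and (for oracle) ORACLE_UNQNAME, which makes copy/paste slips such as leaving +ASM1 in node 2's grid profile easy. A minimal sketch that derives the node-specific lines from the node number; the names follow this tutorial's conventions:

```shell
# Set node=1 on node 1 and node=2 on node 2; the rest is derived.
node=2
cat <<EOF
export ORACLE_HOSTNAME=12crac${node}.linuxidc.com
export ORACLE_SID=+ASM${node}
EOF
```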

8. Attach the iSCSI disks and configure UDEV

[root@12crac1 ~]# yum -y install iscsi-initiator-utils

[root@12crac1 ~]# service iscsid start

[root@12crac1 ~]# chkconfig iscsid on

[root@12crac1 ~]# iscsiadm -m discovery -t sendtargets -p 192.168.80.140:3260

Starting iscsid: [ OK ]

192.168.80.140:3260,1 iqn.2006-01.com.openfiler:tsn.3a9cad78121d

[root@12crac1 ~]# service iscsi restart

Stopping iscsi: [ OK ]

Starting iscsi: [ OK ]

[root@12crac1 ~]# fdisk -l

Disk /dev/sda: 53.7 GB, 53687091200 bytes

255 heads, 63 sectors/track, 6527 cylinders

Units = cylinders of 16065 * 512 = 8225280 bytes

Sector size (logical/physical): 512 bytes / 512 bytes

I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk identifier: 0x00046ecd

Device Boot Start End Blocks Id System

/dev/sda1 * 1 26 204800 83 Linux

Partition 1 does not end on cylinder boundary.

/dev/sda2 26 664 5120000 82 Linux swap / Solaris

Partition 2 does not end on cylinder boundary.

/dev/sda3 664 6528 47102976 83 Linux

Disk /dev/sdb: 2147 MB, 2147483648 bytes

67 heads, 62 sectors/track, 1009 cylinders

Units = cylinders of 4154 * 512 = 2126848 bytes

Sector size (logical/physical): 512 bytes / 512 bytes

I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk identifier: 0x00000000

Disk /dev/sdc: 10.5 GB, 10502537216 bytes

64 heads, 32 sectors/track, 10016 cylinders

Units = cylinders of 2048 * 512 = 1048576 bytes

Sector size (logical/physical): 512 bytes / 512 bytes

I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk identifier: 0x00000000

Disk /dev/sdd: 6610 MB, 6610223104 bytes

204 heads, 62 sectors/track, 1020 cylinders

Units = cylinders of 12648 * 512 = 6475776 bytes

Sector size (logical/physical): 512 bytes / 512 bytes

I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk identifier: 0x00000000

Disk /dev/sdf: 8388 MB, 8388608000 bytes

64 heads, 32 sectors/track, 8000 cylinders

Units = cylinders of 2048 * 512 = 1048576 bytes

Sector size (logical/physical): 512 bytes / 512 bytes

I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk identifier: 0x00000000

Disk /dev/sde: 8388 MB, 8388608000 bytes

64 heads, 32 sectors/track, 8000 cylinders

Units = cylinders of 2048 * 512 = 1048576 bytes

Sector size (logical/physical): 512 bytes / 512 bytes

I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk identifier: 0x00000000

Disk /dev/sdg: 5335 MB, 5335154688 bytes

165 heads, 62 sectors/track, 1018 cylinders

Units = cylinders of 10230 * 512 = 5237760 bytes

Sector size (logical/physical): 512 bytes / 512 bytes

I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk identifier: 0x00000000

Only sde, sdf, and sdg are used here; the other disks belong to other clusters.

[root@12crac1 ~]# for i in e f g;do

> echo "KERNEL==\"sd*\", BUS==\"scsi\", PROGRAM==\"/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/\$name\", RESULT==\"`/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/sd$i`\", NAME=\"asm-disk$i\", OWNER=\"grid\", GROUP=\"asmadmin\", MODE=\"0660\""

> done

KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="14f504e46494c450058444273784d2d64705a6a2d544c4756", NAME="asm-diske", OWNER="grid", GROUP="asmadmin", MODE="0660"

KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="14f504e46494c45003030365263642d32714a702d6866744c", NAME="asm-diskf", OWNER="grid", GROUP="asmadmin", MODE="0660"

KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="14f504e46494c45006c58576a76452d716d50492d71436c76", NAME="asm-diskg", OWNER="grid", GROUP="asmadmin", MODE="0660"

Configure UDEV:

(Node 1 shown; do the same on both nodes)

[root@12crac1 ~]# vi /etc/udev/rules.d/99-oracle-asmdevices.rules

-- add the following content:

KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="14f504e46494c45003030365263642d32714a702d6866744c", NAME="asm-data", OWNER="grid", GROUP="asmadmin", MODE="0660"

KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="14f504e46494c4500796861656a6b2d3632475a2d66384631", NAME="asm-fra", OWNER="grid", GROUP="asmadmin", MODE="0660"

KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="14f504e46494c45006c58576a76452d716d50492d71436c76", NAME="asm-crs", OWNER="grid", GROUP="asmadmin", MODE="0660"

[root@12crac1 ~]# /sbin/start_udev

Starting udev: [ OK ]

[root@12crac1 ~]# ls -l /dev/asm*

brw-rw---- 1 grid asmadmin 8, 96 Jun 29 21:56 /dev/asm-crs

brw-rw---- 1 grid asmadmin 8, 64 Jun 29 21:56 /dev/asm-data

brw-rw---- 1 grid asmadmin 8, 80 Jun 29 21:56 /dev/asm-fra
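Rather than hand-editing three near-identical rules, the file can be generated from (WWID, alias) pairs. A sketch using the scsi_id values captured above; the data/fra/crs aliases are this tutorial's naming choice:

```shell
# Emit one udev rule per (WWID, alias) pair, matching the format used above.
emit_rule() {  # usage: emit_rule <wwid> <alias>
    printf 'KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="%s", NAME="asm-%s", OWNER="grid", GROUP="asmadmin", MODE="0660"\n' "$1" "$2"
}
emit_rule 14f504e46494c45003030365263642d32714a702d6866744c data
emit_rule 14f504e46494c4500796861656a6b2d3632475a2d66384631 fra
emit_rule 14f504e46494c45006c58576a76452d716d50492d71436c76 crs
# Append the output to /etc/udev/rules.d/99-oracle-asmdevices.rules,
# then run /sbin/start_udev.
```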

9. Disable the NTP service

With NTP out of the way, Oracle's Cluster Time Synchronization Service (CTSS) runs in active mode and keeps the node clocks synchronized.

(Node 1 shown; do the same on both nodes)

[root@12crac1 ~]# chkconfig ntpd off

[root@12crac1 ~]# mv /etc/ntp.conf /etc/ntp.conf.bak

10. Unpack the media

Node 1:

[root@12crac1 ~]# chown -R grid.oinstall /install/

[root@12crac1 ~]# chown oracle.oinstall /install/linuxamd64_12c_database_*

[root@12crac1 ~]# chmod 775 /install

[root@12crac1 ~]# su - grid

[grid@12crac1 ~]$ cd /install/

[grid@12crac1 install]$ unzip linuxamd64_12c_grid_1of2.zip

[grid@12crac1 install]$ unzip linuxamd64_12c_grid_2of2.zip

[root@12crac1 ~]# su - oracle

[oracle@12crac1 install]$ unzip linuxamd64_12c_database_1of2.zip

[oracle@12crac1 install]$ unzip linuxamd64_12c_database_2of2.zip

Extracted sizes:

[oracle@12crac1 install]$ du -sh grid

2.1G grid

[oracle@12crac1 install]$ du -sh database/

2.6G database/

Install the cvuqdisk RPM (needed by the Cluster Verification Utility):

[root@12crac1 ~]# cd /install/grid/rpm/

[root@12crac1 rpm]# rpm -ivh cvuqdisk-1.0.9-1.rpm

Preparing... ########################################### [100%]

Using default group oinstall to install package

1:cvuqdisk ########################################### [100%]

Copy it to node 2 and install it there:

[root@12crac1 rpm]# scp cvuqdisk-1.0.9-1.rpm 12crac2:/install

[root@12crac2 install]# rpm -ivh cvuqdisk-1.0.9-1.rpm

Preparing... ########################################### [100%]

Using default group oinstall to install package

1:cvuqdisk ########################################### [100%]

11. Pre-installation verification

Only the failed checks are shown here. The first failure is insufficient physical memory: Oracle recommends at least 4 GB per node, and these nodes have only 2 GB. The second concerns the DNS configuration and can safely be ignored.

Node 1:

[grid@12crac1 grid]$ ./runcluvfy.sh stage -pre crsinst -n 12crac1,12crac2 -verbose

Check: Total memory

Node Name Available Required Status

------------ ------------------------ ------------------------ ----------

12crac2 1.9567GB (2051748.0KB) 4GB (4194304.0KB) failed

12crac1 1.9567GB (2051748.0KB) 4GB (4194304.0KB) failed

Result: Total memory check failed

Result: Default user file creation mask check passed

Checking integrity of file "/etc/resolv.conf" across nodes

Checking the file "/etc/resolv.conf" to make sure only one of domain and search entries is defined

"domain" and "search" entries do not coexist in any "/etc/resolv.conf" file

Checking if domain entry in file "/etc/resolv.conf" is consistent across the nodes...

"domain" entry does not exist in any "/etc/resolv.conf" file

Checking if search entry in file "/etc/resolv.conf" is consistent across the nodes...

Checking file "/etc/resolv.conf" to make sure that only one search entry is defined

More than one "search" entry does not exist in any "/etc/resolv.conf" file

All nodes have same "search" order defined in file "/etc/resolv.conf"

Checking DNS response time for an unreachable node

Node Name Status

------------------------------------ ------------------------

12crac1 failed

12crac2 failed

PRVF-5636 : The DNS response time for an unreachable node exceeded "15000" ms on following nodes: 12crac1,12crac2

Check for integrity of file "/etc/resolv.conf" failed

12. Installation

1) Install Grid Infrastructure (GI)

[root@12crac1 ~]# su - grid

[grid@12crac1 ~]$ cd /install/grid/

Start Xmanager - Passive, set DISPLAY, and launch the OUI with runInstaller:

[grid@12crac1 grid]$ export DISPLAY=192.168.1.1:0.0

[grid@12crac1 grid]$ ./runInstaller

A few prerequisite checks fail here (memory and DNS, as above); they can all be ignored.

Output of the root scripts:

[root@12crac1 ~]# /u01/app/oraInventory/orainstRoot.sh

Changing permissions of /u01/app/oraInventory.

Adding read,write permissions for group.

Removing read,write,execute permissions for world.

Changing groupname of /u01/app/oraInventory to oinstall.

The execution of the script is complete.

Node 1:

[root@12crac1 ~]# /u01/app/12.1.0/grid/root.sh

Performing root user operation for Oracle 12c

The following environment variables are set as:

ORACLE_OWNER= grid

ORACLE_HOME= /u01/app/12.1.0/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:

Copying dbhome to /usr/local/bin ...

Copying oraenv to /usr/local/bin ...

Copying coraenv to /usr/local/bin ...

Creating /etc/oratab file...

Entries will be added to the /etc/oratab file as needed by

Database Configuration Assistant when a database is created

Finished running generic part of root script.

Now product-specific root actions will be performed.

Using configuration parameter file: /u01/app/12.1.0/grid/crs/install/crsconfig_params

2013/07/01 00:30:25 CLSRSC-363: User ignored prerequisites during installation

OLR initialization - successful

root wallet

root wallet cert

root cert export

peer wallet

profile reader wallet

pa wallet

peer wallet keys

pa wallet keys

peer cert request

pa cert request

peer cert

pa cert

peer root cert TP

profile reader root cert TP

pa root cert TP

peer pa cert TP

pa peer cert TP

profile reader pa cert TP

profile reader peer cert TP

peer user cert

pa user cert

2013/07/01 00:31:22 CLSRSC-330: Adding Clusterware entries to file 'oracle-ohasd.conf'

CRS-4133: Oracle High Availability Services has been stopped.

CRS-4123: Oracle High Availability Services has been started.

CRS-4133: Oracle High Availability Services has been stopped.

CRS-4123: Oracle High Availability Services has been started.

CRS-2673: Attempting to stop 'ora.drivers.acfs' on '12crac1'

CRS-2677: Stop of 'ora.drivers.acfs' on '12crac1' succeeded

CRS-2672: Attempting to start 'ora.evmd' on '12crac1'

CRS-2672: Attempting to start 'ora.mdnsd' on '12crac1'

CRS-2676: Start of 'ora.mdnsd' on '12crac1' succeeded

CRS-2676: Start of 'ora.evmd' on '12crac1' succeeded

CRS-2672: Attempting to start 'ora.gpnpd' on '12crac1'

CRS-2676: Start of 'ora.gpnpd' on '12crac1' succeeded

CRS-2672: Attempting to start 'ora.cssdmonitor' on '12crac1'

CRS-2672: Attempting to start 'ora.gipcd' on '12crac1'

CRS-2676: Start of 'ora.cssdmonitor' on '12crac1' succeeded

CRS-2676: Start of 'ora.gipcd' on '12crac1' succeeded

CRS-2672: Attempting to start 'ora.cssd' on '12crac1'

CRS-2672: Attempting to start 'ora.diskmon' on '12crac1'

CRS-2676: Start of 'ora.diskmon' on '12crac1' succeeded

CRS-2676: Start of 'ora.cssd' on '12crac1' succeeded

ASM created and started successfully.

Disk Group RACCRS created successfully.

CRS-2672: Attempting to start 'ora.crf' on '12crac1'

CRS-2672: Attempting to start 'ora.storage' on '12crac1'

CRS-2676: Start of 'ora.storage' on '12crac1' succeeded

CRS-2676: Start of 'ora.crf' on '12crac1' succeeded

CRS-2672: Attempting to start 'ora.crsd' on '12crac1'

CRS-2676: Start of 'ora.crsd' on '12crac1' succeeded

CRS-4256: Updating the profile

Successful addition of voting disk d883c23a7bfc4fdcbf418c9f631bd0af.

Successfully replaced voting disk group with +RACCRS.

CRS-4256: Updating the profile

CRS-4266: Voting file(s) successfully replaced

## STATE File Universal Id File Name Disk group

-- ----- ----------------- --------- ---------

1. ONLINE d883c23a7bfc4fdcbf418c9f631bd0af (/dev/asm-crs) [RACCRS]

Located 1 voting disk(s).

CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on '12crac1'

CRS-2673: Attempting to stop 'ora.crsd' on '12crac1'

CRS-2677: Stop of 'ora.crsd' on '12crac1' succeeded

CRS-2673: Attempting to stop 'ora.storage' on '12crac1'

CRS-2673: Attempting to stop 'ora.mdnsd' on '12crac1'

CRS-2673: Attempting to stop 'ora.gpnpd' on '12crac1'

CRS-2673: Attempting to stop 'ora.drivers.acfs' on '12crac1'

CRS-2677: Stop of 'ora.storage' on '12crac1' succeeded

CRS-2673: Attempting to stop 'ora.ctssd' on '12crac1'

CRS-2673: Attempting to stop 'ora.evmd' on '12crac1'

CRS-2673: Attempting to stop 'ora.asm' on '12crac1'

CRS-2677: Stop of 'ora.drivers.acfs' on '12crac1' succeeded

CRS-2677: Stop of 'ora.mdnsd' on '12crac1' succeeded

CRS-2677: Stop of 'ora.gpnpd' on '12crac1' succeeded

CRS-2677: Stop of 'ora.evmd' on '12crac1' succeeded

CRS-2677: Stop of 'ora.ctssd' on '12crac1' succeeded

CRS-2677: Stop of 'ora.asm' on '12crac1' succeeded

CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on '12crac1'

CRS-2677: Stop of 'ora.cluster_interconnect.haip' on '12crac1' succeeded

CRS-2673: Attempting to stop 'ora.cssd' on '12crac1'

CRS-2677: Stop of 'ora.cssd' on '12crac1' succeeded

CRS-2673: Attempting to stop 'ora.crf' on '12crac1'

CRS-2677: Stop of 'ora.crf' on '12crac1' succeeded

CRS-2673: Attempting to stop 'ora.gipcd' on '12crac1'

CRS-2677: Stop of 'ora.gipcd' on '12crac1' succeeded

CRS-2793: Shutdown of Oracle High Availability Services-managed resources on '12crac1' has completed

CRS-4133: Oracle High Availability Services has been stopped.

CRS-4123: Starting Oracle High Availability Services-managed resources

CRS-2672: Attempting to start 'ora.mdnsd' on '12crac1'

CRS-2672: Attempting to start 'ora.evmd' on '12crac1'

CRS-2676: Start of 'ora.mdnsd' on '12crac1' succeeded

CRS-2676: Start of 'ora.evmd' on '12crac1' succeeded

CRS-2672: Attempting to start 'ora.gpnpd' on '12crac1'

CRS-2676: Start of 'ora.gpnpd' on '12crac1' succeeded

CRS-2672: Attempting to start 'ora.gipcd' on '12crac1'

CRS-2676: Start of 'ora.gipcd' on '12crac1' succeeded

CRS-2672: Attempting to start 'ora.cssdmonitor' on '12crac1'

CRS-2676: Start of 'ora.cssdmonitor' on '12crac1' succeeded

CRS-2672: Attempting to start 'ora.cssd' on '12crac1'

CRS-2672: Attempting to start 'ora.diskmon' on '12crac1'

CRS-2676: Start of 'ora.diskmon' on '12crac1' succeeded

CRS-2789: Cannot stop resource 'ora.diskmon' as it is not running on server '12crac1'

CRS-2676: Start of 'ora.cssd' on '12crac1' succeeded

CRS-2672: Attempting to start 'ora.cluster_interconnect.haip' on '12crac1'

CRS-2672: Attempting to start 'ora.ctssd' on '12crac1'

CRS-2676: Start of 'ora.ctssd' on '12crac1' succeeded

CRS-2676: Start of 'ora.cluster_interconnect.haip' on '12crac1' succeeded

CRS-2672: Attempting to start 'ora.asm' on '12crac1'

CRS-2676: Start of 'ora.asm' on '12crac1' succeeded

CRS-2672: Attempting to start 'ora.storage' on '12crac1'

CRS-2676: Start of 'ora.storage' on '12crac1' succeeded

CRS-2672: Attempting to start 'ora.crf' on '12crac1'

CRS-2676: Start of 'ora.crf' on '12crac1' succeeded

CRS-2672: Attempting to start 'ora.crsd' on '12crac1'

CRS-2676: Start of 'ora.crsd' on '12crac1' succeeded

CRS-6023: Starting Oracle Cluster Ready Services-managed resources

CRS-6017: Processing resource auto-start for servers: 12crac1

CRS-6016: Resource auto-start has completed for server 12crac1

CRS-6024: Completed start of Oracle Cluster Ready Services-managed resources

CRS-4123: Oracle High Availability Services has been started.

2013/07/01 00:38:01 CLSRSC-343: Successfully started Oracle clusterware stack

CRS-2672: Attempting to start 'ora.asm' on '12crac1'

CRS-2676: Start of 'ora.asm' on '12crac1' succeeded

CRS-2672: Attempting to start 'ora.RACCRS.dg' on '12crac1'

CRS-2676: Start of 'ora.RACCRS.dg' on '12crac1' succeeded

2013/07/01 00:39:51 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded

Node 2:

[root@12crac2 ~]# /u01/app/12.1.0/grid/root.sh

Performing root user operation for Oracle 12c

The following environment variables are set as:

ORACLE_OWNER= grid

ORACLE_HOME= /u01/app/12.1.0/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:

Copying dbhome to /usr/local/bin ...

Copying oraenv to /usr/local/bin ...

Copying coraenv to /usr/local/bin ...

Creating /etc/oratab file...

Entries will be added to the /etc/oratab file as needed by

Database Configuration Assistant when a database is created

Finished running generic part of root script.

Now product-specific root actions will be performed.

Using configuration parameter file: /u01/app/12.1.0/grid/crs/install/crsconfig_params

2013/07/01 00:42:51 CLSRSC-363: User ignored prerequisites during installation

OLR initialization - successful

2013/07/01 00:43:18 CLSRSC-330: Adding Clusterware entries to file 'oracle-ohasd.conf'

CRS-4133: Oracle High Availability Services has been stopped.

CRS-4123: Oracle High Availability Services has been started.

CRS-4133: Oracle High Availability Services has been stopped.

CRS-4123: Oracle High Availability Services has been started.

CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on '12crac2'

CRS-2673: Attempting to stop 'ora.drivers.acfs' on '12crac2'

CRS-2677: Stop of 'ora.drivers.acfs' on '12crac2' succeeded

CRS-2793: Shutdown of Oracle High Availability Services-managed resources on '12crac2' has completed

CRS-4133: Oracle High Availability Services has been stopped.

CRS-4123: Starting Oracle High Availability Services-managed resources

CRS-2672: Attempting to start 'ora.evmd' on '12crac2'

CRS-2672: Attempting to start 'ora.mdnsd' on '12crac2'

CRS-2676: Start of 'ora.evmd' on '12crac2' succeeded

CRS-2676: Start of 'ora.mdnsd' on '12crac2' succeeded

CRS-2672: Attempting to start 'ora.gpnpd' on '12crac2'

CRS-2676: Start of 'ora.gpnpd' on '12crac2' succeeded

CRS-2672: Attempting to start 'ora.gipcd' on '12crac2'

CRS-2676: Start of 'ora.gipcd' on '12crac2' succeeded

CRS-2672: Attempting to start 'ora.cssdmonitor' on '12crac2'

CRS-2676: Start of 'ora.cssdmonitor' on '12crac2' succeeded

CRS-2672: Attempting to start 'ora.cssd' on '12crac2'

CRS-2672: Attempting to start 'ora.diskmon' on '12crac2'

CRS-2676: Start of 'ora.diskmon' on '12crac2' succeeded

CRS-2789: Cannot stop resource 'ora.diskmon' as it is not running on server '12crac2'

CRS-2676: Start of 'ora.cssd' on '12crac2' succeeded

CRS-2672: Attempting to start 'ora.cluster_interconnect.haip' on '12crac2'

CRS-2672: Attempting to start 'ora.ctssd' on '12crac2'

CRS-2676: Start of 'ora.ctssd' on '12crac2' succeeded

CRS-2676: Start of 'ora.cluster_interconnect.haip' on '12crac2' succeeded

CRS-2672: Attempting to start 'ora.asm' on '12crac2'

CRS-2676: Start of 'ora.asm' on '12crac2' succeeded

CRS-2672: Attempting to start 'ora.storage' on '12crac2'

CRS-2676: Start of 'ora.storage' on '12crac2' succeeded

CRS-2672: Attempting to start 'ora.crf' on '12crac2'

CRS-2676: Start of 'ora.crf' on '12crac2' succeeded

CRS-2672: Attempting to start 'ora.crsd' on '12crac2'

CRS-2676: Start of 'ora.crsd' on '12crac2' succeeded

CRS-6017: Processing resource auto-start for servers: 12crac2

CRS-2673: Attempting to stop 'ora.LISTENER_SCAN1.lsnr' on '12crac1'

CRS-2672: Attempting to start 'ora.ons' on '12crac2'

CRS-2677: Stop of 'ora.LISTENER_SCAN1.lsnr' on '12crac1' succeeded

CRS-2673: Attempting to stop 'ora.scan1.vip' on '12crac1'

CRS-2677: Stop of 'ora.scan1.vip' on '12crac1' succeeded

CRS-2672: Attempting to start 'ora.scan1.vip' on '12crac2'

CRS-2676: Start of 'ora.scan1.vip' on '12crac2' succeeded

CRS-2672: Attempting to start 'ora.LISTENER_SCAN1.lsnr' on '12crac2'

CRS-2676: Start of 'ora.ons' on '12crac2' succeeded

CRS-2676: Start of 'ora.LISTENER_SCAN1.lsnr' on '12crac2' succeeded

CRS-6016: Resource auto-start has completed for server 12crac2

CRS-6024: Completed start of Oracle Cluster Ready Services-managed resources

CRS-4123: Oracle High Availability Services has been started.

2013/07/01 00:48:43 CLSRSC-343: Successfully started Oracle clusterware stack

2013/07/01 00:49:05 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded

Check the status:

[grid@12crac1 ~]$ crsctl stat res -t

-------------------------------------------------------------------------------

Name Target State Server State details

-------------------------------------------------------------------------------

Local Resources

-------------------------------------------------------------------------------

ora.LISTENER.lsnr

ONLINE ONLINE 12crac1 STABLE

ONLINE ONLINE 12crac2 STABLE

ora.RACCRS.dg

ONLINE ONLINE 12crac1 STABLE

ONLINE ONLINE 12crac2 STABLE

ora.asm

ONLINE ONLINE 12crac1 Started,STABLE

ONLINE ONLINE 12crac2 Started,STABLE

ora.net1.network

ONLINE ONLINE 12crac1 STABLE

ONLINE ONLINE 12crac2 STABLE

ora.ons

ONLINE ONLINE 12crac1 STABLE

ONLINE ONLINE 12crac2 STABLE

-------------------------------------------------------------------------------

Cluster Resources

-------------------------------------------------------------------------------

ora.12crac1.vip

1 ONLINE ONLINE 12crac1 STABLE

ora.12crac2.vip

1 ONLINE ONLINE 12crac2 STABLE

ora.LISTENER_SCAN1.lsnr

1 ONLINE ONLINE 12crac2 STABLE

ora.LISTENER_SCAN2.lsnr

1 ONLINE ONLINE 12crac1 STABLE

ora.LISTENER_SCAN3.lsnr

1 ONLINE ONLINE 12crac1 STABLE

ora.MGMTLSNR

1 ONLINE ONLINE 12crac1 169.254.88.173 192.168.80.150,STABLE

ora.cvu

1 ONLINE ONLINE 12crac1 STABLE

ora.mgmtdb

1 ONLINE ONLINE 12crac1 Open,STABLE

ora.oc4j

1 ONLINE ONLINE 12crac1 STABLE

ora.scan1.vip

1 ONLINE ONLINE 12crac2 STABLE

ora.scan2.vip

1 ONLINE ONLINE 12crac1 STABLE

ora.scan3.vip

1 ONLINE ONLINE 12crac1 STABLE

-------------------------------------------------------------------------------

2) Create the ASM disk groups

Node 1:

[grid@12crac1 ~]$ export DISPLAY=192.168.1.1:0.0

[grid@12crac1 ~]$ asmca
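asmca creates the disk groups through its GUI; the same result can be had from SQL*Plus against the ASM instance. A minimal sketch, run as the grid user with the ASM environment set — the disk path in the example is a placeholder, and the group name follows this tutorial's naming:

```shell
# Sketch: create an external-redundancy ASM disk group via SQL*Plus (SYSASM),
# equivalent to what asmca does in the GUI. The disk path is an assumption --
# substitute the actual shared-disk device on your system.
create_diskgroup() {
    dg=${1:?usage: create_diskgroup <group_name> <disk_path>}
    disk=${2:?usage: create_diskgroup <group_name> <disk_path>}
    sqlplus -s / as sysasm <<SQL
create diskgroup $dg external redundancy disk '$disk';
SQL
}
# Example (hypothetical disk path):
#   create_diskgroup RACDATA /dev/oracleasm/disks/RACDATA1
```

For normal or high redundancy, add failure groups and change the `external redundancy` clause accordingly.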

3) Install the RDBMS software

Node 1:

[root@12crac1 ~]# su - oracle

[oracle@12crac1 ~]$ cd /install/database/

[oracle@12crac1 database]$ export DISPLAY=192.168.1.1:0.0

[oracle@12crac1 database]$ ./runInstaller
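The installer above runs interactively over X11. For unattended installs the same step can be scripted; a hedged sketch — the response file path is hypothetical, and Oracle ships a template (`db_install.rsp` under the unzipped `database/response` directory) that must be edited with your ORACLE_HOME, inventory, and node settings first:

```shell
# Sketch: silent RDBMS software install via OUI, assuming an already-edited
# response file. Run from the unzipped database/ directory as oracle.
run_silent_install() {
    rsp=${1:?usage: run_silent_install <response_file>}
    ./runInstaller -silent -waitforcompletion -responseFile "$rsp"
}
# Example (hypothetical path): run_silent_install /install/database/response/db_install.rsp
```

This is only useful once the interactive install has been done once and the choices are known; for a first 12c RAC install the GUI is easier to follow.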

Run the root script:

Node 1:

[root@12crac1 ~]# /u01/app/oracle/product/12.1.0/dbhome_1/root.sh

Performing root user operation for Oracle 12c

The following environment variables are set as:

ORACLE_OWNER= oracle

ORACLE_HOME= /u01/app/oracle/product/12.1.0/dbhome_1

Enter the full pathname of the local bin directory: [/usr/local/bin]:

The contents of "dbhome" have not changed. No need to overwrite.

The contents of "oraenv" have not changed. No need to overwrite.

The contents of "coraenv" have not changed. No need to overwrite.

Entries will be added to the /etc/oratab file as needed by

Database Configuration Assistant when a database is created

Finished running generic part of root script.

Now product-specific root actions will be performed.

4) Create the database

[oracle@12crac1 ~]$ dbca

Click Browse to select the ASM disk groups.

Leave archiving disabled for now; enabling it during creation slows the database build down noticeably. It can be switched on afterwards (shown later in this tutorial).
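dbca can also build the RAC database without the GUI. A sketch only — the passwords are placeholders, and the exact flag set varies by release, so verify every option against `dbca -help` on your system before using it:

```shell
# Sketch: silent RAC database creation with dbca, using this tutorial's
# disk group and node names. All values are assumptions to adapt.
create_rac_db() {
    gdb=${1:?usage: create_rac_db <global_db_name> <password>}
    pw=${2:?usage: create_rac_db <global_db_name> <password>}
    dbca -silent -createDatabase \
        -templateName General_Purpose.dbc \
        -gdbname "$gdb" -sid "$gdb" \
        -sysPassword "$pw" -systemPassword "$pw" \
        -storageType ASM -diskGroupName RACDATA -recoveryGroupName RACFRA \
        -nodelist 12crac1,12crac2
}
# Example: create_rac_db linuxidc12c oracle_4U
```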

13. Final results

Resource status:

[grid@12crac1 ~]$ crsctl stat res -t

-------------------------------------------------------------------------------

Name Target State Server State details

-------------------------------------------------------------------------------

Local Resources

-------------------------------------------------------------------------------

ora.LISTENER.lsnr

ONLINE ONLINE 12crac1 STABLE

ONLINE ONLINE 12crac2 STABLE

ora.RACCRS.dg

ONLINE ONLINE 12crac1 STABLE

ONLINE ONLINE 12crac2 STABLE

ora.RACDATA.dg

ONLINE ONLINE 12crac1 STABLE

ONLINE ONLINE 12crac2 STABLE

ora.RACFRA.dg

ONLINE ONLINE 12crac1 STABLE

ONLINE ONLINE 12crac2 STABLE

ora.asm

ONLINE ONLINE 12crac1 Started,STABLE

ONLINE ONLINE 12crac2 Started,STABLE

ora.net1.network

ONLINE ONLINE 12crac1 STABLE

ONLINE ONLINE 12crac2 STABLE

ora.ons

ONLINE ONLINE 12crac1 STABLE

ONLINE ONLINE 12crac2 STABLE

-------------------------------------------------------------------------------

Cluster Resources

-------------------------------------------------------------------------------

ora.12crac1.vip

1 ONLINE ONLINE 12crac1 STABLE

ora.12crac2.vip

1 ONLINE ONLINE 12crac2 STABLE

ora.LISTENER_SCAN1.lsnr

1 ONLINE ONLINE 12crac2 STABLE

ora.LISTENER_SCAN2.lsnr

1 ONLINE ONLINE 12crac1 STABLE

ora.LISTENER_SCAN3.lsnr

1 ONLINE ONLINE 12crac1 STABLE

ora.MGMTLSNR

1 ONLINE ONLINE 12crac1 169.254.88.173 192.168.80.150,STABLE

ora.cvu

1 ONLINE ONLINE 12crac1 STABLE

ora.linuxidc12c.db

1 ONLINE ONLINE 12crac1 Open,STABLE

2 ONLINE ONLINE 12crac2 Open,STABLE

ora.mgmtdb

1 ONLINE ONLINE 12crac1 Open,STABLE

ora.oc4j

1 ONLINE ONLINE 12crac1 STABLE

ora.scan1.vip

1 ONLINE ONLINE 12crac2 STABLE

ora.scan2.vip

1 ONLINE ONLINE 12crac1 STABLE

ora.scan3.vip

1 ONLINE ONLINE 12crac1 STABLE

-------------------------------------------------------------------------------

RAC database configuration:

[grid@12crac1 ~]$ srvctl config database -d linuxidc12c

Database unique name: linuxidc12c

Database name: linuxidc12c

Oracle home: /u01/app/oracle/product/12.1.0/dbhome_1

Oracle user: oracle

Spfile: +RACDATA/linuxidc12c/spfilelinuxidc12c.ora

Password file: +RACDATA/linuxidc12c/orapwlinuxidc12c

Domain:

Start options: open

Stop options: immediate

Database role: PRIMARY

Management policy: AUTOMATIC

Server pools: linuxidc12c

Database instances: linuxidc12c1,linuxidc12c2

Disk Groups: RACFRA,RACDATA

Mount point paths:

Services:

Type: RAC

Start concurrency:

Stop concurrency:

Database is administrator managed

[grid@12crac1 ~]$ srvctl status database -d linuxidc12c

Instance linuxidc12c1 is running on node 12crac1

Instance linuxidc12c2 is running on node 12crac2

[grid@12crac1 ~]$ srvctl status listener

Listener LISTENER is enabled

Listener LISTENER is running on node(s): 12crac1,12crac2

Instance status:

sys@linuxidc12C> select instance_name, status from gv$instance;

INSTANCE_NAME STATUS

---------------- ------------

linuxidc12c1 OPEN

linuxidc12c2 OPEN

sys@linuxidc12C> col open_time for a25

sys@linuxidc12C> col name for a10

sys@linuxidc12C> select CON_ID,NAME,OPEN_MODE,OPEN_TIME,CREATE_SCN, TOTAL_SIZE from v$pdbs;

CON_ID NAME OPEN_MODE OPEN_TIME CREATE_SCN TOTAL_SIZE

---------- ---------- ---------- ------------------------- ---------- ----------

2 PDB$SEED READ ONLY 01-JUL-13 04.33.07.302 PM 1720772 283115520

3 linuxidc READ WRITE 01-JUL-13 04.38.41.339 PM 1934986 288358400

Check whether archive logging is enabled:

sys@linuxidc12C> archive log list

Database log mode No Archive Mode

Automatic archival Disabled

Archive destination USE_DB_RECOVERY_FILE_DEST

Oldest online log sequence 15

Current log sequence 16

Now enable it manually:

[oracle@12crac1 ~]$ srvctl stop database -d linuxidc12c

[oracle@12crac1 ~]$ srvctl start database -d linuxidc12c -o mount

Node 1:

[oracle@12crac1 ~]$ sqlplus / as sysdba

SQL*Plus: Release 12.1.0.1.0 Production on Mon Jul 1 17:13:58 2013

Copyright (c) 1982, 2013, Oracle. All rights reserved.

Connected to:

Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production

With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,

Advanced Analytics and Real Application Testing options

idle> alter database archivelog;

Database altered.

idle> alter database open;

Database altered.

Node 2:

[oracle@12crac2 ~]$ sqlplus / as sysdba

SQL*Plus: Release 12.1.0.1.0 Production on Mon Jul 1 17:17:07 2013

Copyright (c) 1982, 2013, Oracle. All rights reserved.

Connected to:

Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production

With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,

Advanced Analytics and Real Application Testing options

SQL> alter database open;

Database altered.

The RAC database is now running in ARCHIVELOG mode:

sys@linuxidc12C> archive log list

Database log mode Archive Mode

Automatic archival Enabled

Archive destination USE_DB_RECOVERY_FILE_DEST

Oldest online log sequence 15

Next log sequence to archive 16

Current log sequence 16
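The manual sequence just shown — stop the database, mount it, switch to ARCHIVELOG, open — can be collected into one helper. A sketch assuming this tutorial's database unique name; run it as oracle with srvctl and sqlplus on the PATH:

```shell
# Sketch: enable ARCHIVELOG mode on a RAC database, mirroring the steps above.
enable_archivelog() {
    db=${1:?usage: enable_archivelog <db_unique_name>}
    srvctl stop database -d "$db"
    srvctl start database -d "$db" -o mount
    sqlplus -s / as sysdba <<'SQL'
alter database archivelog;
alter database open;
SQL
}
# Example: enable_archivelog linuxidc12c
# This opens only the local instance; open the remaining instance(s) with
# "alter database open" on each node, as done for node 2 above.
```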

That completes the Oracle 12c RAC installation.
