[ASM] Oracle ASM + 11gR2 + RHEL6.5 Installation (Finally Solved)

Original post by lhrbest, published 2014-07-02 21:06.



1.1 Introduction
1.1.1 ASMLib
1.1.2 What is udev
1.1.3 Why ASMLIB and why not
1.2 Installing Oracle 11gR2 + ASM on RHEL 6.4 using udev
1.2.1 Check the hardware
1.2.2 Check the required packages
1.2.3 Change the hostname
1.2.4 Network configuration
1.2.5 Disk preparation
1.2.6 Create directories, users, etc.
1.2.7 Manage the disks with udev
1.2.8 Adjust kernel parameters
1.2.9 Configure a local YUM repository for the system
1.2.10 Install Grid Infrastructure
1.2.11 Create the listener with netmgr
1.2.12 Build the Oracle database
1.2.13 Create the listener with netmgr (not needed under the oracle user)
1.2.14 Create the database with dbca
1.2.15 Configure Oracle to start automatically
1.2.16 Verification
1.3 Starting CRS
1.4 Errors:
1.4.1 Oracle 11gR2 RAC ohasd failed to start: solution
1.4.2 CRS-4639: Could not contact Oracle High Availability Services
1.4.3 ORA-29701: unable to connect to Cluster Synchronization Service
1.4.4 ASM instance cannot mount the diskgroups, ORA-15110: no diskgroups mounted
1.4.5 ORA-27154 ORA-27300 ORA-27301 ORA-27302 when starting the DB
1.4.6 ORA-29786: SIHA attribute GET failed
1.4.7 Resolving ORA-29786 when manually creating an ASM instance in 11gR2
1.4.8 Oracle dbca cannot find the ASM disks
1.4.9 ORA-15077, ASM disk group cannot be mounted

Readme

Seeing people in the chat group still struggling with the ASM installation pains me, but then I remember that my own first install took almost a week. I was working during the day with no network access, so I could only install at night. I started with 11.2.0.1.0, which has many installation bugs; I eventually got it installed, but it was painful. Later, during OCP training, the instructor gave us 11.2.0.3.0, and that install went smoothly with no errors.

The first time around I had to deal with ASMLib, udev, and a pile of other things. Learning Oracle without a good mentor is hard; in the end it took a lot of searching and trial and error to work through all of these problems. When reading this post, pay attention to the document's outline structure; unfortunately the blog platform does not preserve it well. The end of the article collects some problems you may run into during installation, for reference. If you have comments, please leave a message or contact me on QQ. Thanks for visiting.


1.1 Introduction

1.1.1 ASMLib

Before Red Hat Enterprise Linux (RHEL) 6, Oracle always used ASMLib to configure ASM. ASMLIB is a Linux-module-based kernel support library designed specifically for the Oracle Automatic Storage Management feature. In May 2011, however, Oracle published a statement on ASMLib for the Oracle database, announcing that it would no longer provide ASMLib or related updates for Red Hat Enterprise Linux (RHEL) 6.

In the statement, Oracle said that ASMLib updates would be delivered through the Unbreakable Linux Network (ULN) and would be available only to Oracle Linux customers. ULN serves both Oracle and Red Hat customers, but to use ASMLib a customer must replace the Red Hat kernel with Oracle's kernel.

The statement is detailed in the following Oracle Metalink document:

Oracle ASMLib Software Update Policy for Red Hat Enterprise Linux Supported by Red Hat [ID 1089399.1]

Software Update Policy for ASMLib running on future releases of Red Hat Enterprise Linux 6 (RHEL6): For RHEL6 or Oracle Linux 6, Oracle will only provide ASMLib software and updates when configured with the Unbreakable Enterprise Kernel (UEK). Oracle will not provide ASMLib packages for kernels distributed by Red Hat as part of RHEL 6 or the Red Hat compatible kernel in Oracle Linux 6. ASMLib updates will be delivered via Unbreakable Linux Network (ULN), which is available to customers with Oracle Linux support. ULN works with both Oracle Linux and Red Hat Linux installations, but ASMLib usage will require replacing any Red Hat kernel with UEK.

Using ASMLib on Red Hat Enterprise Linux (RHEL) 6 is therefore no longer practical. ASMLib also has drawbacks of its own, described here:

http://www.oracledatabase12g.com/archives/why-asmlib-and-why-not.html

So today, when running Oracle with ASM on Red Hat Enterprise Linux (RHEL) 6, ASMLib is no longer used; instead, the ASM disks are configured through udev device files.

 

1.1.2 What is udev

udev is a feature of the Linux 2.6 kernel. It replaced the older devfs and is now the default device management tool on Linux. udev runs as a daemon and manages the device files under /dev by listening for the uevents emitted by the kernel. Unlike earlier device management tools, udev runs in user space rather than kernel space.
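As a quick illustration of how udev exposes device information, udevadm can be used to query what the kernel and udev know about a block device (a small sketch; /dev/sdb is just an example device name on this system):

# Show the udev properties of a block device (example device name)
udevadm info --query=all --name=/dev/sdb
# Walk the sysfs attributes that udev rules can match against
udevadm info --attribute-walk --name=/dev/sdb | head -40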

 

1.1.3 Why ASMLIB and why not?

ASMLIB is a Linux-module-based kernel support library designed specifically for the Oracle Automatic Storage Management feature.

Our understanding of ASMLIB has long been incomplete, so let us look at the pros and cons of using it.

In theory, the ASMLIB API gives us the following benefits:

  • Always uses direct, asynchronous I/O
  • Solves the problem of persistent device names, even when the names change after a reboot
  • Solves file permission and ownership problems
  • Reduces context switches between user mode and kernel mode during I/O, which may lower CPU usage
  • Reduces the number of file handles used
  • The ASMLIB API makes it possible to pass metadata such as I/O priority down to the storage device

Although ASMLIB can offer performance benefits in theory, in practice the advantage is almost negligible; no performance report has shown ASMLIB to have any performance edge over Linux's native udev device management. In a thread on the official Oracle forums discussing ASMLIB's performance benefit you can find the comment "asmlib wouldn't necessarily give you much of an io performance benefit, it's mainly for ease of management as it will find/discover the right devices for you, the io effect of asmlib is largely the same as doing async io to raw devices." In practice, ASMLIB and raw devices perform essentially the same.

Possible drawbacks of ASMLIB:

Conclusion: my personal view is to avoid ASMLIB where possible, although of course that is not a decision a DBA can make alone. It is also partly a matter of habit: early releases of RHEL 4 did not ship a device management service like udev, which led to many ASM+RAC systems on RHEL 4 using ASMLIB. As readers have pointed out, udev was introduced as a new feature of kernel 2.6 and a udev binding service was already included in the initial RHEL 4 releases, but in practice udev was not widely used in the RHEL 4 era. (In Linux 2.6, a new feature was introduced to simplify device management and hot plug capabilities. This feature is called udev and is a standard package in RHEL4 or Oracle Enterprise Linux 4 (OEL4) as well as Novell's SLES9 and SLES10.) On RHEL/OEL 5 there is every reason to use udev and drop ASMLIB.

Reference:
ASMLIB Performance vs Udev
RAC+ASM 3 years in production: Stories to share
How To Setup ASM & ASMLIB On Native Linux Multipath Mapper disks? [ID 602952.1]
ASMLib and Linux block devices

 


1.2 Installing Oracle 11gR2 + ASM on RHEL 6.4 using udev

Test environment

OS: Oracle Linux Server release 6.4 x64 RHEL6.4

Database: Oracle Database 11gR2 x64 (11.2.0.1.0)

VMware: VMware Workstation 10.0.0 build-812388

Tools:

  1. xmanager-passive
  2. XSHELL

     

1.2.1 Check the hardware

    Before starting the installation, check that your hardware and software meet the requirements.

    On the hardware side, check the memory and CPU characteristics with:

    # more /proc/meminfo

    # more /proc/cpuinfo

    The memory requirement is at least 1 GB.

    The following checks are optional:

    # df -k /dev/shm      check shared memory

    # df -k /tmp          check temporary disk space

    # more /proc/version  check the operating system version

    # uname -r            check the kernel version

     

    Memory:

    # grep MemTotal /proc/meminfo

    Swap space:

    # grep SwapTotal /proc/meminfo

    Disk space:

    # df -ah

    # free

    # free -m

     

     

    Minimum: 1 GB of RAM

     

    Recommended: 2 GB of RAM or more


    To determine the RAM size, enter the following command:

    # grep MemTotal /proc/meminfo

     

    Swap check

     

     

    RAM between 1 GB and 2 GB:  swap = 1.5 times the size of the RAM

    RAM between 2 GB and 16 GB: swap = equal to the size of the RAM

    RAM more than 16 GB:        swap = 16 GB

    # grep SwapTotal /proc/meminfo

     

    To determine the available RAM and swap space, enter the following command:

     

    # free -m
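    The memory and swap checks can be combined into a small script that compares physical RAM against the swap sizing rules in the table above (a minimal sketch; the thresholds simply follow that table):

    #!/bin/bash
    # Compare installed RAM with the recommended swap size
    ram_kb=$(grep MemTotal /proc/meminfo | awk '{print $2}')
    swap_kb=$(grep SwapTotal /proc/meminfo | awk '{print $2}')
    ram_mb=$((ram_kb / 1024))
    if   [ "$ram_mb" -le 2048 ];  then need_mb=$((ram_mb * 3 / 2))   # 1-2 GB RAM  -> 1.5 x RAM
    elif [ "$ram_mb" -le 16384 ]; then need_mb=$ram_mb               # 2-16 GB RAM -> 1 x RAM
    else                               need_mb=16384                 # > 16 GB RAM -> 16 GB
    fi
    echo "RAM: ${ram_mb} MB, swap: $((swap_kb / 1024)) MB, recommended swap: ${need_mb} MB"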

     

     

    The following tables describe the disk space requirements for software files and data files for each installation type on Linux x86:

    Installation Type      Software Files (GB)   Data Files (GB)

    Enterprise Edition     3.95                  1.7

    Standard Edition       3.88                  1.5

     

    [root@yutian ~]# cat /proc/version

    Linux version 2.6.18-164.el5 (mockbuild@x86-002.build.bos.redhat.com) (gcc version 4.1.2 20080704 (Red Hat 4.1.2-46)) #1 SMP Tue Aug 18 15:51:54 EDT 2009

    [root@yutian ~]# lsb_release -id

    Distributor ID: RedHatEnterpriseServer

    Description: Red Hat Enterprise Linux Server release 5.4 (Tikanga)

    [root@yutian ~]#

1.2.2 Check the required packages

    You can check them all at once:

    rpm -q binutils

    compat-libstdc++-33

    elfutils-libelf

    gcc

    gcc-c++

    glibc

    glibc-common

    glibc-devel

    glibc-headers

    ksh

    libaio

    libaio-devel

    libgomp

    libgcc

    libstdc++

    libstdc++-devel

    make

    sysstat

    unixODBC

    unixODBC-devel

    numactl-devel

     

    Packages that are not installed show up as "is not installed":

    [root@rhel6_lhr ~]# rpm -q binutils

    > compat-libstdc++-33

    > elfutils-libelf

    > gcc

    > gcc-c++

    > glibc

    > glibc-common

    > glibc-devel

    > glibc-headers

    > ksh

    > libaio

    > libaio-devel

    > libgomp

    > libgcc

    > libstdc++

    > libstdc++-devel

    > make

    > sysstat

    > unixODBC

    > unixODBC-devel

    > numactl-devel

    binutils-2.20.51.0.2-5.36.el6.x86_64

    compat-libstdc++-33-3.2.3-69.el6.x86_64

    compat-libstdc++-33-3.2.3-69.el6.i686

    elfutils-libelf-0.152-1.el6.x86_64

    gcc-4.4.7-4.el6.x86_64

    gcc-c++-4.4.7-4.el6.x86_64

    glibc-2.12-1.132.el6.x86_64

    glibc-common-2.12-1.132.el6.x86_64

    glibc-devel-2.12-1.132.el6.x86_64

    glibc-headers-2.12-1.132.el6.x86_64

    package ksh is not installed

    libaio-0.3.107-10.el6.x86_64

    libaio-devel-0.3.107-10.el6.x86_64

    libaio-devel-0.3.107-10.el6.i686

    libgomp-4.4.7-4.el6.x86_64

    libgcc-4.4.7-4.el6.x86_64

    libstdc++-4.4.7-4.el6.x86_64

    libstdc++-devel-4.4.7-4.el6.x86_64

    libstdc++-devel-4.4.7-4.el6.i686

    make-3.81-20.el6.x86_64

    sysstat-9.0.4-22.el6.x86_64

    unixODBC-2.2.14-12.el6_3.x86_64

    unixODBC-2.2.14-12.el6_3.i686

    unixODBC-devel-2.2.14-12.el6_3.i686

    unixODBC-devel-2.2.14-12.el6_3.x86_64

    package numactl-devel is not installed

    [root@rhel6_lhr ~]#

     

    If some packages are missing, they can be installed in a batch; when dependencies are missing you may need to run the command a couple of times, or adjust things by hand. A batch install is sketched below.
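    A minimal sketch of such a batch install with yum, using the package names from the list above (a configured YUM repository or mounted install media is assumed):

    # Install all required packages in one go; already-installed ones are skipped
    yum install -y binutils compat-libstdc++-33 elfutils-libelf elfutils-libelf-devel gcc gcc-c++ \
        glibc glibc-common glibc-devel glibc-headers ksh libaio libaio-devel libgomp libgcc \
        libstdc++ libstdc++-devel make sysstat unixODBC unixODBC-devel numactl-devel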

     

     

     

     

     

     

  Check whether the following packages are installed; install any that are missing first:

     

    rpm -qa | grep binutils-

    rpm -qa | grep compat-libstdc++-

    rpm -qa | grep elfutils-libelf-

    rpm -qa | grep elfutils-libelf-devel-

    rpm -qa | grep glibc-

    rpm -qa | grep glibc-common-

    rpm -qa | grep glibc-devel-

    rpm -qa | grep gcc-

    rpm -qa | grep gcc-c++-

    rpm -qa | grep libaio- 

    rpm -qa | grep libaio-devel-

    rpm -qa | grep libgcc- 

    rpm -qa | grep libstdc++-

    rpm -qa | grep libstdc++-devel- 

    rpm -qa | grep make- 

    rpm -qa | grep sysstat-

    rpm -qa | grep unixODBC-

    rpm -qa | grep unixODBC-devel-

     

    binutils-2.17.50.0.6-2.el5 

    compat-libstdc++-33-3.2.3-61 

    elfutils-libelf-0.125-3.el5

    elfutils-libelf-devel-0.125

    glibc-2.5-12

    glibc-common-2.5-12 

    glibc-devel-2.5-12 

    gcc-4.1.1-52 

    gcc-c++-4.1.1-52

    libaio-0.3.106 

    libaio-devel-0.3.106 

    libgcc-4.1.1-52 

    libstdc++-4.1.1 

    libstdc++-devel-4.1.1-52.e15 

    make-3.81-1.1 

    sysstat-7.0.0 

    unixODBC-2.2.11 

    unixODBC-devel-2.2.11

     

    # rpm -qa | grep make gcc glibc compat openmotif21 setarch    (and so on)

    I recommend checking each package individually with rpm -q packagename; these are official requirements, so it is best to confirm everything is installed and avoid unnecessary trouble during the installation.

    Even though my system was freshly installed, three packages were still missing: libaio-devel, numactl-devel, sysstat.

    Mount the Linux 5 installation DVD and locate the full package file names:

    [root@localhost ~]# mkdir /media/cdrom ; mount /dev/cdrom /media/cdrom

    [root@localhost ~]# ll /media/cdrom/Server/ |grep libaio-devel

    [root@localhost ~]# ll /media/cdrom/Server/ |grep numactl-devel

    [root@localhost ~]# ll /media/cdrom/Server/ |grep sysstat

    【ASM】Oracle ASM + 11gR2 + RHEL6.5 安装「终于解决」

    Install the RPM packages:

    [root@localhost ~]# rpm -ivh /media/cdrom/Server/libaio-devel-0.3.106-3.2.i386.rpm

    [root@localhost ~]# rpm -ivh /media/cdrom/Server/numactl-devel-0.9.8-7.el5.i386.rpm

    [root@localhost ~]# rpm -ivh /media/cdrom/Server/sysstat-7.0.2-3.el5.i386.rpm

     

     

    rpm -ivh compat-libstdc++-33-3.2.3-69.el6.i686.rpm --force --nodeps

    【ASM】Oracle ASM + 11gR2 + RHEL6.5 安装「终于解决」

    [root@localhost RHEL_6.5 x86_64 Disc 1]# pwd

    /media/RHEL_6.5 x86_64 Disc 1

    [root@localhost RHEL_6.5 x86_64 Disc 1]#

     

     

     

     

    In addition, to support ODBC it is recommended to install the following two packages as well:

    unixODBC-2.2.11 (32 bit) or later

    unixODBC-devel-2.2.11 (32 bit) or later

     

    【ASM】Oracle ASM + 11gR2 + RHEL6.5 安装「终于解决」

     

     

1.2.3 Change the hostname

    To make the change permanent:

    [root@zijuan /]# vim /etc/sysconfig/network

    NETWORKING=yes

    NETWORKING_IPV6=yes

    HOSTNAME=zijuan

     

    HOSTNAME=zijuan sets the hostname to zijuan.

    Note: after changing the hostname you must reboot for it to take effect, or simply switch to another user and back.

     

     

    Check /etc/hosts; it must contain a fully qualified name for the server.

     [root@localhost lhr]# cat /etc/hosts

    127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4

    ::1 localhost localhost.localdomain localhost6 localhost6.localdomain6

    192.168.128.131 rhel6_linux_asm

    [root@localhost lhr]# hostname

    localhost.localdomain

    [root@localhost lhr]# hostname rhel6_linux_asm

    [root@localhost lhr]# hostname

    rhel6_linux_asm

     

    Edit the /etc/hosts file:

    [root@oracle ~]# vim /etc/hosts

    127.0.0.1 localhost.localdomain localhost

    ::1 localhost6.localdomain6 localhost6

    192.168.137.112 oracle.domain.com oracle

    Note: map the hostname to the real IP address; otherwise Oracle may create the listener only on 127.0.0.1.

1.2.4 Network configuration

    If you plan to configure EM, set the system to a static IP address at this point; otherwise a change of the database server's IP address may break EM access and cause other problems. (I have not written up static IP configuration yet; please search for it or contact me.)

1.2.5 Disk preparation

    We prepare five disks here:

    Disk 1 holds the operating system

    Disks 2, 3 and 4 are used by ASM to store data

    Disk 5 is used for the FRA

     

     

     

    1. Add the disks

    1.1. Edit the virtual machine settings

    【ASM】Oracle ASM + 11gR2 + RHEL6.5 安装「终于解决」

    1.2. Add hardware

    【ASM】Oracle ASM + 11gR2 + RHEL6.5 安装「终于解决」

    1.3. Add the first disk

    【ASM】Oracle ASM + 11gR2 + RHEL6.5 安装「终于解决」

    1.4. Create a new virtual disk

    【ASM】Oracle ASM + 11gR2 + RHEL6.5 安装「终于解决」

    1.5. Choose the disk type

    【ASM】Oracle ASM + 11gR2 + RHEL6.5 安装「终于解决」

    1.6. Set the disk size

    【ASM】Oracle ASM + 11gR2 + RHEL6.5 安装「终于解决」

    1.7. Finish

    【ASM】Oracle ASM + 11gR2 + RHEL6.5 安装「终于解决」

    1.8.

    Add the second, third, fourth and fifth disks by repeating steps 1 through 7.

    1.9.

    【ASM】Oracle ASM + 11gR2 + RHEL6.5 安装「终于解决」

    Once all five disks have been added, you can do one later step now to avoid another reboot: locate the virtual machine's configuration file and append the line disk.EnableUUID="TRUE". Note that this parameter file must be edited while the virtual machine is powered off.

    【ASM】Oracle ASM + 11gR2 + RHEL6.5 安装「终于解决」

    2. Partition the disks

    After the disks have been added, start the virtual machine, log in as root, and partition the newly added disks.

    [root@localhost share]# fdisk -l | grep "Disk /dev/sd"

    Disk /dev/sde: 10.7 GB, 10737418240 bytes

    Disk /dev/sdd: 10.7 GB, 10737418240 bytes

    Disk /dev/sda: 53.7 GB, 53687091200 bytes

    Disk /dev/sdb: 10.7 GB, 10737418240 bytes

    Disk /dev/sdc: 10.7 GB, 10737418240 bytes

    【ASM】Oracle ASM + 11gR2 + RHEL6.5 安装「终于解决」

    After partitioning, review all the disks.

    【ASM】Oracle ASM + 11gR2 + RHEL6.5 安装「终于解决」

    Note: we only partition the disks here; we do not create file systems on them or mount them. A scripted version of the partitioning is sketched below.
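    For reference, creating a single primary partition on each new disk can be scripted by feeding fdisk its interactive answers (a rough sketch that assumes /dev/sdb through /dev/sde are the new, empty disks; double-check the device names before running anything like this):

    # n = new partition, p = primary, 1 = partition number,
    # two blank answers accept the default first/last cylinders, w = write
    for d in /dev/sdb /dev/sdc /dev/sdd /dev/sde; do
        echo -e "n\np\n1\n\n\nw" | fdisk "$d"
    done
    fdisk -l | grep "Disk /dev/sd"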

1.2.6 Create directories, users, etc.

    1. Create the users and groups

    Use the /usr/sbin/groupadd command.

    -- The oracle user may already exist from a previous installation; that does no harm.

    Commands:

    groupadd oinstall

    groupadd dba

    groupadd oper

    groupadd asmadmin

    groupadd asmoper

    groupadd asmdba

     

    -- Create the users and add them to the groups

    useradd -g oinstall -G dba,asmdba,oper,asmadmin oracle

    useradd -g oinstall -G asmadmin,asmdba,asmoper,dba grid

     

    -- Set the passwords

    passwd oracle

    passwd grid

    echo oracle | passwd --stdin oracle

    echo grid | passwd --stdin grid

    -- Check the users

    [root@rhel_linux_asm ~]# id oracle

    uid=501(oracle) gid=502(dba) groups=502(dba),501(oinstall),504(asmadmin),506(asmdba)

    [root@rhel_linux_asm ~]# id grid

    uid=502(grid) gid=501(oinstall) groups=501(oinstall),502(dba),504(asmadmin),505(asmoper),506(asmdba)

    [root@rhel_linux_asm ~]#

    2. Create the directories and configure the grid and oracle users' profiles

    Commands:

    -- As root:

    mkdir -p /u01/app/oracle

    mkdir -p /u01/app/grid

    mkdir -p /u01/app/grid/11.2.0

    chown -R grid:oinstall /u01/app/grid      -- change the owner of /u01/app/grid to grid

    chown -R oracle:oinstall /u01/app/oracle

    chmod -R 775 /u01

     

    -------- Oracle user: switch to the oracle user --------

    [root@rhel_linux_asm ~]# su - oracle

    [oracle@rhel_linux_asm ~]$ vi ~/.bash_profile

    export ORACLE_SID=orcl

    export ORACLE_BASE=/u01/app/oracle

    export ORACLE_HOME=$ORACLE_BASE/product/11.2.0/dbhome_1

    export LD_LIBRARY_PATH=$ORACLE_HOME/lib

    export TMP=/tmp

    export TMPDIR=$TMP

    export PATH=$PATH:$ORACLE_HOME/bin

     

    -------- Grid user: switch to the grid user --------

     

    cd /home/grid

    vim .bash_profile

    export ORACLE_SID=+ASM

    export ORACLE_BASE=/u01/app/grid

    export ORACLE_HOME=/u01/app/grid/11.2.0

    export LD_LIBRARY_PATH=$ORACLE_HOME/lib

    export PATH=$ORACLE_HOME/bin:$PATH

    umask 022

     

    [oracle@dbserver1 ~]$ source .bash_profile

     

1.2.7 Manage the disks with udev

    1. Obtain the scsi_id values that udev will bind

    Note the following two points. First, switch to the root user.

    5.1. The location of the scsi_id command differs between operating systems.

    [root@localhost ~]# cat /etc/issue

    Oracle Linux Server release 6.4

    Kernel \r on an \m

     

    [root@localhost ~]# which scsi_id

    /sbin/scsi_id

    [root@localhost ~]#

    5.2. Edit the /etc/scsi_id.config file; if it does not exist, create it and add the following line:

    [root@localhost ~]# vi /etc/scsi_id.config

    options=--whitelisted --replace-whitespace

    [root@localhost ~]#

    5.3. In a VMware virtual machine, running scsi_id directly may return no ID until a VMware configuration parameter is changed. If you already did this when adding the disks, skip this step and go straight to collecting the UUIDs.

    [root@localhost ~]# scsi_id --whitelisted --replace-whitespace --device=/dev/sdb

    [root@localhost ~]# scsi_id --whitelisted --replace-whitespace --device=/dev/sdc

    D:\VMs\Oracle Database 11gR2\Oracle Database 11gR2.vmx

    Edit this file with a text editor and append one new line:

    disk.EnableUUID="TRUE"

    Save the file and restart the virtual machine. Again, the .vmx file must be edited while the virtual machine is powered off. (You may also see scsi_id -g -u /dev/sdc used to obtain the UUID, but the -g and -u options are no longer needed from RHEL 6 onward.)

    [root@localhost share]# scsi_id --whitelisted --replace-whitespace --device=/dev/sdb

    36000c29fbe57659626ee89b4fba07616

    [root@localhost share]# scsi_id --whitelisted --replace-whitespace --device=/dev/sdc

    36000c29384cde894e087e5f0fcaa80f4

    [root@localhost share]# scsi_id --whitelisted --replace-whitespace --device=/dev/sdd

    36000c29022aee23728231ed9b1f9743d

    [root@localhost share]# scsi_id --whitelisted --replace-whitespace --device=/dev/sde

    36000c2938f431664218d1d2632ff1352
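    Collecting the IDs one by one gets tedious; a small loop over the candidate disks prints them all at once (a sketch assuming the ASM candidates are /dev/sdb through /dev/sde, as in this environment):

    # Print the scsi_id of every ASM candidate disk
    for d in b c d e; do
        printf "/dev/sd%s : %s\n" "$d" "$(/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/sd$d)"
    done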

    2. Create and configure the udev rules file

    [root@localhost ~]# vi /etc/udev/rules.d/99-oracle-asmdevices.rules

    KERNEL=="sd*", SUBSYSTEM=="block", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="36000c29fe0fc917d7e9982742a28ce7c", NAME="asm-diskb", OWNER="grid", GROUP="asmadmin", MODE="0660"

    KERNEL=="sd*", SUBSYSTEM=="block", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="36000c293ffc0900fd932348de4b6baf8", NAME="asm-diskc", OWNER="grid", GROUP="asmadmin", MODE="0660"

    Change the RESULT values to the IDs obtained in step 5.

    Note that each KERNEL entry must stay on a single line with no line breaks; I made that mistake myself at first.

    【ASM】Oracle ASM + 11gR2 + RHEL6.5 安装「终于解决」

    Rules for the four disks:

    KERNEL=="sd*", SUBSYSTEM=="block", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="36000c29346c1344ffb26f0e5603d519e", NAME="asm-diskb", OWNER="grid", GROUP="asmadmin", MODE="0660"

    KERNEL=="sd*", SUBSYSTEM=="block", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="36000c29d08ee059a345571054517cd03", NAME="asm-diskc", OWNER="grid", GROUP="asmadmin", MODE="0660"

    KERNEL=="sd*", SUBSYSTEM=="block", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="36000c295037a910bfb765af8f400aa07", NAME="asm-diskd", OWNER="grid", GROUP="asmadmin", MODE="0660"

    KERNEL=="sd*", SUBSYSTEM=="block", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="36000c2982bda048f642acd3c429ec983", NAME="asm-diske", OWNER="grid", GROUP="asmadmin", MODE="0660"

    【ASM】Oracle ASM + 11gR2 + RHEL6.5 安装「终于解决」

     

     

    Note: it is better to change GROUP="asmadmin" here to GROUP="asmdba"; otherwise dbca may not be able to see the disk group when you create the database instance later. A small script for generating the rules file is sketched after the screenshot below.

    【ASM】Oracle ASM + 11gR2 + RHEL6.5 安装「终于解决」
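    If you prefer not to hand-edit the RESULT values, the rules file can also be generated from the current scsi_id output (a minimal sketch, assuming /dev/sdb through /dev/sde are the ASM disks and the grid:asmadmin ownership used above; run as root):

    # Generate /etc/udev/rules.d/99-oracle-asmdevices.rules from the live scsi_id values
    RULES=/etc/udev/rules.d/99-oracle-asmdevices.rules
    > $RULES
    for d in b c d e; do
        ID=$(/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/sd$d)
        echo "KERNEL==\"sd*\", SUBSYSTEM==\"block\", PROGRAM==\"/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/\$name\", RESULT==\"$ID\", NAME=\"asm-disk$d\", OWNER=\"grid\", GROUP=\"asmadmin\", MODE=\"0660\"" >> $RULES
    done
    cat $RULES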

    3. After adding the rules, restart udev. The way to restart it differs between Linux distributions.

    This step is a little slow -- roughly 30 seconds or so, so be patient.

    [root@localhost ~]# start_udev

    Starting udev: [ OK ]

    [root@localhost ~]#

    4. Check the bound ASM devices; if the asm disks are still not visible at this point, reboot the operating system and check again.

    [root@localhost ~]# ll /dev/asm*

    brw-rw—- 1 grid asmadmin 8, 17 Oct 17 14:26 /dev/asm-diskb

    brw-rw—- 1 grid asmadmin 8, 33 Oct 17 14:26 /dev/asm-diskc

    【ASM】Oracle ASM + 11gR2 + RHEL6.5 安装「终于解决」

     

1.2.8 Adjust kernel parameters

      1. Edit /etc/security/limits.conf and append the following at the end of the file:

    [root@localhost ~]# tail -8 /etc/security/limits.conf

    # add by lhr for oracle and grid on 2014-05-02

    oracle soft nproc 2047

    oracle hard nproc 16384

    oracle soft nofile 1024

    oracle hard nofile 65536

    grid soft nproc 2047

    grid hard nproc 16384

    grid soft nofile 1024

    grid hard nofile 65536

    【ASM】Oracle ASM + 11gR2 + RHEL6.5 安装「终于解决」
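    After the limits are added, they can be spot-checked from a fresh login shell; limits.conf only affects new sessions, so log in again as oracle or grid (a quick sketch):

    # Show the process and open-file limits that a new session actually receives
    su - oracle -c 'ulimit -u; ulimit -n'
    su - grid -c 'ulimit -u; ulimit -n'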

    2. Edit /etc/pam.d/login and append the following at the end of the file:

    [root@localhost ~]# tail -1 /etc/pam.d/login

    session required pam_limits.so

    [root@localhost ~]#

    3. Edit /etc/profile to set shell limits, appending the following at the end of the file:

    [root@localhost ~]# tail -9 /etc/profile

    if [ $USER = "oracle" ] || [ $USER = "grid" ]; then

    if [ $SHELL = "/bin/ksh" ]; then

    ulimit -p 16384

    ulimit -n 65536

    else

    ulimit -u 16384 -n 65536

    fi

    umask 022

    fi

     

    4. /etc/sysctl.conf

    Configuring Kernel Parameters for Linux

     

    vim /etc/sysctl.conf

    fs.aio-max-nr = 1048576

    fs.file-max = 6815744

    kernel.shmall = 2097152

    kernel.shmmax = 4294967295

    kernel.shmmni = 4096

    kernel.sem = 250 32000 100 128

    net.ipv4.ip_local_port_range = 9000 65500

    net.core.rmem_default = 262144

    net.core.rmem_max = 4194304

    net.core.wmem_default = 262144

    net.core.wmem_max = 1048576

     

    Apply the settings:

    # /sbin/sysctl -p
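    To confirm the values are active, they can be read back individually (a quick sketch; the parameter names are the ones set above):

    # Read back the key kernel parameters that were just set
    /sbin/sysctl kernel.shmmax kernel.shmall kernel.sem fs.file-max fs.aio-max-nr net.ipv4.ip_local_port_range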

     

     

     

1.2.10 Install Grid Infrastructure

    1. Upload linux.x64_11gR2_grid.zip via ZMODEM to the grid user's home directory /home/grid/

    Any other file transfer tool can of course be used instead to upload the package:

    【ASM】Oracle ASM + 11gR2 + RHEL6.5 安装「终于解决」

    【ASM】Oracle ASM + 11gR2 + RHEL6.5 安装「终于解决」

    [root@localhost ~]# ll /home/grid/

    total 1028228

    -rw-r–r– 1 root root 1052897657 Oct 16 13:22 linux.x64_11gR2_grid.zip

    [root@localhost ~]#

    2. Unpack the file

    [root@localhost grid]# unzip linux.x64_11gR2_grid.zip

    [root@localhost grid]# ll

    total 1028232

    drwxr-xr-x 8 root root 4096 Aug 21 2009 grid

    -rw-r–r– 1 root root 1052897657 Oct 16 13:22 linux.x64_11gR2_grid.zip

    [root@localhost grid]#

    3. Log in as the grid user and run the installer.

      Logs

    The installation log is written to:

    /u01/app/oraInventory/logs/installActions2014-06-14_10-32-53PM.log

     

     

     

    16.1. Check that the installer script is executable

    [grid@localhost grid]$ id

    uid=501(grid) gid=500(oinstall) groups=500(oinstall),501(dba),503(asmadmin),504(asmoper),505(asmdba)

    [grid@localhost grid]$ ll runInstaller

    -rwxr-xr-x 1 root root 3227 Aug 15 2009 runInstaller

    [grid@localhost grid]$

    If it is not executable, grant permission with:

    [root@localhost ~]# chown -R grid:oinstall /home/grid/grid/

    [root@localhost ~]# ll /home/grid/grid/runInstaller

    -rwxr-xr-x 1 grid oinstall 3227 Aug 15 2009 /home/grid/grid/runInstaller

    [root@localhost ~]#

     

    16.2. Run the installer script /home/grid/grid/runInstaller

    First start Xmanager - Passive, then set up the Xshell session as follows:

    [grid@rhel_linux_asm grid]$ clear

    [grid@rhel_linux_asm grid]$ export DISPLAY=192.168.1.100:0.0    # the IP address of your local workstation (ipconfig)

    [grid@rhel_linux_asm grid]$ xhost +

    access control disabled, clients can connect from any host

    [grid@rhel_linux_asm grid]$ ls

    doc install response rpm runcluvfy.sh runInstaller sshsetup stage welcome.html

    [grid@rhel_linux_asm grid]$ ./runInstaller

    Starting Oracle Universal Installer…

     

    Checking Temp space: must be greater than 120 MB. Actual 31642 MB Passed

    Checking swap space: must be greater than 150 MB. Actual 383 MB Passed

    Checking monitor: must be configured to display at least 256 colors. Actual 16777216 Passed

    Preparing to launch Oracle Universal Installer from /tmp/OraInstall2014-04-29_10-53-18PM. Please wait …[grid@rhel_linux_asm grid]$

     

    Screenshots:

    【ASM】Oracle ASM + 11gR2 + RHEL6.5 安装「终于解决」

     

    17. Installation steps

    [grid@localhost ~]$ /home/grid/grid/runInstaller

    Starting Oracle Universal Installer…

     

    Checking Temp space: must be greater than 120 MB. Actual 38826 MB Passed

    Checking swap space: must be greater than 150 MB. Actual 4095 MB Passed

    Checking monitor: must be configured to display at least 256 colors. Actual 16777216 Passed

    Preparing to launch Oracle Universal Installer from /tmp/OraInstall2013-10-17_03-31-41PM. Please wait …[grid@localhost ~]$

    17.1.

    【ASM】Oracle ASM + 11gR2 + RHEL6.5 安装「终于解决」

    17.2.

    【ASM】Oracle ASM + 11gR2 + RHEL6.5 安装「终于解决」

    17.3.

    【ASM】Oracle ASM + 11gR2 + RHEL6.5 安装「终于解决」

    17.4.

    【ASM】Oracle ASM + 11gR2 + RHEL6.5 安装「终于解决」 17.5.

    【ASM】Oracle ASM + 11gR2 + RHEL6.5 安装「终于解决」

    17.6.

    17.6.1. Prerequisite checks

    【ASM】Oracle ASM + 11gR2 + RHEL6.5 安装「终于解决」

    17.6.2.

    【ASM】Oracle ASM + 11gR2 + RHEL6.5 安装「终于解决」

    Run the fixup script:

    [root@localhost ~]# /tmp/CVU_11.2.0.1.0_grid/runfixup.sh

    Response file being used is :/tmp/CVU_11.2.0.1.0_grid/fixup.response

    Enable file being used is :/tmp/CVU_11.2.0.1.0_grid/fixup.enable

    Log file location: /tmp/CVU_11.2.0.1.0_grid/orarun.log

    Setting Kernel Parameters…

    kernel.sem = 250 32000 100 128

    fs.file-max = 6815744

    net.ipv4.ip_local_port_range = 9000 65500

    net.core.rmem_default = 262144

    net.core.wmem_default = 262144

    net.core.rmem_max = 4194304

    net.core.wmem_max = 1048576

    fs.aio-max-nr = 1048576

     

    17.6.3. Install missing packages

    # yum install -y package_name

    【ASM】Oracle ASM + 11gR2 + RHEL6.5 安装「终于解决」

    These packages are actually installed; the check fails only because the versions shipped with Oracle Linux 6.4 are newer than the versions the check expects, so these warnings can be ignored.

    The Oracle Linux 6.4 installation media does not include the pdksh package; installing ksh is sufficient.

    # yum install -y ksh

    The NTP check fails because no NTP time server is configured; this can also be ignored.

    17.6.4.

    【ASM】Oracle ASM + 11gR2 + RHEL6.5 安装「终于解决」

    17.7.

    【ASM】Oracle ASM + 11gR2 + RHEL6.5 安装「终于解决」

    17.8.

    【ASM】Oracle ASM + 11gR2 + RHEL6.5 安装「终于解决」

    17.9. The installation takes a while; please be patient.

    【ASM】Oracle ASM + 11gR2 + RHEL6.5 安装「终于解决」

    Run the root scripts

    During the installation a dialog box appears asking you to run two scripts as root:

    [root@localhost ~]# /u01/app/oraInventory/orainstRoot.sh

    Changing permissions of /u01/app/oraInventory.

    Adding read,write permissions for group.

    Removing read,write,execute permissions for world.

     

    Changing groupname of /u01/app/oraInventory to oinstall.

    The execution of the script is complete.

    [root@localhost ~]# /u01/app/11.2.0/grid/root.sh

    Running Oracle 11g root.sh script…

     

    The following environment variables are set as:

    ORACLE_OWNER= grid

    ORACLE_HOME= /u01/app/11.2.0/grid

     

    Enter the full pathname of the local bin directory: [/usr/local/bin]:    -- press Enter to accept the default

    Copying dbhome to /usr/local/bin …

    Copying oraenv to /usr/local/bin …

    Copying coraenv to /usr/local/bin …

     

    Creating /etc/oratab file…

    Entries will be added to the /etc/oratab file as needed by

    Database Configuration Assistant when a database is created

    Finished running generic part of root.sh script.

    Now product-specific root actions will be performed.

    To configure Grid Infrastructure for a Stand-Alone Server run the following command as the root user:    -- for a single-instance (Oracle Restart) install, run the following script

    /u01/app/11.2.0/grid/perl/bin/perl -I/u01/app/11.2.0/grid/perl/lib -I/u01/app/11.2.0/grid/crs/install /u01/app/11.2.0/grid/crs/install/roothas.pl

     

     

    To configure Grid Infrastructure for a Cluster perform the following steps:

    1. Provide values for Grid Infrastructure configuration parameters in the file – /u01/app/11.2.0/grid/crs/install/crsconfig_params. For details on how to do this, see the installation guide.

    2. Run the following command as the root user:

    /u01/app/11.2.0/grid/perl/bin/perl -I/u01/app/11.2.0/grid/perl/lib -I/u01/app/11.2.0/grid/crs/install /u01/app/11.2.0/grid/crs/install/rootcrs.pl

    To update inventory properties for Grid Infrastructure, perform the following

    steps. If a pre-11.2 home is already configured, execute the following:

    11.2_Home/oui/bin/runInstaller -updateNodeList -silent -local CRS=false ORACLE_HOME=pre-11.2_Home

    Always execute the following to register the current home:

    11.2_Home/oui/bin/runInstaller -updateNodeList -silent -local CRS=true ORACLE_HOME=11.2_Home.

    If either home is shared, provide the additional argument -cfs.

    Screenshots:

    【ASM】Oracle ASM + 11gR2 + RHEL6.5 安装「终于解决」

    【ASM】Oracle ASM + 11gR2 + RHEL6.5 安装「终于解决」

     

    Now run the script as instructed:

    [root@localhost ~]# /u01/app/11.2.0/grid/perl/bin/perl -I/u01/app/11.2.0/grid/perl/lib -I/u01/app/11.2.0/grid/crs/install /u01/app/11.2.0/grid/crs/install/roothas.pl

    2013-10-17 16:18:19: Checking for super user privileges

    2013-10-17 16:18:19: User has super user privileges

    2013-10-17 16:18:19: Parsing the host name

    Using configuration parameter file: /u01/app/11.2.0/grid/crs/install/crsconfig_params

    Creating trace directory

    /u01/app/11.2.0/grid/bin/clscfg.bin: error while loading shared libraries: libcap.so.1: cannot open shared object file: No such file or directory    -- error

    Failed to create keys in the OLR, rc = 32512, 32512

    OLR configuration failed

    [root@localhost ~]#

    【ASM】Oracle ASM + 11gR2 + RHEL6.5 安装「终于解决」

     

    Resolving the error:

    17.9.1. Check whether libcap.so is installed

    On a 64-bit system, install both the i686 and x86_64 packages.

    # yum install -y libcap*.i686

    # yum install -y libcap*.x86_64

    [root@localhost ~]# rpm -qa | grep libcap

    libcap-ng-0.6.4-3.el6_0.1.i686

    libcap-2.16-5.5.el6.x86_64

    libcap-devel-2.16-5.5.el6.x86_64

    libcap-devel-2.16-5.5.el6.i686

    libcap-ng-0.6.4-3.el6_0.1.x86_64

    libcap-2.16-5.5.el6.i686

    libcap-ng-devel-0.6.4-3.el6_0.1.i686

    libcap-ng-devel-0.6.4-3.el6_0.1.x86_64

    [root@localhost ~]#

    17.9.2. Check the libcap.so files

    [root@localhost ~]# ll /lib64/libcap.so*

    lrwxrwxrwx 1 root root 11 Oct 17 16:28 /lib64/libcap.so -> libcap.so.2

    lrwxrwxrwx. 1 root root 14 Oct 16 15:22 /lib64/libcap.so.2 -> libcap.so.2.16

    -rwxr-xr-x 1 root root 19016 Oct 13 2011 /lib64/libcap.so.2.16

    [root@localhost ~]#

     

    17.9.3. Create the missing libcap.so.1 symlink

    [root@localhost ~]# ln -s /lib64/libcap.so.2.16 /lib64/libcap.so.1

    [root@localhost ~]# ln -s /lib64/libcap.so.2 /lib64/libcap.so

    [root@localhost ~]# ll /lib64/libcap.so*

    lrwxrwxrwx 1 root root 11 Oct 17 16:28 /lib64/libcap.so -> libcap.so.2

    lrwxrwxrwx 1 root root 21 Oct 17 17:01 /lib64/libcap.so.1 -> /lib64/libcap.so.2.16

    lrwxrwxrwx. 1 root root 14 Oct 16 15:22 /lib64/libcap.so.2 -> libcap.so.2.16

    -rwxr-xr-x 1 root root 19016 Oct 13 2011 /lib64/libcap.so.2.16

    [root@localhost ~]#

    17.9.4. Re-run the /u01/app/11.2.0/grid/root.sh script

    [root@localhost ~]# /u01/app/11.2.0/grid/root.sh

    Running Oracle 11g root.sh script…

     

    The following environment variables are set as:

    ORACLE_OWNER= grid

    ORACLE_HOME= /u01/app/11.2.0/grid

     

    Enter the full pathname of the local bin directory: [/usr/local/bin]:

    The file “dbhome” already exists in /usr/local/bin. Overwrite it? (y/n)

    [n]: y    -- enter y to overwrite

    Copying dbhome to /usr/local/bin ...

    The file "oraenv" already exists in /usr/local/bin. Overwrite it? (y/n)

    [n]: y    -- enter y to overwrite

    Copying oraenv to /usr/local/bin ...

    The file "coraenv" already exists in /usr/local/bin. Overwrite it? (y/n)

    [n]: y    -- enter y to overwrite

    Copying coraenv to /usr/local/bin …

     

    Entries will be added to the /etc/oratab file as needed by

    Database Configuration Assistant when a database is created

    Finished running generic part of root.sh script.

    Now product-specific root actions will be performed.

    To configure Grid Infrastructure for a Stand-Alone Server run the following command as the root user:

    /u01/app/11.2.0/grid/perl/bin/perl -I/u01/app/11.2.0/grid/perl/lib -I/u01/app/11.2.0/grid/crs/install /u01/app/11.2.0/grid/crs/install/roothas.pl

     

     

    To configure Grid Infrastructure for a Cluster perform the following steps:

    1. Provide values for Grid Infrastructure configuration parameters in the file – /u01/app/11.2.0/grid/crs/install/crsconfig_params. For details on how to do this, see the installation guide.

    2. Run the following command as the root user:

    /u01/app/11.2.0/grid/perl/bin/perl -I/u01/app/11.2.0/grid/perl/lib -I/u01/app/11.2.0/grid/crs/install /u01/app/11.2.0/grid/crs/install/rootcrs.pl

    To update inventory properties for Grid Infrastructure, perform the following

    steps. If a pre-11.2 home is already configured, execute the following:

    11.2_Home/oui/bin/runInstaller -updateNodeList -silent -local CRS=false ORACLE_HOME=pre-11.2_Home

    Always execute the following to register the current home:

    11.2_Home/oui/bin/runInstaller -updateNodeList -silent -local CRS=true ORACLE_HOME=11.2_Home.

    If either home is shared, provide the additional argument -cfs.

    [root@localhost ~]# /u01/app/11.2.0/grid/perl/bin/perl -I/u01/app/11.2.0/grid/perl/lib -I/u01/app/11.2.0/grid/crs/install /u01/app/11.2.0/grid/crs/install/roothas.pl

    2013-10-17 17:04:58: Checking for super user privileges

    2013-10-17 17:04:58: User has super user privileges

    2013-10-17 17:04:58: Parsing the host name

    Using configuration parameter file: /u01/app/11.2.0/grid/crs/install/crsconfig_params

    Improper Oracle Clusterware configuration found on this host

    Deconfigure the existing cluster configuration before starting    -- error

    to configure a new Clusterware

    run “/u01/app/11.2.0/grid/crs/install/rootcrs.pl -deconfig”

    to configure existing failed configuration and then rerun root.sh

     

    17.9.5. Fix the error from step 4)

    [root@localhost ~]# /u01/app/11.2.0/grid/crs/install/roothas.pl -verbose -delete -force

    2013-10-17 18:25:15: Checking for super user privileges

    2013-10-17 18:25:15: User has super user privileges

    2013-10-17 18:25:15: Parsing the host name

    Using configuration parameter file: /u01/app/11.2.0/grid/crs/install/crsconfig_params

    CRS-4639: Could not contact Oracle High Availability Services

    CRS-4000: Command Stop failed, or completed with errors.

    CRS-4639: Could not contact Oracle High Availability Services

    CRS-4000: Command Delete failed, or completed with errors.

    CRS-4544: Unable to connect to OHAS

    CRS-4000: Command Stop failed, or completed with errors.

    /u01/app/11.2.0/grid/bin/acfsdriverstate: line 51: /lib/acfstoolsdriver.sh: No such file or directory

    /u01/app/11.2.0/grid/bin/acfsdriverstate: line 51: exec: /lib/acfstoolsdriver.sh: cannot execute: No such file or directory

    Successfully deconfigured Oracle Restart stack

    [root@localhost ~]#

    17.9.6. Re-run the /u01/app/11.2.0/grid/root.sh script

    [root@localhost ~]# /u01/app/11.2.0/grid/root.sh

    Running Oracle 11g root.sh script…

     

    The following environment variables are set as:

    ORACLE_OWNER= grid

    ORACLE_HOME= /u01/app/11.2.0/grid

     

    Enter the full pathname of the local bin directory: [/usr/local/bin]:

    The file “dbhome” already exists in /usr/local/bin. Overwrite it? (y/n)

    [n]: y

    Copying dbhome to /usr/local/bin …

    The file “oraenv” already exists in /usr/local/bin. Overwrite it? (y/n)

    [n]: y

    Copying oraenv to /usr/local/bin …

    The file “coraenv” already exists in /usr/local/bin. Overwrite it? (y/n)

    [n]: y

    Copying coraenv to /usr/local/bin …

     

    Entries will be added to the /etc/oratab file as needed by

    Database Configuration Assistant when a database is created

    Finished running generic part of root.sh script.

    Now product-specific root actions will be performed.

    To configure Grid Infrastructure for a Stand-Alone Server run the following command as the root user:

    /u01/app/11.2.0/grid/perl/bin/perl -I/u01/app/11.2.0/grid/perl/lib -I/u01/app/11.2.0/grid/crs/install /u01/app/11.2.0/grid/crs/install/roothas.pl

     

     

    To configure Grid Infrastructure for a Cluster perform the following steps:

    1. Provide values for Grid Infrastructure configuration parameters in the file – /u01/app/11.2.0/grid/crs/install/crsconfig_params. For details on how to do this, see the installation guide.

    2. Run the following command as the root user:

    /u01/app/11.2.0/grid/perl/bin/perl -I/u01/app/11.2.0/grid/perl/lib -I/u01/app/11.2.0/grid/crs/install /u01/app/11.2.0/grid/crs/install/rootcrs.pl

    To update inventory properties for Grid Infrastructure, perform the following

    steps. If a pre-11.2 home is already configured, execute the following:

    11.2_Home/oui/bin/runInstaller -updateNodeList -silent -local CRS=false ORACLE_HOME=pre-11.2_Home

    Always execute the following to register the current home:

    11.2_Home/oui/bin/runInstaller -updateNodeList -silent -local CRS=true ORACLE_HOME=11.2_Home.

    If either home is shared, provide the additional argument -cfs.

    [root@localhost ~]# /u01/app/11.2.0/grid/perl/bin/perl -I/u01/app/11.2.0/grid/perl/lib -I/u01/app/11.2.0/grid/crs/install /u01/app/11.2.0/grid/crs/install/roothas.pl

    2013-10-17 18:27:55: Checking for super user privileges

    2013-10-17 18:27:55: User has super user privileges

    2013-10-17 18:27:55: Parsing the host name

    Using configuration parameter file: /u01/app/11.2.0/grid/crs/install/crsconfig_params

    LOCAL ADD MODE

    Creating OCR keys for user “grid”, privgrp “oinstall”..

    Operation successful.

    CRS-4664: Node localhost successfully pinned.

    Adding daemon to inittab

    CRS-4124: Oracle High Availability Services startup failed.    -- error

    CRS-4000: Command Start failed, or completed with errors.

    ohasd failed to start: Inappropriate ioctl for device

    ohasd failed to start: Inappropriate ioctl for device at /u01/app/11.2.0/grid/crs/install/roothas.pl line 296.

    [root@localhost ~]#

    This is a bug in 11.2.0.1; it does not occur if you install 11.2.0.3.

    17.9.7. Work around the bug

    17.9.7.1. Roll back what /u01/app/11.2.0/grid/root.sh did

    [root@localhost ~]# /u01/app/11.2.0/grid/crs/install/roothas.pl -verbose -delete -force

    2013-10-17 18:45:42: Checking for super user privileges

    2013-10-17 18:45:42: User has super user privileges

    2013-10-17 18:45:42: Parsing the host name

    Using configuration parameter file: /u01/app/11.2.0/grid/crs/install/crsconfig_params

    CRS-4639: Could not contact Oracle High Availability Services

    CRS-4000: Command Stop failed, or completed with errors.

    CRS-4639: Could not contact Oracle High Availability Services

    CRS-4000: Command Delete failed, or completed with errors.

    CRS-4544: Unable to connect to OHAS

    CRS-4000: Command Stop failed, or completed with errors.

    /u01/app/11.2.0/grid/bin/acfsdriverstate: line 51: /lib/acfstoolsdriver.sh: No such file or directory

    /u01/app/11.2.0/grid/bin/acfsdriverstate: line 51: exec: /lib/acfstoolsdriver.sh: cannot execute: No such file or directory

    Successfully deconfigured Oracle Restart stack

    [root@localhost ~]#

    17.9.7.2. While root.sh is running, as soon as it prints "Adding daemon to inittab", immediately run the following command as root:

    # /bin/dd if=/var/tmp/.oracle/npohasd of=/dev/null bs=1024 count=1

    If you get /bin/dd: opening `/var/tmp/.oracle/npohasd': No such file or directory, the pipe file has not been created yet; keep re-running the command until it succeeds (a simple retry loop is sketched below).

    Open two SSH sessions and perform the following two steps in parallel.
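    A minimal sketch of the retry loop, to run in the second session while root.sh executes in the first (it simply repeats the dd command above until the npohasd pipe exists and can be read):

    # Keep trying to read from the npohasd pipe so that ohasd can start
    until /bin/dd if=/var/tmp/.oracle/npohasd of=/dev/null bs=1024 count=1 2>/dev/null; do
        sleep 1
    done
    echo "npohasd pipe opened"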

    [root@localhost ~]# /bin/dd if=/var/tmp/.oracle/npohasd of=/dev/null bs=1024 count=1

    /bin/dd: opening `/var/tmp/.oracle/npohasd': No such file or directory

    [root@localhost ~]# /bin/dd if=/var/tmp/.oracle/npohasd of=/dev/null bs=1024 count=1

    [root@localhost ~]# /u01/app/11.2.0/grid/root.sh

    Running Oracle 11g root.sh script…

     

    The following environment variables are set as:

    ORACLE_OWNER= grid

    ORACLE_HOME= /u01/app/11.2.0/grid

     

    Enter the full pathname of the local bin directory: [/usr/local/bin]:

    The file “dbhome” already exists in /usr/local/bin. Overwrite it? (y/n)

    [n]: y

    Copying dbhome to /usr/local/bin …

    The file “oraenv” already exists in /usr/local/bin. Overwrite it? (y/n)

    [n]: y

    Copying oraenv to /usr/local/bin …

    The file “coraenv” already exists in /usr/local/bin. Overwrite it? (y/n)

    [n]: y

    Copying coraenv to /usr/local/bin …

     

    Entries will be added to the /etc/oratab file as needed by

    Database Configuration Assistant when a database is created

    Finished running generic part of root.sh script.

    Now product-specific root actions will be performed.

    To configure Grid Infrastructure for a Stand-Alone Server run the following command as the root user:

    /u01/app/11.2.0/grid/perl/bin/perl -I/u01/app/11.2.0/grid/perl/lib -I/u01/app/11.2.0/grid/crs/install /u01/app/11.2.0/grid/crs/install/roothas.pl

     

     

    To configure Grid Infrastructure for a Cluster perform the following steps:

    1. Provide values for Grid Infrastructure configuration parameters in the file – /u01/app/11.2.0/grid/crs/install/crsconfig_params. For details on how to do this, see the installation guide.

    2. Run the following command as the root user:

    /u01/app/11.2.0/grid/perl/bin/perl -I/u01/app/11.2.0/grid/perl/lib -I/u01/app/11.2.0/grid/crs/install /u01/app/11.2.0/grid/crs/install/rootcrs.pl

    To update inventory properties for Grid Infrastructure, perform the following

    steps. If a pre-11.2 home is already configured, execute the following:

    11.2_Home/oui/bin/runInstaller -updateNodeList -silent -local CRS=false ORACLE_HOME=pre-11.2_Home

    Always execute the following to register the current home:

    11.2_Home/oui/bin/runInstaller -updateNodeList -silent -local CRS=true ORACLE_HOME=11.2_Home.

    If either home is shared, provide the additional argument -cfs.

    [root@localhost ~]# /u01/app/11.2.0/grid/perl/bin/perl -I/u01/app/11.2.0/grid/perl/lib -I/u01/app/11.2.0/grid/crs/install /u01/app/11.2.0/grid/crs/install/roothas.pl

    2013-10-17 18:55:45: Checking for super user privileges

    2013-10-17 18:55:45: User has super user privileges

    2013-10-17 18:55:45: Parsing the host name

    Using configuration parameter file: /u01/app/11.2.0/grid/crs/install/crsconfig_params

    LOCAL ADD MODE

    Creating OCR keys for user “grid”, privgrp “oinstall”..

    Operation successful.

    CRS-4664: Node localhost successfully pinned.

    Adding daemon to inittab

    CRS-4123: Oracle High Availability Services has been started.

    ohasd is starting

    ADVM/ACFS is not supported on redhat-release-server-6Server-6.4.0.4.0.1.el6.x86_64

    localhost 2013/10/17 18:56:11 /u01/app/11.2.0/grid/cdata/localhost/backup_20131017_185611.olr

    Successfully configured Oracle Grid Infrastructure for a Standalone Server

    [root@localhost ~]#

     

    17.10. Finish the Grid installation

    【ASM】Oracle ASM + 11gR2 + RHEL6.5 安装「终于解决」

    The 11.2.0.3.0 root script

    The newer release does not run into these errors:

    [root@rhel6_lhr oraInventory]# /u01/app/grid/11.2.0/root.sh

    Performing root user operation for Oracle 11g

     

    The following environment variables are set as:

    ORACLE_OWNER= grid

    ORACLE_HOME= /u01/app/grid/11.2.0

     

    Enter the full pathname of the local bin directory: [/usr/local/bin]:

    The contents of “dbhome” have not changed. No need to overwrite.

    The contents of “oraenv” have not changed. No need to overwrite.

    The contents of “coraenv” have not changed. No need to overwrite.

     

     

    Creating /etc/oratab file…

    Entries will be added to the /etc/oratab file as needed by

    Database Configuration Assistant when a database is created

    Finished running generic part of root script.

    Now product-specific root actions will be performed.

    Using configuration parameter file: /u01/app/grid/11.2.0/crs/install/crsconfig_params

    Creating trace directory

    LOCAL ADD MODE

    Creating OCR keys for user “grid”, privgrp “oinstall”..

    Operation successful.

    LOCAL ONLY MODE

    Successfully accumulated necessary OCR keys.

    Creating OCR keys for user “root”, privgrp “root”..

    Operation successful.

    CRS-4664: Node rhel6_lhr successfully pinned.

    Adding Clusterware entries to upstart

     

    rhel6_lhr 2014/06/14 22:42:26 /u01/app/grid/11.2.0/cdata/rhel6_lhr/backup_20140614_224226.olr

    Successfully configured Oracle Grid Infrastructure for a Standalone Server

    18. Create the disk groups with asmca

    [grid@rhel6_lhr ~]$ export DISPLAY=192.168.1.100:0.0

    [grid@rhel6_lhr ~]$ xhost +

    Run the asmca command as the grid user.

    【ASM】Oracle ASM + 11gR2 + RHEL6.5 安装「终于解决」

    【ASM】Oracle ASM + 11gR2 + RHEL6.5 安装「终于解决」

    Creating a disk group with asmca requires the ASM instance to be running; after clicking Yes we get an error:

    【ASM】Oracle ASM + 11gR2 + RHEL6.5 安装「终于解决」

    Check the logs to resolve it, or start the ASM instance from the command line:

     

    18.1.

    SYS Password — sys

    ASMSNMP Password — asmsnmp

     

    【ASM】Oracle ASM + 11gR2 + RHEL6.5 安装「终于解决」

    18.2.

    【ASM】Oracle ASM + 11gR2 + RHEL6.5 安装「终于解决」

    Step 1. Name the disk group

    Step 2. Choose the redundancy level

    High: three copies of each allocation unit (so at least three disks are required)

    Normal: one mirror copy, i.e. two-way mirroring (the default)

    External: no mirroring; it is assumed that an underlying volume manager or storage array is providing whatever RAID level it deems appropriate.

    Step 3. Add the disk paths

    Step 4. Enter the disk path and name

    18.3.

    Here I created only two disks and one disk group, and did not create an FRA area. If you created three or more disks, you can reserve some of them for the FRA later (the FRA is the Fast/Flash Recovery Area).

    18.4. Click Create ASM to create the Disk Group

    【ASM】Oracle ASM + 11gR2 + RHEL6.5 安装「终于解决」

    【ASM】Oracle ASM + 11gR2 + RHEL6.5 安装「终于解决」

    If clicking Yes produces the following error, the Oracle Grid Infrastructure set up earlier was not configured properly and needs to be reconfigured:

    【ASM】Oracle ASM + 11gR2 + RHEL6.5 安装「终于解决」

    Once Oracle Grid Infrastructure is configured properly, click Create ASM again to create the Disk Group.

    【ASM】Oracle ASM + 11gR2 + RHEL6.5 安装「终于解决」

    Wait a moment (about a minute):

    【ASM】Oracle ASM + 11gR2 + RHEL6.5 安装「终于解决」

    【ASM】Oracle ASM + 11gR2 + RHEL6.5 安装「终于解决」

    18.5. The Disk Group has been created; exit.

     

     

1.2.11 Create the listener with netmgr

    After Grid Infrastructure is installed, the listener is managed by grid, so the listener is created under the grid user as well. The procedure is the same as the one shown later for the oracle user.

     

    【ASM】Oracle ASM + 11gR2 + RHEL6.5 安装「终于解决」

    As the grid user, run crs_stat -t to check whether ASM is set up correctly; output like the following means everything is OK.

    [grid@localhost ~]$ crs_stat -t

    Name Type Target State Host

    ————————

    ora.DATA.dg ora….up.type ONLINE ONLINE localhost

    ora.asm ora.asm.type ONLINE ONLINE localhost

    ora.cssd ora.cssd.type ONLINE ONLINE localhost

    ora.diskmon ora….on.type ONLINE ONLINE localhost

    【ASM】Oracle ASM + 11gR2 + RHEL6.5 安装「终于解决」

     

    [grid@localhost ~]$

     

    [grid@rhel_linux_asm ~]$ echo $ORACLE_SID

    +ASM

    [grid@rhel_linux_asm ~]$ sqlplus / as sysasm

     

    SQL*Plus: Release 11.2.0.1.0 Production on Mon Apr 28 23:07:24 2014

     

    Copyright (c) 1982, 2009, Oracle. All rights reserved.

     

    Connected to an idle instance.

     

    SQL> startup

    ASM instance started

     

    Total System Global Area 283930624 bytes

    Fixed Size 2212656 bytes

    Variable Size 256552144 bytes

    ASM Cache 25165824 bytes

    ASM diskgroups mounted

    SQL> select * from v$version;

     

    BANNER

    ——————————————————————————–

    Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 – 64bit Production

    PL/SQL Release 11.2.0.1.0 – Production

    CORE 11.2.0.1.0 Production

    TNS for Linux: Version 11.2.0.1.0 – Production

    NLSRTL Version 11.2.0.1.0 – Production

    SQL> select name,total_mb from v$asm_diskgroup;

     

    NAME TOTAL_MB

    —————————— ———-

    DATA 20480

     

    SQL> select name,group_number,file_number,alias_index,alias_directory,system_created from v$asm_alias;

     

1.2.12 Build the Oracle database

    If the Oracle database software has already been installed on this system, there is no need to install it again; just create a new instance.

    20.1. Log in as the oracle user and upload the installation packages to the oracle user's home directory.

    【ASM】Oracle ASM + 11gR2 + RHEL6.5 安装「终于解决」

     

    【ASM】Oracle ASM + 11gR2 + RHEL6.5 安装「终于解决」

    20.2. Unpack the two archives

    [oracle@localhost ~]$ ll linux*

    -rw-r–r– 1 oracle oinstall 1239269270 Apr 18 20:44 linux.x64_11gR2_database_1of2.zip

    -rw-r–r– 1 oracle oinstall 1111416131 Apr 18 20:47 linux.x64_11gR2_database_2of2.zip

    [oracle@localhost ~]$ unzip linux.x64_11gR2_database_1of2.zip && unzip linux.x64_11gR2_database_2of2.zip

    20.3. Run runInstaller

    [oracle@localhost ~]$ /home/oracle/database/runInstaller

    Starting Oracle Universal Installer…

     

    Checking Temp space: must be greater than 120 MB. Actual 30971 MB Passed

    Checking swap space: must be greater than 150 MB. Actual 4088 MB Passed

    Checking monitor: must be configured to display at least 256 colors. Actual 16777216 Passed

    Preparing to launch Oracle Universal Installer from /tmp/OraInstall2013-10-17_08-01-38PM. Please wait …[oracle@localhost ~]$

    20.4.

    【ASM】Oracle ASM + 11gR2 + RHEL6.5 安装「终于解决」

    20.5.

    【ASM】Oracle ASM + 11gR2 + RHEL6.5 安装「终于解决」

    20.6.

    【ASM】Oracle ASM + 11gR2 + RHEL6.5 安装「终于解决」

    20.7.

     

    When choosing the character set here, it is best to include Simplified Chinese; otherwise the OEM web pages may display garbled characters later.

    【ASM】Oracle ASM + 11gR2 + RHEL6.5 安装「终于解决」

    20.8.

    【ASM】Oracle ASM + 11gR2 + RHEL6.5 安装「终于解决」

    20.9.

    【ASM】Oracle ASM + 11gR2 + RHEL6.5 安装「终于解决」

    20.10.

    【ASM】Oracle ASM + 11gR2 + RHEL6.5 安装「终于解决」

    20.11.

    【ASM】Oracle ASM + 11gR2 + RHEL6.5 安装「终于解决」

    20.12.

    【ASM】Oracle ASM + 11gR2 + RHEL6.5 安装「终于解决」

    20.13.

    【ASM】Oracle ASM + 11gR2 + RHEL6.5 安装「终于解决」

    The installation takes a while; please be patient.

    【ASM】Oracle ASM + 11gR2 + RHEL6.5 安装「终于解决」

    Near the end of the installation a window pops up asking you to run one script as root.

    【ASM】Oracle ASM + 11gR2 + RHEL6.5 安装「终于解决」

    [root@localhost ~]# /u01/app/oracle/product/11.2.0/dbhome_1/root.sh

    Running Oracle 11g root.sh script…

     

    The following environment variables are set as:

    ORACLE_OWNER= oracle

    ORACLE_HOME= /u01/app/oracle/product/11.2.0/dbhome_1

     

    Enter the full pathname of the local bin directory: [/usr/local/bin]:

    The file “dbhome” already exists in /usr/local/bin. Overwrite it? (y/n)

    [n]: y

    Copying dbhome to /usr/local/bin …

    The file “oraenv” already exists in /usr/local/bin. Overwrite it? (y/n)

    [n]: y

    Copying oraenv to /usr/local/bin …

    The file “coraenv” already exists in /usr/local/bin. Overwrite it? (y/n)

    [n]: y

    Copying coraenv to /usr/local/bin …

     

    Entries will be added to the /etc/oratab file as needed by

    Database Configuration Assistant when a database is created

    Finished running generic part of root.sh script.

    Now product-specific root actions will be performed.

    Finished product-specific root actions.

    [root@localhost ~]#

    20.14. Finish the installation

    【ASM】Oracle ASM + 11gR2 + RHEL6.5 安装「终于解决」

    Check the group ownership of the $ORACLE_HOME/bin/oracle file

    This step is not mandatory; if dbca cannot see the disk group later when you create the database, come back and do it.

    In RAC or Oracle Restart configurations, the group of the oracle executable is asmadmin.

    【ASM】Oracle ASM + 11gR2 + RHEL6.5 安装「终于解决」

     

    [root@khm5 bin]# chown oracle:asmadmin oracle

    [root@khm5 bin]# ls -l oracle

    -rwxr-x--x 1 oracle asmadmin 232399473 Apr 19 07:04 oracle

     

    [root@khm5 bin]# chmod +s oracle

    [root@khm5 bin]# ls -l oracle

    -rwsr-s--x 1 oracle asmadmin 232399473 Apr 19 07:04 oracle

     

     

1.2.13 Create the listener with netmgr (not needed under the oracle user)

    Make sure the environment variables are configured first, or the command will not be found.

    This step just creates the listener file; you can also copy it from elsewhere or create it by hand: /u01/app/oracle/product/11.2.0/dbhome_1/network/admin/listener.ora

    [oracle@rhel_linux_asm grid]$ netmgr

    21.1.

    【ASM】Oracle ASM + 11gR2 + RHEL6.5 安装「终于解决」

    21.2.

    【ASM】Oracle ASM + 11gR2 + RHEL6.5 安装「终于解决」

    21.3.

    【ASM】Oracle ASM + 11gR2 + RHEL6.5 安装「终于解决」

    21.4.

    【ASM】Oracle ASM + 11gR2 + RHEL6.5 安装「终于解决」

    21.5.

    【ASM】Oracle ASM + 11gR2 + RHEL6.5 安装「终于解决」

    21.6.

    Click File -> Save Network Configuration

    Then exit. Afterwards the following file exists:

    [oracle@rhel_linux_asm admin]$ pwd

    /u01/app/oracle/product/11.2.0/dbhome_1/network/admin

    [oracle@rhel_linux_asm admin]$ ll

    total 8

    drwxr-xr-x. 2 oracle dba 4096 Apr 28 23:24 samples

    -rw-r–r–. 1 oracle dba 187 May 7 2007 shrept.lst

    [oracle@rhel_linux_asm admin]$ netmgr

    [oracle@rhel_linux_asm admin]$ ll

    total 12

    -rw-r–r–. 1 oracle dba 475 Apr 28 23:47 listener.ora

    drwxr-xr-x. 2 oracle dba 4096 Apr 28 23:24 samples

    -rw-r–r–. 1 oracle dba 187 May 7 2007 shrept.lst

    [oracle@rhel_linux_asm admin]$ cat listener.ora

    # listener.ora Network Configuration File: /u01/app/oracle/product/11.2.0/dbhome_1/network/admin/listener.ora

    # Generated by Oracle configuration tools.

     

    SID_LIST_LISTENER =

    (SID_LIST =

    (SID_DESC =

    (GLOBAL_DBNAME = orclasm)

    (ORACLE_HOME = /u01/app/oracle/product/11.2.0/dbhome_1)

    (SID_NAME = orclasm)

    )

    )

     

    LISTENER =

    (DESCRIPTION =

    (ADDRESS = (PROTOCOL = TCP)(HOST = rhel_linux_asm)(PORT = 1521))

    )

     

    ADR_BASE_LISTENER = /u01/app/oracle

     

     

1.2.14 Create the database with dbca

    Run the dbca command as the oracle user to create a database:

    27.2.

    【ASM】Oracle ASM + 11gR2 + RHEL6.5 安装「终于解决」

    27.3.

    【ASM】Oracle ASM + 11gR2 + RHEL6.5 安装「终于解决」

    27.4.

    【ASM】Oracle ASM + 11gR2 + RHEL6.5 安装「终于解决」

    27.5.

    【ASM】Oracle ASM + 11gR2 + RHEL6.5 安装「终于解决」

    27.6.

    【ASM】Oracle ASM + 11gR2 + RHEL6.5 安装「终于解决」

    27.7.

    【ASM】Oracle ASM + 11gR2 + RHEL6.5 安装「终于解决」

    27.8.

    If you want to enable the flash recovery area and archiving, choose the FRA disk group here; otherwise you can leave it unselected.

    【ASM】Oracle ASM + 11gR2 + RHEL6.5 安装「终于解决」【ASM】Oracle ASM + 11gR2 + RHEL6.5 安装「终于解决」

    27.9.

    【ASM】Oracle ASM + 11gR2 + RHEL6.5 安装「终于解决」

    27.10.

    【ASM】Oracle ASM + 11gR2 + RHEL6.5 安装「终于解决」

    27.11.

    【ASM】Oracle ASM + 11gR2 + RHEL6.5 安装「终于解决」

    27.12.

    【ASM】Oracle ASM + 11gR2 + RHEL6.5 安装「终于解决」

     

    【ASM】Oracle ASM + 11gR2 + RHEL6.5 安装「终于解决」

    27.13.

    【ASM】Oracle ASM + 11gR2 + RHEL6.5 安装「终于解决」

     

    You can follow the logs while the database is being created:

    [root@rhel_linux_asm ~]# cd /u01/app/grid/cfgtoollogs/dbca/oralasm2/

    [root@rhel_linux_asm oralasm2]# tail -f trace.log

    。。。。。。。

    Datafile

    “+DATA/oralasm2/datafile/system.256.845998107”,

    “+DATA/oralasm2/datafile/sysaux.257.845998109”,

    “+DATA/oralasm2/datafile/undotbs1.258.845998109”,

    “+DATA/oralasm2/datafile/users.259.845998109”

    [Thread-151] [ 2014-04-27 15:29:49.328 CST ] [CloneDBCreationStep.executeImpl:419] Length of OriginalRedoLogsGrNames=3

    [Thread-151] [ 2014-04-27 15:29:49.330 CST ] [CloneDBCreationStep.executeImpl:427] 0th redoLogText = GROUP 1 SIZE 51200K

    [Thread-151] [ 2014-04-27 15:29:49.331 CST ] [CloneDBCreationStep.executeImpl:427] 1th redoLogText = GROUP 2 SIZE 51200K

    [Thread-151] [ 2014-04-27 15:29:49.331 CST ] [CloneDBCreationStep.executeImpl:427] 2th redoLogText = GROUP 3 SIZE 51200K

    [Thread-151] [ 2014-04-27 15:29:49.332 CST ] [CloneDBCreationStep.executeImpl:448] createCTLSql=Create controlfile reuse set database “oralasm2”

    MAXINSTANCES 8

    MAXLOGHISTORY 1

    MAXLOGFILES 16

    MAXLOGMEMBERS 3

    MAXDATAFILES 100

    Datafile

    “+DATA/oralasm2/datafile/system.256.845998107”,

    “+DATA/oralasm2/datafile/sysaux.257.845998109”,

    “+DATA/oralasm2/datafile/undotbs1.258.845998109”,

    “+DATA/oralasm2/datafile/users.259.845998109”

    LOGFILE GROUP 1 SIZE 51200K,

    GROUP 2 SIZE 51200K,

    GROUP 3 SIZE 51200K RESETLOGS;

    [Thread-151] [ 2014-04-27 15:29:55.330 CST ] [CloneDBCreationStep.executeImpl:460] calling zerodbid

    [Thread-151] [ 2014-04-27 15:30:02.332 CST ] [CloneDBCreationStep.executeImpl:470] Shutdown database

    [Thread-151] [ 2014-04-27 15:30:02.334 CST ] [CloneDBCreationStep.executeImpl:492] Startup ……nomount……

    [Thread-151] [ 2014-04-27 15:30:04.131 CST ] [CloneDBCreationStep.executeImpl:500] deleting dummy control file from v$controlfile: +DATA/oralasm2/controlfile/current.260.845998195

    [Thread-151] [ 2014-04-27 15:30:08.881 CST ] [CloneDBCreationStep.executeImpl:511] Enabling restricted session.

    [Thread-151] [ 2014-04-27 15:30:11.028 CST ] [CloneDBCreationStep.executeImpl:513] alter database “oralasm2” open resetlogs;

    [Thread-151] [ 2014-04-27 15:30:29.193 CST ] [CloneDBCreationStep.executeImpl:521] Removing existing services from sourcedb seeddata

    [Thread-151] [ 2014-04-27 15:30:30.025 CST ] [CloneDBCreationStep.executeImpl:526] Renaming globale_name

    [Thread-151] [ 2014-04-27 15:30:30.163 CST ] [CloneDBCreationStep.executeImpl:542] control file from v$controlfile: +DATA/oralasm2/controlfile/current.260.845998209

    [Thread-151] [ 2014-04-27 15:30:30.164 CST ] [CloneDBCreationStep.executeImpl:557] controlfiles(“+DATA/oralasm2/controlfile/current.260.845998209”)

    [Thread-151] [ 2014-04-27 15:30:30.186 CST ] [CloneDBCreationStep.executeImpl:601] Temp file to be added:=+DATA/{DB_UNIQUE_NAME}/temp01.dbf

    [Thread-151] [ 2014-04-27 15:30:30.187 CST ] [CloneDBCreationStep.executeImpl:602] Temp file size in KB:=20480

    [Thread-151] [ 2014-04-27 15:30:31.603 CST ] [CloneDBCreationStep.executeImpl:632] Establish USERS as the default permanent tablespace of the database

    [Thread-151] [ 2014-04-27 15:30:31.704 CST ] [TemplateManager.isInstallTemplate:2300] Selected Template by user:=General Purpose

    [Thread-151] [ 2014-04-27 15:30:31.704 CST ] [TemplateManager.isInstallTemplate:2307] The Message Id to be searched:=GENERAL_PURPOSE

    。。。。。。。。。

    27.14.

    【ASM】Oracle ASM + 11gR2 + RHEL6.5 安装「终于解决」

    【ASM】Oracle ASM + 11gR2 + RHEL6.5 安装「终于解决」

  At this point the ASM-based Oracle Database 11g environment is complete.

     

    Disable the firewall -- otherwise clients may not be able to connect

    service iptables stop

    [root@rhel6_lhr ~]# service iptables stop

    iptables: Setting chains to policy ACCEPT: filter [ OK ]

    iptables: Flushing firewall rules: [ OK ]

    iptables: Unloading modules: [ OK ]

    [root@rhel6_lhr ~]#
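    Stopping iptables only lasts until the next reboot; to keep it disabled permanently you can turn the service off with chkconfig (a quick sketch; only do this in a lab environment, or open port 1521 instead):

    # Prevent iptables from starting at boot (lab environment only)
    chkconfig iptables off
    chkconfig --list iptables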

     

    Check whether tnsnames.ora was generated

    Check whether tnsnames.ora exists under /u01/app/oracle/product/11.2.0/dbhome_1/network/admin/; if not, create the file:

    /u01/app/oracle/product/11.2.0/dbhome_1/network/admin/tnsnames.ora

     

    orclasm =

    (DESCRIPTION =

    (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.128.134)(PORT = 1521))

    (CONNECT_DATA =

    (SERVER = DEDICATED)

    (SERVICE_NAME = orclasm)

    )

    )
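With the entry above in place, a quick connectivity check (my own suggested verification, using the orclasm alias defined above) is:

[oracle@rhel6_lhr ~]$ tnsping orclasm
[oracle@rhel6_lhr ~]$ sqlplus system@orclasm

tnsping should report OK with the address taken from the entry; if sqlplus fails with ORA-12514, the database service has probably not yet registered itself with the listener.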

     

     

1. Configure Oracle automatic startup

If the database does not need to start automatically when the system boots, this configuration can be skipped.

1. Edit /etc/oratab

    [oracle@dbserver1 ~]$ vi /etc/oratab

    orcl:/u01/app/oracle/product/11.2.0/db_1:Y

     

1. As the root user, create /etc/init.d/dbora

    [root@dbserver1 ~]# cat /etc/init.d/dbora

    #!/bin/sh

    # chkconfig: 345 99 10

    # description: Oracle auto start-stop script.

    #

    # Set ORA_HOME to be equivalent to the $ORACLE_HOME

    # from which you wish to execute dbstart and dbshut;

    #

    # Set ORA_OWNER to the user id of the owner of the

    # Oracle database in ORA_HOME.

     

    #ORA_HOME=/u01/app/oracle/product/10.2.0/db_1

    #ORA_HOME=/u01/app/oracle/product/11.1.0/db_1

    ORA_HOME=/u01/app/oracle/product/11.2.0/db_1

    ORA_OWNER=oracle

     

    if [ ! -f $ORA_HOME/bin/dbstart ]

    then

echo "Oracle startup: cannot start"

    exit

    fi

     

case "$1" in

"start")

    # Start the Oracle databases:

    # The following command assumes that the oracle login

    # will not prompt the user for any values

su - $ORA_OWNER -c "$ORA_HOME/bin/dbstart $ORA_HOME"

    touch /var/lock/subsys/dbora

    ;;

"stop")

    # Stop the Oracle databases:

    # The following command assumes that the oracle login

    # will not prompt the user for any values

su - $ORA_OWNER -c "$ORA_HOME/bin/dbshut $ORA_HOME"

    rm -f /var/lock/subsys/dbora

    ;;

    esac

     

1. Add the script to the system startup services

     

    [root@dbserver1 ~]# chmod 750 /etc/init.d/dbora

[root@dbserver1 ~]# chkconfig --add dbora
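After registering the service you can, as an optional sanity check (my own addition, not part of the original steps), list the runlevels and exercise the script once by hand:

[root@dbserver1 ~]# chkconfig --list dbora
[root@dbserver1 ~]# service dbora stop
[root@dbserver1 ~]# service dbora start

Because the script's chkconfig header is "345 99 10", it should show as on for runlevels 3, 4 and 5: it starts late (priority 99) at boot and stops early (priority 10) on shutdown.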

     

     

     

1. Verification

    [oracle@localhost ~]$ sqlplus / as sysdba

     

    SQL*Plus: Release 11.2.0.1.0 Production on Thu Oct 17 21:37:22 2013

     

    Copyright (c) 1982, 2009, Oracle. All rights reserved.

     

     

    Connected to:

    Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 – 64bit Production

    With the Partitioning, Automatic Storage Management, OLAP, Data Mining

    and Real Application Testing options

     

    SQL> select * from v$version;

     

    BANNER

    ——————————————–

    Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 – 64bit Production

    PL/SQL Release 11.2.0.1.0 – Production

    CORE 11.2.0.1.0 Production

    TNS for Linux: Version 11.2.0.1.0 – Production

    NLSRTL Version 11.2.0.1.0 – Production

    SQL> select file_name from dba_data_files;

     

    FILE_NAME

    ——————————————–

    +DATA/orcl/datafile/users.259.829084507

    +DATA/orcl/datafile/undotbs1.258.829084505

    +DATA/orcl/datafile/sysaux.257.829084505

    +DATA/orcl/datafile/system.256.829084505

    +DATA/orcl/datafile/example.265.829084649

     

SQL> select name,total_mb,state from v$asm_diskgroup;
SQL> select name,group_number,file_number,alias_index,alias_directory,system_created from v$asm_alias;

# su - grid
$ crsctl check has
$ crsctl check css
$ crsctl check evm
$ crs_stat -t -v
$ ocrcheck

# su - oracle
$ sqlplus / as sysdba
SQL> select name from v$datafile
  2  union all
  3  select name from v$controlfile
  4  union all
  5  select member from v$logfile;

Check the status of the high availability services:
[grid@restart ~]$ crsctl check has
CRS-4638: Oracle High Availability Services is online
[grid@restart ~]$ crsctl check css
CRS-4529: Cluster Synchronization Services is online
[grid@restart ~]$ crsctl check evm
CRS-4533: Event Manager is online

[grid@restart ~]$ crs_stat -t
Name           Type            Target    State     Host
------------------------------------------------------------
ora.CRS.dg     ora....up.type  ONLINE    ONLINE    restart
ora.ER.lsnr    ora....er.type  ONLINE    ONLINE    restart
ora.asm        ora.asm.type    ONLINE    ONLINE    restart
ora.cssd       ora.cssd.type   ONLINE    ONLINE    restart
ora.diskmon    ora....on.type  OFFLINE   OFFLINE
ora.evmd       ora.evm.type    ONLINE    ONLINE    restart
ora.ons        ora.ons.type    OFFLINE   OFFLINE

[grid@restart ~]$ ps -ef | grep asm
grid     16058     1  0 19:56 ?        00:00:00 asm_pmon_+ASM
grid     16060     1  0 19:56 ?        00:00:00 asm_psp0_+ASM
grid     16062     1  0 19:56 ?        00:00:00 asm_vktm_+ASM
grid     16066     1  0 19:56 ?        00:00:00 asm_gen0_+ASM
grid     16068     1  0 19:56 ?        00:00:00 asm_diag_+ASM
grid     16070     1  0 19:56 ?        00:00:00 asm_dia0_+ASM
grid     16072     1  0 19:56 ?        00:00:00 asm_mman_+ASM
grid     16074     1  0 19:56 ?        00:00:00 asm_dbw0_+ASM
grid     16076     1  0 19:56 ?        00:00:00 asm_lgwr_+ASM
grid     16078     1  0 19:56 ?        00:00:00 asm_ckpt_+ASM
grid     16080     1  0 19:56 ?        00:00:00 asm_smon_+ASM
grid     16082     1  0 19:56 ?        00:00:00 asm_rbal_+ASM
grid     16084     1  0 19:56 ?        00:00:00 asm_gmon_+ASM
grid     16086     1  0 19:56 ?        00:00:00 asm_mmon_+ASM
grid     16088     1  0 19:56 ?        00:00:00 asm_mmnl_+ASM
grid     16188 16152  0 19:59 pts/1    00:00:00 grep asm

# cat /etc/oracle/ocr.loc
ocrconfig_loc=/u01/app/11.2.0/grid/cdata/localhost/local.ocr
local_only=TRUE

# /u01/app/11.2.0/grid/bin/ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          3
         Total space (kbytes)     :     262120
         Used space (kbytes)      :        152
         Available space (kbytes) :     261968
         ID                       : 1179492779
         Device/File Name         : /u01/app/11.2.0/grid/cdata/localhost/local.ocr
                                    Device/File integrity check succeeded

    Device/File not configured

    Device/File not configured

    Device/File not configured

    Device/File not configured

    Cluster registry integrity check succeeded

    Logical corruption check succeeded

[grid@restart ~]$ crs_stat -t -v
Name            Type            R/RA   F/FT   Target    State     Host
----------------------------------------------------------------------
ora.ARCH.dg     ora....up.type  0/5    0/     ONLINE    ONLINE    restart
ora.CRS.dg      ora....up.type  0/5    0/     ONLINE    ONLINE    restart
ora.DATA.dg     ora....up.type  0/5    0/     ONLINE    ONLINE    restart
ora.ER.lsnr     ora....er.type  0/5    0/     ONLINE    ONLINE    restart
ora.asm         ora.asm.type    0/5    0/     ONLINE    ONLINE    restart
ora.cssd        ora.cssd.type   0/5    0/5    ONLINE    ONLINE    restart
ora.diskmon     ora....on.type  0/10   0/5    OFFLINE   OFFLINE
ora.evmd        ora.evm.type    0/10   0/5    ONLINE    ONLINE    restart
ora.ons         ora.ons.type    0/3    0/     OFFLINE   OFFLINE
ora.restart.db  ora....se.type  0/2    0/1    ONLINE    ONLINE    restart

SQL> select name from v$datafile
  2  union all
  3  select name from v$controlfile
  4  union all
  5  select member from v$logfile;

NAME
--------------------------------------------------------------------------------
+DATA/restart/datafile/system.260.790288571
+DATA/restart/datafile/sysaux.261.790288633
+DATA/restart/datafile/undotbs1.262.790288683
+DATA/restart/datafile/users.264.790288715
+DATA/restart/controlfile/current.256.790288547
+DATA/restart/onlinelog/group_1.257.790288549
+DATA/restart/onlinelog/group_2.258.790288555
+DATA/restart/onlinelog/group_3.259.790288561

8 rows selected.

SQL> archive log list
Database log mode              No Archive Mode
Automatic archival             Disabled
Archive destination            /u01/app/oracle/product/11.2.0/dbhome_1/dbs/arch
Oldest online log sequence     36
Current log sequence           38

The database is not running in archivelog mode; enable it manually now.

SQL> create pfile='/u01/pfile-0802.bak' from spfile;

    File created.

SQL> alter system set log_archive_dest_1='LOCATION=+ARCH';

    System altered.

SQL> shutdown immediate
Database closed.
Database dismounted.
ORACLE instance shut down.

SQL> startup mount
ORACLE instance started.

Total System Global Area  839282688 bytes
Fixed Size                  2233000 bytes
Variable Size             528485720 bytes
Database Buffers          306184192 bytes
Redo Buffers                2379776 bytes
Database mounted.

    SQL> alter database archivelog;

    Database altered.

    SQL> alter database open;

    Database altered.

SQL> archive log list
Database log mode              Archive Mode
Automatic archival             Enabled
Archive destination            +ARCH
Oldest online log sequence     36
Next log sequence to archive   38
Current log sequence           38

    SQL> select name from v$archived_log;

    no rows selected

    SQL> alter system switch logfile;

    System altered.

    SQL> select name from v$archived_log;

NAME
--------------------------------------------------------------------------------
+ARCH/restart/archivelog/2012_08_02/thread_1_seq_38.256.790292467

Using the listener as an example, test stopping and starting it with srvctl:
[grid@restart ~]$ srvctl status listener
Listener LISTENER is enabled
Listener LISTENER is running on node(s): restart
[grid@restart ~]$ srvctl stop listener
[grid@restart ~]$ srvctl status listener
Listener LISTENER is enabled
Listener LISTENER is not running

[grid@restart ~]$ srvctl start listener
[grid@restart ~]$ crs_stat -t -v
Name            Type            R/RA   F/FT   Target    State     Host
----------------------------------------------------------------------
ora.ARCH.dg     ora....up.type  0/5    0/     ONLINE    ONLINE    restart
ora.CRS.dg      ora....up.type  0/5    0/     ONLINE    ONLINE    restart
ora.DATA.dg     ora....up.type  0/5    0/     ONLINE    ONLINE    restart
ora.ER.lsnr     ora....er.type  0/5    0/     ONLINE    ONLINE    restart
ora.asm         ora.asm.type    0/5    0/     ONLINE    ONLINE    restart
ora.cssd        ora.cssd.type   0/5    0/5    ONLINE    ONLINE    restart
ora.diskmon     ora....on.type  0/10   0/5    OFFLINE   OFFLINE
ora.evmd        ora.evm.type    0/10   0/5    ONLINE    ONLINE    restart
ora.ons         ora.ons.type    0/3    0/     OFFLINE   OFFLINE
ora.restart.db  ora....se.type  0/2    0/1    ONLINE    ONLINE    restart

Next, kill the listener process and see whether it is brought back automatically.

[grid@restart ~]$ ps -ef | grep lsnr
grid     28139     1  0 21:43 ?        00:00:00 /u01/app/11.2.0/grid/bin/tnslsnr LISTENER -inherit
grid     28325 28251  0 21:46 pts/1    00:00:00 grep lsnr

    [grid@restart ~]$ kill -9 28139

A few seconds later it is up again, because the monitoring process only checks at an interval.

[grid@restart ~]$ ps -ef | grep lsnr
grid     28455     1  0 21:47 ?        00:00:00 /u01/app/11.2.0/grid/bin/tnslsnr LISTENER -inherit
grid     28469 28251  0 21:47 pts/1    00:00:00 grep lsnr

Finally, test whether Oracle comes back up automatically after a hard reboot:
# reboot

After the system is back up, wait a moment and check:
[root@restart bin]# ./crs_stat -t
Name            Type            Target    State     Host
------------------------------------------------------------
ora.ARCH.dg     ora....up.type  ONLINE    ONLINE    restart
ora.CRS.dg      ora....up.type  ONLINE    ONLINE    restart
ora.DATA.dg     ora....up.type  ONLINE    ONLINE    restart
ora.ER.lsnr     ora....er.type  ONLINE    ONLINE    restart
ora.asm         ora.asm.type    ONLINE    ONLINE    restart
ora.cssd        ora.cssd.type   ONLINE    ONLINE    restart
ora.diskmon     ora....on.type  OFFLINE   OFFLINE
ora.evmd        ora.evm.type    ONLINE    ONLINE    restart
ora.ons         ora.ons.type    OFFLINE   OFFLINE
ora.restart.db  ora....se.type  ONLINE    ONLINE    restart

     


1. Start CRS

    [root@b1 install]# /u01/app/grid/11.2.0/crs/install/roothas.pl -deconfig -force -verbose

    [root@b1 grid]#/u01/app/grid/11.2.0/root.sh

     

---- Run the following at the same time (in a second session):

    /u01/app/grid/11.2.0/perl/bin/perl -I/u01/app/grid/11.2.0/perl/lib -I/u01/app/grid/11.2.0/crs/install /u01/app/grid/11.2.0/crs/install/roothas.pl

     

    /bin/dd if=/var/tmp/.oracle/npohasd of=/dev/null bs=1024 count=1

     

     

     

--- Start the resources as the grid user:

    crs_start -all

crs_stat -t

    crsctl check css

    crsctl check has


1. Solutions to some common errors:

    CRS-4124: Oracle High Availability Services startup failed.

    CRS-4000: Command Start failed, or completed with errors.

    ohasd failed to start: Inappropriate ioctl for device

ohasd failed to start at /u01/app/11.2.0/grid/crs/install/rootcrs.pl line 443.

I hit this classic 11.2.0.1 problem the first time I installed 11gR2 RAC. A quick search showed that it is a known bug, and the workaround is simple:

just run the following command before executing root.sh

    /bin/dd if=/var/tmp/.oracle/npohasd of=/dev/null bs=1024 count=1

If you see

/bin/dd: opening `/var/tmp/.oracle/npohasd': No such file or directory

it means the file has not been created yet; keep re-running the dd command until it succeeds. It usually becomes possible around the time root.sh prints the message "Adding daemon to inittab".
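To avoid retyping the dd command by hand, a small loop can wait for the pipe and then attach the reader. This is just a convenience sketch of the same workaround (my own addition), run as root in a second terminal while root.sh is executing:

# wait until root.sh has created the named pipe, then start the dd reader once
while [ ! -e /var/tmp/.oracle/npohasd ]; do
    sleep 1
done
/bin/dd if=/var/tmp/.oracle/npohasd of=/dev/null bs=1024 count=1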

Another workaround is to change the ownership of the file:

    chown root:oinstall /var/tmp/.oracle/npohasd

Before re-running root.sh, do not forget to remove the old configuration first: /u01/app/11.2.0/grid/crs/install/roothas.pl -deconfig -force -verbose

     

The following error is raised when starting the ASM instance:

    [grid@b1 ~]$ sqlplus / as sysasm

    SQL*Plus: Release 11.2.0.1.0 Production on Thu Sep 12 18:14:13 2013

    Copyright (c) 1982, 2009, Oracle. All rights reserved.

    Connected to an idle instance.

SQL> startup
ORA-01078: failure in processing system parameters
ORA-29701: unable to connect to Cluster Synchronization Service

     

Then, checking with crsctl check css reports the following error:

[grid@b1 ~]$ crsctl check css
CRS-4639: Could not contact Oracle High Availability Services
CRS-4000: Command Check failed, or completed with errors.

     

The steps used to resolve CRS-4639: Could not contact Oracle High Availability Services were as follows:

[root@b1 grid]# cd /u01/app/11.2.0/grid/crs/install
[root@b1 install]# ./roothas.pl -deconfig -force -verbose
2013-09-12 19:25:05: Checking for super user privileges
2013-09-12 19:25:05: User has super user privileges
2013-09-12 19:25:05: Parsing the host name
Using configuration parameter file: ./crsconfig_params
CRS-4639: Could not contact Oracle High Availability Services
CRS-4000: Command Stop failed, or completed with errors.
CRS-4639: Could not contact Oracle High Availability Services
CRS-4000: Command Delete failed, or completed with errors.
Failure at scls_scr_getval with code 1
Internal Error Information:
  Category: -2
  Operation: opendir
  Location: scrsearch1
  Other: cant open scr home dir scls_scr_getval
  System Dependent Information: 2

CRS-4544: Unable to connect to OHAS
CRS-4000: Command Stop failed, or completed with errors.
ACFS-9200: Supported
Successfully deconfigured Oracle Restart stack

[root@b1 install]# cd /u01/app/11.2.0/grid/
[root@b1 grid]# ./root.sh
Running Oracle 11g root.sh script...

The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME=  /u01/app/11.2.0/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The file "dbhome" already exists in /usr/local/bin. Overwrite it? (y/n) [n]: y
   Copying dbhome to /usr/local/bin ...
The file "oraenv" already exists in /usr/local/bin. Overwrite it? (y/n) [n]: y
   Copying oraenv to /usr/local/bin ...
The file "coraenv" already exists in /usr/local/bin. Overwrite it? (y/n) [n]: y
   Copying coraenv to /usr/local/bin ...

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root.sh script.
Now product-specific root actions will be performed.
2013-09-12 19:27:31: Checking for super user privileges
2013-09-12 19:27:31: User has super user privileges
2013-09-12 19:27:31: Parsing the host name
Using configuration parameter file: /u01/app/11.2.0/grid/crs/install/crsconfig_params
LOCAL ADD MODE
Creating OCR keys for user 'grid', privgrp 'oinstall'..
Operation successful.
CRS-4664: Node b1 successfully pinned.
Adding daemon to inittab
CRS-4123: Oracle High Availability Services has been started.
ohasd is starting

b1     2013/09/12 19:29:12     /u01/app/11.2.0/grid/cdata/b1/backup_20130912_192912.olr
Successfully configured Oracle Grid Infrastructure for a Standalone Server
Updating inventory properties for clusterware
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 4094 MB    Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /u01/app/oraInventory
'UpdateNodeList' was successful.

     

[grid@b1 ~]$ crs_stat -t
Name            Type            Target    State     Host
------------------------------------------------------------
ora.cssd        ora.cssd.type   OFFLINE   OFFLINE
ora.diskmon     ora....on.type  OFFLINE   OFFLINE
[grid@b1 ~]$ crs_start -all
Attempting to start `ora.diskmon` on member `b1`
Attempting to start `ora.cssd` on member `b1`
Start of `ora.diskmon` on member `b1` succeeded.
Start of `ora.cssd` on member `b1` succeeded.

    [grid@b1 ~]$ sqlplus / as sysasm

    SQL*Plus: Release 11.2.0.1.0 Production on Thu Sep 12 19:34:50 2013

    Copyright (c) 1982, 2009, Oracle. All rights reserved.

    Connected to an idle instance.

SQL> startup
ASM instance started

Total System Global Area  283930624 bytes
Fixed Size                  2212656 bytes
Variable Size             256552144 bytes
ASM Cache                  25165824 bytes
ASM diskgroups mounted
ASM diskgroups volume enabled

     

    1. ORA-29701: unable to connect to Cluster Synchronization Service

[grid@vm11gr2] /home/grid> sqlplus "/as sysasm"

SQL*Plus: Release 11.2.0.1.0 Production on Sun Oct 25 10:16:21 2009
Copyright (c) 1982, 2009, Oracle. All rights reserved.
Connected to an idle instance.

SQL> startup
ORA-01078: failure in processing system parameters
ORA-29701: unable to connect to Cluster Synchronization Service
SQL>

The instance cannot connect to the CSS service. Check at the operating system level:

[grid@vm11gr2] /home/grid> crsctl check css
CRS-4530: Communications failure contacting Cluster Synchronization Services daemon
[grid@vm11gr2] /home/grid> ps -ef|grep cssd

As expected, there is no CSS daemon process. Now check the status of HAS (High Availability Services):

[grid@vm11gr2] /home/grid> crsctl check has
CRS-4638: Oracle High Availability Services is online
[grid@vm11gr2] /home/grid> ps -ef|grep d.bin
grid 5886 1 0 10:06 ? 00:00:01 /u01/app/grid/product/11.2/grid/bin/ohasd.bin reboot

So the HAS service is indeed running, and ora.cssd and ora.diskmon are resources maintained by HAS. Look at the state of each resource:

[grid@vm11gr2] /home/grid> crs_stat -t
Name                Type                Target    State     Host
--------------------------------------------------------------
ora.FLASH_DATA.dg   ora.diskgroup.type  OFFLINE   OFFLINE   vm11gr2
ora.SYS_DATA.dg     ora.diskgroup.type  OFFLINE   OFFLINE   vm11gr2
ora.asm             ora.asm.type        OFFLINE   OFFLINE   vm11gr2
ora.cssd            ora.cssd.type       OFFLINE   OFFLINE   vm11gr2
ora.diskmon         ora.diskmon.type    OFFLINE   OFFLINE   vm11gr2

[grid@vm11gr2] /home/grid> crsctl status resource -t
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER         STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.FLASH_DATA.dg
               OFFLINE OFFLINE      vm11gr2
ora.SYS_DATA.dg
               OFFLINE OFFLINE      vm11gr2
ora.asm
               OFFLINE OFFLINE      vm11gr2
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.cssd
      1        OFFLINE OFFLINE
ora.diskmon
      1        OFFLINE OFFLINE

Now look at the attributes of ora.cssd and ora.diskmon:

[grid@vm11gr2] /home/grid> crs_stat -p ora.cssd
NAME=ora.cssd
TYPE=ora.cssd.type
ACTION_SCRIPT=
ACTIVE_PLACEMENT=0
AUTO_START=never
CHECK_INTERVAL=30
DESCRIPTION="Resource type for CSSD"
FAILOVER_DELAY=0
FAILURE_INTERVAL=3
FAILURE_THRESHOLD=5
HOSTING_MEMBERS=
PLACEMENT=balanced
RESTART_ATTEMPTS=5
SCRIPT_TIMEOUT=600
START_TIMEOUT=600
STOP_TIMEOUT=900
UPTIME_THRESHOLD=1m

[grid@vm11gr2] /home/grid> crs_stat -p ora.diskmon
NAME=ora.diskmon
TYPE=ora.diskmon.type
ACTION_SCRIPT=
ACTIVE_PLACEMENT=0
AUTO_START=never
CHECK_INTERVAL=20
DESCRIPTION="Resource type for Diskmon"
FAILOVER_DELAY=0
FAILURE_INTERVAL=3
FAILURE_THRESHOLD=5
HOSTING_MEMBERS=
PLACEMENT=balanced
RESTART_ATTEMPTS=10
SCRIPT_TIMEOUT=60
START_TIMEOUT=60
STOP_TIMEOUT=60
UPTIME_THRESHOLD=5s

This is basically the cause: the AUTO_START attribute of both resources defaults to never, so they do not start automatically when HAS starts, even though HAS itself is started automatically at boot by default. So start them manually:

[grid@vm11gr2] /home/grid> crsctl start resource ora.cssd
CRS-2672: Attempting to start "ora.cssd" on "vm11gr2"
CRS-2679: Attempting to clean "ora.diskmon" on "vm11gr2"
CRS-2681: Clean of "ora.diskmon" on "vm11gr2" succeeded
CRS-2672: Attempting to start "ora.diskmon" on "vm11gr2"
CRS-2676: Start of "ora.diskmon" on "vm11gr2" succeeded
CRS-2676: Start of "ora.cssd" on "vm11gr2" succeeded

Note: ora.cssd and ora.diskmon depend on each other; starting either one brings both up.

[grid@vm11gr2] /home/grid> crs_stat -t
Name                Type                Target    State     Host
--------------------------------------------------------------
ora.FLASH_DATA.dg   ora.diskgroup.type  OFFLINE   OFFLINE   vm11gr2
ora.SYS_DATA.dg     ora.diskgroup.type  OFFLINE   OFFLINE   vm11gr2
ora.asm             ora.asm.type        OFFLINE   OFFLINE   vm11gr2
ora.cssd            ora.cssd.type       ONLINE    ONLINE    vm11gr2
ora.diskmon         ora.diskmon.type    ONLINE    ONLINE    vm11gr2

The CSS service is now up; restart the ASM instance:

[grid@vm11gr2] /home/grid> sqlplus "/as sysasm"

SQL*Plus: Release 11.2.0.1.0 Production on Sun Oct 25 10:30:03 2009
Copyright (c) 1982, 2009, Oracle. All rights reserved.
Connected to an idle instance.

SQL> startup
ASM instance started

Total System Global Area  284565504 bytes
Fixed Size                  1336036 bytes
Variable Size             258063644 bytes
ASM Cache                  25165824 bytes
ASM diskgroups mounted
SQL> exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Automatic Storage Management option

[grid@vm11gr2] /home/grid> crs_stat -t
Name                Type                Target    State     Host
--------------------------------------------------------------
ora.FLASH_DATA.dg   ora.diskgroup.type  ONLINE    ONLINE    vm11gr2
ora.SYS_DATA.dg     ora.diskgroup.type  ONLINE    ONLINE    vm11gr2
ora.asm             ora.asm.type        ONLINE    ONLINE    vm11gr2
ora.cssd            ora.cssd.type       ONLINE    ONLINE    vm11gr2
ora.diskmon         ora.diskmon.type    ONLINE    ONLINE    vm11gr2

Tips:
1) By default HAS (High Availability Services) starts automatically. It can be disabled and re-enabled with:
   crsctl disable has
   crsctl enable has
2) Manual start and stop of HAS:
   crsctl start has
   crsctl stop has
3) Check the status of HAS:
   crsctl check has
4) If you want ora.cssd and ora.diskmon to start automatically together with HAS, change their AUTO_START attribute:
   crsctl modify resource "ora.cssd" -attr "AUTO_START=1"
   crsctl modify resource "ora.diskmon" -attr "AUTO_START=1"
5) To cancel the automatic start of ora.cssd and ora.diskmon:
   crsctl modify resource "ora.cssd" -attr "AUTO_START=never"
   crsctl modify resource "ora.diskmon" -attr "AUTO_START=never"

     

After installing Oracle 11gR2 Grid today, I found that the diskgroups could not be mounted; the error is as follows:

SQL> startup
ORA-00099: warning: no parameter file specified for ASM instance
ASM instance started

Total System Global Area  283930624 bytes
Fixed Size                  2225792 bytes
Variable Size             256539008 bytes
ASM Cache                  25165824 bytes
ORA-15110: no diskgroups mounted

(1) The analysis is that this is caused by ORA-00099: without a parameter file the instance cannot find the diskgroup.

(2) Searching online gives the following format for the ASM parameter file:

asm_diskstring="/dev/oracleasm/disks/DISK*"

asm_diskgroups="DATA"
asm_power_limit=1
diagnostic_dest="/opt/oracle"
instance_type="asm"
large_pool_size=12M
remote_login_passwordfile="EXCLUSIVE"

     

Save the file as: $ORACLE_HOME/dbs/init+ASM.ora

Note that asm_diskstring must contain a "*" wildcard for the disks to be discovered correctly; otherwise the following error is reported:

     

SQL> startup
ASM instance started

Total System Global Area  283930624 bytes
Fixed Size                  2225792 bytes
Variable Size             256539008 bytes
ASM Cache                  25165824 bytes
ORA-15032: not all alterations performed
ORA-15017: diskgroup "DATA" cannot be mounted
ORA-15063: ASM discovered an insufficient number of disks for diskgroup "DATA"

(3) Save the settings to an spfile:

SQL> create spfile from pfile;
create spfile from pfile
*
ERROR at line 1:
ORA-29786: SIHA attribute GET failed with error [Attribute "SPFILE" sts[200] lsts[0]]

The cause is that Oracle regards the ASM instance as manually created, so the resource has not been registered. First, add the ASM resource.

Register ASM with the following command, executed as the grid user:

srvctl add asm -l LISTENER -d "/dev/oracleasm/disks/DISK*"

Then re-run the command:

    SQL> create spfile from pfile;

    File created.

     

(4) Shut down the ASM instance and start it again; the diskgroups are now mounted successfully:

SQL> startup
ASM instance started

Total System Global Area  283930624 bytes
Fixed Size                  2225792 bytes
Variable Size             256539008 bytes
ASM Cache                  25165824 bytes
ASM diskgroups mounted
ASM diskgroups volume enabled
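As an additional check (my own habit, not required by the fix), you can confirm from the ASM instance that the disk group really is mounted:

SQL> select name, state, total_mb, free_mb from v$asm_diskgroup;

The DATA group should now show state MOUNTED.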

    在启动DB时报错ORA-27154 ORA-27300 ORA-27301 ORA-27302

This error is caused by a kernel parameter setting; the test process is shown below.
[oracle@gtlions ~]$ sqlplus "/as sysdba"
SQL*Plus: Release 10.2.0.5.0 - Production on Sat Feb 4 23:47:02 2012
Copyright (c) 1982, 2010, Oracle. All Rights Reserved.
Connected to an idle instance.
SQL> startup
ORA-27154: post/wait create failed
ORA-27300: OS system dependent operation:semget failed with status: 28
ORA-27301: OS failure message: No space left on device
ORA-27302: failure occurred at: sskgpsemsper
SQL>

     

[root@gtlions ~]# /sbin/sysctl -p
net.ipv4.ip_forward = 0
net.ipv4.conf.default.rp_filter = 2
net.ipv4.conf.default.accept_source_route = 0
kernel.sysrq = 0
kernel.core_uses_pid = 1
net.ipv4.tcp_syncookies = 1
kernel.msgmnb = 65536
kernel.msgmax = 65536
kernel.shmmax = 4294967295
kernel.shmall = 268435456
fs.aio-max-nr = 1048576
fs.file-max = 6815744
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048586
kernel.sem = 25032000100128

Note the last parameter, kernel.sem = 25032000100128. It looks harmless at first glance but is actually wrong: the values must be separated by spaces. After editing it to the normal value shown below, everything was fine.

[root@gtlions ~]# vi /etc/sysctl.conf
[root@gtlions ~]# /sbin/sysctl -p
(output identical to the run above, except for the last line)
kernel.sem = 250 32000 100 128
[root@gtlions ~]#

Now check whether the database can be started normally.
[oracle@gtlions ~]$ sqlplus "/as sysdba"
SQL*Plus: Release 10.2.0.5.0 - Production on Sun Feb 5 00:00:39 2012
Copyright (c) 1982, 2010, Oracle. All Rights Reserved.
Connected to an idle instance.
SQL> startup
ORACLE instance started.
Total System Global Area  167772160 bytes
Fixed Size                  1272600 bytes
Variable Size              62915816 bytes
Database Buffers          100663296 bytes
Redo Buffers                2920448 bytes
Database mounted.
Database opened.
SQL>
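For background (my own note, not from the original post): kernel.sem packs four values, SEMMSL SEMMNS SEMOPM SEMMNI, and they must be space separated; 250 32000 100 128 is the minimum Oracle documents for 11g installs. The value can also be changed on a running system before editing the file:

# apply the corrected value immediately, then make sure /etc/sysctl.conf carries the same line
[root@gtlions ~]# /sbin/sysctl -w kernel.sem="250 32000 100 128"
[root@gtlions ~]# /sbin/sysctl kernel.sem
[root@gtlions ~]# ipcs -ls     # shows the semaphore limits currently in effect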

     


    ORA-29786: SIHA attribute GET failed with error [Attribute “ASM_DISKSTRING” sts[200] lsts[0]]

     

If the ASM instance is created with asmca, it is registered with CRS automatically; a manually created ASM instance has to be registered by hand.

     

    [10:18:03 oracle(grid)@test ~]$ sql

     

    SQL*Plus: Release 11.2.0.3.0 Production on Tue Sep 10 10:18:04 2013

     

    Copyright (c) 1982, 2011, Oracle. All rights reserved.

     

     

    Connected to:

    Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 – 64bit Production

    With the Automatic Storage Management option

     

    10:18:05 idle> show parameter name

     

    NAME TYPE VALUE

    ———————————— ——————————— ——————————

    db_unique_name string +ASM

    instance_name string +ASM

    lock_name_space string

    service_names string +ASM

    10:18:14 idle> create spfile from pfile;

    create spfile from pfile

    *

    ERROR at line 1:

    ORA-29786: SIHA attribute GET failed with error [Attribute “SPFILE” sts[200] lsts[0]]

     

     

    Elapsed: 00:00:00.17

    10:19:06 idle> exit

    Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 – 64bit Production

    With the Automatic Storage Management option

[10:19:12 oracle(grid)@test ~]$ srvctl add asm -l LISTENER -p /oracle/product/11.2.0/grid/dbs/init+ASM.ora -d "ASMDISK*"

    PRCR-1001 : Resource ora.LISTENER.lsnr does not exist

    [10:20:19 oracle(grid)@rmdgl ~]$ crs_stat -t

    Name Type Target State Host

    ————————————————————

    ora.cssd ora.cssd.type ONLINE ONLINE rmdgl

    ora.diskmon ora….on.type OFFLINE OFFLINE

    ora.evmd ora.evm.type ONLINE ONLINE rmdgl

    ora.ons ora.ons.type OFFLINE OFFLINE

[10:20:35 oracle(grid)@test~]$ srvctl add asm -p /u01/app/11.2.0/grid/dbs/spfile+ASM.ora -d "ASMDISK*"

    [10:20:56 oracle(grid)@rmdgl ~]$ crs_stat -t

    Name Type Target State Host

    ————————————————————

    ora.asm ora.asm.type OFFLINE OFFLINE

    ora.cssd ora.cssd.type ONLINE ONLINE rmdgl

    ora.diskmon ora….on.type OFFLINE OFFLINE

    ora.evmd ora.evm.type ONLINE ONLINE rmdgl

    ora.ons ora.ons.type OFFLINE OFFLINE

    [10:20:58 oracle(grid)@test ~]$ cd /u01/app/11.2.0/grid/dbs/

    [10:21:21 oracle(grid)@test dbs]$ ll

    total 20

-rw-rw---- 1 oracle oinstall 1257 Sep 5 02:34 ab_+ASM.dat

-rw-rw---- 1 oracle oinstall 1544 Sep 5 02:34 hc_+ASM.dat

-rw-r----- 1 oracle oinstall 169 Aug 30 13:49 init+ASM.ora

-rw-r--r-- 1 oracle oinstall 2851 May 15 2009 init.ora

-rw-r----- 1 oracle oinstall 2560 Aug 30 13:50 orapw+ASM

    [10:21:22 oracle(grid)@test dbs]$ crs_stat -t

    Name Type Target State Host

    ————————————————————

    ora.asm ora.asm.type OFFLINE OFFLINE

    ora.cssd ora.cssd.type ONLINE ONLINE rmdgl

    ora.diskmon ora….on.type OFFLINE OFFLINE

    ora.evmd ora.evm.type ONLINE ONLINE rmdgl

    ora.ons ora.ons.type OFFLINE OFFLINE

    [10:21:40 oracle(grid)@rmdgl dbs]$ crs_stat -t

    Name Type Target State Host

    ————————————————————

    ora.asm ora.asm.type OFFLINE OFFLINE

    ora.cssd ora.cssd.type ONLINE ONLINE rmdgl

    ora.diskmon ora….on.type OFFLINE OFFLINE

    ora.evmd ora.evm.type ONLINE ONLINE rmdgl

    ora.ons ora.ons.type OFFLINE OFFLINE

    [10:23:16 oracle(grid)@rmdgl dbs]$ srvctl start asm

    [10:23:20 oracle(grid)@rmdgl dbs]$ crs_stat -t

    Name Type Target State Host

    ————————————————————

    ora.asm ora.asm.type ONLINE ONLINE rmdgl

    ora.cssd ora.cssd.type ONLINE ONLINE rmdgl

    ora.diskmon ora….on.type OFFLINE OFFLINE

    ora.evmd ora.evm.type ONLINE ONLINE rmdgl

    ora.ons ora.ons.type OFFLINE OFFLINE

    [11:13:50 oracle(grid)@test dbs]$ sql

     

    SQL*Plus: Release 11.2.0.3.0 Production on Tue Sep 10 11:14:39 2013

     

    Copyright (c) 1982, 2011, Oracle. All rights reserved.

     

     

    Connected to:

    Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 – 64bit Production

    With the Automatic Storage Management option

     

    11:14:39 idle> create spfile from pfile;

     

    File created.

     

    Elapsed: 00:00:00.24

     

     

    Elapsed: 00:00:00.26

Starting with 11gR2, both the graphical installation of Grid Infrastructure and ASM instance creation with the ASMCA tool force the ASM instance parameter file to be stored in a disk group. If you want to keep the ASM parameter file on a local file system, the only way is to create the ASM instance manually. With a manually created ASM instance, running CREATE SPFILE FROM PFILE to convert the PFILE into an SPFILE raises the following error:

SQL> create spfile from pfile;

    create spfile from pfile

    *

    ERROR at line 1:

    ORA-29786: SIHA attribute GET failed with error [Attribute “SPFILE” sts[200]

lsts[0]]

Below is the solution from MOS (Metalink):
ORA-29786: SIHA attribute GET failed with Error If 11gR2 ASM instance is created manually [ID 976075.1]
Last updated: 2011-08-11    Type: PROBLEM    Status: PUBLISHED    Priority: 3

In this Document: Symptoms, Changes, Cause, Solution


     

    Symptoms

    After creating an initial parameter file init+ASM.ora manually and starting an ASM instance, certain ASM commands fails with ORA-29786 in sqlplus:

sqlplus / as sysasm
SQL> create spfile='$ORACLE_HOME/spfile+ASM.ora' from pfile='$ORACLE_HOME/dbs/init+ASM.ora';
create spfile='$ORACLE_HOME/spfile+ASM.ora' from pfile='$ORACLE_HOME/dbs/init+ASM.ora'
*
ERROR at line 1:
ORA-29786: SIHA attribute GET failed with error [Attribute "SPFILE" sts[200]
lsts[0]]

     

SQL> create diskgroup dg1 normal redundancy disk '/opt/oracle/oradata/nobilldb/DG1_dev0' disk '/opt/oracle/oradata/nobilldb/DG1_dev1';
Diskgroup created.
SQL> drop diskgroup dg1;
drop diskgroup dg1
*
ERROR at line 1:
ORA-15039: diskgroup not dropped
ORA-29786: SIHA attribute GET failed with error [Attribute "SPFILE" sts[200]
lsts[0]]

A better option for creating an ASM instance is to run asmca in GUI or silent mode.

    Changes

Starting with 11gR2, the ASM instance is a resource in the CRS repository even in single instance installations. Hence, it must be registered in the OCR before doing certain operations like create/drop diskgroup, create pfile/spfile, etc.

    Cause

    ASM instance is created and started manually but it is not registered to cluster repository.

    Solution

    If ASM instance is created manually, add ASM instance to CRS repository with srvctl add asm:

srvctl add asm -l LISTENER -p /oracle/product/11.2.0/grid/dbs/init+ASM.ora -d "/dev/sdc*"
srvctl add asm -h     -- will give the options

Following the note's advice, run the following commands:
# pwd

    /u01/app/11.2.0/grid/bin

    # ./srvctl add asm -h

     

    Adds an ASM configuration to be managed by Oracle Restart.

     

    Usage: srvctl add asm [-l ] [-p ] [-d ]

    -l Listener name

    -p Server parameter file path

    -d ASM diskgroup discovery string

    -h Print usage

# su - grid
$ ./srvctl add asm -l LISTENER -p /u01/app/11.2.0/grid/dbs/spfile+ASM.ora -d "/dev/rhdisk*"

Note: the srvctl add asm command above must be executed as the grid user, not as root. For details see the article 《11gR2手动创建的ASM实例无法被Clusterware管理的问题的解决》: http://space.itpub.net/?uid-23135684-action-viewspace-itemid-743090

     

    $ sqlplus / as sysdba

     

    SQL*Plus: Release 11.2.0.3.0 Production on Mon Sep 10 11:12:53 2012

     

    Copyright (c) 1982, 2011, Oracle. All rights reserved.

     

     

    Connected to:

    Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 – 64bit Production

    With the Automatic Storage Management option

     

    SQL> create spfile from pfile;

    create spfile from pfile

    *

    ERROR at line 1:

    ORA-27038: created file already exists

    Additional information: 1

     

     

    SQL> exit

    Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 – 64bit Production

    With the Automatic Storage Management option

    $ cd $ORACLE_HOME/dbs

    $ ls

    ab_+ASM.dat hc_+ASM.dat init+ASM.ora init.ora spfile+ASM.ora

Running the srvctl add asm command automatically registers an SPFILE parameter file for the ASM instance.

    $ strings spfile*

    /M?

    *.asm_power_limit=1

    *.diagnostic_dest=”/u01/app/grid”

    *.instance_type=”asm”

    *.large_pool_size=12M

    *.remote_login_passwordfile=”EXCLUSIVE”

    $ sqlplus / as sysasm

     

    SQL*Plus: Release 11.2.0.3.0 Production on Mon Sep 10 11:13:21 2012

     

    Copyright (c) 1982, 2011, Oracle. All rights reserved.

     

     

    Connected to:

    Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 – 64bit Production

    With the Automatic Storage Management option

     

    SQL> shutdown immediate

    ORA-15100: invalid or missing diskgroup name

     

     

    ASM instance shutdown

    SQL>

    SQL> startup nomount

    ASM instance started

     

    Total System Global Area 283930624 bytes

    Fixed Size 2220800 bytes

    Variable Size 256544000 bytes

    ASM Cache 25165824 bytes

    SQL> show parameter spfile

     

    NAME TYPE

    ———————————— ———————-

    VALUE

    ——————————

    spfile string

    /u01/app/11.2.0/grid/dbs/spfil

    e+ASM.ora

    –end–

     

1. ORACLE dbca cannot find the ASM disks

      2012-04-11 14:44:03


     

Oracle version: 11.2.0.1.0

Grid version: 11.2

     

The earlier installation steps had all been worked through one by one, though not exactly smoothly; with plenty of help from Baidu and Google it was finally installed.

     

Symptom: at step 6 of dbca, "Database file location",

select Storage Type: Automatic Storage Management,

then Database Area: DATA (the name of the ASM disk group that was created),

as shown below:

(screenshot: dbca storage selection page)

     

Clicking Next, the system reports that the specified disk group cannot be found.

     

Troubleshooting:

1. Run:
/usr/sbin/oracleasm scandisks
/usr/sbin/oracleasm listdisks

The results are normal, and listdisks displays the disks as expected.

     

2. Switch to the grid account and query the ASM instance:

sqlplus /nolog

conn / as sysdba

select name from v$asm_disk;

The disks can be seen there as well.

3. Check the user's groups:

id oracle

oracle turns out not to be in the asmdba group; that is the problem.

     

Fix

Run: usermod -g oinstall -G dba,asmdba oracle

     

Selecting the storage again, the ASM disk group now shows up.

     

Postscript

My original script for creating the users and groups did add oracle to asmdba, but the script that was actually used had been affected by the orarun script that ships with the system.

     

Solution for oracle 11g dbca not finding the ASM diskgroup

Yesterday, configuring 11gR2 RAC in a VM on my laptop at home, the GI installation was fine, the database software installation was fine, and the ASM disk resources looked fine. GI was installed under the GRID user and the ASM DISKGROUP was configured with ASMLIB. But at the last step, running DBCA as the ORACLE user to create the database, no ASM DISKGROUP could be found after choosing ASM as the storage.

With no internet access at home I could only guess, so I tried running DBCA as the GRID user, purely to see whether the ASM DISKGROUP would appear at the storage selection step. It complained about environment variables at first, which I ignored, and at the storage step it did show the ASM disk group created earlier with ASMCA. I cancelled that installation and started tracking down the cause.

The command id oracle shows that oracle belongs to oinstall, asmdba and dba. id grid shows that grid has one more group than oracle: asmadmin. Then check the ownership of the ASM devices:
ls -l /dev/oracleasm/disks
total 0
brw-rw---- 1 grid asmadmin 8, 33 Nov 4 15:35 CRDATA
brw-rw---- 1 grid asmadmin 8, 49 Nov 4 15:35 DBDATA

So you can either change the group of the ASM devices to asmdba, or add the oracle user to asmadmin. I chose the second option:
usermod -a -G asmadmin oracle

Or: chown grid:asmdba /dev/asm*

Also check whether the $GRID_HOME/bin/oracle executable has the following permissions:
[grid@rac1 bin]$ ll oracle
-rwsr-s--x 1 grid oinstall 152462814 Apr 10 19:51 oracle

I recall that at the time the s bit was missing (it was x instead). If it is missing, run:
chmod +s oracle
The s bit on an executable is the setuid bit; it tells the system to run the file with the identity of its owner.

After going through all of the checks above, DBCA run as the oracle user finally showed the ASM DISKGROUP.

1. Incorrect permission setting for oracle user
2. ASM instance was not started or diskgroups are not mounted.
3. The diskgroup resources are not online.
4. The permission setting for the asm devices are incorrect.
5. The oracle executable under <grid home>/bin has incorrect permission settings.
6. The file system for grid home was mounted with option nosuid.
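The checklist above can be run through in one pass with a small script. This is my own consolidation of the checks; the paths assume the layout used in this article (grid home under /u01/app/11.2.0/grid, database home under /u01/app/oracle/product/11.2.0/dbhome_1) and should be adjusted for your environment:

#!/bin/bash
# quick triage for "dbca cannot see ASM disk groups" -- run as root

echo "== 1. oracle user group membership (expect asmdba) =="
id oracle

echo "== 2/3. ASM instance and disk group resources (expect ONLINE) =="
# assumes crsctl is on the grid user's PATH
su - grid -c "crsctl stat res -t"

echo "== 4. ASM device ownership (expect grid:asmadmin) =="
ls -l /dev/oracleasm/disks 2>/dev/null
ls -l /dev/asm* 2>/dev/null

echo "== 5. oracle binary permissions (expect -rwsr-s--x, group asmadmin) =="
ls -l /u01/app/oracle/product/11.2.0/dbhome_1/bin/oracle

echo "== 6. grid home mount options (nosuid would break the setuid bit) =="
mount | grep -w /u01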


 

 

Resolving dbca not recognizing ASM disk groups when RAC or Oracle Restart is installed with the GI (Grid Infrastructure) and RDBMS installation order reversed

 

Recently a friend and I ran this test: when installing RAC or Oracle Restart, if the RDBMS software is installed first and GI (Grid Infrastructure) afterwards, does it affect using or creating a database? We tried it in a test environment. Both the RDBMS software installation and the GI installation went smoothly, and with asmca we created the ASM disk groups intended for data files and the flash recovery area. Everything seemed to be going well, but dbca hit a problem: at the storage selection step it could not recognize the ASM disk groups, as shown below:

(screenshot: dbca storage selection step with no ASM disk group visible)

A reminder: if the installation order is correct, the ASM disk groups you created will definitely be visible at this step.

Common reasons why ASM disk groups cannot be found:
1. Wrong permissions on the grid home directory or its subdirectories
2. Wrong permissions on the ASM disks
3. The ASM instance is not started, or the ASM disk groups are not mounted
4. The ASM disk group resources are not online
5. Wrong permissions for the oracle user
6. Wrong permissions on the oracle executable ($ORACLE_HOME/bin/oracle)

OK, let's check them one by one following that list.

1. Wrong permissions on the GI home directory or its subdirectories

[root@khm5 ~]# ls -ld /u02/app/11.2.0/grid/

drwxr-x--- 66 root oinstall 4096 Apr 19 01:36 /u02/app/11.2.0/grid/

A quick look shows the GI home permissions are normal. A reminder here: some DBAs see a directory like this owned by root, assume something is wrong, and go and change it. Changing only this one directory is not a big problem and is easy to revert, but once -R is added to change all subdirectories and files recursively, a real failure is created. So do not blindly perform operations you are not sure about; understand the principle behind each operation and its consequences, and have a rollback plan ready.

In this case I know for certain that no permissions had been changed, so this item can be ruled out.

2. Wrong permissions on the ASM disks. The ASM disks were created through the ASMLib driver; check them with the following command:

[root@khm5 ~]# ls -l /dev/oracleasm/disks/
total 0
brw-rw---- 1 grid asmadmin 8, 17 Apr 19 01:22 ASMDISK1
brw-rw---- 1 grid asmadmin 8, 33 Apr 19 01:22 ASMDISK2
brw-rw---- 1 grid asmadmin 8, 49 Apr 19 01:22 ASMDISK3

If the permissions are wrong, fix them with:

[root@khm5 ~]# oracleasm configure -I
or
[root@khm5 ~]# /etc/init.d/oracleasm configure

After the change, verify:
[root@khm5 ~]# oracleasm configure
ORACLEASM_ENABLED=true
ORACLEASM_UID=grid
ORACLEASM_GID=asmadmin
ORACLEASM_SCANBOOT=true
ORACLEASM_SCANORDER=""
ORACLEASM_SCANEXCLUDE=""

3. The ASM instance is not started or the disk groups are not mounted. 4. The disk group resources are not online. Both of these can be checked by looking at the resource status:

[grid@khm5 ~]$ crsctl stat res -t
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER         STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
               ONLINE  ONLINE       khm5
ora.FLASH.dg
               ONLINE  ONLINE       khm5
ora.GRID.dg
               ONLINE  ONLINE       khm5
ora.LISTENER.lsnr
               ONLINE  ONLINE       khm5
ora.asm
               ONLINE  ONLINE       khm5           Started
ora.ons
               OFFLINE OFFLINE      khm5
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.cssd
      1        ONLINE  ONLINE       khm5
ora.diskmon
      1        OFFLINE OFFLINE
ora.evmd
      1        ONLINE  ONLINE       khm5

5. Wrong permissions for the oracle user

[root@khm5 ~]# id oracle

uid=1101(oracle) gid=1000(oinstall) groups=1000(oinstall),1300(dba),1301(oper),1201(asmdba)

The oracle user must be a member of the asmdba group; if it is not, add it:

[root@khm5 ~]# gpasswd -a oracle asmdba

Adding user oracle to group asmdba

6. Wrong permissions on the oracle executable ($ORACLE_HOME/bin/oracle)

[root@khm5 ~]# su - oracle

[oracle@khm5 ~]$ cd $ORACLE_HOME/bin

[oracle@khm5 bin]$ ls -l oracle

-rwsr-s--x 1 oracle oinstall 232399473 Apr 19 07:04 oracle

Good, here we find the problem: the permissions on the oracle executable are not correct. In RAC or Oracle Restart, the oracle executable should be owned by group asmadmin.

Fix it as follows:

[root@khm5 ~]# cd /u02/app/oracle/product/11.2.0/dbhome_1/bin/

[root@khm5 bin]# chown oracle:asmadmin oracle

[root@khm5 bin]# ls -l oracle

-rwxr-x--x 1 oracle asmadmin 232399473 Apr 19 07:04 oracle

 

[root@khm5 bin]# chmod +s oracle

[root@khm5 bin]# ls -l oracle

-rwsr-s--x 1 oracle asmadmin 232399473 Apr 19 07:04 oracle

After the change the problem is solved, and the ASM disk group information is visible again.
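For reference (my own addition): the permission set shown above corresponds to mode 6751 (setuid + setgid plus rwxr-x--x), so the same result can also be applied explicitly with a numeric chmod if you prefer:

[root@khm5 bin]# chmod 6751 oracle
[root@khm5 bin]# ls -l oracle
-rwsr-s--x 1 oracle asmadmin 232399473 Apr 19 07:04 oracle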


 

1. ORA-15077: the ASM disk group cannot be mounted

1. Symptom analysis:
(1) The database and instance services cannot be started:
[oracle@rac1 ~]$ crs_stat -t
Name            Type         Target    State     Host
------------------------------------------------------------
ora....CRM.cs   application  ONLINE    OFFLINE   rac1
ora....cl1.srv  application  ONLINE    OFFLINE   rac1
ora.orcl.db     application  ONLINE    OFFLINE   rac2
ora....l1.inst  application  ONLINE    OFFLINE   rac1
ora....l2.inst  application  ONLINE    OFFLINE   rac2
ora....SM1.asm  application  ONLINE    ONLINE    rac1
ora....C1.lsnr  application  ONLINE    ONLINE    rac1
ora.rac1.gsd    application  ONLINE    ONLINE    rac1
ora.rac1.ons    application  ONLINE    ONLINE    rac1
ora.rac1.vip    application  ONLINE    ONLINE    rac1
ora....SM2.asm  application  ONLINE    ONLINE    rac2
ora....C2.lsnr  application  ONLINE    ONLINE    rac2
ora.rac2.gsd    application  ONLINE    ONLINE    rac2
ora.rac2.ons    application  ONLINE    ONLINE    rac2
ora.rac2.vip    application  ONLINE    ONLINE    rac2
(2) Starting an individual service on its own still fails.
(3) Starting the instance with sqlplus:
[oracle@rac1]$ export ORACLE_SID=devdb1
SQL> startup;
ORA-01078: failure in processing system parameters
ORA-01565: error in identifying file "+DG1/devdb/spfiledevdb.ora"
ORA-17503: ksfdopn:2 Failed to open file +DG1/devdb/spfiledevdb.ora
ORA-15077: could not locate ASM instance serving a required diskgroup
It can be seen that the diskgroup is not mounted, so mount the diskgroup first.

You can also run the following test, which likewise reports that the disk groups are not mounted:
[oracle@rac1 bdump]$ export ORACLE_SID=+ASM1
[oracle@rac1 bdump]$ sqlplus / as sysdba

SQL*Plus: Release 10.2.0.1.0 – Production on Sat Mar 22 17:59:39 2008

Copyright (c) 1982, 2005, Oracle. All rights reserved.

Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, Real Application Clusters, OLAP and Data Mining options

SQL> shutdown immediate;
ORA-15100: invalid or missing diskgroup name

ASM instance shutdown
SQL> startup;
ASM instance started

Total System Global Area   92274688 bytes
Fixed Size                  1217884 bytes
Variable Size              65890980 bytes
ASM Cache                  25165824 bytes
ORA-15110: no diskgroups mounted

2. Solution:
(1) First, mount the ASM disk groups:
[oracle@rac1 bdump]$ export ORACLE_SID=+ASM1
[oracle@rac1 bdump]$ sqlplus / as sysdba

SQL*Plus: Release 10.2.0.1.0 – Production on Sat Mar 22 17:59:39 2008

Copyright (c) 1982, 2005, Oracle. All rights reserved.

Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, Real Application Clusters, OLAP and Data Mining options
SQL> select name,state from v$asm_diskgroup;

NAME                           STATE
------------------------------ -----------
RECOVERYDEST                   DISMOUNTED
DG1                            DISMOUNTED

SQL> alter diskgroup RECOVERYDEST mount;

Diskgroup altered.

SQL> alter diskgroup DG1 mount;

Diskgroup altered.

SQL> exit
Disconnected from Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, Real Application Clusters, OLAP and Data Mining options

(2) Start the database instance:
[oracle@rac1 bdump]$ export ORACLE_SID=devdb1
[oracle@rac1 bdump]$ sqlplus / as sysdba
SQL> startup;
ORACLE instance started.

Total System Global Area  264241152 bytes
Fixed Size                  1218868 bytes
Variable Size             109053644 bytes
Database Buffers          150994944 bytes
Redo Buffers                2973696 bytes
Database mounted.
Database opened.

3. Cause: the ORACLE_SID environment variable in the oracle user's .bashrc did not match the name of the database that was actually created, so at startup the instance matching the environment variable could not be found. In .bashrc it was orcl1:
[oracle@rac1 ~]$ cat .bashrc
.....
export ORACLE_SID=orcl1
.....

but the actual instance name is devdb1, so change ORACLE_SID in .bashrc to devdb1.
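Before editing .bashrc, a quick way (my own suggestion) to confirm the real instance name is to look at the running background processes:

[oracle@rac1 ~]$ ps -ef | grep pmon | grep -v grep

The ora_pmon_<SID> and asm_pmon_+ASM1 entries show the actual instance names, and ORACLE_SID in .bashrc should match the ora_pmon one.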

 

 

 


