OCM Practice Exercises (Revised)

Published by: Fengge | Category: ITPUX Tech Network | Updated: 2022-02-12 | Views: 209

Section 0: Creating a Database

Set the environment variables:
[oracle@rac1 ~]$ cd ~
[oracle@rac1 ~]$ vi .bash_profile
export ORACLE_SID=PROD
export EDITOR=vi
[oracle@rac1 ~]$ source .bash_profile
[oracle@rac1 ~]$ env | grep -i sid
ORACLE_SID=PROD

Create the required directories:
[oracle@rac1 ~]$ cd $ORACLE_BASE
[oracle@rac1 oracle]$ mkdir -p admin/PROD/{a,b,c,u}dump
[oracle@rac1 oracle]$ mkdir -p oradata/PROD/Disk{1,2,3,4,5}

Edit the /etc/oratab file:
[oracle@rac1 oracle]$ vi /etc/oratab
PROD:/u01/app/oracle/OracleHomes/db10g:N

Create the initialization parameter file initPROD.ora:
[oracle@rac1 ~]$ cd $ORACLE_HOME/dbs
[oracle@rac1 dbs]$ cat init.ora | grep -v ^# | grep -v ^$ > initPROD.ora
[oracle@rac1 dbs]$ vi initPROD.ora
(Reference: Administrator's Guide -> Chapter 2 "Creating an Oracle Database" -> Understanding Initialization Parameters -> Sample Initialization Parameter File.)
Edit the file as follows:
control_files = (/u01/app/oracle/oradata/PROD/Disk1/control01.ctl,
                 /u01/app/oracle/oradata/PROD/Disk2/control02.ctl,
                 /u01/app/oracle/oradata/PROD/Disk3/control03.ctl)
db_name = PROD
log_archive_dest_1 = "LOCATION=/home/oracle/arch"
log_archive_dest_state_1 = enable
db_block_size = 8192
undo_management = AUTO
undo_tablespace = undotbs
compatible = 10.2.0
sga_target = 300M
sga_max_size = 300M

Create the password file orapwPROD:
[oracle@rac1 dbs]$ orapwd file=orapwPROD password=oracle entries=20

Start SQL*Plus:
[oracle@rac1 dbs]$ sqlplus / as sysdba
SQL*Plus: Release 10.2.0.1.0 - Production on Tue Mar 6 16:08:29 2012
Copyright (c) 1982, 2005, Oracle. All rights reserved.
Connected to an idle instance.

Create the SPFILE:
SQL> create spfile from pfile;
File created.

Start the instance in NOMOUNT mode:
SQL> startup nomount
ORACLE instance started.
Total System Global Area  314572800 bytes
Fixed Size                  1219184 bytes
Variable Size             104859024 bytes
Database Buffers          205520896 bytes
Redo Buffers                2973696 bytes

Edit the CREATE DATABASE statement in gedit. (Reference: Administrator's Guide -> Chapter 2 "Creating an Oracle Database" -> Step 7: Issue the CREATE DATABASE Statement.)
[oracle@rac1 scripts]$ touch createPROD.sql
[oracle@rac1 scripts]$ gedit createPROD.sql
CREATE DATABASE PROD
   USER SYS IDENTIFIED BY oracle
   USER SYSTEM IDENTIFIED BY oracle
   LOGFILE GROUP 1 ('/u01/app/oracle/oradata/PROD/Disk1/redo101.log',
                    '/u01/app/oracle/oradata/PROD/Disk2/redo102.log') SIZE 100M,
           GROUP 2 ('/u01/app/oracle/oradata/PROD/Disk1/redo201.log',
                    '/u01/app/oracle/oradata/PROD/Disk2/redo202.log') SIZE 100M,
           GROUP 3 ('/u01/app/oracle/oradata/PROD/Disk1/redo301.log',
                    '/u01/app/oracle/oradata/PROD/Disk2/redo302.log') SIZE 100M
   MAXLOGFILES 200
   MAXLOGMEMBERS 5
   MAXLOGHISTORY 200
   MAXDATAFILES 100
   MAXINSTANCES 2
   CHARACTER SET AL32UTF8
   NATIONAL CHARACTER SET AL16UTF16
   DATAFILE '/u01/app/oracle/oradata/PROD/Disk1/system01.dbf' SIZE 325M REUSE
      EXTENT MANAGEMENT LOCAL
   SYSAUX DATAFILE '/u01/app/oracle/oradata/PROD/Disk1/sysaux01.dbf' SIZE 325M REUSE
   DEFAULT TEMPORARY TABLESPACE temp
      TEMPFILE '/u01/app/oracle/oradata/PROD/Disk1/temp01.dbf' SIZE 100M REUSE
   UNDO TABLESPACE undotbs
      DATAFILE '/u01/app/oracle/oradata/PROD/Disk1/undotbs01.dbf' SIZE 100M REUSE
      AUTOEXTEND ON MAXSIZE 2G;

Manual database creation is now complete.

Notes:
1. The paths used for the database files.
2. Character set selection: Globalization Support Guide -> Chapter 2 "Choosing a Character Set" (searching for 8859p1 finds WE8ISO8859P1); Appendix A "Locale Data" lists the available character sets.
3. The trace file paths.

Section 1: Database and Network Configuration
1. Database Setup and Undo Management
1.1 Run the minimum required scripts to complete the basic configuration of the PROD database:
SQL> @?/rdbms/admin/catalog.sql
SQL> @?/rdbms/admin/catproc.sql
1.2 Set up automatic undo management in the PROD database to support the following requirements:
1.2.1 Avoid ORA-01555 "snapshot too old" errors for queries running up to 90 minutes on average.
1.2.2 The number of concurrent OLTP users will be approximately 120 during normal business hours.
1.2.3 The number of concurrent batch processes that will run in the evenings and weekends will be approximately 12 to 15.
Parameter changes:
SQL> alter system set undo_retention=5400;                -- 1.2.1: 90 minutes = 5400 seconds (verify: show parameter undo)
SQL> alter tablespace undotbs retention guarantee;        -- 1.2.1: make the retention effective (verify: select tablespace_name, retention from dba_tablespaces;)
SQL> alter system set sessions=170 scope=spfile;          -- 1.2.2 (verify: show parameter sessions)
SQL> alter system set job_queue_processes=15 scope=both;  -- 1.2.3 (verify: show parameter job)
2. Server-side Network Configuration
2.1 Create a listener using the default listener name.
2.1.1 The TCP/IP protocol will be used for all connections. Use the machine name (not the IP address) for the host.
2.1.2 This listener will listen on the default port.
2.1.3 Databases PROD and EMREP (created later) will be serviced by this listener.
(Reference: Net Services Administrator's Guide -> Chapter 10 -> Listener Configuration During Installation.)
vi $ORACLE_HOME/network/admin/listener.ora
SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (SID_NAME = emrep)
      (ORACLE_HOME = /oracle/product/10.2.0/db_1)
      (PROGRAM = extproc)
    )
  )
LISTENER =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST = test)(PORT = 1521))
      (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC0))
    )
  )
2.2 Add a second listener, named LSNR2, which will listen on port 1526. Configure this listener to support only automatic instance registrations.
2.2.1 Set up the PROD instance to automatically register with LSNR2.
Add the following to $ORACLE_HOME/network/admin/listener.ora (no SID_LIST for LSNR2, since it must support only automatic registrations):
#SID_LIST_LSNR2 =
#  (SID_LIST =
#    (SID_DESC =
#      (SID_NAME = PROD)
#      (ORACLE_HOME = /oracle/product/10.2.0/db_1)
#    )
#  )
LSNR2 =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST = test)(PORT = 1526))
    )
  )
Add the following to $ORACLE_HOME/network/admin/tnsnames.ora:
LSNR_2 =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = test)(PORT = 1526))
    )
  )
-- 2.2.1: point the instance at the alias defined above, then force registration
SQL> alter system set local_listener='LSNR_2';
SQL> alter system register;
2.3 Start both listeners.
$ lsnrctl start
$ lsnrctl start lsnr2
3. Shared Server Configuration
3.1 Configure the PROD database to support up to 300 sessions, reserving 100 for dedicated connections.
SQL> alter system set sessions=300 scope=spfile;
SQL> alter system set shared_server_sessions=200 scope=both;
3.2 Configure the PROD database to support:
3.2.1 Default of 3 TCP dispatchers
SQL> alter system set dispatchers="(PROTOCOL=TCP)(DISPATCHERS=3)" scope=both;
3.2.2 Maximum of 10 dispatchers
SQL> alter system set max_dispatchers=10 scope=both;
3.3 Configure the PROD database to support:
3.3.1 Minimum of 10 shared server processes
SQL> alter system set shared_servers=10 scope=both;
3.3.2 Maximum of 30 shared server processes
SQL> alter system set max_shared_servers=30 scope=both;
4. Client-side Network Configuration
4.1 Create the client-side network configuration files providing connect descriptors to your databases using local naming and easy connect methods.
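Task 4.1 above mentions both local naming (tnsnames.ora entries) and the easy connect method, which needs no tnsnames.ora entry at all. As a quick sketch, the easy connect string is simply assembled as host[:port]/service_name; the helper below is illustrative (the function is not part of any Oracle tool, only the host and service names come from this guide):

```python
def easy_connect(host, service, port=1521):
    """Build an easy connect (EZCONNECT) string: host[:port]/service_name."""
    return f"{host}:{port}/{service}"

# e.g. sqlplus system/oracle@test:1521/PROD connects without tnsnames.ora
print(easy_connect("test", "PROD"))        # test:1521/PROD
print(easy_connect("test", "PROD", 1526))  # test:1526/PROD
```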
4.1.1 The prod alias should connect to the PROD instance using the default listener and always use a dedicated server connection.
prod =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = test)(PORT = 1521))
    )
    (CONNECT_DATA =
      (SERVICE_NAME = PROD)
      (SERVER = DEDICATED)
    )
  )
4.1.2 The prod_s alias should connect to the PROD instance using LSNR2 and use a shared server connection.
prod_s =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = test)(PORT = 1526))
    )
    (CONNECT_DATA =
      (SERVICE_NAME = PROD)
      (SERVER = SHARED)
    )
  )
4.2 The racdb alias should connect to the RACDB service (created later) with a dedicated server connection.
4.2.1 The RACDB service will be running on your RAC cluster.
racdb =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = rac1)(PORT = 1521))
      (ADDRESS = (PROTOCOL = TCP)(HOST = rac2)(PORT = 1521))
    )
    (CONNECT_DATA =
      (SERVICE_NAME = RACDB)
      (SERVER = DEDICATED)
    )
  )
4.3 The emrep alias should connect to the EMREP instance (created later) with a dedicated server connection.
emrep =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = test)(PORT = 1521))
    )
    (CONNECT_DATA =
      (SERVICE_NAME = EMREP)
      (SERVER = DEDICATED)
    )
  )
5. Tablespace Creation and Configuration
Note: tablespaces must be named as specified in each task to receive credit.
5.1 Create a temporary tablespace group that contains two (2) temporary tablespaces to support batch processing, the creation of large indexes, and analyzing tables. Use the following specifications:
5.1.1 Temporary tablespace group named TEMP_GRP containing temporary tablespaces TEMP1 and TEMP2.
create temporary tablespace temp1
  tempfile '/oracle/oradata/temp1_01_grp.dbf' size 20m
  autoextend on maxsize unlimited
  tablespace group TEMP_GRP;
create temporary tablespace temp2
  tempfile '/oracle/oradata/temp2_01_grp.dbf' size 20m
  autoextend on maxsize unlimited
  tablespace group TEMP_GRP;
5.1.2 Make TEMP_GRP the default temporary tablespace for all new users.
alter database default temporary tablespace TEMP_GRP;
(verify: select * from dba_tablespace_groups;)
5.2 Create a permanent tablespace to store sample test data. Use the following specifications:
5.2.1 Tablespace name of EXAMPLE
5.2.2 Initial datafile size of 400MB with the file expected to grow to 4TB.
5.2.3 Initial extent size of 1MB
5.2.4 Next extent size of 1MB
-- drop tablespace EXAMPLE including contents and datafiles;
CREATE BIGFILE TABLESPACE example
  DATAFILE '/oracle/oradata/example01.dbf' SIZE 400M
  AUTOEXTEND ON NEXT 1M MAXSIZE 4T
  EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1M
  SEGMENT SPACE MANAGEMENT AUTO;
5.3 Create a permanent tablespace to store indexes. Use the following specifications:
5.3.1 Tablespace name of INDX
5.3.2 File size of 40MB
create tablespace INDX datafile '/oracle/oradata/index01.dbf' size 40m autoextend on;
5.4 Create a permanent tablespace to store data collected from various Oracle tools. Use the following specifications:
5.4.1 Tablespace name of TOOLS
5.4.2 File size of 10MB
-- drop tablespace TOOLS including contents and datafiles;
create tablespace TOOLS datafile '/oracle/oradata/tools01.dbf' size 10m autoextend on;
5.5 Create a default permanent tablespace using the following specifications:
5.5.1 Tablespace name of USERS
5.5.2 File size of 48MB
5.5.3 Initial extent size of 4MB
5.5.4 Next extent size of 4MB
-- drop tablespace USERS including contents and datafiles;
create tablespace USERS datafile '/oracle/oradata/users01.dbf' size 48m
  autoextend on next 4m
  extent management local uniform size 4m
  segment space management auto;
alter database default tablespace USERS;
5.6 Create a permanent tablespace for storing segments associated with online transaction processing with high insert rates. Due to the potential high volume of concurrent inserts, every effort should be taken to reduce contention for each of the tables that will be stored in this tablespace. Use the following specifications:
5.6.1 Tablespace name of OLTP
5.6.2 File size of 48MB
5.6.3 Initial extent size of 2MB
5.6.4 Next extent size of 2MB
-- drop tablespace OLTP including contents and datafiles;
create tablespace OLTP datafile '/oracle/oradata/oltp01.dbf' size 48m
  autoextend on next 2m
  extent management local uniform size 2m
  segment space management auto;
6. Log File Management
6.1 Due to the expected high volume of transactions, the database should have the following configuration:
6.1.1 A minimum of 5 redo log groups.
6.1.2 Each redo log group should not be a single point of failure.
6.1.3 File size of 100MB.
6.1.4 Specify the locations so as to minimize contention and reduce the risk of a single point of failure in case of disk drive failure.
-- alter database drop logfile group 1;
-- alter database drop logfile group 2;
-- alter database drop logfile group 3;
-- alter database add logfile group 1 ('/oracle/oradata/redo11.log','/oracle/oradata/redo12.log') size 100m;
-- alter database add logfile group 2 ('/oracle/oradata/redo21.log','/oracle/oradata/redo22.log') size 100m;
-- alter database add logfile group 3 ('/oracle/oradata/redo31.log','/oracle/oradata/redo32.log') size 100m;
alter database add logfile member '/oracle/oradata/redo12.log' to group 1;
alter database add logfile member '/oracle/oradata/redo22.log' to group 2;
alter database add logfile member '/oracle/oradata/redo32.log' to group 3;
alter database add logfile group 4 ('/oracle/oradata/redo41.log','/oracle/oradata/redo42.log') size 100m;
alter database add logfile group 5 ('/oracle/oradata/redo51.log','/oracle/oradata/redo52.log') size 100m;
6.2 Triplex the control file to minimize recovery time in case of disk drive failure.
alter database backup controlfile to trace;
7. Schema Creation
7.1 As user SYS, run the script /home/oracle/scripts/create_bishhr.sql. Ignore any errors concerning OE, but do not ignore any other errors.
SQL> @/home/oracle/scripts/create_bishhr.sql
When the script prompts for input, answer: hr / users / temp / oracle / ?/rdbms/log/
SQL> conn system/oracle
SQL> @?/sqlplus/admin/pupbld.sql
8. Schema Statistics and Parameter File Configuration
8.1 Compute statistics for the various schemas in the database as necessary for use with cost-based optimization.
8.2 Investigate the parameter file for reasonable sizes for each parameter listed. Add additional parameters as you deem necessary to support an optimal database environment. In addition, modify or add the following parameters:
UTL_FILE_DIR=('/home/oracle','/home/oracle/temp','/home/oracle/scripts')
Note: applications that use Oracle 10g features will be running; therefore, ensure the database and instance are appropriately configured.
SQL> exec dbms_stats.gather_database_stats(degree=>5);
SQL> alter system set utl_file_dir='/home/oracle','/home/oracle/temp','/home/oracle/scripts' scope=spfile;
9. Database Backup and Availability
9.1 Back up the database to prepare for complete recovery under all circumstances.
9.2 OPEN the database.
$ rman target /
run {
  backup full database format '/oracle/bak/full_%U.bak';
  backup archivelog all format '/oracle/bak/arc_%U.bak';
  copy current controlfile to '/oracle/bak/control_bak';
}
This completes the database and network configuration section.

Section 2: Grid Control Installation and Configuration
1. Grid Control Installation
1.1 Create a database for your repository.
1.1.1 Use EMREP for database name and instance name.
1.1.2 Create it on your even-numbered machine.
1.2 Install Grid Control on your Management Server.
(Screenshots: create the database with DBCA, then start the GC installer and click NEXT, completing each small step as prompted. Then download the agent (a 22K file) on the odd-numbered machine. In the exam environment the PROD database must be added to GC manually.)
1.4 Create a Grid Control super user called EMADMIN with password EMADMIN.
Setup (top right) -> Administrators -> Create
2. Using Grid Control
2.1 Using Grid Control, change the PGA_AGGREGATE_TARGET on your PROD server to 500MB so that it will revert when the instance is restarted.
Targets -> Database -> Administration -> Database Configuration -> All Initialization Parameters -> PGA -> Aggregate PGA Target 500M -> Apply
2.2 Using Grid Control, configure the instance to ensure that it will take up to five (5) minutes to recover your instance following an instance failure.
Targets -> Database -> Administration -> Database Configuration -> All Initialization Parameters -> MTTR
2.3 Configure an alert on the SYSTEM tablespace of the PROD database. The alert should register as a warning at 87% full and critical at 95% full.
Metric and Policy Settings -> Tablespace Space Used (%) -> Edit -> SYSTEM
2.4 Set up notifications to be sent to the email address 'dba@ocm.com'. Notification messages should be sent to this address at any time.
Preferences (top right)
2.5 Using Grid Control, create a new tablespace in the PROD database called REGISTRATION.
2.5.1 Create it with one 90MB datafile.
2.5.2 Make sure this datafile can grow to 120MB if need be.
2.5.3 Configure the tablespace for optimal block space utilization.
Targets -> Database -> Administration -> Tablespaces -> Create -> REGISTRATION -> Add -> Continue -> OK
3. Implementing Schedules and Jobs
3.1 Using Grid Control, create a schedule for the PROD database.
3.1.1 Call this schedule DAILYREBUILD.
3.1.2 Configure it to run at 2PM every day.
Targets -> Database -> Administration -> Schedules -> Create -> DAILYREBUILD -> OK
3.2 Create a program for the PROD database called EMP_IND_REBUILD that rebuilds all indexes on the HR.EMPLOYEES table.
Targets -> Database -> Administration -> Programs -> Create -> EMP_IND_REBUILD -> OK
3.3 Create a window that utilizes the DAILYREBUILD schedule and the SYSTEM_PLAN resource manager plan.
Targets -> Database -> Administration -> Oracle Scheduler -> Scheduler Windows -> Use an existing schedule -> DAILYREBUILD -> OK
3.4 Create a job called REBUILD_JOB that uses the DAILYREBUILD schedule and the EMP_IND_REBUILD program.
Targets -> Database -> Administration -> Oracle Scheduler -> Jobs -> Create -> Name: REBUILD_JOB -> Command (change command type): EMP_IND_REBUILD -> Schedule Type: Use Pre-defined Schedule -> DAILYREBUILD -> OK
This completes the Grid Control section.

Section 3: Database Backup and Recovery
1. Create an RMAN Catalog
1.1 Create a tablespace in your EMREP database called RC_DATA.
1.1.1 Make it locally managed.
1.1.2 Create it with one datafile of size 100MB.
$ sqlplus sys/oracle@emrep as sysdba
SQL> create tablespace RC_DATA
  datafile '/home/oracle/oradata/PROD/rc_data01.dbf' size 100M
  autoextend on next 10M
  extent management local
  segment space management auto;
1.2 Create a user named RC_ADMIN with password RC_ADMIN in your EMREP database.
1.2.1 The user must have a default tablespace of RC_DATA.
1.2.2 Give the user the ability to manage a recovery catalog.
SQL> create user RC_ADMIN identified by RC_ADMIN default tablespace RC_DATA;
SQL> grant connect,resource,recovery_catalog_owner to RC_ADMIN;
1.3 Create a recovery catalog.
1.3.1 Create the catalog in the EMREP database, owned by RC_ADMIN.
$ rman catalog RC_ADMIN/RC_ADMIN@emrep
RMAN> create catalog tablespace RC_DATA;
1.3.2 Register the PROD database with the catalog.
$ rman target sys/oracle@prod catalog RC_ADMIN/RC_ADMIN@emrep
RMAN> register database;
RMAN> resync catalog;
2. Using RMAN
2.1 Configure RMAN options for the PROD database.
2.1.1 Turn backup optimization on.
RMAN> CONFIGURE BACKUP OPTIMIZATION ON;
2.1.2 Set your default channel to write to /home/oracle/backup (you may have to create this directory).
RMAN> CONFIGURE CHANNEL 1 DEVICE TYPE DISK FORMAT '/home/oracle/backup/%d_%T_%U.bak';
2.1.3 Turn on control file autobackup to write to /home/oracle/backup/control (you may have to create this directory).
RMAN> CONFIGURE CONTROLFILE AUTOBACKUP ON;
RMAN> CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '/home/oracle/backup/control/%F';
2.1.4 Configure a retention window of 7 days.
RMAN> CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 7 DAYS;
2.2 Perform a backup.
2.2.1 Perform a backup using your default channel, with compression.
2.2.2 Include all datafiles in the backup.
2.2.3 Include your current control file and spfile.
2.2.4 Include all archive logs, then remove the originals.
RMAN> backup as compressed backupset database include current controlfile plus archivelog delete all input;
3. Flashback Database
3.1 Turn on Flashback Database.
3.1.1 Configure a flash recovery area of 4GB.
3.1.2 Put your flash recovery area in /home/oracle/flash (you may have to create this directory).
SQL> alter system set db_recovery_file_dest_size=4G scope=both;
SQL> alter system set db_recovery_file_dest='/home/oracle/flash' scope=both;
SQL> shutdown immediate;
SQL> startup mount;
(SQL> alter database archivelog;   -- first, if the database is not already in archivelog mode)
SQL> alter database flashback on;
3.2 Leave your database open for review.
SQL> alter database open;
This completes the backup and recovery section.

Section 4: Data Warehouse Management
1. Fast Refreshable Materialized View
1.1 Using the query found in the mview1.txt text file, create a fast refreshable materialized view named PROD_MV in the SH schema.
SELECT time_id, prod_subcategory, SUM(unit_cost), COUNT(unit_cost), COUNT(*)
FROM costs c, products p
WHERE c.prod_id = p.prod_id
GROUP BY time_id, prod_subcategory;
1. First create materialized view logs on the COSTS and PRODUCTS tables:
(Screenshot: after selecting the schema and table, click here.)

2. Create the materialized view:
(Screenshot: enter the SQL statement given in the task.)

(Screenshot: specify the refresh method as required by the task.)
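Fast refresh of an aggregate materialized view such as PROD_MV has documented prerequisites: materialized view logs on every detail table (with SEQUENCE, ROWID and the referenced columns, INCLUDING NEW VALUES), COUNT(*) in the select list, and COUNT(col) alongside each SUM(col). The check below is an illustrative sketch of that SUM/COUNT pairing rule, not an Oracle API:

```python
def fast_refresh_aggregate_ok(aggregates):
    """Check the SUM/COUNT pairing rules for a fast-refreshable aggregate MV.

    aggregates: list of (func, column) tuples, e.g. ("SUM", "unit_cost").
    Requires COUNT(*) plus a matching COUNT(col) for every SUM(col)."""
    aggs = {(f.upper(), c) for f, c in aggregates}
    if ("COUNT", "*") not in aggs:
        return False
    return all(("COUNT", col) in aggs for f, col in aggs if f == "SUM")

# The PROD_MV select list: SUM(unit_cost), COUNT(unit_cost), COUNT(*)
print(fast_refresh_aggregate_ok(
    [("SUM", "unit_cost"), ("COUNT", "unit_cost"), ("COUNT", "*")]))  # True
# Dropping COUNT(unit_cost) would break the fast-refresh requirement:
print(fast_refresh_aggregate_ok([("SUM", "unit_cost"), ("COUNT", "*")]))  # False
```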

CREATE MATERIALIZED VIEW LOG ON costs
  WITH SEQUENCE, ROWID (prod_id, unit_cost, time_id, channel_id, promo_id, unit_price)
  INCLUDING NEW VALUES;
CREATE MATERIALIZED VIEW LOG ON products
  WITH SEQUENCE, ROWID (/* all columns */)
  INCLUDING NEW VALUES;

CREATE MATERIALIZED VIEW PROD_MV
  PCTFREE 0 TABLESPACE demo
  STORAGE (INITIAL 8k NEXT 8k PCTINCREASE 0)
  BUILD IMMEDIATE
  REFRESH FAST
  ENABLE QUERY REWRITE
AS SELECT time_id, prod_subcategory, SUM(unit_cost), COUNT(unit_cost), COUNT(*)
   FROM costs c, products p
   WHERE c.prod_id = p.prod_id
   GROUP BY time_id, prod_subcategory;
2. Creating an Updatable Materialized View
2.1 Using the HR.EMPLOYEES table in the PROD database, create an updatable materialized view in the EMREP database named EMP_UPD_MV consisting of the following columns: EMPLOYEE_ID, FIRST_NAME, LAST_NAME, PHONE_NUMBER, SALARY.
1. Create the database link:
create database link lk_prod connect to hr identified by hr using 'prod';
2. Verify that the database link works:
select EMPLOYEE_ID, FIRST_NAME, LAST_NAME, PHONE_NUMBER, SALARY from hr.employees@lk_prod;
3. Create the updatable materialized view:
(Screenshot: specify the refresh method as required by the task.)

3. Oracle_Loader External Tables
3.1 In the scripts directory you will find prod_master.dat and prod_master.ctl. Using the information found in these files, create an external table named PROD_MASTER in the SH schema of the PROD database.
CREATE TABLE sh.prod_master
( -- define the columns according to prod_master.ctl
)
ORGANIZATION EXTERNAL
( TYPE ORACLE_LOADER
  DEFAULT DIRECTORY dir1
  LOCATION ('prod_master.dat')
)
PARALLEL
REJECT LIMIT UNLIMITED;
4. Oracle_Datapump External Tables
4.1 Create an external table called COUNTRIES_EXT in the PROD database owned by SH, containing the data from the COUNTRY_ID, COUNTRY_NAME, and COUNTRY_REGION columns of the SH.COUNTRIES table.
1. Create the directory:
Create directory dir1 as '/home/oracle/';
Grant read,write on directory dir1 to sh;
2. Create the external table:
CREATE TABLE sh.COUNTRIES_EXT
ORGANIZATION EXTERNAL
( TYPE ORACLE_DATAPUMP
  DEFAULT DIRECTORY dir1
  LOCATION ('COUNTRIES_EXT.dat')
)
PARALLEL
REJECT LIMIT UNLIMITED
AS select COUNTRY_ID, COUNTRY_NAME, COUNTRY_REGION from SH.COUNTRIES;
4.2 Create another external table called COUNTRIES_EXT in the EMREP database, owned by SYSTEM. The source of the data is the external file(s) created in the previous step.
CREATE TABLE system.COUNTRIES_EXT
( COUNTRY_ID     NUMBER,
  COUNTRY_NAME   VARCHAR2(40),
  COUNTRY_REGION VARCHAR2(20)
)
ORGANIZATION EXTERNAL
( TYPE ORACLE_DATAPUMP
  DEFAULT DIRECTORY dir1
  LOCATION ('COUNTRIES_EXT.dat')
)
PARALLEL
REJECT LIMIT UNLIMITED;
This completes the data warehouse management section.

Section 5: Database Management
Transportable Tablespace
1. Use the import utility to import all of the objects contained in the sst.dmp file into the OLTP_USER schema in the PROD database. (The exported user was SST.)
2. Transport a copy of the OLTP tablespace from the PROD database to the EMREP database. After you have completed the task, the OLTP tablespace should be available for both reading and writing in both databases. All of the objects owned by the user OLTP_USER in the PROD database should be present in the EMREP database after the tablespace is transported.
Reference: Administrator's Guide -> 8 Managing Tablespaces -> Transporting Tablespaces Between Databases

Task summary:
1. Use imp to import all objects from sst.dmp into the OLTP_USER schema (the exported user was SST).
2. Transport the OLTP tablespace from PROD to EMREP. After the transport, the OLTP tablespace must be readable and writable in both databases, and every object owned by OLTP_USER in PROD must exist in EMREP.

Preparation: generate the sst.dmp file.
[oracle@rac1 scripts]$ export ORACLE_SID=PROD
[oracle@rac1 scripts]$ sqlplus / as sysdba
SQL*Plus: Release 10.2.0.1.0 - Production on Fri Mar 9 10:17:58 2012
Copyright (c) 1982, 2005, Oracle. All rights reserved.
Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options

SQL> create user sst identified by sst default tablespace oltp;
User created.
SQL> grant dba to sst;
Grant succeeded.
SQL> conn sst/sst
Connected.
SQL> create table t as select * from all_objects;
Table created.

[oracle@rac1 ~]$ exp sst/sst@prod file=sst.dmp
Export: Release 10.2.0.1.0 - Production on Fri Mar 9 10:26:24 2012
Export done in ZHS16GBK character set and AL16UTF16 NCHAR character set
server uses AL32UTF8 character set (possible charset conversion)
About to export specified users ...
. exporting pre-schema procedural objects and actions
. exporting foreign function library names for user SST
. exporting PUBLIC type synonyms
. exporting private type synonyms
. exporting object type definitions for user SST
About to export SST's objects ...
. exporting database links
. exporting sequence numbers
. exporting cluster definitions
. about to export SST's tables via Conventional Path ...
. . exporting table                              T       9612 rows exported
. exporting synonyms
. exporting views
. exporting stored procedures
. exporting operators
. exporting referential integrity constraints
. exporting triggers
. exporting indextypes
. exporting bitmap, functional and extensible indexes
. exporting posttables actions
. exporting materialized views
. exporting snapshot logs
. exporting job queues
. exporting refresh groups and children
. exporting dimensions
. exporting post-schema procedural objects and actions
. exporting statistics
Export terminated successfully without warnings.

SQL> conn / as sysdba
Connected.
SQL> drop user sst cascade;
User dropped.

Steps:
1. Create the oltp_user user.
SQL> create user oltp_user identified by oracle account unlock;
User created.
2. Grant OLTP_USER basic privileges.
SQL> grant connect,resource to oltp_user;
Grant succeeded.
3. Use imp to import sst.dmp into the oltp_user schema.
[oracle@rac1 ~]$ imp system/oracle@prod file=sst.dmp buffer=100000 fromuser=sst touser=oltp_user
Import: Release 10.2.0.1.0 - Production on Fri Mar 9 10:34:51 2012
Export file created by EXPORT:V10.02.01 via conventional path
Warning: the objects were exported by SST, not by you
import done in ZHS16GBK character set and AL16UTF16 NCHAR character set
import server uses AL32UTF8 character set (possible charset conversion)
. importing SST's objects into OLTP_USER
. . importing table                            "T"       9612 rows imported
Import terminated successfully without warnings.

4. Check whether the OLTP tablespace is self-contained (a tablespace is self-contained when no object inside it references an object outside the set of tablespaces being transported).
[oracle@rac1 ~]$ sqlplus sys/oracle@prod as sysdba
SQL> exec dbms_tts.transport_set_check('OLTP',true);
PL/SQL procedure successfully completed.
SQL> SELECT * FROM TRANSPORT_SET_VIOLATIONS;
no rows selected

5. Create the export directory for the transport set.
[oracle@rac1 ~]$ mkdir dir
SQL> create directory dir as '/home/oracle/dir';
Directory created.
6. Place the OLTP tablespace in read-only mode.
SQL> alter tablespace oltp read only;
Tablespace altered.
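The dbms_tts.transport_set_check call above verifies that OLTP is self-contained: no object inside the transport set may depend on an object stored outside it. Conceptually the check works like the sketch below (an illustrative model of the idea, not the Oracle implementation; the object names are made up):

```python
def transport_violations(segments, dependencies, transport_set):
    """Find dependencies that cross the transport-set boundary.

    segments: {object_name: tablespace_name}
    dependencies: list of (object, referenced_object) pairs
    transport_set: set of tablespace names being transported."""
    inside = {o for o, ts in segments.items() if ts in transport_set}
    return [(o, ref) for o, ref in dependencies
            if o in inside and ref not in inside]

segs = {"T": "OLTP", "T_IDX": "OLTP", "LOOKUP": "USERS"}
deps = [("T_IDX", "T")]                    # an index on a table in the same set
print(transport_violations(segs, deps, {"OLTP"}))  # [] -> self-contained
deps.append(("T", "LOOKUP"))               # e.g. an FK to a table outside the set
print(transport_violations(segs, deps, {"OLTP"}))  # [('T', 'LOOKUP')]
```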
7. Generate the transport set for the tablespace.
[oracle@rac1 ~]$ expdp system/oracle@prod dumpfile=oltp.dmp directory=dir TRANSPORT_TABLESPACES=oltp
Export: Release 10.2.0.1.0 - Production on Friday, 09 March, 2012 10:43:44
Starting "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01":  system/********@prod dumpfile=oltp.dmp directory=dir TRANSPORT_TABLESPACES=oltp
Processing object type TRANSPORTABLE_EXPORT/PLUGTS_BLK
Processing object type TRANSPORTABLE_EXPORT/TABLE
Processing object type TRANSPORTABLE_EXPORT/POST_INSTANCE/PLUGTS_BLK
Master table "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYSTEM.SYS_EXPORT_TRANSPORTABLE_01 is:
  /home/oracle/dir/oltp.dmp
Job "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01" successfully completed at 10:43:59

8. Create the oltp_user user in the EMREP database.
[oracle@rac1 ~]$ export ORACLE_SID=EMREP
[oracle@rac1 ~]$ sqlplus sys/oracle@emrep as sysdba
SQL> create user oltp_user identified by oracle;
User created.
9. Grant oltp_user basic privileges.
SQL> grant dba to oltp_user;
Grant succeeded.
10. Create the logical directory for importing the transport set.
SQL> create directory dt_ws5 as '/home/oracle/dir';
Directory created.
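The next step compares block sizes, because a plugged-in tablespace whose block size differs from the target's db_block_size requires the matching db_nK_cache_size buffer pool on the target. A small helper to name the parameter to set (illustrative function of my own; the valid sizes are the documented 2K-32K range):

```python
def cache_parameter(source_block_size, target_db_block_size):
    """Return the db_nK_cache_size parameter the target database must set,
    or None if the tablespace already uses the target's standard block size."""
    if source_block_size == target_db_block_size:
        return None  # the standard buffer cache handles it
    valid = {2048, 4096, 8192, 16384, 32768}
    if source_block_size not in valid:
        raise ValueError("not a supported Oracle block size")
    return f"db_{source_block_size // 1024}k_cache_size"

print(cache_parameter(8192, 8192))    # None  (the OLTP case here: both 8192)
print(cache_parameter(16384, 8192))   # db_16k_cache_size
```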
11. Check that the source and target BLOCK_SIZE match.
Source:
SQL> select block_size from dba_tablespaces where tablespace_name='OLTP';
BLOCK_SIZE
----------
      8192
Target:
SQL> show parameter db_block_size
NAME                                 TYPE        VALUE
------------------------------------ ----------- ------
db_block_size                        integer     8192
If they differ, set the matching buffer cache on the target with a statement such as: alter system set db_4k_cache_size=8M;

12. Copy the datafile of the OLTP tablespace to the EMREP database.
[oracle@rac1 ~]$ cp /u01/app/oracle/oradata/PROD/Disk1/oltp1.dbf /u01/app/oracle/oradata/EMREP/
13. Import the transport set into the EMREP database, registering the OLTP tablespace there.
[oracle@rac1 ~]$ impdp system/oracle@emrep dumpfile='oltp.dmp' directory=dt_ws5 TRANSPORT_DATAFILES='/u01/app/oracle/oradata/EMREP/oltp1.dbf'
Import: Release 10.2.0.1.0 - Production on Friday, 09 March, 2012 11:04:00
Master table "SYSTEM"."SYS_IMPORT_TRANSPORTABLE_01" successfully loaded/unloaded
Starting "SYSTEM"."SYS_IMPORT_TRANSPORTABLE_01":  system/********@emrep dumpfile=oltp.dmp directory=dt_ws5 TRANSPORT_DATAFILES=/u01/app/oracle/oradata/EMREP/oltp1.dbf
Processing object type TRANSPORTABLE_EXPORT/PLUGTS_BLK
Processing object type TRANSPORTABLE_EXPORT/TABLE
Processing object type TRANSPORTABLE_EXPORT/POST_INSTANCE/PLUGTS_BLK
Job "SYSTEM"."SYS_IMPORT_TRANSPORTABLE_01" successfully completed at 11:04:06

14. Set the OLTP tablespace back to read/write in both PROD and EMREP.
[oracle@rac1 ~]$ sqlplus sys/oracle@emrep as sysdba
SQL> alter tablespace oltp read write;
Tablespace altered.
[oracle@rac1 scripts]$ sqlplus sys/oracle@prod as sysdba
SQL> alter tablespace oltp read write;
Tablespace altered.

Create Additional Buffer Cache
1. Create an additional buffer cache within the SGA of the PROD database for use with 16KB blocks. Ensure that the 16KB buffer cache will always be available in the SGA.

Task summary: create an additional 16K buffer cache in the PROD SGA and guarantee that it remains available.
[oracle@rac1 scripts]$ sqlplus sys/oracle@prod as sysdba
SQL> alter system set db_16k_cache_size=16M;
System altered.
SQL> startup force;
ORACLE instance started.
Total System Global Area  314572800 bytes
Fixed Size                  1219184 bytes
Variable Size             138413456 bytes
Database Buffers          171966464 bytes
Redo Buffers                2973696 bytes
Database mounted.
Database opened.

Working with LOB Data
1. Create a new tablespace named LOB_DATA in the PROD database to store LOB data and LOB indexes with the following specifications:
1.1 Create 2 datafiles, each in a different location.
1.2 Each file should be 64MB in size.
1.3 Block size 16KB.
1.4 Determine which type of extent management would be best for LOB data and configure the extents appropriately.
Method 1: create it from the SQL*Plus command line:
create tablespace LOB_DATA
  datafile '/u01/app/oracle/oradata/PROD/Disk1/lob_data01.dbf' size 64M autoextend on next 2M,
           '/u01/app/oracle/oradata/PROD/Disk2/lob_data02.dbf' size 64M autoextend on next 2M
  extent management local uniform size 2M
  segment space management auto
  blocksize 16k;
Save the script above as lob_data.sql, then run it in SQL*Plus:
[oracle@rac1 scripts]$ sqlplus sys/oracle@prod as sysdba
SQL> @lob_data.sql
Tablespace created.

Method 2: create it with the GC graphical interface, then click OK.

Manage Schema Data
1. Create a new table in the HR schema in the PROD database with the following specifications:
1.1 Table name MAGAZINE_ARTICLES
1.2 Tablespace USERS
1.3 Column names:
1.3.1 AUTHOR VARCHAR2(30)
1.3.2 ARTICLE_NAME VARCHAR2(50)
1.3.3 ARTICLE_DATE DATE
1.3.4 ARTICLE_DATA CLOB
1.3.4.1 Tablespace LOB_DATA with a 16KB chunk size, initial and next extents each with a size of 2MB.
1.3.4.2 Use the nocache option and disable storage in row.
1.4 Use import to populate the HR.MAGAZINE_ARTICLES table with data from the exp_mag.dmp file.
Method 1: use the GC graphical interface.
Method 2: from the SQL*Plus command line, log in to PROD as hr and run:
CREATE TABLE MAGAZINE_ARTICLES
( AUTHOR       VARCHAR2(30),
  ARTICLE_NAME VARCHAR2(50),
  ARTICLE_DATE DATE,
  ARTICLE_DATA CLOB
)
TABLESPACE USERS
LOB (ARTICLE_DATA) STORE AS
( TABLESPACE LOB_DATA
  STORAGE (INITIAL 2m NEXT 2m)
  CHUNK 16384
  NOCACHE
  DISABLE STORAGE IN ROW
);
Generate the exp_mag.dmp file:
SQL> insert into hr.magazine_articles values('prince','MY BOOK',sysdate,'I am princeOracle9iOracle9iOracle9i');
1 row created.
SQL> insert into hr.magazine_articles values('prince360','prince',sysdate,'ABCDEFG');
1 row created.
SQL> commit;
Commit complete.
[oracle@rac1 ~]$ expdp hr/oracle@prod directory=dir dumpfile=exp_mag.dmp tables=MAGAZINE_ARTICLES
Export: Release 10.2.0.1.0 - Production on Friday, 09 March, 2012 15:09:59
Starting "HR"."SYS_EXPORT_TABLE_01":  hr/********@prod directory=dir dumpfile=exp_mag.dmp tables=MAGAZINE_ARTICLES
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 6 MB
Processing object type TABLE_EXPORT/TABLE/TABLE
exported "HR"."MAGAZINE_ARTICLES"                5.984 KB       2 rows
Master table "HR"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for HR.SYS_EXPORT_TABLE_01 is:
  /home/oracle/exp_mag.dmp
Job "HR"."SYS_EXPORT_TABLE_01" successfully completed at 15:10:17

Alternatively, with the classic export utility:

[oracle@rac1 scripts]$ exp hr/oracle@prod file=exp_mag.dmp tables=MAGAZINE_ARTICLES

Export: Release 10.2.0.1.0 - Production on Tue Apr 10 14:22:34 2012
Copyright (c) 1982, 2005, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options
Export done in ZHS16GBK character set and AL16UTF16 NCHAR character set
server uses AL32UTF8 character set (possible charset conversion)

About to export specified tables via Conventional Path ...
. . exporting table              MAGAZINE_ARTICLES          2 rows exported
Export terminated successfully without warnings.

SQL> delete HR.MAGAZINE_ARTICLES;
2 rows deleted.

SQL> commit;
Commit complete.

Import using the exp_mag.dmp file:

[oracle@rac1 ~]$ impdp hr/oracle@prod directory=dir dumpfile=exp_mag.dmp tables=MAGAZINE_ARTICLES

Import: Release 10.2.0.1.0 - Production on Friday, 09 March, 2012 15:17:23
Copyright (c) 2003, 2005, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options
Master table "HR"."SYS_IMPORT_TABLE_01" successfully loaded/unloaded
Starting "HR"."SYS_IMPORT_TABLE_01":  hr/********@prod directory=dir dumpfile=exp_mag.dmp tables=MAGAZINE_ARTICLES
Processing object type TABLE_EXPORT/TABLE/TABLE
ORA-39151: Table "HR"."MAGAZINE_ARTICLES" exists.
All dependent metadata and data will be skipped due to table_exists_action of skip
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Job "HR"."SYS_IMPORT_TABLE_01" completed with 1 error(s) at 15:17:29

The table already exists, so nothing was imported. Re-run the import with the extra parameter CONTENT=data_only, and make sure the HR session has logged out of SQL*Plus first, otherwise the import will hang.

[oracle@rac1 ~]$ impdp hr/oracle@prod directory=dir dumpfile=exp_mag.dmp tables=MAGAZINE_ARTICLES CONTENT=data_only

Import: Release 10.2.0.1.0 - Production on Friday, 09 March, 2012 15:46:03
Copyright (c) 2003, 2005, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options
Master table "HR"."SYS_IMPORT_TABLE_01" successfully loaded/unloaded
Starting "HR"."SYS_IMPORT_TABLE_01":  hr/********@prod directory=dir dumpfile=exp_mag.dmp tables=MAGAZINE_ARTICLES CONTENT=data_only
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
. . imported "HR"."MAGAZINE_ARTICLES"                5.984 KB       2 rows
Job "HR"."SYS_IMPORT_TABLE_01" successfully completed at 15:46:07

The import now succeeds.

Alternatively, with the classic import utility:

[oracle@rac1 scripts]$ imp hr/oracle@prod file=exp_mag.dmp tables=MAGAZINE_ARTICLES ignore=y

Import: Release 10.2.0.1.0 - Production on Tue Apr 10 14:32:40 2012
Copyright (c) 1982, 2005, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options

Export file created by EXPORT:V10.02.01 via conventional path
import done in ZHS16GBK character set and AL16UTF16 NCHAR character set
import server uses AL32UTF8 character set (possible charset conversion)
. importing HR's objects into HR
. importing HR's objects into HR
. . importing table      "MAGAZINE_ARTICLES"          2 rows imported
Import terminated successfully without warnings.

Verify:

SQL> select count(*) from MAGAZINE_ARTICLES;

  COUNT(*)
----------
         2

2.
Create a new table in the HR schema in the PROD database with the following specifications:
2.1 Table name ORACLE9I_REFERENCES
2.2 Tablespace USERS
2.3 Table structure:
2.3.1 ORACLE9I_ARTICLE ROWID
2.3.2 INSERT_TIME TIMESTAMP WITH LOCAL TIME ZONE
3. For any row in the HR.MAGAZINE_ARTICLES table that contains three or more references to Oracle9i, insert the corresponding rowid and a timestamp for the time that it was inserted into the ORACLE9I_REFERENCES table.

CREATE TABLE "HR"."ORACLE9I_REFERENCES" (
  "ORACLE9I_ARTICLE" ROWID,
  "INSERT_TIME"      TIMESTAMP WITH LOCAL TIME ZONE)
TABLESPACE "USERS";

Save the script above as oracle9i_references.sql, then run it in SQL*Plus:

SQL> @oracle9i_references.sql
Table created.

SQL> insert into HR.ORACLE9I_REFERENCES
     select ROWID, SCN_TO_TIMESTAMP(ORA_ROWSCN)
     from HR.MAGAZINE_ARTICLES
     where instr(AUTHOR||ARTICLE_NAME||ARTICLE_DATE||ARTICLE_DATA,'Oracle9i',1,3)>0;

1 row created.

INSTR(source string, search string, starting position, occurrence number). For example:

SQL> select instr('Oracle9iOracle9iOracle9iOracle9i','Oracle9i',1,3) from dual;

INSTR('ORACLE9IORACLE9IORACLE9IORACLE9I','ORACLE9I',1,3)
--------------------------------------------------------
                                                      17

The result is the position where the third occurrence of Oracle9i starts, so a value greater than zero means the row contains at least three references.

SQL> SELECT * FROM HR.ORACLE9I_REFERENCES;

ORACLE9I_ARTICLE
------------------
INSERT_TIME
---------------------------------------------------------------------------
AAAClkAAAAAACBRAAA
30-MAR-12 05.07.40.528266 PM

Partitioning

1. Create 5 new tablespaces in the PROD database as follows:
1.1 Use the names DATA01, DATA02, DATA03, DATA04 and DATA05.
1.2 Spread the datafiles across different disk directories.
1.3 Each file should be 250MB in size.
1.4 Use uniform extents of 4MB.
1.5 Block size should be 16KB.

create tablespace data01 datafile '/u01/app/oracle/oradata/PROD/Disk1/data01.dbf' size 250M
autoextend on next 10M
extent management local uniform size 4M
segment space management auto
blocksize 16k;

create tablespace data02 datafile '/u01/app/oracle/oradata/PROD/Disk2/data02.dbf' size 250M
autoextend on next 10M
extent management local uniform size 4M
segment space management auto
blocksize 16k;

create tablespace data03 datafile '/u01/app/oracle/oradata/PROD/Disk3/data03.dbf' size 250M
autoextend on next 10M
extent management local uniform size 4M
segment space management auto
blocksize 16k;

create tablespace data04 datafile '/u01/app/oracle/oradata/PROD/Disk4/data04.dbf' size 250M
autoextend on next 10M
extent management local uniform size 4M
segment space management auto
blocksize 16k;

create tablespace data05 datafile '/u01/app/oracle/oradata/PROD/Disk5/data05.dbf' size 250M
autoextend on next 10M
extent management local uniform size 4M
segment space management auto
blocksize 16k;

Save the script above as partition.sql and run it in SQL*Plus:

[oracle@rac1 scripts]$ sqlplus sys/oracle@prod as sysdba
SQL*Plus: Release 10.2.0.1.0 - Production on Fri Mar 9 16:08:02 2012
Copyright (c) 1982, 2005, Oracle. All rights reserved.

Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options

SQL> @partition.sql

Tablespace created.
Tablespace created.
Tablespace created.
Tablespace created.
Tablespace created.

2. Create a partitioned table named SALES_HISTORY in the SH schema in the PROD database. Use the following specifications:
2.1 The column names and definitions will be the same as the OLTP_USER.SALES table.
2.2 Partition the table into 5 different partitions on the SDATE column using the following specifications:
2.2.1 Partition P1 will contain data for 1998 and should be placed in the DATA01 tablespace
2.2.2 Partition P2 will contain data for 1999 and should be placed in the DATA02 tablespace
2.2.3 Partition P3 will contain data for 2000 and should be placed in the DATA03 tablespace
2.2.4 Partition P4 will contain data for 2001 and should be placed in the DATA04 tablespace
2.2.5 Partition P5 will contain data for 2002 and should be placed in the DATA05 tablespace

First give SH quota on the new tablespaces:

[oracle@rac1 scripts]$ sqlplus sys/oracle@prod as sysdba
SQL*Plus: Release 10.2.0.1.0 - Production on Fri Mar 9 16:32:49 2012
Copyright (c) 1982, 2005, Oracle. All rights reserved.

Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options

SQL> alter user sh
  2  quota unlimited on data01
  3  quota unlimited on data02
  4  quota unlimited on data03
  5  quota unlimited on data04
  6  quota unlimited on data05;

User altered.

Create the partitioned table (see the online documentation: Administrator's Guide => 17 Managing Partitioned Tables and Indexes => Creating Partitioned Tables). For reference, OLTP_USER.SALES is defined as:

create table OLTP_USER.SALES (
  INVC_ID       NUMBER,
  ORDER_ID      NUMBER,
  PROD_ID       NUMBER,
  CUST_ID       NUMBER,
  TIME_ID       DATE,
  SDATE         DATE,
  CHANNEL_ID    NUMBER,
  PROMO_ID      NUMBER,
  QUANTITY_SOLD NUMBER(10,2),
  AMOUNT_SOLD   NUMBER(10,2))

CREATE TABLE SH.SALES_HISTORY
PARTITION BY RANGE (SDATE)
( PARTITION P1 VALUES LESS THAN (TO_DATE('01-JAN-1999','DD-MON-YYYY')) TABLESPACE DATA01,
  PARTITION P2 VALUES LESS THAN (TO_DATE('01-JAN-2000','DD-MON-YYYY')) TABLESPACE DATA02,
  PARTITION P3 VALUES LESS THAN (TO_DATE('01-JAN-2001','DD-MON-YYYY')) TABLESPACE DATA03,
  PARTITION P4 VALUES LESS THAN (TO_DATE('01-JAN-2002','DD-MON-YYYY')) TABLESPACE DATA04,
  PARTITION P5 VALUES LESS THAN (MAXVALUE) TABLESPACE DATA05)
as select * from OLTP_USER.SALES where 1=2;

Save the script above as sales_history.sql and run it in SQL*Plus:

SQL> @sales_history.sql
Table created.

3.
Run the populate_sales_hist.sql script located in the /home/oracle/scripts directory to populate the SALES_HISTORY table with data.

SQL> @/home/oracle/scripts/populate_sales_hist.sql

4. Create a unique index named SALES_HISTORY_PK in the SH schema of the PROD database on the SALES_HISTORY table. Partition the index into 4 partitions with each partition containing approximately the same amount of entries. Use the following specifications:
4.1 Create the index to include the ORDER_ID column.
4.2 Create the index in the INDX tablespace.
4.3 Create the index with parallelism degree 4.

Method 1: use GC, then check the result with Show SQL.
Method 2: from the SQL*Plus command line:

CREATE UNIQUE INDEX "SH"."SALES_HISTORY_PK"
ON "SH"."SALES_HISTORY" ("ORDER_ID")
PARALLEL 4 TABLESPACE "INDX"
GLOBAL PARTITION BY HASH ("ORDER_ID")
( PARTITION "SALES_HISTORY_PK_P1" TABLESPACE "INDX",
  PARTITION "SALES_HISTORY_PK_P2" TABLESPACE "INDX",
  PARTITION "SALES_HISTORY_PK_P3" TABLESPACE "INDX",
  PARTITION "SALES_HISTORY_PK_P4" TABLESPACE "INDX");

5. Create an index named SALES_HISTORY_DATE_IDX in the SH schema on the SALES_HISTORY table. Use the following specifications:
5.1 Create the index on the SDATE column
5.2 Partition the index into 5 partitions that are based on the SDATE column

Method 1: use GC, then check the result with Show SQL.
Method 2: from the SQL*Plus command line:

CREATE INDEX "SH"."SALES_HISTORY_DATE_IDX"
ON "SH"."SALES_HISTORY" ("SDATE")
GLOBAL PARTITION BY HASH ("SDATE")
( PARTITION "SALES_HISTORY_DATE_IDX_P1" TABLESPACE "INDX",
  PARTITION "SALES_HISTORY_DATE_IDX_P2" TABLESPACE "INDX",
  PARTITION "SALES_HISTORY_DATE_IDX_P3" TABLESPACE "INDX",
  PARTITION "SALES_HISTORY_DATE_IDX_P4" TABLESPACE "INDX",
  PARTITION "SALES_HISTORY_DATE_IDX_P5" TABLESPACE "INDX");

6. Query the data in the SALES_HISTORY table to validate the use of each of the indexes.
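Before checking the plans for task 6, it helps to have fresh optimizer statistics on the new table and its indexes; without them the optimizer may not pick the index access paths. A sketch using the standard DBMS_STATS package (the GRANULARITY and CASCADE values here are choices of mine, not part of the task):

```sql
-- Gather global, partition and index statistics for SH.SALES_HISTORY
begin
  dbms_stats.gather_table_stats(
    ownname     => 'SH',
    tabname     => 'SALES_HISTORY',
    granularity => 'ALL',   -- table-level and per-partition statistics
    cascade     => true);   -- also gather statistics on both new indexes
end;
/
```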
SQL> set autot traceonly
SQL> select * from SH.SALES_HISTORY where ORDER_ID=1;

no rows selected

Execution Plan
----------------------------------------------------------
Plan hash value: 480139289

------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                          | Name             | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                   |                  |     1 |   122 |     2   (0)| 00:00:01 |       |       |
|   1 |  PARTITION HASH SINGLE             |                  |     1 |   122 |     2   (0)| 00:00:01 |     4 |     4 |
|   2 |   TABLE ACCESS BY GLOBAL INDEX ROWID| SALES_HISTORY   |     1 |   122 |     2   (0)| 00:00:01 | ROWID | ROWID |
|*  3 |    INDEX UNIQUE SCAN               | SALES_HISTORY_PK |     1 |       |     1   (0)| 00:00:01 |     4 |     4 |
------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   3 - access("ORDER_ID"=1)

Statistics
----------------------------------------------------------
          1  recursive calls
          0  db block gets
          1  consistent gets
          0  physical reads
          0  redo size
        802  bytes sent via SQL*Net to client
        374  bytes received via SQL*Net from client
          1  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          0  rows processed

7. Truncate partition P1 of the SALES_HISTORY and ensure all indexes are available for use both during and after the truncate command completes.

SQL> alter table sh.sales_history truncate partition p1 update global indexes;
Table truncated.

Fine-Grained Auditing

1. Set up FGA on the SALARY and COMMISSION_PCT columns of the EMPLOYEES table in the HR schema of the PROD database. An audit record should be created if either of these two columns are selected as part of the output of a query or are used in the where condition of a select statement and their values are not null.
2.
Validate that the FGA is taking place by executing statements that should result in auditing records being created and by executing statements where no auditing records will be generated. Do not delete your audit records.

See the online documentation: Security Guide => 12 Configuring and Administering Auditing => Policies in Fine-Grained Auditing, and PL/SQL Packages and Types Reference => search for DBMS_FGA => ADD_POLICY Procedure.

begin
  DBMS_FGA.ADD_POLICY(
    OBJECT_SCHEMA     => 'HR',
    OBJECT_NAME       => 'EMPLOYEES',
    POLICY_NAME       => 'SALARY',
    AUDIT_CONDITION   => 'SALARY IS NOT NULL AND COMMISSION_PCT IS NOT NULL',
    AUDIT_COLUMN      => 'SALARY,COMMISSION_PCT',
    STATEMENT_TYPES   => 'SELECT',
    audit_trail       => DBMS_FGA.DB_EXTENDED,
    audit_column_opts => DBMS_FGA.ALL_COLUMNS);
end;
/

Save the script above as FGA.sql, then run it in SQL*Plus:

SQL> @FGA.sql
PL/SQL procedure successfully completed.

Verify:

SQL> conn hr/oracle
Connected.

SQL> select * from employees where SALARY is not null and COMMISSION_PCT is not null;
35 rows selected.

SQL> conn / as sysdba
Connected.

PROD..>select AUDIT_TYPE,OBJECT_SCHEMA,OBJECT_NAME,POLICY_NAME,ACTION,SQL_TEXT from dba_common_audit_trail;

AUDIT_TYPE           OBJEC OBJECT_NAM POLICY ACTION     SQL_TEXT
-------------------- ----- ---------- ------ ---------- ------------------------------
Fine Grained Audit   HR    EMPLOYEES  SALARY            select * from employees where
                                                        SALARY is not null and COMMISS
                                                        ION_PCT is not null

PROD..>select OBJECT_SCHEMA,OBJECT_NAME,POLICY_NAME,SQL_TEXT from dba_fga_audit_trail;

OBJEC OBJECT_NAM POLICY SQL_TEXT
----- ---------- ------ ------------------------------
HR    EMPLOYEES  SALARY select * from employees where
                        SALARY is not null and COMMISS
                        ION_PCT is not null

PROD..>select LSQLTEXT,OBJ$SCHEMA,OBJ$NAME,POLICYNAME from SYS.FGA_LOG$;

LSQLTEXT                                 OBJ$S OBJ$NAME   POLICY
---------------------------------------- ----- ---------- ------
select * from employees where SALARY is  HR    EMPLOYEES  SALARY
not null and COMMISSION_PCT is not null

Flashback

1.
Create a table named ORIGINAL_SALARY in the HR schema in the PROD database that includes the employee_id, commission_pct and salaries of all records in the HR.EMPLOYEES table. Note the date and time and then commit the changes.
2. Delete all employees in the HR.ORIGINAL_SALARY table whose HIRE_DATE is before 1994. Commit the changes. Create a view HR.SALARY_VIEW that will show all the original rows before the deletion.

SQL> create table hr.ORIGINAL_SALARY as select employee_id,commission_pct,salary from hr.EMPLOYEES;
Table created.

SQL> col sys_scn new_value scn
SQL> select current_scn sys_scn from v$database;

   SYS_SCN
----------
    348289

SQL> commit;
Commit complete.

SQL> delete from hr.ORIGINAL_SALARY where EMPLOYEE_ID in
     (select EMPLOYEE_ID from hr.EMPLOYEES
      where HIRE_DATE < to_date('01-JAN-1994','DD-MON-YYYY'));
6 rows deleted.

SQL> commit;
Commit complete.

SQL> create view HR.SALARY_VIEW as select * from hr.ORIGINAL_SALARY as of scn &sys_scn;
Enter value for sys_scn: 348289
old   1: create view HR.SALARY_VIEW as select * from hr.ORIGINAL_SALARY as of scn &sys_scn
new   1: create view HR.SALARY_VIEW as select * from hr.ORIGINAL_SALARY as of scn 348289

View created.

Verify:

SQL> select count(*) from hr.ORIGINAL_SALARY;

  COUNT(*)
----------
       101

SQL> select count(*) from hr.SALARY_VIEW;

  COUNT(*)
----------
       107

Flash back a dropped table:

SQL> drop table hr.ORIGINAL_SALARY;
Table dropped.

SQL> select OWNER,OBJECT_NAME,ORIGINAL_NAME from dba_recyclebin where ORIGINAL_NAME='ORIGINAL_SALARY';

OWNER
------------------------------------------------------------
OBJECT_NAME
------------------------------------------------------------
ORIGINAL_NAME
----------------------------------------------------------------
HR
BIN$uwQZgZEec9jgQKjADAodPA==$0
ORIGINAL_SALARY

SQL> flashback table HR."BIN$uwQZgZEec9jgQKjADAodPA==$0" to before drop rename to ORIGINAL_SALARY2;
Flashback complete.

Verify:

SQL> SELECT COUNT(*) FROM ORIGINAL_SALARY2;

  COUNT(*)
----------
       101

This completes the database management section.

Section 6: Database Performance Management

Instance Configuration

1. Configure your database to record checkpoints in the alert log file.
2.
Ensure all user trace files are placed in the USER_DUMP_DEST location.

First run "show parameter alert" and check whether log_checkpoints_to_alert is TRUE. If not:

alter system set log_checkpoints_to_alert=true;

Then run "show parameter user_dump" and check that the path exists.

Set Up and Configure Resource Manager

1. Set up and configure Resource Manager using the following specifications:
1.1 Assign the user SH as the resource administrator.
1.2 Create two resource manager consumer groups, OLTP and DSS. (Use comments with each to denote what the objects will be used for.)
1.3 Create a plan named WEEKDAYS with the following directives only:
1.3.1 For the OLTP group, we cannot allow more than 20 active sessions. If the 21st user attempts an activity, the request should be aborted if the wait exceeds 60 seconds.
1.3.2 Set the maximum number of active sessions for the DSS group to 5. If more than 5 sessions are requested, then the request should abort at 120 seconds.
1.3.3 The maximum execution time for a query for a session in the OLTP group should be set to 5 seconds. If the query is estimated to take longer than 5 seconds the session should be automatically switched to the DSS group.
1.3.4 The maximum amount of undo that the OLTP group can use should be set to 200MB.
1.3.5 Set CPU ratios for OLTP, DSS and OTHER_GROUPS to 50, 30 and 20 respectively.
1.3.6 The DSS group has a parallel degree limit of 20.
1.3.7 Make sure that an idle OLTP session cannot block a DML statement for more than 60 seconds.
1.4 Assign the default consumer group for the OLTP_USER to the OLTP group.
1.5 Assign the default consumer group for the SH user to the DSS group.
1.6 Specify that the WEEKDAYS plan be used by the instance as the default.

This part tests the Database Resource Manager; see the Database Administrator's Guide, Chapter 24. The task asks for one resource plan and two consumer groups, with the directive parameters set according to the requirements above. The relevant script follows (the create_consumer_group calls are required before the directives can reference the OLTP and DSS groups, and switch_estimate is set to TRUE for the OLTP directive because 1.3.3 switches on the *estimated* execution time):

BEGIN
  dbms_resource_manager.clear_pending_area();
  dbms_resource_manager.create_pending_area();
  -- 1.2: the two consumer groups, with comments describing their use
  dbms_resource_manager.create_consumer_group('OLTP','sessions doing OLTP work');
  dbms_resource_manager.create_consumer_group('DSS','sessions doing DSS work');
  dbms_resource_manager.create_plan('WEEKDAYS','');
  dbms_resource_manager.create_plan_directive(
    plan => 'WEEKDAYS', group_or_subplan => 'DSS', comment => '',
    cpu_p1 => 30,                         -- 1.3.5
    cpu_p2 => NULL, cpu_p3 => NULL, cpu_p4 => NULL,
    cpu_p5 => NULL, cpu_p6 => NULL, cpu_p7 => NULL, cpu_p8 => NULL,
    parallel_degree_limit_p1 => 20,       -- 1.3.6
    active_sess_pool_p1 => 5,             -- 1.3.2
    queueing_p1 => 120,                   -- 1.3.2
    switch_group => NULL, switch_time => NULL, switch_estimate => false,
    max_est_exec_time => NULL, undo_pool => NULL, max_idle_time => NULL,
    max_idle_blocker_time => NULL, switch_time_in_call => NULL);
  dbms_resource_manager.create_plan_directive(
    plan => 'WEEKDAYS', group_or_subplan => 'OLTP', comment => '',
    cpu_p1 => 50,                         -- 1.3.5
    cpu_p2 => NULL, cpu_p3 => NULL, cpu_p4 => NULL,
    cpu_p5 => NULL, cpu_p6 => NULL, cpu_p7 => NULL, cpu_p8 => NULL,
    parallel_degree_limit_p1 => NULL,
    active_sess_pool_p1 => 20,            -- 1.3.1
    queueing_p1 => 60,                    -- 1.3.1
    switch_group => 'DSS',                -- 1.3.3
    switch_time => 5,
    switch_estimate => true,              -- switch based on the estimated time
    max_est_exec_time => 5,
    undo_pool => 204800,                  -- 1.3.4: 200MB, expressed in KB
    max_idle_time => NULL,
    max_idle_blocker_time => 60,          -- 1.3.7
    switch_time_in_call => NULL);
  dbms_resource_manager.create_plan_directive(
    plan => 'WEEKDAYS', group_or_subplan => 'OTHER_GROUPS', comment => '',
    cpu_p1 => 20,                         -- 1.3.5
    cpu_p2 => NULL, cpu_p3 => NULL, cpu_p4 => NULL,
    cpu_p5 => NULL, cpu_p6 => NULL, cpu_p7 => NULL, cpu_p8 => NULL,
    parallel_degree_limit_p1 => NULL, active_sess_pool_p1 => NULL,
    queueing_p1 => NULL, switch_group => NULL, switch_time => NULL,
    switch_estimate => false, max_est_exec_time => NULL, undo_pool => NULL,
    max_idle_time => NULL, max_idle_blocker_time => NULL,
    switch_time_in_call =>
NULL);
  dbms_resource_manager.submit_pending_area();
  dbms_resource_manager.switch_plan( plan_name => 'WEEKDAYS', sid => 'PROD' );
END;
/

The remaining requirements (1.1, 1.4 and 1.5) can be covered with the standard DBMS_RESOURCE_MANAGER_PRIVS calls; a sketch:

BEGIN
  -- 1.1: make SH a Resource Manager administrator
  dbms_resource_manager_privs.grant_system_privilege(grantee_name => 'SH');
  -- 1.4 / 1.5: allow the switch, then set the initial consumer groups
  dbms_resource_manager_privs.grant_switch_consumer_group('OLTP_USER','OLTP',FALSE);
  dbms_resource_manager_privs.grant_switch_consumer_group('SH','DSS',FALSE);
  dbms_resource_manager.set_initial_consumer_group('OLTP_USER','OLTP');
  dbms_resource_manager.set_initial_consumer_group('SH','DSS');
END;
/

Manage Instance Memory Structures

1. Create a view owned by user SYS that lists the packages, procedures, triggers and functions that are in memory and occupy more than 50KB. The view should be named LARGE_PROC and visible to all users through a public synonym named LARGE_PROC.
2. Set your maximum SGA to 512MB. Turn on Automatic Shared Memory Management. Restart the instance after specifying.
3. Your developers notify you that they will need the Java Pool set to a minimum of 200MB.
4. Limit the total amount of PGA that can be used on an instance-wide basis to 150MB.

For the synonym, search for the synonym keyword in the Database Administrator's Guide. Question 1 asks for a view of the objects resident in shared memory that exceed 50KB (packages, procedures, triggers and functions), plus a public synonym so every user can see it. Questions 2-4 adjust the memory parameters to the stated requirements.

create view LARGE_PROC as
select * from V$DB_OBJECT_CACHE
where TYPE in ('PACKAGE','PROCEDURE','TRIGGER','FUNCTION')
and SHARABLE_MEM > 51200;

CREATE PUBLIC SYNONYM LARGE_PROC FOR SYS.LARGE_PROC;
grant select on LARGE_PROC to public;

alter system set sga_max_size=512M scope=spfile;
alter system set sga_target=400M;
alter system set java_pool_size=200M;
alter system set pga_aggregate_target=150M;

Manage Objects for Performance

1. Our application needs to access the CUST_LAST_NAME column in the CUSTOMERS table in the SH schema. The problem is that the users can supply names without regard to case. The application changes all the user-supplied names to uppercase. Analysis reveals that a normal index we place on that column is not used. Create an index on the aforementioned column that will be usable by the application. (Create a function-based index; see the Database Administrator's Guide, Chapter 16.)

create index cust_last_name_idx on CUSTOMERS(upper(CUST_LAST_NAME));

2.
Turn on monitoring for all of the indexes on the SALES table in the OLTP_USER schema. (Monitor a table's index usage; see the Database Administrator's Guide, Chapter 16.)

select index_name from dba_indexes where owner='OLTP_USER' and table_name='SALES';

Then, for each index returned:

alter index oltp_user.index_name monitoring usage;

Run "select * from v$object_usage;" as the OLTP_USER user to check the monitoring results.

3. Create two new tables named STUDENTS and ATTENDEES in the OLTP_USER schema. The STUDENTS table will contain three columns. STUD_ID will be a number and the primary key. FNAME and LNAME will be the other two columns and may vary in length with a maximum of 20 characters. ATTENDEES will be an intersection table in a many-to-many relationship between the STUDENTS and CLASSES tables also in the OLTP_USER schema. The ATTENDEES table will contain the primary keys of each of the other tables as its primary key. Create the ATTENDEES table so that the primary key index and the table itself are the same object. (An ordinary table plus an index-organized table; see the Database Administrator's Guide, Chapter 15.)

CREATE TABLE STUDENTS (
  STUD_ID NUMBER,
  FNAME   VARCHAR2(20),
  LNAME   VARCHAR2(20),
  PRIMARY KEY (STUD_ID) VALIDATE);

CREATE TABLE ATTENDEES (
  STUD_ID  NUMBER,
  CLASS_ID NUMBER,
  PRIMARY KEY (STUD_ID, CLASS_ID) VALIDATE,
  FOREIGN KEY (STUD_ID) REFERENCES STUDENTS (STUD_ID) VALIDATE,
  FOREIGN KEY (CLASS_ID) REFERENCES CLASSES (CLASS_ID) VALIDATE)
ORGANIZATION INDEX;

4. Because of the unevenly distributed data in the DEPARTMENT_ID column of the EMPLOYEES table of the HR schema, you need to supply more information to the optimizer to allow for more efficient use of indexes. Regenerate statistics on the EMPLOYEES table to solve this problem. (Gather table statistics; see the Database Administrator's Guide, Chapter 13.)

BEGIN
  dbms_stats.gather_table_stats(ownname=>'HR', tabname=>'EMPLOYEES');
END;
/

5.
Analysis has revealed that the COUNTRY_ID column of the CUSTOMERS table of the SH schema has very low cardinality. This column is never updated. Create an index that can take advantage of the above attributes of this column. (Create a bitmap index.)

create bitmap index idx_customers_bmp on CUSTOMERS(COUNTRY_ID);

6. Create an index on the COUNTRY_ID and CUST_CITY columns in the CUSTOMERS table of the SH schema. The application requires that the leading column of this index must be the COUNTRY_ID column. Take advantage of the Oracle feature of indexes that allows the creation of the index to use less space when the leading column is not the most unique. (Create a composite compressed index; see the Database Administrator's Guide, Chapter 16.)

create index country_id_cust_city on CUSTOMERS(COUNTRY_ID, CUST_CITY) PARALLEL COMPRESS 1;

7. Make certain that the package named STANDARD is always kept in memory. (See the Database PL/SQL Packages and Types Reference, Chapter 97.)

execute dbms_shared_pool.keep('SYS.STANDARD');

8. Analysis reveals that a 3rd party application is not using bind variables and has skewed data. In addition, we find shared pool latch contention. Find the best solution to reduce shared pool usage. (Search for cursor_sharing in the Database Reference.)

alter system set cursor_sharing=similar;

*9. Rebuild the tablespace FUST_TBS, which contains the table NEW_FUST, as an ASSM tablespace.

First check what objects the FUST_TBS tablespace contains:

select SEGMENT_NAME, SEGMENT_TYPE from dba_segments where TABLESPACE_NAME='FUST_TBS';

Then move the tables found to another tablespace, and rebuild their indexes into another tablespace. Reference commands:

alter table table_name move tablespace tablespace_name;
alter index index_name rebuild tablespace tablespace_name;

Then drop the FUST_TBS tablespace, recreate FUST_TBS as an ASSM tablespace, and move the tables and indexes back the same way.

*10. Fix the execution plan of a SQL statement with a stored outline.

First unlock the outln account, then:

alter system set create_stored_outlines=true;    (true means the default category is used)

Next grant the outline privilege to the user that will create it:

grant create any outline to user_name;

Then create the outline as that user:

CREATE OUTLINE salaries ON SELECT last_name, salary FROM employees;

Finally:

ALTER SESSION SET USE_STORED_OUTLINES = true;

Using STATSPACK

1.
Install the STATSPACK package.
1.1 Assign the TOOLS tablespace as the default tablespace for the PERFSTAT user.
1.2 Assign the TEMP1 tablespace as the temporary tablespace for the PERFSTAT user.
2. Generate an initial set of statistics using STATSPACK, making certain that timing data and segment-level statistics are included. Specify a comment for the statistics: MANUAL.
3. Collect statistics using STATSPACK every five minutes for 15 minutes and then remove the job. During this interval run the oltp_workload.sql script.
4. Generate a STATSPACK report using any two sets of stats. Name the report statspack.1st and place it in the /home/oracle directory.

In SQL*Plus, run @?/rdbms/admin/spcreate.sql to create STATSPACK, then check for errors raised during creation:

!grep error *.lis

Take the manual snapshot (level 7 includes segment-level statistics; see spdoc.txt):

execute statspack.snap(i_snap_level=>7, i_ucomment=>'MANUAL');

Edit spauto.sql and change:

variable jobno number;
variable instno number;
begin
  select instance_number into :instno from v$instance;
  dbms_job.submit(:jobno, 'statspack.snap;', trunc(sysdate+1/24,'HH'), 'trunc(SYSDATE+1/24,''HH'')', TRUE, :instno);
  commit;
end;

to:

variable jobno number;
variable instno number;
begin
  select instance_number into :instno from v$instance;
  dbms_job.submit(:jobno, 'statspack.snap;', trunc(sysdate+1/24/60*5,'MI'), 'trunc(SYSDATE+1/24/60*5,''MI'')', TRUE, :instno);
  commit;
end;

This takes a snapshot every 5 minutes. Note: check "show parameter job_queue_processes"; the job_queue_processes parameter must not be 0. After editing, run the modified script in SQL*Plus along with the supplied oltp_workload.sql. After 15 minutes, remove the job the script created:

select job from dba_jobs;           (to find the job_id)
execute dbms_ijob.remove(job_id);

Finally, run @?/rdbms/admin/spreport.sql in SQL*Plus and pick any two of the three snapshots generated by the job to produce the report. Save it in the /home/oracle directory as statspack.1st.

This completes the database performance management section.

Section 8: Data Guard

Overall flow: build a Data Guard configuration, perform two switchovers in maximum protection mode, and run the corresponding scripts before each switchover.

1.
Configure the LISTENER so that the standby database (PRODSTD) is also registered in the default LISTENER:

SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (GLOBAL_DBNAME = emrep)
      (ORACLE_HOME = /home/oracle/prodect/10.2.0/db_1)
      (SID_NAME = emrep)
    )
    (SID_DESC =
      (GLOBAL_DBNAME = prod)
      (ORACLE_HOME = /home/oracle/prodect/10.2.0/db_1)
      (SID_NAME = prod)
    )
    (SID_DESC =
      (GLOBAL_DBNAME = prodstd)
      (ORACLE_HOME = /home/oracle/prodect/10.2.0/db_1)
      (SID_NAME = prodstd)
    )
  )
LISTENER =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = celty.localdomain)(PORT = 1521))
  )

2. The database must be in archivelog mode and in FORCE LOGGING mode:

SQL> select FORCE_LOGGING from v$database;
FOR
---
NO

SQL> archive log list;
Database log mode              Archive Mode
Automatic archival             Enabled
Archive destination            /home/oracle/oradata/prod/arc/
Oldest online log sequence     17
Next log sequence to archive   21
Current log sequence           21

3. Append the modified parameters to the end of initPROD.ora (see the Data Guard Concepts and Administration guide, 3 Creating a Physical Standby Database). Roughly as follows.

Primary (PROD):

DB_UNIQUE_NAME=PROD
LOG_ARCHIVE_CONFIG='DG_CONFIG=(PROD,PRODSTD)'
LOG_ARCHIVE_DEST_1='LOCATION=/home/oracle/oradata/PROD/arc/ VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=PROD'
LOG_ARCHIVE_DEST_2='SERVICE=PRODSTD LGWR SYNC AFFIRM VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=PRODSTD'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_STATE_2=ENABLE
FAL_SERVER=PRODSTD
FAL_CLIENT=PROD
DB_FILE_NAME_CONVERT='PRODSTD','PROD'
LOG_FILE_NAME_CONVERT='PRODSTD','PROD'
STANDBY_FILE_MANAGEMENT=AUTO

3.2 Copy initPROD.ora as initPRODSTD.ora:

cd $ORACLE_HOME/dbs
cp initPROD.ora initPRODSTD.ora

3.3 Modify the parameters in initPRODSTD.ora as follows.

Standby (PRODSTD):

DB_UNIQUE_NAME=PRODSTD
LOG_ARCHIVE_CONFIG='DG_CONFIG=(PRODSTD,PROD)'
LOG_ARCHIVE_DEST_1='LOCATION=/home/oracle/oradata/PRODSTD/arc/ VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=PRODSTD'
LOG_ARCHIVE_DEST_2='SERVICE=PROD LGWR SYNC AFFIRM VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=PROD'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_STATE_2=ENABLE
FAL_SERVER=PROD
FAL_CLIENT=PRODSTD
DB_FILE_NAME_CONVERT='
PROD','PRODSTD'
LOG_FILE_NAME_CONVERT='PROD','PRODSTD'
STANDBY_FILE_MANAGEMENT=AUTO

4. Configure the password file:

cd $ORACLE_HOME/dbs
orapwd file=orapwprodstd password=oracle

5. Bring PROD up to the MOUNT state and copy the files:

[oracle@celty oradata]$ sqlplus / as sysdba
SQL*Plus: Release 10.2.0.1.0 - Production on Thu May 10 10:58:23 2012
Copyright (c) 1982, 2005, Oracle. All rights reserved.

Connected to an idle instance.

SQL> startup mount
ORACLE instance started.

Total System Global Area  314572800 bytes
Fixed Size                  1219184 bytes
Variable Size              96470416 bytes
Database Buffers          213909504 bytes
Redo Buffers                2973696 bytes
Database mounted.
SQL> !
[oracle@celty oradata]$ cp -R prod prodstd    (copy all the files)

6. Create the standby control file:

alter database create standby controlfile as '/home/oracle/oradata/prodstd/control01.ctl';

There must be the same number of control files as before:

cp /home/oracle/oradata/prodstd/control01.ctl /home/oracle/oradata/prodstd/control02.ctl
cp /home/oracle/oradata/prodstd/control01.ctl /home/oracle/oradata/prodstd/control03.ctl

7. Bring PRODSTD up to the MOUNT state as well, and add STANDBY LOGFILE groups.

On PROD:

alter database add standby logfile
group '/home/oracle/oradata/prod/stdredo01.log' size 100M, ...

On PRODSTD:

alter database add standby logfile
group '/home/oracle/oradata/prodstd/stdredo01.log' size 100M, ...

(Note: each standby logfile must be the same size as the database's online logfiles, and there must be one more standby group than the original number of groups.)

Check whether the setup succeeded:

select name from v$datafile;
select member from v$logfile;

8. Now switch PROD to maximum protection mode (this command can only be run in the MOUNT state):

SQL> ALTER DATABASE SET STANDBY DATABASE TO MAXIMIZE PROTECTION;

Open the PROD database to the OPEN state and run the q1.sql script:

SQL> @q1.sql

9. Perform the switchover on PROD:

SQL> alter database commit to switchover to physical standby;

Immediately after it succeeds, shutdown immediate and then startup mount. PROD's database_role has now become PHYSICAL STANDBY. Run one recovery pass on PRODSTD:

SQL> alter database recover managed standby database disconnect from session;

After the background recovery finishes:

SQL> alter database commit to switchover to primary;

PRODSTD's database_role is now PRIMARY. Run the q2.sql script:

SQL> @q2.sql

10.
Perform the switchover on PRODSTD:

SQL> alter database commit to switchover to physical standby;

Immediately after it succeeds, shutdown immediate and then startup mount. PRODSTD's database_role has now become PHYSICAL STANDBY. Run one recovery pass on PROD:

SQL> alter database recover managed standby database disconnect from session;

After the background recovery finishes:

SQL> alter database commit to switchover to primary;

PROD's database_role is now PRIMARY. Run the q3.sql script:

SQL> @q3.sql

11. Run one more recovery pass on PRODSTD:

SQL> alter database recover managed standby database disconnect from session;

Wait for the background recovery to finish. This completes the Data Guard section.
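After the final switchover, the roles and redo transport can be verified from SQL*Plus on each side. A sketch using the standard dynamic performance views (PROD should report PRIMARY and PRODSTD should report PHYSICAL STANDBY):

```sql
-- Run on both PROD and PRODSTD to confirm each database's role
select name, database_role, protection_mode, open_mode
from v$database;

-- On the standby: confirm the RFS and MRP processes are receiving
-- and applying redo
select process, status, sequence#
from v$managed_standby;
```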
