This topic describes the procedures for patching and updating various components in Exadata Cloud Service outside of the cloud automation.

For information about patching and updating with dbaascli, refer to "Patching Oracle Grid Infrastructure and Oracle Databases Using dbaascli". For more guidance on achieving continuous service during patching operations, see the Application Checklist for Continuous Service for MAA Solutions white paper.
For daylight saving time (DST) updates, and for some non-routine or one-off patches, it can be necessary for you to patch software manually.

To perform routine patching of Oracle Database and Oracle Grid Infrastructure software, Oracle recommends that you use the facilities provided by Oracle Exadata Database Service on Dedicated Infrastructure. However, under some circumstances, it can be necessary for you to patch the Oracle Database or Oracle Grid Infrastructure software manually:
- Daylight Saving Time (DST) Patching: Because they cannot be applied in a rolling fashion, patches for the Oracle Database DST definitions are not included in the routine patch sets for Exadata Cloud Infrastructure. If you need to apply patches to the Oracle Database DST definitions, you must do so manually. See My Oracle Support Doc ID 412160.1. (A query for checking the current time zone file version appears after this list.)
- Non-routine or One-off Patching: If you encounter a problem that requires a patch that is not included in any routine patch set, then work with Oracle Support Services to identify and apply the appropriate patch.
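Before you patch DST definitions manually, it helps to confirm which time zone file version each database currently uses. The following query is a standard way to check this; it is not part of a documented procedure here, and the node name in the prompt simply follows the example environment used later in this topic:

[root@node1 ~]# su - oracle
[oracle@node1 ~]$ sqlplus / as sysdba
SQL> SELECT version FROM v$timezone_file;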
For general information about patching Oracle Database, refer to the information about patch set updates and requirements in the Oracle Database Upgrade Guide for your release.
Related Topics
- https://support.oracle.com/epmos/faces/DocumentDisplay?cmd=show&type=NOT&id=1929745.1
- https://support.oracle.com/epmos/faces/DocumentDisplay?cmd=show&type=NOT&id=412160.1
You update the operating systems of Exadata compute nodes by using the patchmgr tool. This utility manages the entire update of one or more compute nodes remotely, including running pre-reboot, reboot, and post-reboot steps. You can run the utility from either an Exadata compute node or a non-Exadata server running Oracle Linux. The server on which you run the utility is known as the "driving system." You cannot use the driving system to update itself. Therefore, if the driving system is one of the Exadata compute nodes on a system you are updating, you must run a separate operation on a different driving system to update that server.
The following two scenarios describe typical ways of performing the updates:

Scenario 1: Non-Exadata Driving System

The simplest way to update the Exadata system is to use a separate Oracle Linux server to update all Exadata compute nodes in the system.

Scenario 2: Exadata Node Driving System

You can use one Exadata compute node to drive the updates for the rest of the compute nodes in the system, and then use one of the updated nodes to drive the update on the original Exadata driver node.

For example: You are updating a half rack Exadata system, which has four compute nodes: node1, node2, node3, and node4. First, use node1 to drive the updates of node2, node3, and node4. Then, use node2 to drive the update of node1.
The driving system requires root user SSH access to each compute node that the utility will update.
This section covers the following topics:

- Preparing for the OS Updates: Determine the latest software version available, and verify connectivity to the proper YUM repository.
- To update the OS on all compute nodes of an Exadata Cloud Infrastructure instance: The procedure to update all compute nodes using patchmgr.
- Installing Additional Operating System Packages: Guidelines to review before you install additional operating system packages for Oracle Exadata Database Service on Dedicated Infrastructure.
Preparing for the OS Updates

Before you begin your updates, review Exadata Cloud Service Software Versions (Doc ID 2333222.1) to determine the latest software version and the target version to use.

Some steps in the update process require you to specify a YUM repository. The YUM repository URL is:

http://yum-<region_identifier>.oracle.com/repo/EngineeredSystems/exadata/dbserver/<latest_version>/base/x86_64
Region identifiers are text strings used to identify Oracle Cloud Infrastructure regions (for example, us-phoenix-1). You can find a complete list of region identifiers in Regions.
You can run the following curl command to determine the latest version of the YUM repository for your Exadata Cloud Service instance region:

curl -s -X GET http://yum-<region_identifier>.oracle.com/repo/EngineeredSystems/exadata/dbserver/ | egrep "18.1."
This example returns the most current version of the YUM repository for the US West (Phoenix) region:

curl -s -X GET http://yum-us-phoenix-1.oracle.com/repo/EngineeredSystems/exadata/dbserver/ | egrep "18.1."
<a href="18.1.4.0.0/">18.1.4.0.0/</a>    01-Mar-2018 03:36    -
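The following is a minimal sketch of how you might capture the latest version and assemble the full repository URL in one pass. The REGION_ID value and the "18.1" version filter are taken from the example above, so adjust both for your region and release:

REGION_ID=us-phoenix-1
LATEST=$(curl -s http://yum-${REGION_ID}.oracle.com/repo/EngineeredSystems/exadata/dbserver/ \
  | egrep -o "18\.1\.[0-9]+\.[0-9]+\.[0-9]+" | sort -uV | tail -1)
YUM_REPO=http://yum-${REGION_ID}.oracle.com/repo/EngineeredSystems/exadata/dbserver/${LATEST}/base/x86_64
echo ${YUM_REPO}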
To apply OS updates, the system's VCN must be configured to allow access to the YUM repository. For more information, see Option 2: Service Gateway to Both Object Storage and YUM Repos.
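Before you start, you can optionally confirm that the repository is reachable from the driving system. This is a minimal sketch, assuming the ${YUM_REPO} URL constructed above and a standard YUM repository layout; an HTTP status of 200 indicates that the VCN configuration allows access:

curl -s -o /dev/null -w "%{http_code}\n" ${YUM_REPO}/repodata/repomd.xml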
To update the OS on all compute nodes of an Exadata Cloud Infrastructure instance

In the following example, the target version is 18.1.4.0.0.180125.3. The system has two compute nodes, node1 and node2, and each node is used as the driving system for the update of the other.
1. Gather the environment details.

- SSH to node1 as root, and run the following command to determine the version of Exadata:

[root@node1]# imageinfo -ver
12.2.1.1.4.171128
- Switch to the grid user, and identify all compute nodes in the cluster:

[root@node1]# su - grid
[grid@node1]$ olsnodes
node1
node2
2. Configure the driving system.

- Switch back to the root user on node1, and check whether a root SSH key pair (id_rsa and id_rsa.pub) already exists. If not, then generate it:

[root@node1 .ssh]# ls /root/.ssh/id_rsa*
ls: cannot access /root/.ssh/id_rsa*: No such file or directory
[root@node1 .ssh]# ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
93:47:b0:83:75:f2:3e:e6:23:b3:0a:06:ed:00:20:a5 root@node1
The key's randomart image is:
+--[ RSA 2048]----+
|o.. + . |
|o. o * |
|E . o o |
| . . = |
| o . S = |
| + = . |
| + o o |
| . . + . |
| ... |
+-----------------+
- Distribute the public key to the target nodes, and verify this step. In this example, the only target node is node2:

[root@node1 .ssh]# scp -i ~opc/.ssh/id_rsa ~root/.ssh/id_rsa.pub opc@node2:/tmp/id_rsa.node1.pub
id_rsa.pub

[root@node2 ~]# ls -al /tmp/id_rsa.node1.pub
-rw-r--r-- 1 opc opc 442 Feb 28 03:33 /tmp/id_rsa.node1.pub
[root@node2 ~]# date
Wed Feb 28 03:33:45 UTC 2018
- On the target node (node2, in this example), add the root public key of node1 to the root authorized_keys file:

[root@node2 ~]# cat /tmp/id_rsa.node1.pub >> ~root/.ssh/authorized_keys
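Optionally, verify that passwordless root SSH from the driving system now works. This is a quick sanity check rather than part of the documented procedure; the patchmgr precheck later also verifies SSH equivalence:

[root@node1 ~]# ssh root@node2 hostname
node2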
- Download dbserver.patch.zip as p21634633_12*_Linux-x86-64.zip onto the driving system (node1, in this example), and unzip it. For information about the files in this zip, see dbnodeupdate.sh and dbserver.patch.zip: Updating Exadata Database Server Software using the DBNodeUpdate Utility and patchmgr (Doc ID 1553103.1).

[root@node1 ~]# mkdir /root/patch
[root@node1 ~]# cd /root/patch
[root@node1 patch]# unzip p21634633_181400_Linux-x86-64.zip
Archive: p21634633_181400_Linux-x86-64.zip
creating: dbserver_patch_5.180228.2/
creating: dbserver_patch_5.180228.2/ibdiagtools/
inflating: dbserver_patch_5.180228.2/ibdiagtools/cable_check.pl
inflating: dbserver_patch_5.180228.2/ibdiagtools/setup-ssh
inflating: dbserver_patch_5.180228.2/ibdiagtools/VERSION_FILE
extracting: dbserver_patch_5.180228.2/ibdiagtools/xmonib.sh
inflating: dbserver_patch_5.180228.2/ibdiagtools/monitord
inflating: dbserver_patch_5.180228.2/ibdiagtools/checkbadlinks.pl
creating: dbserver_patch_5.180228.2/ibdiagtools/topologies/
inflating: dbserver_patch_5.180228.2/ibdiagtools/topologies/VerifyTopologyUtility.pm
inflating: dbserver_patch_5.180228.2/ibdiagtools/topologies/verifylib.pm
inflating: dbserver_patch_5.180228.2/ibdiagtools/topologies/Node.pm
inflating: dbserver_patch_5.180228.2/ibdiagtools/topologies/Rack.pm
inflating: dbserver_patch_5.180228.2/ibdiagtools/topologies/Group.pm
inflating: dbserver_patch_5.180228.2/ibdiagtools/topologies/Switch.pm
inflating: dbserver_patch_5.180228.2/ibdiagtools/topology-zfs
inflating: dbserver_patch_5.180228.2/ibdiagtools/dcli
creating: dbserver_patch_5.180228.2/ibdiagtools/netcheck/
inflating: dbserver_patch_5.180228.2/ibdiagtools/netcheck/remoteScriptGenerator.pm
inflating: dbserver_patch_5.180228.2/ibdiagtools/netcheck/CommonUtils.pm
inflating: dbserver_patch_5.180228.2/ibdiagtools/netcheck/SolarisAdapter.pm
inflating: dbserver_patch_5.180228.2/ibdiagtools/netcheck/LinuxAdapter.pm
inflating: dbserver_patch_5.180228.2/ibdiagtools/netcheck/remoteLauncher.pm
inflating: dbserver_patch_5.180228.2/ibdiagtools/netcheck/remoteConfig.pm
inflating: dbserver_patch_5.180228.2/ibdiagtools/netcheck/spawnProc.pm
inflating: dbserver_patch_5.180228.2/ibdiagtools/netcheck/runDiagnostics.pm
inflating: dbserver_patch_5.180228.2/ibdiagtools/netcheck/OSAdapter.pm
inflating: dbserver_patch_5.180228.2/ibdiagtools/SampleOutputs.txt
inflating: dbserver_patch_5.180228.2/ibdiagtools/infinicheck
inflating: dbserver_patch_5.180228.2/ibdiagtools/ibping_test
inflating: dbserver_patch_5.180228.2/ibdiagtools/tar_ibdiagtools
inflating: dbserver_patch_5.180228.2/ibdiagtools/verify-topology
inflating: dbserver_patch_5.180228.2/installfw_exadata_ssh
creating: dbserver_patch_5.180228.2/linux.db.rpms/
inflating: dbserver_patch_5.180228.2/md5sum_files.lst
inflating: dbserver_patch_5.180228.2/patchmgr
inflating: dbserver_patch_5.180228.2/xcp
inflating: dbserver_patch_5.180228.2/ExadataSendNotification.pm
inflating: dbserver_patch_5.180228.2/ExadataImageNotification.pl
inflating: dbserver_patch_5.180228.2/kernelupgrade_oldbios.sh
inflating: dbserver_patch_5.180228.2/cellboot_usb_pci_path
inflating: dbserver_patch_5.180228.2/exadata.img.env
inflating: dbserver_patch_5.180228.2/README.txt
inflating: dbserver_patch_5.180228.2/exadataLogger.pm
inflating: dbserver_patch_5.180228.2/patch_bug_26678971
inflating: dbserver_patch_5.180228.2/dcli
inflating: dbserver_patch_5.180228.2/patchReport.py
extracting: dbserver_patch_5.180228.2/dbnodeupdate.zip
creating: dbserver_patch_5.180228.2/plugins/
inflating: dbserver_patch_5.180228.2/plugins/010-check_17854520.sh
inflating: dbserver_patch_5.180228.2/plugins/020-check_22468216.sh
inflating: dbserver_patch_5.180228.2/plugins/040-check_22896791.sh
inflating: dbserver_patch_5.180228.2/plugins/000-check_dummy_bash
inflating: dbserver_patch_5.180228.2/plugins/050-check_22651315.sh
inflating: dbserver_patch_5.180228.2/plugins/005-check_22909764.sh
inflating: dbserver_patch_5.180228.2/plugins/000-check_dummy_perl
inflating: dbserver_patch_5.180228.2/plugins/030-check_24625612.sh
inflating: dbserver_patch_5.180228.2/patchmgr_functions
inflating: dbserver_patch_5.180228.2/exadata.img.hw
inflating: dbserver_patch_5.180228.2/libxcp.so.1
inflating: dbserver_patch_5.180228.2/imageLogger
inflating: dbserver_patch_5.180228.2/ExaXMLNode.pm
inflating: dbserver_patch_5.180228.2/fwverify
- Create the dbs_group file that contains the list of compute nodes to update. Include the nodes listed by the olsnodes command in step 1, except for the driving system node. In this example, dbs_group should include only node2:

[root@node1 patch]# cd /root/patch/dbserver_patch_5.180228.2
[root@node1 dbserver_patch_5.180228.2]# cat dbs_group
node2
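As an alternative to creating the file by hand, the following one-liner is a sketch of how you might generate dbs_group from the cluster node list, excluding the local driving node. It assumes the grid user and olsnodes output shown in step 1:

[root@node1 dbserver_patch_5.180228.2]# su - grid -c olsnodes | grep -v $(hostname -s) > dbs_group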
3. Run the patchmgr precheck.

You must run the precheck operation with the -nomodify_at_prereq option to prevent any changes to the system that could impact the backup you take in the next step. Otherwise, the backup might not be able to roll back the system to its original state, should that be necessary.

patchmgr -dbnodes dbs_group -precheck -yum_repo <yum_repository> -target_version <target_version> -nomodify_at_prereq
The output should look like the following example:
[root@node1 dbserver_patch_5.180228.2]# ./patchmgr -dbnodes dbs_group -precheck -yum_repo http://yum-phx.oracle.com/repo/EngineeredSystems/exadata/dbserver/18.1.4.0.0/base/x86_64 -target_version 18.1.4.0.0.180125.3 -nomodify_at_prereq
************************************************************************************************************
NOTE patchmgr release: 5.180228 (always check MOS 1553103.1 for the latest release of dbserver.patch.zip)
WARNING Do not interrupt the patchmgr session.
WARNING Do not resize the screen. It may disturb the screen layout.
WARNING Do not reboot database nodes during update or rollback.
WARNING Do not open logfiles in write mode and do not try to alter them.
************************************************************************************************************
2018-02-28 21:22:45 +0000 :Working: DO: Initiate precheck on 1 node(s)
2018-02-28 21:24:57 +0000 :Working: DO: Check free space and verify SSH equivalence for the root user to node2
2018-02-28 21:26:15 +0000 :SUCCESS: DONE: Check free space and verify SSH equivalence for the root user to node2
2018-02-28 21:26:47 +0000 :Working: DO: dbnodeupdate.sh running a precheck on node(s).
2018-02-28 21:28:23 +0000 :SUCCESS: DONE: Initiate precheck on node(s).
4. Back up the current system. This is the proper stage to take the backup, before any modifications are made to the system.

patchmgr -dbnodes dbs_group -backup -yum_repo <yum_repository> -target_version <target_version> -allow_active_network_mounts
The output should look like the following example:
[root@node1 dbserver_patch_5.180228.2]# ./patchmgr -dbnodes dbs_group -backup -yum_repo http://yum-phx.oracle.com/repo/EngineeredSystems/exadata/dbserver/18.1.4.0.0/base/x86_64 -target_version 18.1.4.0.0.180125.3 -allow_active_network_mounts
************************************************************************************************************
NOTE patchmgr release: 5.180228 (always check MOS 1553103.1 for the latest release of dbserver.patch.zip)
WARNING Do not interrupt the patchmgr session.
WARNING Do not resize the screen. It may disturb the screen layout.
WARNING Do not reboot database nodes during update or rollback.
WARNING Do not open logfiles in write mode and do not try to alter them.
************************************************************************************************************
2018-02-28 21:29:00 +0000 :Working: DO: Initiate backup on 1 node(s).
2018-02-28 21:29:00 +0000 :Working: DO: Initiate backup on node(s)
2018-02-28 21:29:01 +0000 :Working: DO: Check free space and verify SSH equivalence for the root user to node2
2018-02-28 21:30:18 +0000 :SUCCESS: DONE: Check free space and verify SSH equivalence for the root user to node2
2018-02-28 21:30:51 +0000 :Working: DO: dbnodeupdate.sh running a backup on node(s).
2018-02-28 21:35:50 +0000 :SUCCESS: DONE: Initiate backup on node(s).
2018-02-28 21:35:50 +0000 :SUCCESS: DONE: Initiate backup on 1 node(s).
5. Remove all custom RPMs from the target compute nodes that will be updated. Custom RPMs are reported in the precheck results. They include RPMs that were manually installed after the system was provisioned. A sketch of how you might review and remove them follows these notes.

- If you are updating the system from version 12.1.2.3.4.170111, and the precheck results include krb5-workstation-1.10.3-57.el6.x86_64, remove it. (This item is considered a custom RPM for this version.)
- Do not remove exadata-sun-vm-computenode-exact or oracle-ofed-release-guest. These two RPMs are handled automatically during the update process.
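The following is a minimal sketch of how you might review and remove a custom RPM flagged by the precheck. The krb5-workstation package here is just the example from the note above; substitute the packages reported for your system:

[root@node2 ~]# rpm -qa --last | head -20
[root@node2 ~]# yum remove -y krb5-workstation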
6. Run the nohup command to perform the update:

nohup patchmgr -dbnodes dbs_group -upgrade -nobackup -yum_repo <yum_repository> -target_version <target_version> -allow_active_network_mounts &
The output should look like the following example:
[root@node1 dbserver_patch_5.180228.2]# nohup ./patchmgr -dbnodes dbs_group -upgrade -nobackup -yum_repo http://yum-phx.oracle.com/repo/EngineeredSystems/exadata/dbserver/18.1.4.0.0/base/x86_64 -target_version 18.1.4.0.0.180125.3 -allow_active_network_mounts &
************************************************************************************************************
NOTE patchmgr release: 5.180228 (always check MOS 1553103.1 for the latest release of dbserver.patch.zip)
NOTE Database nodes will reboot during the update process.
WARNING Do not interrupt the patchmgr session.
WARNING Do not resize the screen. It may disturb the screen layout.
WARNING Do not reboot database nodes during update or rollback.
WARNING Do not open logfiles in write mode and do not try to alter them.
*********************************************************************************************************
2018-02-28 21:36:26 +0000 :Working: DO: Initiate prepare steps on node(s).
2018-02-28 21:36:26 +0000 :Working: DO: Check free space and verify SSH equivalence for the root user to node2
2018-02-28 21:37:44 +0000 :SUCCESS: DONE: Check free space and verify SSH equivalence for the root user to node2
2018-02-28 21:38:43 +0000 :SUCCESS: DONE: Initiate prepare steps on node(s).
2018-02-28 21:38:43 +0000 :Working: DO: Initiate update on 1 node(s).
2018-02-28 21:38:43 +0000 :Working: DO: Initiate update on node(s)
2018-02-28 21:38:49 +0000 :Working: DO: Get information about any required OS upgrades from node(s).
2018-02-28 21:38:59 +0000 :SUCCESS: DONE: Get information about any required OS upgrades from node(s).
2018-02-28 21:38:59 +0000 :Working: DO: dbnodeupdate.sh running an update step on all nodes.
2018-02-28 21:48:41 +0000 :INFO : node2 is ready to reboot.
2018-02-28 21:48:41 +0000 :SUCCESS: DONE: dbnodeupdate.sh running an update step on all nodes.
2018-02-28 21:48:41 +0000 :Working: DO: Initiate reboot on node(s)
2018-02-28 21:48:57 +0000 :SUCCESS: DONE: Initiate reboot on node(s)
2018-02-28 21:48:57 +0000 :Working: DO: Waiting to ensure node2 is down before reboot.
2018-02-28 21:56:18 +0000 :Working: DO: Initiate prepare steps on node(s).
2018-02-28 21:56:19 +0000 :Working: DO: Check free space and verify SSH equivalence for the root user to node2
2018-02-28 21:57:37 +0000 :SUCCESS: DONE: Check free space and verify SSH equivalence for the root user to node2
2018-02-28 21:57:42 +0000 :SEEMS ALREADY UP TO DATE: node2
2018-02-28 21:57:43 +0000 :SUCCESS: DONE: Initiate update on node(s)
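Because the update runs in the background under nohup, its console output also goes to nohup.out in the working directory. To follow progress, an optional convenience rather than part of the procedure:

[root@node1 dbserver_patch_5.180228.2]# tail -f nohup.out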
7. After the update operation completes, verify the version of the kernel on the compute node that was updated:

[root@node2 ~]# imageinfo -ver
18.1.4.0.0.180125.3
8. If the driving system is a compute node that needs to be updated (as in this example), repeat steps 2 through 7 of this procedure using an updated compute node as the driving system to update the remaining compute node. In this example update, you would use node2 to update node1.
9. On each compute node, run the uptrack-install command as root to install the available Ksplice updates:

uptrack-install --all -y
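To confirm the effective kernel version after the Ksplice updates are applied, you can use the uptrack-uname command, which reports the version the running kernel has been patched up to (unlike uname, which shows the booted kernel):

[root@node1 ~]# uptrack-uname -r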
Installing Additional Operating System Packages

Review these guidelines before you install additional operating system packages for Oracle Exadata Database Service on Dedicated Infrastructure.

You are permitted to install and update operating system packages on Oracle Exadata Database Service on Dedicated Infrastructure as long as you do not modify the kernel or InfiniBand-specific packages. However, Oracle technical support, including installation, testing, certification, and error resolution, does not apply to any non-Oracle software that you install.
Also be aware that if you add or update packages separately from an Oracle Exadata software update, those additions or updates can introduce problems when you apply an Oracle Exadata software update. Problems can occur because additional software packages add new dependencies that can interrupt an Oracle Exadata update. For this reason, Oracle recommends that you minimize customization.

If you install additional packages, then Oracle recommends that you have scripts to automate their removal and reinstallation. After an Oracle Exadata update, if you reinstall additional packages, verify that they are still compatible and that you still need them.
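As one approach, the following is a minimal sketch of such a script. The package list file and its location are assumptions, so adapt them to however you track your customizations:

#!/bin/bash
# custom_rpms.sh -- sketch: remove or reinstall the custom packages listed,
# one package name per line, in /root/custom_rpms.txt (assumed location).
LIST=/root/custom_rpms.txt
case "$1" in
  remove)
    # Remove the custom packages before applying an Exadata update
    xargs -r yum remove -y < "$LIST"
    ;;
  reinstall)
    # Reinstall the custom packages after the update completes
    xargs -r yum install -y < "$LIST"
    ;;
  *)
    echo "Usage: $0 {remove|reinstall}" >&2
    exit 1
    ;;
esac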
For more information, refer to the Oracle Exadata Database Machine Maintenance Guide.
Cloud-specific tooling is used on the Exadata Cloud Infrastructure guest VMs for local operations, including dbaascli commands. The cloud tooling is automatically updated by Oracle when new releases are made available. If needed, you can follow the steps in Updating Cloud Tooling Using dbaascli to ensure that you have the latest version of the cloud tooling on all virtual machines in the VM cluster.
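At the time of writing, the referenced topic describes updating the tooling with a command along the following lines; treat this as an assumption and verify the exact syntax in Updating Cloud Tooling Using dbaascli before running it:

# Run as the root user on a VM in the cluster (verify syntax against the referenced topic)
dbaascli admin updateStack --version LATEST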