DB · TECH

Migrating an Oracle 11g Database to a 12c Pluggable Database using Data Pump

1- Export DB:

as oradb:
if application is not down and in order to make consistent import get DB SCN when you export:

$ sqlplus / as sysdba
> select to_char(CURRENT_SCN) from v$database;

TO_CHAR(CURRENT_SCN)
--------------------
236040412

Of course, any changes that happen after this SCN will be lost, but using an SCN is enough for testing the migration. For a production database, consider shutting down the application or putting the database tablespaces in read-only mode before the export.
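For a production run, a minimal sketch of switching an application tablespace to read-only before the export and back to read/write afterwards (the tablespace name DATA is illustrative; repeat for each application tablespace):

$ sqlplus / as sysdba
> alter tablespace DATA read only;
-- run the export, then:
> alter tablespace DATA read write;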

expdp \'/ as sysdba\' VERSION=12 full=Y flashback_scn=236040412 dumpfile=DATA_PUMP_DIR:export_${ORACLE_SID}.dmp logfile=DATA_PUMP_DIR:export_${ORACLE_SID}.log

2- Copy the dump file to the target system (the steps below assume it is available as export.dmp in the directory pointed to by the DUMP_DIR directory object)

3- Generate a DDL SQL file from the exported dump file:

impdp PDBADMIN/<password>@<service name>.tns DIRECTORY=DUMP_DIR dumpfile=export.dmp sqlfile=sample_ddl.sql

4- Using the generated DDL file, create:

A- Tablespaces: change the datafile clause if you are migrating from file system datafiles to ASM, but keep the same creation options

for example:

CREATE TABLESPACE "DATA" DATAFILE '+DATAC1' SIZE <TABLESPACE SIZE>

B- Create Users owning Schemas:
Example:
CREATE USER "User1" IDENTIFIED BY VALUES '<Same value in SQL file>'
DEFAULT TABLESPACE "DATA"
TEMPORARY TABLESPACE "TEMP"
PROFILE "UNLIMITED";


5- Create parameter file:

$ cat import.par
DIRECTORY=DUMP_DIR
FULL=YES
DUMPFILE=export.dmp
LOGFILE=import.log

6- Run data import:
impdp PDBADMIN/<Password>@<service name>.tns parfile=import.par

7- Recompile All Invalid Objects

sqlplus "/ AS SYSDBA"
SQL> @$ORACLE_HOME/rdbms/admin/utlrp.sql

8- Check & fix compilation errors:

Check for warnings and errors. The system will typically complain about missing DB links or objects owned by SYS; go through them one by one and resolve them.
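For example, a quick way to list what is still invalid after utlrp.sql finishes:

SQL> select owner, object_type, object_name from dba_objects where status = 'INVALID' order by owner;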

TECH

Enable Virtual Serial Port (VSP) for HP Servers iLO Running Linux

HP Integrated Lights-Out 3 (iLO 3) consists of an intelligent processor and firmware that let you manage servers remotely. The iLO Virtual Serial Port (VSP) is one way of accessing a remote server through iLO: using it, you can operate as if a physical serial connection exists on the remote server's serial port.

Below are the steps required to configure VSP for an HP server running Red Hat Enterprise Linux 6.5:

1) Boot the server into RBSU >> System Options >> Serial Port Options >> Virtual Serial Port >> select COM 2
2) Now go to the "BIOS Serial Console and EMS" menu >> EMS Console >> select COM 2

3) Save the changes and exit RBSU.

4) In RHEL 6.x

a) Create an Upstart init configuration file for ttyS1

# vi /etc/init/ttyS1.conf
start on runlevel [S345]
stop on runlevel [016]

respawn
instance /dev/ttyS1
exec /sbin/agetty ttyS1 115200 vt100-nav

b) Check the init configuration and start running the agetty process

#  initctl list | grep ttyS1
ttyS1 stop/waiting

# initctl start ttyS1
ttyS1 (/dev/ttyS1) start/running, process 38394

c) Test that you have access to the system through the VSP

</>hpiLO-> vsp
 
      Virtual Serial Port Active: COM2
 
      Starting virtual serial port.
      Press 'ESC (' to return to the CLI Session.

 login:

d) Add the serial port to securetty to allow login as root. Note: this is needed only if you want the root account to be able to log in through this serial console.

# echo "ttyS1" >> /etc/securetty
e) Configure the GRUB config file (/boot/grub/grub.conf)

  title Red Hat Enterprise Linux (2.6.32-220.el6.x86_64)
        root (hd0,0)
        kernel /vmlinuz-2.6.32-220.el6.x86_64 ro root=UUID=0084ea5e-39e2-4994-aaa5-5abe8bf7eeb0 rd_NO_LUKS rd_NO_LVM LANG=en_US.UTF-8 rd_NO_MD SYSFONT=latarcyrheb-sun16 crashkernel=128M KEYBOARDTYPE=pc KEYTABLE=us rd_NO_DM console=tty0 console=ttyS1,115200
        initrd /initramfs-2.6.32-220.el6.x86_64.img
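After rebooting with the updated kernel line, a quick sanity check that the console arguments were picked up (the output should contain console=tty0 console=ttyS1,115200):

# cat /proc/cmdline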

If you need more information about the HP Virtual Serial Port, you can read the document below:

http://h10032.www1.hp.com/ctg/Manual/c00263709.pdf

I hope this is informative.

TECH · Unix/Linux

SSH Takes a Long Time on a Solaris System

When trying to SSH to a Solaris 11 machine, it takes longer than usual to get a prompt. The issue is usually related to DNS lookups over the network. Below are the details and how I fixed the issue.
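One quick way to confirm that reverse DNS is the culprit, before touching sshd_config, is to time a reverse lookup of the connecting client's address from the Solaris box (the address below is illustrative); if it hangs for several seconds, name resolution is the problem:

root@cms-cluster1:~# time nslookup 192.0.2.10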

root@cms-cluster1:~# uname -a
SunOS cms-cluster1 5.11 11.1 sun4v sparc sun4v
root@cms-cluster1:~#
root@cms-cluster1:~# ssh -V
Sun_SSH_2.2, SSH protocols 1.5/2.0, OpenSSL 0x100000bf
root@cms-cluster1:~#
root@cms-cluster1:~# echo "GSSAPIAuthentication no" >> /etc/ssh/sshd_config
root@cms-cluster1:~# echo "LookupClientHostnames no" >> /etc/ssh/sshd_config
root@cms-cluster1:~#
root@cms-cluster1:~# svcadm restart ssh
root@cms-cluster1:~#


Virtualization

VMware vCenter Custom Email Alert

VMware is the best virtualization technology so far. I have tried Oracle VM as well; it is nice but not as polished. VMware is a bit costly at the enterprise level, but it is feature rich, it makes life easier at configuration time, and you don't need to worry much about day-to-day operation.

A problem I faced: I needed to customize the email alerts sent by vCenter, but without resorting to VMware PowerCLI.

Here is the easy way:

- I have a Windows-based vCenter; open the file under the following path: \VMware\Infrastructure\VirtualCenter Server\locale\en\stask.vmsg

Be aware that if you have a localized installation, the directory under locale might be "fr", "de", etc.; in my case it was an English version of vCenter, so it was the en directory under locale.

I modified below lines:

###

Email.statefulAlarm.subject = "[vAlarm] {alarmName} changed from {oldStatus} to {newStatus}"

Email.statefulAlarm.body = "Target: {targetName}\nPrevious Status: {oldStatus}\nNew Status: {newStatus}\n\nAlarm Definition:\n{declaringSummary}\n\n{alarmValue}:\n {triggeringSummary}\n\nDescription:\n{eventDescription}"

###

Email.statefulEventAlarm.subject = "[vAlarm] {alarmName} {eventDescription}"

Email.statefulEventAlarm.body = "Target: {targetName}\nPrevious Status: {oldStatus}\nNew Status: {newStatus}\n\nAlarm Definition:\n{declaringSummary}\n\n{alarmValue}:\n {eventDescription}"

###

Email.statelessEventAlarm.subject = "[vAlarm] {alarmName} {eventDescription}"

Email.statelessEventAlarm.body = "Target: {targetName}\nStateless event alarm\n\nAlarm Definition:\n{declaringSummary}\n\n{alarmValue}:\n{eventDescription}"

###

So you can edit what goes inside the email body and subject.
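Note that the change typically takes effect only after the vCenter Server service is restarted; on a Windows-based vCenter, a minimal sketch from an elevated command prompt (the service name may differ between versions):

net stop vpxd
net start vpxd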

Virtualization

VMware HA Cluster with Virtual Storage Appliance as Shared Storage

1     Introduction

1.1   VMware Basics

VMware vSphere consists of VMware ESXi™, VMware vCenter Server™, and the vSphere Clients, which are the virtualization layer, management layer, and interface layer of vSphere, respectively.

1.1.1  Relationships Between the Component Layers of VMware vSphere

(Figure: the component layers of VMware vSphere)

1.1.1.1     Virtualization Layer

The virtualization layer of VMware vSphere includes infrastructure services and application services. Infrastructure services such as compute, storage, and network services abstract, aggregate, and allocate hardware or infrastructure resources.

Application services are the set of services provided to ensure availability, security, and scalability for applications. Examples include vSphere High Availability and Fault Tolerance.

1.1.1.2     Management Layer

VMware vCenter Server is the central point for configuring, provisioning, and managing virtualized IT environments.

1.1.1.3     Interface Layer

Users can access the VMware vSphere datacenter through GUI clients such as the vSphere Client or the vSphere Web Client. Additionally, users can access the datacenter through client machines that use command-line interfaces and SDKs for automated management.

1.1.2  VMware vSphere Components and Features

An introduction to the components and features of VMware vSphere helps you to understand the parts and how they interact. VMware vSphere includes the components and features described in the following sections.

2    VMware vSphere Storage Appliance

A VSA cluster provides a set of datastores that are accessible by all hosts within the VMware datacenter. You can create a VSA cluster with two or three VSA cluster members. The status of the VSA cluster is online only when more than half of the members are online, so in a two-member cluster you need to use the VSA cluster service as a quorum device.

A VSA cluster is a virtual alternative to expensive SAN systems. While SAN systems provide centralized arrays of storage over a high-speed network, a VSA cluster provides a distributed array that runs across several physical servers and utilizes local storage that is attached to each ESXi host.

vSphere Storage Appliance is a VMware virtual appliance that runs SUSE Linux Enterprise Server 11 SP2 and a set of storage clustering services that perform the following tasks:

  • Manage the storage capacity, performance, and data redundancy for the hard disks that are installed on the ESXi hosts
  • Expose the disks of a host over the network
  • Manage hardware and software failures within the VSA cluster
  • Manage the communication between all instances of vSphere Storage Appliance, and between each vSphere Storage Appliance and the VSA Manager

2.1   VSA Cluster Components

A VSA cluster requires the following vSphere and vSphere Storage Appliance components:

  • ESXi Hosts: two or three ESXi hosts, version 5.0 or later, all running the same ESXi version (existing VMs can be migrated)
  • vCenter Server: the Windows-based release is the only one supported for managing a VSA instance (one vCenter Server can manage multiple VSA instances)
  • vSphere Client: either the vSphere Web Client or the vSphere Client
  • VSA Manager: installed as a plugin on vCenter (Windows). After you install it, you can see the VSA Manager tab in the vSphere Client. You can use VSA Manager to monitor, maintain, and troubleshoot a VSA cluster.
  • VSA Cluster Service: used in a VSA cluster with two members to act as a third member in case one of the VSA cluster members fails. In such a case, the online status of two out of three members maintains the online status of the cluster. The service does not provide storage volumes for the VSA datastores[1]
  • Ethernet Switches: Gigabit Ethernet or 10 Gigabit Ethernet switches provide the high-speed network backbone of the VSA cluster.

2.2   VSA Cluster Service Considerations for Two Members Cluster Setup

A VSA cluster with two VSA cluster members uses an additional service called VSA cluster service. The service participates as a member in the VSA cluster, but it does not provide storage. For the VSA datastores to remain online, a VSA cluster requires that more than half of the members are also online. If one instance of a vSphere Storage Appliance fails, the VSA datastores can remain online only if the remaining VSA cluster member and the VSA cluster service are online. In a simple configuration, the VSA cluster service can run on the vCenter Server machine. However, note that the installation of VSA Manager on vCenter Server always installs the VSA Cluster Service, whether it is going to be used or not.

When you use a single vCenter Server instance to manage multiple remote VSA clusters in a more complex configuration, the VSA cluster service must always run on the same network as the two-member VSA cluster. Unfortunately, it is not possible to share one VSA cluster service between multiple VSA clusters, because the VSA cluster service actually acts as the third node of a two-node VSA cluster.

Only one vSphere Storage Appliance can run on an ESXi host at a time.

3.3   How a VSA Cluster Handles Failures

A VSA cluster provides automatic failover from hardware and software failures.

Each VSA datastore has two volumes. A VSA cluster member exports the main volume as the VSA datastore. Another VSA cluster member maintains the second volume as a replica. If a failure occurs to the hardware, network equipment, or the VSA cluster member of the main volume, the main volume becomes unavailable, and the replica volume takes its place without service interruption. After you fix the failure and bring the failed VSA cluster member back online, the member synchronizes the main volume with the replica to provide failover in case of further failures.

The following illustration depicts automatic failover in a VSA cluster with 2 members. The replica volume takes over the failed main volume. In this case, to make sure that more than half of the members are online, the VSA cluster service simulates a VSA cluster member.

A VSA cluster provides automatic failover from the following failures:

  • Failure of a single physical NIC or port, or a cable connecting the NIC port to its physical switch port
  • Single physical switch failure
  • Single physical host failure
  • Single VSA cluster member failure

4     vSphere Availability & HA Cluster

The vSphere vMotion and Storage vMotion[1] functionality in vSphere makes it possible for organizations to reduce planned downtime because workloads in a VMware environment can be dynamically moved to different physical servers or to different underlying storage without service interruption. Administrators can perform faster and completely transparent maintenance operations, without being forced to schedule inconvenient maintenance windows.

VMware vSphere HA protects application availability in the following ways:

  • It protects against a server failure by restarting the virtual machines on other hosts within the cluster.
  • It protects against application failure by continuously monitoring a virtual machine and resetting it in the event that a failure is detected.
  • You do not need to install special software within the application or virtual machine. All workloads are protected by vSphere HA.
  • After vSphere HA is configured, no actions are required to protect new virtual machines. They are automatically protected.


Unix/Linux

SSH without password Linux/Unix

Uncomment the following lines from the /usr/local/etc/ssh_config (or /etc/ssh/ssh_config) file:
RSAAuthentication yes
IdentityFile ~/.ssh/id_rsa

Now, let's assume ServerA and ServerB both run the SSH daemon. To allow ServerA to SSH to ServerB without a password,

please try the following:

# ssh-keygen -t rsa

This generates two files under ~/.ssh: the private key id_rsa and the public key id_rsa.pub

Now, the public key needs to be appended to the authorized_keys file on ServerB

# scp ~/.ssh/id_rsa.pub ServerB:~/.ssh/ServerA_rsa.pub
# ssh ServerB
# cd ~/.ssh && cat ServerA_rsa.pub >> authorized_keys
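SSH is strict about key file permissions, so on ServerB make sure the .ssh directory and authorized_keys file are not group or world writable, then test from ServerA; the last command should run without prompting for a password:

# chmod 700 ~/.ssh
# chmod 600 ~/.ssh/authorized_keys
# ssh ServerB hostname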

Unix/Linux

Send SNMP TRAP on Solaris 10

Send an SNMP trap on Solaris 10:

snmp_trapsend -e 1.3.6.1.4.1.11080.400.3 -a ".1.3.6.1.4.1.11080.400.3 STRING (Test trap)" -T4

[-h host]             (default = localhost)
[-c community]        (default = public)
[-e enterprise | -E enterprise_str]   (default = 1.3.6.1.4.1.42)
[-g generic#]         (range 0..6, default = 6)
[-s specific#]        (default = 1)
[-i ipaddr]           (default = localhost)
[-p trap_port]        (default = 162)
[-t timestamp]        (a time in unix-time format, default is uptime)
-a “object-id object-type ( object-value )”
[-T trace-level]      (range 0..4, default = 0)
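For example, to send the same test trap to a remote management station with a non-default community string (the host name and community below are illustrative):

snmp_trapsend -h nms01 -c mycommunity -e 1.3.6.1.4.1.11080.400.3 -a ".1.3.6.1.4.1.11080.400.3 STRING (Test trap)" -T4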