Friday, January 28, 2022

Journey from the Oracle Database to SingleStore: A Database Odyssey

Hello, it has been a while since I shared a blog article. I just started a new chapter in my career with SingleStore, and I wrote about my journey from working with the Oracle database and Exadata to now working with the SingleStore database platform. Please click the link below to read all about it on SingleStore's blog site! 😀

https://www.singlestore.com/blog/journey-from-the-oracle-database-to-singlestore-a-database-odyssey/

Tuesday, July 23, 2019

How To Check The Active Enabled Physical Cores on an Exadata Machine


Here is how to check the active enabled physical cores on an Exadata machine; the link to the Oracle documentation is below as well. The document also explains how to change the number of enabled cores, a useful feature for Capacity on Demand (COD), which is available on Exadata X4 and newer systems. COD is useful when you need to increase or decrease the number of active cores on the database compute nodes. It is important to know that changing the number of active cores on your Exadata database servers affects the software licensing cost. In addition, note that on Oracle Exadata Database Machine X4-2 and newer systems, the number of active cores on the database servers can be reduced only during installation.

https://docs.oracle.com/en/engineered-systems/exadata-database-machine/dbmmn/maintaining-exadata-database-servers.html#GUID-6177B070-EF7C-4858-869D-E82C5F8293C0
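
For reference, the linked document performs the core count change through the DBMCLI pendingCoreCount attribute. A minimal sketch is below; the value 20 is purely illustrative, and the change takes effect only after the database server is rebooted, so verify the exact syntax against the documentation for your Exadata version.

DBMCLI> ALTER DBSERVER pendingCoreCount = 20
DBMCLI> LIST DBSERVER attributes coreCount, pendingCoreCount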

In the table below, for the most recent Exadata machines (X7 and X8), the minimum and maximum cores per server are shown along with the eligible system configurations. Core increments are in multiples of 2.

Oracle Exadata Database Machine | Eligible Systems                      | Minimum Cores per Server | Maximum Cores per Server | Core Increments
X7-2 and X8-2                   | Any configuration except Eighth Rack  | 14                       | 48                       | From 14 to 48, in multiples of 2: 14, 16, 18, ..., 46, 48
X7-2 and X8-2                   | Eighth Rack                           | 8                        | 24                       | From 8 to 24, in multiples of 2: 8, 10, 12, ..., 22, 24


You can verify the number of active physical cores using the following command:

DBMCLI> LIST DBSERVER attributes coreCount

The example below is from an Exadata X7-2 with 3 database compute nodes, and shows that only 16 of the total 48 physical cores are enabled on each node.

[root@prod_node1 ~]# dbmcli
DBMCLI: Release  - Production on Wed Jul 24 00:06:08 GMT-00:00 2019 Copyright (c) 2007, 2016, Oracle and/or its affiliates. All rights reserved. 
DBMCLI> LIST DBSERVER attributes coreCount         
16/48
[root@prod_node2 ~]# dbmcli
DBMCLI: Release  - Production on Wed Jul 24 00:32:34 GMT 2019 Copyright (c) 2007, 2016, Oracle and/or its affiliates. All rights reserved.
DBMCLI> LIST DBSERVER attributes coreCount         
16/48                          
[root@prod_node3 ~]# dbmcli
DBMCLI: Release  - Production on Wed Jul 24 00:35:20 GMT 2019 Copyright (c) 2007, 2016, Oracle and/or its affiliates. All rights reserved.
DBMCLI> LIST DBSERVER attributes coreCount         
16/48
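
As a side note, instead of logging in to each compute node separately, you could gather the same attribute from all database servers in one shot with the dcli utility; a small sketch, assuming a dbs_group file that lists your database server hostnames:

$ dcli -g dbs_group -l root "dbmcli -e list dbserver attributes coreCount"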


On another note, with this example, if you were to check the CPU_COUNT database parameter, it would report two times the number of currently active cores: 2 x 16 = 32. We multiply by two because each Intel core is capable of hyper-threading, which means one physical core presents two logical threads. So for an Exadata compute node with 16 active cores, the CPU_COUNT parameter shows a value of 32, since this parameter reports the total number of active threads.

SQL> show parameter cpu_count

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
cpu_count                            integer     32
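
You can also cross-check both numbers from inside the database: V$OSSTAT reports the physical core count and the logical CPU (thread) count as seen by the OS. A small sketch; in the 16-core example above, NUM_CPU_CORES should report 16 and NUM_CPUS should report 32:

SQL> select stat_name, value from v$osstat where stat_name in ('NUM_CPUS', 'NUM_CPU_CORES');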

Friday, July 20, 2018

Easy way to verify Oracle Database initialization parameters!

I wanted to share an easy way to verify the values of certain Oracle Database initialization parameters, on-premises or in the Cloud, across a 2-node RAC instance. I tested this script on a 12.2 multitenant Oracle database; it uses DECODE to check each value and otherwise prints a message that the parameter is not set. Once you run the script, you can run a simple command from the Unix prompt to see which parameters are not set correctly and then take the appropriate action to adjust them. I also verify a database property and numeric values, such as parallel_max_servers. This method should help you run a simple script to crosscheck your standard initialization parameters.

spool /tmp/verify_params.out

-- parallel_force_local FALSE
select decode(( select count(*) from gv$parameter where inst_id=1 and value='FALSE' and name='parallel_force_local' ),1, 'YES', 'NO parallel_force_local is not set to FALSE') from dual;
select decode(( select count(*) from gv$parameter where inst_id=2 and value='FALSE' and name='parallel_force_local' ),1, 'YES', 'NO parallel_force_local is not set to FALSE') from dual;

-- *._fix_control='14033181:0'
select decode(( select count(*) from gv$parameter where inst_id=1 and value='14033181:0' and name='_fix_control' ),1, 'YES', 'NO _fix_control is not set to 14033181:0') from dual;
select decode(( select count(*) from gv$parameter where inst_id=2 and value='14033181:0' and name='_fix_control' ),1, 'YES', 'NO _fix_control is not set to 14033181:0') from dual;

-- *._gby_hash_aggregation_enabled=FALSE
select decode(( select count(*) from gv$parameter where inst_id=1  and value='FALSE' and name='_gby_hash_aggregation_enabled' ),1, 'YES', 'NO _gby_hash_aggregation_enabled is not set to FALSE') from dual;
select decode(( select count(*) from gv$parameter where inst_id=2  and value='FALSE' and name='_gby_hash_aggregation_enabled' ),1, 'YES', 'NO _gby_hash_aggregation_enabled is not set to FALSE') from dual;

-- *._gc_trace_freelist_empty=FALSE
select decode(( select count(*) from gv$parameter where inst_id=1  and value='FALSE' and name='_gc_trace_freelist_empty' ),1, 'YES', 'NO _gc_trace_freelist_empty is not set to FALSE') from dual;
select decode(( select count(*) from gv$parameter where inst_id=2  and value='FALSE' and name='_gc_trace_freelist_empty' ),1, 'YES', 'NO _gc_trace_freelist_empty is not set to FALSE') from dual;

-- *._ignore_desc_in_index=TRUE
select decode(( select count(*) from gv$parameter where inst_id=1  and value='TRUE' and name='_ignore_desc_in_index' ),1, 'YES', '_ignore_desc_in_index is not set to TRUE') from dual;
select decode(( select count(*) from gv$parameter where inst_id=2  and value='TRUE' and name='_ignore_desc_in_index' ),1, 'YES', '_ignore_desc_in_index is not set to TRUE') from dual;

-- *._optimizer_skip_scan_enabled=TRUE
select decode(( select count(*) from gv$parameter where inst_id=1  and value='TRUE' and name='_optimizer_skip_scan_enabled' ),1, 'YES', '_optimizer_skip_scan_enabled is not set to TRUE') from dual;
select decode(( select count(*) from gv$parameter where inst_id=2  and value='TRUE' and name='_optimizer_skip_scan_enabled' ),1, 'YES', '_optimizer_skip_scan_enabled is not set to TRUE') from dual;

-- *._unnest_subquery=FALSE
select decode(( select count(*) from gv$parameter where inst_id=1  and value='FALSE' and name='_unnest_subquery' ),1, 'YES', '_unnest_subquery is not set to FALSE') from dual;
select decode(( select count(*) from gv$parameter where inst_id=2  and value='FALSE' and name='_unnest_subquery' ),1, 'YES', '_unnest_subquery is not set to FALSE') from dual;

-- *.audit_trail='DB'
select decode(( select count(*) from gv$parameter where inst_id=1  and value='DB' and name='audit_trail' ),1, 'YES', 'audit_trail is not set to DB') from dual;
select decode(( select count(*) from gv$parameter where inst_id=2  and value='DB' and name='audit_trail' ),1, 'YES', 'audit_trail is not set to DB') from dual;

-- *.compatible='12.2.0'
select decode(( select count(*) from gv$parameter where inst_id=1  and value='12.2.0' and name='compatible' ),1, 'YES', 'compatible is not set to 12.2.0') from dual;
select decode(( select count(*) from gv$parameter where inst_id=2  and value='12.2.0' and name='compatible' ),1, 'YES', 'compatible is not set to 12.2.0') from dual;

-- *.db_files=4000
select decode(( select count(*) from gv$parameter where inst_id=1  and to_number(value) >= 4000 and name='db_files' ),1, 'YES', 'db_files is not set to 4000 or greater') from dual;
select decode(( select count(*) from gv$parameter where inst_id=2  and to_number(value) >= 4000 and name='db_files' ),1, 'YES', 'db_files is not set to 4000 or greater') from dual;

-- *.diagnostic_dest='/u02/app/oracle'
select decode(( select count(*) from gv$parameter where inst_id=1  and value = '/u02/app/oracle' and name='diagnostic_dest' ),1, 'YES', 'diagnostic_dest is not set to /u02/app/oracle') from dual;
select decode(( select count(*) from gv$parameter where inst_id=2  and value = '/u02/app/oracle' and name='diagnostic_dest' ),1, 'YES', 'diagnostic_dest is not set to /u02/app/oracle') from dual;

-- *.log_archive_format='%t_%s_%r.arc'
select decode(( select count(*) from gv$parameter where inst_id=1  and value = '%t_%s_%r.arc' and name='log_archive_format' ),1, 'YES', 'log_archive_format is not set to %t_%s_%r.arc ') from dual;
select decode(( select count(*) from gv$parameter where inst_id=2  and value = '%t_%s_%r.arc' and name='log_archive_format' ),1, 'YES', 'log_archive_format is not set to %t_%s_%r.arc ') from dual;


-- *.max_dump_file_size='50K'
select decode(( select count(*) from gv$parameter where inst_id=1  and value = '50K' and name='max_dump_file_size' ),1, 'YES', 'max_dump_file_size is not set to 50K') from dual;
select decode(( select count(*) from gv$parameter where inst_id=2  and value = '50K' and name='max_dump_file_size' ),1, 'YES', 'max_dump_file_size is not set to 50K') from dual;

-- *.nls_length_semantics='CHAR'
select decode(( select count(*) from gv$parameter where inst_id=1  and value = 'CHAR' and name='nls_length_semantics' ),1, 'YES', 'nls_length_semantics is not set to CHAR') from dual;
select decode(( select count(*) from gv$parameter where inst_id=2  and value = 'CHAR' and name='nls_length_semantics' ),1, 'YES', 'nls_length_semantics is not set to CHAR') from dual;

-- *.open_cursors=10000
select decode(( select count(*) from gv$parameter where inst_id=1  and to_number(value) >= 10000 and name='open_cursors' ),1, 'YES', 'open_cursors is not set to 10000 or higher') from dual;
select decode(( select count(*) from gv$parameter where inst_id=2  and to_number(value) >= 10000 and name='open_cursors' ),1, 'YES', 'open_cursors is not set to 10000 or higher') from dual;

-- *.processes=4000
select decode(( select count(*) from gv$parameter where inst_id=1  and to_number(value) >= 4000 and name='processes' ),1, 'YES', 'processes is not set to 4000 or higher') from dual;
select decode(( select count(*) from gv$parameter where inst_id=2  and to_number(value) >= 4000 and name='processes' ),1, 'YES', 'processes is not set to 4000 or higher') from dual;

-- *.parallel_max_servers should be 6 per pdb
select decode(( select count(*) from gv$parameter where inst_id=1  and to_number(value) >= (select 6 * (count(*) - 1) from v$pdbs) and name='parallel_max_servers' ),1, 'YES', 'parallel_max_servers needs to be 6 per pdb') from dual;
select decode(( select count(*) from gv$parameter where inst_id=2  and to_number(value) >= (select 6 * (count(*) - 1) from v$pdbs) and name='parallel_max_servers' ),1, 'YES', 'parallel_max_servers needs to be 6 per pdb') from dual;

-- *.temp_undo_enabled=TRUE
select decode(( select count(*) from gv$parameter where inst_id=1 and value = 'TRUE' and name='temp_undo_enabled' ),1, 'YES', 'temp_undo_enabled is not set to TRUE') from dual;
select decode(( select count(*) from gv$parameter where inst_id=2 and value = 'TRUE' and name='temp_undo_enabled' ),1, 'YES', 'temp_undo_enabled is not set to TRUE') from dual;

-- *.undo_retention=10800
select decode(( select count(*) from gv$parameter where inst_id=1 and to_number(value) >= 10800 and name='undo_retention' ),1, 'YES', 'undo_retention is not 10800 or greater') from dual;
select decode(( select count(*) from gv$parameter where inst_id=2 and to_number(value) >= 10800 and name='undo_retention' ),1, 'YES', 'undo_retention is not 10800 or greater') from dual;

-- LOCAL_UNDO_ENABLED
select decode((SELECT count(*) FROM   database_properties WHERE  property_name = 'LOCAL_UNDO_ENABLED' and PROPERTY_VALUE='TRUE'), 1, 'YES','LOCAL_UNDO_ENABLED property is not TRUE') from dual;

exit

Example run of verifying the output and what is not set correctly. In this case we have 3 parameters that require attention.

$ grep -i not /tmp/verify_params.out|sort -u
max_dump_file_size is not set to 50K                                                                                                                             
NO _fix_control is not set to 14033181:0                                                                                                                         
undo_retention is not 10800 or greater                                                
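
As an aside, if the list of checks keeps growing, a more compact variant is to drive every comparison from a single inline list rather than one DECODE pair per parameter per instance. Below is a minimal sketch of the idea; the three parameter names and expected values are just a sample to show the shape, so substitute your own standards. The same grep for "not" still flags the failures.

-- Inline list of expected settings; extend with one row per parameter.
with expected (name, expected_value) as (
  select 'parallel_force_local', 'FALSE' from dual union all
  select 'audit_trail',          'DB'    from dual union all
  select 'nls_length_semantics', 'CHAR'  from dual
)
select e.name,
       p.inst_id,
       -- a missing parameter also falls into the 'NO' branch since p.value is null
       case when p.value = e.expected_value then 'YES'
            else 'NO ' || e.name || ' is not set to ' || e.expected_value
       end as status
  from expected e
  left join gv$parameter p on p.name = e.name
 order by e.name, p.inst_id;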

Friday, February 23, 2018

Oracle Data Pump Presentation online now

Thank you to the Dallas Oracle Users Group for letting me present recently on my "Optimize your Database Import" topic.

I covered the following topics in my presentation -
  • Data Pump Overview
  • 12c (12.1 & 12.2) New Features
  • Data guard & Data pump working together
  • Customer case study - Optimizing Data pump import

You may view my presentation at the following link.

https://www.slideshare.net/NabilNawaz/optimizing-your-database-import



Wednesday, February 21, 2018

Presenting at the Dallas Oracle Users Group on Feb 22, 2018!

Tomorrow (Thursday, 2/22/18 from 5 – 7:15 pm) I will be presenting at the Dallas Oracle Users Group (DOUG). My topic is "Optimize your Database Import". I am happy to be presenting at the Dallas OUG once again, thank you! I also want to thank BIAS Corporation for sponsoring the event as well.


Presentation Overview

“Optimize your Database Import” by Nabil Nawaz, BIAS Corporation.

Oracle Data Pump is an excellent tool for cloning databases and schemas, and it is widely used today among DBAs and developers to transfer data and structure between databases. Please come and learn about new Data Pump features in Oracle version 12.2. We will also share a case study of a large multi-terabyte database, where a Data Pump import process that originally ran for more than a day was tuned to run in just about 4-6 hours, a nearly 90% performance improvement. The tips that will be shared will be of great value and help ensure you have a well-tuned import process with Data Pump.

Refreshments sponsored by BIAS Corporation.


To attend, RSVP here.

Contact dougadministrative@gmail.com if your plans change, so we can order the right amount of food.

Location/Directions
University of Dallas, Gorman Lecture Center, Room C
1845 East Northgate Drive
Irving, Texas

*Park near the bell tower.
Directions to the campus: http://udallas.edu/visitors/directions.php

Campus map: http://udallas.edu/visitors/documents/CampusMap_11-16-16.pdf

Presenter Bio

Nabil Nawaz started his career with Oracle in 1997 and is currently a Technical Manager at BIAS Corporation. He has 20 years of experience working as an Oracle DBA & Solution Architect, starting with version 7.1.6, and is Cloud IaaS, OCP and Exadata certified as well as an Oracle ACE Associate.

He is a contributing author of the recent book Oracle Exadata Expert's Handbook. His background with Oracle is quite vast: he has had the opportunity to work as a consultant at many large Fortune 500 companies, focusing on architecting highly available solutions using RAC & Data Guard, and currently works on architecting Oracle Cloud and Engineered Systems such as Exadata, SuperCluster, ODAs and virtualization technologies.

He can be followed at his blog, http://nnawaz.blogspot.com/ and on Twitter @Nabil_Nawaz
Visit DOUG at http://doug.org/ or join our LinkedIn Group.

Thursday, November 23, 2017

Copy files in Parallel to the Cloud and Beyond!

As an IT professional you may often be tasked with copying multiple files from one server to another. A simple and straightforward way is to issue a single sftp or scp command as follows:

$ scp filename*.dmp remotehost:/Backups/

For the most part that will work fine. But if you have a limited network pipe between your source and target servers and multiple large files, then this serial, single-threaded method may not be the optimal solution, especially if time is of the essence, because you end up waiting for each file to be fully copied before the next file starts.

I have come up with a cool method I have been using for many years to copy a list of multiple files in parallel from your source server to the target server. Please note this method can be used for just about anything, such as multiple export dump files, RMAN backup files, log files and so on. I would highly recommend this method when copying files to the Cloud or any remote host.

Hope these steps help you save time copying your files to a remote server, whether it is in your own data center, a remote data center or even in the Cloud!

1. First create a file that will contain a list of all of the files to be copied.


$ ls -1 /Backup/*dmp > copy_dump_files   

2. Review the contents of the file.

$ cat copy_dump_files


/Backup/export_01.dmp
/Backup/export_02.dmp
/Backup/export_03.dmp
/Backup/export_04.dmp
/Backup/export_05.dmp
/Backup/export_06.dmp
/Backup/export_07.dmp
/Backup/export_08.dmp
/Backup/export_09.dmp
/Backup/export_10.dmp
/Backup/export_11.dmp
/Backup/export_12.dmp
/Backup/export_13.dmp
/Backup/export_14.dmp
/Backup/export_15.dmp
/Backup/export_16.dmp
/Backup/export_17.dmp
/Backup/export_18.dmp
/Backup/export_19.dmp
/Backup/export_20.dmp
/Backup/export_21.dmp
/Backup/export_22.dmp


3. Edit the copy_dump_files file: add the string "scp" at the beginning of each line and the string "remotehost:/Backups/" at the end of each line, shown as follows (a sed sketch to make these edits automatically appears after the example):

scp /Backup/export_01.dmp remotehost:/Backups/
...
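
If the list is long, you could make these edits with sed instead of by hand; a small sketch, assuming GNU sed with in-place editing (-i) and that remotehost:/Backups/ is your target:

$ sed -i 's|^|scp |; s|$| remotehost:/Backups/|' copy_dump_files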


4. Next we split the copy_dump_files file into smaller pieces, with only 2 lines per resulting file. Note you could change the number 2 to anything; the higher the number, the more copy commands each split file will contain.

$ split -l 2 copy_dump_files copy_dump_files_1

5. Note that each resulting file contains 2 lines, as specified with the split command in the previous step.

$ wc -l copy_dump_files_1*
   2 copy_dump_files_1aa
   2 copy_dump_files_1ab
   2 copy_dump_files_1ac
   2 copy_dump_files_1ad
   2 copy_dump_files_1ae
   2 copy_dump_files_1af
   2 copy_dump_files_1ag
   2 copy_dump_files_1ah
   2 copy_dump_files_1ai
   2 copy_dump_files_1aj
   2 copy_dump_files_1ak
  22 total

6. Put the names of all of the copy files into a script as follows:

$ ls -1 copy_dump_files_1* > copy_script.sh


7. Set all of the copy files with executable permissions.

$ chmod 700 copy_*

8. Edit the copy_script.sh file and put the string "nohup ./" at the beginning of each line and the character "&" at the end of each line, which allows all 11 copy files to run simultaneously. The file contents should look as follows (again, a sed sketch appears after the example):



nohup ./copy_dump_files_1aa &
nohup ./copy_dump_files_1ab &
nohup ./copy_dump_files_1ac &
...
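
Again, sed can make this edit for you; note that the ampersand must be escaped in the replacement, since sed treats a bare & as the matched text:

$ sed -i 's|^|nohup ./|; s|$| \&|' copy_script.sh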

9. Now invoke the copy script with nohup in the background to kick off multiple scp copies to your remote host, as shown below; all copies will run in the background! Please also monitor the target location to ensure the file sizes are increasing.


$ nohup ./copy_script.sh &
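
As a closing aside, on systems with GNU xargs you could achieve the same parallel fan-out in a single line, without the split files or the generated script; a sketch, assuming copy_dump_files still contains the plain list of paths from step 1 and that 11 parallel streams are desired:

$ xargs -P 11 -I{} scp {} remotehost:/Backups/ < copy_dump_files

The split-file method above remains handy when you want to rerun an individual batch of files after a failure.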