Yann Neuhaus

dbi services technical blog

No Fear of Container Technology (DOAG 2018)

Wed, 2018-11-28 01:46

I have been working in the IT industry for 30 years and have dealt with Oracle RDBMS systems time and again. For almost 4 years now, as a consultant at dbi services, I have been working intensively with Oracle databases, hence my visit to DOAG 2018.

With great interest I traveled to the DOAG in Nürnberg, planning to attend various sessions on OpenShift and containers.

Why OpenShift? For some time now we have been seeing projects (PoCs) in this area at our customers. Red Hat offers a complete solution that includes all components, which makes getting started considerably faster.

Interestingly, there is not only euphoria; there are also critical voices on this topic. It reminds me of the time when hardware virtualization emerged. Back then critical questions were asked as well: Will this work? It is far too complex! This technology is only suitable for service providers, cloud providers, etc.

Reason enough to listen to first-hand experience reports at the DOAG and attend talks on this topic.
 

So what is changing here?

After hardware virtualization, the next virtualization step follows: Docker (the use of containers).
The difference between hardware virtualization and Docker is best shown with a schematic diagram.

Schematic representation of hardware virtualization

[Figure: server virtualization]
 

The architectural difference between hardware virtualization and containers

[Figure: hardware virtualization vs. container architecture]

The biggest difference: with hardware virtualization each virtual server carries a complete operating system of its own. With the container architecture this part largely disappears, which makes the individual container significantly smaller and above all more portable. Fewer resources are needed on the infrastructure, or considerably more containers can be run on the same infrastructure.
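To make this difference tangible, here is a minimal sketch (the image name is just an illustrative example) of how a container is started; no guest operating system has to be installed or booted:

# Pull a small application image and start it within seconds
docker pull nginx:alpine
docker run -d --name web -p 8080:80 nginx:alpine
# The container shares the host kernel instead of booting its own OS
docker ps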
 

OpenShift, the Red Hat solution for Docker, has the following architecture:

[Figure: OpenShift architecture overview]
 

What can we expect from OpenShift, and what must we definitely keep in mind?

– The next step in virtualization -> containers
– A complex infrastructure; with Red Hat everything comes from a single source (incl. Kubernetes)
– The start into the container world must be very well prepared
– The technology is still very young; quite a lot will still change here
– If possible, run a PoC, and do not wait too long
– Concepts and processes are mandatory
 

My conclusion

My first visit to the DOAG provided me with very valuable information and insights on the two topics OpenShift and containers, in particular the Red Hat solution, a technology I will be working with in the near future. I am sure we are once again at a very interesting technology turning point: the start into container infrastructures with complete solutions such as OpenShift from Red Hat. Despite all the euphoria, however, the start into this technology should be planned and controlled; in particular, experience should be gathered in a PoC. The step of running an OpenShift infrastructure in a production environment must, based on that experience, be well planned and controlled in order not to run into the same problems as with hardware virtualization back then!

[Figure: chaotic container landscape]

Production operation requires security, stability and continuity, and all components should be kept up to date. Monitoring and backup/restore are also topics that have to be addressed before going live. This technology certainly enables more speed, but it needs rules and processes so that after a while the container world does not suddenly look like the picture above!


AWS re:invent 2018 – Day 1

Tue, 2018-11-27 10:20

Yesterday was my first day at the AWS re:Invent conference. The venue is quite impressive; the conference is split between 6 hotels where you can attend different types of sessions, including chalk talks, keynotes, hands-on labs or workshops. For my first day, I stayed in the same area, The Venetian, to make it easy.


The walking distance between some places is quite big, so it requires careful planning of the day to be able to see what you want to see. Fortunately there is a shuttle service, and I'll move a bit more between hotels tomorrow. You also need to reserve your seat and be there in advance to be sure to enter the room.

In my own case, I wanted to attend a chalk talk about Oracle licensing in the cloud to start the week. As I was not able to reserve a seat, I had to wait in the walk-up line. The session was full: Oracle still interests lots of people, and licensing is still a concern, besides performance, for lots of customers when they start planning to move to the public cloud.

I have been working with AWS services for a bit more than a year at a customer, but there is still a lot to learn and understand about AWS. That's why I also attended an introductory session about VPC (Virtual Private Cloud) to better understand the network options when going to AWS. To make it simple, a VPC allows you to have a private network configured as you wish inside AWS. You have control of the IP range you would like to use and you can configure the routing tables and so on.
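As an illustration, here is a minimal AWS CLI sketch of this control over IP ranges and routing (the CIDR blocks are arbitrary examples and the VPC id is a placeholder):

# Create a VPC with a private address range of your choice
aws ec2 create-vpc --cidr-block 10.0.0.0/16
# Carve a subnet out of that range
aws ec2 create-subnet --vpc-id vpc-0123456789abcdef0 --cidr-block 10.0.1.0/24
# Routing tables are under your control as well
aws ec2 create-route-table --vpc-id vpc-0123456789abcdef0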

I also tried to attend a workshop about running Oracle on Amazon RDS, the AWS managed database service, and especially about how to migrate from Oracle to the Amazon Aurora database using its PostgreSQL compatibility. The goal was to use 2 AWS products to run the migration: the AWS Schema Conversion Tool and the AWS Database Migration Service. Unfortunately, some issues with the WiFi constantly changing my IP, and a limitation on my brand-new AWS account that required additional checks from Amazon, prevented me from going to the end of the workshop. But I got some credits to try it by myself a bit later, so I'll most probably try the Schema Conversion Tool.

Some DBAs may worry about managed database services or announcements from Oracle about the autonomous database, but I agree with the slide below shown by the AWS speaker during the workshop. I personally think that DBAs won't disappear. Data itself and applications will still be around for quite a long time; the job may evolve, and we will spend more time on the application/data side than before.

[Slide: the DBA role in the cloud]

Today is another day, let’s forget a bit about the DBA part and try to see more about DevOps…


SQL Server 2019 CTP 2.1 – A replacement of DBCC PAGE command?

Tue, 2018-11-27 07:08

Did you ever use the famous DBCC PAGE command? Folks who are interested in digging further into SQL Server storage have already used it for a while. We also use it during our SQL Server performance workshop, by the way. But the usage of such a command may sometimes go beyond that, and it may be used in some troubleshooting scenarios. For instance, last week I had to investigate a locking contention scenario where I had to figure out which objects were involved, with their related pages (resource type) as the only way to identify them. SQL Server 2019 provides the sys.dm_db_page_info system function that can be useful in this kind of scenario.


To simulate locks, let's start by updating some rows in the dbo.bigTransactionHistory table as follows:

USE AdventureWorks_dbi;
GO

BEGIN TRAN;

UPDATE TOP (1000) dbo.bigTransactionHistory
SET Quantity = Quantity + 1

 

Now let's take a look at sys.dm_tran_locks to get a picture of the locks held by the above query:

SELECT 
	resource_type,
	COUNT(*) AS nb_locks
FROM 
	sys.dm_tran_locks AS tl
WHERE 
	tl.request_session_id = 52
GROUP BY
	resource_type

 

[Figure: lock counts by resource type]

Referring to my customer scenario, let's say I wanted to investigate the locks and objects involved. For the simplicity of the demo I focused only on the sys.dm_tran_locks DMV, but generally speaking you would probably add other ones such as sys.dm_exec_requests, sys.dm_exec_sessions etc.

SELECT 
	tl.resource_database_id,
	SUBSTRING(tl.resource_description, 0, CHARINDEX(':', tl.resource_description)) AS file_id,
	SUBSTRING(tl.resource_description, CHARINDEX(':', tl.resource_description) + 1, LEN(tl.resource_description)) AS page_id
FROM 
	sys.dm_tran_locks AS tl
WHERE 
	tl.request_session_id = 52
	AND tl.resource_type = 'PAGE'

 

[Figure: PAGE locks with file_id and page_id]

The sys.dm_tran_locks DMV contains the resource_description column that provides contextual information about the resource locked by my query. When resource_type is PAGE, the resource_description column therefore contains the [file_id:page_id] pair.

SQL Server 2019 will probably send the DBCC PAGE command back to the stone age for some tasks, but let's start with this old command as follows:

DBCC PAGE (5, 1, 403636, 3) WITH TABLERESULTS;

 

[Figure: DBCC PAGE output]

The DBCC PAGE command did the job and provides an output that includes the page header section where the Metadata: ObjectId is stored. We may then use it with the OBJECT_NAME() function to get the corresponding table name.

SELECT OBJECT_NAME(695673526)

 

[Figure: OBJECT_NAME() result]

But let's say that using this command may be slightly controversial, because it is still an undocumented command, and no need to explain here how dangerous it can be to use it in production. Honestly, I never encountered situations where DBCC PAGE was an issue, but I cannot provide a full guarantee and it is obviously at your own risk. In addition, applying DBCC PAGE to all rows returned by my previous query can be a little bit tricky, and this is where the new sys.dm_db_page_info comes into play.

;WITH tran_locks
AS
(
	SELECT 
		tl.resource_database_id,
		SUBSTRING(tl.resource_description, 0, CHARINDEX(':', tl.resource_description)) AS file_id,
		SUBSTRING(tl.resource_description, CHARINDEX(':', tl.resource_description) + 1, LEN(tl.resource_description)) AS page_id
	FROM 
		sys.dm_tran_locks AS tl
	WHERE 
		tl.request_session_id = 52
		AND tl.resource_type = 'PAGE'
)
SELECT 
	OBJECT_NAME(page_info.object_id) AS table_name,
	page_info.*
FROM 
	tran_locks AS t
CROSS APPLY 
	sys.dm_db_page_info(t.resource_database_id, t.file_id, t.page_id,DEFAULT) AS page_info

 

This system function provides plenty of information, mainly coming from the page header, in tabular format, and makes my previous requirement easier to address, as shown below.

[Figure: sys.dm_db_page_info output with table names]

The good news is that this function is officially documented, but un/fortunately (depending on your point of view) for deep-dive studies you will still have to rely on DBCC PAGE.

Happy troubleshooting!

 

 


Strange behavior when patching GI/ASM

Mon, 2018-11-26 12:45

I tried to apply a patch to my 18.3.0 GI/ASM two node cluster on RHEL 7.5.
The first node worked fine, but the second node always got an error…

Environment:
Server Node1: dbserver01
Server Node2: dbserver02
Oracle Version: 18.3.0 with PSU OCT 2018 ==> 28660077
Patch to be installed: 28655784 (RU 18.4.0.0)

First node (dbserver01)
Everything fine:

cd ${ORACLE_HOME}/OPatch
sudo ./opatchauto apply /tmp/28655784/
...
Successful

Secondary node (dbserver02)
Same command but different output:

cd ${ORACLE_HOME}/OPatch
sudo ./opatchauto apply /tmp/28655784/
...
Remote command execution failed due to No ECDSA host key is known for dbserver01 and you have requested strict checking.
Host key verification failed.
Command output:
OPATCHAUTO-72050: System instance creation failed.
OPATCHAUTO-72050: Failed while retrieving system information.
OPATCHAUTO-72050: Please check log file for more details.

After playing around with the keys I found out that the host keys had to be exchanged for root as well.
So I connected as root and did an ssh from dbserver01 to dbserver02 and from dbserver02 to dbserver01.

After I exchanged the host keys the error message changed:

Remote command execution failed due to Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password).
Command output:
OPATCHAUTO-72050: System instance creation failed.
OPATCHAUTO-72050: Failed while retrieving system information.
OPATCHAUTO-72050: Please check log file for more details.

So I investigated the log file a little further and the statement with the error was:

/bin/ssh -o FallBackToRsh=no -o PasswordAuthentication=no -o StrictHostKeyChecking=no -o NumberOfPasswordPrompts=0 dbserver01 \
/bin/ssh -o FallBackToRsh=no -o PasswordAuthentication=no -o StrictHostKeyChecking=yes -o NumberOfPasswordPrompts=0 dbserver01 \
/u00/app/oracle/product/18.3.0/dbhome_1//perl/bin/perl \
/u00/app/oracle/product/18.3.0/dbhome_1/OPatch/auto/database/bin/RemoteHostExecutor.pl \
-GRID_HOME=/u00/app/oracle/product/18.3.0/grid_1 \
-OBJECTLOC=/u00/app/oracle/product/18.3.0/dbhome_1//cfgtoollogs/opatchautodb/hostdata.obj \
-CRS_ACTION=get_all_homes -CLUSTERNODES=dbserver01,dbserver02,dbserver02 \
-JVM_HANDLER=oracle/dbsysmodel/driver/sdk/productdriver/remote/RemoteOperationHelper

Soooooo: dbserver02 starts an ssh session to dbserver01 and from there an additional session to dbserver01 (itself).
I don't know why, but it is as it is… after I did a key exchange from dbserver01 (root) to dbserver01 (root), the patching worked fine.
At the moment I cannot remember ever having had to do a key exchange from the root user onto the same host.
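For reference, here is a sketch of the kind of key exchange that solved it (run as root on dbserver01; the paths are the usual OpenSSH defaults):

# Add dbserver01's own host key to root's known_hosts...
ssh-keyscan dbserver01 >> /root/.ssh/known_hosts
# ...or simply connect once and accept the host key interactively
ssh root@dbserver01 hostname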

Did you get the same problem, or do you know a better way to do it? Write me a comment!


DOAG 2018: OVM or KVM on ODA?

Mon, 2018-11-26 03:51

The DOAG 2018 is over; for me the most important topics were in the field of licensing. The uncertainty among users is great. Let's take virtualization on the ODA, for example:

The starting point: the customer uses Oracle Enterprise Edition, has 2 CPU licenses, uses Data Guard as disaster protection on 2 ODA X7-2M systems and wants to virtualize; he also has 2 application servers that are to be virtualized as well.

Sure, if I use the HA variant of the ODA or the Standard Edition, this does not concern me: there, OVM is used as hypervisor and it allows hard partitioning. The database system (ODA_BASE) automatically gets its own CPU pool in a Virtualized Deployment; additional VMs can be distributed across the remaining CPUs.

On the small and medium models only KVM is available as hypervisor. This has some limitations: on the one hand there is no virtualized deployment of the ODA 2S/2M systems, on the other hand the operation of databases as KVM guests is not supported. This means that the ODA must be set up as a bare-metal system, and the application servers are virtualized in KVM.

What does that mean for the customer described above? We set up the systems in bare-metal mode, activate 2 cores on each system, set up the database and set up Data Guard between primary and standby. This costs the customer 2 EE CPU licenses (about $95k per price list).

Now he wants to virtualize his 2 application servers and notices that 4 cores are needed per application server. Of the 36 cores (per system) only 2 are activated, so he activates 4 more cores (odacli update-cpucore -c 6) on both systems and installs the VMs.

But: the customer has thereby also changed his Oracle EE licensing, namely from 1 EE CPU to 3 CPUs per ODA, so overall he has to buy 6 CPU licenses (about $285k according to the price list)!

Oracle now promotes KVM as the hypervisor of choice for future virtualization. However, this will not work without hard partitioning under KVM or support for databases in KVM guests.

Tammy Bednar (Oracle’s Oracle Database Appliance Product Manager) announced in her presentation “KVM or OVM? Use Cases for Solution in a Box” that solutions to this problem are expected by mid-2019:

– Oracle databases and applications should be supported as KVM guests
– Support for hard partitioning
– Windows guests under KVM
– Tooling (odacli / Web Console) should support the deployment of KVM guests
– A “privileged” VM (similar to the ODA_BASE on the HA models) for the databases should be provided
– Automated migration of OVM guests to KVM

All these measures would certainly make the "small" systems much more attractive for consolidation. They would also help to simplify the "license jungle" a bit and give customers a bit more certainty. I am curious what will come.


AWS re:invent 2018 warm up

Mon, 2018-11-26 03:07

The cloud is now part of our job, so we have to take a deeper look at the available services to understand them and take best advantage of them. The annual AWS conference re:Invent started tonight in The Venetian in Las Vegas and will last until Friday.


Today was a bit special because there were no sessions yet, but instead I was able to participate in a ride to Red Rock Canyon on a Harley Davidson motorbike.

It's a 56-mile ride and you can enjoy beautiful landscapes, very different from the city and the lights of the casinos. We were a small group of around 13 bikes, and even if it was a bit cold it was a really nice tour. I really recommend people in Vegas to escape the city for a few hours to discover places like Red Rock or the Valley of Fire.

[Photo: Harley Davidson ride to Red Rock Canyon]

 

Then the conference opened with Midnight Madness and an attempt to beat the world record of ensemble air drumming. I don't know yet if we achieved the goal, but I tried to help and participated in the challenge.

[Photo: re:Invent Midnight Madness]

The first launch of the week also happened this evening: a new service called AWS RoboMaker. You can now use the AWS cloud to develop new robotics applications and use other services like Lex or Polly to allow your robot to understand voice orders and answer them, for example.

Tomorrow the real thing begins with hands-on labs and some sessions, stay tuned.


Flashback to the DOAG conference 2018

Sat, 2018-11-24 14:40

Each year since the company's creation in 2010, dbi services has attended the DOAG conference in Nürnberg. Since 2013 we even have a booth.

The primary goal of participating in the DOAG Conference is to get an overview of the main trends in the Oracle business. Furthermore, this conference and our booth allow us to welcome our Swiss and German customers and thank them for their trust. They're always pleased to receive some nice Swiss chocolate produced in Delémont (Switzerland), the city of our headquarters.

But those are not the only reasons why we attend this event. The DOAG conference is also a way to promote our expertise through our speakers and to thank our high-performing consultants for their work all over the year. We consider the conference a way to train people and improve their skills.

Finally, some nice social evenings take place: first of all the Swiss Oracle User Group (SOUG) "Schweizer Abend" on Tuesday evening, and secondly the "DOAG party" on Wednesday evening. dbi services being active in the Swiss Oracle User Group, we always keep a deep link to the Oracle community.

As Chief Sales Officer, I tried to get an overview of the main technical "Oracle trends" through the success of our sessions (9 in total) all over the conference, "success" being measured in terms of the number of participants in those sessions.

At first glance I observed a kind of "stagnation" of the interest in cloud topics. I can provide several pieces of evidence and explanations for that. First of all, the keynote during the first day, presenting a study of German customers concerning cloud adoption, didn't reveal any useful information, in my opinion. Cloud adoption increases; however, there are still some limitations in the deployment of cloud solutions because of security issues, and in particular the CLOUD Act.

Another possible reason for the "small" interest in cloud topics during the conference, in my opinion, relies on the fact that the cloud became a kind of "commodity". Furthermore, we all have to admit that Oracle definitely does not have a leadership position in this business. Amazon, Azure and Google are definitely the leaders, and Oracle remains a "small" challenger.

Our session by Thomas Rein did not have that many attendees, even though we presented a concrete use case about Oracle Cloud usage and adoption. The DOAG is a user group conference: mostly techies attend it, techies have to deal with concrete issues, and currently the Oracle Cloud does not belong to them.

So what were the "main topics" according to what I could observe?

Open source was a huge success for us: both the MySQL and the two PostgreSQL tracks were very successful, thanks to Elisa Usai and Daniel Westermann.

Some general topics like an "introduction to Blockchain" were also a huge success; thanks to Alain Lacour for this session.

Finally, the "classics", like DB tuning on the old-fashioned "on-prem" architectures, also had a huge success, thanks to our technology leader Clemens Bleile and to Jérôme Witt, who explained all about the I/O internals (which are of course deeply linked with performance issues).

Thanks to our other speakers: Pascal Brand (Implement SAML 2.0 SSO in WLS and IDM Federation Services) and our CEO David Hueber (ODA HA: What about VMs and backup?), who presented some more "focused" topics.

I use this blog post to also thank the Scope Alliance, and in particular Esentri, for the very nice party on Wednesday evening: besides hard work, hard partying is also necessary :-)

Below, Daniel Westermann on stage with our customer "die Mobiliar", full room:



My DOAG Debut

Fri, 2018-11-23 08:50

Unbelievable! After more than 10 years working in the Oracle Database environment, this year was my first participation at the DOAG Conference + Exhibition.

After a relaxed trip to Nürnberg with all the power our small car could provide on the German Autobahn, we arrived at the Messezentrum.
With the combined power of our dbi services team, the booth was ready in no time, so we could switch to the more relaxed part of the day and ended up in our hotel's bar with other DOAG participants.

The next few days were a firework of valuable sessions, stimulating discussions and some after-hours parties, which made me think about my life decisions and led me to the question: why did it take me so long to participate in the DOAG Conference + Exhibition?

It would make this post unreadably long and boring if I summed up all the sessions I attended.
So I will just mention a few highlights, with the links to the presentations:

[Image: Boxing Gloves Vectors by Creativology.pk]

And of course, what must be mentioned is The Battle: Oracle vs. Postgres: Jan Karremans vs. Daniel Westermann

The red boxing glove (for Oracle) represented Daniel Westermann, an Oracle expert for many, many years who now is the Open Infrastructure Technology Leader @ dbi services, while Jan Karremans, Senior Sales Engineer at EnterpriseDB, put on the blue glove (for Postgres). The room was fully packed with over 200 people, most of them with more sympathy for Oracle.

[Photo: The Battle: Oracle vs. Postgres]

Knowing how much Daniel loves the open source database, it was inspiring to see how eloquently he defended the Oracle system and brought Jan into trouble multiple times.
It was a good and brave fight between the opponents, in which Daniel had the better arguments and gained a win on points.
For next time, I would wish to see Daniel on the other side defending Postgres, because I am sure he could fight down almost every opponent.

In the end, this DOAG was a wonderful experience and I am sure it won’t take another 10 years until I come back.

PS: I could write about the after party, but as you know, what happens at the after party stays at the after party, except the headache; this little b… stays a little bit longer.

PPS: On the last day I got a nice little present from virtual7 for winning the F1 Grand Prix challenge. I know exactly at which dbi event we will open this bottle, stay tuned…


DOAG 2018: Key word: “Docker”

Fri, 2018-11-23 06:34


In my blog about the DOAG last year I said that I saw a growing interest in automatic deployment tools and Docker containers. This year confirmed that interest. There were a lot of presentations about Docker containers, Kubernetes and OpenShift, in the database stream, the DevOps stream, but also the middleware one. I counted more than 25 sessions where the keyword Docker appeared in the abstract.

Much as I wanted to, I was not able to attend all of them; there were too many.

One of those interesting presentations that caught my attention was the following one: "Management von Docker Containern mit Openshift & Kubernetes" by Heiko Stein. He gave us a very good overview of the services of Kubernetes and OpenShift and showed us how they can complement each other.

Another one was about monitoring and diagnosing the performance of a Java application (OpenJDK 11) running in a Docker container:
Monitoring of JVM in Docker to Diagnose Performance Issues. This one was interesting at several levels, as it talked about Docker containers, OpenJDK 11 and the tools delivered with it. Monitoring applications and diagnosing issues are always interesting subjects to follow to get some hints from someone else's experience.

The last one I will list, but not the least, was "MS Docker: 42 Tips & Tricks for Working with Containers". This one, in summary, is all you ever wanted to know about Docker. But the 45-minute session was really too short to get everything from it :-(.

Those presentations just made my interest in those technologies grow faster.


My first presentation at the DOAG – “MySQL 8.0 Community: Ready for GDPR?”

Fri, 2018-11-23 02:35

This year I participated for the first time in the DOAG, the conference which takes place in November in Nuremberg. Here are some key figures about this event: Oracle and other technologies, 2000 visitors, more than 400 sessions, more than 800 abstracts submitted, exhibitors…
And for me everything started when in June I decided to send an abstract for a MySQL session.

Preparation

I've been working on MySQL for several years. At the beginning of this year, I started testing the new 8.0 version. We live in an age where security is more important than ever; GDPR and other regulations force us to review subjects such as privacy and data policies. MySQL introduced lots of security improvements in this latest version.
So my session proposal for the DOAG was the following one:

MySQL 8.0 Community – Ready for GDPR ?
One of the most topical subjects today is security.
The new MySQL 8.0 version introduces several improvements in this area, such as:
Encryption of Undo and Redo Logs, which comes to enrich existing datafile encryption
Password rotation policy, to avoid a user to always use the same passwords
New caching_sha2_password plugin, which let you manage authentication in a faster and more secure way
SQL Roles, to simplify the user access right management
So… let’s have a look!

When I received the e-mail telling me that my abstract had been accepted, I was happy and stressed at the same time.
I directly started testing and studying these new features more and more, writing my slides and preparing my demos and my speech in English. I know, for most of you this is simple, but – hey – this would have been my first session ever! ;)
Working at dbi services also means the possibility to present a session to colleagues, to test it and get some feedback during our internal events, before presenting the same session at external events. So in September I could present my session for the first time, and this helped me feel more comfortable about presenting. Time passed and November was suddenly there…

Arriving to the DOAG

So on 19th November I caught my flight, and at 7pm I was in Nuremberg. The day after, I arrived at the Conference Center.
My session was planned for 3pm, so I had some time in the morning to visit some booths and people that I wanted to meet (Oracle MySQL, Quest, EDB, and my colleagues at the dbi services booth).
I also got some useful tips from my colleagues to calm my stress and better manage my session: take a few seconds before starting to talk to catch the visual attention of the audience, breathe correctly, visit the room beforehand, and so on (thank you guys for your support during the last weeks!).

My session

The expected moment came: my VMs were running for the demos, my slides were ready, and some people arrived in the room.
I started my talk with a little introduction to the GDPR, explaining the importance of having some privacy and data policies in our hyper-connected world. This aspect let me draw a link to the fact that MySQL 8.0 came out with lots of improvements in terms of security.
So I could finally go deeper into the technical part and explain these important new features:

- SQL Roles:
Thanks to roles, user administration is faster and grant handling is managed in a centralized way. During the session I did a demo to explain how roles are created and activated in MySQL, and I used the yEd desktop application to generate the diagram of the whole roles representation from a graphml file.
For more details about roles, read my previous blog and the MySQL Documentation.
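As a small illustrative sketch (role, user and schema names are made up for this example), creating and activating a role in MySQL 8.0 looks like this:

-- Create a role, grant it privileges, then grant the role to a user
CREATE ROLE 'app_read';
GRANT SELECT ON app_db.* TO 'app_read';
CREATE USER 'bob'@'%' IDENTIFIED BY 'S3cure!Pass';
GRANT 'app_read' TO 'bob'@'%';
SET DEFAULT ROLE 'app_read' TO 'bob'@'%';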

- Password Reuse Policy:
It prevents users from reusing previous passwords. This can be activated based on the number of changes (with the system variable password_history) or the time elapsed (password_reuse_interval), and it's not valid for privileged accounts.
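A minimal example at user level (user name and values are arbitrary):

-- Block the last 5 passwords and any password used within the last 365 days
ALTER USER 'bob'@'%' PASSWORD HISTORY 5 PASSWORD REUSE INTERVAL 365 DAY;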

- Password Verification Policy:
If this feature is activated, attempts to change an account password require the current password to be specified first.
For more details about password verification policy, read my previous blog and the MySQL Documentation.

- Validate Password Component:
It was already there in previous versions, but now it is a component instead of a plugin. For some statements like ALTER|CREATE USER, GRANT, SET PASSWORD, it checks the password of a user account against the policy that we defined (LOW, MEDIUM or HIGH) and rejects the password if it's weak.

- InnoDB Tablespace Encryption:
It's a 2-tier encryption architecture, based on a master key and tablespace keys. When a table is encrypted, a tablespace key is encrypted and stored in the tablespace header. When a user wants to access his data, the master key is used to decrypt the tablespace key. During the session I explained how it works, what the requirements are and how we can set up this feature. I also did a demo to show how we can extract some clear-text data without even connecting to the MySQL server, which is no longer possible once encryption is activated.
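As a minimal sketch (assuming a keyring plugin is already configured; the table is made up):

-- The tablespace key is encrypted by the master key and stored in the tablespace header
CREATE TABLE customer_data (id INT PRIMARY KEY, email VARCHAR(255)) ENCRYPTION='Y';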
This feature has been available since MySQL 5.7.11, but it helped me to introduce the next chapter.

- InnoDB Redo/Undo Log Encryption:
Redo log data is encrypted/decrypted with the tablespace encryption key, which is stored in the header of ib_logfile0. Through a demo I explained what the requirements are, how to set it up and what we have to think about before activating this option. And I showed how we can extract some sensitive data from the redo log files if encryption is turned off.
Same thing for the encryption of InnoDB Undo Log files, which can be activated with the system variable innodb_undo_log_encrypt.
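For reference, both options can be switched on through their system variables (assuming the keyring prerequisites described above are met):

SET GLOBAL innodb_redo_log_encrypt = ON;
SET GLOBAL innodb_undo_log_encrypt = ON;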

- caching_sha2_password Plugin:
In MySQL 8.0 the new caching_sha2_password plugin makes authentication as strong as its predecessor (it still uses the SHA-256 password hashing method) but at the same time faster: a cache on the server side lets user accounts that have already connected once bypass the full authentication.
Here is the schema through which I explained the whole authentication process using RSA key pairs:
[Figure: caching_sha2_password authentication flow]

A little conclusion

Participating in the DOAG and presenting there has been a very important professional, human and social experience for me. I went beyond my limits, I learned lots of new things thanks to the other speakers' sessions, I met new people working in IT and had fun with colleagues sharing some spare time. This was my first participation in a conference; it will not be the last one. Why didn't I start that before? ;)


Technical and non-technical sessions at the DOAG 2018

Thu, 2018-11-22 19:03

The amazing DOAG 2018 conference is over now. As every year, we saw great technical as well as great non-technical sessions. What impressed me was the non-technical presentation "Zurück an die Arbeit – Wie aus Business-Theatern wieder echte Unternehmen werden" (back to work – how business theatres become real companies again) given by Lars Vollmer. It was very funny, but also thought-provoking. Lars started with the provocative sentence that people work too little. Not in terms of time, but in terms of what people do. I.e. lots of things they do look like work but actually are not: meetings, yearly talks, reports, presentations, etc. At one point Lars started talking about the dabbawalas in India to show that people may work with the highest quality and totally independently, without any hierarchical structure. It's a very old concept. As lunch is too expensive in some Indian cities, there is a service to bring lunch in boxes from home to work. The people delivering the lunch boxes work totally independently, have no boss and often are not able to read. Still, they are able to hand over the lunch boxes from dabbawala to dabbawala until they finally arrive at their destination. These independent workers provide the service with an unbelievably high quality: out of 6 million lunch boxes, only 1 is not delivered to the correct target. That's impressive. See e.g. Wikipedia Dabbawala for details.

One of the technical sessions, which I appreciated to be able to attend was about “Oracle’s kernel debug, diagnostics & tracing infrastructure” provided by Stefan Köhler.
That brought back to mind my early days at Oracle Support, with statements like


alter system set events '<EVENT_NUMBER> trace name context forever, level <X>';

However, the time of setting numeric events (e.g. 10046) is over, as Oracle uses the UTS (Unified Tracing Service) for new events only and already maps some numbers to an Event++.

An example for UTS is:


alter session set events 'trace[RDBMS.SQL_Compiler.*][SQL: 869cv4hgb868z] disk=highest';

I.e. the section within the second pair of brackets is the scope. In the example above it means: create a 10053 trace for SQL_ID 869cv4hgb868z once it is parsed.
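Based on the same UTS syntax, such a trace should be switched off again with something like the following (a sketch; verify it on your own version):

alter session set events 'trace[RDBMS.SQL_Compiler.*] off';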

To make sure that events are propagated from the SGA to all sessions' PGAs, Oracle introduced a new parameter "_evt_system_event_propagation" in 11g. Unfortunately that feature was broken in Oracle 12.2 (bugs #25989066 & #25994378) and fixed in Oracle 18c. See also the comments in the blog Enable 10046 tracing for a specific SQL.

It's sad that the DOAG 2018 conference is over, but we are looking forward to an interesting event in 2019. The world is changing towards more open source software used in businesses, and it will be interesting to see how Oracle (and also the DOAG) will react to that.


DOAG 2018 – Fazit: it’s not only about Oracle anymore

Thu, 2018-11-22 05:38

Amazing conference, amazing people, an awesome party and of course great networking, like every year :mrgreen:

This year, I was quite impressed by the number of different technologies. Sessions were not only centered around the big red O. As already summarized by my colleague Alain yesterday:

DOAG 2018 – Not only database – Docker and Kubernetes rule the world

Open source RDBMS, containers and microservices are rising much more quickly than cloud computing, even if, in some cases, both are linked or mixed together.

Of course, a lot of Oracle RDBMS presentations were really good and worthwhile for partners as well as for customers. Some were centered around new features or the new release model, but surprisingly, in spite of the cloud computing trend, all sessions about Oracle and Linux basics were very popular. It makes me think that we are at the dawn of a change in how database administrators integrate RDBMS into the arising DevOps organizations within their companies; DevOps requires stuff running fast, without any long-running, complicated processes.
That sounds like the opposite of the database purpose, which implies stability, performance, availability, etc.
So what? If developers are constantly looking for new technologies, why should we (DBAs) not look at alternative RDBMS? Open source RDBMS?

In my opinion, this is a quite good summary of what I felt during the conference and what I feel everyday at my customers as a consultant over the past five years.
Surprisingly, Paolo Kreth (Schweizerische Mobiliar) had a talk titled "Hilfe, die Open Source DBs kommen!" (Help, the open source DBs are coming!), which described exactly that "feeling" from the opposite, customer side with a real-life customer case.


Thanks to Paolo for sharing his mindset and the spirit of IT @die Mobiliar. As a consultant, I fully agree and can confirm that the concepts he introduced match reality, with more or less success.

Basically, the more people communicate and share expertise (whatever the budget), the more successful actions will be. This applies of course to DevOps, but also at the enterprise level. It indeed matches our company values, and that's why we also hold the "Great Place To Work" label for the second year in a row.


If you share the same mindset, feel free to send us your CV here.

 

Last but not least, see you next year (again) :-)


DOAG 2018 – What to learn from a battle on IT technologies?

Wed, 2018-11-21 23:54

This year marks my 6th participation with dbi services at the DOAG Conference + Exhibition in Nuremberg (as a "non-techie" attendee, no need to say), but it was my very first battle on IT technologies. And it was fun!

On the DOAG 2018 Conference + Exhibition

DOAG 2018 Conference + Exhibition is taking place November 20 – 23, 2018 in Nuremberg. Participants have the opportunity to attend a three-day lecture program with more than 400 talks and international top speakers, plus a wide choice of workshops and community activities. This is a great opportunity to expand your knowledge and benefit from the know-how of the Oracle community.


On dbi services at the DOAG 2018

dbi services has attended the DOAG with sessions and a booth every year since 2013. Our consultants share their knowledge within the German-speaking Oracle community, this year with 9 technical sessions. Customers and contacts are welcome during the sessions and at booth number 242 on the 2nd floor. Every day a prize draw takes place at the booth, with a Swiss watch and many other things to win, which guarantees a relaxed and funny atmosphere.


On battles at community events

Usually, techies present specific topics on one specific technology, i.e. Oracle new features, new versions, interesting findings, tools, and practices. However, it may happen that some speakers rather perform as a "team", probably because they are too shy to come alone on stage… especially in big rooms with 1 or more huge screens behind the speaker, where he/she spends the 45-minute presentation feeling like having his/her "maxi-me" at his/her back!


Anyway, Jan Karremans, Senior Sales Engineer for EnterpriseDB / EDB Postgres, and our Daniel Westermann, Senior Consultant and Technology Leader Open Infrastructure at dbi services, decided to have a battle on a funny topic: Oracle vs. PostgreSQL. And some situations were really funny indeed. The opponents went through critical topics like budget, scalability, security, performance and administration. Daniel was chosen to represent the Oracle DB technologies, while Jan represented PostgreSQL. No need to say to which side the balance tilted regarding budget considerations. But in general I would say that the battle turned a little in favor of Oracle.


What does the battle tell us about IT technologies?

The battle could have been even bloodier; some of us certainly expected it. Daniel started the show with an arrogant "something that costs nothing is useless" that brought fire to the stage and lots of laughs from the spectators. But both counterparts remained quite fair in the end. And the battle became a real opportunity to compare, more or less, two different DB technologies and approaches.

In the end, the interest was huge. More than 250 people attended the session, even though the room contains only 200 seats!


Conclusion

The level of attendance was huge, and not for nothing. The fun of attending a battle was one of the reasons for sure. But having the chance to get a clear picture of which IT solution suits best is critical. So what about getting a neutral and independent view for evaluating IT technologies and designing future projects?

See you soon for more battles and feasibility studies ;-)

 


DOAG 2018 – Not only database – Docker and Kubernetes rule the world

Wed, 2018-11-21 11:12

The DOAG conference is about everything around the Oracle database and Oracle technologies.
But this year, as stated in Daniel Westermann's blog, there were more and more open source as well as "hype" subjects, among which Docker and Kubernetes are very popular.
Looking at the program we can see about 20 presentations on those 2 technologies:

  • DevOps mit der Oracle Datenbank
  • Oracle, PostgreSQL, Docker und Kubernetes bei der Mobiliar
  • Docker und die Oracle Fusion Middleware im BPM- und Forms-Bereich
  • Pimp your DevOps with Docker: An Oracle BI Example
  • Docker Security
  • Dockerize It – Mit APEX in die Amazon Cloud
  • Using Vagrant and Docker For APEX Development
  • Cloud Perspective: Kubernetes is like an app server, but more cloudy
  • Management von Docker Containern mit Openshift & Kubernetes
  • DevOps Supercharged with Docker on Exadata
  • Einführung in Kubernetes
  • Docker for Database Developers
  • Oracle Container Workloads mit Kubernetes auf AWS?
  • Practical Guide to Oracle Virtual Environments
  • Orchestrierung & Docker für DBAs
  • Monitoring of JVM in Docker to Diagnose Performance Issues
  • Container-native Entwicklung und Deployment
  • Alternativen des Betriebs von Weblogic mit Kubernetes/Docker
  • MS Docker: 42 Tips & Tricks for Working with Containers

Most of the ones my colleagues or I had a chance to attend were crowded.
Docker has been growing for years, now replacing step by step the usage of full-blown VMs.
Kubernetes is capitalizing on that success by providing complementary tools.
This is not only the future, it's today…
… have a deeper look into them and follow up.


Funny session Oracle vs PostgreSQL battle at the #DOAG2018

Wed, 2018-11-21 10:14

As every year, dbi services is present at the DOAG with a nice booth and many presentations. Today my colleague Daniel Westermann had a very funny "Oracle vs PostgreSQL" battle session with Jan Karremans. Who would defend which technology was not decided beforehand :-) and, unsurprisingly, Daniel ended up defending Oracle. Yet since last year Daniel has been convinced by the PostgreSQL technology.


Daniel started the battle with an aggressive statement to Jan – "Was gratis ist taugt nichts!" ("What is free is worth nothing!") – and the battle was on!

During the battle many topics were addressed, such as license and support costs, where Oracle was compared to community PostgreSQL. For me this was not fair, because the cost of the Oracle giant and that of community PostgreSQL simply cannot be compared. I would have preferred them to compare Oracle to EDB from the beginning of the battle.

What I will bring back from this battle:

First, many people are interested in this topic "Oracle vs PostgreSQL". The room was completely full, and more than 50 people were standing, so around 250 people attended this session, which is very impressive!

Second, the participants also took an active part in the battle; they wanted to have information and answers on some topics. This shows that lots of companies are currently asking themselves this question.

Third, the battle for PostgreSQL without including the features of EDB was not well balanced. Therefore I would suggest redoing the battle between Oracle and EDB.

Last but not least, I trust Daniel more when he speaks about PostgreSQL than when he talks about Oracle, because when he says "with Oracle Autonomous Database you don't need DBAs anymore" I can't believe him :-!


DOAG 2018, more and more open source

Wed, 2018-11-21 04:35

We have been speaking about the increasing interest in open source technologies for several years and now, in 2018, you can even feel it at the DOAG. There are sessions about PostgreSQL, MariaDB, Docker, Kubernetes and much more. As usual we had to do our hard preparation work before the conference opened to the public and set up our booth, and Michael almost reached his limits (and maybe he was not even sure what he was doing there :) ):


Jerome kicked off the DOAG 2018 for us with one of the very first sessions and talked about "Back to the roots: Oracle Datenbanken I/O Management". That is still an interesting topic, and even though it was a session early in the morning the room was almost full.

Elisa continued in the afternoon with her session about MySQL and GDPR, deep diving into MySQL data and redo log files:
A very well prepared and interesting talk.

David followed with his favorite topic, which is ODA and the session was “ODA HA, what about VM backups?”:

During the sessions not much is happening at our booth, so there is time for internal discussions:

Finally, in the late afternoon, it was Hans' (from Mobiliar) and my turn to talk about Oracle, PostgreSQL, Docker and Kubernetes, and we were quite surprised at how many people we had in the room. The room just filled up, see for yourself:

It seems this is a topic almost everywhere.

And now: day two at the DOAG, let's see what happens today.


ODA and CIS / GDPR features

Tue, 2018-11-20 10:50

We all know that security is becoming… sorry, is one of the hottest topics when setting up an IT environment. One basis for that is to be compliant with regulations or standards such as GDPR or CIS. What is not so well known is that the ODA already integrates some tools to support you with that.

During this first day @DOAG2018 I followed an interesting session by Tammy Bednar, Senior Director of Product Management for ODA, about ODA and security.

Besides the traditional points about the integrated stack of the ODA, the SUDO configuration or the Oracle Database security options, I also heard about nice scripts, available on the ODA since version 12.2.1.3, to check ODA compliance against CIS standards.

As a reminder, the CIS, the Center for Internet Security, produces security guidelines for components such as Linux, databases and much more. As a member of the CIS, dbi services offers security audits based on these guidelines (https://www.dbi-services.com/offering/services/it-security-services/).

On the ODA there is now, out of the box, a "small" Python script which allows you to check the CIS "status" at OS level for your ODA.

To do so you can simply go to /opt/oracle/oak/bin and run the script cis.py.


Sorry, as I couldn't take my ODA with me to Nürnberg, I only have a picture of the script so far ;-)
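For reference, the invocation described above boils down to:

# As root on the ODA
cd /opt/oracle/oak/bin
./cis.py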

There are 2 pieces of good news when running this script on a freshly installed ODA.

  1. The ODA is out of the box already 41% CIS compliant, which is not bad at all
  2. The ODA is only 41% compliant with CIS, which means there is still room for improvement and some work for sysadmins like me ;-)

More seriously, a real added value of this tool is that besides doing the compliance check it provides a feature to fix some or all points. The advantage here, in comparison to manual changes, is that it makes sure not to change anything that the ODA relies on, which could break it.

What about the database?

Of course the ODA is not only an operating system. In the end there are databases running on it. So the question is: if cis.py performs checks at OS level, what can I do at the DB level?

For this, Oracle released a free (yes, free) tool called DBSAT, which stands for Database Security Assessment Tool.
https://www.oracle.com/database/technologies/security/dbsat.html

This tool runs against your database and performs CIS but also some GDPR compliance checks, providing a report. The report can be exported in JSON for activities such as cross-database checks.
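As a rough sketch of its usage (the connect string and file names are placeholders; check the DBSAT documentation for the exact options):

# Collect data from the target database, then generate the report
./dbsat collect dbsat_user@MYDB /tmp/mydb_assessment
./dbsat report /tmp/mydb_assessment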

More blogs about these tools will follow once I'm back from the DOAG… but now it's slowly time for the traditional Schweizer Abend and some party ;-)


Power BI Report Server Kerberos Setup

Mon, 2018-11-19 10:46
In case you have the following configuration and requirements

Your Power BI, paginated and mobile KPI reports are published on your on-premise Power BI Report Server (named e.g. SRV-PBIRS), their data source is an Analysis Services instance located on another server (named e.g. SRV-SSASTAB\INST01, INST01 being the named instance), and you want to track/monitor who is accessing the data on Analysis Services, or you have row-level security constraints.

In such a case, if you have configured your Analysis Services connection using Windows integrated authentication, you have to set up Kerberos delegation from the Power BI Report Server to the Analysis Services server. If you don't do that, your users will face the famous "double-hop" issue: they won't be able to access the Analysis Services data, or you won't be able to identify who is consuming your data on the Analysis Services side.

In order to set up the Kerberos delegation you can follow the steps below:

1- Be sure to be a Domain Admin or to have sufficient permissions to create SPNs and change the service account and/or computer settings in Active Directory.

2- On your Power BI Report Server, get the service account running your Power BI Report Server service.

(e.g. SvcAcc_PBIRS)


Note: if you did not use a domain service account, you will have to use the server name instead in the following steps.

While you are on the server, first make a backup of the rsreportserver.config configuration file and then change it (for a default installation it is located here: C:\Program Files\Microsoft Power BI Report Server\PBIRS\ReportServer). Add the parameter <RSWindowsNegotiate/> in the <AuthenticationTypes> XML node.
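The relevant fragment of rsreportserver.config should then look roughly like this (surrounding elements omitted):

<Authentication>
  <AuthenticationTypes>
    <RSWindowsNegotiate/>
    <RSWindowsNTLM/>
  </AuthenticationTypes>
</Authentication>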


Save and close the file.

3- On your Analysis Services server, get the service account running your Analysis Services service.

(e.g. SvcAcc_SSASTab)


Note: if you did not use a domain service account, you will have to use the server name instead in the following steps.

4- Open a PowerShell console on any domain computer with your domain admin user.

Execute the following command to list the SPNs associated with your Power BI Report Server service account:

SetSpn -l SvcAcc_PBIRS

If you do not see the following entries:

HTTP/SRV-PBIRS.Domain
HTTP/SRV-PBIRS

Execute the following commands to register the HTTP SPNs for your server's FQDN and NetBIOS names:

SetSpn -a http/SRV-PBIRS.Domain SvcAcc_PBIRS
SetSpn -a http/SRV-PBIRS SvcAcc_PBIRS

Note that you have to replace SRV-PBIRS.Domain with the URL (without the virtual directory) of your Power BI Report Server site in case you defined a custom URL or an HTTPS URL with a certificate.

Check again that the SPNs are correctly registered afterwards.

5- In your PowerShell session, execute the following command to list the SPNs registered for your Analysis Services service account:
SetSpn -l SvcAcc_SSASTab

You should see the following entries, meaning your Analysis Services SPNs have been registered:

MSOLAPSVC.3/SRV-SSASTAB:INST01
MSOLAPSVC.3/SRV-SSASTAB.domain:INST01

If not, run the following commands (setspn -a expects the service account as the last argument):

SetSpn -a MSOLAPSVC.3/SRV-SSASTAB:INST01 SvcAcc_SSASTab
SetSpn -a MSOLAPSVC.3/SRV-SSASTAB.domain:INST01 SvcAcc_SSASTab

Furthermore, in case you installed your Analysis Services with a named instance (in my example INST01), check if SPNs have been registered for the Analysis Services SQL Browser service (the server name is used in that case because the SQL Server Browser is started with a local service account):

SetSpn -l SRV-SSASTAB

You should see the following entries:

MSOLAPDisco.3/SRV-SSASTAB
MSOLAPDisco.3/SRV-SSASTAB.domain

If not, run the following commands (these SPNs are registered on the computer account):

SetSpn -a MSOLAPDisco.3/SRV-SSASTAB SRV-SSASTAB
SetSpn -a MSOLAPDisco.3/SRV-SSASTAB.domain SRV-SSASTAB

 

6- For the next step you have to open the Active Directory administration tools.

Open the properties of your Power BI Report Server service account. In the Account tab, uncheck "Account is sensitive and cannot be delegated".


Then, in the Delegation tab, select "Trust this user for delegation to any service". If you have security constraints regarding the delegation, it is recommended to use the third option (constrained delegation) and to select only the services you defined in step 5.


7- Finally, restart your Power BI Report Server service.


Is there too much memory for my SQL Server instance?

Mon, 2018-11-19 02:56

Is there too much memory for my SQL Server instance? This is definitely an uncommon question one of my customers asked me a couple of weeks ago. Usually DBAs complain when they don't have enough memory for the environments they have to manage, and the fact is SQL Server (like other RDBMSs) provides plenty of tools for memory pressure troubleshooting. But what about the opposite? This question was raised in the context of an environment that includes a lot of virtual database servers (> 100) on top of VMware, where my customer was asked to lower the SQL Server instance memory reservations when possible in order to free memory on the ESX hosts.


Let's start with sys.dm_os_sys_memory, the first one my customer wanted to dig into. This DMV may be helpful to get a picture of the overall system state, including external memory conditions at the operating system level and the physical limits of the underlying hardware.

select 
	available_physical_memory_kb / 1024 AS available_physical_memory_MB,
	total_physical_memory_kb / 1024 AS total_physical_memory_MB,
	total_page_file_kb / 1024 AS total_page_file_MB,
	available_page_file_kb / 1024 AS available_page_file_MB,
	system_cache_kb / 1024 AS system_cache_MB
from sys.dm_os_sys_memory;

 

[Figure: sys.dm_os_sys_memory output]

 

But in the context of my customer it only partially helped to figure out the memory consumption of the SQL Server instances, because we didn't really face any environments under pressure here.

However, another interesting DMV we may rely on is sys.dm_os_sys_info. We may also use its counterparts, the perfmon counters \Memory Manager\Target Server Memory (KB) and \Memory Manager\Total Server Memory (KB), as shown below:

select 
	physical_memory_kb / 1024 AS physical_memory_MB,
	visible_target_kb,
	committed_kb,
	committed_target_kb
from sys.dm_os_sys_info;

[Screenshot: sys.dm_os_sys_info output]
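As announced above, the same two figures can also be read from the perfmon counters through sys.dm_os_performance_counters; a small sketch:

-- Target vs. Total Server Memory read from the perfmon counters exposed in SQL Server
SELECT counter_name, cntr_value / 1024 AS value_MB
FROM sys.dm_os_performance_counters
WHERE object_name LIKE '%Memory Manager%'
	AND counter_name IN ('Target Server Memory (KB)', 'Total Server Memory (KB)');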

The concepts of committed memory and target committed memory are important here to figure out how SQL Server deals with its memory space. The committed memory represents the physical memory actually allocated by the SQL Server process, whereas the target memory is the amount of memory SQL Server tries to maintain as committed, depending on the different factors described in the BOL. From my experience, chances are the latter is close to the max server memory value in most scenarios.

But relying blindly on the committed memory may lead to a misinterpretation of what SQL Server is really consuming over a specific period of time. Indeed, let’s say my SQL Server instance is capped at 2GB; here are the corresponding figures after the daily business workload. The values in the context of my customer were of a different order of magnitude, but this demo will help to illustrate the issue that motivated this write-up:

select 
	physical_memory_kb / 1024 AS physical_memory_MB,
	visible_target_kb,
	committed_kb,
	committed_target_kb
from sys.dm_os_sys_info;

[Screenshot: sys.dm_os_sys_info output after the daily workload]

The committed memory is about 365MB, far below the configured max server memory value of 2GB. But now let’s see what happens when the database maintenance kicks in. Usually this is a nightly job, run on a daily or weekly basis, that includes an index rebuild task; getting the external fragmentation values through the DMF sys.dm_db_index_physical_stats() generally implies reading all the data structures. This operation can touch structures that are not used during daily business and may have a huge impact on the buffer pool. In my case, here is the new memory state after executing this maintenance task:

[Screenshot: sys.dm_os_sys_info output after the maintenance task]

The game has changed here because SQL Server has committed all the memory up to the max server memory value. This time we may go through the sys.dm_os_memory_clerks DMV to get details about the different memory clerks of my SQL Server instance. The pages_kb column is used because the instances run SQL Server 2014.

SELECT
	[type],
	pages_kb / 1024 AS size_MB
FROM sys.dm_os_memory_clerks
WHERE memory_node_id = 0
ORDER BY size_MB DESC

[Screenshot: sys.dm_os_memory_clerks output]

So, from now on, the committed memory has a good chance to stay close to the max server memory value, while in fact the daily business workload probably won’t need all of the memory allocated to the SQL Server process. This is exactly why my customer asked me for a way to get a more realistic picture of the memory consumption of his SQL Server instances during daily business, excluding the nightly database maintenance workload.
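To get an idea of which databases actually hold the cached pages at a given point in time, one possible approach, among others, is to count the buffer pool pages per database through sys.dm_os_buffer_descriptors (beware, this scan may be expensive on large buffer pools):

-- 8KB pages currently cached in the buffer pool, per database
SELECT
	CASE database_id WHEN 32767 THEN 'ResourceDb' ELSE DB_NAME(database_id) END AS database_name,
	COUNT_BIG(*) * 8 / 1024 AS buffer_pool_MB
FROM sys.dm_os_buffer_descriptors
GROUP BY database_id
ORDER BY buffer_pool_MB DESC;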

We went through a solution that consisted in freeing up the committed memory before the business day starts and letting the memory grow gradually until it reaches its real maximum usage. It is worth noting that there is no easy way, as far as I know, to free up the committed memory: SQL Server may decrease it only if the corresponding target server memory value is lower. From my experience this situation is more the exception than the rule, and therefore it is difficult to rely on it.

One potential workaround might be to restart the SQL Server instance(s), but in the case of my customer restarting the database servers was not an option, so we looked into a solution that forces SQL Server to make room by setting the max server memory value close to the min server memory value. Don’t get me wrong, I don’t consider this a best practice but rather an emergency procedure because, as with restarting a SQL Server instance, it may lead to a temporary performance impact, and a much bigger one when the workload performance is directly tied to the state of the buffer cache (warm vs. cold).

In addition, I would say that sizing the max server memory value on the daily business workload alone may be controversial in many ways, and in fact we have to consider some tradeoffs here. In the context of my customer, the main goal was to release “unused” memory from the SQL Server instances during daily business in order to free up memory on the VMware ESX hosts, but there is no free lunch. For instance, the nightly workload may suddenly take longer to run if there is less room to work in memory. Another direct side effect of working with less memory might be an increase of I/O operations against the storage layer. In a nutshell, there is no black or white solution; we have to go with what we consider the best solution for the specific context.
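For illustration purposes only, here is a minimal sketch of this emergency procedure; the 512MB floor and the 2048MB ceiling are made-up values for this demo, not the customer’s settings:

-- Emergency procedure: temporarily cap max server memory near the min server
-- memory value to force SQL Server to release committed memory
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

EXEC sp_configure 'max server memory (MB)', 512;   -- assumed min server memory value
RECONFIGURE;

-- ... wait until the committed memory has dropped (check sys.dm_os_sys_info) ...

EXEC sp_configure 'max server memory (MB)', 2048;  -- restore the original cap
RECONFIGURE;

Keep in mind that the buffer cache will be cold afterwards, with the side effects described above.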

See you!


This article Is there too much memory for my SQL Server instance? appeared first on Blog dbi services.

Patching a virtualized ODA to patch 12.2.1.4.0

Tue, 2018-11-13 02:24

This article describes patching a virtualized Oracle Database Appliance (ODA) containing only an ODA_BASE virtual machine.

Do this patching on test machines first, because it cannot be guaranteed that all causes of failure of single-VM ODAs are covered in this article. In my experience, the precheck for ODA patches does not detect some failure conditions which may lead to an unusable ODA.

Overview:
Patch first to 12.1.2.12.0
After that, patch to 12.2.1.4.0

Procedure for both patches:

Preparation:

Apply all files of the patch to the repository on all nodes as user root:

oakcli unpack -package /directory_name/file_name

Verify the patch and the components to be patched on all servers:

[root@xx1 ~]# oakcli update -patch 12.2.1.4.0 --verify
INFO: 2018-09-24 08:32:52: Reading the metadata file now...
Component Name            Installed Version                    Proposed Patch Version
---------------           ------------------                   ----------------------
Controller_INT            4.650.00-7176                        Up-to-date
Controller_EXT            13.00.00.00                          Up-to-date
Expander                  0291                                 0306
SSD_SHARED
  [ c1d20,c1d21,c1d22,c1d23 ]
                          A29A                                 Up-to-date
  [ c1d0,c1d1,c1d2,c1d3,c1d4,c1d5,c1d6,c1d7,c1d8,c1d9,c1d10,c1d11,c1d12,c1d13,c1d14,c1d15,c1d16,c1d17,c1d18,c1d19 ]
                          A29A                                 Up-to-date
SSD_LOCAL                 0R3Q                                 Up-to-date
ILOM                      3.2.9.23 r116695                     4.0.2.26.a r123797
BIOS                      38070200                             38100300
IPMI                      1.8.12.4                             Up-to-date
HMP                       2.3.5.2.8                            2.4.1.0.11
OAK                       12.1.2.12.0                          12.2.1.4.0
OL                        6.8                                  6.9
OVM                       3.4.3                                3.4.4
GI_HOME                   12.1.0.2.170814(26609783,26609945)   12.2.0.1.180417(27674384,27464465)
DB_HOME
  [ OraDb12102_home1 ]    12.1.0.2.170814(26609783,26609945)   12.1.0.2.180417(27338029,27338020)
  [ OraDb11204_home2 ]    11.2.0.4.170418(24732075,23054319)   11.2.0.4.180417(27338049,27441052)

Validate the whole ODA (not during peak load):

oakcli validate -a

Show the versions of all installed components (the example below was taken after patching):

[root@xx1 ~]# oakcli show version -detail
Reading the metadata. It takes a while...
System Version   Component Name        Installed Version                    Supported Version
--------------   ---------------       ------------------                   -----------------
12.2.1.4.0
                 Controller_INT        4.650.00-7176                        Up-to-date
                 Controller_EXT        13.00.00.00                          Up-to-date
                 Expander              0306                                 Up-to-date
                 SSD_SHARED
                   [ c1d20,c1d21,c1d22,c1d23 ]
                                       A29A                                 Up-to-date
                   [ c1d0,c1d1,c1d2,c1d3,c1d4,c1d5,c1d6,c1d7,c1d8,c1d9,c1d10,c1d11,c1d12,c1d13,c1d14,c1d15,c1d16,c1d17,c1d18,c1d19 ]
                                       A29A                                 Up-to-date
                 SSD_LOCAL             0R3Q                                 Up-to-date
                 ILOM                  4.0.2.26.a r123797                   Up-to-date
                 BIOS                  38100300                             Up-to-date
                 IPMI                  1.8.12.4                             Up-to-date
                 HMP                   2.4.1.0.11                           Up-to-date
                 OAK                   12.2.1.4.0                           Up-to-date
                 OL                    6.9                                  Up-to-date
                 OVM                   3.4.4                                Up-to-date
                 GI_HOME               12.2.0.1.180417(27674384,27464465)   Up-to-date
                 DB_HOME               11.2.0.4.170418(24732075,23054319)   11.2.0.4.180417(27338049,27441052)

To perform a dry run of the OS patching (this works only for the ospatch component, not for any others):

[root@xx1 ~]# oakcli validate -c ospatch -ver 12.2.1.4.0
INFO: Validating the OS patch for the version 12.2.1.4.0
INFO: 2018-09-25 08:34:28: Performing a dry run for OS patching
INFO: 2018-09-25 08:34:52: There are no conflicts. OS upgrade could be successful

All packages which are mentioned as incompatible must be removed before patching. Somebody who is able to properly install and configure compatible versions of these packages after patching should be available, and compatible versions of these packages should be prepared beforehand.

Before applying the patch:
In Data Guard installations, set the state of all standby databases to APPLY-OFF
Disable all jobs which use Grid Infrastructure or databases
Set all ACFS replications to “pause”
Unmount all ACFS filesystems
Stop all agents on all ODA nodes
Remove all resources from Grid Infrastructure which depend on ACFS filesystems (srvctl remove; see the sketch after the note below)
These resources can be determined with:

crsctl stat res -dependency | grep -i acfs

Remove all packages which were found to be incompatible with the patch.

Note:
The scripts of both patches cannot unmount ACFS filesystems (at least not filesystems mounted via the registry), and usage of Grid Infrastructure files by mounted ACFS filesystems causes both patches to fail. The check scripts of both patches do not seem to test for this condition. In Grid Infrastructure, all resources on which other resources have dependencies must exist; otherwise their configuration must be saved and the resources must be removed from GI.
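As an illustration only, here is a rough sketch of this preparation for a single ACFS volume. The device name /dev/asm/acfsvol-123 and the mount point /u02/app/oracle/acfsdata are made-up examples, and the exact srvctl/acfsutil options should be verified against your GI version:

# Pause the ACFS replication of the filesystem (on the primary site)
acfsutil repl pause /u02/app/oracle/acfsdata

# Save the current resource configuration so it can be recreated after patching
srvctl config filesystem -device /dev/asm/acfsvol-123 > /root/acfsvol-123_config.txt

# Stop and remove the GI resource of the filesystem (stopping also unmounts it)
srvctl stop filesystem -device /dev/asm/acfsvol-123 -force
srvctl remove filesystem -device /dev/asm/acfsvol-123

# Double-check that no GI resource depends on ACFS anymore
crsctl stat res -dependency | grep -i acfs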

Use the UNIX tool screen for applying the patch, because any network interruption would cause the patch to fail.

Patching:
Only the server and storage components should be patched with the oakcli script; the databases should be patched manually. At least 10 GB of free disk space must exist in the / filesystem and at least 15 GB in /u01.
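A quick way to check the available space beforehand:

# At least 10 GB must be free in / and 15 GB in /u01
df -h / /u01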

All commands have to be executed on the primary ODA node as user root. The HTTP server error at the end of the server patching can be ignored.


[root@xx1 ~]# screen
[root@xx1 ~]# oakcli update -patch 12.2.1.4.0 --server
INFO: DB, ASM, Clusterware may be stopped during the patch if required
INFO: Both Nodes may get rebooted automatically during the patch if required
Do you want to continue: [Y/N]?: Y
INFO: User has confirmed for the reboot
INFO: Patch bundle must be unpacked on the second Node also before applying the patch
Did you unpack the patch bundle on the second Node? : [Y/N]? : Y
INFO: All the VMs except the ODABASE will be shutdown forcefully if needed
Do you want to continue : [Y/N]? : Y
INFO: Running pre-install scripts
INFO: Running prepatching on node 0
INFO: Running prepatching on node 1
INFO: Completed pre-install scripts
INFO: Patching server component (rolling)
INFO: Stopping VMs, repos and OAKD on both nodes...
INFO: Stopped Oakd
...
INFO: Patching the server on node: xx2
INFO: it may take upto 60 minutes. Please wait
INFO: Infrastructure patching summary on node: xx1
INFO: Infrastructure patching summary on node: xx2
SUCCESS: 2018-09-25 09:42:24: Successfully upgraded the HMP
SUCCESS: 2018-09-25 09:42:24: Successfully updated the OAK
SUCCESS: 2018-09-25 09:42:24: Successfully updated the JDK
INFO: 2018-09-25 09:42:24: IPMI is already upgraded
SUCCESS: 2018-09-25 09:42:24: Successfully upgraded the OS
SUCCESS: 2018-09-25 09:42:24: Successfully updated the device OVM
SUCCESS: 2018-09-25 09:42:24: Successfully upgraded the HMP on Dom0
INFO: 2018-09-25 09:42:24: Local storage patching summary on Dom0...
SUCCESS: 2018-09-25 09:42:24: Successfully upgraded the local storage
SUCCESS: 2018-09-25 09:42:24: Successfully updated the device Ilom
SUCCESS: 2018-09-25 09:42:24: Successfully updated the device BIOS
INFO: 2018-09-25 09:42:24: Some of the components patched on node
INFO: 2018-09-25 09:42:24: require node reboot. Rebooting the node
INFO: 2018-09-25 09:42:24: rebooting xx2 via /tmp/dom0reboot...
..........
INFO: 2018-09-25 09:48:03: xx2 is rebooting...
INFO: 2018-09-25 09:48:03: Waiting for xx2 to reboot...
........
INFO: 2018-09-25 09:55:24: xx2 has rebooted...
INFO: 2018-09-25 09:55:24: Waiting for processes on xx2 to start...
..
INFO: Patching server component on node: xx1
INFO: 2018-09-25 09:59:31: Patching ODABASE Server Components (including Grid software)
INFO: 2018-09-25 09:59:31: ------------------Patching HMP-------------------------
SUCCESS: 2018-09-25 10:00:26: Successfully upgraded the HMP
INFO: 2018-09-25 10:00:26: creating /usr/lib64/sun-ssm symlink
INFO: 2018-09-25 10:00:27: ----------------------Patching OAK---------------------
SUCCESS: 2018-09-25 10:00:59: Successfully upgraded OAK
INFO: 2018-09-25 10:01:02: ----------------------Patching JDK---------------------
SUCCESS: 2018-09-25 10:01:12: Successfully upgraded JDK
INFO: 2018-09-25 10:01:12: ----------------------Patching IPMI---------------------
INFO: 2018-09-25 10:01:12: IPMI is already upgraded or running with the latest version
INFO: 2018-09-25 10:01:13: ------------------Patching OS-------------------------
INFO: 2018-09-25 10:01:36: Removed kernel-uek-firmware-4.1.12-61.44.1.el6uek.noarch
INFO: 2018-09-25 10:01:52: Removed kernel-uek-4.1.12-61.44.1.el6uek.x86_64
INFO: 2018-09-25 10:02:03: Clusterware is running on local node
INFO: 2018-09-25 10:02:03: Attempting to stop clusterware and its resources locally
SUCCESS: 2018-09-25 10:03:22: Successfully stopped the clusterware on local node
SUCCESS: 2018-09-25 10:07:36: Successfully upgraded the OS
INFO: 2018-09-25 10:07:40: ------------------Patching Grid-------------------------
INFO: 2018-09-25 10:07:45: Checking for available free space on /, /tmp, /u01
INFO: 2018-09-25 10:07:50: Attempting to upgrade grid.
INFO: 2018-09-25 10:07:50: Executing /opt/oracle/oak/pkgrepos/System/12.2.1.4.0/bin/GridUpgrade.pl...
SUCCESS: 2018-09-25 10:55:07: Grid software has been updated.
INFO: 2018-09-25 10:55:07: Patching DOM0 Server Components
INFO: 2018-09-25 10:55:07: Attempting to patch OS on Dom0...
INFO: 2018-09-25 10:55:16: Clusterware is running on local node
INFO: 2018-09-25 10:55:16: Attempting to stop clusterware and its resources locally
SUCCESS: 2018-09-25 10:56:45: Successfully stopped the clusterware on local node
SUCCESS: 2018-09-25 11:02:19: Successfully updated the device OVM to 3.4.4
INFO: 2018-09-25 11:02:19: Attempting to patch the HMP on Dom0...
SUCCESS: 2018-09-25 11:02:26: Successfully updated the device HMP to the version 2.4.1.0.11 on Dom0
INFO: 2018-09-25 11:02:26: Attempting to patch the IPMI on Dom0...
INFO: 2018-09-25 11:02:27: Successfully updated the IPMI on Dom0
INFO: 2018-09-25 11:02:30: Attempting to patch the local storage on Dom0...
INFO: 2018-09-25 11:02:30: Stopping clusterware on local node...
INFO: 2018-09-25 11:02:37: Disk : c0d0 is already running with MS4SC2JH2ORA480G 0R3Q
INFO: 2018-09-25 11:02:38: Disk : c0d1 is already running with MS4SC2JH2ORA480G 0R3Q
INFO: 2018-09-25 11:02:40: Controller : c0 is already running with 0x005d 4.650.00-7176
INFO: 2018-09-25 11:02:41: Attempting to patch the ILOM on Dom0...
SUCCESS: 2018-09-25 11:27:49: Successfully updated the device Ilom to 4.0.2.26.a r123797
SUCCESS: 2018-09-25 11:27:49: Successfully updated the device BIOS to 38100300
INFO: Infrastructure patching summary on node: xxxx1
SUCCESS: 2018-09-25 11:27:54: Successfully upgraded the HMP
SUCCESS: 2018-09-25 11:27:54: Successfully updated the OAK
SUCCESS: 2018-09-25 11:27:54: Successfully updated the JDK
INFO: 2018-09-25 11:27:54: IPMI is already upgraded
SUCCESS: 2018-09-25 11:27:54: Successfully upgraded the OS
SUCCESS: 2018-09-25 11:27:54: Successfully upgraded GI
SUCCESS: 2018-09-25 11:27:54: Successfully updated the device OVM
SUCCESS: 2018-09-25 11:27:54: Successfully upgraded the HMP on Dom0
INFO: 2018-09-25 11:27:54: Local storage patching summary on Dom0...
SUCCESS: 2018-09-25 11:27:54: Successfully upgraded the local storage
SUCCESS: 2018-09-25 11:27:54: Successfully updated the device Ilom
SUCCESS: 2018-09-25 11:27:54: Successfully updated the device BIOS
INFO: Infrastructure patching summary on node: xxxx2
SUCCESS: 2018-09-25 11:27:54: Successfully upgraded the HMP
SUCCESS: 2018-09-25 11:27:54: Successfully updated the OAK
SUCCESS: 2018-09-25 11:27:54: Successfully updated the JDK
INFO: 2018-09-25 11:27:54: IPMI is already upgraded
SUCCESS: 2018-09-25 11:27:54: Successfully upgraded the OS
SUCCESS: 2018-09-25 11:27:54: Successfully updated the device OVM
SUCCESS: 2018-09-25 11:27:54: Successfully upgraded the HMP on Dom0
INFO: 2018-09-25 11:27:54: Local storage patching summary on Dom0...
SUCCESS: 2018-09-25 11:27:54: Successfully upgraded the local storage
SUCCESS: 2018-09-25 11:27:54: Successfully updated the device Ilom
SUCCESS: 2018-09-25 11:27:54: Successfully updated the device BIOS
SUCCESS: 2018-09-25 11:27:54: Successfully upgraded GI
INFO: Running post-install scripts
INFO: Running postpatch on node 1...
INFO: Running postpatch on node 0...
...
...
INFO: Started Oakd
INFO: 2018-09-25 11:32:26: Some of the components patched on node
INFO: 2018-09-25 11:32:26: require node reboot. Rebooting the node
INFO: Rebooting Dom0 on node 0
INFO: 2018-09-25 11:32:26: Running /tmp/dom0reboot on node 0
INFO: 2018-09-25 11:33:10: Clusterware is running on local node
INFO: 2018-09-25 11:33:10: Attempting to stop clusterware and its resources locally
SUCCESS: 2018-09-25 11:35:52: Successfully stopped the clusterware on local node
INFO: 2018-09-25 11:38:54: RPC::XML::Client::send_request: HTTP server error: read timeout
[root@xx1 ~]#
Broadcast message from root@xx1
(unknown) at 11:39 ...
The system is going down for power off NOW!

[root@xx1 ~]# oakcli update -patch 12.2.1.4.0 --storage
INFO: DB, ASM, Clusterware may be stopped during the patch if required
INFO: Both Nodes may get rebooted automatically during the patch if required
Do you want to continue: [Y/N]?: Y
INFO: User has confirmed for the reboot
INFO: Running pre-install scripts
INFO: Running prepatching on node 0
INFO: Running prepatching on node 1
INFO: Completed pre-install scripts
INFO: Shared Storage components need to be patched
INFO: Stopping OAKD on both nodes...
INFO: Stopped Oakd
INFO: Attempting to shutdown clusterware (if required)..
INFO: 2018-09-25 12:07:13: Clusterware is running on one or more nodes of the cluster
INFO: 2018-09-25 12:07:13: Attempting to stop clusterware and its resources across the cluster
SUCCESS: 2018-09-25 12:07:59: Successfully stopped the clusterware
INFO: Patching storage on node xx2
INFO: Patching storage on node xx1
INFO: 2018-09-25 12:08:23: ----------------Patching Storage-------------------
INFO: 2018-09-25 12:08:23: ....................Patching Shared SSDs...............
INFO: 2018-09-25 12:08:23: Disk : d0 is already running with : HSCAC2DA2SUN1.6T A29A
INFO: 2018-09-25 12:08:23: Disk : d1 is already running with : HSCAC2DA2SUN1.6T A29A
INFO: 2018-09-25 12:08:23: Disk : d2 is already running with : HSCAC2DA2SUN1.6T A29A
INFO: 2018-09-25 12:08:24: Disk : d3 is already running with : HSCAC2DA2SUN1.6T A29A
INFO: 2018-09-25 12:08:24: Disk : d4 is already running with : HSCAC2DA2SUN1.6T A29A
INFO: 2018-09-25 12:08:24: Disk : d5 is already running with : HSCAC2DA2SUN1.6T A29A
INFO: 2018-09-25 12:08:25: Disk : d6 is already running with : HSCAC2DA2SUN1.6T A29A
INFO: 2018-09-25 12:08:25: Disk : d7 is already running with : HSCAC2DA2SUN1.6T A29A
INFO: 2018-09-25 12:08:25: Disk : d8 is already running with : HSCAC2DA2SUN1.6T A29A
INFO: 2018-09-25 12:08:26: Disk : d9 is already running with : HSCAC2DA2SUN1.6T A29A
INFO: 2018-09-25 12:08:26: Disk : d10 is already running with : HSCAC2DA2SUN1.6T A29A
INFO: 2018-09-25 12:08:26: Disk : d11 is already running with : HSCAC2DA2SUN1.6T A29A
INFO: 2018-09-25 12:08:27: Disk : d12 is already running with : HSCAC2DA2SUN1.6T A29A
INFO: 2018-09-25 12:08:27: Disk : d13 is already running with : HSCAC2DA2SUN1.6T A29A
INFO: 2018-09-25 12:08:27: Disk : d14 is already running with : HSCAC2DA2SUN1.6T A29A
INFO: 2018-09-25 12:08:28: Disk : d15 is already running with : HSCAC2DA2SUN1.6T A29A
INFO: 2018-09-25 12:08:28: Disk : d16 is already running with : HSCAC2DA2SUN1.6T A29A
INFO: 2018-09-25 12:08:28: Disk : d17 is already running with : HSCAC2DA2SUN1.6T A29A
INFO: 2018-09-25 12:08:29: Disk : d18 is already running with : HSCAC2DA2SUN1.6T A29A
INFO: 2018-09-25 12:08:29: Disk : d19 is already running with : HSCAC2DA2SUN1.6T A29A
INFO: 2018-09-25 12:08:30: Disk : d20 is already running with : HSCAC2DA6SUN200G A29A
INFO: 2018-09-25 12:08:30: Disk : d21 is already running with : HSCAC2DA6SUN200G A29A
INFO: 2018-09-25 12:08:30: Disk : d22 is already running with : HSCAC2DA6SUN200G A29A
INFO: 2018-09-25 12:08:31: Disk : d23 is already running with : HSCAC2DA6SUN200G A29A
INFO: 2018-09-25 12:08:31: ....................Patching Shared HDDs...............
INFO: 2018-09-25 12:08:31: ....................Patching Expanders...............
INFO: 2018-09-25 12:08:31: Updating the Expander : c0x0 with the Firmware : DE3-24C 0306
SUCCESS: 2018-09-25 12:09:24: Successfully updated the Firmware on Expander : c0x0 to DE3-24C 0306
INFO: 2018-09-25 12:09:24: Updating the Expander : c1x0 with the Firmware : DE3-24C 0306
SUCCESS: 2018-09-25 12:10:16: Successfully updated the Firmware on Expander : c1x0 to DE3-24C 0306
INFO: 2018-09-25 12:10:16: ..............Patching Shared Controllers...............
INFO: 2018-09-25 12:10:16: Controller : c0 is already running with : 0x0097 13.00.00.00
INFO: 2018-09-25 12:10:17: Controller : c1 is already running with : 0x0097 13.00.00.00
INFO: 2018-09-25 12:10:17: ------------ Completed Storage Patching------------
INFO: 2018-09-25 12:10:17: Completed patching of shared_storage
INFO: Patching completed for component Storage
INFO: Running post-install scripts
INFO: Running postpatch on node 1...
INFO: Running postpatch on node 0...
INFO: 2018-09-25 12:10:28: Some of the components patched on node
INFO: 2018-09-25 12:10:28: require node reboot. Rebooting the node
INFO: 2018-09-25 12:10:28: Running /tmp/pending_actions on node 1
INFO: Node will reboot now.
INFO: Please check reboot progress via ILOM interface
INFO: This session may appear to hang, press ENTER after reboot
INFO: 2018-09-25 12:12:53: Rebooting Dom1 on node 0
INFO: Running /tmp/pending_actions on node 0
Broadcast message from oracle@xx1
(/dev/pts/0) at 12:13 ...
The system is going down for reboot NOW!

After successful patching:

Install and configure compatible versions of all previously removed packages
Mount all ACFS filesystems
Recreate all deleted Grid Infrastructure resources and start them (see the sketch below)
Re-enable all jobs that were disabled before
Resume all ACFS replications
Set the state of all Data Guard standby databases to APPLY-ON
Check the ACFS replications
Check the Data Guard status
Check whether everything works as before
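Continuing the made-up example from the preparation section (device /dev/asm/acfsvol-123, mount point /u02/app/oracle/acfsdata, and standby database STBYDB are illustrative names only; verify the exact srvctl/acfsutil/dgmgrl syntax against your versions), the restore steps might look like this sketch:

# Recreate the GI resource of the ACFS filesystem from the saved configuration
srvctl add filesystem -device /dev/asm/acfsvol-123 -path /u02/app/oracle/acfsdata -fstype ACFS
srvctl start filesystem -device /dev/asm/acfsvol-123

# Resume the ACFS replication and check its status
acfsutil repl resume /u02/app/oracle/acfsdata
acfsutil repl info -c /u02/app/oracle/acfsdata

# Re-enable redo apply on the standby database via the Data Guard broker
dgmgrl / "edit database 'STBYDB' set state='APPLY-ON';"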

This article Patching a virtualized ODA to patch 12.2.1.4.0 appeared first on Blog dbi services.
