Feed aggregator

[FREE Masterclass] How To Become Oracle Certified Cloud Architect [1Z0-932] in 8 Weeks

Online Apps DBA - 16 hours 4 min ago

Oracle Gen2 Cloud (OCI) is a must-know for everyone [Apps DBA, DBA Architects, SOA Admins, Architects, FMW Admins & Security Professionals] because all Oracle software runs on top of Oracle Cloud Infrastructure (OCI), i.e. Oracle’s Gen2 Cloud! Learning & being certified on Cloud will take your career to the next level in 2019. Visit: HERE to join […]

The post [FREE Masterclass] How To Become Oracle Certified Cloud Architect [1Z0-932] in 8 Weeks appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Extreme Nulls

Jonathan Lewis - Fri, 2018-12-14 13:01

This note is a variant of a note that I wrote a few months ago about the impact of nulls on column groups. The effect showed up recently on a client site with a little camouflage that confused the issue for a little while, so I thought it would be worth a repeat. We’ll start with a script to generate some test data:

rem
rem     Script:         pt_hash_cbo_anomaly.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Dec 2018
rem     Purpose:        
rem
rem     Last tested 
rem             12.1.0.2
rem

create table t1 (
        hash_col,
        rare_col,
        n1,
        padding
)
nologging
partition by hash (hash_col)
partitions 32
as
with generator as (
        select 
                rownum id
        from dual 
        connect by 
                level <= 1e4 -- > comment to avoid WordPress format issue
)
select
        mod(rownum,128),
        case when mod(rownum,1021) = 0 
                then rownum + trunc(dbms_random.value(-256, 256))
        end,
        rownum,
        lpad('x',100,'x')               padding
from
        generator       v1,
        generator       v2
where
        rownum <= 1048576 -- > comment to avoid WordPress format issue
;

create index t1_i1 on t1(hash_col, rare_col) nologging
local compress 1
;

begin
        dbms_stats.gather_table_stats(
                ownname     => null,
                tabname     => 'T1',
                granularity => 'ALL',
                method_opt  => 'for all columns size 1'
        );
end;
/

I’ve got a hash-partitioned table with 32 partitions; the partitioning key is called hash_col, and there is another column called rare_col that is almost always null – roughly 1 row in every 1,000 holds a value. I’ve added a local index on (hash_col, rare_col), compressing the leading column since hash_col is very repetitive, and gathered stats on the partitions and table. Here’s a view of the data for a single value of hash_col, and a summary report of the whole data set:

select  
        hash_col, rare_col, count(*)
from
        t1
where
        hash_col = 63
group by
        hash_col, rare_col
order by
        hash_col, rare_col
;

  HASH_COL   RARE_COL   COUNT(*)
---------- ---------- ----------
        63     109217          1
        63     240051          1
        63     370542          1
        63     501488          1
        63     631861          1
        63     762876          1
        63     893249          1
        63    1023869          1
        63                  8184

9 rows selected.

select
        count(*), ct
from    (
        select
                hash_col, rare_col, count(*) ct
        from
                t1
        group by
                hash_col, rare_col
        order by
                hash_col, rare_col
        )
group by ct
order by count(*)
;

  COUNT(*)         CT
---------- ----------
         3       8183
       125       8184
      1027          1

Given the way I’ve generated the data, any one value for hash_col will have 8,184 (or 8,183) rows where rare_col is null; but there are 1,027 rows which have a value for both hash_col and rare_col, with just one row for each combination.

Now we get to the problem. Whenever rare_col is non-null the combination of hash_col and rare_col is unique (though this wasn’t quite the case at the client site), so when we query for a given hash_col and rare_col we would hope that the optimizer would be able to estimate a cardinality of one row; but this is what we see:


variable n1 number
variable n2 number

explain plan for
select /*+ index(t1) */
        n1
from
        t1
where
        hash_col = :n1
and     rare_col = :n2
;

select * from table(dbms_xplan.display);

========================================

--------------------------------------------------------------------------------------------------------------------
| Id  | Operation                                  | Name  | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
--------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                           |       |   908 | 10896 |    76   (0)| 00:00:01 |       |       |
|   1 |  PARTITION HASH SINGLE                     |       |   908 | 10896 |    76   (0)| 00:00:01 |   KEY |   KEY |
|   2 |   TABLE ACCESS BY LOCAL INDEX ROWID BATCHED| T1    |   908 | 10896 |    76   (0)| 00:00:01 |   KEY |   KEY |
|*  3 |    INDEX RANGE SCAN                        | T1_I1 |   908 |       |     2   (0)| 00:00:01 |   KEY |   KEY |
--------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   3 - access("HASH_COL"=TO_NUMBER(:N1) AND "RARE_COL"=TO_NUMBER(:N2))

The optimizer has predicted a massive 908 rows. A quick check of the object stats shows us that this is “number of rows in table” / “number of distinct keys in index” (1,048,576 / 1,155, rounded up).
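You can reproduce that arithmetic directly – a quick sanity check using the figures just quoted:

select ceil(1048576/1155) predicted_rows from dual;

PREDICTED_ROWS
--------------
           908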

Any row with rare_col set to null cannot match the predicate “rare_col = :n2”, but because the optimizer is looking at the statistics of complete index entries (and there are 1048576 of them, with 1155 distinct combinations, and none that are completely null) it has lost sight of the frequency of nulls for rare_col on its own. (The same problem appears with column groups – which is what I commented on in my previous post on this topic).

I’ve often said in the past that you shouldn’t create histograms on data unless your code is going to use them. In this case I need to stop the optimizer from looking at the index.distinct_keys and one way to do that is to create a histogram on one of the columns that defines the index; and I’ve chosen to do this with a fairly arbitrary size of 10 buckets:


execute dbms_stats.gather_table_stats(user,'t1',method_opt=>'for columns rare_col size 10')

explain plan for
select /*+ index(t1) */
        n1
from
        t1
where
        hash_col = :n1
and     rare_col = :n2
;

select * from table(dbms_xplan.display);

========================================

--------------------------------------------------------------------------------------------------------------------
| Id  | Operation                                  | Name  | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
--------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                           |       |     1 |    12 |     2   (0)| 00:00:01 |       |       |
|   1 |  PARTITION HASH SINGLE                     |       |     1 |    12 |     2   (0)| 00:00:01 |   KEY |   KEY |
|   2 |   TABLE ACCESS BY LOCAL INDEX ROWID BATCHED| T1    |     1 |    12 |     2   (0)| 00:00:01 |   KEY |   KEY |
|*  3 |    INDEX RANGE SCAN                        | T1_I1 |     1 |       |     1   (0)| 00:00:01 |   KEY |   KEY |
--------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   3 - access("HASH_COL"=TO_NUMBER(:N1) AND "RARE_COL"=TO_NUMBER(:N2))

Bonus observation

This problem came to my attention (and I’ve used a partitioned table in my demonstration) because I had noticed an obvious optimizer error in the client’s execution plan for exactly this kind of simple query. I can demonstrate the effect the client saw by running the test again without creating the histogram but declaring hash_col to be not null. Immediately after creating the index I’m going to add the line:


alter table t1 modify hash_col not null;

(The client’s system didn’t declare the column not null, but their equivalent of hash_col was part of the primary key of the table which meant it was implicitly declared not null). Here’s what my execution plan looked like with this constraint in place:


--------------------------------------------------------------------------------------------------------------------
| Id  | Operation                                  | Name  | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
--------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                           |       |   908 | 10896 |    76   (0)| 00:00:01 |       |       |
|   1 |  PARTITION HASH SINGLE                     |       |   908 | 10896 |    76   (0)| 00:00:01 |   KEY |   KEY |
|   2 |   TABLE ACCESS BY LOCAL INDEX ROWID BATCHED| T1    |   908 | 10896 |    76   (0)| 00:00:01 |   KEY |   KEY |
|*  3 |    INDEX RANGE SCAN                        | T1_I1 |    28 |       |     2   (0)| 00:00:01 |   KEY |   KEY |
--------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   3 - access("HASH_COL"=TO_NUMBER(:N1) AND "RARE_COL"=TO_NUMBER(:N2))

Spot the difference.

The estimate of index rowids is far smaller than the estimate of the rows that will be fetched using those rowids. This is clearly an error.

If you’re wondering how Oracle got this number divide 908 by 32 (the number of partitions in the table) – the answer is 28.375.

Fortunately it’s (probably) an error that doesn’t matter despite looking worryingly wrong. Critically the division hasn’t changed the estimate of the number of table rows (we’ll ignore the fact that that estimate is wrong thanks to a different error), and the cost of the index range scan and table access have not changed. The error is purely cosmetic in effect.

Interestingly if you modify the query to be index-only (i.e. you restrict the select list to columns in the index) this extra division disappears.

Summary

1) If you have a B-tree index where one (or more) of the columns is null for a large fraction of the entries then the optimizer may over-estimate the cardinality of a predicate of the form: “(list of all index columns) = (list of values)” as it will be using the index.distinct_keys in its calculations and ignore the effects of nulls in the individual columns. If you need to work around this issue then creating a histogram on one of the index columns will be sufficient to switch Oracle back to the strategy of multiplying the individual column selectivities.

2) There are cases of plans for accessing partitioned tables where Oracle starts by using table-level statistics to get a suitable set of estimates but then displays a plan with the estimate of rows for an index range scan scaled down by the number of partitions in the table. This results in a visible inconsistency between the index estimate and the table estimate, but it doesn’t affect the cardinality estimate for the table access or either of the associated costs – so it probably doesn’t have a destabilising effect on the plan.

Lightning Web Components - the dawn of (another) new era

Rob Baillie - Fri, 2018-12-14 08:04

Salesforce have a new technology. Lightning Components look like they’re on the way out, and are being replaced with a new technology ‘Lightning Web Components’.

The reasons behind that, and the main principles behind its design are covered very nicely in this article on developer.salesforce.com.

From that we can then get to a series of examples here.

(Note: some of the code snippets used below, to illustrate points, are taken from the recipes linked above)

Now I’m a big supporter of evolution, and I love to see new tools being given to developers on the Salesforce platform, so, with a couple of hours to play with it - what’s the immediate impression?

This is an article on early impressions, based on reviewing and playing with the examples - I fully expect there to be misunderstandings, bad terminology, and mistakes in here - If you're OK with that, I'm OK with that. I admit, I got excited and wanted to post something as quickly as possible before my cynical side took over. So here it is - mistakes and all.

WOW. Salesforce UI development has grown up.

Salesforce aren’t lying when they say that they’re trying to bring the development toolset up to modern standards.

We get imports, what look like annotations and decorators, and there’s even mention of Promises. Maybe there’s some legs in this…

It’s easy to dismiss this as ‘Oh no, yet another change’, but the thing is - the rest of the industry develops and improves its toolset - why shouldn’t Salesforce?

The only way to keep the product on point IS to develop the frameworks, replace the technology, upgrade, move on. If you don’t do that then the whole Salesforce Ecosystem starts to stagnate.

Or to put it another way - in every other part of the developer community, learning from what was built yesterday and evolving is seen as a necessity. It’s good to see Salesforce trying to keep up.

So what are the big things that I’ve spotted immediately?

import is supported, and that makes things clearer

Import is a massive addition to Javascript that natively allows us to define the relationships between javascript files within javascript, rather than at the HTML level.

Essentially, this replaces the use of most ‘script’ tags in traditional Javascript development.

For Lightning Web Components, we use this to bring in capabilities from the framework, as well as static resources.

E.g. Importing modules from the Lightning Web Components framework:


import { LightningElement, track } from 'lwc';

Importing from Static Resources:


import { loadScript } from 'lightning/platformResourceLoader';
import chartjs from '@salesforce/resourceUrl/chart';

What this has allowed Salesforce to do is to split up the framework into smaller components. If you don’t need to access Apex from your web component, then you don’t need to import the part of the framework that enables that capability.

This *should* make individual components much more lightweight and targeted - only including the capabilities that are required, when they are required.

Getting data on screen is simpler

Any javascript property is visible to the HTML template.

E.g.


export default class WebAppComponentByMe extends LightningElement {
    contacts;
}

We can then render this property in the HTML with {contacts} (none of those attributes to define and none of those pesky v dot things to forget).

Much neater, much more concise.
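
For example, a minimal template sketch that renders the property (hypothetical markup – the contact fields here are my assumptions, not taken from the recipes):

<template>
    <template for:each={contacts} for:item="contact">
        <!-- each item is rendered straight from the javascript property -->
        <p key={contact.Id}>{contact.Name}</p>
    </template>
</template>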

We track properties

Looking at the examples, my assumption was that if we want to perform actions when a property is changed, we mark the property trackable using the @track decorator.

For example:


export default class WebAppComponentByMe extends LightningElement {
    @track contacts;
}

I was thinking that, at this point, anything that references this property (on page, or in Javascript) will be notified whenever that property changes.

However, at this point I can't really tell what the difference is between tracked and non-tracked properties - a mystery for another day

Wiring up to Apex is much simpler

One of the big criticisms of Lightning Components that I always had was the amount of code you need to write in order to call an Apex method. OK, so you have force:recordData for a lot of situations, but there are many times when only an Apex method will do.

In Web Components, this is much simpler.

In order to connect to Apex, we import the ‘wire’ module, and then import functions into our javascript


import { LightningElement, wire } from 'lwc';
import getContactList from '@salesforce/apex/ContactController.getContactList';

The first line imports the wire capabilities from the framework, the second then imports the Apex method as a javascript method, therefore making it available to the component.

We can then connect a javascript property up to the method using the wire decorator:


@wire(getContactList) contacts;

Or wire up a javascript method:


@wire(getContactList)
wiredContacts({ error, data }) {
    if (data) {
        this.contacts = data;
    } else if (error) {
        this.error = error;
    }
}

When the component is initialised, the getContactList method will be executed.

If the method has parameters, that’s also very simple (E.g. wiring to a property):


@wire(getContactList, { searchKey: '$searchKey' })
contacts;

Changing the value of a property causes Apex to re-execute

Having wired up a property as a parameter to an Apex-bound Javascript function, any changes to that property will cause the function to be re-executed.

For example, if we:


searchKey = '';

@wire(findContacts, { searchKey: '$searchKey' })
contacts;

Whenever the searchKey property changes, the Apex method imported as ‘findContacts’ will be executed and the contacts property is updated.

Thankfully, we can control when that property changes, as it looks like changing the value in the UI does not automatically update the property on the Javascript object. In order to do that, we need to change the property directly.

E.g. Let’s say we extend the previous example so that there’s an input bound to the property, with an onchange event defined - something like this markup:



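<!-- a sketch: an input bound to searchKey with an onchange handler -->
<lightning-input type="search" label="Search" value={searchKey} onchange={handleKeyChange}></lightning-input>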
And the handler does the following:


handleKeyChange(event) {
    this.searchKey = event.target.value;
}

This will cause the findContacts method to fire whenever the value in the input is changed.

Note that it is the assignment to this.searchKey that causes the event to fire - it looks like the binding from the HTML is 1-way. I admit that I need to investigate this further.

Events do not require configuration to be implemented

Events work in a completely different way - but then that’s not a problem - Application and Component events were different enough to cause headaches previously. The model is actually much simpler.

The example in the above referenced repository to look at is ‘PubSub’.

It’s much too involved to go into detail here, but the result is that you need to:

  • Implement a Component that acts as the messenger (implementing registerListener, unregisterListener and fireEvent)
  • Any component that wants to fire an event, or listen for an event will import that component to do so, firing events or registering listeners.
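
Wiring a component up to that messenger looks something like this (a sketch – the module name ‘c/pubsub’, the event name and the exact signatures are my assumptions from reading the recipes, and page-reference handling is omitted):

import { LightningElement } from 'lwc';
// 'c/pubsub' is the messenger component described above - an assumption, not a documented API
import { registerListener, unregisterListener, fireEvent } from 'c/pubsub';

export default class ContactList extends LightningElement {
    selectedContactId;

    connectedCallback() {
        // start listening for an event fired by any other component
        registerListener('contactSelected', this.handleContactSelected, this);
    }

    disconnectedCallback() {
        unregisterListener('contactSelected', this.handleContactSelected, this);
    }

    handleContactSelected(contactId) {
        // react to the shared event
        this.selectedContactId = contactId;
    }

    handleRowClick(event) {
        // broadcast to every registered listener
        fireEvent('contactSelected', event.target.dataset.contactId);
    }
}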

This would seem to imply that (at least a certain amount of) state within components is shared - looking like those defined with 'const'

Whatever the precise nature of the implementation, a pure Javascript solution is surely one that anyone involved in OO development will welcome.

I suspect that, in a later release, this will become a standard component.

Summary

Some people will be thinking “Man, glad I didn’t migrate from Classic / Visualforce to Lightning Experience / Components - maybe I’ll just wait a little longer for it all to settle down”.

You’re wrong - it won’t settle; it’ll continue to evolve and the technologies will be continually replaced by new ones. Eventually, the jump from what you have to where you need to get to will be so huge that you’ll find it incredibly hard. There’s a reason why Salesforce pushes out 3 releases a year whether you want them or not - these technology jumps are just the same. The more you put it off, the more painful it’ll be.

The change from Lightning Components to Lightning Web Components is vast - a lot more than a single 3 letter word would have you suspect. The only real similarities between the two frameworks that I’ve seen up to now are:

  • Curlies are used to bind things
  • The Base Lightning Components are the same
  • You need to know Javascript

Other than that, they’re a world apart.

Also, I couldn’t find any real documentation - only examples - although those examples are a pretty comprehensive starting point.

Now, obviously it's early days - we're in pre-release right now - but what I've seen gives me great hope for the framework. It's a significant step forward and I can't wait to see what happens next. I wonder if a Unit Testing framework might follow (I can but hope).

You could wait, but hey, really, what are you waiting for? Come on, jump in. The change is exciting...

Oracle Database 18c XE available for Everyone

Oracle Database Express Edition (XE) Release 18.4.0.0.0 (18c) was released on October 19, 2018 and is now available for DBAs, developers, data scientists or anyone curious about the world’s #1...

We share our skills to maximize your revenue!
Categories: DBA Blogs

[BLOG] Templates in Oracle SOA Suite 12c

Online Apps DBA - Fri, 2018-12-14 06:35

Are you learning SOA Dev and want to enhance your skills on the same? If yes, then visit: https://k21academy.com/soadev23 and learn about: ✔What are Templates in SOA 12C ✔The 3 levels of templates ✔Steps to create different levels of templates & much more… Are you learning SOA Dev and want to enhance your skills on […]

The post [BLOG] Templates in Oracle SOA Suite 12c appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

[Solved] Oracle GoldenGate Extract/Replicat Abended- OGG-02091: Operation not supported

Online Apps DBA - Fri, 2018-12-14 03:02

Are you learning Oracle GoldenGate and are facing issues such as ‘Extract or Replicat processes getting abended’? If you are facing the above issue and want to know all about it, then visit https://k21academy.com/goldengate32 where we have covered: ✔’Extract or Replicat processes getting abended’ issue in detail ✔The Reason why it occurs ✔The Solution to […]

The post [Solved] Oracle GoldenGate Extract/Replicat Abended- OGG-02091: Operation not supported appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Packages are invalid -- ORA-04061

Tom Kyte - Fri, 2018-12-14 02:46
Hi, In our DB, few packages became invalid. And when we verified it we saw that there are no errors related to them. I was expecting sessions to run this package without any error, however when it was executed for first time, we got ORA-04061 erro...
Categories: DBA Blogs

Full-Text Index (Table, Index and LOB files) Size Creep

Tom Kyte - Fri, 2018-12-14 02:46
Q: How do we "manage" the size of a table's associated SYS_LOB files? Background: I have a table [SRCH_CACHE] that was setup as a lookup table, because the two base tables could not be joined/filtered with sufficient speed. The table is pretty ...
Categories: DBA Blogs

How to forecast archive generation for a 350 GB table with no large objects.

Tom Kyte - Fri, 2018-12-14 02:46
Hi Gurus, Thanks for helping us whenever we need you. I have a task to import a 350 GB table. I will create index later. Its a 12.2 CDB with one single PDB and have a standby DB with force_logging enabled. Is there a method so that I can f...
Categories: DBA Blogs

How to compare date duration of first row with second and so on.. and convert it to rows to column with indicators

Tom Kyte - Fri, 2018-12-14 02:46
Hi Tom, I have below table <code>CREATE TABLE PRDCT (ID NUMBER(10,0), PRDCT_CD VARCHAR2(1), START_DT DATE, END_DT DATE);</code> With data in this like - <code>Insert Into PRDCT Values (1,'A',to_date('01-May-2017'),to_date('31-Jul-2017')...
Categories: DBA Blogs

Red Hat Enterprise Linux 8 – Application Streams

Yann Neuhaus - Thu, 2018-12-13 09:05

You may have heard that Red Hat Enterprise Linux 8 has been available for download in beta version for a few weeks…
Want to download it? Click here.
A significant change coming with this new version is the way the application packages are provided. As you know, up to RHEL7 packages were downloaded via repositories listed in .repo files located by default under /etc/yum.repos.d/. This is still the same with RHEL8, but two new major repositories are available in the default redhat.repo file.

In order to get access to them we must of course register the system with a Red Hat subscription…
[root@rhel8beta1 ~]# subscription-manager register --username xxx.yyy@zzz.com
Registering to: subscription.rhsm.redhat.com:443/subscription
Password:
The system has been registered with ID: e42829a5-8a8e-42d3-a69a-07a1499e9b0e
The registered system name is: rhel8beta1
[root@rhel8beta1 ~]#

…and attach it to a Pool (here the Pool will be chosen automatically):
[root@rhel8beta1 ~]# subscription-manager attach --auto
Installed Product Current Status:
Product Name: Red Hat Enterprise Linux for x86_64 Beta
Status: Subscribed
[root@rhel8beta1 ~]#

As the /etc/yum.repos.d/redhat.repo file is now available, let’s check which repositories it contains:
[root@rhel8beta1 ~]# grep -B1 name /etc/yum.repos.d/redhat.repo
[rhel-8-for-x86_64-rt-beta-debug-rpms]
name = Red Hat Enterprise Linux 8 for x86_64 - Real Time Beta (Debug RPMs)
--
[rhel-8-for-x86_64-rt-beta-source-rpms]
name = Red Hat Enterprise Linux 8 for x86_64 - Real Time Beta (Source RPMs)
--
[rhel-8-for-x86_64-supplementary-beta-rpms]
name = Red Hat Enterprise Linux 8 for x86_64 - Supplementary Beta (RPMs)
--
[rhel-8-for-x86_64-resilientstorage-beta-debug-rpms]
name = Red Hat Enterprise Linux 8 for x86_64 - Resilient Storage Beta (Debug RPMs)
--
[fast-datapath-beta-for-rhel-8-x86_64-source-rpms]
name = Fast Datapath Beta for RHEL 8 x86_64 (Source RPMs)
--
[codeready-builder-beta-for-rhel-8-x86_64-debug-rpms]
name = Red Hat CodeReady Linux Builder Beta for RHEL 8 x86_64 (Debug RPMs)
--
[codeready-builder-beta-for-rhel-8-x86_64-source-rpms]
name = Red Hat CodeReady Linux Builder Beta for RHEL 8 x86_64 (Source RPMs)
--
[rhel-8-for-x86_64-appstream-beta-source-rpms]
name = Red Hat Enterprise Linux 8 for x86_64 - AppStream Beta (Source RPMs)
--
[rhel-8-for-x86_64-nfv-beta-source-rpms]
name = Red Hat Enterprise Linux 8 for x86_64 - Real Time for NFV Beta (Source RPMs)
--
[rhel-8-for-x86_64-nfv-beta-rpms]
name = Red Hat Enterprise Linux 8 for x86_64 - Real Time for NFV Beta (RPMs)
--
[rhel-8-for-x86_64-resilientstorage-beta-source-rpms]
name = Red Hat Enterprise Linux 8 for x86_64 - Resilient Storage Beta (Source RPMs)
--
[codeready-builder-beta-for-rhel-8-x86_64-rpms]
name = Red Hat CodeReady Linux Builder Beta for RHEL 8 x86_64 (RPMs)
--
[rhel-8-for-x86_64-supplementary-beta-debug-rpms]
name = Red Hat Enterprise Linux 8 for x86_64 - Supplementary Beta (Debug RPMs)
--
[rhel-8-for-x86_64-highavailability-beta-source-rpms]
name = Red Hat Enterprise Linux 8 for x86_64 - High Availability Beta (Source RPMs)
--
[rhel-8-for-x86_64-supplementary-beta-source-rpms]
name = Red Hat Enterprise Linux 8 for x86_64 - Supplementary Beta (Source RPMs)
--
[rhel-8-for-x86_64-appstream-beta-rpms]
name = Red Hat Enterprise Linux 8 for x86_64 - AppStream Beta (RPMs)
--
[rhel-8-for-x86_64-rt-beta-rpms]
name = Red Hat Enterprise Linux 8 for x86_64 - Real Time Beta (RPMs)
--
[rhel-8-for-x86_64-appstream-beta-debug-rpms]
name = Red Hat Enterprise Linux 8 for x86_64 - AppStream Beta (Debug RPMs)
--
[fast-datapath-beta-for-rhel-8-x86_64-debug-rpms]
name = Fast Datapath Beta for RHEL 8 x86_64 (Debug RPMs)
--
[rhel-8-for-x86_64-resilientstorage-beta-rpms]
name = Red Hat Enterprise Linux 8 for x86_64 - Resilient Storage Beta (RPMs)
--
[rhel-8-for-x86_64-baseos-beta-rpms]
name = Red Hat Enterprise Linux 8 for x86_64 - BaseOS Beta (RPMs)
--
[rhel-8-for-x86_64-nfv-beta-debug-rpms]
name = Red Hat Enterprise Linux 8 for x86_64 - Real Time for NFV Beta (Debug RPMs)
--
[fast-datapath-beta-for-rhel-8-x86_64-rpms]
name = Fast Datapath Beta for RHEL 8 x86_64 (RPMs)
--
[rhel-8-for-x86_64-baseos-beta-debug-rpms]
name = Red Hat Enterprise Linux 8 for x86_64 - BaseOS Beta (Debug RPMs)
--
[rhel-8-for-x86_64-highavailability-beta-debug-rpms]
name = Red Hat Enterprise Linux 8 for x86_64 - High Availability Beta (Debug RPMs)
--
[rhel-8-for-x86_64-highavailability-beta-rpms]
name = Red Hat Enterprise Linux 8 for x86_64 - High Availability Beta (RPMs)
--
[rhel-8-for-x86_64-baseos-beta-source-rpms]
name = Red Hat Enterprise Linux 8 for x86_64 - BaseOS Beta (Source RPMs)
[root@rhel8beta1 ~]#

AppStream? BaseOS? What are those? That’s what we will discover in this blog.

First of all, we can see that both are enabled by default:
[root@rhel8beta1 ~]# subscription-manager repos --list-enabled
+----------------------------------------------------------+
Available Repositories in /etc/yum.repos.d/redhat.repo
+----------------------------------------------------------+
Repo ID: rhel-8-for-x86_64-baseos-beta-rpms
Repo Name: Red Hat Enterprise Linux 8 for x86_64 - BaseOS Beta (RPMs)
Repo URL: https://cdn.redhat.com/content/beta/rhel8/8/x86_64/baseos/os
Enabled: 1


Repo ID: rhel-8-for-x86_64-appstream-beta-rpms
Repo Name: Red Hat Enterprise Linux 8 for x86_64 - AppStream Beta (RPMs)
Repo URL: https://cdn.redhat.com/content/beta/rhel8/8/x86_64/appstream/os
Enabled: 1
[root@rhel8beta1 ~]#

BaseOS

“Content in BaseOS is intended to provide the core set of the underlying OS functionality that provides the foundation for all installations.”
This is how Red Hat defines it. Over a thousand packages are available from the BaseOS repository:
[root@rhel8beta1 ~]# yum --disablerepo "*" --enablerepo "rhel-8-for-x86_64-baseos-beta-rpms" list available | wc -l
1145
[root@rhel8beta1 ~]#

You can get the full list here.
Basically, those packages are system-related and are mainly used to manage and configure the OS and services (such as NetworkManager, Chrony, Dracut, and so on). In other words most of them are intended for use by system administrators. So nothing very new here, except for the fact that they are all grouped in a single dedicated repository.

AppStream

The second repository contains many more packages (full list here):
[root@rhel8beta1 /]# yum --disablerepo "*" --enablerepo "rhel-8-for-x86_64-appstream-beta-rpms" list available | wc -l
4318
[root@rhel8beta1 /]#

Application Stream provides additional user-space applications, runtime languages and databases. It replaces the “extras” repositories and the Software Collections. All the content in AppStream is available in two formats: the well-known RPM format and a brand new one called “module”, which is an extension of the RPM format.
A module is a set of RPM packages that are linked together. For example, if you want to check which packages belong to the postgresql module, you must use the new “yum module” command:
[root@rhel8beta1 /]# yum module list postgresql
Updating Subscription Management repositories.
Updating Subscription Management repositories.
Last metadata expiration check: 0:47:42 ago on Mon Nov 26 15:13:03 2018.
Red Hat Enterprise Linux 8 for x86_64 - AppStream Beta (RPMs)
Name Stream Profiles Summary
postgresql 10 [d] client, default [d] postgresql module
postgresql 9.6 client, default [d] postgresql module


Hint: [d]efault, [e]nabled, [x]disabled, [i]nstalled
[root@rhel8beta1 /]#

By the way, YUM is no longer the default package manager with RHEL8. The command is still available, but it’s actually an alias of the DNF tool (coming from Fedora):
[root@rhel8beta1 ~]# which yum
/usr/bin/yum
[root@rhel8beta1 ~]# ll /usr/bin/yum
lrwxrwxrwx. 1 root root 5 Oct 15 10:25 /usr/bin/yum -> dnf-3
[root@rhel8beta1 ~]#

If you want to have a look at the main usage differences between YUM and DNF, check this:
[root@rhel8beta1 /]# man yum2dnf
YUM2DNF(8)


NAME
yum2dnf - Changes in DNF compared to YUM
[...] [...]

Let’s go back to our modules. Here is how you can check the packages contained in a module:
[root@rhel8beta1 /]# yum module info postgresql
Updating Subscription Management repositories.
Updating Subscription Management repositories.
Red Hat Enterprise Linux 8 for x86_64 - AppStream Beta (RPMs) 2.9 kB/s | 4.1 kB 00:01
Red Hat Enterprise Linux 8 for x86_64 - BaseOS Beta (RPMs) 2.9 kB/s | 4.1 kB 00:01
Name : postgresql
Stream : 10 [d]
Version : 20180813131250
Context : 9edba152
Profiles : client, default [d]
Default profiles : default
Repo : rhel-8-for-x86_64-appstream-beta-rpms
Summary : postgresql module
Description : This postgresql module has been generated.
Artifacts : postgresql-0:10.5-1.el8+1546+27ad5f8e.x86_64
: postgresql-contrib-0:10.5-1.el8+1546+27ad5f8e.x86_64
: postgresql-docs-0:10.5-1.el8+1546+27ad5f8e.x86_64
: postgresql-plperl-0:10.5-1.el8+1546+27ad5f8e.x86_64
: postgresql-plpython3-0:10.5-1.el8+1546+27ad5f8e.x86_64
: postgresql-pltcl-0:10.5-1.el8+1546+27ad5f8e.x86_64
: postgresql-server-0:10.5-1.el8+1546+27ad5f8e.x86_64
: postgresql-server-devel-0:10.5-1.el8+1546+27ad5f8e.x86_64
: postgresql-static-0:10.5-1.el8+1546+27ad5f8e.x86_64
: postgresql-test-0:10.5-1.el8+1546+27ad5f8e.x86_64
: postgresql-test-rpm-macros-0:10.5-1.el8+1546+27ad5f8e.x86_64
: postgresql-upgrade-0:10.5-1.el8+1546+27ad5f8e.x86_64
: postgresql-upgrade-devel-0:10.5-1.el8+1546+27ad5f8e.x86_64


Name : postgresql
Stream : 9.6
Version : 20180813131400
Context : 9edba152
Profiles : client, default [d]
Default profiles : default
Repo : rhel-8-for-x86_64-appstream-beta-rpms
Summary : postgresql module
Description : This postgresql module has been generated.
Artifacts : postgresql-0:9.6.10-1.el8+1547+210b7007.x86_64
: postgresql-contrib-0:9.6.10-1.el8+1547+210b7007.x86_64
: postgresql-docs-0:9.6.10-1.el8+1547+210b7007.x86_64
: postgresql-plperl-0:9.6.10-1.el8+1547+210b7007.x86_64
: postgresql-plpython3-0:9.6.10-1.el8+1547+210b7007.x86_64
: postgresql-pltcl-0:9.6.10-1.el8+1547+210b7007.x86_64
: postgresql-server-0:9.6.10-1.el8+1547+210b7007.x86_64
: postgresql-server-devel-0:9.6.10-1.el8+1547+210b7007.x86_64
: postgresql-static-0:9.6.10-1.el8+1547+210b7007.x86_64
: postgresql-test-0:9.6.10-1.el8+1547+210b7007.x86_64
: postgresql-test-rpm-macros-0:9.6.10-1.el8+1547+210b7007.x86_64


Hint: [d]efault, [e]nabled, [x]disabled, [i]nstalled
[root@rhel8beta1 /]#

The above output shows that a module can contain several streams. Each stream represents a different version of the application.
Moreover, a module can have a couple of profiles. A profile is a set of RPM packages selected to be installed together for a particular use case (server, client, development, minimal install, and so on).

To install an application from the default stream and with the default profile, add the ‘@’ character before the application name:
[root@rhel8beta1 /]# yum install @postgresql
Updating Subscription Management repositories.
Updating Subscription Management repositories.
Last metadata expiration check: 0:25:55 ago on Mon Nov 26 16:27:13 2018.
Dependencies resolved.
=======================================================================================================================================================================================
Package Arch Version Repository Size
=======================================================================================================================================================================================
Installing group/module packages:
postgresql-server x86_64 10.5-1.el8+1546+27ad5f8e rhel-8-for-x86_64-appstream-beta-rpms 5.1 M
Installing dependencies:
libpq x86_64 10.5-1.el8 rhel-8-for-x86_64-appstream-beta-rpms 188 k
postgresql x86_64 10.5-1.el8+1546+27ad5f8e rhel-8-for-x86_64-appstream-beta-rpms 1.5 M
Installing module profiles:
postgresql/default
Enabling module streams:
postgresql 10


Transaction Summary
=======================================================================================================================================================================================
Install 3 Packages


Total download size: 6.7 M
Installed size: 27 M
Is this ok [y/N]: y
[...] [...] [root@rhel8beta1 /]#

You can also use the “yum module install postgresql” command.
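
More generally, the module spec accepts an optional stream and an optional profile (a sketch of the forms used in this post – see the dnf(8) man page for the full syntax):

yum module install postgresql             # default stream, default profile
yum module install postgresql:10          # explicit stream, default profile
yum module install postgresql:9.6/client  # explicit stream and profile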

Quick check:
[root@rhel8beta1 ~]# which postgres
/usr/bin/postgres
[root@rhel8beta1 ~]# /usr/bin/postgres --version
postgres (PostgreSQL) 10.5
[root@rhel8beta1 ~]#

And if you want to install Postgres from an older stream and with another profile (here the Postgres 9.6 client only):
[root@rhel8beta1 /]# yum install @postgresql:9.6/client
Updating Subscription Management repositories.
Updating Subscription Management repositories.
Last metadata expiration check: 0:33:45 ago on Mon Nov 26 16:27:13 2018.
Dependencies resolved.
=======================================================================================================================================================================================
Package Arch Version Repository Size
=======================================================================================================================================================================================
Installing group/module packages:
postgresql x86_64 9.6.10-1.el8+1547+210b7007 rhel-8-for-x86_64-appstream-beta-rpms 1.4 M
Installing dependencies:
libpq x86_64 10.5-1.el8 rhel-8-for-x86_64-appstream-beta-rpms 188 k
Installing module profiles:
postgresql/client
Enabling module streams:
postgresql 9.6


Transaction Summary
=======================================================================================================================================================================================
Install 2 Packages


Total download size: 1.6 M
Installed size: 5.8 M
Is this ok [y/N]: y
[...] [...] [root@rhel8beta1 /]#

Check:
[root@rhel8beta1 ~]# which postgres
/usr/bin/postgres
[root@rhel8beta1 ~]# /usr/bin/postgres --version
postgres (PostgreSQL) 9.6.10
[root@rhel8beta1 ~]#


[root@rhel8beta1 ~]# yum module list --enabled
Updating Subscription Management repositories.
Updating Subscription Management repositories.
Last metadata expiration check: 0:04:43 ago on Thu Dec 13 08:34:05 2018.
Red Hat Enterprise Linux 8 for x86_64 - AppStream Beta (RPMs)
Name Stream Profiles Summary
container-tools 1.0 [d][e] default [d] Common tools and dependencies for container runtimes
postgresql 9.6 [e] client [i], default [d] [i] postgresql module
satellite-5-client 1.0 [d][e] gui, default [d] Red Hat Satellite 5 client packages
virt rhel [d][e] default [d] Virtualization module


Hint: [d]efault, [e]nabled, [x]disabled, [i]nstalled
[root@rhel8beta1 ~]#

Hmmm… only stream 9.6 is enabled? Let’s try to enable version 10:
[root@rhel8beta1 ~]# yum module enable postgresql:10
Updating Subscription Management repositories.
Updating Subscription Management repositories.
Last metadata expiration check: 0:08:06 ago on Thu Dec 13 07:52:30 2018.
Dependencies resolved.
======================================================================================================================================
Package Arch Version Repository Size
======================================================================================================================================
Switching module streams:
postgresql 9.6 -> 10


Transaction Summary
======================================================================================================================================


Is this ok [y/N]: y
Complete!


Switching module streams does not alter installed packages (see 'module enable' in dnf(8) for details)
[root@rhel8beta1 ~]#

It’s better now:
[root@rhel8beta1 ~]# yum module list --enabled
Failed to set locale, defaulting to C
Updating Subscription Management repositories.
Updating Subscription Management repositories.
Last metadata expiration check: 0:13:22 ago on Thu Dec 13 08:34:05 2018.
Red Hat Enterprise Linux 8 for x86_64 - AppStream Beta (RPMs)
Name Stream Profiles Summary
container-tools 1.0 [d][e] default [d] Common tools and dependencies for container runtimes
postgresql 10 [d][e] client [i], default [d] [i] postgresql module
satellite-5-client 1.0 [d][e] gui, default [d] Red Hat Satellite 5 client packages
virt rhel [d][e] default [d] Virtualization module


Hint: [d]efault, [e]nabled, [x]disabled, [i]nstalled
[root@rhel8beta1 ~]#

But…
[root@rhel8beta1 ~]# which postgres
/usr/bin/postgres
[root@rhel8beta1 ~]# /usr/bin/postgres --version
postgres (PostgreSQL) 9.6.10
[root@rhel8beta1 ~]#

…still using 9.6 :-(
After switching from one stream to another, we must upgrade the corresponding packages:
[root@rhel8beta1 ~]# yum distro-sync
Updating Subscription Management repositories.
Updating Subscription Management repositories.
Last metadata expiration check: 0:18:40 ago on Thu Dec 13 08:34:05 2018.
Dependencies resolved.
==============================================================================================================================================
Package Arch Version Repository Size
==============================================================================================================================================
Upgrading:
postgresql x86_64 10.5-1.el8+1546+27ad5f8e rhel-8-for-x86_64-appstream-beta-rpms 1.5 M
postgresql-server x86_64 10.5-1.el8+1546+27ad5f8e rhel-8-for-x86_64-appstream-beta-rpms 5.1 M


Transaction Summary
==============================================================================================================================================
Upgrade 2 Packages


Total download size: 6.5 M
Is this ok [y/N]: y
Downloading Packages:
(1/2): postgresql-server-10.5-1.el8+1546+27ad5f8e.x86_64.rpm 1.3 MB/s | 5.1 MB 00:03
(2/2): postgresql-10.5-1.el8+1546+27ad5f8e.x86_64.rpm 371 kB/s | 1.5 MB 00:04
------------------------------------------------------------------------------------------------------------------------------------------------
Total 1.6 MB/s | 6.5 MB 00:04
Running transaction check
Transaction check succeeded.
Running transaction test
Transaction test succeeded.
Running transaction
Preparing : 1/1
Running scriptlet: postgresql-10.5-1.el8+1546+27ad5f8e.x86_64 1/1
Upgrade: postgresql-10.5-1.el8+1546+27ad5f8e.x86_64
Upgrading : postgresql-10.5-1.el8+1546+27ad5f8e.x86_64 1/4
Upgrade: postgresql-10.5-1.el8+1546+27ad5f8e.x86_64
Upgrade: postgresql-server-10.5-1.el8+1546+27ad5f8e.x86_64
Running scriptlet: postgresql-server-10.5-1.el8+1546+27ad5f8e.x86_64 2/4
Upgrading : postgresql-server-10.5-1.el8+1546+27ad5f8e.x86_64 2/4
Running scriptlet: postgresql-server-10.5-1.el8+1546+27ad5f8e.x86_64 2/4
Upgrade: postgresql-server-10.5-1.el8+1546+27ad5f8e.x86_64
Upgraded: postgresql-server-9.6.10-1.el8+1547+210b7007.x86_64
Running scriptlet: postgresql-server-9.6.10-1.el8+1547+210b7007.x86_64 3/4
Cleanup : postgresql-server-9.6.10-1.el8+1547+210b7007.x86_64 3/4
Upgraded: postgresql-server-9.6.10-1.el8+1547+210b7007.x86_64
Running scriptlet: postgresql-server-9.6.10-1.el8+1547+210b7007.x86_64 3/4
Upgraded: postgresql-9.6.10-1.el8+1547+210b7007.x86_64
Cleanup : postgresql-9.6.10-1.el8+1547+210b7007.x86_64 4/4
Upgraded: postgresql-9.6.10-1.el8+1547+210b7007.x86_64
Running scriptlet: postgresql-9.6.10-1.el8+1547+210b7007.x86_64 4/4
Verifying : postgresql-10.5-1.el8+1546+27ad5f8e.x86_64 1/4
Verifying : postgresql-9.6.10-1.el8+1547+210b7007.x86_64 2/4
Verifying : postgresql-server-10.5-1.el8+1546+27ad5f8e.x86_64 3/4
Verifying : postgresql-server-9.6.10-1.el8+1547+210b7007.x86_64 4/4


Upgraded:
postgresql-10.5-1.el8+1546+27ad5f8e.x86_64 postgresql-server-10.5-1.el8+1546+27ad5f8e.x86_64


Complete!
[root@rhel8beta1 ~]#

And now it’s fine:
[root@rhel8beta1 ~]# which postgres
/usr/bin/postgres
[root@rhel8beta1 ~]# /usr/bin/postgres --version
postgres (PostgreSQL) 10.5
[root@rhel8beta1 ~]#

So what?

That was only a first quick try with the AppStream functionality. What we can remember here is that this new way of managing packages lets us benefit from the parallel availability of multiple versions of software. This is due to the dissociation between the kernel space (BaseOS) – which is still managed in a traditional way – and the user space (AppStream) – which is now deployed in the form of “containerized” applications.
Up to now, when we wanted to upgrade an application to a given version, we had to think about the inter-dependencies between this application and the other ones that we didn’t want to update. With RHEL8, we can now upgrade one while keeping the others in their current version.

Cet article Red Hat Enterprise Linux 8 – Application Streams est apparu en premier sur Blog dbi services.

SQL Plan Directives

Tom Kyte - Thu, 2018-12-13 08:26
Team, reading this article https://blogs.oracle.com/oraclemagazine/on-learning-from-mistakes and setting up a demo like this in my local database. In the below example pretend OWNER as "state" and OBJECT_TYPE as "city" (similar to the examp...
Categories: DBA Blogs

To arrive alphanumeric value from a string

Tom Kyte - Thu, 2018-12-13 08:26
Hi Tom, I have a problem arriving at a unique alphanumeric value. I have to derive a unique short name (5 digits) from names and insert it into the table. We have to check existing short names and derive the next in the hierarchy. E.g. the names are like below ...
Categories: DBA Blogs

List tablespaces and usage

Tom Kyte - Thu, 2018-12-13 08:26
How can I list all the tablespaces in the ORCL database, displaying the available space, used space, status and type? Also, in a separate query, the permanent and temporary files related to those tablespaces.
Categories: DBA Blogs

Inconsistent behavior with CROSS APPLY and OUTER APPLY

Tom Kyte - Thu, 2018-12-13 08:26
This question was also asked on http://stackoverflow.com/questions/42593148/inconsistent-behavior-with-cross-apply-and-outer-apply by a GraphQL collaborator who is developing pagination support for Oracle DB. I have a schema in Oracle 12c similar...
Categories: DBA Blogs

PCF Healthwatch 1.4 just got a new UI

Pas Apicella - Thu, 2018-12-13 04:45
PCF Healthwatch is a service for monitoring and alerting on the current health, performance, and capacity of PCF to help operators understand the operational state of their PCF deployment.

Finally got around to installing the new PCF Healthwatch 1.4, and the first thing which struck me was the UI main dashboard page. It's clear what I need to look at in seconds, and the alerts on the right-hand side are also useful.

Some screen shots below





papicella@papicella:~/pivotal/PCF/APJ/PEZ-HaaS/haas-99$ http http://healthwatch.run.haas-99.pez.pivotal.io/info
HTTP/1.1 200 OK
Content-Length: 45
Content-Type: application/json;charset=utf-8
Date: Thu, 13 Dec 2018 10:26:07 GMT
X-Vcap-Request-Id: ac309698-74a6-4e94-429a-bb5673c1c8f7

{
    "message": "PCF Healthwatch available"
}

More Information

Pivotal Cloud Foundry Healthwatch
https://docs.pivotal.io/pcf-healthwatch/1-4/index.html

Categories: Fusion Middleware

Understand Oracle Text at a glance

Yann Neuhaus - Thu, 2018-12-13 00:19

What is Oracle Text?

Oracle Text provides indexing, word and theme searching, and viewing capabilities for text in query applications and document classification applications.

Oracle Text activation for a user

create user ORATXT identified by oratxt ;
grant ctxapp to ORATXT ;
grant execute on ctxsys.ctx_cls to ORATXT ;
grant execute on ctxsys.ctx_ddl to ORATXT ;
grant execute on ctxsys.ctx_doc to ORATXT ;
grant execute on ctxsys.ctx_output to ORATXT ;
grant execute on ctxsys.ctx_query to ORATXT ;
grant execute on ctxsys.ctx_report to ORATXT ;
grant execute on ctxsys.ctx_thes to ORATXT ;
grant execute on ctxsys.ctx_ulexer to ORATXT ;

Oracle Text configuration and usage

To design an Oracle Text application, first determine the type of queries you expect to run. This enables you to choose the most suitable index for the task. There are 4 use cases with Oracle Text:

  1. Document Collection Applications
    • The collection is typically static with no significant change in content after the initial indexing run. Documents can be of any size and of different formats, such as HTML, PDF, or Microsoft Word. These documents are stored in a document table. Searching is enabled by first indexing the document collection.
    • Queries usually consist of words or phrases. Application users can specify logical combinations of words and phrases using operators such as OR and AND. Other query operations can be used to improve the search results, such as stemming, proximity searching, and wildcarding.
    • An important factor for this type of application is retrieving documents relevant to a query while retrieving as few non-relevant documents as possible. The most relevant documents must be ranked high in the result list.
    • The queries for this type of application are best served with a CONTEXT index on your document table. To query this index, the application uses the SQL CONTAINS operator in the WHERE clause of a SELECT statement.
    • Example of searching
    • SQL> select score(1), doc_id, html_content from docs where contains(html_content, 'dbi', 1) > 0;
       
  SCORE(1)     DOC_ID HTML_CONTENT
---------- ---------- -----------------------------------------------------------
         4          1 <HTML>dbi services provide various IT services</HTML>
         4          9 <HTML>You can become expert with dbi services</HTML>
         4          3 <HTML>The company dbi services is in Switzerland.</HTML>

  2. Catalog Information Applications
    • The stored catalog information consists of text information, such as book titles, and related structured information, such as price. The information is usually updated regularly to keep the online catalog up to date with the inventory.
    • Queries are usually a combination of a text component and a structured component. Results are almost always sorted by a structured component, such as date or price. Good response time is always an important factor with this type of query application.
    • Catalog applications are best served by a CTXCAT index. Query this index with the CATSEARCH operator in the WHERE clause of a SELECT statement.
    • Example of searching
    • SQL> select product, price from auction where catsearch(title, 'IT', 'order by price')> 0;
       
      PRODUCT PRICE
      ----------------------------------- ----------
      IT Advice 1 hour 499
      Course IT management 3999
      License IT monitoring 199
      IT desk 810

  3. Document Classification Applications
    • In a document classification application, an incoming stream or a set of documents is compared to a pre-defined set of rules. When a document matches one or more rules, the application performs some action. For example, assume there is an incoming stream of news articles. You can define a rule to represent the category of Finance. The rule is essentially one or more queries that select document about the subject of Finance. The rule might have the form ‘stocks or bonds or earnings’.
    • When a document arrives about a Wall Street earnings forecast and satisfies the rules for this category, the application takes an action, such as tagging the document as Finance or e-mailing one or more users.
    • To create a document classification application, create a table of rules and then create a CTXRULE index. To classify an incoming stream of text, use the MATCHES operator in the WHERE clause of a SELECT statement.
    • SQL> select category_id, category_name
      from categories
      where matches(blog_string, 'Dbi services add value to your IT infrastructure by providing experts in different technologies to cover all your needs.');
       
      CATEGORY_ID CATEGORY_NAME
      ----------- -----------------------------------
                9 Expertise
                2 Advertisement
                6 IT Services

  4. XML Search Applications
    • An XML search application performs searches over XML documents. A regular document search usually searches across a set of documents to return documents that satisfy a text predicate; an XML search often uses the structure of the XML document to restrict the search.
      Typically, only that part of the document that satisfies the search is returned. For example, instead of finding all purchase orders that contain the word electric, the user might need only purchase orders in which the comment field contains electric (see the sketch after this list).

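None of the snippets above shows the index itself being created. A minimal sketch for the XML case – the table, column and section names are illustrative, and the comment section is assumed to have been declared in the index’s section group:

SQL> create index po_ctx_idx on purchase_orders(po_xml) indextype is ctxsys.context;

SQL> select po_id from purchase_orders where contains(po_xml, 'electric WITHIN comment') > 0;
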
In conclusion, there are various use cases for which Oracle Text will help you with text indexing. Before implementing, verify which one best suits your needs. Also, it may be interesting to compare it with an external text indexer like Solr, which is also able to index your database via a JDBC driver.

I hope it may help and please do not hesitate to contact us if you have any questions or require further information.

Cet article Understand Oracle Text at a glance est apparu en premier sur Blog dbi services.

Partitioning -- 12 : Data Dictionary Queries

Hemant K Chitale - Wed, 2018-12-12 19:47
Here's a compilation of some useful data dictionary queries on the implementation of Partitioning.

REM  List all Partitioned Tables in the database (or filter by OWNER in the WHERE clause)
REM Note that the last column is the *defined* default subpartition count
select owner, table_name, partitioning_type, subpartitioning_type, partition_count,
def_subpartition_count as default_subpart_count
from dba_part_tables
order by owner, table_name


REM List all Partitioned Indexes in the database (or filter by OWNER in the WHERE clause)
REM Note that the last column is the *defined* default subpartition count
select owner, index_name, table_name, partitioning_type, subpartitioning_type, partition_count,
def_subpartition_count as default_subpart_count
from dba_part_indexes
order by owner, index_name


REM List Partition Key Columns for all Partitioned Tables
REM (or filter by OWNER or NAME (NAME is TABLE_NAME when object_type='TABLE'))
REM Need to order by column_position as Partition Key may consist of multiple columns
select owner, name, column_name
from dba_part_key_columns
where object_type = 'TABLE'
order by owner, name, column_position


REM List Partition Key Columns for Table SubPartitions
REM (or filter by OWNER or NAME (NAME is TABLE_NAME when object_type='TABLE'))
REM Need to order by column_position as SubPartition Key may consist of multiple columns
select owner, name, column_name
from dba_subpart_key_columns
where object_type = 'TABLE'
order by owner, name, column_position


REM List Partition Key Columns for Index SubPartitions
REM (or filter by OWNER or NAME (NAME is INDEX_NAME when object_type='INDEX'))
REM Need to order by column_position as SubPartition may consist of multiple columns
select owner, name, column_name
from dba_subpart_key_columns
where object_type = 'INDEX'
order by owner, name, column_position


REM List all Table Partitions (or filter by TABLE_OWNER or TABLE_NAME in the WHERE clause)
REM Need to order by partition_position
select table_owner, table_name, partition_name, high_value, tablespace_name, num_rows, last_analyzed
from dba_tab_partitions
order by table_owner, table_name, partition_position


REM List all Table SubPartitions (or filter by TABLE_OWNER or TABLE_NAME in the WHERE clause)
select table_owner, table_name, partition_name, subpartition_name, high_value, tablespace_name, num_rows, last_analyzed
from dba_tab_subpartitions
order by table_owner, table_name, subpartition_position


REM List all Index Partitions (or filter by INDEX_OWNER or INDEX_NAME in the WHERE clause)
REM Need to order by partition_position
REM Note : For Table Names, you have to join back to dba_indexes
select index_owner, index_name, partition_name, tablespace_name, num_rows, last_analyzed
from dba_ind_partitions
order by index_owner, index_name, partition_position
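
As the note above says, dba_ind_partitions does not carry the table name; a sketch of that join back to dba_indexes (column choice is illustrative):

REM Example: adding the table name to each Index Partition by joining to dba_indexes
select ip.index_owner, ip.index_name, i.table_name, ip.partition_name, ip.tablespace_name
from dba_ind_partitions ip, dba_indexes i
where i.owner = ip.index_owner
and i.index_name = ip.index_name
order by ip.index_owner, ip.index_name, ip.partition_position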


REM List all Index SubPartitions (or filter by INDEX_OWNER or INDEX_NAME in the WHERE clause)
select index_owner, index_name, partition_name, subpartition_name, tablespace_name, num_rows, last_analyzed
from dba_ind_subpartitions
order by index_owner, index_name, subpartition_position


I have listed only a few columns from the data dictionary views of interest.  You may extract more information by referencing other columns or joining to other data dictionary views.



Categories: DBA Blogs

How Can You Protect Your Business From Cybercrime?

Chris Warticki - Wed, 2018-12-12 19:15
Cybercrime is Real.

By 2021, the estimated annual cost of worldwide cybercrime damages will be $6 trillion[1]. You’ve seen the headlines—the average cost of a data breach in 2016 was $4 million[2]—many companies never recover.

The U.S. Department of Homeland Security advises, “It is necessary for all organizations to establish a strong ongoing patch management process to ensure the proper preventative measures are taken against potential threats[3].”

Oracle continues to periodically receive reports of attempts to maliciously exploit vulnerabilities for which Oracle has already released fixes. In some instances, it has been reported that attackers have been successful because targeted customers had failed to apply available Oracle patches. Oracle therefore strongly recommends that customers remain on actively-supported versions and apply Critical Patch Update fixes without delay[4].

Support Matters.

Oracle Support is the best way to legally receive mission-critical security updates and protection for your Oracle software. Because Oracle creates and owns the source code, we can address vulnerabilities and emerging threats in the source code itself. As an Oracle Premier Support customer you have access to reliable quarterly Critical Patch Updates and security bulletins.

When your business is on the line, there is no substitute for trusted, secure, and comprehensive support.

See why Oracle recommends patching to fix software vulnerabilities at the source on Oracle Forbes BrandVoice and read more on the importance of software patching to avoid data breaches.

Don’t become a cybercrime statistic.
